Compare commits

570 Commits

Alexander Kanavin
2679a347c5 maintainers.inc: rename gtk-doc-stub to gtk-doc, reassign to me
(From meta-yocto rev: e09444f0f613cd7b092bab5cb0106c1447be1ecf)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:59 +01:00
Markus Lehtonen
68e35c8e37 oeqa.runtime.smart: work around smart race issues
Yucky hack around test failures which ultimately are caused by a race in
smartpm itself. Issuing smartpm commands in quick succession causes
races in the package cache of smartpm on some systems. This patch mitigates
the problem by sleeping for 1 second after each smartpm command that
modifies the system.

[YOCTO #10244]

(From OE-Core rev: 4d268abc2fc892c5d34449f78c8e9f2b1a9d6bac)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Richard Purdie
d3991342ed oeqa/runtime/smart: Prune feeds to save memory
Full package feed indexes overload a 256MB image, so reduce the number of rpms
in the feed. Filter to p*, since we use the psplash packages; this leaves some
allarch and machine-arch packages too.

[YOCTO #8771]

(From OE-Core rev: f352c3b71cbf50846c7de31046202296b38713cc)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Ross Burton
ab3f23970d gtk-doc: only depend on native gtk-doc for documentation generation
Now that gtk-doc-native works correctly, the gtk-doc class doesn't need to
depend on target gtk-doc.

(From OE-Core rev: 8dc4a45cc06fda29618f9f2379ed743dc0c536e3)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Ross Burton
620e2bf936 gtk-doc: use pkg-config-native in native gtk-doc.m4
When building gtk-doc-native the m4 functions for autoconf should use
pkg-config-native instead of pkg-config so that they can find the native
tooling.

This means that it is possible to generate gtk-doc without building the target
packages.

(From OE-Core rev: 755724d9d5f023450392ae8025a0cb6264283028)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Ross Burton
5041e6811f gstreamer: remove packaged copy of gtk-doc.m4
The gstreamer common module ships a copy of gtk-doc.m4 that will be used in
preference to our patched form, so delete it before configure is executed.

(From OE-Core rev: 50768af29ce8524f7bae387996aaed657a1ff80f)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
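
A hedged illustration of the kind of fix described above; the exact location of the bundled macro inside the gstreamer source tree is an assumption, not the actual patch:

    do_configure_prepend () {
        # drop the bundled macro so the patched gtk-doc.m4 from gtk-doc-native is used
        rm -f ${S}/common/m4/gtk-doc.m4
    }
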
Robert Yang
f22415435f gobject-introspection: set GI_SCANNER_DISABLE_CACHE for native
The native recipe should not write files to $HOME/.cache as the target does; this
avoids problems when multiple builds are running on the same host, such as:
|   File "./g-ir-scanner", line 66, in <module>
|     sys.exit(scanner_main(sys.argv))
|   File "../gobject-introspection-1.48.0/giscanner/scannermain.py", line 543, in scanner_main
|     transformer = create_transformer(namespace, options)
|   File "../gobject-introspection-1.48.0/giscanner/scannermain.py", line 389, in create_transformer
|     symbol_filter_cmd=options.symbol_filter_cmd)
|   File "../gobject-introspection-1.48.0/giscanner/transformer.py", line 54, in __init__
|     self._cachestore = CacheStore()
|   File "../gobject-introspection-1.48.0/giscanner/cachestore.py", line 61, in __init__
|     self._check_cache_version()
|   File "../gobject-introspection-1.48.0/giscanner/cachestore.py", line 89, in _check_cache_version
|     self._clean()
|   File "../gobject-introspection-1.48.0/giscanner/cachestore.py", line 141, in _clean
|     self._remove_filename(os.path.join(self._directory, filename))
|   File "../gobject-introspection-1.48.0/giscanner/cachestore.py", line 123, in _remove_filename
|     os.unlink(filename)
| FileNotFoundError: [Errno 2] No such file or directory: '/home/pokybuild/.cache/g-ir-scanner/0a47aa95823c95a0b5d1bd610b60d02f35785f26'
| Makefile:3518: recipe for target 'GModule-2.0.gir' failed

(From OE-Core rev: d3c48ff7d19e86b2338b1778f9563969bba3d336)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
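
A minimal sketch of restricting the environment variable to the native build only; the exact hook used by the actual patch may differ:

    do_compile_prepend_class-native () {
        # keep g-ir-scanner from writing its cache to $HOME/.cache on shared build hosts
        export GI_SCANNER_DISABLE_CACHE=1
    }
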
Alexander Kanavin
ca9ff8b7d3 distro-alias.inc: rename gtk-doc-stub to gtk-doc
(From OE-Core rev: 54afc564cd13dc6b73a65ced9545d5d37d85f6a1)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Alexander Kanavin
03425de493 clutter-1.0: do not use the prepackaged clutter.types file when generating gtk-doc
Doing so will fail when x11 is disabled in particular.

(From OE-Core rev: 98a9a30abdc7b877be574ac5914ec02f16c00887)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Alexander Kanavin
9a8b39204c pango: fix gtk-doc build when x11 is not in use
(From OE-Core rev: 516d1a797d56e2753cbdd596387724f933350122)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Alexander Kanavin
3ed15a88f5 util-linux: do not enable gtk-doc and explain why
(From OE-Core rev: ea98b08c65de100623a641505f3160848c8fdf20)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Alexander Kanavin
c370dda61c p11-kit: enable gtk-doc
(From OE-Core rev: a9372c630e4a27d0ec2f139cba57d1b98d93eb5f)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Alexander Kanavin
fe14b2415a libsoup-2.4: enable gtk-doc
(From OE-Core rev: 6a3e20f6faa79f25fd2c27d105b9383e8bd37824)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Alexander Kanavin
5c899dea8e libtasn1: enable gtk-doc
(From OE-Core rev: 074e923b86ed244b1b52420d0623d620bf9ccf1e)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Alexander Kanavin
b41127f503 gnutls: enable gtk-doc
gtk-doc also requires --enable-doc, so that is no longer configurable.

(From OE-Core rev: 32dd42e8930bf38abf280e04b4ee22c9a9a2fae9)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Alexander Kanavin
3bea0e52fd harfbuzz: enable gtk-doc
(From OE-Core rev: 014c55e09764052f30c43390aa4ea3e604ea7760)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Alexander Kanavin
363b2989c8 cairo: enable gtk-doc
(From OE-Core rev: 60c10d8c07c92e3b275a2cc422b9013cbcf3c93a)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Alexander Kanavin
fa351bc8a8 libenck3: enable gtk-doc
(From OE-Core rev: 40593dc63a3a6bc8fa85adcfb7e08802a00a126e)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Alexander Kanavin
ff774d01bf libgudev: enable gtk-doc
(From OE-Core rev: f02f0766f5f43c36a0a0c1326d4cde353389e0a8)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Alexander Kanavin
3cc20551b6 json-glib: enable gtk-doc
(From OE-Core rev: 364846d2f8430789957cd0b3dfabb3c06284bb1d)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:23 +01:00
Alexander Kanavin
406a9b7c7d gnome-desktop3: enable gtk-doc
(From OE-Core rev: a8061788188fa1a367710dd7b262900f42a2efec)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
60604de090 gdk-pixbuf: enable gtk-doc
(From OE-Core rev: d16d4a1f24a7f0527e96d7fa77a62f044cc27753)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
1cfdc18660 libuser: enable gtk-doc
(From OE-Core rev: 908c9fce842b022dd285ccf363a0fda325cdf91b)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
2d20b54b24 libidn: enable gtk-doc
(From OE-Core rev: 40b4357c79f971b79fcb667cd6617068250ac4d1)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
b240b9b8e2 orc: enable gtk-doc
(From OE-Core rev: f4be8bb24fc38f7f132edafb3c0a96016dec1c1c)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
83301b44aa dbus-glib: enable gtk-doc
(From OE-Core rev: c7eb50aa65c6168945a8dacda0c3126b098c3c4f)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
063f84e1bf webkitgtk: re-enable introspection on powerpc
It seems to work under qemu-ppc now.

(From OE-Core rev: ef41d3c972786f0e9a48ef171a952af90a4cce59)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
38f2f51add gcr, libsecret, webkitgtk: disable gtk-doc on mips64
It fails with the same error as gobject-introspection.

(From OE-Core rev: 6248ca13451101c32c754e20fc8e0fb802df7ef4)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
d6a10272a8 gcr: disable gtk-doc on x86_64
For the same reason that introspection is disabled: the transient binary goes into an infinite loop.

(From OE-Core rev: b3d7ccae7e19047836f6c9423e4569dccf98d759)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
ae773d6374 gtk+3: disable gtk-doc when x11 is not available
gtk-doc requires gdk/x11/gdkx.h, which is not available if the gdk x11 backend is disabled
(due to jku's patch).

(From OE-Core rev: d1e9927ba145036cb56d7512026df1a8c21a85dd)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
31029c8001 webkitgtk: enable gtk-doc support
(From OE-Core rev: ec972a24dbb93f822d69e253c4ecb563658029be)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
3878dacc38 gstreamer1.0: enable gtk-doc support
Check support is no longer disabled by default because it is a requirement
of gtk-doc support in gstreamer.

(From OE-Core rev: 628a849ff14e165b8c00c6649d042225f5a35732)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
e1aad9e8a1 libglade: remove the recipe
Libglade has been obsolete for several years and is used by nothing in oe-core;
it will be moved to meta-oe so that old recipes still present there continue to build.

(From OE-Core rev: 6e5fa40b00039e48709db51c56caf0fa42a83f8e)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
3a093d1b1e systemd: drop unused gtkdoc-related variable
(From OE-Core rev: 3fa84900b0a008993dfbf0d5af12416f4bc3980f)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
6d266010dc kmod: do not let gtkdocize fail
(From OE-Core rev: 1e68a6b24b88c897de18e84245bf7b3e15254bef)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
2167573f68 gtk-doc.bbclass: enable building gtk-doc based documentation
This is done similarly to gobject-introspection, but with much
less delicate hacking around the upstream way of working.

(From OE-Core rev: 1b5b429f63c323fcd46b7419a531689717a73b91)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
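
A hedged example of how a recipe opts in once the class exists; whether documentation is actually generated is decided by the class itself (see its variables), and the recipe fragment below is purely illustrative:

    SUMMARY = "Example library that ships gtk-doc API documentation"
    inherit autotools gtk-doc
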
Alexander Kanavin
5029d1fb15 gtk-doc: add a recipe, remove gtk-doc-stub
(From OE-Core rev: 8b958312d360e6692dc7c6dd3d2b2591301f9e59)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Alexander Kanavin
94317f52ce source-highlight: add a recipe
gtk-doc relies on this to highlight source code snippets.

(From OE-Core rev: 380f449bc1881a6e8592463c7eeda3655efb97ea)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:12:22 +01:00
Richard Purdie
8c46605d16 oeqa: Use snapshot instead of copying the rootfs image
Rather than copying images, use the snapshot option to qemu. This fixes a regression
caused by the recent runqemu changes where the wrong images were being tested since
the image is copied without the qemuboot.conf file. This means the latest image is
found by runqemu rather than the specified one, leading to various confused testing
results.

It could be fixed by copying more files but use snapshot mode instead.

(From OE-Core rev: eab91997d415b0e690b3482749a32087e6a8b00a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
Richard Purdie
0d214406c4 scripts/runqemu: Add snapshot support
Allow access to the snapshot option of qemu to simplify some of our runtime
testing and avoid copying images.

(From OE-Core rev: 8fec4a5a004f0e99734f8c0820c66522d08f213e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
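
Assuming the option is exposed as a plain command-line keyword, as other runqemu options are, usage would look roughly like:

    $ runqemu qemux86-64 snapshot
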
Richard Purdie
b2dc9c7ee2 runqemu: Enable virtio RNG for all platforms
We have problems where systems simply stop booting and hang. This is due
to a lack of entropy, which means ssh keys and networking can't be brought
up. Adding in the virtio-rng passthrough support allows host entropy to
pass into the guest and avoids these hangs.

This is particularly problematic after the gnutls upgrade, which starts
using /dev/random instead of /dev/urandom, but was an issue we'd occasionally
seen before that.

It particularly affected x86 and ppc machines for some reason.

(From OE-Core rev: 51b001909f1856c45cf87091d6e4446c266d5786)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
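
Virtio RNG passthrough is normally wired up with qemu options of this general shape; the exact device and entropy source runqemu settles on may differ:

    # appended to the qemu command line
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-pci,rng=rng0
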
Richard Purdie
eb1494f9da runqemu: Update to modern preferred net syntax
(From OE-Core rev: 5e61766d976b6d036946c1b4e4ac742a33a03815)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
Richard Purdie
d1cb381977 runqemu: Allow unique network interface MAC addresses
Current qemu instances all share the same MAC address. This shouldn't be an
issue as they are all on separate network interfaces; however, on the slight
chance this is causing problems, it's easy enough to ensure we use unique
MAC addresses based on the IP numbers we assign.

(From OE-Core rev: c01962bf88786dd84ad83cc1d315297607d29f7c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
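
A hypothetical shell sketch of deriving a per-instance MAC from the assigned IP suffix; the variable names and MAC prefix are illustrative only:

    # IP_SUFFIX is assumed to hold the last octet of this instance's IP address
    MAC_ADDR="52:54:00:12:34:$(printf '%02x' "${IP_SUFFIX}")"
    NETWORK_OPTS="-device virtio-net-pci,netdev=net0,mac=${MAC_ADDR}"
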
Robert Yang
95331c6d4e qemurunner.py/qemutinyrunner.py: remove runqemu-internal
There is no runqemu-internal any more.

(From OE-Core rev: 14bacf7203ab7a638b67eb143225d8c75bbb703d)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
Robert Yang
b18f8a58f8 nativesdk-qemu-helper: fix for new runqemu
There is no runqemu-internal anymore, and runqemu is now a python script
which requires several python modules.

(From OE-Core rev: 94cb6eaec37c07e7903143fb53a568ab0bf2fc5c)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
Joshua Lock
d1303c220e runqemu: fix run from testimage with non-standard DEPLOY_DIR_IMAGE
testimage.bbclass uses runqemu to execute runtime tests on a qemu
target, this means that bitbake is already running and `bitbake -e`
can't be called to obtain bitbake variables.

runqemu tries to work around being unable to read values for
bitbake variables by inferring the MACHINE from the
DEPLOY_DIR_IMAGE setting; however, if a user sets that variable in
a manner which doesn't follow the system's expectations (i.e. if
running `bitbake -c testimage` against a directory of pre-generated
images in a user-specified path), inferring the MACHINE name
from the DEPLOY_DIR_IMAGE location will fail.

It's possible that check_arg_machine() shouldn't cause runqemu to
fail and that runqemu should proceed with the user-supplied value
even if it can't be verified. This patch simply ensures that a
workflow where the user sets DEPLOY_DIR_IMAGE continues to work
without changing too much of the runqemu code.

[YOCTO #10238]

(From OE-Core rev: f94ac02f459e2ea0fc471463966997814a67e0ca)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
Joshua Lock
8a8948e0e1 runqemu: fixes for when invoked during a bitbake run
When runqemu is invoked from a running bitbake instance it will be
unable to call `bitbake -e` due to the lock held by the calling
bitbake instance.

Our test code sets an OE_TMPDIR environment variable from which we
can infer/guess paths. Add code to do so when self.bitbake_e can't
be set, much as the sh version of runqemu did.

[YOCTO #10240]

(From OE-Core rev: 1e8165ea2f19aecdc03ccd102ee44ef0544f0f39)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
Joshua Lock
d5d4869634 runqemu: better handle running on a host with different paths
If the STAGING_*_NATIVE directories from the config file don't exist
and we're in a sourced OE build directory, try to extract the paths
from `bitbake -e`.

(From OE-Core rev: 9326af1c20636320c70caecebd47aedafb3f2d25)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
Joshua Lock
e162303ecb runqemu: assume artefacts are relative to *.qemuboot.conf
When runqemu is started with a *.qemuboot.conf arg, assume that image
artefacts are relative to that file, rather than in whatever
directory the DEPLOY_DIR_IMAGE variable in the conf file points to.

(From OE-Core rev: a6448371b87f754def669adfdc01b07d18003405)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
Robert Yang
b405712414 runqemu: refactor it and remove machine knowledge
Previously, runqemu had hard-coded machine knowledge, which limited its
usage. For example, qemu can boot genericx86, but runqemu couldn't; we would
have needed to edit runqemu/runqemu-internal a lot to boot genericx86.

Now BSP conf files can set vars to make a machine bootable by runqemu, and
qemuboot.bbclass will save this info to DEPLOY_DIR_IMAGE/qemuboot.conf.
Please see qemuboot.bbclass' comments on how to set the vars.

* Re-written in python3, which reduces the line count from 1239 to about 750
* All the machine knowledge is gone
* All of the TUN_ARCH knowledge is gone
* All the previous options are preserved, and there is a new way to run
  runqemu (it doesn't need to run "bitbake -e" in such a case):
  $ runqemu tmp/deploy/images/qemux86
  or:
  $ runqemu tmp/deploy/images/qemuarm/<image>.ext4
  or:
  $ runqemu tmp/deploy/images/qemuarm/qemuboot.conf
* Fixed audio support, no longer limited to x86 or x86_64
* Fix SLIRP mode, add help message, avoid mixing with tap
* Fix NFS boot: it will extract <image>.tar.bz2 or tar.gz to
  DEPLOY_DIR_IMAGE/<image>-nfsroot when no NFS_DIR is given, and remove it
  after stopping.
* More BSPs can be booted, such as genericx86 and genericx86-64.
* The patch for qemuzynq, qemuzynqmp and qemumicroblaze has been sent to the
  meta-xilinx mailing list.
* I can't find any qemush4 bsp or how to build it, so it is not
  considered atm.

[YOCTO #1018]
[YOCTO #4827]
[YOCTO #7459]
[YOCTO #7887]

(From OE-Core rev: 60ca8a8d899b90a4693fd62b6ec97d0c76a9f6c5)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
Robert Yang
638d19adb4 qemu.inc: inherit qemuboot.bbclass
All qemu boards should be bootable by runqemu.

(From OE-Core rev: 5174889d59a5d6da29b4290376010dd176767e1f)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
Richard Purdie
f9732b410e qemuppc: Use virtio networking instead of pcnet
qemuppc can use virtio networking, and we may as well do so for better
performance, as we do for the other emulated hardware.

(From OE-Core rev: 8a82ded799be79eacb64cf313b6f2799a4f5ffab)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
Robert Yang
18c7c0daef qemuppc.conf: set vars for runqemu
(From OE-Core rev: 2c8e5657cafafe848c7e7c714e5e73bb82799d65)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:32 +01:00
Robert Yang
09a77107e7 qemumips/qemumips64.conf: set vars for runqemu
Add qemuboot-mips.inc to reduce duplicated code; the various mips BSPs
which can be booted by runqemu can require qemuboot-mips.inc.

(From OE-Core rev: cb28128477e98ed7dc7a90dd197f6dd04cf75be0)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:31 +01:00
Robert Yang
64da0d7799 qemux86.conf/qemux86-64.conf: set vars for runqemu
Add qemuboot-x86.inc to reduce duplicated code; the x86/x86_64 BSPs
which can be booted by runqemu can require qemuboot-x86.inc.

(From OE-Core rev: b5ff3dda2a576ba7e5d68198ea6c6eb49cf80eb8)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:31 +01:00
Robert Yang
9b0a94cbed qemuarm64.conf: set vars for runqemu
(From OE-Core rev: 73bccbbfc0f987fc82aca5411e15f62c02e5336c)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:31 +01:00
Robert Yang
8b6e7729a3 qemuarm.conf: set PREFERRED_VERSION_linux-yocto
The base_version_less_or_equal() will raise errors if
PREFERRED_VERSION_linux-yocto is None. For example, when we build with
DISTRO = "nodistro", PREFERRED_VERSION_linux-yocto is not defined since
it is only defined in poky.conf, and bitbake will then choose the higher
version, which is currently 4.8. So set PREFERRED_VERSION_linux-yocto
to 4.8; otherwise, runqemu can't boot it.

(From OE-Core rev: fd31e30f97ee9bd128d5b7b748987b0a6427b279)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:31 +01:00
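
The pin would look roughly like this in qemuarm.conf; the exact operator and version pattern used by the actual change are assumptions:

    PREFERRED_VERSION_linux-yocto ?= "4.8%"
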
Richard Purdie
9b7614e674 qemuarm: Add DTB file for new kernels
For kernels after 4.7, we need to ensure the DTB file for the kernel is
used by runqemu. Doing this conditionally based upon the kernel version
being built seems to be the only way forward for this.

(From OE-Core rev: 4615764509234bfb206ffe4cd430653b88d46ec3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:31 +01:00
Robert Yang
90ab47c6cf qemuarm.conf: set vars for runqemu
This info comes from the old runqemu.

(From OE-Core rev: f22f09f8e1bb24e92e9109fcd7a347277acedcce)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:31 +01:00
Robert Yang
605d8b1ef0 qemuboot.bbclass: add it for runqemu
It saves vars in ${DEPLOY_DIR_IMAGE}/<image>.qemuboot.conf, and runqemu
will read it.

BSPs which can be booted by runqemu will inherit it.

(From OE-Core rev: 1675e9b89e00b875d0da13411a2a939aa4ba5298)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 12:07:31 +01:00
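
A hedged sketch of the kind of QB_* variables a qemu machine conf would set for the class to record; treat the exact names and values as assumptions and consult the class comments for the authoritative list:

    QB_SYSTEM_NAME = "qemu-system-x86_64"
    QB_MEM = "-m 256"
    QB_DEFAULT_FSTYPE = "ext4"
    QB_OPT_APPEND = "-show-cursor -usb -usbdevice tablet"
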
Jianxun Zhang
2984863f42 qemu: fix: cp command cannot find tests/Makefile
Running "bitbake qemu" produces this error:

ERROR: qemu-2.7.0-r1 do_install_ptest_base: Function failed:
do_install_ptest_base
...
cp: cannot stat '...tmp/work/core2-64-poky-linux/qemu/2.7.0-r1
/qemu-2.7.0/tests/Makefile: No such file or directory
...

Commit 46e7b70699d8bf4db08c8bb5111974318dd5416d in the qemu project
renamed tests/Makefile to tests/Makefile.include; apply the same change
in the recipe accordingly to fix this issue.

Fixes [YOCTO #10245]

(From OE-Core rev: 7009a9309051061455fa7237e09796ef12c9e308)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:37 +01:00
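
Illustratively, the ptest install step only needs to reference the renamed file; the actual recipe change may be structured differently:

    do_install_ptest () {
        # upstream renamed tests/Makefile to tests/Makefile.include
        cp ${S}/tests/Makefile.include ${D}${PTEST_PATH}
    }
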
Robert Yang
e6c48b1213 qemu: 2.6.0 -> 2.7.0
This upgrade can fix a qemuppc + openssh bug where the ssh connection may be
refused or closed randomly, and it's not easy to reproduce. RP pointed out
that this upgrade can fix the problem, and it does work in my local
testing.

* Update add-ptest-in-makefile.patch
* Drop backported patch 0001-configure-support-vte-2.91.patch

Here is the Changelog:
http://wiki.qemu.org/ChangeLog/2.7

(From OE-Core rev: 056ce17e168bf856ff95a6f659098403169cb889)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:37 +01:00
Robert Yang
af7ec7afc0 parselogs: Whitelist qemux86 error message with qemu 2.7.0
qemu 2.7.0 introduces kernel errors:

[    2.310768] pci 0000:00:00.0: [11ab:4620] type 00 class 0x060000
[    2.311338] pci 0000:00:00.0: [Firmware Bug]: reg 0x14: invalid BAR (can't size)
[    2.311604] pci 0000:00:00.0: [Firmware Bug]: reg 0x18: invalid BAR (can't size)
[    2.311835] pci 0000:00:00.0: [Firmware Bug]: reg 0x1c: invalid BAR (can't size)
[    2.312063] pci 0000:00:00.0: [Firmware Bug]: reg 0x20: invalid BAR (can't size)
[    2.312323] pci 0000:00:00.0: [Firmware Bug]: reg 0x24: invalid BAR (can't size)
[    2.314320] pci 0000:00:0a.0: [8086:7110] type 00 class 0x060100
[    2.315363] pci 0000:00:0a.1: [8086:7111] type 00 class 0x010180

Whitelist this for now since this is preferable to the random failures
we're seeing from qemuppc with 2.6.0.

(From OE-Core rev: 4d542cdc86c34f0f4a3dde8b0aab059bca76a9fb)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:37 +01:00
Richard Purdie
e399ca1dd8 parselogs: Whitelist qemuppc error message with qemu 2.7.0
qemu 2.7.0 introduces kernel errors:

[0.474981] pci 0000:00:0f.0: reg 0x10: [io  0x0400-0x041f]
[0.483796] pci 0000:00:0f.0: reg 0x20: can't handle BAR above 4GB (bus address 0xffffffff82800000)
[0.484204] pci 0000:00:0f.0: reg 0x20: [mem size 0x00800000 64bit pref]
[0.488077] pci 0000:00:0f.0: reg 0x30: [mem 0x83000000-0x8303ffff pref]
[0.488903] pci 0000:00:10.0: [1af4:1005] type 00 class 0x00ff00
[0.490485] pci 0000:00:10.0: reg 0x10: [io  0x0480-0x049f]
[0.496512] pci 0000:00:10.0: reg 0x20: can't handle BAR above 4GB (bus address 0xffffffff83800000)
[0.496783] pci 0000:00:10.0: reg 0x20: [mem size 0x00800000 64bit pref]
[0.500345] pci 0000:00:11.0: [1af4:1001] type 00 class 0x010000
[0.501790] pci 0000:00:11.0: reg 0x10: [io  0x0500-0x053f]
[0.507362] pci 0000:00:11.0: reg 0x20: can't handle BAR above 4GB (bus address 0xffffffff84000000)
[0.507677] pci 0000:00:11.0: reg 0x20: [mem size 0x00800000 64bit pref]
[0.513905] pci_bus 0000:00: busn_res: [bus 00-ff] end is updated to 00
[0.516493] PCI 0000:00 Cannot reserve Legacy IO [io  0x0000-0x0fff]
[0.517512] pci 0000:00:0f.0: BAR 4: assigned [mem 0x80800000-0x80ffffff 64bit pref]
[0.518877] pci 0000:00:10.0: BAR 4: assigned [mem 0x82800000-0x82ffffff 64bit pref]
[0.519890] pci 0000:00:11.0: BAR 4: assigned [mem 0x83800000-0x83ffffff 64bit pref]

Whitelist this for now since this is preferable to the random failures
we're seeing from qemuppc with 2.6.0.

(From OE-Core rev: 2472ed5393ab01ad79c011ea19c73224ed5125de)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:37 +01:00
Joe Slater
f6ff0379b7 libwebp: specify neon availability for arm
Defeat automatic neon detection.

(From OE-Core rev: 1a563214caf6bd5b3a026ebe953f8c692ebd640a)

Signed-off-by: Joe Slater <jslater@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:36 +01:00
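
One common way to defeat autodetection is to pass an explicit configure switch based on TUNE_FEATURES; the switch names below are an assumption, not necessarily libwebp's:

    EXTRA_OECONF += "${@bb.utils.contains('TUNE_FEATURES', 'neon', '--enable-neon', '--disable-neon', d)}"
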
Henry Bruce
4192e4bad5 utils.bbclass: Added error checking for oe_soinstall
Fixes [YOCTO #10146]

(From OE-Core rev: cd5d532bd2a3f409b9470591c8d6f6b21e5995dd)

Signed-off-by: Henry Bruce <henry.bruce@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:36 +01:00
Henry Bruce
e9126926d4 utils.bbclass: Remove trailing whitespace
(From OE-Core rev: 1868db95819b45961cd7e8499ecace403e6bc91d)

Signed-off-by: Henry Bruce <henry.bruce@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:36 +01:00
Fabio Berton
38602249e2 watchdog-config: Add recipe
Provides configuration files for watchdog.
Add watchdog-config as a runtime dependency of watchdog and remove the
watchdog.conf file from the watchdog installation.

(From OE-Core rev: 6864ad2e863205472f8ea2057c61e949dc450151)

Signed-off-by: Fabio Berton <fabio.berton@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:36 +01:00
Fabio Berton
7a79aba22b watchdog: Add wd_keepalive package
This is a simplified version of the watchdog daemon. It only opens
/dev/watchdog, and keeps writing to it often enough to keep the kernel
from resetting, at least once per minute. Each write delays the reboot
time another minute. After a minute of inactivity the watchdog hardware
will cause a reset. In the case of the software watchdog the ability to
reboot will depend on the state of the machine and interrupts.

Installs the wd_keepalive binary and enables the initscript.

(From OE-Core rev: b76af8a0982c3c7473bd6ba067d1c8030d4d2f26)

Signed-off-by: Fabio Berton <fabio.berton@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:36 +01:00
Bruce Ashfield
f344440172 linux-yocto: update LINUX_VERSION to -rc5
The SRCREVs were previously updated to -rc5, but the LINUX_VERSION
was missed. As such, we are building and booting -rc5, but all the
packaging says -rc4.

Worth a quick update while we wait for -rc6.

(From OE-Core rev: ea2f99161a22ae2e9eefd3b337c9af7704c33e37)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:36 +01:00
Maxin B. John
649d79d752 kconfig-frontends: inherit pkgconfig
Instead of build-depending on pkgconfig-native, inherit the pkgconfig
class, which does the same thing.

(From OE-Core rev: 3a2b6b5ee1e43ec9c968062a3fbeb0e1a4630c18)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:36 +01:00
Maxin B. John
04554443e4 kmod: inherit pkgconfig
Instead of DEPENDS += "pkgconfig-native", inherit the pkgconfig class, which does
the same thing.

(From OE-Core rev: dbaa536c569d728f47f949fedbab165b73c9985d)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:36 +01:00
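
The change amounts to swapping an explicit native dependency for the class, roughly:

    # before
    DEPENDS += "pkgconfig-native"
    # after
    inherit pkgconfig
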
Jussi Kukkonen
e292f77d54 x11-common: Remove Xserver script
X startup is now handled in xserver-nodm-init.

(From OE-Core rev: 877851cf0f76a5052900954670fb64aed27a7a1f)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:36 +01:00
Jussi Kukkonen
3b7cdffebd xserver-nodm-init: Deprecate /etc/X11/Xserver
This commit should provide the same functionality as before, but
should make meta-oe xserver-nodm-init-2.0 obsolete as well as
keep systemd and sysvinit startup better in sync.

/etc/X11/Xserver is not called anymore: it is provided by both
x11-common and xserver-common with no useful differences (but some
annoying ones). Instead xserver-nodm-init provides
/etc/xserver-nodm/Xserver as the startup script and
/etc/default/xserver-nodm as the default settings file. These are
used by both init systems.

The Xserver script could be completely removed (with sysv and
systemd calling xinit directly), but to keep compatibility with
meta-oes xserver-nodm-init-2.0 the Xserver script sources
/etc/X11/xserver-common if one exists -- and systemd EnvironmentFile
cannot do that.

x11-common used to have a packageconfig to easily control screen
blanking. Move this to xserver-nodm-init.

(From OE-Core rev: e8ce3d2626e505924a75de96650abca166fd230a)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:36 +01:00
André Draszik
37f4b39d0e kernel-module-split.bbclass: no need for running depmod
With the recent changes, executing depmod is not needed
anymore.

This simplifies and removes a lot of unnecessary code.

(From OE-Core rev: 8296e258b36a6238605e068e13c1982b1d12fe53)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:36 +01:00
André Draszik
df1635e5fb kernel-module-split.bbclass: generate dependencies across recipes
The information retrieved via depmod is incomplete with
regards to kernel modules that are dependencies, in
particular where two kernel modules are built from
different source trees / recipes, which leads to incomplete
dependency information for packages created.

So far, our packages created didn't contain dependencies on
packages created by other recipes, as we solely use depmod
for that, and depmod can only work well after *all* kernel
modules have been copied into one place - it doesn't work
well in a staged approach.

Now that all .ko have correct dependency information at packaging
time, we can use that information to properly track dependencies
across recipes, and can combine the information from the
.modinfo elf section with the information from depmod.

(From OE-Core rev: e4af1fa3aee7f1cf00ca27944b10b886f41f2fda)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:36 +01:00
André Draszik
fe90376aca module.bbclass: use Module.symvers for dependants
When compiling multiple external kernel modules, where one
depends on the other, there are two problems at the
moment:
1) we get compile time warnings from the kernel build
   system due to missing symbols (from modpost).
2) Any modules generated are missing dependency
   information (in the .modinfo elf section) for any
   dependencies outside the current source tree and
   outside the kernel itself.

This is expected, but the kernel build system has a way to
deal with this - the dependent module is expected to
specify KBUILD_EXTRA_SYMBOLS (as a space-separated list)
to point to any and all Module.symvers of kernel modules
that are dependencies.

While 1) by itself is not really a big issue, 2) prevents
the packaging process from generating cross-source tree
package dependencies.

As a first step to solve the missing dependencies in
packages created, we:
1) install Module.symvers of all external kernel module
   builds (into a location that is automatically packaged
   into the -dev package)
2) make use of KBUILD_EXTRA_SYMBOLS and pass the location
   of all Module.symvers of all kernel-module-* packages
   we depend on

This solves both problems mentioned above.

(From OE-Core rev: 88f1bc77c22091fccb00e80839adfdf34534187f)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-09 11:53:36 +01:00
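
At the kernel build system level the mechanism is simply KBUILD_EXTRA_SYMBOLS pointing at each dependency's Module.symvers; a hedged sketch of what the class is effectively arranging (the symvers path is illustrative):

    # build an external module against the staged kernel, telling modpost where a
    # dependency's exported symbols live
    make -C ${STAGING_KERNEL_DIR} M=${S} \
         KBUILD_EXTRA_SYMBOLS="/path/to/kernel-module-foo/Module.symvers" modules
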
Bruce Ashfield
a2966330bc linux-yocto/4.4/4.8: uvesafb: provide option to specify timeout for task completion
Integrating the following patch:

[
   We try to make this change a generic extension, but it is
   actually for a corner case. When a VM (qemu) gets a very limited
   cpu bandwidth from host, which could be under a heavy load, the
   existing 5000 ms timeout could occur and trigger error messages
   in the task function's callers.

   This change adds a new timeout parameter so that we can tweak
   the value as a workaround or for troubleshooting purposes. In
   the infinite wait case, a warning message is printed at 5000 ms
   intervals.

   In the real world, the current 5 sec is generous enough for a video
   request in my opinion, so this change may not be very useful.

   Upstream Status: Inappropriate

   Signed-off-by: Jianxun Zhang <jianxun.zhang@linux.intel.com>
]

(From OE-Core rev: 872a83be6e86005f6426c90073ece56de4534ac0)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 08:25:12 +01:00
Bruce Ashfield
f14532a572 linux-yocto: update to 4.8-rc5
(From OE-Core rev: 3d9735e3ccacbd60e060683c41c4203184fce109)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 08:25:11 +01:00
Bruce Ashfield
27eacf6314 kernel-yocto: restore missing configuration meta data
Some of the meta-data from the 4.4 kernel was missing from the 4.8
branch. This resulted in some functionality drops and also a size/time
increase in the kernel build (due to debug being turned on).

With this resync, we now have the missing config restored.

(From OE-Core rev: eb0b4f05f89ae014953492ea7bc0afc9fef1abce)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 08:25:11 +01:00
Bruce Ashfield
50c4c79315 kernel-yocto: allow --allnoconfig and --alldefconfig as KCONFIG_MODES
Previously, merge_config.sh was wrapped by the configme script; configme
took the different KCONFIG_MODES as options and used --allnoconfig
or --alldefconfig.

With the switch to merge_config.sh no longer being wrapped, the new
processing wasn't matching the existing values and only supported
allnoconfig or alldefconfig.

To avoid breaking existing layers, and also to keep working any that
have already switched, we can make the case statement match both.

(From OE-Core rev: 614227f28a023fe148307e0d85a5e9b8d9b74372)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 08:25:11 +01:00
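
Assuming the variable consumed by kernel-yocto is KCONFIG_MODE, a layer setting in either spelling should now be accepted, e.g.:

    KCONFIG_MODE = "--alldefconfig"
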
Bruce Ashfield
81297ee22d kernel-yocto: restore kernel-meta data detection for SRC_URI elements
Before the kernel tools were simplified and streamlined, there was code
which not only migrated a patch/cfg/scc to the kernel build tree, but
also migrated any subdirectories of those patches.

The effect of this data migration was that any other meta data in
a patch's directory structure would be available for processing.

While we don't want to do this migration anymore, it is possible to
check the path of any SRC_URI patches, and if they include a "kernel-meta"
subdirectory add it to the search path.

This restores the functionality without the old complexity.

(From OE-Core rev: 7ef7af5c03bad28faf380986f792f7f3d4d5944d)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 08:25:11 +01:00
Paul Eggleton
ce4ea7a730 recipetool: create: avoid extra blank lines in output recipe
If we output extra blank lines (because of some automated editing) then
it makes the output recipe look a bit untidy. You could argue that we
should simply have the editing code not do that, but sometimes we don't
have enough context there for that to be practical. It's simple enough
to just filter out the extra blank lines when writing the file, so just
do it that way.

(From OE-Core rev: cbebc9a2edf7d7a422ee5c71219e79e3b349de3b)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:36:49 +01:00
Paul Eggleton
ff259b095d recipetool: create: support node.js code outside of npm
If you have your own node.js application you may not publish it (or at
least not immediately) in an npm registry - it might just be in a
repository on github or on your local machine. Add support to recipetool
create for creating recipes to build such applications - extract their
dependencies, fetch them, and add corresponding npm:// URLs to SRC_URI,
and ensure that LICENSE / LIC_FILES_CHKSUM are updated to match. For
example, you can now run:

  recipetool create https://github.com/diversario/node-ssdp

(I had to borrow some code from bitbake/lib/bb/fetch2/npm.py to
implement this functionality; this should be refactored out but now
isn't the time to do that refactoring.)

Part of the fix for [YOCTO #9537].

(From OE-Core rev: 4fb8b399c05a1b66986fc76e13525f6c5e0d9b58)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:36:49 +01:00
Paul Eggleton
fa90c2f54d recipetool: create: allow license variable handling to be rerun
If you make adjustments to the source tree (as create_npm.py will be)
then you will need to re-run the license variable handling code at the
end so that we get all of the files that should go into
LIC_FILES_CHKSUM if nothing else. Split out the license variable
handling to a separate function in order to allow this.

(From OE-Core rev: f0d6f4b7e87ea781ac0dffcc8d0310570975811b)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:36:49 +01:00
Paul Eggleton
b1c3e44dfb recipetool: create: add --keep-temp command line option
For debugging it's useful to be able to tell recipetool to keep the
temporary directory.

(From OE-Core rev: 480a6b745a85b2881e5cc1a0bbb572e3235ca008)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:36:48 +01:00
Paul Eggleton
17afc80320 recipetool: create: support git submodules
Ensure we fetch submodules and set SRC_URI correctly when pointing to a
git repository that contains submodules.

(From OE-Core rev: 65d5cc62d4ecfc78ce4b37b3886a7fe5aa05a75e)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:36:48 +01:00
Paul Eggleton
9885a9dd31 recipetool: create: fix mapping python dependencies to python-dbg package
When trying to map python module dependencies to the packages that
provide them, if we're looking for .so files that satisfy
dependencies then we need to exclude files found under the .debug
directory, otherwise the dependency will get mapped to the python-dbg
package which isn't correct.

For example, this fixes creating a recipe for pyserial and not getting
python-fcntl in RDEPENDS_${PN}, leading to errors when trying to use the
serial module on the target.

(From OE-Core rev: 46a068ca35975988a8e9c0310f71fdcee55937a4)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:36:48 +01:00
Paul Eggleton
4da96ce61f recipetool: create: AX_PKG_SWIG should add dependency on swig-native
If AX_PKG_SWIG is found in configure.ac, then what's being looked for is
the swig binary, not swig for the target - so fix the dependency
accordingly.

(From OE-Core rev: 2600cd6f6c63ecf79804e2bc6eb6f198a012d5d6)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:36:48 +01:00
Paul Eggleton
39d3aa2828 devtool: update-recipe: support files with subdir=
It's rare but there are recipes that have individual files (as opposed
to archives) in SRC_URI using subdir= to put them under the source tree,
the examples in OE-Core being bzip2 and openssl. This broke devtool
update-recipe (and devtool finish) because the file wasn't unpacked into
the oe-local-files directory, and thus when it came time to update the
recipe, the file was assumed to have been deleted by the user and was
erroneously removed. Add logic to handle these properly so
that this doesn't happen.

(We still have another potential problem in that these files become part
of the initial commit from upstream, which could be confusing because
they didn't come from there - but that's a separate issue and not one
that is trivially solved.)

(From OE-Core rev: 9069fef5dad5a873c8a8f720f7bcbc7625556309)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:36:48 +01:00
Paul Eggleton
94aefd9a39 lib/oe/patch: handle non-UTF8 encoding when reading patches
When extracting patches from a git repository with PATCHTOOL = "git" we
cannot assume that all patches will be UTF-8 formatted, so as with other
places in this module, try latin-1 if utf-8 fails.

This fixes UnicodeDecodeError running devtool update-recipe or devtool
finish on the openssl recipe.

(From OE-Core rev: 579e4d54a212d04cfece2c9fc0635d7ac1644058)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:36:48 +01:00
Ed Bartosh
ead0545626 bitbake: cooker: record events on cooker exit
Bitbake collects all events in a special event queue when called with
the -w option. However, it starts to write events to the eventlog only
after the BuildStarted event is received. In some cases this event is
not received at all, e.g. when bitbake is run with the --parse-only
command line option.

It makes sense to write all collected events when the CookerExit event is
received, to make sure all events are written into the eventlog even
if the BuildStarted event is not fired.

[YOCTO #10145]

(Bitbake rev: 57912de63fa83550c0ae658eb99b76e9cc91a8d1)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:47 +01:00
Ed Bartosh
b22a505414 bitbake: toaster: don't kill all runserver processes
Toaster script kills runserver process 2 ways:
 - sending signal to pid from .toastermain.pid.
 - sending signal to pids found by grepping ps output:
       ps fux | grep "python.*manage.py runserver"

The second approach is redundant and harmful as it kills all django
development servers running on the machine.

[YOCTO #7973]

(Bitbake rev: 0f47b17fe88dc660648d94b2d8d8286d87ae6295)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:47 +01:00
Ed Bartosh
d2e7ed0c88 bitbake: toaster: remove handling of .toasterui.pid
This file is not created anywhere, but is handled in the toaster
script code.

(Bitbake rev: 16f3cd3535c9eec71ea7594c1e3a83db00dba7ca)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:47 +01:00
Ed Bartosh
8719012c78 bitbake: toaster: don't kill toaster on start
There is no point in trying to kill the django development server
when toaster starts because the 'manage.py checksocket' command is already
used in the script code to check if the development server port is occupied.

Even if Toaster is listening on another port, killing the previous instance
is quite implicit and doesn't solve anything, as there are other
processes that might still be running.

(Bitbake rev: 0dab45e9815e8939219900264e86f569c714b7c6)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:47 +01:00
Belen Barros Pena
87cf84fead bitbake: toaster: orm Update IMAGE_FSTYPES values
This patch fixes a typo in one of the IMAGE_FSTYPES values listed in
Toaster. It also updates the hardcoded list of values to match the
latest list in meta/classes/image_types.bbclass.

[YOCTO #9447]

(Bitbake rev: 46db3279cb81b3ca6ce047204aee620f5ee51220)

Signed-off-by: Belen Barros Pena <belen.barros.pena@linux.intel.com>
Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:47 +01:00
David Reyna
36fe748957 bitbake: toaster: keep layer name in variable history path
When converting variable history file names to relative
paths, keep the layer directory's name so that the user
can distinguish between conf files with the same name.

[YOCTO #8188]

(Bitbake rev: 59561d652af91c2099b735084f0e44275d68e637)

Signed-off-by: David Reyna <david.reyna@windriver.com>
Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:47 +01:00
Belen Barros Pena
8ba4f54037 bitbake: toaster: Allow forward slash in variable names
Add forward slash to the list of special characters allowed in variable
names. Also update the list of allowed special characters in the error
messages.

[YOCTO #9611]

(Bitbake rev: 146f6f95a8753308edb31e952d7c440c8de11870)

Signed-off-by: Belen Barros Pena <belen.barros.pena@linux.intel.com>
Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:47 +01:00
Belen Barros Pena
b5070f5337 bitbake: toaster: layer details Fix "edit" form interaction
Make sure the layer information disappears when the edit form shows, and
that the layer details come back when you click the 'cancel' button in
the edit form.

(Bitbake rev: bd08abe7c1f5fc96ee73c20b2c9d10a591a5f69c)

Signed-off-by: Belen Barros Pena <belen.barros.pena@linux.intel.com>
Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:47 +01:00
Belen Barros Pena
23056fcc73 bitbake: toaster: import layer Layout fixes
The layout of the import layer form was looking a bit awkward. This
commit tidies things up a bit.

(Bitbake rev: e5e51ca1394bc392eba99742c59d86b8e5fd5623)

Signed-off-by: Belen Barros Pena <belen.barros.pena@linux.intel.com>
Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:46 +01:00
Belen Barros Pena
c4fcf41d7f bitbake: toaster: layer details Layout fixes
The layout of the layer details page was looking a bit awkward. This
commit tidies things up.

(Bitbake rev: ce9a5f885f43bebf39d191309f48da83b31e60e0)

Signed-off-by: Belen Barros Pena <belen.barros.pena@linux.intel.com>
Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:46 +01:00
Belen Barros Pena
65e1d66727 bitbake: toaster: configuration Provide machine help text
When you change the machine from the project configuration page, you get
some useful suggestions as you start typing a machine name. However, the
suggestions only include machines provided by the layers added to your
project. This is not necessarily clear from the design (yes, it should
be improved), which means you might be looking for a machine, not see it
in the suggestions, and assume the machine is not supported by
OpenEmbedded.

Since we are in no position to change the design of this page right now,
add some explanatory help text to address the situation.

[YOCTO #8034]

(Bitbake rev: 829c9bcb58f961c644e24b24265e0ef45f0fec57)

Signed-off-by: Belen Barros Pena <belen.barros.pena@linux.intel.com>
Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:46 +01:00
Belen Barros Pena
ebc6e84b3e bitbake: toaster: tasks Remove recipe version from defaults
The 'Recipe version' column should not be part of the set of columns
shown by default in the tasks table. Set the hidden property for that
column to 'True' so that it doesn't show when you load that table
for the first time.

[YOCTO #10179]

(Bitbake rev: 249dd31fcaabbec32fdee30b0c84be90d4f92430)

Signed-off-by: Belen Barros Pena <belen.barros.pena@linux.intel.com>
Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:46 +01:00
Paul Eggleton
ee7c6d00bb bitbake: lib/bb/utils: edit_metadata() comment tweaks
No functional changes, just make a couple of minor tweaks to the
comments for edit_metadata():

* There are four elements to be returned by the callback function
* Add an example return statement for when you don't want to modify the
  value

(Bitbake rev: 99675c19375c96140bc8ae8f9fc3a1945a77cebb)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:46 +01:00
Paul Eggleton
d67e3b4c70 bitbake: fetch2/npm: clarify comment
The correct name of the parameter is "version" not "ver" so ensure we
aren't misleading the user by giving the latter in an example.

(Bitbake rev: 14c045c6a20993d389b91ae2459d811a1430a7b2)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:46 +01:00
Paul Eggleton
3a0f5d95a7 bitbake: fetch2/npm: handle top-level shrinkwrap file
Allow using a top-level shrinkwrap file with one or more npm://
dependencies, i.e. if the module isn't found at the top level then look
one level down.

Part of the fix for [YOCTO #9537].

(Bitbake rev: f7de3f8b5f628dee043fe783148812914ab20813)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:46 +01:00
Paul Eggleton
5ab6867714 bitbake: fetch2/npm: support subdir= parameter
"npmpkg" can be a default, but it should respect the subdir parameter as
with other FetchMethods. This allows us to have more than one npm://
entry in SRC_URI without nasty hacks.

Fix required in order to support [YOCTO #9537].

(Bitbake rev: e6a94d2091ec5d42f25102334a8492a731b8dec3)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:46 +01:00
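
With subdir= respected, more than one npm:// entry can coexist in a single SRC_URI; a hypothetical example (module names, versions and registry are illustrative):

    SRC_URI = "npm://registry.npmjs.org;name=example-app;version=1.0.0 \
               npm://registry.npmjs.org;name=example-dep;version=2.3.4;subdir=node_modules/example-dep"
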
Paul Eggleton
eb53750ab7 bitbake: fetch2/npm: fix broken fetches if more than one npm URL fetched
You cannot set a URL-specific value in an object-level variable on
the FetchMethod in urldata_init(), or the result is that the value specific to
the last URL will be the one that gets set. This prevented fetching more
than one npm:// URL correctly - the other tarballs would not download to
the correct location, and do_unpack failed to find them as a result.

Fix required in order to support [YOCTO #9537].

(Bitbake rev: 1435b49ea7d0f9d4cc4a665fb2aa83d1eea7900f)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:46 +01:00
Paul Eggleton
1937b17f67 bitbake: fetch2/npm: explicitly specify workdir
We were downloading into the current directory here, which is fine if
that current directory can be expected to be the right place - but
that's not true when called from recipetool within OE. We should
explicitly specify the directory to run the command in and then there
won't be a problem.

(Bitbake rev: 0ddaf725e5a0675b252b7f80b1706370e478175b)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:33:46 +01:00
Jack Mitchell
144e7dbc35 file: build with c std as c99
When using a toolchain not shipped by OE-Core, such as Linaro's, we
can't be sure which C standard the compiler defaults to. Set the build
to compile as c99, the lowest version supported.
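
A minimal sketch of the kind of change described (where exactly the recipe
sets this is an assumption, not a quote of the actual patch):

    CFLAGS += "-std=c99"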

(From OE-Core rev: e544ca08a2bcb5a8d98671e63f6c8b7b21c562ea)

Signed-off-by: Jack Mitchell <jack@embed.me.uk>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
Robert Yang
9d4fd7de16 apt: add PACKAGECONFIG for lz4
Fixed:
apt-1.2.12: apt rdepends on lz4, but it isn't a build dependency, missing lz4 in DEPENDS or PACKAGECONFIG? [build-deps]

(From OE-Core rev: 06ddde2a986dc94962edb20cfbbb9b1e2f0977a8)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
Markus Lehtonen
98264a9191 oeqa.buildperf: be sure to use the latest buildstats
Be sure to take the latest buildstats if multiple buildstats are found.

(From OE-Core rev: bad495f0d0144728a0132c3d3c4d98c24ead4afd)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
Robert Yang
b673a562d1 pciutils: fix PACKAGECONFIG
The PACKAGECONFIG values don't go into EXTRA_OECONF, but into
PACKAGECONFIG_CONFARGS.

Fixed:
pciutils-3.5.1: libpci rdepends on libudev, but it isn't a build dependency, missing eudev in DEPENDS or PACKAGECONFIG? [build-deps]
pciutils-3.5.1: pciutils rdepends on libudev, but it isn't a build dependency, missing eudev in DEPENDS or PACKAGECONFIG? [build-deps]

(From OE-Core rev: d941d66d714545eae589115db48f1243399711f2)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
Markus Lehtonen
d021889ba9 oeqa.buildperf: try harder when splitting 'nevr' string
Try to be more intelligent when splitting out the recipe name, epoch,
version and revision from the buildstat directory name. The previous
assumption was that package versions never contain a dash, but obviously
that is not necessarily true. The new assumption is that the package
version starts with a number.

(From OE-Core rev: 91d3fce1eb3e27d646afba8cf3c03ae560412d1d)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
Mingli Yu
4594f25fb0 webkitgtk: 2.12.4 -> 2.12.5
Fix a regression introduced in 2.12.4 that caused
a hang in the network process after a load failure.

Fix several crashes and rendering issues.

reference: https://webkitgtk.org/2016/09/05/webkitgtk2.12.5-released.html

(From OE-Core rev: 948583fcd328b53289c6735d3e355c8fe2da680e)

Signed-off-by: Mingli Yu <Mingli.Yu@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
Martin Jansa
f203e5bfb8 lighttpd: fix EXTRA_OECONF
* --without-memcache was renamed to --without-memcached in:
  f3b577ddee
* causing:
  ERROR: lighttpd-1.4.41-r0 do_configure: QA Issue: lighttpd: configure was passed unrecognised options: --without-memcache [unknown-configure-option]

(From OE-Core rev: d53b220205259705649cb7741a21cb267519d565)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
Andreas Müller
79d45bf56d cmake.bbclass: avoid treating imports as system includes
CMake marks all imported headers as system headers. This causes trouble for C++
projects [1].

Thanks to Jack Mitchell for pointing to the setting [2]. Build-tested against a
meta-qt5-extra world build, which had lots of fallout before.

[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70129
[2] http://lists.openembedded.org/pipermail/openembedded-core/2016-September/126067.html
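
The setting itself is not named above; assuming it is CMake's
CMAKE_NO_SYSTEM_FROM_IMPORTED switch, a hedged sketch of how a class or
recipe could pass it would be:

    EXTRA_OECMAKE_append = " -DCMAKE_NO_SYSTEM_FROM_IMPORTED=1"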

(From OE-Core rev: a5bf690e27a32c5470a4e110ab58ed0a92b9d039)

Signed-off-by: Andreas Müller <schnitzeltony@googlemail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
Ross Burton
23ec1fcfa4 python: recompile _sysconfigdata.py after modifying it
We sed this file after the .pyc has been generated, so re-compile the .pyc to
ensure that it is up to date.

(From OE-Core rev: 66e55d3af7d7948869620ce24c06ba2bc705ae0a)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
Aníbal Limón
d41968d8a8 systemd_230.bb: Set journal RuntimeMaxSize to 64M as default
At this time systemd journald uses the /run tmpfs to store logs. By
default systemd uses 15% of the available space [1] in the /run
partition; when that space runs out, journald starts to vacuum/store
the logs into /var/log [1].

This causes two problems: one of them is dev-ttySN.device timeouts
when debug is enabled and the journal is used as systemd.log_target [2];
the other is syslog entries not being found in the journal log [3].

These problems are now more evident because I recently enabled the
systemd debug option in testimage [4].

One area of improvement would be to add support in systemd journald for
reading these parameters from the kernel cmdline, like systemd.log_target;
if that support existed we could add the parameter at the testimage level.

[1] https://www.freedesktop.org/software/systemd/man/journald.conf.html#SystemMaxUse=
[2] https://bugzilla.yoctoproject.org/show_bug.cgi?id=8142#c19
[3] https://bugzilla.yoctoproject.org/show_bug.cgi?id=10128#c4
[4] http://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/?id=a86a1b2703372c12e7fca18918695d093ea6ee53

[YOCTO #10128]

(From OE-Core rev: 808952bf6d2b7549b456293ead4728b4dbf0d89b)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
Stefan Agner
a477fcd752 busybox: avoid circular dependency when using initramfs
The kernel does not automatically mount devtmpfs when using initramfs
based booting (even when using CONFIG_DEVTMPFS_MOUNT). If the rootfs
is built with USE_DEVFS=1 (which is the default), the system ends up
with a completely empty /dev to begin with.

Busybox uses the first field of an inittab entry slightly differently than
other init systems:
<id>: WARNING: This field has a non-traditional meaning for BusyBox init!

The id field is used by BusyBox init to specify the controlling tty for
the specified process to run on.  The contents of this field are
appended to "/dev/" and used as-is.

Since /dev/null is not there yet, Busybox throws errors instead of
executing the commands, and hence never mounts devtmpfs:
init started: BusyBox v1.24.1 (2016-09-04 11:53:14 PDT)
can't open /dev/null: No such file or directory
can't open /dev/null: No such file or directory
can't open /dev/null: No such file or directory
can't open /dev/null: No such file or directory
can't open /dev/null: No such file or directory
can't open /dev/null: No such file or directory
can't open /dev/null: No such file or directory

Avoid this circular dependency by not specifying <id>. With that,
Busybox ends up using the stdio of the init process and executes
the inittab just fine.

(From OE-Core rev: 82de49b899bca915259ea7ea149f50e1401c2426)

Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
Marek Vasut
b4f2c760aa libcap: Replace EXTRA_OECONF with PACKAGECONFIG_CONFARGS
When building libcap and DISTRO_FEATURES does not contain pam,
the build will fail on missing pam headers. This is because the
bits from EXTRA_OECONF moved to PACKAGECONFIG_CONFARGS and thus
the necessary options are not propagated to oe_runmake anymore.
Replace EXTRA_OECONF with PACKAGECONFIG_CONFARGS to fix this.

| arm-poky-linux-gnueabi-gcc  -march=armv7-a -mfpu=vfp  -mfloat-abi=softfp --sysroot=/b/tmp/sysroots/board  -O2 -pipe -g -feliminate-unused-debug-types -fdebug-prefix-map=/b/tmp/work/armv7a-vfp-poky-linux-gnueabi/libcap/2.25-r0=/usr/src/debug/libcap/2.25-r0 -fdebug-prefix-map=/b/tmp/sysroots/x86_64-linux= -fdebug-prefix-map=/b/tmp/sysroots/board=  -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -Dlinux -Wall -Wwrite-strings -Wpointer-arith -Wcast-qual -Wcast-align -Wstrict-prototypes -Wmissing-prototypes -Wnested-externs -Winline -Wshadow -g  -Dlinux -Wall -Wwrite-strings -Wpointer-arith -Wcast-qual -Wcast-align -Wstrict-prototypes -Wmissing-prototypes -Wnested-externs -Winline -Wshadow -g  -fPIC -I/b/tmp/work/armv7a-vfp-poky-linux-gnueabi/libcap/2.25-r0/libcap-2.25/pam_cap/../libcap/include/uapi -I/b/tmp/work/armv7a-vfp-poky-linux-gnueabi/libcap/2.25-r0/libcap-2.25/pam_cap/../libcap/include -c pam_cap.c -o pam_cap.o
| pam_cap.c:19:34: fatal error: security/pam_modules.h: No such file or directory
|  #include <security/pam_modules.h>
|                                   ^
| compilation terminated.
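
A minimal sketch of the general idea (not the exact libcap change): feed the
PACKAGECONFIG-derived options to make rather than relying on the now-unused
EXTRA_OECONF, e.g.:

    EXTRA_OEMAKE += "${PACKAGECONFIG_CONFARGS}"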

(From OE-Core rev: f3a50f89a217014c0926498e99e62c617a8a4cae)

Signed-off-by: Marek Vasut <marex@denx.de>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
Ross Burton
f52cc8bdef autoconf: remove upstreamed patch
(From OE-Core rev: 3d4834860c0e9c2635c248d498d02160cbedebde)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
André Draszik
0d92f448c4 libffi: backport patch to fix building MIPS soft float
Upstream-Status: Backport [2ded2a4f49]

(From OE-Core rev: 0231a6f92d2c8b89b419aeb09a4b35f871bfb2bf)

(From OE-Core rev: 0ce7415bb50bf1941981ef61590fe642b055d290)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
Jussi Kukkonen
3e53ab2ed1 gnutls: update to 3.5.3
Add patch to fix compile without libtasn headers.

(From OE-Core rev: b43e4499fb3bae4740660a729a900d951eab00e8)

(From OE-Core rev: 972ab9246e4b5a0f46a4f2b5b1e54773beac11bb)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-08 00:32:43 +01:00
Markus Lehtonen
f7ca989ddc oeqa.buildperf: correct globalres time format
Always use two digits for (integer part of) seconds, i.e. show '1:02.34'
instead of '1:2.34'.

(From OE-Core rev: 55bb6816aca39bfa25d4f7e2158a57a5f0ac1cca)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-06 10:24:04 +01:00
Markus Lehtonen
1f706698cd oe-build-perf-test: fix log file path
The --log-file command line argument was slightly broken as {out_dir}
string replacement was not working as expected.

(From OE-Core rev: fc62f54e3d788cc79fd27664f05db7efccef23ab)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-06 10:24:04 +01:00
Joshua Lock
e8e81789f9 selftest/liboe: add a test for copyhardlinktree()
Add a simple test to validate that the number of files in the
destination matches the number of files in the source after the
copyhardlinktree() has been performed.

(From OE-Core rev: ca5c718b309524e46818627f8b5c9260d009472d)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-06 10:24:04 +01:00
Joshua Lock
822c708e8f oe.path: fix copyhardlinktree()
The change to preserve extended attributes in copytree() and
copyhardlinktree() (e591d69103a40ec4f76d1132a6039d9cb1555103)
resulted in an incorrect cp invocation in copyhardlinktree() when
the source directory contained hidden files.

This was because the passed src was modified in place but some code
paths expected it to remain unmodified from the passed value.
Resolve the issue by constructing a new source string, rather than
modifying the passed in string.

(From OE-Core rev: 2b9fdd8448c2c29418d1c3fca9fe1789466f09b4)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-06 10:24:04 +01:00
Otavio Salvador
f5b4ca2ad7 lttng-modules: Do not fail if CONFIG_TRACEPOINTS is not enabled
The lttng-modules are being pulled in by the tools-profile image feature;
however, not every kernel has the CONFIG_TRACEPOINTS feature enabled.

This change makes the build not fail when CONFIG_TRACEPOINTS is not
available, allowing the package to keep being pulled in by default.

(From OE-Core rev: 6215ffec6a3d5069cc74ae9853167c3c6395b1db)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-06 10:24:04 +01:00
Otavio Salvador
700501808a lttng-modules: Bump to 6e4fc6f3 revision
This moves the recipe to the tip of the stable-2.8 branch, which allows the
use of Linux 4.8 while keeping us on a stable release.

(From OE-Core rev: 34cac40670e94a9e3ffc2a734ce1f826dc60516b)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-06 10:24:04 +01:00
Zhixiong Chi
0abb1d7e22 rpm: ensure rpm2cpio call rpm relocation code
We need to call rpmcliInit to ensure the rpm relocation code is called.
When we allow rpm2cpio to be relocatable, the adjusted path used to find
the macro files is built into the binary; this path was valid
for the machine it was built on and some of our other build machines,
but invalid on some others, and was not being properly overridden at
runtime.

When we export the wrsdk, source the SDK and then execute rpm2cpio xxx.rpm|cpio -t,
we get the following error:
"rpm-5.4.14/rpmdb/dbconfig.c:493:
db3New: Assertion `dbOpts != ((void *)0) && *dbOpts != '\0'' failed."

(From OE-Core rev: aea2bf5c8101ac0bb27776a5614be345835c4a03)

Signed-off-by: Zhixiong Chi <Zhixiong.Chi@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-06 10:24:04 +01:00
Robert Yang
92fc3ef973 coreutils: enable xattr for native
lib/oe/path.py requires cp with xattr support; this fixes:
Subprocess output:
cp: cannot preserve extended attributes, cp is built without xattr support
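
A hedged sketch of enabling such a feature for the native variant only
(this assumes the recipe already defines an 'xattr' PACKAGECONFIG option):

    PACKAGECONFIG_append_class-native = " xattr"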

(From OE-Core rev: 18ff7efef77120538372a81b2cc8e8479742b064)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-06 10:24:04 +01:00
Richard Purdie
b9d90ace00 poky: Update to linux-yocto 4.8 for qemu* machines
This enables the 4.8 kernel by default for the qemu machines.

(From meta-yocto rev: 2dd82f25a365070b79f0f2d6b4eb2c6e793c74f9)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:52 +01:00
Wang Xin
6a49d3fb25 sysstat: 11.3.5 -> 11.4.0
Upgrade sysstat from 11.3.5 to 11.4.0.

(From OE-Core rev: 3ec68a97d7addc857425a6e3cf0a219913d99c59)

Signed-off-by: Wang Xin <wangxin2015.fnst@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:02 +01:00
Markus Lehtonen
91ae03931e build-perf-test-wrapper.sh: fix handling of -C argument
Not specifying -C caused oe-build-perf-test to try to commit results to
the build directory.

(From OE-Core rev: 6f4786f5522c366a7fd92f630c3f32629a9f9471)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:02 +01:00
André Draszik
5f5115a64b ofono: RRECOMMENDS tun.ko & APN database
- kernel-module-tun is needed so that ofono can create the
  ppp network interface

- mobile-broadband-provider-info is needed as an explicit
  dependency even though it is in DEPENDS, because it's
  just an xml database, and the DEPENDS simply allows
  ofono to figure out its location in the file system
  (using pkg-config during configure). But there is no
  shared library dependency or so for bitbake to figure
  out this runtime dependency.
  We make it a recommendation only, so that it can still
  be removed from filesystem images in case people build
  images that don't need the provider database (and e.g.
  hard-code APNs for specific use-cases); see the sketch below.
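
A minimal sketch of the resulting recipe fragment (package names are taken
from the description above; the exact placement in the recipe is an
assumption):

    RRECOMMENDS_${PN} += "kernel-module-tun mobile-broadband-provider-info"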

(From OE-Core rev: 1cb0eb9a013ad8a4092f610faeab2ee2720b9e66)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:02 +01:00
André Draszik
fb75cd2186 e2fsprogs: packaging cleanups (compile_et & mk_cmds)
While comparing what were supposed to be similar
filesystems from different build machines, some issues
have been noticed in the e2fsprogs recipe, in
particular with the compile_et and mk_cmds utilities.

1) target:
   move compile_et and mk_cmds into the -dev package

   Both are development tools, from the man pages:

   compile_et - error table compiler
     compile_et converts a table listing error-code names
     and associated messages into a C source file suitable
     for use with the com_err(3) library.

   mk_cmds - error table compiler
     mk_cmds converts a table listing command names and
     associated help messages into a C source file suitable
     for use with the ss(3) library.

2) native/nativesdk
   Also apply cleaning of host path (build directory) here,
   so that only the sysroot directory remains, which is
   properly adjusted by the sstate handling.

3) make cleaning of host path actually work
   The existing sed command wasn't working, in particular
   for compile_et; we fix up the sed command so that
   removal of references to the local build directory
   really works. Do the same changes for mk_cmds, for
   consistency.

(From OE-Core rev: 3982b57e179872eb119ecb75237981beec398cb6)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:02 +01:00
André Draszik
5b8206ff36 boost: fix MIPS16e compilation
Backport upstream patch to use g++ 4.1+ __sync intrinsics
instead of incompatible hand-written assembly when
compiling for MIPS16e

Upstream-Status: Backport https://svn.boost.org/trac/boost/ticket/12418

(From OE-Core rev: 8ded5da8952e4a39851e0184bde323e01dd73d2c)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:02 +01:00
André Draszik
7e712e1831 boost: fix mips soft float compilation
Upstream-Status: Submitted https://svn.boost.org/trac/boost/ticket/11756

(From OE-Core rev: 3e40a1d230a9c6f169f78c990b428019f321d90b)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
André Draszik
02d82eacc7 boost: fix a musl compilation warning
Upstream-Status: Submitted https://svn.boost.org/trac/boost/ticket/12419

(From OE-Core rev: 03b553e1b2b11ddd7d72a3bb0180d18f36da53b5)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Zubair Lutfullah Kakakhel
d7268e9bba valgrind: Disable for MIPS Soft Float
Valgrind doesn't build for MIPS soft float. Disable the build until
the package has support for it.

(From OE-Core rev: f45a2907ba621d5e87614adcc724838fd32ad8ba)

Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Zubair Lutfullah Kakakhel
8edb53c58b packagegroup: Disable packages not available on mipsel
These are not available on mipsel yet, so disable them.

(From OE-Core rev: d7ef5e14ab1f31b0dc34b6e5965ae783b063ecbb)

Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Zubair Lutfullah Kakakhel
88b99a4200 packagegroup-core-sdk: Disable sanitizers for mipsel
These are not available on mipsel yet, so disable them.

(From OE-Core rev: 33a3f2be1e84421efb0cb0f5a6f3a09b868f6931)

Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Bruce Ashfield
ed8eceada1 linux-yocto/4.1: backport virtio HW_RANDOM_VIRTIO config
We enabled HW_RANDOM_VIRTIO for the 4.4+ kernels, but it is also needed
for 4.1 to ensure that VMs have sufficient entropy. Without this entropy
networking on qemuppc starves and triggers intermittent errors.

(From OE-Core rev: 89457aae92cf8748d8fbad2509f78f54a6b8fac1)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Richard Purdie
42e2b97441 oeqa/parselogs: Add qemuarm64 warning from 4.8 kernel to whitelist
(From OE-Core rev: ae865fee26d2a32ae07236fc7aa1cf1b234a2156)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Bruce Ashfield
d9c2c02e96 oeqa/parselogs/qemuarm: Whitelist amba and jitter for 4.8+ kernels
With the update to the 4.8 kernel the versatile platform (and hence
qemuarm) has switched to a device tree boot.

We are using an unmodified mainline kernel versatilepb device tree,
which includes definitions of multiple amba devices. These devices
are not present in the qemu system emulation, hence throw warnings
during boot.

These warnings are not unique to oe-core, and rather than carry kernel
patches to the device tree (for now), we whitelist the known warnings
so QA testing will pass. We also can't turn amba off completely, since
it is providing valid devices (like the serial port) and AMBA is
force selected by other kconfig values.

We also have a jitterentropy warning that shows up on some hosts.
This warning is harmless, and like amba we can't turn it off in a
fragment since it is force selected by crypto (and we'd rather not
turn all crypto off). So we add it to the whitelist while investigations
continue into what is needed in the host to support this fully.

(From OE-Core rev: f5315b8c7998611da9984fd6bce2b48d6304ff6c)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Richard Purdie
8078c5e11f cryptodev: Add backported patches for 4.6+ kernels
This allows 4.6 onward kernels to build, backported from upstream
master.

(From OE-Core rev: e0e073a8e60b965333b537436a3441fc1ec37372)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Bruce Ashfield
223eb651a6 linux-yocto/4.x: configuration updates
Integrating a series to explicitly set the quark build to 32 bits
and avoid 64-bit x86 defaults.

We also have a series of commits that fix configuration warnings on
x86 platforms:

 intel-quark.cfg: Explicitly disable CONFIG_64BIT
 common-pc-drivers.cfg: Remove I2O configs
 features: Fix dependencies and =m vs =y discrepancies for corei7
 intel-core2-32.cfg: Explicitly disable CONFIG_64BIT
 features: Add 6lowpan feature and add it where necessary

(From OE-Core rev: cd20f6b1f0e20caa5c0aee0263fd9eb21c3566e9)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Ioan-Adrian Ratiu
d1528403d5 kernel-yocto: do_kernel_configme: Fix silent sysroot poisoning error
do_kernel_configme calls merge_config.sh (installed in the sysroot by
the kern-tools-native recipe) which may invoke the compiler to complete
the configuration process.

Depending on the build (and dependencies), this may error due to sysroot poisoning [1].

The errors are similar to:

  make[1]: Entering directory '4.1+gitAUTOINC+a7e53ecc27-r0/linux-x64-standard-build' HOSTCC  scripts/basic/fixdep
  work-shared/x64/kernel-source/scripts/basic/fixdep.c:106:23: fatal error: sys/types.h: No such file or directory
  compilation terminated.
  make[2]: *** [work-shared/x64/kernel-source/scripts/basic/Makefile:22: scripts/basic/x86_64-nilrt-linux-fixdep] Error 1

Adding $TOOLCHAIN_OPTIONS to $CFLAGS before calling merge_configs.sh
fixes the error because $TOOLCHAIN_OPTIONS defines the sysroot and make
uses it to correctly compile & fill all missing kernel config options.

[1] http://lists.openembedded.org/pipermail/openembedded-core/2014-October/098253.html
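
A hedged sketch of the approach (the exact hook point inside
kernel-yocto.bbclass may differ from the real patch):

    do_kernel_configme_prepend() {
        # make the target sysroot visible to compiler calls made by merge_config.sh
        export CFLAGS="${CFLAGS} ${TOOLCHAIN_OPTIONS}"
    }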

(From OE-Core rev: 4b770d62472d1b1a26366de0a1742db240aa5239)

Signed-off-by: Ioan-Adrian Ratiu <adrian.ratiu@ni.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Bruce Ashfield
5ea48fa5ee kernel-yocto: test for empty artifacts
With the updated kernel tools, we generate a list of sccs, patches,
configs and BSP definitions as part of the meta data generation.

It is valid if there aren't any of these artifacts found (i.e. you
are just building a branch and a default config), but invoking the
tools with no inputs isn't a good idea.

To avoid this issue, we generate a string based on the artifacts
and skip calling the tools if there's nothing to do.

(From OE-Core rev: 58715183493de1deb90f2ab075048462b4bf6c73)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Bruce Ashfield
942e2afac2 linux-yocto/4.8: add qemuarm device tree specification
4.7+ requires a device tree for the arm versatile family of platforms.
We add the definition to our 4.8 linux-yocto recipes so we can continue
to boot!

(From OE-Core rev: 8c5cf8193441814e46b7e118655b4e622f785ce5)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Richard Purdie
98d57df225 linux-libc-headers: Refresh musl patches against newer kernel headers version
The musl patches need to be updated against the latest kernel version
in order to apply. No functional changes.

(From OE-Core rev: b9dd65b99ecf2ccbac3649cf4449fdba3f25a272)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Bruce Ashfield
ab338be9e3 libc-headers: update to v4.8
Updating the libc-headers to use the 4.8 kernel as the default.

(From OE-Core rev: 253bf0332bd979b9fd9cf6fdc44682892f0bacf7)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Bruce Ashfield
5cb0f38325 linux-yocto-dev: bump to v4.8+
(From OE-Core rev: 2624fc485f4c0d72ba10f2e3e0257a7fc1960807)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Bruce Ashfield
6298912335 perf: adapt to Makefile.config
commit 4842576cd857 [perf tools: Move config/Makefile into Makefile.config]
relocated the configuration Makefile of perf. As such, we need to adapt
our fixup routines to work with the Makefile no matter where it is.

(From OE-Core rev: 573d584ff704025387782e35ed344e73294d6d0a)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Bruce Ashfield
4d1a124289 linux-yocto: introduce v4.8 recipes
(From OE-Core rev: 3585c71dc575dd28a1e2655efc967dd4d6086a37)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-05 11:56:01 +01:00
Ed Bartosh
9428b19a7d toaster: fire TaskArtifacts event
Fire the TaskArtifact MetaData event for deployment tasks when a task is
either completed or skipped. The event contains the full task id
(recipe+task) and the list of deployment artifacts from the sstate
manifest.

This should allow Toaster to always get notified about deployment
artifacts produced by the build.

[YOCTO #9869]

(From OE-Core rev: 9b08503eabf78bc1b114416523b41dcce3449f58)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-04 00:07:29 +01:00
Richard Purdie
25c46772a8 buildtools-tarball/uninative-tarball: Fix for working with populate_sdk under sstate control
Firstly, these recipes are not target (MACHINE) specific, so they should
be SDK_ARCH based, not PACKAGE_ARCH.

Also fix the use of SDK_DEPLOY -> SDKDEPLOYDIR after other recent changes.

Together these fixes avoid various build failures and ensure the tarballs
only get built once rather than multiple times.

(From OE-Core rev: 894c9b6ded702897ae4084ef75959cdc8cc6f7a3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-04 00:07:29 +01:00
Richard Purdie
e1de696674 populate_sdk_ext: Put populate_sdk_ext under sstate control
Adding the populate_sdk task to SSTATE_TASKS makes the sstate machinery
generate a manifest for the deployed ext SDK artifacts and do the final
deployment to SDK_DEPLOY.

This is done in a similar way to do_populate_sdk in a previous patch.

(From OE-Core rev: ea3587e626a184c53dc0f484d1a0299b2b00641d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-04 00:07:29 +01:00
Ed Bartosh
3c3962d27e populate_sdk_base: Put populate_sdk under sstate control
Adding the populate_sdk task to SSTATE_TASKS makes the sstate machinery
generate a manifest for the deployed SDK artifacts and do the final
deployment to SDK_DEPLOY.

Set the stamp-extra-info flag for the do_populate_sdk task. This flag is
used in the name of the sstate manifest. Setting it to a predetermined
value for the populate_sdk task helps to get correct manifest filenames
when processing runQueueTask events.

The do_populate_sdk function is also executed by do_populate_sdk_ext
so in order to avoid conflicts with the sstate postfuncs, split
the main code into a separate function.

We also need to set SDKDEPLOYDIR as do_populate_sdk_ext expects
it in order not to break ESDK generation.
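
A hedged sketch of the wiring, modelled on how other deploy-style tasks are
put under sstate control (the directories and the stamp-extra-info value
shown here are assumptions, not the exact patch):

    SSTATE_TASKS += "do_populate_sdk"
    do_populate_sdk[sstate-inputdirs] = "${SDKDEPLOYDIR}"
    do_populate_sdk[sstate-outputdirs] = "${SDK_DEPLOY}"
    do_populate_sdk[stamp-extra-info] = "${MACHINE}"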

(From OE-Core rev: 8361376b8ef0147276c9ee31349e904d86900593)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-04 00:07:29 +01:00
Richard Purdie
bc31120ec6 sstate: Avoid duplicate README file errors for sdk under sstate control
(From OE-Core rev: 4bd3a90c8fb034b4d899d0560d75d81f56e27e0a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-04 00:07:29 +01:00
Ed Bartosh
def348533a image.bbclass: Put image_complete under sstate control
Adding the image_complete task makes the sstate machinery generate a
manifest for deployed images and do the final deployment to
DEPLOY_DIR_IMAGE.

Made sure IMGDEPLOYDIR doesn't contain images from past deployments, to
prevent them from being included in sstate manifests.

Set the stamp-extra-info flag for the do_image_complete task. This flag
is used in the name of the sstate manifest. Setting it to a predetermined
value for image_complete helps to get correct manifest filenames when
processing runQueueTask events.

(From OE-Core rev: d54339d4b1a7e884de636f6325ca60409ebd95ff)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-04 00:07:28 +01:00
Richard Purdie
5f9889edb3 populate_sdk_base: Deploy images to SDKDEPLOYDIR
Changed the deployment directory from DEPLOY_DIR_IMAGE to
SDKDEPLOYDIR so that the sstate machinery does the final deployment and
generates the manifest.

(From OE-Core rev: 1c8c8d8a0e2c73b3bb8a9a222bf5e8aa9927e526)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-04 00:07:28 +01:00
Ed Bartosh
9cc4492732 image: Deploy images to IMGDEPLOYDIR
Changed the deployment directory from DEPLOY_DIR_IMAGE to
IMGDEPLOYDIR so that the sstate machinery does the final deployment and
generates the manifest.

Renamed variable deploy_dir to deploy_dir_image in selftest code
to avoid confusion with DEPLOYDIR variable.

Updated the code of rootfs.py:Rootfs class to use IMGDEPLOYDIR variable
as it's now used as a new deployment destination.

(From OE-Core rev: 6d969bacc718e21a5246d4da9bf9639dcae29b02)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-04 00:07:28 +01:00
Ed Bartosh
619d2996fb image/populate_sdk_base: Add *DEPLOYDIR variables
This is a preparation for changing the deployment directory for the image
and populate_sdk targets.

Introduced new variables, IMGDEPLOYDIR and SDKDEPLOYDIR, and set them to the
current image/SDK deployment locations.

(From OE-Core rev: 8969b885044eb46dba3dbf62a0243aef673443d3)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-04 00:07:28 +01:00
Alexander Kanavin
51afd4515f arch-mips.inc: Disable QEMU usermode usage when building with n32 ABI
QEMU usermode doesn't support n32 binaries, erroring with "Invalid
ELF image for this architecture".

(From OE-Core rev: 66aa39a959bd41f7063fe64a9225eb9fd6c3293b)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:54 +01:00
Dengke Du
08acf58572 busybox: fix "sed n (flushes pattern space, terminates early)" testcase failure
It is a known busybox upstream bug. When the busybox sed sub-command 'n'
hits the file's EOF, it prints an extra character that has already been
printed, but GNU sed would not print it.

In busybox source code ../editors/sed.c
------------------------------------------------------------------------
    case 'n':
        if (!G.be_quiet)
                sed_puts(pattern_space, last_gets_char);
            if (next_line) {
                    free(pattern_space);
                    pattern_space = next_line;
                    last_gets_char = next_gets_char;
                    next_line = get_next_line(&next_gets_char, &last_puts_char, last_gets_char);
                    substituted = 0;
                    linenum++;
                    break;
            }
            /* fall through */

    /* Quit.  End of script, end of input. */
    case 'q':
        /* Exit the outer while loop */
            free(next_line);
            next_line = NULL;
            goto discard_commands;
------------------------------------------------------------------------
When reading at the end of the file, 'next_line' is null, so execution goes
to "case 'q'" and jumps to discard_commands, which prints the old pattern
space that has already been printed.

So, in order to comply with GNU sed, in case 'n' I added an "else" at the
end of the second "if" ("goto again;") for when next_line is null, and sent
the patch to busybox upstream; the busybox maintainer adopted it with a few
small changes. It can be seen at:

His reply:

	http://lists.busybox.net/pipermail/busybox/2016-September/084613.html

The new patch on busybox master branch:

	https://git.busybox.net/busybox/commit/?id=76d72376e0244a5cafd4880cdc623e37d86a75e4

(From OE-Core rev: 5a680c267454d7c135c4bfe4e551a780f38a5087)

Signed-off-by: Dengke Du <dengke.du@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:54 +01:00
Mattias Waldo
104eb5794a kernel.bbclass: include signing keys when copying files required for module builds
The absence of signing_key.* in $kerneldir made signing of
out-of-tree kernel modules fail (silently). Add copying of these
files during the shared_workdir task.

(From OE-Core rev: 7aadc91b5ef86a89a827d59bd19e7b8272a5dd66)

Signed-off-by: Mattias Waldo <mattias.waldo@saabgroup.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:54 +01:00
Francisco Pedraza
0e38688bca oeqa/selftest Adds eSDK test cases to devtool verification.
The covered functions are: installing libraries and headers, and image
generation with binary feeds.

(From OE-Core rev: 994f8a41a16d0b82a1f7dfbcbbcc1df08225b14e)

Signed-off-by: Francisco Pedraza <francisco.j.pedraza.gonzalez@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:54 +01:00
Francisco Pedraza
1da953d631 /oeqa/sdkext Adds verification for devtool on eSDK.
The covered functions are: build with make, build an esdk package, build with
cmake, extend autotools recipe creation, kernel modules, and
node.js installation and recipe creation.

(From OE-Core rev: 574a5d4cf3e79815aecc4d198545119d3bbfb023)

Signed-off-by: Francisco Pedraza <francisco.j.pedraza.gonzalez@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:54 +01:00
Paul Eggleton
e616beba1c scripts: ensure tinfoil is shut down correctly
We should always shut down tinfoil when we're finished with it, either
by explicitly calling the shutdown() method or by using it as a
context manager ("with ...").

(From OE-Core rev: 5ec6d9ef309b841cdcbf1d14ac678d106d5d888a)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:54 +01:00
Juro Bystricky
f2854c67ce gcc-runtime.inc: add CPP support for mips64-n32 tune
This patch fixes the problem where the CPP compiler cannot find include files.
The compiler is configured to look for the files in places that do not exist.
When querying the CPP for search paths, we observe messages such as these:

multilib configuration:

MACHINE="qemumips64"
require conf/multilib.conf
MULTILIBS = "multilib:lib64 multilib:lib32"
DEFAULTTUNE = "mips64-n32"
DEFAULTTUNE_virtclass-multilib-lib64 = "mips64"
DEFAULTTUNE_virtclass-multilib-lib32 = "mips32r2"

ignoring nonexistent directory "<path>/sysroots/mips64-n32-poky-linux-gnun32/usr/include/c++/6.2.0/mips64-poky-linux/32

single lib configuration:
MACHINE="qemumips64"
DEFAULTTUNE = "mips64-n32"
ignoring nonexistent directory "<path>/sysroots/mips64-n32-poky-linux-gnun32/usr/include/c++/6.2.0/mips64-poky-linux/

To fix this, create a symlink of the name CPP expects and point it to the corresponding "gnun32" directory.

[YOCTO #10142]

(From OE-Core rev: 55115f90f909d27599c686852e73df321ad1edff)

Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Stefan Müller-Klieser
c93ee72733 kernel.bbclass: add user output to savedefconfig
In a similar manner to diffconfig, tell the bitbake user where the
defconfig will be saved to.

(From OE-Core rev: 8e4cefb093e0df9660e2a6215cfe21c6c779c23f)

Signed-off-by: Stefan Müller-Klieser <s.mueller-klieser@phytec.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Andrew Goodbody
34d0cf3364 Fix out of tree builds of u-boot with gold linker
Need to reference the config.mk file in the source tree, which is no longer
the current directory when using out-of-tree builds.

(From OE-Core rev: 32ba805e4ffbfcb17380ed6b5164e5b25a62f330)

Signed-off-by: Andrew Goodbody <andrew.goodbody@cambrionix.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Ed Bartosh
180eebf976 sstate.bbclass: skip packaging if SSTATE_SKIP_CREATION is set
The SSTATE_SKIP_CREATION variable will be used to skip creation of
sstate .tgz files. This makes sense for image creation tasks, as
tarring images and keeping them in sstate would consume a lot of
disk space.
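
A hedged sketch of how an image class might opt a task out of sstate
archiving with the new variable (the task-specific override shown is an
assumption):

    SSTATE_SKIP_CREATION_task-image-complete = "1"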

(From OE-Core rev: 7e821ccd221916ae8482b9113df2de704f4a99a4)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Mike Looijmans
ea1d3ff7b5 initscripts: Start devpts at 06 instead of 38
For example bootlogd needs devpts to be running, but bootlogd starts
at 07. Starting bootlogd early makes perfect sense, so the best option
here is to move devpts up to 06 to prevent this error message at boot:
cannot allocate pseudo tty: No such file or directory

Systems that have CONFIG_LEGACY_PTYS in the kernel will not see this
message. Since it is called "LEGACY" for a reason, fixing this in
userspace appears to be the better option here.

The devpts script does not need anything except a mounted "/dev" which
has been arranged in "S02sysfs.sh" already.

(From OE-Core rev: 4cb06256e0d13f3f5d0b280853b900d7d342b7f2)

Signed-off-by: Mike Looijmans <mike.looijmans@topic.nl>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
eda43ca995 lighttpd: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.
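
The usual OE idiom for this, shown as a minimal sketch (the exact configure
flags vary per recipe), is:

    PACKAGECONFIG ?= "${@bb.utils.contains('DISTRO_FEATURES', 'ipv6', 'ipv6', '', d)}"
    PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"

The same pattern applies to the other ipv6 PACKAGECONFIG commits below.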

(From OE-Core rev: d7b2afd41d650e30a4a1fc453cae3ab060a7da57)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
beda03bfa5 xhost: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.

(From OE-Core rev: 87498d742ffaa1e2307ac802e508c8572253a568)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
c8f383ab3a xauth: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.

(From OE-Core rev: 35d03493ff18c15b37149850287f1e3bc0af6419)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
c0ea54db73 wget: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES instead of it being unconditionally enabled.

(From OE-Core rev: ab699155f2fa6f19b4020e7d1c2097e867d9e977)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
4d0ab9cd8e rsync: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.

(From OE-Core rev: 0a6d496d31383682dbe842b681dc148de1c3158d)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
cc3c028b64 rsync: use rsync.inc to avoid duplicated codes
There are two versions of rsync, but rsync.inc is
only used by 3.x and there is duplicated code in 2.x,
so this commit includes these changes:

* remove the duplicated code in 2.x and require the inc file
* move the LICENSE from the inc file to each bb

(From OE-Core rev: 6817b6e02c2c042aa883fb4a359871c4b966ec4b)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
7e157da949 pulseaudio: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.

(From OE-Core rev: de6b65a85cb3c3efa7a46b9fd9e1831ff6448c0c)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
d529fe5ca1 psmisc: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.

(From OE-Core rev: a597000cb66163b7d75c578bfa1e6879229bad58)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
ae139c6c79 nspr: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.

(From OE-Core rev: b7e045d0cb3d06b9e197ec985fc82c373f006d5c)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
0dbd6e45b8 nfs-utils: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.

(From OE-Core rev: b72d04985a6e0dba8ab44b6eb55b62914266645c)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
d4d244157b libxmu: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.

(From OE-Core rev: b5b612104cd4f554a9cc9216dc43e7a2710df95f)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
75f86b449a libxml2: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.

(From OE-Core rev: 1a505037e9a6dc86b523b378d6446baae71f1a2c)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
68e3f3f104 libxfont: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.

(From OE-Core rev: 3ec45c648c5c5a690d6d4102f8d65c97c8ff84e9)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
c1716f6aeb libsm: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.

(From OE-Core rev: b7ed9b13492b09f7197fc095f8965f62411d9982)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
33664673f5 libpcap: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.

(From OE-Core rev: cfa74a2d4f158601a35b96e235484dac14cbf4d5)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
92ce5feca8 libice: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.

(From OE-Core rev: f109e4078b97debd5df253bb186beca462c609d1)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:53 +01:00
Jackie Huang
2e478ff220 apr: control ipv6 support based on DISTRO_FEATURES
Add PACKAGECONFIG for ipv6 and control it based
on DISTRO_FEATURES.

(From OE-Core rev: 91d29c5555557fb0637c886f76c859d704ecd980)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:52 +01:00
Paul Eggleton
4a5aa7ea4d scripts/contrib: update scripts for changes to internal API
The multiconfig changes altered some of the functions being called here,
so update the calls. Make use of the new Tinfoil.parse_recipe_file()
function to make parsing easier.

(From OE-Core rev: 95b6ceffd947271f315d8a7660797ab371adfbb9)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:52 +01:00
Jussi Kukkonen
1100af93cb base-files: Add shell test quoting
tty can return "not a tt" which results in warnings when /etc/profile
is executed.

(From OE-Core rev: eed586dd238efe859442b21b425f04e262bcdb2b)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:52 +01:00
Jackie Huang
c95af9cef9 meta-ide-support: inherit nopackages
The recipe generates an environment script in
do_populate_ide_support for use with an IDE and it
doesn't generate packages at all, so inherit nopackages.

(From OE-Core rev: 68e06f1782253d1b9c8d8c4d818bc4915b93d257)

Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:52 +01:00
Andreas Müller
bacb105e32 flex: fix gcc-6 failure
Gcc-6 does not allow C++ comments within C code. Files generated by flex
can fail with:

| error: C++ style comments are not allowed in ISO C90
| num_to_alloc = 1; // After all that talk, this was set to 1 anyways...

(From OE-Core rev: 6336c5bafe617e775037d5243d4bb5e236e74679)

Signed-off-by: Andreas Müller <schnitzeltony@googlemail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:52 +01:00
Nicolas Dechesne
fef13d890c gstreamer1.0-plugins-bad: add packageconfig for egl
In commit 9c3a94aea1d (gstreamer1.0-plugins-bad: Move EGL requirement for
Wayland), --enable-egl was explicitly added to the wayland packageconfig. While
it is correct that enabling wayland requires egl, it should be possible to
enable egl without wayland, even when using X11. For example, glimagesink can be
used for GPU based color conversion using EGL/GLES.

As such, let's make egl and wayland two separate PACKAGECONFIG flags.

(From OE-Core rev: c1ab87caae92a58b1dfab7abc1a856fab102e3ed)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:52 +01:00
Paul Eggleton
4b4387455c lib/oe/patch: commit with a dummy user/email when PATCHTOOL=git
When using PATCHTOOL = "git", the user of the system is not really the
committer - it's the build system itself. Thus, specify "dummy" values
for username and email instead of using the user's configured values.
Various parts of the devtool code that need to make commits have also
been updated to use the same logic.

This allows PATCHTOOL = "git" and devtool to be used on systems where
git user.name / user.email has not been set (on versions of git where
it doesn't default a value under this circumstance).

If you want to return to the old behaviour where the externally
configured user name / email are used, set the following in your
local.conf:

PATCH_GIT_USER_NAME = ""
PATCH_GIT_USER_EMAIL = ""

Fixes [YOCTO #8703].

(From OE-Core rev: 765a9017eaf77ea3204fb10afb8181629680bd82)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:52 +01:00
Paul Eggleton
e5f61f85c5 oe-selftest: devtool: fix test after recent change
OE-Core commit d3057cba0b01484712fcee3c52373c143608a436 fixed handling
of wildcard bbappends, which means that this test's expectations about
the bbappend file name are no longer met. devtool finish is meant to use
wildcard bbappends so fix the test accordingly.

(From OE-Core rev: 21603566e4a2e709dcb4a940b49d870c91c822be)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:52 +01:00
Jagadeesh Krishnanjanappa
bf1954c700 glibc-scripts: add RDEPENDS on libsotruss package required by sotruss script
It solves the following error observed on a qemux86 target:
root@qemux86:~# sotruss ./hello
ERROR: ld.so: object '/usr/$LIB/audit/sotruss-lib.so' cannot be loaded as audit
interface: cannot open shared object file; ignored.
Hello World
root@qemux86:~#

With this change, we get:
root@qemux86:~# sotruss ./hello
          hello -> libc.so.6      :*__libc_start_main(0x8048300, 0x1,
0xbfc86274)
          hello -> libc.so.6      :*puts(0x804851c, 0xb74af000, 0x0)
Hello World
root@qemux86:~#

(From OE-Core rev: aa2d2161c5b41358f732e95199f0c066d4e2d77a)

Signed-off-by: Jagadeesh Krishnanjanappa <jkrishnanjanappa@mvista.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:52 +01:00
Alexander Kanavin
71ffc92036 webkitgtk: fix racy double build of WebKit2-4.0.gir
This occasionally triggered autobuilder errors where the .gir file
appeared truncated to introspection tools.

(From OE-Core rev: 2154c1c803b7bd36a1401fa657e7fd8cb1060a70)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:52 +01:00
Alexander Kanavin
390d95991d webkitgtk: upgrade to 2.12.4
(From OE-Core rev: 94493f1a6e8d1fbd1fa78053f5ead3d0e363d184)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:52 +01:00
Alexander Kanavin
c25d5baa6e asciidoc: fix upstream version check
(From OE-Core rev: 88d18dc0a838b56e5b663320100184c381076cc4)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 23:45:52 +01:00
Joshua Lock
1429d9d463 oeqa.selftest.liboe: add test for xattr in copytree
Add a test to ensure that oe.path.copytree() preserves extended
attributes on files.

(From OE-Core rev: 2b047b8e3218f95978e41fee13635bff9af03dd6)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:42 +01:00
Joshua Lock
92f9308e2c oe.path: preserve xattr in copytree() and copyhardlinktree()
Pass appropriate options to tar invocations in copytree() and
copyhardlinktree() to ensure that any extended attributes on the files
are preserved during the copy.

We have to drop the use of cpio in "Copy-pass" mode in copyhardlinktree()
because cpio doesn't support extended attributes on files. Instead we
revert back to using cp with different patterns depending on whether
or not the directory contains dot files.

(From OE-Core rev: e591d69103a40ec4f76d1132a6039d9cb1555103)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:42 +01:00
Joshua Lock
97677c11cf oeqa.selftest: add a test for oe.path.copytree()
One motivation for the use of cpio in oe.path.copytree() was to
ensure that files with spaces in their names were copied. Add a new
unittest module to test the OE module, with a test case for copytree
with spaces in a filename.

(From OE-Core rev: a408f8310d9426db4439cf8db0cf49f9bfe90b3b)

Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:42 +01:00
Mingli Yu
74d0a3dad1 ltp: remove useless script STPfailure_report.pl
* Remove the useless script STPfailure_report.pl to
  avoid confusion about this script failing to run,
  as it lacks a dependency on some Perl modules such
  as LWP::Simple

  - The script STPfailure_report.pl was previously
    added as a tool to analyze failures from LTP
    runs on the OSDL's Scaleable Test Platform (STP) as below:

    commit f0573facbbbf14798cc5b7d4653a5e46b4b95fa5
    Author: robbiew <robbiew>
    Date: Wed Apr 28 19:21:39 2004 +0000

    Added tool for analyzing failures from LTP runs on
    the OSDL's Scaleable Test Platform (STP)

  - And the script STPfailure_report.pl mainly accesses
    http://khack.osdl.org to retrieve LTP test results
    run on OSDL's Scaleable Test Platform (STP) and prints
    the reports. The website http://khack.osdl.org is no
    longer accessible, so the script is useless; drop it
    and do not ship it on the target system

(From OE-Core rev: ba6d01d432dd8244be6ac2b351477b771d5db308)

Signed-off-by: Mingli Yu <Mingli.Yu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:42 +01:00
Markus Lehtonen
7d77c02401 oeqa.buildperf: include commands log file name in results.json
(From OE-Core rev: b22a71cf3a53a33763ff02608119d2c73cbde006)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:42 +01:00
Markus Lehtonen
c39db4bc45 oeqa.buildperf: include buildstats file name in results.json
No need to do lsdir magic for finding buildstats when reading results.

(From OE-Core rev: 4502f0979bf2e8698bb196345b89b170641fd43f)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:42 +01:00
Markus Lehtonen
c5d1301245 oeqa.buildperf: show skipped tests in results, too
(From OE-Core rev: 4112779f9f314148b475fc4b8e33146de8be6b27)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:42 +01:00
Markus Lehtonen
81b8ccc1f6 oeqa.buildperf: convert buildstats into json format
Instead of archiving buildstats in raw text file format, convert all
buildstats into one json-formatted file. Some redundant information,
i.e. the 'Event:', 'utime:', 'stime:', 'cutime:' and 'cstime:' fields,
is dropped.
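
A rough sketch of the conversion, assuming the buildstats are laid out
as <buildstats>/<recipe>/<task> plain-text files of 'key: value' lines
(the layout and helper below are illustrative, not the oeqa.buildperf
code itself):

    import json
    import os

    # Fields named above that are not carried over into the JSON output
    DROP_FIELDS = ('Event', 'utime', 'stime', 'cutime', 'cstime')

    def buildstats_to_json(bs_dir, out_file):
        """Collect per-task buildstats files into a single JSON document."""
        data = {}
        for recipe in sorted(os.listdir(bs_dir)):
            recipe_dir = os.path.join(bs_dir, recipe)
            if not os.path.isdir(recipe_dir):
                continue
            data[recipe] = {}
            for task in sorted(os.listdir(recipe_dir)):
                fields = {}
                with open(os.path.join(recipe_dir, task)) as f:
                    for line in f:
                        key, sep, val = line.partition(':')
                        if sep and key.strip() not in DROP_FIELDS:
                            fields[key.strip()] = val.strip()
                data[recipe][task] = fields
        with open(out_file, 'w') as f:
            json.dump(data, f, indent=2)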

(From OE-Core rev: efcf74b194f2a40eb3e6359dd41386db3eb25287)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:42 +01:00
Markus Lehtonen
f1fb013d48 oeqa.buildperf: measure io stat
Add data from /proc/<pid>/io to system resource measurements.
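
For reference, /proc/<pid>/io exposes I/O counters as 'name: value'
lines; a minimal reader (not necessarily the oeqa.buildperf code) could
look like:

    import os

    def read_proc_io(pid):
        """Parse /proc/<pid>/io into a dict of integer counters."""
        iostat = {}
        with open('/proc/%d/io' % pid) as f:
            for line in f:
                key, val = line.split(':', 1)
                iostat[key.strip()] = int(val.strip())
        return iostat

    # e.g. rchar, wchar, syscr, syscw, read_bytes, write_bytes, ...
    print(read_proc_io(os.getpid()))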

(From OE-Core rev: e69a46a77854fac1169a09e0c5b70fa4b972255a)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:42 +01:00
Markus Lehtonen
0b332039ea oeqa.buildperf: don't use Gnu time
Use Python standard library functionality instead of the time utility
for measuring elapsed (wall clock) time of commands. The time.* log
files are also ditched. However, the same detailed resource usage data,
previously found in time.* logs is now provided in results.json file.
This data is collected through the resource module of Python.
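
A minimal sketch of the approach, assuming the measured command runs as
a child process: wall-clock time comes from the standard library clock
and the detailed usage from resource.getrusage() on reaped children:

    import resource
    import subprocess
    import time

    def run_measured(cmd):
        """Run cmd and return (elapsed seconds, rusage of child processes)."""
        start = time.time()
        subprocess.check_call(cmd)
        elapsed = time.time() - start
        # Cumulative usage of terminated children: utime, stime, maxrss, ...
        rusage = resource.getrusage(resource.RUSAGE_CHILDREN)
        return elapsed, rusage

    elapsed, usage = run_measured(['true'])
    print('%.2fs wall, %.2fs user, %.2fs sys'
          % (elapsed, usage.ru_utime, usage.ru_stime))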

(From OE-Core rev: d5ad818dd501b18379aa3540bffa9b920d7c3bab)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:42 +01:00
Markus Lehtonen
33a38bc18a oeqa.buildperf: rename buildstats directories
Change the directory name from 'buildstats-<test_name>' to just
'buildstats'. However, this patch adds the possibility to label the
buildstats directory name with a postfix, which makes it possible to
save multiple buildstats per test, for example.

(From OE-Core rev: 8997556040b2e7bfcfa6a75d4d97eb2e32207217)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:41 +01:00
Markus Lehtonen
6d75f39f09 oeqa.buildperf: separate output dir for each test
Store the output data of each test in an individual subdirectory instead
of storing everything in the root output directory.

(From OE-Core rev: 64ff34df96aa9a74dd4303f76ec711aa5e9d5030)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:41 +01:00
Markus Lehtonen
700ebe996a oeqa.buildperf: strip date from buildstats directory path
Archive buildstats in a directory like 'buildstats' instead of something
like 'buildstats/20160513120000'.

(From OE-Core rev: 95138cdc70bb7f9b7ab74e1d83305f009790dccc)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:41 +01:00
Markus Lehtonen
35ae939e41 oe-build-perf-test: rename log file and implement --log-file
Rename the (main) log file of the oe-build-perf-test script from
'output.log' to 'oe-build-perf-test.log'. Also, add a new command line
option --log-file which makes it possible to use an alternative log file
name/path, if needed.  Note that the file name/path is relative to the
output directory.

(From OE-Core rev: 4909fae1a6d1d068b33252088b41b8d82d1a836c)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:41 +01:00
Markus Lehtonen
e8c47a6343 oeqa.buildperf: enable json-formatted results
Automatically create a json-formatted file (results.json) in the results
directory that contains results from all tests.

(From OE-Core rev: 6df3263531a41805b2280bb999cb4a73f9f91eae)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:41 +01:00
Markus Lehtonen
44188933ce oeqa.buildperf: add 'product' to test result data
This defaults to 'oe-core' but can be defined using the
OE_BUILDPERF_PRODUCT environment variable.

(From OE-Core rev: a22cc3e04001be5d11bd85dbdceb7088cae7c735)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:41 +01:00
Markus Lehtonen
e16f00862f oe-build-perf-test: update globalres and git even if tests failed
Write globalres log file and commit results to Git even if some tests
failed. Now that tests do not depend on each other there should be no
risk of bogus results caused by test failures.

(From OE-Core rev: 8036975b268fe209476e230555006facd3cbda71)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:41 +01:00
Markus Lehtonen
899b17413c oeqa.buildperf: treat failed measurements as errors
Now failed measurements correctly cause a test failure (recorded as an
error). There should be no need to continue the test if one step fails,
especially now that the tests don't depend on each other.

(From OE-Core rev: 446e32aadc775ca146d12173b1463f524d7fe6ef)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:41 +01:00
Markus Lehtonen
85b7b10b4a oeqa.buildperf: make tests independent
Add test set-up functionality so that the individual tests do not depend
on each other. This should make sure that a failure in one test does not
affect the results of another test. The patch also makes it reasonable
to run only a subset of the tests by using the --run-tests option.

The increase in total execution time of the full suite - caused by the
additional set-up steps - is insignificant because normally no
additional tasks need to be run. The previous test has already done all
set-up work.

(From OE-Core rev: 69b3c63e32d09ea4a41b21daacdff6bf1fc447c1)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:41 +01:00
Markus Lehtonen
6722b0412c oeqa.buildperf: fix checking of invalid results
The test status check done when writing globalres log was incorrect.

(From OE-Core rev: 3efbd49fd80d2b349a8fd44dbcd509168dbc1061)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:41 +01:00
Stefan Müller-Klieser
40d8bef683 x264: remove EXTRA_OEMAKE workaround
The default of EXTRA_OEMAKE is already empty since commit:

OE-Core rev: aeb653861a0ec39ea7a014c0622980edcbf653fa
bitbake.conf: Remove unhelpful default value for EXTRA_OEMAKE

(From OE-Core rev: 408b1f1879e4b90c90f6d139b08d2b6f8e555655)

Signed-off-by: Stefan Müller-Klieser <s.mueller-klieser@phytec.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:40 +01:00
Stefan Müller-Klieser
b8b1edbe6c systemtap: remove EXTRA_OEMAKE workaround
The default of EXTRA_OEMAKE is already empty since commit:

OE-Core rev: aeb653861a0ec39ea7a014c0622980edcbf653fa
bitbake.conf: Remove unhelpful default value for EXTRA_OEMAKE

(From OE-Core rev: 4c1d679a0fd601ba37ab37b11f660cc41d8507ff)

Signed-off-by: Stefan Müller-Klieser <s.mueller-klieser@phytec.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:40 +01:00
Stefan Müller-Klieser
06415ab073 linux-libc-headers: remove EXTRA_OEMAKE workaround
The default of EXTRA_OEMAKE is already empty since commit:

OE-Core rev: aeb653861a0ec39ea7a014c0622980edcbf653fa
bitbake.conf: Remove unhelpful default value for EXTRA_OEMAKE

(From OE-Core rev: c9dd7ebb89eb4ffc9e51ef0dca8accb617459dfe)

Signed-off-by: Stefan Müller-Klieser <s.mueller-klieser@phytec.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:40 +01:00
Stefan Müller-Klieser
2044c88adb lsof: remove EXTRA_OEMAKE workaround
The default of EXTRA_OEMAKE is already empty since commit:

OE-Core rev: aeb653861a0ec39ea7a014c0622980edcbf653fa
bitbake.conf: Remove unhelpful default value for EXTRA_OEMAKE

(From OE-Core rev: b8aa0d9b5bb9d0fc53e3f065eac7f1cfac83b6ac)

Signed-off-by: Stefan Müller-Klieser <s.mueller-klieser@phytec.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:40 +01:00
Stefan Müller-Klieser
8ddb47b69a musl: remove EXTRA_OEMAKE workaround
The default of EXTRA_OEMAKE is already empty since commit:

OE-Core rev: aeb653861a0ec39ea7a014c0622980edcbf653fa
bitbake.conf: Remove unhelpful default value for EXTRA_OEMAKE

(From OE-Core rev: ceb58f3c24f957982a80ea56e9b6fcef53dd8949)

Signed-off-by: Stefan Müller-Klieser <s.mueller-klieser@phytec.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:40 +01:00
Stefan Müller-Klieser
270d6dd4bc ifupdown: remove EXTRA_OEMAKE workaround
The default of EXTRA_OEMAKE is already empty since commit:

OE-Core rev: aeb653861a0ec39ea7a014c0622980edcbf653fa
bitbake.conf: Remove unhelpful default value for EXTRA_OEMAKE

(From OE-Core rev: f37523e2d9ddf523da12aa962cf8fbe21a355d67)

Signed-off-by: Stefan Müller-Klieser <s.mueller-klieser@phytec.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:40 +01:00
Stefan Müller-Klieser
a2670159f4 kernel.bbclass: remove EXTRA_OEMAKE workaround
The default of EXTRA_OEMAKE is already empty since commit:

OE-Core rev: aeb653861a0ec39ea7a014c0622980edcbf653fa
bitbake.conf: Remove unhelpful default value for EXTRA_OEMAKE

(From OE-Core rev: de720a8b10de17e613a8fb20d8df2af0b84507d7)

Signed-off-by: Stefan Müller-Klieser <s.mueller-klieser@phytec.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:40 +01:00
Stefan Müller-Klieser
d5b602c254 distutils-common-base.bbclass: remove EXTRA_OEMAKE workaround
The default of EXTRA_OEMAKE is already empty since commit:

OE-Core rev: aeb653861a0ec39ea7a014c0622980edcbf653fa
bitbake.conf: Remove unhelpful default value for EXTRA_OEMAKE

(From OE-Core rev: 641ab36095eb72898ec808e655014bbc5900eb95)

Signed-off-by: Stefan Müller-Klieser <s.mueller-klieser@phytec.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:40 +01:00
Stefan Müller-Klieser
734a49baef autotools.bbclass: remove EXTRA_OEMAKE workaround
The default of EXTRA_OEMAKE is already empty since commit:

OE-Core rev: aeb653861a0ec39ea7a014c0622980edcbf653fa
bitbake.conf: Remove unhelpful default value for EXTRA_OEMAKE

(From OE-Core rev: 4fca6c95895d7d17cdfb637d383b28ee939fbd99)

Signed-off-by: Stefan Müller-Klieser <s.mueller-klieser@phytec.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:40 +01:00
Richard Purdie
5686645d7b lttng-modules: Update 2.7.3 -> 2.8.0+master
We need master for the changes to work with 4.8 kernels.

(From OE-Core rev: ab883b74634b8fa0c179b2c42b1503fa78fcc06f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:39 +01:00
Richard Purdie
c0155227ec lttng-tools: Add PACKAGECONFIG for manpages
(From OE-Core rev: 1ddae1c3a58931bbf348fd6fd912f0cd30598585)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:39 +01:00
Richard Purdie
e0acc0b00b lttng-tools: Update 2.7.1 -> 2.8.1
Drop backported patch.
Update ust configure option.
Update location of xml m4 file.

(From OE-Core rev: ea0375c5a38a761d296f5e20c95450c2df90fe39)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:39 +01:00
Richard Purdie
8f0ca9c0e3 lttng-ust: Update 2.7.1 -> 2.8.1
Drop aarch64_be patch which is now upstream.
Update doc patch to apply to latest version.
Disable man generation in configure options to match docs patch (for now).

(From OE-Core rev: 338320be00101cb182c8ccdad162076e7c3d3dbc)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:39 +01:00
Alexander Kanavin
b0e728871e libyaml: update to 0.1.7
Drop backported libyaml-CVE-2014-9130.patch

(From OE-Core rev: 2dfdf483e9de5bcb24149f619b0c7fc466221204)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:39 +01:00
Alexander Kanavin
c32ce5929f ffmpeg: update to 3.1.3
(From OE-Core rev: ff6a73adf306cb80edae9d6025dcb62b9e4fa241)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:39 +01:00
Alexander Kanavin
acc113c6ca iso-codes: update to 3.70
(From OE-Core rev: 2c1f16ed94c82bd9e46f4c7dfc34fc9cd9edb5d5)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:39 +01:00
Maxin B. John
0d755c6b61 gstreamer1.0: upgrade to 1.8.3
1.8.2 -> 1.8.3

Remove backported patch from 1.8.3:
        0007-glplugin-gleffects-fix-little-rectangel-appears-at-t.patch

(From OE-Core rev: 0190736ef89447b81ab9a95e83ec205c5c1f4618)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:39 +01:00
Maxin B. John
bc42617fff sqlite3: upgrade to 3.14.1
(From OE-Core rev: 6858df73073d32f6301b2302ae563670e32db134)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:39 +01:00
Martin Jansa
8dcb8da9e3 base, autotools: Append PACKAGECONFIG_CONFARGS to EXTRA_OECONF only in autotools.bbclass
* recipes which don't inherit autotools or cmake bbclass and want to
  use the configure options from PACKAGECONFIG need to handle
  PACKAGECONFIG_CONFARGS themselves.

(From OE-Core rev: c98fb5f5129e71829ffab4449b3d28082bc95ab4)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:39 +01:00
Edwin Plauchu
f7d80257af unzip: fixes strange output
This fixes commit 763a3d424b.

The output was strange when using unzip to extract a zip file;
this patch fixes that.

[YOCTO #9551]

(From OE-Core rev: 30486429ed228e387ee574c6990b361d2ade6a32)

Signed-off-by: Edwin Plauchu <edwin.plauchu.camacho@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:38 +01:00
Alexander Kanavin
c27e23c123 nss: update to 3.25
(From OE-Core rev: fa11e90f691e4f4eee8a231abfe179b0f4992da9)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:38 +01:00
Alexander Kanavin
b1da4414d8 mpg123: update to 1.23.6
(From OE-Core rev: 7dd246aaacc7128d7c4860438714862af6ac050a)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:38 +01:00
Alexander Kanavin
7bb1907287 lighttpd: update to 1.4.41
Rebase pkgconfig.patch

(From OE-Core rev: 45fac4161cb230bc03c6c08d21cc768e52700f02)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:38 +01:00
Alexander Kanavin
16fae9fffe gobject-introspection: odd versions are development snapshots
(From OE-Core rev: 5c2fcbc42dc85764863771ed62c7415aafb85916)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:38 +01:00
Alexander Kanavin
9587685d1a ffmpeg: update to 3.1.2
(From OE-Core rev: 0aeb601b9e211063aeedec5600354245c0491ff9)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:38 +01:00
Richard Purdie
190895d093 bdwgc: Add missing include to avoid musl build failures
(From OE-Core rev: 33459ffd0b5f3f303bcf8fb4dce817f6d73162a1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:38 +01:00
Alexander Kanavin
bd02a3851c bdwgc: update to 7.6.0
Remove backported NIOS2 patch.
README.QUICK checksum updated; the license part of the file is unchanged.

(From OE-Core rev: ee16cc4ad552502212055af46b3e97a312a13e69)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:38 +01:00
Alexander Kanavin
32ac9347d6 bash-completion: update to 2.4
(From OE-Core rev: 7f23afc08141b48c4adea51820b9ad9a8fa21867)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:38 +01:00
Alexander Kanavin
31eadec93c iso-codes: upgrade to 3.69
(From OE-Core rev: 9663d90f46102a04ff41c36da94140cee0a9ad44)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:37 +01:00
Alexander Kanavin
673c007834 btrfs-tools: update to 4.7.1
(From OE-Core rev: af37bf57b2772851150cbdabf8e1c2db7475930f)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:37 +01:00
Alexander Kanavin
e5b80aba82 libwebp: upgrade to 0.5.1
(From OE-Core rev: c896b61db5c8abe0b96f7c8468cbf1ba2b36f435)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:37 +01:00
Robert Yang
ff0ebe98a1 libnl: fix RREPLACES and RCONFLICTS for libnl-genl
The libnl-genl.rpm provides libnl-genl-3-200 after the following 2 fixes:
libnl: update to v3.2.28
libnl: fix packaging mistakes

$ rpm -qp --provides tmp/deploy/rpm/core2_64/libnl-genl-3-200-3.2.28-r0.4.core2_64.rpm
elf(buildid) = 4e753b2361ba0b02f162244a87cc0680796e46cc
libnl-genl = 3.2.28
libnl-genl-3.so.200()(64bit)
libnl-genl-3.so.200(libnl_3)(64bit)
libnl-genl2
libnl-genl-3-200 = 1:3.2.28-r0.4

Note that libnl-genl2 is introduced by RREPLACES_${PN}-genl = "libnl-genl2".

So we don't need to set libnl-genl-3-200 in RREPLACES and
RCONFLICTS; otherwise it causes do_rootfs errors when installing both
libnl-genl.rpm and lib32-libnl-genl.rpm:

Computing transaction...error: Can't install libnl-genl-3-200-1:3.2.28-r0.0@core2_64: conflicted package libnl-genl-3-200-1:3.2.28-r0.0@lib32_x86 is locked

We didn't hit this error before because there was no libnl-genl.rpm,
only libnl-3-genl.rpm, which doesn't provide libnl-genl-3-200 by default.

Removing libnl-genl-3-200 from RREPLACES and RCONFLICTS fixes the problem.

(From OE-Core rev: a2e9e0bb7a4901f819332df30ec265616e422826)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:37 +01:00
André Draszik
75610e2d0b libnl: backport musl fix (strerror_r / strerror_l)
musl doesn't implement the non-posix compliant,
deprecated, glibc-only special version of strerror_r
that libnl had been using so far.

Backport the patch(set) that switches libnl over to
using strerror_l().

(From OE-Core rev: 3718761dd9bd841c4383b63346c1ff2c81570af6)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:37 +01:00
André Draszik
126c4b244d libnl: update to v3.2.28
See
  http://lists.infradead.org/pipermail/libnl/2016-August/002187.html
  http://lists.infradead.org/pipermail/libnl/2016-August/002200.html

(From OE-Core rev: 448411845e5953d498847e9a8d85d4b68e230c37)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:37 +01:00
André Draszik
d1c566f71e libnl: fix packaging mistakes
- *.la files belong in -dev packages
- the genl-ctrl-list command line utility should go into the CLI
  package, so as to prevent the libnl-genl library package from
  pulling in all of the command line utilities (as genl-ctrl-list
  is linked against libnl-cli-3.so.200)

(From OE-Core rev: 57ddcbde8aad2a2d37619e11a0cd2e9b8d9fb239)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:37 +01:00
Mikko Ylinen
5dcb2a1999 image_types: check COMPRESS_DEPENDS for backwards compatibility
To complete the transition/renaming to chained image type CONVERSION
while maintaining backwards compatibility with COMPRESS(ION), make sure
COMPRESS_DEPENDS is also checked. Without this, the dependencies for legacy
COMPRESSIONTYPES do not get built.

(From OE-Core rev: 12a8ee44f05e21d5814e31cb9e13c9eab236b836)

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-03 09:58:37 +01:00
Christopher Larson
f2f177c94d bitbake: bb.fetch2.svn: correctly pass workdir when fetching
The ud.pkgdir argument was being passed as the 'quiet' argument to
runfetchcmd, not the 'workdir' argument, resulting in fetching the svn module
into the root of DL_DIR, not where it belongs.
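
The pitfall is a positional argument landing in the wrong slot. A
simplified sketch (the signature below only mirrors the general shape
of the fetcher's runfetchcmd helper and is an assumption, not a
verbatim copy):

    def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
        """Simplified stand-in for the fetcher's command runner."""
        print('running %r in %s (quiet=%r)' % (cmd, workdir or '<DL_DIR>', quiet))

    pkgdir = '/path/to/dl_dir/svn/module'  # hypothetical checkout directory

    runfetchcmd('svn co ...', None, pkgdir)          # bug: pkgdir binds to 'quiet'
    runfetchcmd('svn co ...', None, workdir=pkgdir)  # intended behaviour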

Cc: Matt Madison <matt@madison.systems>
(Bitbake rev: dc756510a95f88b192352be6fcd1d5d77852c348)

Signed-off-by: Christopher Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:50 +01:00
Mariano Lopez
355e4ec0b6 bitbake: cooker.py: Catch when stdout doesn't have a file descriptor
Currently, there is a check to remove the TOSTOP attribute from
a tty to avoid hangs. It assumes that sys.stdout will have a
file descriptor, but this is not always true; some IO classes
throw exceptions when asked for their file descriptor.

This adds a check for such cases and avoids throwing an
exception.
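
A hedged sketch of the defensive pattern (not the exact cooker.py
code): only touch the termios settings when stdout really is a tty
backed by a file descriptor:

    import io
    import sys
    import termios

    def clear_tostop(stream=None):
        """Clear TOSTOP on the stream's tty; ignore streams without an fd."""
        stream = stream or sys.stdout
        try:
            fd = stream.fileno()
        except (AttributeError, io.UnsupportedOperation):
            return  # e.g. io.StringIO or a wrapped logger: nothing to do
        if not stream.isatty():
            return
        attrs = termios.tcgetattr(fd)
        if attrs[3] & termios.TOSTOP:       # index 3 holds the local flags
            attrs[3] &= ~termios.TOSTOP
            termios.tcsetattr(fd, termios.TCSANOW, attrs)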

[YOCTO #10162]

(Bitbake rev: cb4f8f6efa28ef2b13bc738a0118b876baa15b3e)

Signed-off-by: Mariano Lopez <mariano.lopez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:50 +01:00
Michael Wood
de83a8ab6d bitbake: toaster: localhostbecontroller Remove git assumption
We don't need to force everyone to use git as the method by which
openembedded-core is downloaded. For instance, it could have been
downloaded and extracted as a tarball.

(Bitbake rev: 8b7180332691a41a013e07a52b26018402141b6a)

Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:50 +01:00
Michael Wood
ce592fc7f5 bitbake: toaster: Allow git information to be null for BRLayer
We no longer only deal with layers that have their source in a git
repository; we also allow local directories, so update the
BRLayer model to reflect this.

(Bitbake rev: a15f61f3ef5a87b87121457f76592c87f0ea5d7f)

Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:50 +01:00
Michael Wood
4daae79875 bitbake: toaster: tests Add selenium test layer source switching layer details page
Add selenium tests for the new layer source switching functionality on
the layer details page. The test edits the values for the git repository
and saves, then edits the details for directory information and saves.

(Bitbake rev: acdfafdd753abe38a313c42e3a9d6211338b4e73)

Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:50 +01:00
Michael Wood
2318f92580 bitbake: toaster: Move Custom image recipe rest api to api file
We now have a dedicated file for the rest API so move and rework for
class based views. Also clean up all flake8 identified warnings.

Remove unused imports from toastergui views.

The original work for this API was done by Elliot Smith, Ed Bartosh,
Michael Wood and Dave Lerner

(Bitbake rev: 37c2b4f105d7334cdd83d9675af787f4327e7fe7)

Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:50 +01:00
Michael Wood
3b87f2895a bitbake: toaster: Fix oe-core fixture
Due to a copy-paste error we managed to get some of the wrong
information into the oe fixture that provides suggested default settings
for Toaster. This meant it tested correctly when it shouldn't have.
Fix:
 - The use of local bitbake
 - An incorrect call to realpath which didn't include its parent module.
 - The field used for the local_dir of an existing openembedded-core

(Bitbake rev: d57a9124650e5367919668dfccf6aad4962a77f1)

Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:50 +01:00
Michael Wood
50a8d3a34c bitbake: toaster: layerdetails clean ups after integrating local layer changes
A few clean ups for the work done to integrate editing imported local layers
into the layer detail page.

(Bitbake rev: 092ef32e695b43c3337b7116722c4c6eba981396)

Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:50 +01:00
Sujith H
e99b4cd625 bitbake: toaster: update api to include local_source_dir
Add an additional argument to the API to handle
local_source_dir, which is the value the user passes
to import non-git layers.

[YOCTO #9913]

(Bitbake rev: 2b5728fc5c0e578560506697f271605e80b5918f)

Signed-off-by: Sujith H <sujith.h@gmail.com>
Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:49 +01:00
Sujith H
fa48ca677c bitbake: toaster: layerdetails js changes for switching layers
This patch helps to implement the switching of layers
between directories and git repositories. Specifically
selection of git and local directory. Also enabling
form to view the selection.

[YOCTO #9913]

(Bitbake rev: 5c20834691f1b65cfc4a0c4ec12958f86b34bbeb)

Signed-off-by: Sujith H <sujith.h@gmail.com>
Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:49 +01:00
Sujith H
d9b5d11664 bitbake: toaster: add switch of git and not-git layers imported
This patch updates the layerdetails html file to
add the feature of switching imported layers between
directories and git repositories.

[YOCTO #9913]

(Bitbake rev: 70319eb690a056b41b7e91d79560067edd623ee1)

Signed-off-by: Sujith H <sujith.h@gmail.com>
Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:49 +01:00
Elliot Smith
d2797b5ec2 bitbake: buildinfohelper: discover kernel artifacts correctly
Because some image_license.manifest files contain multiple
FILES lines, and because those lines can sometimes not contain
a list of files (i.e. they look like "FILES:\n"), we were
resetting the list of kernel artifacts when we hit the second
"empty" line.

Fix by ignoring any FILES line which doesn't list files, and by
appending any files found in a valid FILES line, rather than
overwriting the existing list.
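
Illustratively (a simplified parser, not the buildinfohelper code
itself), the fix amounts to accumulating files from every non-empty
FILES line instead of resetting the list:

    def kernel_artifacts(manifest_path):
        """Collect artifact paths from all FILES lines in image_license.manifest."""
        artifacts = []
        with open(manifest_path) as f:
            for line in f:
                if not line.startswith('FILES'):
                    continue
                files = line.split(':', 1)[1].split()
                if not files:
                    continue          # a bare "FILES:" line - ignore it
                artifacts.extend(files)  # append, don't overwrite
        return artifacts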

[YOCTO #10107]

(Bitbake rev: 927ec3524625ac731326b3c1c1361c2a4d2bd9e1)

Signed-off-by: Elliot Smith <elliot.smith@intel.com>
Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:49 +01:00
Stephano Cetola
46bad463ef bitbake: wget: allow basic http auth for SSTATE_MIRRORS
If http basic auth creds were added to sstate mirrors like so:

https://foo.com/sstate/PATH;user=foo:bar;downloadfilename=PATH

The sstate mirror check would silently fail with 401 unauthorized.
This patch allows both the check and the wget download to succeed by
checking for user credentials and, if present, adding the correct
headers or wget params as needed.
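
A minimal sketch of the idea for the existence check, assuming the
credentials arrive as a 'user=name:password' URI parameter (the real
change lives in the wget fetcher; the helper below is illustrative):

    import base64
    import urllib.request

    def check_mirror(url, user=None):
        """HEAD-check a mirror URL, adding an HTTP basic auth header if given."""
        req = urllib.request.Request(url, method='HEAD')
        if user:
            token = base64.b64encode(user.encode('utf-8')).decode('ascii')
            req.add_header('Authorization', 'Basic %s' % token)
        with urllib.request.urlopen(req, timeout=30) as resp:
            return resp.status == 200

The wget download itself would get the equivalent --user/--password
options added to its command line.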

[ YOCTO #9815 ]

(Bitbake rev: cea8113d14da9e12db80a5b6b5811a47a7dfdeef)

Signed-off-by: Stephano Cetola <stephano.cetola@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:49 +01:00
Markus Lehtonen
3658f6d477 bitbake: cookerdata/ast: Fail gracefully if event handler function is not found
[YOCTO #10186]

(Bitbake rev: 107c47c4e6de6a596cf1aeca5c18dbc1c5b44dc4)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:49 +01:00
Richard Purdie
412a26e154 bitbake: build/runqueue: Add noextra stamp file parameter to fix multiconfig builds
We can't execute the same task for the same package_arch multiple
times as the current setup has conflicting directories. Since
these would usually have the same stamp/hash, we want to execute in
sequence rather than in parallel, so for the purposes of task execution,
don't consider the "extra-info" on the stamp files. We need to add
a parameter to the stamp function to achieve this.

This avoids multiple update-rc.d populate_sysroot tasks executing in
parallel and breaking multiconfig builds.

(Bitbake rev: a9041fc96a14e718c0c1d1676e705343b9e872d3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:49 +01:00
Richard Purdie
e7b2b7d40d bitbake: fetch2: Handle multiconfig fetcher issues
We need a separate fetcher cache per multiconfig as the revisions and other
SRC_URI data can potentially be different. For now, this is the simplest way
to achieve that and avoids linux-yocto kernel build failures when targeting
multiple machines for example.

(Bitbake rev: d98cc31d6668bc1d6372664593126b5e5132ef2c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:49 +01:00
Paul Eggleton
26aad57ece bitbake: tinfoil: add a parse_recipe_file function
Parsing a recipe is such a common task for tinfoil-using scripts, and is
a little awkward to do properly, so add an API function to do it. This
should also isolate scripts a little from future changes to the internal
code. The first user of this will be the OpenEmbedded layer index update
script.
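
A hedged usage sketch (the function name comes from this commit; the
exact signature, and the recipe path below, are assumptions that may
differ between bitbake versions), using the context manager support
also added in this series:

    import bb.tinfoil

    with bb.tinfoil.Tinfoil() as tinfoil:
        tinfoil.prepare()
        # Parse a single recipe file and read variables from the result
        data = tinfoil.parse_recipe_file('meta/recipes-core/zlib/zlib_1.2.8.bb')
        print(data.getVar('PN', True), data.getVar('PV', True))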

Part of the fix for [YOCTO #10192].

(Bitbake rev: 39780b1ccbd76579db0fc6fb9369c848a3bafa9d)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:49 +01:00
Paul Eggleton
818a36590a bitbake: cache: allow parsing a recipe with a custom config datastore
To accommodate the OpenEmbedded layer index recipe parsing, we have to
have the ability to pass in a custom config datastore since it
constructs a synthetic one. To make this possible after the multi-config
changes, rename the internal _load_bbfile() function to parse_recipe(),
make it a function at the module level (since it doesn't actually need
to access any members of the class or instance) and move setting
__BBMULTICONFIG inside it since other code will expect that to be set.

Part of the fix for [YOCTO #10192].

(Bitbake rev: 5b3fedfe0822dd7effa4b6d5e96eaf42669a71df)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:48 +01:00
Paul Eggleton
f551e67fa7 bitbake: bitbake-diffsigs/bitbake-layers: Ensure tinfoil is shut down correctly
We should always shut down tinfoil when we're finished with it, either
by explicitly calling the shutdown() method or by using it as a
context manager ("with ...").

(Bitbake rev: 131e6dc4bbd197774d35d2b266bfb0816f6e6b1e)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:48 +01:00
Paul Eggleton
8f277fcf33 bitbake: tinfoil: add context manager functions
Since calling the shutdown() function is highly recommended, make
tinfoil objects a little easier to deal with by adding context manager
support - so you can do the following:

    with bb.tinfoil.Tinfoil() as tinfoil:
        tinfoil.prepare(True)
        ...

(Bitbake rev: f59bc6be2b4af1acdcf6a1b184956b5ffd297743)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 18:09:48 +01:00
Scott Rifenbark
2f33bb30c7 bitbake: bitbake-user-manual: Added "Exporting Variables to the Environment"
Fixes [YOCTO #10196]

Added a new section named "Exporting Variables to the Environment".
This section provides a dedicated description for how to export
variables to the shell.

(Bitbake rev: b543458dd67d24a228fa2db0ecb4ddd20016a560)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 16:29:52 +01:00
Scott Rifenbark
9b20975fc2 bitbake: bitbake-user-manual: Corrected misspelled STAMPS_DIR
Fixes [YOCTO #10141]

Section on Checksums (Signatures) had this variable referred to as
STAMP_DIR.

(Bitbake rev: 7dff6762148bc2ac8f81d89bbe595dfbfdf7b119)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-09-02 16:29:52 +01:00
Alejandro Hernandez
087c580b28 init-install: Fixes the install script failing when not finding any mmcblk devices
The init-install.sh and init-install-efi.sh scripts perform a check
to see which devices are available on a booted system for installation.

Recently, the way we check for these devices changed in 993bfb,
grepping for devices found in /sys/block/. This change caused the installer
to fail (at least) when not finding any mmcblk devices, because we call
sh -e to execute this script, so any command (grep) or pipeline exiting
with a non-zero status causes the whole script to exit.

This patch adds a harmless true exit status at the end of the pipeline(s)
of the grep commands to prevent the installer script from exiting, fixing the issue.

[YOCTO #10189]

(From OE-Core rev: 384cf92ca9c3e66763c2c1ff2776c53d47ae25d6)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-30 07:57:50 +01:00
Scott Rifenbark
2fedd226c3 ref-manual: Fixed small wording in PKGR in the glossary
Fixes [YOCTO #10138]

(From yocto-docs rev: e49e5055e48f3c426090d2bc62b2bffbc2577dd0)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:29 +01:00
Scott Rifenbark
3b70c96537 ref-manual: Replaced "bitbake-dumpsigs" with "bitbake-dumpsig".
Fixes [YOCTO #10141]

(From yocto-docs rev: e74a66d146e7f666a71f2dab6a5f78de5ad1966c)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:29 +01:00
Scott Rifenbark
08e3ef9808 ref-manual: Updates to PKGV, PKGE, and PKGR.
Fixes [YOCTO #10138]

Small wording changes.

(From yocto-docs rev: 66afe7560f086ea350df92b2b40ce5790d3d523c)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:29 +01:00
Scott Rifenbark
9ba4fc087e dev-manual, ref-manual: Systemd-boot: Update documents for new EFI bootloader
Fixes [YOCTO #9707]

* Replaced gummiboot with systemd-boot in the dev-manual
* Replaced the gummiboot class with a new systemd-boot class
* Replaced the appropriate gummiboot variables in the glossary
  with new variables SYSTEMD_BOOT_CFG, SYSTEMD_BOOT_ENTRIES,
  and SYSTEMD_BOOT_TIMEOUT.

(From yocto-docs rev: 778b620e65cc68531b3c41aeb8f27f2a07eb0d00)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:29 +01:00
Scott Rifenbark
ac0fa7a296 ref-manual: Added bitbake.conf to list of example conf files
Fixes [YOCTO #10144]

In the "Viewing Variable Names" section, there is a list of
example configuration files.  I added bitbake.conf to the list.

(From yocto-docs rev: 5a19d5c314881e223aaa567c8eb8f6ed4fbc01df)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:29 +01:00
Scott Rifenbark
1769e1a2de ref-manual: Suggested fleshing out of the sigdata/siginfo documentation
Fixes [YOCTO #10141]

Provided several fixes to address this situation:

 * Renamed "Debugging Build Failures" to "Debugging Tools and
   Techniques" as it fit better the subsections.

 * Renamed "Viewing Dependencies" to "Viewing Dependencies
   Between Recipes and Tasks" as it fit better the description.

 * Added a new "Viewing Task Variable Dependencies" section
   to describe how sigdata and siginfo stuff can be used.

 * Replaced the contents of "4.3.4.1 Debugging" with a shorter
   bit that now references into the new section on veiwing
   task variable dependencies.

(From yocto-docs rev: 539d76366055bed74ccc926519e969324cac470d)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:29 +01:00
Scott Rifenbark
a2e8f196d7 ref-manual: Updated some variables in the glossary for nits.
Fixes [YOCTO #10138]

Small fixes for the following variables:

 * PKGV
 * PV
 * PE
 * PR

(From yocto-docs rev: 4ffc6a2fed330cec320e744561df3aad2a349cf5)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:29 +01:00
Scott Rifenbark
b9df28b9b2 sdk-manual: Added developer note for updating to Neon
(From yocto-docs rev: bd21fdd102d7daa3f03b978760d9190a3815e243)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:29 +01:00
Scott Rifenbark
eb7781f633 sdk-manual: Updated boxes to check when installing pre-built mars yp plug-in
Removed the Bitbake commander item and renamed the ADT one to SDK.

(From yocto-docs rev: 7bb7823bd9991ce95315b76bdfb3175c53198401)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:28 +01:00
Scott Rifenbark
3a5397d306 sdk-manual: Removed "snapshot" in an example version string.
(From yocto-docs rev: 5ce7ad30cfc95b459a3da7b1cc540d1207d50dd8)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:28 +01:00
Scott Rifenbark
4d1e6423ba sdk-manual: Added note to link to the wiki on building an SDK
(From yocto-docs rev: 29704fa495a97279c5d4e29bee22f0aaa9e15cba)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:28 +01:00
Scott Rifenbark
4bdd0cfffa sdk-manual: Added an example name for an extensible SDK
(From yocto-docs rev: bbc2ac36d19713242307b73393035d3fca6ed5a0)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:28 +01:00
Scott Rifenbark
5393fd7b34 sdk-manual: Fixed a broken "do_install" task link
(From yocto-docs rev: bef1a51e0c0a5a0145e942c1cc3f868f1cfaa03c)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:28 +01:00
Scott Rifenbark
0d4510e733 sdk-manual: Fixed a broken link to the "base" class
(From yocto-docs rev: 22eba313276ea95030634eef8632e4e05cb1e484)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:28 +01:00
Scott Rifenbark
3c77e18c81 sdk-manual: "Linkified" the CC variable in section 3.3.4
(From yocto-docs rev: d020cfc08e5d0679d7d5d3fd4269be877413e863)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:28 +01:00
Scott Rifenbark
9359c104ad sdk-manual: Grammar fix.
(From yocto-docs rev: 709481dd0711abda063120f775b35b58c9a2af15)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:28 +01:00
Scott Rifenbark
e990bceca0 sdk-manual: Updated the extensible SDK installer example
(From yocto-docs rev: 3791f4abc21c565f7e258a550e66327dbbe7a384)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:28 +01:00
Scott Rifenbark
66ca0c4fef sdk-manual: Re-worded Step 6 for deploying image in eclipse flow.
(From yocto-docs rev: dd0b96a3917ab6b6c0a22af1d23f48beee6a2cd3)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:28 +01:00
Scott Rifenbark
7bbaab7c4a sdk-manual: Added note about building the image for QEMU use
Placed a note in step 4 of the "Workflow Using Eclipse(tm)"
section that an alternative method to getting the target
root filesystem and toolchain is to build them out.
Referenced the wiki.

(From yocto-docs rev: 60720be0fe0d29a0b695005bb40f5b0c25475b55)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:27 +01:00
Scott Rifenbark
151a129877 sdk-manual: Removed bad link
Dumped a link to pre-built kernel naming information.  The
link was to the sdk-manual, which made no sense.

(From yocto-docs rev: 9b7a9f8217d9251f2d7166afc0bb3b4235264201)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:27 +01:00
Scott Rifenbark
766e91fa2b sdk-manual: Provided better wording to intro running sdk env script.
(From yocto-docs rev: 41b9b8170179a59b6534db9e926d5086be7d4328)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:27 +01:00
Scott Rifenbark
4e0a0d1253 sdk-manual: Added note about building an SDK
(From yocto-docs rev: 6518e03bc0259af04f01596f3f66c123616063e7)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:27 +01:00
Scott Rifenbark
99d43e6293 sdk-manual: Used &DISTRO; for some output release versions.
(From yocto-docs rev: 4dbcd9957366665028adf955951af6256e67c152)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:27 +01:00
Scott Rifenbark
4d5dc4a890 sdk-manual: Created new Mars Eclipse appendix
Fixes [YOCTO #7546]

First draft of the new appendix supporting the Mars version
of eclipse.  New appendix file created and entry made to
the sdk-manual.xml file to include that new appendix file
into the main book.

(From yocto-docs rev: 2fb79c29bcbb5c0801f67d4c245c07c3aa9d2ca2)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>

sdk-manual: WIP on appendix C

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:09:27 +01:00
Robert P. J. Day
a3f519e193 core-image-kernel-dev.bb: Standardize use of _append and leading space.
(From OE-Core rev: 00027aee12f4bbc9a4ba607c91fcc1e0e8257fa2)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:49 +01:00
Ross Burton
3ae2eef914 linux-firmware: set a preferred provider for brcmfmac-sdio.bin
This recipe packages six alternatives to brcmfmac-sdio.bin but as they all have
equal priority, there is no determinism in which provider will be used if
they are all installed.

Arbitrarily select 4339 to be the highest priority.

(From OE-Core rev: 72a3b7eda202336014e9246019885357d8025050)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:49 +01:00
Zhenbo Gao
051d7aa18e groff: correct the location path for awk
awk is located in /usr/bin/, not in /bin/

(From OE-Core rev: a3d9d310866fe37f9c072bc81203cbf1b7ca688b)

Signed-off-by: Zhenbo Gao <zhenbo.gao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:49 +01:00
He Zhe
8232ce2b83 perl: Correct perl path for ptest
Substitute /usr/local with ${bindir}

(From OE-Core rev: bc372d65bc395290e1b7132908a3b943e1b73144)

Signed-off-by: He Zhe <zhe.he@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:48 +01:00
Wang Xin
fb12fe9994 lsbinitscripts: 9.64 -> 9.68
Upgrade lsbinitscripts from 9.64 to 9.68.

(From OE-Core rev: d3f6df98318f0751948041a129faed1bd0f7a7c6)

Signed-off-by: Wang Xin <wangxin2015.fnst@cn.fujitsu.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:48 +01:00
Chen Qi
5bd1a35c8c systemd: split systemd-container
Split container/vm related units into a new package, systemd-container.

The split mainly references Fedora 24, with a few differences.
Apart from the bash and zsh completion files, the differences include
adding systemd-nspawn@.service to the systemd-container package.

[YOCTO #9835]

(From OE-Core rev: 2a4bf6e4c96a8104733add315166210f04c02caf)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:48 +01:00
Stephano Cetola
d45a3449bc rootfs.py: allow removal of unneeded packages
Current functionality allows for the removal of certain packages
based on the read-only image feature. This patch extends this
functionality by adding the FORCE_RO_REMOVE variable, which will
remove these packages regardless of any image features.
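
An illustrative sketch of the check (the variable name comes from this
commit; the helper and argument names here are hypothetical):

    def remove_unneeded_packages(d, pkgs_to_remove, remove_func):
        """Remove packages when read-only-rootfs is enabled or removal is forced."""
        features = (d.getVar('IMAGE_FEATURES', True) or '').split()
        force = d.getVar('FORCE_RO_REMOVE', True) == '1'
        if force or 'read-only-rootfs' in features:
            remove_func(pkgs_to_remove)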

[ YOCTO #9491 ]

(From OE-Core rev: cfb869ffd4c37c3cc8e6b3eb732c1a7b7cfc3cb0)

Signed-off-by: Stephano Cetola <stephano.cetola@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:48 +01:00
Robert P. J. Day
31982f1659 unfs3: Simplify simultaneous usage of "_append" and "+="
(From OE-Core rev: 3437c0da8e89acb414298a338e13a8ae3efaad27)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:48 +01:00
Markus Lehtonen
96e68f15f0 build-perf-test-wrapper.sh: make workdir configurable
A new command line argument '-w' may be used to specify a work dir other
than the default <GIT_DIR>/build-perf-test.

(From OE-Core rev: 824284895f25146520a624b7b97f7475d0135814)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:48 +01:00
Markus Lehtonen
ee4c5f6171 build-perf-test-wrapper.sh: make archive dir configurable
Add new command line argument '-a' that can be used to define the
directory where results (tarballs) are archived. Giving an empty string
disables archiving which makes sense if you store results in Git.

(From OE-Core rev: d53cf92847aa80724be4412801c993948a09cd27)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:48 +01:00
Markus Lehtonen
a34fd3cf27 build-perf-test-wrapper.sh: allow saving results in Git
Add new command line argument '-C' that allows saving results in a Git
repository.

(From OE-Core rev: 3d06795d8cd9017b042a7283c16ac71d4f6317a6)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:48 +01:00
Markus Lehtonen
dc3025215b build-perf-test-wrapper.sh: parse args with getopts
Use getopts for parsing the command line. This changes the usage so that
if a commit (to-be-tested) is defined it must be given by using '-c',
instead of a positional argument.

(From OE-Core rev: b1f77ba41033397a2b25977963682b86f2f76471)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:48 +01:00
Markus Lehtonen
7155a9b64d oe-build-perf-test: add {git_commit_count} keyword for --commit-results-tag
Makes it possible to create easily sortable tags. Also, the default tag
format is updated to use the new keyword.

(From OE-Core rev: e3161654d75dfc3b059c519205b38b26e3ffb215)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:47 +01:00
Markus Lehtonen
d0bac259bd oeqa.buildperf: add git commit count to result data
This number represents the number of commits from the beginning of git
history up to the tested revision. This helps e.g. in ordering results.
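
For reference, that count can be obtained with git rev-list; a minimal
helper (not necessarily the oeqa.utils.git implementation):

    import subprocess

    def commit_count(repo_dir, rev='HEAD'):
        """Number of commits reachable from rev, i.e. its distance from the root."""
        out = subprocess.check_output(
            ['git', '-C', repo_dir, 'rev-list', '--count', rev])
        return int(out.strip())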

(From OE-Core rev: b52070dd057ff5b410cd193f9be2f25bc4c506cc)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:47 +01:00
Markus Lehtonen
caf6ad889c oe-build-perf-test: new {tag_num} keyword for --commit-results-tag
This makes it possible to create numbered tags, where the "basename" of
the tag is the same and the only difference is an (automatically)
increasing index number. This is useful if you do multiple test runs on
the same commit. For example, using:
--commit-results-tag {tester_host}/{git_commit}/{tag_num}

would give you tags something like:
myhost/decb3119dffd3fd38b800bebc1e510f9217a152e/0
myhost/decb3119dffd3fd38b800bebc1e510f9217a152e/1
...

The default tag format is updated to use this new keyword in order to
prevent unintentional tag name clashes.

(From OE-Core rev: cf2aba16338a147f81802f48d2e24a96c7133548)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:47 +01:00
Markus Lehtonen
06b2c75c7e oe-build-perf-test: tag results committed to Git
Create a Git tag when committing results to a Git repository. This patch
also implements --commit-results-tag command line option for controlling
the tag name. The value
is a format string where the following fields may be used:
- {git_branch} - target branch being tested
- {git_commit} - target commit being tested
- {tester_host} - hostname of the tester machine

Tagging can be disabled by giving an empty string to
--commit-results-tag. The option has no effect if --commit-results is
not defined.

(From OE-Core rev: 60059ff5b81d6ba9ba344161d51d1290559ac2df)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:47 +01:00
Markus Lehtonen
28333b3a2d oe-build-perf-test: pre-check Git repo when using --commit-results
Do a pre-check on the path that is specified with --commit-results
before running any tests. The script will create and/or initialize a
fresh Git repository if the given directory does not exist or if it is
an empty directory. It fails if it finds a non-empty directory that is
not a Git repository.

(From OE-Core rev: 759357a3bdbe75a3409b9e58979ab8b45d9b6ae8)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:47 +01:00
Markus Lehtonen
6d9c52fe4b oeqa.utils.git: implement init() method
Method for doing 'git init'.

(From OE-Core rev: c848e1dac68cd859a563a82286f8bc5ddabaa423)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:47 +01:00
Markus Lehtonen
96e8337830 oe-build-perf-test: implement --commit-results-branch
A new command line option for defining the branch where results are
commited. The value is actually a format string accepting two field
names:
- {git_branch} expands to the name of the target branch being tested
- {tester_host} expands to the hostname of the tester machine

The option has no effect if --commit-results is not used.

(From OE-Core rev: b54b63395ec632748a57a702812c8a9a07af35ab)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:47 +01:00
Markus Lehtonen
8335422b00 oe-build-perf-test: support committing results data to Git
Implement a new command line option '--commit-results' which commits the
test results data into a Git repository. The given path must be an
existing initialized local Git repository.

(From OE-Core rev: b6f635513ca971402e7a970acc2168fb5d4a9476)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:47 +01:00
Markus Lehtonen
66540ae5c1 oeqa.buildperf: use term commit instead of revision
This is basically an internal change at this point. The term 'commit'
better represents the data we actually have; the term 'revision' is more
vague and could be understood to point to a tag object, for example.

(From OE-Core rev: f49cf7959b8aaa52b79b22a5884c6aa580a50302)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:47 +01:00
Markus Lehtonen
979c735678 oeqa.utils.git.GitRepo: new arg to require topdir
Add a new 'is_topdir' argument to the GitRepo init method which
validates that the given path is the top directory of a Git repository.
Without this argument GitRepo also accepts subdirectories of a Git
repository (in which case GitRepo will point to the parent directory
that is the top directory of this repository), which may have undesired
effects in some cases.
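
A hedged sketch of the validation, comparing the requested path with
what git itself reports as the top-level directory:

    import os
    import subprocess

    def assert_topdir(path):
        """Raise if path is not the top directory of a Git repository."""
        top = subprocess.check_output(
            ['git', '-C', path, 'rev-parse', '--show-toplevel']).decode().strip()
        if os.path.realpath(top) != os.path.realpath(path):
            raise ValueError('%s is not the top of a git repository' % path)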

(From OE-Core rev: 044c81bd916fbe7140d184eb103f74786cfef604)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:47 +01:00
Markus Lehtonen
618a2ede75 oeqa.utils.git: implement GitRepo.get_current_branch()
(From OE-Core rev: dcba2302adab47b398f1ce7d09c38828ea9ae426)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:46 +01:00
Markus Lehtonen
6cf74643e9 oeqa.utils.git: introduce GitRepo.rev_parse()
(From OE-Core rev: 55726e931536ed0cbd7b80588060b05a3145c934)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:46 +01:00
Markus Lehtonen
7fcc9f5ead oeqa.utils.git: support git commands with updated env
Extend GitRepo.run_cmd so that the caller may redefine and/or define
additional environment variables that will be used when the git command
is run.
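
Illustratively (method and argument names simplified), the extension
merges caller-supplied variables over a copy of the current environment
before invoking git:

    import os
    import subprocess

    def run_git(args, repo_dir, env_update=None):
        """Run a git command in repo_dir, optionally overriding environment vars."""
        env = dict(os.environ)
        if env_update:
            env.update(env_update)  # e.g. GIT_AUTHOR_NAME, GIT_COMMITTER_DATE
        return subprocess.check_output(['git'] + args, cwd=repo_dir, env=env)

    # Example: commit results with a fixed committer date
    # run_git(['commit', '-m', 'results'], '/path/to/results-repo',
    #         {'GIT_COMMITTER_DATE': '2016-09-01T12:00:00'})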

(From OE-Core rev: 9b3c7c47f5d0fa473fe1db81b59b26531414781c)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:46 +01:00
Markus Lehtonen
665800fdf6 oe-build-perf-test: use absolute paths in cmdline args
This is safer as the current working directory may change.

(From OE-Core rev: 4b7bf7860713581ba351599fe32817ba24e8f8d0)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:46 +01:00
Markus Lehtonen
c284616ffb oe-build-perf-test: implement --run-tests option
Makes it possible to run only a subset of tests.

NOTE: The tests currently have (unwritten) dependencies on each other so
use this option with care. Mainly for debugging.

(From OE-Core rev: be4373be54e5b84f951771b0e75140f212838020)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:46 +01:00
Fabio Berton
e484378325 python-3.5-manifest: Add argparse module
Add the argparse module from Python's standard library. This allows using
argparse without installing all python-misc modules. For compatibility,
add python3-argparse as an RDEPENDS of python3-misc.

(From OE-Core rev: f2b96001e074d26f5eb8711c2217a695fb02de4c)

Signed-off-by: Fabio Berton <fabio.berton@ossystems.com.br>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:46 +01:00
Fabio Berton
4c184eb757 python-3.5-manifest: Rename Queue module to queue
The Queue module has been renamed to queue in Python 3.

(From OE-Core rev: e19a430da2ef60b2c6cf6a67210ec1a7b292c8ca)

Signed-off-by: Fabio Berton <fabio.berton@ossystems.com.br>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:46 +01:00
Ulf Magnusson
c1f4befadc useradd.bbclass: Simplify target overrides
The current style might be a leftover from when _class-target did not
exist.

Also change the assignment to SSTATECLEANFUNCS to an append, which makes
more sense. useradd.bbclass is the only user of SSTATECLEANFUNCS as of
writing, so it won't make any functional difference.

(From OE-Core rev: 79dd6be736211a722538a1234337ca16fefd5540)

Signed-off-by: Ulf Magnusson <ulfalizer@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:46 +01:00
Khem Raj
9b6bc21d07 gcc: Update to final 6.2.0 release
(From OE-Core rev: 38b29d6730d67cd2421b6177472f6ed78f4542e0)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:46 +01:00
Fabio Berton
f5ccefd735 piglit: Add python3-argparse module to RDEPENDS
The argparse module was removed from the python3-misc package, so we
need to add the new python3-argparse package to RDEPENDS.

(From OE-Core rev: 4fafb32d0544c1babe4ac4f68cadd056aadd6c82)

Signed-off-by: Fabio Berton <fabio.berton@ossystems.com.br>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:45 +01:00
Ola x Nilsson
07ebdbebdf devtool: build_image: Fix recipe filter
The missing split() causes dev and dbg packages to match.
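
A hypothetical illustration of the pitfall (assumed data and helper names,
not the actual devtool code): matching against an unsplit line lets -dev
and -dbg package names satisfy a filter meant only for the base package.

    line = "foo-dev 1.0-r0 core2_64"

    def matches(line, pkg):
        return line.startswith(pkg)        # 'foo' wrongly matches 'foo-dev'

    def matches_fixed(line, pkg):
        return line.split()[0] == pkg      # exact package-name comparison

    print(matches(line, "foo"), matches_fixed(line, "foo"))  # True False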

(From OE-Core rev: bf83e0f0a3d52958c4380599f1afc4b8e058afd7)

Signed-off-by: Ola x Nilsson <ola.x.nilsson@axis.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:45 +01:00
Ola x Nilsson
d97aaac2d5 devtool: Use the wildcard flag in update_recipe_patch
The --wildcard-version flag was only used in the srcrev variant of the
update-recipe command.

(From OE-Core rev: d3057cba0b01484712fcee3c52373c143608a436)

Signed-off-by: Ola x Nilsson <ola.x.nilsson@axis.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:45 +01:00
Robert Yang
3656481b53 nasm: 2.11.08 -> 2.12.02
(From OE-Core rev: 2eddea3fe8cdc612a5e90806c832bea1570ddfce)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:45 +01:00
Robert Yang
5a35913b86 subversion: 1.9.3 -> 1.9.4
(From OE-Core rev: 8620d13f8cf18be13429b0015d11e4efefe75b20)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:45 +01:00
Robert Yang
713b2e5a66 git: 2.9.2 -> 2.9.3
(From OE-Core rev: 91dea2fdb9be2654f1f530138bd8a90901575646)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:45 +01:00
Joe Slater
5bd808b1b6 systemd-compat-units: do not inherit allarch
Even though we are just a script, we do depend on
systemd being on the target and need an RDEPENDS
which means we cannot also be allarch.

(From OE-Core rev: ef5be3c8256419d5abec566ce266718fe317417e)

Signed-off-by: Joe Slater <jslater@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:45 +01:00
Ross Burton
bf23d4f954 bash-completion: add bash-completion to DEPENDS for target packages
(From OE-Core rev: a2eedbc02321d8923492ffb38fec3cd8828cb1d3)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:45 +01:00
Ross Burton
ba736a3458 utils: check_app_exists: strip whitespace from binary when searching
It's possible that the binary to be searched for contains whitespace which will
cause the search to fail, so strip any whitespace before looking.
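
An illustrative sketch only (the signature is an assumption, not the real
OE-Core helper): stripping first means a value like "  gcc \n" still
resolves.

    import shutil

    def check_app_exists(app, path=None):
        app = app.strip()  # stray whitespace would otherwise defeat the lookup
        return shutil.which(app, path=path) is not None

    print(check_app_exists("  gcc \n"))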

(From OE-Core rev: 9e920abdb0f3dcfd1a94a90461ec1ddfb2729d83)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:45 +01:00
Ross Burton
8670979d22 oeqa/runtime/rpm: use su instead of sudo
This test works fine with su, which is more likely to be installed in images
than sudo.

(From OE-Core rev: 59d10be745a1f7d31c68e4d5da9e1c3461b7d390)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:45 +01:00
Maxin B. John
578f8113da layer.conf: remove pointercal
Remove the pointercal reference from the layer.conf file since we moved the
pointercal recipe out of oe-core.

(From OE-Core rev: 7a0f93956f43a5d000e845eeb429e9e37d48ae2e)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:44 +01:00
Maxin B. John
9228f75f63 distro_alias.inc: remove xtscal, pointercal and tslib references
Remove the xtscal, pointercal and tslib references from the distro_alias.inc
file since we moved those recipes out of oe-core.

(From OE-Core rev: 7bcb388edf49b43b5642396cf1fb1036ed36e425)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:44 +01:00
Maxin B. John
e373c7c80f pointercal: remove recipe
Remove the pointercal recipe along with xtscal since we replaced it with
xinput-calibrator.

[YOCTO #9365]

(From OE-Core rev: d56dffe629dfc86a8d3c7a043c8c2893004f803e)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:44 +01:00
Maxin B. John
e389a96cd4 tslib: remove recipe
Modern systems generally use the kernel driver or libinput instead
of tslib. Move tslib out of oe-core along with xtscal.

(From OE-Core rev: d37f6b595fd9ce53c79ff9281f2e20df7fa0503d)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:44 +01:00
Maxin B. John
d2531ac079 xtscal: remove recipe
Remove xtscal in favour of xinput-calibrator.

[YOCTO #9365]

(From OE-Core rev: 5bcdb9f0995474635789cf0774aba9b774277c53)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:44 +01:00
Maxin B. John
23b11c2a06 packagegroup-core-x11-base.bb: remove pointercal
Remove pointercal from packagegroup-core-x11-base since we removed
xtscal in favour of xinput-calibrator

(From OE-Core rev: 4ad04ae085c4ba2f0ddf3c717478853a419af492)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:44 +01:00
Maxin B. John
be554d7e6c x11-common: replace xtscal with xinput-calibrator
Replace xtscal with xinput-calibrator as part of removing xtscal.

[YOCTO #9365]

(From OE-Core rev: 85afb3445da5c3526f6046eb98262f9af7b78cba)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:44 +01:00
Maxin B. John
19214aeb1d packagegroup-core-tools-testapps: remove tslib references
Remove tslib references from packagegroup-core-tools-testapps since
we removed tslib along with xtscal.

(From OE-Core rev: fe4648423ab7cc72f2d702265ca54d61537e7f88)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:44 +01:00
Maxin B. John
4f31b12131 pointercal-xinput: add a dummy calibration file for qemu
In qemu, the emulated PS/2 mouse reports itself as an "absolute coordinate"
device and that makes xinput_calibrator think it could be calibrated.

Add a dummy calibration file as a workaround to prevent xinput_calibrator from
popping up on every boot in qemu.

[YOCTO #8380]

(From OE-Core rev: d044049362c53681ce1170f74c0802511acd3161)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 23:03:44 +01:00
Richard Purdie
3e97690117 parselogs: Ignore uvesafb timeouts
We're periodically seeing uvesafb timeouts on the autobuilder. Whitelist these
errors, as there seems to be little we can do about them, so we choose to
ignore them rather than fail the builds.

[YOCTO #8245]

There is a better solution proposed in the bug, using a -1 timeout; however,
this avoids failed builds until such time as that is implemented.

(From OE-Core rev: 8097f2da79b7862733494d2321e3dfdb0880804d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 22:57:36 +01:00
Ed Bartosh
e1ab8392d1 image_types: use COMPRESSIONTYPES variable for backward compatibility
The recent renaming of the COMPRESSIONTYPES variable can break recipes that
still use it. Including the value of COMPRESSIONTYPES in CONVERSIONTYPES
should prevent this.

(From OE-Core rev: 5b00d9bf5ebf2350e4a4d09b436193efba80a85c)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 22:54:49 +01:00
Ross Burton
ffcc52e90d hdparm: set LICENSE correctly
LICENSE is recipe-wide so it should be BSD & GPLv2; LICENSE_${PN} is then
overridden to just BSD, as LICENSE_wiper is GPLv2.

(From OE-Core rev: fd1b3fc1dc7ef1621ce6488db0cfa3878bc83a5d)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 22:54:49 +01:00
Markus Lehtonen
988f77af3e license: simple verification of LICENSE_<pkg> values
LICENSE should be a superset of all LICENSE_<pkg> values. That is,
LICENSE should contain all licenses and LICENSE_<pkg> can be used to
"filter" this on a per-package basis. LICENSE_<pkg> shouldn't contain
anything that isn't specified in LICENSE.

This patch implements simple checking of LICENSE_<pkg> values. It does not
do advanced parsing/matching of license expressions, but checks that all
licenses mentioned in LICENSE_<pkg> are also specified in LICENSE. A warning
is printed if problems are found.
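
As a simplified sketch of the check described above (helper names are
assumptions; license-expression operators are simply ignored):

    def check_pkg_licenses(recipe_license, pkg_licenses):
        """Warn about LICENSE_<pkg> entries not covered by LICENSE."""
        def names(expr):
            toks = expr.replace('(', ' ').replace(')', ' ').split()
            return {t for t in toks if t not in ('&', '|')}

        allowed = names(recipe_license)
        warnings = []
        for pkg, expr in pkg_licenses.items():
            unlisted = names(expr) - allowed
            if unlisted:
                warnings.append("LICENSE_%s includes %s, not listed in LICENSE"
                                % (pkg, ' '.join(sorted(unlisted))))
        return warnings

    # e.g. LICENSE = "BSD & GPLv2", LICENSE_wiper = "GPLv2" -> no warning
    print(check_pkg_licenses("BSD & GPLv2", {"wiper": "GPLv2", "foo": "MIT"}))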

(From OE-Core rev: 0f4163a12ea431d0ba6265880ee1e557333d3211)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 22:54:49 +01:00
Markus Lehtonen
831e983251 license.bbclass: do not process LICENSE_pn variables
The loop iterating over LICENSE_pn variables has never worked. In
addition, the LICENSE variable is supposed to contain all licenses
defined in LICENSE_pn variables. Thus, it is simpler just to use LICENSE
as the data we get is essentially the same.

[YOCTO #9499]

(From OE-Core rev: d7229489c7dfd35164fd107d7944f3c273776118)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 22:54:49 +01:00
Matt Madison
c22293622d bitbake: fetch2: clean up remaining cwd saves/changes
Now that the fetchers all preserve the current working
directory, the cwd changes in the try_mirror_url,
download, and checkstatus methods are no longer needed.

(Bitbake rev: 0ed8975c42718342a104a9764a58816f964ec4ea)

Signed-off-by: Matt Madison <matt@madison.systems>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 22:53:43 +01:00
Alejandro Hernandez
970ff6c0fd linux-yocto: Update genericx86* SRCREVs for linux-yocto 4.4
Upgrades to Linux 4.4.18

(From meta-yocto rev: 3fadd68e9021993a082f453945bd8c0ce142ff6f)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 22:49:20 +01:00
Alejandro Hernandez
5d23bacae6 linux-yocto: Update genericx86* SRCREVs for linux-yocto 4.1
Upgrades to Linux 4.1.30

(From meta-yocto rev: 7f3a857f94e29d1476c03ea9193fddd83a9b28bf)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-25 22:49:20 +01:00
Ed Bartosh
a81b326933 combo-layer: python3: fix UnicodeDecodeError
The check_patch function opens the patch file in text mode. This causes
python3 to throw an exception when calling readline():
    UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa7 in position
                        NNNN: invalid start byte

Opening the file in binary mode and using bytes instead of strings
should fix this.
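
A minimal illustration of the failure mode and the fix (the file name is
made up for the example):

    # Text mode decodes as UTF-8 and raises on byte 0xa7; binary mode does not.
    with open("example.patch", "wb") as f:
        f.write(b"Subject: [PATCH] test\n\xa7 not valid UTF-8\n")

    # with open("example.patch") as f:      # text mode
    #     f.readline()                      # -> UnicodeDecodeError

    with open("example.patch", "rb") as f:  # binary mode: operate on bytes
        for line in f:
            if line.startswith(b"Subject:"):
                print(line.decode("utf-8", errors="replace").strip())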

(From OE-Core rev: a7f1435c4c26237cdb55066c9f5408b4fdf016aa)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-24 13:58:28 +01:00
Richard Purdie
94646f4828 oeqa/buildiptables: Switch from netfilter.org to yoctoproject.org mirror
We've had some upstream mirror instability so use our own mirror for the
iptables sources to ensure this doesn't affect the test results.

(From OE-Core rev: 25f6af8895d5f5c6dcedde0a21285d63522769c8)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:59:43 +01:00
Khem Raj
7e4d206777 xserver-xf86-config: pre-load int10 and exa modules
musl doesn't like the lazy loading that xorg uses, therefore
load the needed modules explicitly.

[YOCTO #10169]

(From OE-Core rev: e279c9a30f0df400b06a47a487967a734854714b)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:42 +01:00
Thomas Witt
7afa09db2e cmake.bbclass: call cmake with a relative path
CMake wants a relative path for CMAKE_INSTALL_*DIR; an absolute path
breaks cross-compilation. This fact is documented in the following
ticket: https://cmake.org/Bug/view.php?id=14367

$sysconfdir and $localstatedir are not relative to $prefix, so they are
still set as absolute paths. With this change, the ${PROJECT}Targets.cmake
files that are generated by CMake's "export" function will contain relative
paths instead of absolute ones.

(From OE-Core rev: c03b32bd71dbe04f2f239556fea0b53215e403d7)

Signed-off-by: Thomas Witt <Thomas.Witt@bmw.de>
Signed-off-by: Clemens Lang <clemens.lang@bmw-carit.de>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:42 +01:00
Jussi Kukkonen
5a3947cce1 openssh: Upgrade 7.2p2 -> 7.3p1
Remove CVE-2015-8325.patch as it's included upstream. Rebase another
patch.

(From OE-Core rev: 4b695379dcf378e8d77deaf7e558e8cbd314683c)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:42 +01:00
Jussi Kukkonen
2b7541d375 wayland-protocols: 1.5 -> 1.7
* xdg-shell unstable v6 (backwards incompatible)
* new unstable protocols xdg-foreign, idle-inhibit

(From OE-Core rev: e3ea73039af5fbde52788188b750383aa5d6c2c8)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:42 +01:00
Jussi Kukkonen
a850ba3e5c libinput: Upgrade 1.3.0 -> 1.4.1
(From OE-Core rev: 33800186dbfa3a4b28ece558c9ff1eb68b99d54d)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:42 +01:00
Jussi Kukkonen
86884f6544 json-glib: Upgrade 1.2.0 -> 1.2.2
(From OE-Core rev: a00280a96cd770e6c26e30eab10cc49b54d4992b)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:41 +01:00
Jussi Kukkonen
5818fec038 vte: Upgrade 0.44.1 -> 0.44.2
(From OE-Core rev: 68898cf20f70fc7e7f517111ea7c2b901859263e)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:41 +01:00
Jussi Kukkonen
08d8afe92c gtk+3: Upgrade 3.20.6 -> 3.20.9
(From OE-Core rev: e82bdb1e5ddecb347a75098d53f4db2d0b5aa853)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:41 +01:00
Jussi Kukkonen
c0dcc5d44c gnome-themes-standard: Upgrade 3.18.0 -> 3.20.2
(From OE-Core rev: 1d0dcb0e8bc7b5f11b8249053008d9860a40dc61)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:41 +01:00
Jussi Kukkonen
69d8707bd2 fontconfig: Upgrade 2.12.0 -> 2.12.1
License text block moved, checksum remains the same.

(From OE-Core rev: dbda47cab8742888189131716415777155105d9d)

Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:41 +01:00
Hongxu Jia
624597d922 libgcrypt: upgrade to 1.7.3
(From OE-Core rev: 0a6c2db4d79288fc8c9bebbf7d93bf142d358f7e)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:41 +01:00
Ed Bartosh
1a620968b1 rm_work: don't remove timestamps of image tasks
Excluded removal of do_bootimg, do_bootdirectdisk and do_vmimg
timestamps to prevent unneeded rootfs rebuilds.

[YOCTO #10159]

(From OE-Core rev: f214da502ad7eda27460dc6f06e9cd29a114f2d2)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:41 +01:00
Robert Yang
fd23044c46 python-3.5-manifest.inc: the signal module RDEPENDS on enum
Fixed:
$ python3
>>> import signal
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/path/to/sdk/sysroots/x86_64-pokysdk-linux/usr/lib/python3.5/signal.py", line 4, in <module>
    from enum import IntEnum as _IntEnum
ImportError: No module named 'enum'

(From OE-Core rev: 6306dc8351c19059c4c2a8e75bb5733e64532732)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:41 +01:00
Dai Caiyun
19b2f7c278 libidn: 1.32 -> 1.33
1) Upgrade libidn from 1.32 to 1.33.
2) Modify LIC_FILES_CHKSUM, since the date in the license file has changed but the LICENSE itself has not.

(From OE-Core rev: fa042b49a3a1a78ae28b19e66b30c279da65963a)

Signed-off-by: Dai Caiyun <daicy.fnst@cn.fujitsu.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:41 +01:00
Maxin B. John
e8168a55e7 useradd_base: avoid unintended expansion for useradd parameters
Currently, a dollar sign in useradd parameters requires three preceding
backslash characters to avoid unintended expansion. Before Krogoth it needed
just one preceding backslash character. Restore that behaviour.

[YOCTO #10062]

(From OE-Core rev: 9e43a73c7ad576666d53c8c9e0283bc6bb9087a8)

Signed-off-by: Niko Mauno <niko.mauno@vaisala.com>
Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:40 +01:00
Ross Burton
05c30223b4 xorg-proto: remove stale git recipes
These two recipes are old and unmaintained, so remove them to avoid confusion
with the tarball recipes.

(From OE-Core rev: edf5b379b4c111fd9870fb3ae139d88fcd9e752d)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:40 +01:00
Ross Burton
bef6e6246b inputproto: explicitly disable asciidoc to avoid floating dependency
(From OE-Core rev: d6bb98d0c432d8f4ffaf74f63aca61354565a546)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:40 +01:00
André Draszik
053bafd8b2 gettext_0.16.1: whitespace changes to align with v0.19.8.1
This further aligns this recipe with the GPLv3 version to make
it easier to spot differences between the two recipes.

(From OE-Core rev: e25a533e8ca2fc1fa897df252830825cb9a5f028)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:40 +01:00
André Draszik
5acf2f64ea gettext_0.16.1: align configure options with v0.19.8.1 recipe
It doesn't look like we need any of those features, so
let's disable them explicitly.

(From OE-Core rev: 0a095473eec333f918ef831dea1c2f269a64fc62)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:40 +01:00
André Draszik
dff0cb45f1 gettext_0.16.1: fix lispdir configure option
The option is called --with(out)-lispdir, not --with(out)-lisp

(From OE-Core rev: 422c92d2806f776252c15ec9fe204b204503c4d2)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:40 +01:00
André Draszik
bb96b161ae gettext_0.16.1: use musl gettext implementation
gettext uses internal symbols to detect whether the
implementation is compatible with GNU gettext. However,
these symbols are not part of the public API; they
are specific to glibc.

While musl implements the GNU gettext *API* version 1 and 2
  http://www.openwall.com/lists/musl/2015/04/16/3
it doesn't implement glibc internals. This means that
gettext fails to detect musl's working implementation.

More recent versions of gettext have changed the way
GNU gettext compatibility is done
  https://lists.gnu.org/archive/html/bug-gettext/2016-04/msg00000.html
  http://git.savannah.gnu.org/cgit/gettext.git/commit/gettext-runtime/m4/gettext.m4?id=b67399b40bc5bf3165b09e6a095ec941d4b30a97
and while we could backport the corresponding patch to
gettext.m4, we avoid doing that so as to avoid any
potential GPLv3 issues.

So instead we force ./configure to assume that the gettext
implementation of the c-library (musl) is compatible.

As a side effect, this also reduces image sizes: the internal gettext
implementation isn't built anymore, whereas it would otherwise be packaged
into the main gettext package, which blows up the image since that package
contains a lot of things.

Similarly, libintl.h isn't generated anymore, as the one
from musl is OK.

(From OE-Core rev: 948f0bd162f0b1b0375db884e99a2338f47e8527)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:40 +01:00
Ross Burton
edec1c4e1d insane: improve package_qa_clean_path
Instead of just removing TMPDIR from the path for display, optionally allow a
package to be passed and remove PKGDEST/package too.

This means that messages that specify a package name can pass that name and the
resulting path will be absolute inside that package.

(From OE-Core rev: 55061a43926baf6ff0e17aed02efd299ebba3c24)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:40 +01:00
Hongxu Jia
86164406dd grub: fix load module all_video failed
While using the oe-core toolchain to strip the grub module 'all_video.mod',
the symbol table was stripped:
--------------
root@localhost:~# objdump -t all_video.mod

all_video.mod:     file format elf64-x86-64

SYMBOL TABLE:
no symbols
--------------

This caused grub to fail to load the all_video module.
(This module is loaded by default, as configured in grub.cfg.)
--------------
grub> insmod all_video
error: no symbol table.
--------------

Tweaking the strip options to keep the .module_license symbol works around
the issue.
--------------
root@localhost:~# objdump -t all_video.mod

all_video.mod:     file format elf64-x86-64

SYMBOL TABLE:
0000000000000000 l    d  .text  0000000000000000 .text
0000000000000000 l    d  .data  0000000000000000 .data
0000000000000000 l    d  .module_license        0000000000000000 .module_license
0000000000000000 l    d  .bss   0000000000000000 .bss
0000000000000000 l    d  .moddeps       0000000000000000 .moddeps
0000000000000000 l    d  .modname       0000000000000000 .modname
--------------

(From OE-Core rev: 17e7eb96e5446821ad81977ac9ccac26b05e67a7)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:39 +01:00
Dengke Du
ddaf84ce86 bash: fix run-intl ptest failed
1. Filter the extra white space in intl.right

   When the sub-test unicode2.sub of intl.tests executed, it produced
   compact results without the extra white space; compared against
   intl.right, it failed.

   So we need to filter the extra white space in intl.right.

   Import this patch for intl.right from bash devel branch:

	http://git.savannah.gnu.org/cgit/bash.git/log/?h=devel

   Commit is:

	85ec0778f9d778e1820fb8c0e3e996f2d1103b45

2. Change intl.right correspond to the unicode3.sub's output

   The sub-test unicode3.sub of intl.tests has this:

		printf %q "$payload"

   The payload variable was assigned ASCII characters; when using the
   '%q' format string, it means print the associated argument shell-quoted.

   When the string contains non-alpha && non-digit && non-punctuation &&
   non-ISO 646 (7-bit) characters, it outputs an ANSI-C style quoted string
   like $'...'. We can check the bash source code at:

	http://git.savannah.gnu.org/cgit/bash.git/tree/builtins/printf.def#n557
	http://git.savannah.gnu.org/cgit/bash.git/tree/lib/sh/strtrans.c#n331

   So we need to change intl.right to contain the correct output of unicode3.sub.

   Import parts of this patch for intl.right from bash devel branch:

	http://git.savannah.gnu.org/cgit/bash.git/log/?h=devel

   Commit is:

	74b8cbb41398b4453d8ba04d0cdd1b25f9dcb9e3

3. Add the sanity check for locales

   When running intl.tests, we need the following locales:

	en_US & fr_FR & de_DE

   So add the locales check for the intl.tests in run-ptest.

(From OE-Core rev: 640676226bb351420a0a8b2d2a3c120ae42da11e)

Signed-off-by: Dengke Du <dengke.du@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:38 +01:00
Ming Liu
0db6b0f363 bootchart2: fixes a BOOTLOG_DEST typo
A flaw was observed in bootchartd: BUILDLOG_DEST should actually be
BOOTLOG_DEST. This seems to be a typo or mix-up which has already been
fixed upstream.

Cherry pick the fix since bootchart2 0.14.8 is still the newest release
so far.

(From OE-Core rev: 299e67291f3d396ba93f4c4a94120228bb9b1d88)

Signed-off-by: Ming Liu <peter.x.liu@external.atlascopco.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:37 +01:00
Wang Xin
f0213bbe65 adwaita-icon-theme: 3.18.0 -> 3.20
1) Upgrade adwaita-icon-theme from 3.18.0 to 3.20.
2) Delete DEPENDS, since intltool is not needed.

(From OE-Core rev: c3fa2eca5d2667c668641373948acfb7172ff2e8)

Signed-off-by: Wang Xin <wangxin2015.fnst@cn.fujitsu.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:36 +01:00
Ovidiu Vancea
0d445840c4 initscripts: Check for logrotate in dmesg.sh
Autodetect the previously hardcoded logrotate location, because it can be
installed in multiple places; /usr/bin/logrotate is very common besides
/usr/sbin.

(From OE-Core rev: 277a5975d43125623b5a51ddcb48f9ee2474d0fc)

Signed-off-by: Ovidiu Vancea <ovidiu.vancea@ni.com>
Signed-off-by: Ioan-Adrian Ratiu <adrian.ratiu@ni.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:35 +01:00
Fabio Berton
4945cdfd76 python3-native: Extend python3-native rproviders
Add the following modules to RPROVIDES:

  - python3-email-native
  - python3-io-native
  - python3-json-native
  - python3-lang-native
  - python3-misc-native
  - python3-netclient-native
  - python3-netserver-native
  - python3-numbers-native
  - python3-pkgutil-native
  - python3-pprint-native
  - python3-re-native
  - python3-shell-native
  - python3-subprocess-native
  - python3-threading-native
  - python3-unittest-native

(From OE-Core rev: 1a62ffd108e6aa7b7e5d0a81819550e8a7afeb60)

Signed-off-by: Fabio Berton <fabio.berton@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:35 +01:00
Fabio Berton
cf0e3d0489 python3-native: Change code style for rprovides
Use a more readable code style for RPROVIDES and sort recipes
alphabetically.

(From OE-Core rev: 21130e2afc4762ad84c86e377146b99224d16032)

Signed-off-by: Fabio Berton <fabio.berton@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:34 +01:00
Pranav Tipnis
9e4e6b91b7 grub-efi.bbclass: Fix path in startup.nsh for iso image.
The path in startup.nsh for the ISO image is corrupted as follows:
fs0:\EFI\BOOT^Hootx64.efi

Using printf will emit the correct path, which is:
fs0:\EFI\BOOT\bootx64.efi

This happens because the echo command interprets the backslash escape.
Switch to printf, like the one used in the efi_populate() function.
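
For illustration (in Python rather than the bbclass shell code), the same
escape pitfall looks like this: an interpreted "\b" collapses into a literal
backspace, producing exactly the corrupted path seen above.

    interpreted = 'fs0:\\EFI\\BOOT\bootx64.efi'   # "\b" parsed as backspace
    literal     = r'fs0:\EFI\BOOT\bootx64.efi'    # raw string keeps the backslash
    print(repr(interpreted))  # 'fs0:\\EFI\\BOOT\x08ootx64.efi'  (corrupted)
    print(repr(literal))      # 'fs0:\\EFI\\BOOT\\bootx64.efi'   (intended)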

(From OE-Core rev: 7540b9e68d56e7779b478d2bc09fbbedcf28976b)

Signed-off-by: Pranav Tipnis <pranav.tipnis@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:34 +01:00
Mark Hatle
63ac3520b4 glibc: Fix scope resolution in glibc to be breadth first.
The ELF specification indicates symbol resolution should be breadth first, not
depth first.

The dl-deps.c dl_build_locale_scope function processes in a depth-first
mode. This causes certain symbols to be incorrectly reported when
LD_TRACE_PRELINKING=1 is enabled.

See glibc BZ #20488 for more information.

(From OE-Core rev: fb72263eaa94e64ddeee457b5b1bc999f0e647da)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:34 +01:00
Bruce Ashfield
1bb1200a93 linux-yocto/4.4: fix configuration warnings
Integrating the following commits to address configuration warnings for
intel-corei7-64 and intel-core2-32:

  features: Fix dependencies and =m vs =y discrepancies for corei7
  intel-core2-32.cfg: Explicitly disable CONFIG_64BIT

(From OE-Core rev: b2a4e07390834fa41fe35d1124ac2a0cd6692524)

Signed-off-by: California Sullivan <california.l.sullivan@intel.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:34 +01:00
Bruce Ashfield
4b80b346a7 linux-yocto/4.1/4.4: -stable updates and configuration changes
Updating the 4.4 kernel to v4.4.18 and the 4.1 kernel to v4.1.30.

We also tweak the configuration with the following commits to remove
warnings being generated from the 4.4 kernel (due to options being
dropped from the final .config):

  features: Create mfd-intel-lpss feature and use where appropriate
  features/iio: Set IIO_BUFFER_CB to =m instead of =y
  features: Add 6lowpan feature and add it where necessary

Tested on qemux86, qemuppc, qemumips and qemuarm.

(From OE-Core rev: 18c6fb387aa6a15de514030c4a7c04dac9c68869)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 17:44:34 +01:00
Robert Yang
7b97f591a6 maintainers.inc: update maintainers for Dengke
* Take recipes from Jussi Kukkonen
* Take recipes from Kai, Wenzong and Yi.

(From meta-yocto rev: 508dfcf39e09661950c408497fa23ee8a8e20f55)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-23 10:08:11 +01:00
Jonathan Liu
f078ccf1ac bitbake: siggen: Fix file variable typo in compare_sigfiles
(Bitbake rev: deab9a30987b225922490ca186c5307c15d45b82)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:11:29 +01:00
Matt Madison
ab09541d55 bitbake: fetch2: preserve current working directory
Fix the methods in all fetchers so they don't change
the current working directory of the calling process, which
could lead to "changed cwd" warnings from bitbake.

(Bitbake rev: 6aa78bf3bd1f75728209e2d01faef31cb8887333)

Signed-off-by: Matt Madison <matt@madison.systems>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:08:59 +01:00
Robert Yang
eefb4b66c8 bitbake: dump_cache.py: use python3 as interpreter
Fixed:
  File "bitbake/contrib/dump_cache.py", line 39
    print("Error, need one argument!", file=sys.stderr)

(Bitbake rev: 435c6fb838b9f38c0477bcc2f07c8ce22999132b)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:08:59 +01:00
Martin Jansa
9d0962491c bitbake: toasterui, knotty: don't print taskid followed by taskstring which are now in most cases identical
* unify the format how the task is described
* don't show taskid followed by taskstring as the taskstring is
  different only for setscene tasks (by _setscene suffix)
* the duplicated output was introduced by:
  2c88afb   taskdata/runqueue: Rewrite without use of ID indirection
  as reported and confirmed as a bug here:
  http://lists.openembedded.org/pipermail/openembedded-core/2016-June/123148.html
* show:
  NOTE: Running task 541 of 548 (/OE/build/oe-core/openembedded-core/meta/recipes-core/zlib/zlib_1.2.8.bb:do_package)
  instead of much longer:
  NOTE: Running task 541 of 548 (ID: /OE/build/oe-core/openembedded-core/meta/recipes-core/zlib/zlib_1.2.8.bb:do_package, /OE/build/oe-core/openembedded-core/meta/recipes-core/zlib/zlib_1.2.8.bb:do_package)

  and similarly for failed tasks:
  ERROR: Task (virtual:native:/OE/build/oe-core/openembedded-core/meta/recipes-core/zlib/zlib_1.2.8.bb:do_install) failed with exit code '1'
  instead of much longer:
  ERROR: Task virtual:native:/OE/build/oe-core/openembedded-core/meta/recipes-core/zlib/zlib_1.2.8.bb:do_install (virtual:native:/OE/build/oe-core/openembedded-core/meta/recipes-core/zlib/zlib_1.2.8.bb:do_install) failed with exit code '1'

(Bitbake rev: 696693d45f5eff1226866ed79dbfb67161d8cd3f)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:08:59 +01:00
Markus Lehtonen
0b409117c9 bitbake: tests: add unit tests for the usehead url parameter
[YOCTO #9351]

(Bitbake rev: 63031c0236ace10a9d52b9db9bbb892c1b4bf7db)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:08:59 +01:00
Richard Purdie
c04468d113 bitbake: git: Allow local repos to use HEAD
Introduce a new 'usehead' url parameter for git repositories. Specifying
usehead=1 causes bitbake to use whatever commit the repository HEAD is
pointing to. Usage of usehead=1 is only allowed for local git
repositories, i.e. it must always be accompanied by the protocol=file url
parameter.
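
A schematic of that constraint (not the actual fetcher code; the URL below
is only illustrative):

    def use_head(parm):
        """parm: dict of url parameters, e.g. from a SRC_URI entry."""
        if parm.get('usehead', '0') != '1':
            return False
        if parm.get('protocol') != 'file':
            raise ValueError("usehead=1 is only supported together with "
                             "protocol=file (local repositories)")
        return True

    # e.g. a local URL such as git:///srv/myrepo;protocol=file;usehead=1
    print(use_head({'protocol': 'file', 'usehead': '1'}))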

[YOCTO #9351]

(Bitbake rev: 2673fac5a9d06de937101e3fb2ddf1e60ff99abf)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:08:59 +01:00
Markus Lehtonen
4093f0bad2 bitbake: bitbake-selftest: enable bitbake logging to stdout
Now you get the bb logger output for failed tests. This helps debugging
problems. Also, all stdout/stderr data for successful tests is silenced
which makes for less cluttered console output.

(Bitbake rev: ea19972a16f7639f944823d1d8a7728105460136)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:08:59 +01:00
Markus Lehtonen
9cce855f47 bitbake: bitbake-selftest: introduce BB_TMPDIR_NOCLEAN
Set this env variable to 'yes' to preserve temporary directories used by
the fetcher tests. Useful for debugging tests.
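
A hypothetical sketch of how a test case could honour such a variable (not
the actual bb.tests.fetch code):

    import os
    import shutil
    import tempfile
    import unittest

    class FetcherTest(unittest.TestCase):
        def setUp(self):
            self.tempdir = tempfile.mkdtemp(prefix='bitbake-selftest-')

        def tearDown(self):
            if os.environ.get('BB_TMPDIR_NOCLEAN', 'no') != 'yes':
                shutil.rmtree(self.tempdir)  # keep the tree when debugging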

(Bitbake rev: 04132b261df9def3a0cff14c93c29b26ff906e8b)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:08:59 +01:00
Markus Lehtonen
81697a8661 bitbake: bitbake-selftest: add help text for env variable(s)
(Bitbake rev: 94c63a5b1e731e64eb8efbc09f2ab6a0ce11df05)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:08:58 +01:00
Markus Lehtonen
9139d75736 bitbake: bitbake-selftest: utilize unittest.main better
This simplifies the script and gives new features. It is now possible to
run single test functions, for example, which is nice when writing new
test cases.

(Bitbake rev: 8c513580b9406b031674f799117eae7410f8e01c)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:08:58 +01:00
Richard Purdie
5471b2cdc5 Revert "local.conf.sample: Disable ARM and PPC due to prelink test case failures"
This reverts commit 85d30c28277a040420c2b2f25028ae1500da54db.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:04 +01:00
Robert Yang
aaca17d6ba packagefeed-stability.bbclass: cleansstate should remove pkgs from deploy dir
"bitbake recipe -ccleansstate" should remove binary pkgs from deploy dir
as normal cleansstate does without packagefeed-stability.bbclass.

(From OE-Core rev: 0865a5b8b8fbf478fb4b2310f808bcffff84a091)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:04 +01:00
Dai Caiyun
bc315512d3 libdrm: 2.4.68 -> 2.4.70
Upgrade libdrm from 2.4.68 to 2.4.70.

(From OE-Core rev: 0f9ce74cb62afdd3a0c700be223d0ae0f88daa05)

Signed-off-by: Dai Caiyun <daicy.fnst@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:04 +01:00
Wang Xin
a2f9420358 glib-2.0: 2.48.1 -> 2.48.2
1) Upgrade glib-2.0 from 2.48.1 to 2.48.2.
2) Modify Enable-more-tests-while-cross-compiling.patch, since the data has changed.

(From OE-Core rev: f5af2742003b06f117ba34683cefd168cc78b5a0)

Signed-off-by: Wang Xin <wangxin2015.fnst@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:03 +01:00
Robert Yang
e7ec6cf27c kbd: remove PARALLEL_MAKEINST = ""
Looking into its Makefile, there isn't anything wrong; I guess it had been
fixed during an upgrade. I've applied this patch locally for more than two
months without anything going wrong.

(From OE-Core rev: 53687cadaab307fc843768d61973ed1630eb28af)

Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:03 +01:00
Chen Qi
1e211f371c kmod: upgrade to 23
(From OE-Core rev: 651a08c9eda35edc31e637268be45cda0a439b6d)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:03 +01:00
Chen Qi
a65de2b200 diffutils: upgrade to 3.4
(From OE-Core rev: 98a23eaf837692ad7d2a1d04318118c41052f7b0)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:03 +01:00
Chen Qi
c80af1617e util-linux: upgrade to 2.28.1
(From OE-Core rev: 76e9ea8e5c74ad7ab78138bd330f70d69931410c)

Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:03 +01:00
Ming Liu
e22dabab4d bootchart2: Add ALTERNATIVE configuration for bootchartd
Since busybox also provides the bootchartd command, use the
update-alternatives mechanism to address this.

Also let bootchartd-stop-initscript RDEPENDS on bootchart2, since
/sbin/bootchartd is being called in that script.

Ming Liu <peter.x.liu@external.atlascopco.com>

(From OE-Core rev: 4c4f440d3a8eb6171f619bceacf57835d1b9841a)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:03 +01:00
Awais Belal
0c9f39cc8f asciidoc-native: add dependency on docbook-xml-dtd4-native
During the compilation phase asciidoc runs a2x for validation of some XML
files, which in turn invokes xmllint with the --nonet parameter; this
requires the DTDs to be available locally in order to succeed, otherwise
do_compile fails.
We now add a direct dependency on docbook-xml-dtd4 so the
DTDs are always available locally.

(From OE-Core rev: 14be679c7b8241b2d0872242ed358e5eb4f7acac)

Signed-off-by: Awais Belal <awais_belal@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:03 +01:00
Andrej Valek
fe4e56b0eb openssl: fix add missing dependencies building for test directory
Following the last commit about missing dependencies, another issue
was found. The problem appeared while ptest was being built with some
extra settings set. It means that when ptest is going to be built,
it is necessary to rebuild the dependencies for the test directory too.

(From OE-Core rev: 030142d0410bec85aeacfff6be27d5fed41ce808)

Signed-off-by: Andrej Valek <andrej.valek@siemens.com>
Signed-off-by: Pascal Bach <pascal.bach@siemens.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:03 +01:00
André Draszik
cd3c5582a0 libffi: fix a typo (mips)
While code elsewhere checks for MIPS_INSTRUCTION_SET == mips16e in order
to decide how to compile (hence the typo doesn't affect behaviour), the
intention was to set it to 'mips', as is done everywhere else. Fixing the
typo also helps to avoid confusion.

(From OE-Core rev: 45b27564324c754a34a1930437a7167079fe1ee4)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:03 +01:00
Jonathan Liu
27c0e2f839 image-vm.bbclass: remove old images if RM_OLD_IMAGE is enabled
[YOCTO #10164]

(From OE-Core rev: 3762b42233651832c5909d7a3e873365fc0a9756)

Signed-off-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:03 +01:00
Markus Lehtonen
9cabc18016 oeqa.buildperf: fix crash when creating globalres.log
Fix a bug that was introduced when converting to unittest framework.

(From OE-Core rev: 3bdb7b2e512b2f160360e95ed5b2be3871ec0b4b)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:02 +01:00
Markus Lehtonen
b26c43c13e oe-build-perf-test: align log message format with testrunner output
The previous attempt on this was a bit erroneous, dropping time stamps
completely although only the timestamp format should've been changed.

(From OE-Core rev: bafcff95e2b5e0b9a8c76ce46a62667bf6f49b00)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:02 +01:00
Ed Bartosh
f019bb933f syslinux.bbclass: ensure creation of output directory
The build_syslinux_cfg function creates the syslinux configuration file.
The code assumes that the output directory exists, which is not
always the case. For example, the rm_work task removes the rootfs directory
structure and causes build_syslinux_cfg to fail with this error:
Unable to open ../<image>-<version>/syslinux_vm.cfg

Make build_syslinux_cfg depend on the output directory to ensure that the
directory is created before running the function.

[YOCTO #10159]

(From OE-Core rev: c39b072fa7e96f385da338a727c67e607308d637)

Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:02 +01:00
Richard Purdie
f755bab792 busybox: Add parallel make fix
We're seeing regular parallel make failures in applet headers in busybox.
This adds a patch to try and avoid the issue, building upon a fix already
backported from upstream. The patch has been sent to upstream.

[YOCTO #10116]

(From OE-Core rev: 199cef0e8a50b20d0ee6fefd1d4cf3372eba7728)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:02 +01:00
Richard Purdie
7283c149df sanity.bbclass: Ensure we expand BUILD_PREFIX
This likely used to work when we expanded python functions and broke when
we stopped. Since it defaults to "", it never caused an issue but
is incorrect usage so fix it.

(From OE-Core rev: bfb395fdea642b306f110b4b8f1046f1992c622c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-20 16:06:02 +01:00
Mark Hatle
f038f06997 local.conf.sample: Disable ARM and PPC due to prelink test case failures
Internal prelink test cases reloc8 and reloc9 are failing on both ARM
and PPC systems.  Disable them by removing the prelink from the
IMAGE_CLASSES setting.

(From meta-yocto rev: 85d30c28277a040420c2b2f25028ae1500da54db)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 10:23:55 +01:00
Mark Hatle
63dcfa8f13 Revert "local.conf.sample: Disable prelink by default"
This reverts commit 300f858ba07c938427ccd05a3d7220027a03d461.

Reenable prelink

(From meta-yocto rev: 91705d8ae9f56d1de4f0fdcd6a9654b75921aa8c)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 10:23:55 +01:00
Mark Hatle
fc16cfc01a prelink: Move to latest version of prelink
* Uprev rtld emulation to glibc-2.23
* Fix compilation warnings
* Add additional debug scopes
* Change rtld build_local_scope to be breadth-first
* Fix LD_PRELOAD emulation
* Change function reordering to work with latest binutils

(From OE-Core rev: 9d2c82f7d3fc0fdafc5c4fdd1707324bc4cdbf22)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 10:23:55 +01:00
Khem Raj
b7157a266c gcc: Upgrade to 6.2 RC1
(From OE-Core rev: 41ce4b438795108025c79cd3eec067367d53623e)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 10:23:55 +01:00
Richard Purdie
b360310798 libunwind: Fix build race conflict with gcc and musl
Building libunwind, then gcc-runtime causes build failures. This is hard
to fix since gcc-runtime wants the internal gcc unwind.h header but libunwind
wants to provide this. There are differences in include behaviour between gcc
and glibc which are by design.

This patch hacks around the issue by looking for a define used during gcc-runtime's
build and skipping to the internal header in that case. The patch is only enabled
on musl and is the best workaround I could come up with to unblock failing builds
on our autobuilder.

[YOCTO #10129]

(From OE-Core rev: cd8b64b0a236b27e5383e2394de65b9bfd4b6677)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 10:23:55 +01:00
Maxin B. John
c66683671e ref-manual: swabber: remove from documentation
Remove documentation as swabber was removed from oe-core with
this commit: commit a7ddbea345

(From yocto-docs rev: f3c462b2c6aa20de53c77e5d93cf397ae36cb2bb)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:40:23 +01:00
California Sullivan
bfd0850e76 ref-manual: Update SERIAL_CONSOLES_CHECK description
The previous description was not accurate. Looking at the code,
SERIAL_CONSOLES_CHECK does not act like SERIAL_CONSOLES, as it
will not add consoles to enable but only check and disable
consoles defined by SERIAL_CONSOLES. Also, the previous patch
adds aliasing functionality that needed to be documented.

I (Scott Rifenbark) did a bit of word-smithing here from the
original patch.

(From yocto-docs rev: 55d07048e831f0dbc955b74e029fe26ed276675b)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:40:23 +01:00
Jonathan Liu
dfb899d9c3 ref-manual: change SYSLINUX_ROOT to ROOT_VM for DISK_SIGNATURE variable
The SYSLINUX_ROOT variable was renamed to ROOT_VM in krogoth.

(From yocto-docs rev: c4bbe8bc4967dd631b939f6806d65e2862df3424)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-19 08:40:23 +01:00
Scott Rifenbark
f1d2a932f6 ref-manual: Applied review edits to "Viewing Dependencies"
Fixes [YOCTO #10131]

Fixed some small issues here and there.  Also, provided a
second itemized item in the note box turning it into a
notes box.

(From yocto-docs rev: a736c3bb707e81eda7760c642084a5a7c4de2539)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:47 +01:00
Scott Rifenbark
b249c8d2d5 ref-manual: Removed the "pid" stuff from viewing failed tasks
Fixes [YOCTO #10132]

My attempt to be complete about the filenames that have a "pid"
portion was not correct.  I have removed them from the first
paragraph.

(From yocto-docs rev: 8261b93b39df9abc9f9d6ccb4c00dc11330ad516)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:47 +01:00
Scott Rifenbark
feda248d59 ref-manual: Various small corrections to package-related stuff
Fixes [YOCTO #10135]

Some small problems were fixed:

 * Added a cross-reference in the FILES glossary entry to the
   PACKAGES variable.  The two are tied and there was not a
   reference to it.

 * Removed a redundant "/" character in a pathname example in
   the dev-manual.

 * Removed a redundant "/" character in an example pathname
   in the FILES glossary description.

(From yocto-docs rev: 11a397c232696deece7ac5c6dafcadb87d7a5775)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:47 +01:00
Scott Rifenbark
f82a65776b ref-manual: Updated the "Viewing Logs from Failed Tasks" section.
Fixes [YOCTO #10132]

Provided a better description and removed a deprecated sentence
near the end.

(From yocto-docs rev: bbe588e19bb9ed58883ae7c770da551de659e982)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:47 +01:00
Scott Rifenbark
3db5ff6386 ref-manual: Updated the section on viewing dependencies
Fixes [YOCTO #10131]

The section was renamed "Viewing Dependencies" for consistency.
The section was moved up to be the third item in the sub-section
list.  The section was extensively re-written to provide more
clarity and options for the user to view dependencies.

(From yocto-docs rev: d521c3aabe6ded105cde6f7b3563c85340f759fd)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:47 +01:00
Scott Rifenbark
1a1fc42e9f ref-manual: Clarify and flesh out debugging using bitbake -e
Fixes [YOCTO #10099]

Renamed the log file section to better describe what the user
is accomplishing.

Renamed and repositioned the variables section to better describe
and emphasize the task.  Also fleshed out the variables section with
more information.

(From yocto-docs rev: 0606fe481416a07bf98fc8ae79a30c1d62e75e6d)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:47 +01:00
Scott Rifenbark
813be27f8c dev-manual: Added a new "known issue" for running qemu.
Fixes [YOCTO #9285]

Added a new bullet item to note that Using QEMU in usermode
might not work properly when running 64-bit binaries under
32-bit host machines. In particular, "qemumips64" is known
to not work under i686.

(From yocto-docs rev: 896beb3fddd427f8327d4ddd35be253866c90377)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:47 +01:00
Scott Rifenbark
72ebd62821 ref-manual: Updated PROVIDES and FILES variable descriptions
Fixes [YOCTO #10094]

For PROVIDES, I added information about how the do_package
task goes through PACKAGES and uses the FILES variable
corresponding to each package to assign files to the package.

For FILES, I added a blurb to the existing note how you can
find default values for the FILES* variables.

(From yocto-docs rev: c70f79a608076c5c0490918b87986554bc5d8353)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:47 +01:00
Scott Rifenbark
5337ac972f ref-manual: Updated the SERIAL_CONSOLES_CHECK variable description.
Provided a better, more accurate description of this variable.

(From yocto-docs rev: 020f927bc01d662601fb44b19e4c6bc70e5e5ee7)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:46 +01:00
Scott Rifenbark
efb62666c7 ref-manual: Changed the BPN variable description.
Fixes [YOCTO #10068]

Removed redundant wording.

(From yocto-docs rev: b6c9c979a01c8070d3d2c23340d3c0f5ef358157)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:46 +01:00
Scott Rifenbark
df337651e4 ref-manual: Updates to PARALLEL_MAKE, PARALLEL_MAKEINST, EXTRA_OEMAKE
Fixes [YOCTO #10070]

Updated these three variables with various items to make clear
that PARALLEL_MAKE and PARALLEL_MAKEINST won't work unless
EXTRA_OEMAKE is passed to "make".

(From yocto-docs rev: 4f8b56cc67502cd672e0296cf2f143ecbcde22ac)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:46 +01:00
Scott Rifenbark
836b80cca2 ref-manual: Added note to PROVIDES variable description.
Fixes [YOCTO #10069]

Added a note at the end of the variable description to
explain how runtime virtual dependencies work.

(From yocto-docs rev: de1d16017c27b6b2502735fc41acd22660f6e7b9)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:46 +01:00
Scott Rifenbark
b88973f637 ref-manual: New "Fakeroot and Pseudo" section.
Fixes [YOCTO #10060]

I provided a new section in the Technical Details chapter.  Also
some extra explanation was added to both the do_install task and
the D variable.

(From yocto-docs rev: 565fb11d72bf8c585469bcf65f92b6738e344813)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:46 +01:00
Scott Rifenbark
c78c5006ec ref-manual: Added new variable description EXTRANATIVEPATH.
Fixes [YOCTO #10002]

Created new base description for EXTRANATIVEPATH.

(From yocto-docs rev: aafc2de2657203440fe4b0bf3895cf367063bed1)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:46 +01:00
Scott Rifenbark
9576ddc6b3 ref-manual: Applied review changes to INITRAMFS_IMAGE variable.
Fixes [YOCTO #10012]

There was a mistake referring to the wrong example recipe.  I
changed "core-image-sato-initramfs.bb" to
"core-image-minimal-initramfs.bb" for the fix.

(From yocto-docs rev: 4d63e1fc5786556dd0dd5ca1435252d43dbd745a)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:46 +01:00
Scott Rifenbark
2996779354 ref-manual: Updated the INITRAMFS_IMAGE_BUNDLE variable description.
Fixes [YOCTO #10013]

I enhanced the description with more detail all around.

(From yocto-docs rev: 319dabecf5abf0884295b991f681bed0e1dbf673)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:46 +01:00
Scott Rifenbark
31b06fe353 ref-manual: Updated the do_compile task section.
Fixes [YOCTO #9964]

There was a small fix to change oe_runmake task to oe_runmake
function.

(From yocto-docs rev: aa049c9165c67e041c84fab9fabfbe98828c79bb)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:45 +01:00
Scott Rifenbark
85ce753de6 ref-manual: Updates to INITRAMFS_IMAGE and INITRAMFS_FSTYPES
Fixes [YOCTO #10012]

Applied review comments.

(From yocto-docs rev: 1862d03c1916c34c29c1aea86fbb6ee4c7f34cca)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:45 +01:00
Scott Rifenbark
4fc256d145 ref-manual: Updated the INITRAMFS_FSTYPES variable.
Fixes [YOCTO #10012]

Added a new paragraph to the end explaining the default behavior.

(From yocto-docs rev: 1591f96fc04f64906145f272d205ec6c44ac70c0)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:45 +01:00
Scott Rifenbark
ffdba3fafb ref-manual: Updated the INITRAMFS_IMAGE variable.
Fixes [YOCTO #10012]

Updated the description completely with new, more detailed information.

(From yocto-docs rev: cb6ce91674ab092324f97ca4e56a0cbcd9140fbe)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:45 +01:00
Scott Rifenbark
dfd845a126 ref-manual: Updated the PROVIDES variable description.
Added more information about virtual targets to the end of the
description.

Fixes [YOCTO #10011]

(From yocto-docs rev: ce7ae0c6ad4ad3a0c2422b797556563dc48a9a5b)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 23:50:45 +01:00
Richard Purdie
70ccc66126 systemd-compat-units: Only enable for systemd in DISTRO_FEATURES
This recipe only makes sense when systemd is enabled and otherwise causes
world build failures.

(From OE-Core rev: 5dca6cc2fcdb2799c19b1697f0647a16ce296290)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 16:52:06 +01:00
Joe Slater
bcc8b87c72 systemd-compat-units: pkg_postinst() does not work
The test for various files is wrong and will always be
true, even if init.d does not exist.

Exit if init.d does not exist, and correctly test for
file existence otherwise.
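
A hedged sketch of the corrected pattern (not the actual pkg_postinst code;
the file names are illustrative):

   # exit early if there is no init.d directory at all
   if [ ! -d $D/etc/init.d ]; then
       exit 0
   fi

   # test each candidate file individually rather than with one
   # multi-word test that always evaluates to true
   for f in $D/etc/init.d/syslog $D/etc/init.d/urandom; do
       [ -e "$f" ] && echo "$f present"
   done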

(From OE-Core rev: 8183309080aee45746daaff46b0506b09b5bd269)

Signed-off-by: Joe Slater <jslater@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 16:52:06 +01:00
Richard Purdie
6c24a6446a sanity: Require bitbake 1.31.1 for multi-config changes
(From OE-Core rev: ecc8d346223030ee06aa6d83427f757982e948e4)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 10:07:24 +01:00
Richard Purdie
189371f839 devtool/recipetool/meta: Adapt to bitbake API changes for multi-configuration builds
Unfortunately, to implement multiconfig support in bitbake some APIs
had to change. This updates code in OE to match the changes in bitbake.
It's mostly peripheral changes around devtool/recipetool.

[Will need a bitbake version requirement bump which I'll make when merging]

(From OE-Core rev: 041212fa37bb83acac5ce4ceb9b7b77ad172c5c3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 10:07:23 +01:00
Richard Purdie
8b35b032ed bitbake: bitbake: Update version to 1.31.1
(Bitbake rev: 3ff1c66e6f336e5de7dcbc983a97fcd19ddc6b81)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 10:06:27 +01:00
Richard Purdie
218b81acb6 bitbake: bitbake: Initial multi-config support
This patch adds the notion of supporting multiple configurations within
a single build. To enable it, set a line in local.conf like:

BBMULTICONFIG = "configA configB configC"

This would tell bitbake that before it parses the base configuration,
it should load conf/configA.conf and so on for each different
configuration. These would contain lines like:

MACHINE = "A"

or other variable settings, allowing the configurations to be built in the same
build directory (or with TMPDIR changed so that they do not conflict).

One downside I've already discovered is that if we want to inherit this
file right at the start of parsing, the only place you can put the
configurations is in "cwd", since BBPATH isn't constructed until the
layers are parsed, and therefore using it as a preconf file isn't
possible unless it's located there.

Execution of these targets takes the form "bitbake
multiconfig:configA:core-image-minimal core-image-sato", which is similar to
our virtclass approach for native/nativesdk/multilib using BBCLASSEXTEND.
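
As a hedged illustration of the setup described above (the configuration
names, MACHINE value and image targets are made up for the example):

   # local.conf
   BBMULTICONFIG = "configA configB"

   # conf/configA.conf (placed where bitbake can find it, see the note below)
   MACHINE = "qemuarm"
   TMPDIR = "${TOPDIR}/tmp-configA"

   # build an image for configA alongside one for the default configuration
   $ bitbake multiconfig:configA:core-image-minimal core-image-sato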

Implementation-wise, the implication is that instead of tasks being
uniquely referenced with "recipename/fn:task", they now need to be referenced
as "configuration:recipename:task".

We already started using "virtual" filenames for recipes when we
implemented BBCLASSEXTEND, and this patch adds a new prefix to
these, "multiconfig:<configname>:", which avoids changes to a large
part of the codebase. databuilder has an internal array
of data stores and uses the right one depending on the supplied virtual
filename.

That trick allows us to use the existing parsing code including the
multithreading mostly unchanged as well as most of the cache code.

For recipecache, we end up with a dict of these accessed by
multiconfig (mc). taskdata and runqueue can only cope with one recipecache
so for taskdata, we pass in each recipecache and have it compute the result
and end up with an array of taskdatas. We can only have one runqueue, so there
are extensive changes there.

This initial implementation has some drawbacks:

a) There are no inter-multi-configuration dependencies as yet

b) There are no sstate optimisations. This means if the build uses the
same object twice in say two different TMPDIRs, it will either load from
an existing sstate cache at the start or build it twice. We can then in
due course look at ways in which it would only build it once and then
reuse it. This will likely need significant changes to the way sstate
currently works to make that possible.

(Bitbake rev: 5287991691578825c847bac2368e9b51c0ede3f0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 10:06:27 +01:00
Paul Eggleton
fac16ff8f7 bitbake: siggen: properly close files rather than opening them inline
If you don't do this, with Python 3 you get a warning on exit under some
circumstances.

(Bitbake rev: 49502685df3e616023df352823156381b1f79cd3)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 10:06:26 +01:00
Jérémy Rosen
0eb6d709b6 bitbake: ast/ConfHandler: Add a syntax to clear variable
unset VAR
    will clear the variable VAR
unset VAR[flag]
    will clear the flag "flag" from the variable VAR
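
For illustration, a minimal sketch of the new syntax in a configuration
file (the variable and flag names are made up):

   FOO = "bar"
   FOO[myflag] = "1"

   unset FOO[myflag]
   unset FOO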

(Bitbake rev: bedbd46ece8d1285b5cd2ea07dc64b4875b479aa)

Signed-off-by: Jérémy Rosen <jeremy.rosen@openwide.fr>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 10:06:26 +01:00
Richard Purdie
b50b14e372 bitbake: cache: Build datastores from databuilder object
Rather than passing in a datastore to build on top of, use the data builder
object in the cache and base the parsed recipe on this. This turns
things into proper objects building from one another rather than messy
mixes of static and class functions.

This sets things up so we can support parsing and building multiple
configurations.

(Bitbake rev: fef18b445c0cb6b266cd939b9c78d7cbce38663f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 10:06:26 +01:00
Richard Purdie
b176189df1 bitbake: cache: Split Cache() into a NoCache() parent object
There are some cases we want to parse recipes without any cache
setup or involvement. Split out the standalone functions into
a NoCache variant which the Cache is based upon, setting the scene
for further cleanup and restructuring.

(Bitbake rev: 120b64ea6a0c0ecae7af0fd15d989934fa4f1c36)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 10:06:26 +01:00
Richard Purdie
e79550ea87 bitbake: cache/cooker: Pass databuilder into bb.cache.Cache()
Rather than the current mix of static and class methods, refactor
so that the cache has the databuilder object internally. This becomes
useful for the following patches for multi config support.

It effectively completes some of the object oriented work we've been
working towards in the bitbake core for a while.

(Bitbake rev: 7da062956bf40c1b9ac1aaee222a13f40bba9b19)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 10:06:26 +01:00
Richard Purdie
97ce9126a6 bitbake: cache: Make virtualfn2realfn/realfn2virtual standalone functions
Needing to access these static methods through a class doesn't
make sense. Move these to become module level standalone functions.

(Bitbake rev: 6d06e93c6a2204af6d2cf747a4610bd0eeb9f202)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 10:06:26 +01:00
Richard Purdie
4cd5647f12 bitbake: cache/ast: Move __VARIANTS handling to parse cache function
Simple refactoring to allow for multiconfig support.

(Bitbake rev: 266b848da40904446eb1d084bbdc5307a9b45197)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 10:06:26 +01:00
Richard Purdie
0ef16f083e bitbake: runqueue: Abstract worker functionality to an object/array
With the introduction of multi-config and the possibility of distributed
builds we need arrays of workers rather than the existing two.

This refactors the code to have a dict() of workers and a dict of
fakeworkers, represented by objects. The code can iterate over these.

This is separated out from the multi-config changes since it's separable
and clearer this way.

(Bitbake rev: 8181d96e0a4df0aa47287669681116fa65bcae16)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 10:06:26 +01:00
Richard Purdie
249686927b bitbake: cookerdata: Simplify prefiles/postfiles
The current codepaths are rather confusing. Stop passing these
as parameters and use the ones from when the object is created.

(Bitbake rev: 8c992c148d9619b10eeae8bbd9376ecf408037a5)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 10:06:26 +01:00
Bruce Ashfield
3be73dcd7a yocto-bsp/yocto-kernel: update to work with the latest kern-tools
With some recent changes in the kern tools, we can drop some changes in
the yocto-bsp and yocto-kernel tools that ensured proper patching and
BSP inheritance.

In particular, we no longer need to signify the start of patching, and
we must instruct the tools that we only want configuration fragments
via inheritance, not patches (since they are already applied).

(From meta-yocto rev: 34ed5eebd0b5baab98b6b2d7b3f06ca40932b37d)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:54 +01:00
Richard Purdie
26b0657b2f parselogs: Ignore amb_nb warning messages under qemux86*
(From OE-Core rev: 857f4ca134e4575e71993b4fa255ebafec612d1e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:54 +01:00
California Sullivan
56e53db4d9 parselogs.py: Add failed to setup card detect gpio error on x86
This error has occurred on the MinnowBoard Max and Turbot since their
inception. It supposedly indicates a non-working SD card reader, but
ours works fine. Whitelist the error.

(From OE-Core rev: d577028a1d756b70da056dee73df657cf8000baf)

Signed-off-by: California Sullivan <california.l.sullivan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:54 +01:00
California Sullivan
a9b2c9d83a parselogs.py: Add dmi and ioremap errors to ignore list for core2
These errors have been occurring since the introduction of the 4.4
kernel with no apparent functionality loss. Whitelist for now.

(From OE-Core rev: 47b9058994f15507fc18ce0b08ac82a4c052966e)

Signed-off-by: California Sullivan <california.l.sullivan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:53 +01:00
California Sullivan
e290a83882 parselogs.py: Ignore Skylake graphics firmware load errors on genericx86-64
These errors can't be fixed without adding the firmware to the initramfs
and building it into the kernel, which we don't want to do for
genericx86-64. Since graphics still work acceptably without the firmware
blobs, just ignore the errors for that MACHINE.

(From OE-Core rev: d73a26a71b2b16be06cd9a80a6ba42ffae8412c4)

Signed-off-by: California Sullivan <california.l.sullivan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:53 +01:00
Bill Randle
f479e3866d testimage: allow using kvm when running qemux86* machines
Using kvm can provide significant speedups when running qemux86* machines
on an x86* host. This is enabled via the new QEMU_USE_KVM variable.
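
As a hedged illustration, enabling this in local.conf might look like the
following (the accepted values are an assumption, not confirmed here):

   QEMU_USE_KVM = "1"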

[YOCTO #9298]

(From OE-Core rev: ebac2c8d1fcd09ebce0659a4abb445e4f1c18571)

Signed-off-by: Bill Randle <william.c.randle@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:53 +01:00
Patrick Ohly
ff3a455ee8 image.bbclass: rename COMPRESS(ION) to CONVERSION
With the enhanced functionality, the term "compression" is no longer
accurate, because the mechanism also gets used for conversion
operations that do not actually compress data.

It is possible to remove this naming problem in a backward-compatible
manner by including COMPRESSIONTYPES in CONVERSIONTYPES and checking for
the old COMPRESS_CMD/DEPENDS as fallbacks.
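
As an illustrative sketch only (the CONVERSION_* names follow from the
rename described above; the exact command is an assumption), requesting
and defining a conversion might look like:

   # request a converted image type in an image recipe or local.conf
   IMAGE_FSTYPES += "ext4.gz"

   # define the conversion (sketch)
   CONVERSIONTYPES += "gz"
   CONVERSION_CMD_gz = "gzip -f -9 -c ${IMAGE_NAME}.${type} > ${IMAGE_NAME}.${type}.gz"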

[YOCTO #9346]

(From OE-Core rev: 9d68c024790850cab72ead1e3372a5fcec4ef7b0)

Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:53 +01:00
Alejandro del Castillo
10e44fe6fb grub: split grub-editenv into its own package
grub-editenv edits the env block at runtime on a booted system. Other
tools can depend on it to configure a live system, for example to set the next
boot mode upon reboot. By splitting out grub-editenv, tools don't have to
depend on the entire grub package (grub-editenv just edits one file).

(From OE-Core rev: 24b832b6e31c4e358d0c7a0062b69f66469cdcee)

Signed-off-by: Alejandro del Castillo <alejandro.delcastillo@ni.com>
Signed-off-by: Ioan-Adrian Ratiu <adrian.ratiu@ni.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:53 +01:00
Dengke Du
2973bc2b29 bash: 4.3.39 -> 4.3.46
(From OE-Core rev: 2e12615ca5ab4acf7ec2952b7555054ca88e147d)

Signed-off-by: Dengke Du <dengke.du@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:53 +01:00
André Draszik
f4ad606e02 openssh: add ed25519 host key location to read-only sshd config
It's simply been missing.

(From OE-Core rev: ebd1ea45e67211bd2ab0ec7affab409908126ef3)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:53 +01:00
André Draszik
e27bfac24f connman: add missing space in _append
We do that everywhere else, and otherwise anybody
extending SRC_URI through bbappend must know to
add a space at the end, which is an unusual
requirement.
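
For illustration, the convention being applied (the patch name is made up):

   # without the leading space, a bbappend author must remember to add a
   # trailing space themselves or the two entries run together
   SRC_URI_append = "file://0001-example.patch"

   # with the conventional leading space
   SRC_URI_append = " file://0001-example.patch"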

(From OE-Core rev: 4e7c641b38296ff46ba56cc45e7b14c9e2aa4018)

Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:53 +01:00
Bruce Ashfield
5ef0620b18 kernel-yocto: streamline patch, configuration and audit phases
We've been running with a set of kern-tools that were designed to work
with build systems that knew nothing about git, trees, commits, etc.

As such, there's been a set of shims/wrappers in place to work
within bitbake/oe-core. These were the *me scripts: createme, updateme,
patchme and configme.

With this commit, we strip that legacy code and use the tools directly.
This means less complexity, fewer corner cases, and no surprises
when the tools are running. As another benefit, the tools consume
much less time during a typical build and have no noticeable impact
on the overall build time.

Existing .scc files, features, and processing are not impacted as
these tools are compatible with existing feature descriptions and
kernel configuration fragments.

The audit of kernel configuration fragments is now detached
from the linux-yocto build structure and process. This means that
they can eventually be tweaked to offer kernel audit to any type of
kernel build and configuration process.

Additionally, the kernel symbol audit phase can now resolve symbol
dependencies and offer guidance when a symbol is missing:

   WARNING: linux-yocto-4.4.15+gitAUTOINC+b030d96c7b_f5e2c49d58-r0 do_kernel_configcheck: [kernel config]: specified values did not make it into the kernel's final configuration:

   ---------- CONFIG_BT_6LOWPAN -----------------
   Config: CONFIG_BT_6LOWPAN
   From: /home/bruce/poky/build/tmp/work-shared/qemux86-64/kernel-source/.kernel-meta/configs/standard/features/bluetooth/bluetooth.cfg
   Requested value:  CONFIG_BT_6LOWPAN=y
   Actual value:

   Config 'BT_6LOWPAN' has the following conditionals:
     BT_LE && 6LOWPAN (value: "n")
   Dependency values are:
     BT_LE [y] 6LOWPAN [n]

(From OE-Core rev: 0f698dfd1c8bbc0d53ae7977e26685a7a3df52a3)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:53 +01:00
Bruce Ashfield
028e133171 linux-yocto/4.4: -rt update patch meta-data to remove ()
The existing kernel patching scripts don't like () in patch names, since they
are detected as function calls. Although the scripts will be updated to avoid
this error, it is worthwhile fixing the patch names in the meantime.

(From OE-Core rev: de7e4da0c7abf5dcd8b95ec993e70041475603c2)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:53 +01:00
Bruce Ashfield
93ea3a517c linux-yocto/4.1: config updates
Integrating the following configuration changes:

 features: usb-net: provide more coverage on USB network devices
 features: broxton: enable iTCO watchdog support
 features: broxton: enable iSMT support
 features: broxton: enable LPC bridge function for Intel ICH and SCH

(From OE-Core rev: 02165c6bd9da6ac3a34eabe17d3a068afb6b1727)

Signed-off-by: Bruce Ashfield <bruce@zedd.org>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:52 +01:00
Bruce Ashfield
cb63dd10bb linux-yocto/4.1: bump to v4.1.29
Integrating the korg 4.1.29 -stable release

(From OE-Core rev: 2d7fff848b4e76c7c568492e1dcc32d4a2031297)

Signed-off-by: Bruce Ashfield <bruce@zedd.org>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:52 +01:00
Bruce Ashfield
e7c681720b linux-yocto/4.1: netfilter: x_tables: fix stable backport
There was an issue with a netfilter backport in 4.1.28-stable. To
address it, we backport the -stable fix:

    netfilter: x_tables: fix stable backport

    Stable-4.1 backport of mainline commit 364723410175 ("netfilter:
    x_tables: validate targets of jumps") doesn't handle correctly the fact
    that 4.1 kernel is missing commit 482cfc318559 ("netfilter: xtables:
    avoid percpu ruleset duplication") so that t->entries is still a per-cpu
    array in find_jump_target().

    Use the same fix as e.g. stable-3.14 backport.

    Fixes: 8163327a3a92 ("netfilter: x_tables: validate targets of jumps")
    Signed-off-by: Michal Kubecek <mkubecek@suse.cz>

(From OE-Core rev: c009297d44df98ba103ee267e40ffdbc837e411f)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:52 +01:00
Dai Caiyun
b6af219560 dbus: 1.10.8 -> 1.10.10
Upgrade dbus from 1.10.8 to 1.10.10.

(From OE-Core rev: e5581343303f2cf8724019c3cbfb92a87045a7f1)

Signed-off-by: Dai Caiyun <daicy.fnst@cn.fujitsu.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:52 +01:00
Richard Purdie
d3d395b939 busybox: Backport makefile fix from upstream
This at least partially addresses one of the build races we've seen
on the autobuilder in busybox. Its a straightforward backport from
upstream.

(From OE-Core rev: 8599059164ad0eb908fd1177044af8bc9a9881e4)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:52 +01:00
Richard Purdie
d37837a00d gobject-introspection: Ensure prelink config file exists to avoid build failures
gobject-introspection relies upon prelink-rtld. In order for it to function correctly,
we generate an ld.so.conf file before any users of prelink-rtld
are called.

There is currently a race in gobject-introspection since the configuration file
may not have been created. This adds in code to ensure that regardless of codepath
(new build, existing build, from sstate), we trigger the creation of the configuration
file and avoid build failures.

(From OE-Core rev: 10e0c1a3a452baa05d160a92a54b2e33cf0fd061)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:52 +01:00
Tanu Kaskinen
180a77c56a alsa-utils: 1.1.1 -> 1.1.2
Changelog:
http://www.alsa-project.org/main/index.php/Changes_v1.1.1_v1.1.2

The FFT code in alsabat changed from double precision to single
precision floating point numbers, which is why the fftw dependency
changed to fftwf.

(From OE-Core rev: 2b44e468d20a0256fba896562e2e7d1ae593a4c8)

Signed-off-by: Tanu Kaskinen <tanuk@iki.fi>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:52 +01:00
Tanu Kaskinen
400e2628f1 alsa-lib: 1.1.1 -> 1.1.2
Changelog:
http://www.alsa-project.org/main/index.php/Changes_v1.1.1_v1.1.2

Removed upstreamed patch:
0001-pcm_plugin-fix-appl-pointer-not-correct-when-mmap_co.patch

Rebased avoid-including-sys-poll.h-directly.patch

(From OE-Core rev: 4d3ec9312d9f721f57d0afc08ec1512709f75d17)

Signed-off-by: Tanu Kaskinen <tanuk@iki.fi>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-18 09:27:52 +01:00
Markus Lehtonen
9800b4d9ff oeqa.buildperf: use oe.path.remove()
Drop the self-baked force_rm() method.

(From OE-Core rev: c86bf80abd87acb0da5860806822c64ec9dee089)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:44 +01:00
Markus Lehtonen
51970d10d6 oeqa.buildperf: be more verbose about failed commands
Log failures of commands whose output is stored.

(From OE-Core rev: 240f6e7366c8a9ea830e531d307dd2e27a61a6bd)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:44 +01:00
Markus Lehtonen
818d4c2d8e oe-build-perf-test: simplify stderr log format
Remove timestamps from the stderr log in order to make the console
output more readable, i.e. more in line with the output from unittest
runner.

(From OE-Core rev: d28eeeabde9b4b7160a273445023a44fd50e29ab)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:44 +01:00
Markus Lehtonen
5039a910b7 oe-build-perf-test: set-up file logging as early as possible
So that the log file would not miss any records.

(From OE-Core rev: 9ce6e20ce239067896dc65f09e3fef1173293065)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:44 +01:00
Markus Lehtonen
74820e99f7 oe-build-perf-test: suppress logger output when tests are being run
Prevent logger from writing to stderr when the tests are being run by
the TestRunner. During this time the logger output is only written to
the log file. This way the console output from the script is cleaner and
not mixed with possible logger records.

(From OE-Core rev: 36f58b5172d4e2e182aa447fb3ec4d1ac9f6820d)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:44 +01:00
Markus Lehtonen
d82a795683 oeqa.buildperf: introduce runCmd2()
Special runCmd() for build perf tests which doesn't raise an
AssertionError when the command fails. This causes command failures to
be detected as test errors instead of test failures. This way the "failed"
state of tests is reserved for future use, making it possible to set e.g.
thresholds for certain measurement results.

(From OE-Core rev: 09590ac76a19ee1b1b4a9188f7fce5029f0de52a)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:44 +01:00
Markus Lehtonen
f4128f0e46 oe-build-perf-test: use new unittest based framework
Convert scripts/oe-build-perf-test to be compatible with the new Python
unittest based buildperf test framework.

(From OE-Core rev: 249d99cd7ec00b3227c194eb4b9b21ea4dcb7315)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:44 +01:00
Markus Lehtonen
979be848e2 oeqa.buildperf: convert test cases to unittest
This commit converts the actual tests to be compatible with the new
Python unittest based framework.

(From OE-Core rev: 4e81967131863df7ee6c8356cb41be51f1b8c260)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:44 +01:00
Markus Lehtonen
09b9a4aeee oeqa.buildperf: add BuildPerfTestResult class
The new class is derived from unittest.TextTestResult class. It is
actually implemented by modifying the old BuildPerfTestRunner class
which, in turn, is replaced by a totally new simple implementation
derived from unittest.TestRunner.

(From OE-Core rev: 89eb37ef1ef8d5deb87fd55c9ea7b2cfa2681b07)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:44 +01:00
Markus Lehtonen
3acf648f58 oeqa.buildperf: add BuildPerfTestLoader class
(From OE-Core rev: b281c4a49b0df1de9b3137efb8ff50744e06c48d)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:43 +01:00
Markus Lehtonen
3f519df38e oeqa.buildperf: derive BuildPerfTestCase class from unitest.TestCase
Rename BuildPerfTest to BuildPerfTestCase and convert it to be derived
from TestCase class from the unittest framework of the Python standard
library. This doesn't work with our existing testcases or test runner
class and these need to be modified, too.

(From OE-Core rev: b0b434210a3dbd576f68344e29b8c20d18561099)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:43 +01:00
Markus Lehtonen
daee7558a5 oeqa.buildperf: rename module containing basic tests
(From OE-Core rev: 56e455cf4b42ff4db36debd342bcb03c5199ba52)

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:43 +01:00
Jose Perez Carranza
6203a77a53 buildperf: Add support for times without decimal part
Add logic for the cases when the time retrieved does
not have decimal part.

(From OE-Core rev: a6c9e515f8bc590612e3082ab1c4c254711c8e3b)

Signed-off-by: Jose Perez Carranza <jose.perez.carranza@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:43 +01:00
Ross Burton
a191285b97 lib/oeqa/selftest/bbtests: don't report expected failures
Another instance where expected failures need to be not reported to the error
reporting service.

(From OE-Core rev: bb1cbb8d5bd7639554edcddf1d2eac4abdbb48c7)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:43 +01:00
Aníbal Limón
dde033ab4c oeqa/runtime/syslog.py: Improve test_syslog_logger on systemd
When an image uses systemd, journald acts as the main syslog daemon via
/dev/log.

The test_syslog_logger test tries to log a predefined message to syslog
using logger and then searches for it in /var/log/messages with grep; if this
fails for some reason (e.g. the file was rotated), it now searches for the
predefined message in the journal.

(From OE-Core rev: 26d7e5060a35d20df6f2586b70ed8d2853cc0186)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:43 +01:00
Alejandro Hernandez
0cff756e15 python3.5-manifest: Fixes several dependencies on the newest python3
This patch adds the following packages: python3-enum (needed by python3-git),
python3-selectors (needed by python3-subprocess), python3-signal (needed by python3-subprocess),
and it also fixes the following ones with missing dependencies: python3-subprocess,
python3-compression, python3-datetime

[YOCTO #10127] [YOCTO #10124] [YOCTO #10122]

(From OE-Core rev: 0575e8c9fb52a7b594025fd20445a2edd06e3c69)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:43 +01:00
Hongxu Jia
cd20d7678b ncurses: upgrade to 6.0+20160625
(From OE-Core rev: 10abc041c5ad4ae04c577c13100eef6e0a0b1cab)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:43 +01:00
Hongxu Jia
205418f28e gnupg: upgrade to 2.1.14
(From OE-Core rev: 4ae0ebfae05e2b3c78146f606eaa12b2e42cd07d)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:43 +01:00
Hongxu Jia
8ce7bc30dc man-pages: upgrade to 4.07
(From OE-Core rev: 979cae77baa75ffb8cdc9f7a3b20a1cbbbdd6df0)

Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:43 +01:00
Alejandro Hernandez
b2e6ac630a python3-git: Fixes dependencies, avoiding installing python3-misc
This patch adds the following dependencies to be able to import git on
python3: python3-enum, python3-logging, python3-datetime, python3-netclient.

[YOCTO #9757]

(From OE-Core rev: 9d232fadfaad4170bc867e0b97bbd0ec7cc9ade4)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:42 +01:00
Alejandro Hernandez
97ddbc8a52 python3-gitdb: Fixes zlib missing dependency
(From OE-Core rev: 3637e5c89bb39c194fac296080040b862602e3b0)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:42 +01:00
Khem Raj
cae5c7444b gmp: Fix wrong detection of -march flag
Configure detects the -march flag based upon the target
triplet and wrongly passes -march=armv4 for all
ARM targets. This is unearthed when compiling with clang,
since it errors out with messages like

/tmp/kraj01/a-0c2038.s:27: Error: selected processor does not support `bx r0' in ARM mode

because --fix-v4bx is not passed along with
-march=armv4. This does not happen with the gcc
toolchain, since that flag is passed implicitly, so
this error went undetected.

Fixed thusly

(From OE-Core rev: 51caeccfc5b18b59deac5005e0059a414cbbed32)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:42 +01:00
Ross Burton
a56f14e5fc graph-tool: update to new networkx API, be iterative
Update the dot parser to the new networkx API (using pydotplus to parse).

Also, switch the path display to output the paths as they are found instead of
collecting them into a list, so output appears sooner.

(From OE-Core rev: c91898b07465fdd5f3629babb7ff9226454de24e)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:42 +01:00
Kyle Russell
a1f39e5117 python-3.5-manifest: Add some missing RDEPENDS
ctype's util.py needs subprocess
lang's inspect.py needs importlib.machinery
math's random.py needs crypt's hashlib
subprocess imports threading

(From OE-Core rev: 38f9d7910fb5b2be5f7b1f62c4c7631d9e7138eb)

Signed-off-by: Kyle Russell <bkylerussell@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:42 +01:00
Maxin B. John
a84bfd8643 libpng: update 1.6.23 -> 1.6.24
Updates in License files are due to changes in Copyright date
and Version.

Ensure all tools are packaged into $PN-tools.

(From OE-Core rev: e28b6042b1a81fe449b772b4698ad139edf46332)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:42 +01:00
Khem Raj
b66ec0ff0d libtasn1: Backport compiler warning fixes
These patches are backported from master to fix issues raised by clang
compiler.

(From OE-Core rev: 6e3ff002e1a24936acb20dd209ea758c065cc16a)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:42 +01:00
Khem Raj
af96bedd0b ffmpeg: Pass CC and CXX to configure
This helps in compiling it with a toolchain coming from
an sstate server where it was built using a different build-time
sysroot.

Secondly, it also helps compiling with a non-gcc (clang) compiler.

(From OE-Core rev: 25deaf1368cc0a99d7b5b3f2d08d7fead51296e2)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:42 +01:00
Ross Burton
c0ea62e505 curl: upgrade to 7.50.1
This fixes 3 CVES:

CVE-2016-5419
CVE-2016-5420
CVE-2016-5421

(From OE-Core rev: 62157e2b31c206be40f95574bb205dae5e8e4b68)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:42 +01:00
Ming Liu
bba1b911c8 Use PYTHON_SITEPACKAGES_DIR instead of hard-coded *site-packages*
For those recipes that inherit python*-dir.bbclass, there is
already a PYTHON_SITEPACKAGES_DIR present; use that definition to replace
the redundant "${libdir}/python*/site-packages".
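
A hedged before/after sketch of the kind of change this implies in a recipe
(the package path is made up):

   # before
   FILES_${PN} += "${libdir}/python2.7/site-packages/foo"

   # after, with python-dir.bbclass (or python3-dir.bbclass) inherited
   FILES_${PN} += "${PYTHON_SITEPACKAGES_DIR}/foo"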

(From OE-Core rev: e7d842673952aa4aaa141f64958bc1344dbe8210)

Signed-off-by: Ming Liu <peter.x.liu@external.atlascopco.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:42 +01:00
Awais Belal
92f505b016 init-install*: /etc/mtab make a link rather than a copy
Using a copy makes management of devices error-prone
and makes the system unstable in some scenarios, as tools
have to manipulate both files separately. A link ensures that
both /proc/mounts and /etc/mtab have the same
information at all times, which is how this is handled
on newer systems where there is such a need. The same is
suggested by busybox.
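
A minimal sketch of the approach as it might appear in the installation
script (the target mount point is illustrative):

   # make /etc/mtab follow the kernel-maintained mount table
   ln -sf /proc/mounts /tgt_root/etc/mtab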

(From OE-Core rev: 9f9240d175acee274c04242fd5781094b3f5491b)

Signed-off-by: Awais Belal <awais_belal@mentor.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:41 +01:00
Henry Bruce
caaff71f97 npm: npm.bbclass now adds nodejs to RDEPENDS
We expect that any package that uses the npm bbclass
will have a runtime dependency on node.js
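
A one-line sketch of the effect in npm.bbclass (the exact form is assumed):

   RDEPENDS_${PN} += "nodejs"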

(From OE-Core rev: 769fae0b74d7c7992aa593907f446fab98ef5128)

Signed-off-by: Henry Bruce <henry.bruce@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:41 +01:00
Maxin B. John
322435b890 iproute2: update 4.6.0 -> 4.7.0
4.6.0 -> 4.7.0

(From OE-Core rev: 8c556252b6c60d2fdbb9cd6d601206501467d2db)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:41 +01:00
Maxin B. John
aa1b6cb4c4 sqlite3: update 3.13.0 -> 3.14.0
3.13.0 -> 3.14.0

(From OE-Core rev: 1d42b95d1575c909b8cd5493ee9535d7a776b07c)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:41 +01:00
Ioan-Adrian Ratiu
0b8ddf597e perl-native: backport libnm link fix
pre-5.25.0 perl by default tries to link to an antiquated libnm ("new
math") which has not been used since the early 1990s. After 2014
another libnm appeared, for NetworkManager, causing build failures.

(From OE-Core rev: 97d2ba227044571408151f84cfe611e1a72dd816)

Signed-off-by: Ioan-Adrian Ratiu <adrian.ratiu@ni.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:41 +01:00
Maxin B. John
a6b26f8a6c xinput-calibrator: remove bash dependency
Refresh add-geometry-input-when-calibrating.patch to remove
bashism from it.

(From OE-Core rev: c0b8e1ff40af05b29780164c860c68da35e7fc32)

Signed-off-by: Maxin B. John <maxin.john@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:41 +01:00
Paul Eggleton
99d109cdbc classes/populate_sdk_ext: drop duplicated error message
The preparation script itself prints out an error on failure, and we
aren't redirecting its output anymore, so we no longer need to print out
a message here when it fails. At the same time, make the message printed
out by the script a little clearer - we're just writing the log out to
the file, we shouldn't give the user an expectation that there will be
extra details in there (other than the output produced by
oe-init-build-env there won't be).

(From OE-Core rev: 80dfaf40e087b34d6360188df372c1c3805a00bd)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:41 +01:00
Paul Eggleton
64ffbd4869 classes/populate_sdk_ext: add some pre-install checks
Check a number of things as early as possible in the eSDK installer
script so that the user gets an error up front rather than waiting for
the build system to be extracted and then have the error produced:

* Check for missing utilities specified in SANITY_REQUIRED_UTILITIES
  (along with gcc and g++), taking into account that some of these are
  satisfied by buildtools which ships as part of the SDK. We use the
  newly added capability to list an SDK's contents to allow us to see
  exactly which binaries are inside the buildtools installer.
* Check that Python is available (since the buildtools installer's
  relocate script is written in Python).
* Check that locale value set by the script is actually available
* Check that the install path is not on NFS

This does duplicate some of the checks in sanity.bbclass but it's
difficult to avoid that given that here they have to be written in shell
and there they are written in Python, as well as the fact that we only
need to run some of the checks here and not all (i.e. the ones that
relate to the host system or install path, and not those that check the
configuration or metadata). Given those issues and the fact that the
amount of code is fairly small I elected to just re-implement the checks
here.

Fixes [YOCTO #8657].

(From OE-Core rev: 6e6999a920b913ad9fdd2751100219c07cd14e54)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:41 +01:00
Paul Eggleton
cea1632471 toolchain-shar-extract.sh: add option to list contents
Add a -l command-line option for SDK installers to get a list of files
that will be extracted by the SDK - internally this just runs "tar tv"
on the embedded tarball. This can be used to look at which files the SDK
provides without actually installing it. The initial user of this is the
extensible SDK build process which needs to know what binaries are going
to be installed by the buildtools installer without installing it.
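
Usage is along these lines (the installer file name is hypothetical):

   $ ./example-sdk-installer.sh -l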

(From OE-Core rev: 1d3e874f191f011eb9d7b0e12e513433c126036e)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:41 +01:00
Paul Eggleton
7cd213d8a9 classes/populate_sdk_ext: properly determine buildtools filename
Determine the name of the current buildtools installer ahead of time,
set it in a variable and use that variable rather than the wildcarded
version everywhere, since it's much tidier.

(From OE-Core rev: d5a601db41ba3c561aced7f5a38689f6b4c9a87c)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:41 +01:00
Paul Eggleton
5895bb6d2c classes/populate_sdk_ext: properly handle buildtools install failure
If the buildtools installation failed, we were using a subshell instead
of a compound command and thus the subshell exited but the script
continued on, which is really not what we want to happen. Additionally
log the buildtools installer output to a file and cat it if it fails so
that you can actually see what went wrong, as well as amending the
environment setup script to print a warning as we do when the
preparation fails.

(From OE-Core rev: 8fb8adf309823660c3943df973c216621a71850d)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:40 +01:00
Paul Eggleton
7df442c340 lib/oe/copy_buildsystem: fix merging sstate directories for eSDK
When we don't have uninative enabled there's more merging to be done in
the default configuration (SDK_EXT_TYPE = "full" which by default means
SDK_INCLUDE_TOOLCHAIN = "1") and there are likely files that already
exist in the sstate feed we're assembling, so we need to take care to
merge the directory contents rather than just moving the directories
over. Additionally we now only run this if uninative genuinely isn't
enabled (i.e. NATIVELSBSTRING is different to the fixed value of
"universal".)

In the process of fixing this I discovered an unusual behaviour in
os.rename() - when we're merging these feeds we're dealing with
hard-linked sstate artifacts, and whilst os.rename() is supposed to
silently overwrite an existing destination (permissions allowing), if
you have the source and destination as hardlinks to the same file then
the os.rename() call will just silently fail. As a result the code now
just checks if the destination exists and deletes the source if so
(since we know it will be the same file, we don't need to check in this
case.)
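
A small Python sketch of the check-then-delete approach described above
(not the actual lib/oe/copy_buildsystem code):

   import os

   def merge_artefact(src, dst):
       # hard links to the same inode make os.rename() a silent no-op,
       # so if the destination already exists just drop the source
       if os.path.exists(dst):
           os.remove(src)
       else:
           os.rename(src, dst)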

(From OE-Core rev: 2b5b920c6b4f4d5c243192aa75beff402fd704d3)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:40 +01:00
Paul Eggleton
37b81968bb classes/populate_sdk_ext: sstate filtering fixes
A couple of fixes for the recent sstate filtering implemented in OE-Core
revision 4b7b48fcb9b39fccf8222650c2608325df2a4507:

* We shouldn't be deleting the downloads directory here, since it
  contains the uninative tarball that we will need
* TMPDIR might not be named "tmp" - in OE-Core the default is tmp-glibc
  so use the actual name of TMPDIR here instead.

(From OE-Core rev: 71ecd3bea680ef8c589257844512a14b65e979d3)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:40 +01:00
Paul Eggleton
53b79353ea classes/populate_sdk_ext: handle lack of uninative when filtering sstate
If the build in which the eSDK is being built isn't using uninative,
this will have an effect on NATIVELSBSTRING, which will mean that the
eSDK installer won't be able to find any of the native sstate packages.
To keep things simple, under this scenario just disable uninative
temporarily while we run the SDK installer to help us check the presence
of the sstate artifacts we need. Ideally I'd rather not have things like
this that are artificial in this verification step, but on the other
hand this was the least ugly way to solve the problem.

(From OE-Core rev: 9f39deea7c4af5244dbfa824a52e11590a1d4df6)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:40 +01:00
Paul Eggleton
cdd2460ff3 classes/populate_sdk_ext: ensure eSDK can build without uninative enabled
We were relying on uninative being enabled in the build in which the
eSDK was being produced, which is not the case for example for OE-Core's
default configuration. Move the code that copies the uninative tarball
and writes the checksum to copy_buildsystem so that it happens early
enough for that part of the configuration to be set up when we do the
filtering (which requires running bitbake).

(From OE-Core rev: 7bc95253098aca2ff195b159b34d9ac041806c75)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:40 +01:00
Paul Eggleton
0a78f987de gen-lockedsig-cache: ensure symlinks are dereferenced
If you set up a local mirror in SSTATE_MIRRORS then you can end up with
symlinks in SSTATE_DIR rather than real files. We don't want these
symlinks in the sstate-cache produced by gen-lockedsig-cache, so
dereference any symlinks before copying.

(From OE-Core rev: d65a6ee9e7a9c63b9a16bdb5025af8a7c6433c4f)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:40 +01:00
Paul Eggleton
65ff9f5e0a oe-buildenv-internal: hint at specifying bitbake path in error message
If you check out OE-Core and then run oe-init-build-env you get an error
about not having bitbake checked out in a "bitbake" subdirectory,
however it's possible to specify the bitbake path on the
oe-init-build-env command line, so hint at that in the error message
rather than implying it has to be in the default location.

(From OE-Core rev: 5a1efa91a418e3206b047564d0fd6d5bac22a8d3)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:40 +01:00
Andre McCurdy
eea2de9c70 ccache.bbclass: don't remove CCACHE_DIR as part of do_clean
Removing the ccache directory as part of do_clean is unnecessarily
conservative and defeats many of the benefits of ccache.

The original justification for this behaviour was to avoid confusion
in the corner case that the ccache directory becomes corrupted.
However the standard approach for dealing with such highly unlikely
corner cases (ie manually removing tmp) would also recover from
corruption of the ccache directories, without the negative impact of
defeating ccache during normal development.

(From OE-Core rev: 6ae6680ad8d51eff756dcb6500fca2530e3e3e73)

Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:40 +01:00
Dmitry Rozhkov
51d74f5673 systemd: fix typo in avoid-using-system-auth.patch
The patch 0015-systemd-user-avoid-using-system-auth.patch
makes the PAM session for systemd-user include the common-account file,
which doesn't contain any session-related lines, and that breaks
launching "systemd --user" with the error:

Jul 29 13:03:24 intel-corei7-64 systemd[691]: user@0.service: Failed
at step PAM spawning /lib/systemd/systemd: Operation not permitted

This change fixes the patch by including common-session file
instead.

(From OE-Core rev: ecff74ab68ffca27ed856be6117124b8bc1ef2d6)

Signed-off-by: Dmitry Rozhkov <dmitry.rozhkov@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:40 +01:00
Awais Belal
993bfb55c7 init-install*: only pick root mmc devices
Some eMMC devices expose special sub-devices such as mmcblk0boot0.
The installation script currently picks all of them up and
displays them to the user, which causes confusion because these
sub-devices are pretty small and a complete installation including
the rootfs won't be possible in most cases.
We now simply drop these sub-devices and only present the user
with the root of such mmc devices.

(From OE-Core rev: 4b4d80306de8d8a2e3a2d784890f34e4a0ecfcf0)

Signed-off-by: Awais Belal <awais_belal@mentor.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:40 +01:00
Olof Johansson
4b9c75a953 sanity.bbclass: Only verify /bin/sh link if it's a link
If /bin/sh is a regular file (and not a symlink), we assume it's a
reasonable shell and allow it.

(From OE-Core rev: eaa0dc21a5f058a39bd7867bd3cafdb3407abe36)

Signed-off-by: Olof Johansson <olof.johansson@axis.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:39 +01:00
Yi Zhao
9375b7effa tiff: Security fix CVE-2016-5323
CVE-2016-5323 libtiff: a maliciously crafted TIFF file could cause the
application to crash when using tiffcrop command

External References:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-5323
http://bugzilla.maptools.org/show_bug.cgi?id=2559

Patch from:
2f79856097

(From OE-Core rev: 4ad1220e0a7f9ca9096860f4f9ae7017b36e29e4)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:39 +01:00
Yi Zhao
1b03beb80a tiff: Security fix CVE-2016-5321
CVE-2016-5321 libtiff: a maliciously crafted TIFF file could cause the
application to crash when using tiffcrop command

External References:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-5321
http://bugzilla.maptools.org/show_bug.cgi?id=2558

Patch from:
d9783e4a14

(From OE-Core rev: 4a167cfb6ad79bbe2a2ff7f7b43c4a162ca42a4d)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:39 +01:00
Yi Zhao
b762eb937c tiff: Security fix CVE-2016-3186
CVE-2016-3186 libtiff: buffer overflow in the readextension function in
gif2tiff.c allows remote attackers to cause a denial of service via a
crafted GIF file

External References:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-3186
https://bugzilla.redhat.com/show_bug.cgi?id=1319503

Patch from:
https://bugzilla.redhat.com/attachment.cgi?id=1144235&action=diff

(From OE-Core rev: 3d818fc862b1d85252443fefa2222262542a10ae)

Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:39 +01:00
Armin Kuster
ecb7e52649 tiff: Security fix CVE-2015-8784
CVE-2015-8784 libtiff: out-of-bound write in NeXTDecode()

External Reference:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-8784

(From OE-Core rev: 36097da9679ab2ce3c4044cd8ed64e5577e3f63e)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:39 +01:00
Armin Kuster
dc75fc92b5 tiff: Security fix CVE-2015-8781
CVE-2015-8781 libtiff: out-of-bounds writes for invalid images

External Reference:
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-8781

(From OE-Core rev: 9e97ff5582fab9f157ecd970c7c3559265210131)

Signed-off-by: Armin Kuster <akuster@mvista.com>
Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:39 +01:00
Jackie Huang
955d6cb60f e2fsprogs: Fix missing check for permission denied.
If the path to "ROOT_SYSCONFDIR /mke2fs.conf" has a permission denied problem,
then the get_dirlist() call will return EACCES. But the code in profile_init
will treat that as a fatal error and all executions will fail with:
      Couldn't init profile successfully (error: 13).

But the problem should not really be visible for the target package, as the path
there will be "/etc/mke2fs.conf", and it is not likely that a user has no
permission to read /etc.

(From OE-Core rev: 9d7c32a88e0670a09e5e1097ff8bca58e9a7943f)

Signed-off-by: Jian Liu <jian.liu@windriver.com>
Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:39 +01:00
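
To illustrate the intended behaviour described above (the actual fix lives in e2fsprogs' C code), here is a minimal Python sketch of the pattern: a permission error on an optional configuration path is treated like a missing file rather than a fatal error. The function name and error handling are illustrative only.

    import errno

    def read_optional_config(path):
        # EACCES (permission denied) and ENOENT (missing file) on an optional
        # config file both mean "fall back to built-in defaults", not "abort".
        try:
            with open(path) as f:
                return f.read()
        except OSError as exc:
            if exc.errno in (errno.EACCES, errno.ENOENT):
                return None
            raise
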
California Sullivan
df6694b7c8 initrdscripts/init-install*: Add rootwait when installing to USB devices
It can take a while for USB devices to be detected, so if your rootfs is on a
USB device and you don't set rootwait, you will most likely get a kernel
panic. Fix this by adding rootwait to the kernel command line at
installation time.

Fixes [YOCTO #9462].

(From OE-Core rev: 40e2d36573a7a6bce377b1f9653607065ba5ffb6)

Signed-off-by: California Sullivan <california.l.sullivan@intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:39 +01:00
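
As a rough illustration of what the installer change does (the real init-install scripts are shell), the following Python sketch appends rootwait to a kernel command line when the install target is a USB device; the function and argument names are hypothetical.

    def add_rootwait(cmdline, installing_to_usb):
        # "rootwait" makes the kernel wait for the root device to appear
        # instead of panicking while a slow USB device is still enumerating.
        if installing_to_usb and "rootwait" not in cmdline.split():
            cmdline += " rootwait"
        return cmdline

    print(add_rootwait("root=/dev/sdb2 rw console=ttyS0,115200", True))
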
Manjukumar Matha
08a54713ac u-boot.inc: Enable out-of-tree builds
This patch enables out-of-tree builds for u-boot. It also helps when building
u-boot using the EXTERNALSRC flow.

(From OE-Core rev: 36f110594506fbee5dc18de3a04981f019f2024d)

Signed-off-by: Manjukumar Matha <manjukumar.harthikote-matha@xilinx.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:39 +01:00
Mike Looijmans
23afc338f6 dropbear/init: Allow extra arguments for key generation
This patch adds DROPBEAR_RSAKEY_ARGS and DROPBEAR_DSSKEY_ARGS optional
parameters to /etc/default/dropbear. The contents are simply passed to
the 'dropbearkey' program when generating a host key.

The default keysize for RSA is currently 2048 bits. It takes a CortexA9
running at 700MHz between 4 and 10 seconds to calculate a keypair. The
board boots Linux in about a second, but you have to wait for several
seconds because of the keypair generation. This patch allows one to put
the line DROPBEAR_RSAKEY_ARGS="-s 1024" into /etc/default/dropbear, and
have a host key generated in about 0.2 seconds on the same CPU. This is
particularly useful for read-only rootfs systems, which generate a key on
each boot.

(From OE-Core rev: c0efbcb47ab37c2d9c298fcd40ecaadd3ca050a7)

Signed-off-by: Mike Looijmans <mike.looijmans@topic.nl>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:39 +01:00
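
The init script itself is shell, but the key-generation logic it gains can be sketched in Python as follows; extra_args stands in for the optional DROPBEAR_RSAKEY_ARGS/DROPBEAR_DSSKEY_ARGS value and is passed straight through to dropbearkey.

    import shlex
    import subprocess

    def generate_host_key(key_path, key_type="rsa", extra_args=""):
        # e.g. extra_args="-s 1024" trades key strength for much faster
        # generation on small CPUs (useful on read-only rootfs systems).
        cmd = ["dropbearkey", "-t", key_type, "-f", key_path] + shlex.split(extra_args)
        subprocess.check_call(cmd)
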
Alejandro Hernandez
4f9ddb6e90 initramfs-live-boot: Make sure we kill udev before switching root when live booting
When live booting, we need to make sure the running udev processes are killed
to avoid unexpected behavior. We do this just before switching root; once we
do, a new udev process is spawned from init and takes care of whatever work
was still missing.

[YOCTO #9520]

(From OE-Core rev: e88d9e56952414e6214804f9b450c7106d04318d)

Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:38 +01:00
Mark Hatle
7debab3e1f cross-canadian.bbclass: Add BASECANADIANEXTRAOS to specify main extraos
By default the system expands the extra OS entries for uclibc and musl even
if they are not enabled in the build.  There was no way to prevent this
behavior while still getting the expansion for things like x32 or spe.

The change adds a new setting, which a distribution creator can easily
override, defining the base set of canadianextraos components.  The other
expansions are then based on this setting.

(From OE-Core rev: ea24d69fdf7ebbd7f2d9811cff8a77bffc19a75c)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:38 +01:00
Alexander Kanavin
310d860262 security_flags.inc: enable PIE for a few recipes
They used to fail with PIE enabled, but no longer do.

(From OE-Core rev: c999b3d88dfcffbe0fb66406fb0bff1fb66f34bc)

Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:35:38 +01:00
Richard Purdie
896f1c7696 oeqa/oetest: Improve subprocess error reporting
Without this we know that the command failed and what the exit code was, but
we have no idea how the command failed, since we don't get the output by
default.

This makes it much easier to see what went wrong and stand a chance of
fixing it.

(From OE-Core rev: b020b01d41ccaae5d679f1f7950af2e1a1788d39)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:23:43 +01:00
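
A minimal sketch of the kind of reporting this enables (not the actual oeqa/oetest code): capture stdout and stderr of the child process and include them in the raised error, so a failing test shows how the command failed rather than just its exit code.

    import subprocess

    def run_cmd(cmd):
        try:
            return subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
        except subprocess.CalledProcessError as exc:
            raise RuntimeError("Command '%s' returned %d:\n%s"
                               % (cmd, exc.returncode,
                                  exc.output.decode(errors="replace")))
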
Richard Purdie
a18e3c92e9 report-error: Fix tracebacks
Currently the code gives tracebacks if there are no recipes to be built in a
BuildStarted event. Join the list into a string rather than just taking the
first item; there is nothing special about the first item.

(From OE-Core rev: 684a3d56ef393b56f38d3272f8865f6225a282ab)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:23:43 +01:00
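
The idea, sketched here with a hypothetical helper rather than the actual report-error code: summarise the whole (possibly empty) list of targets instead of indexing its first element, which fails when nothing is being built.

    def format_targets(targets):
        # Joining copes with zero, one or many targets; targets[0] does not.
        return ", ".join(targets) if targets else "(no targets)"

    print(format_targets([]))                      # (no targets)
    print(format_targets(["busybox", "dropbear"])) # busybox, dropbear
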
Richard Purdie
fbc144d08f uninative: Update to 1.3
Uninative 1.2 didn't contain the nativesdk locale fix we really needed, so
release and update to uninative 1.3, which does contain that fix and also
uses the final glibc 2.24 release.

(From OE-Core rev: e0516960925e93f1801620897743b1cebcd806bc)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:23:43 +01:00
Scott Rifenbark
96a861eb02 bitbake: bitbake-user-manual: Re-write "Dependencies Internal to the .bb File"
Fixes [YOCTO #10117]

Applied a re-write to better clarify the behavior of dependencies.

(Bitbake rev: 28bb8ef7f737034055f3485795179cfdcdb9a41f)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:22:59 +01:00
Scott Rifenbark
cbf8516c08 bitbake: bitbake-user-manual: Added setting variable for a single task
Fixes [YOCTO #10095]

I added a third case to the "Conditional Metadata" section to
describe setting a variable for a single task.

(Bitbake rev: 24d648ce62b35f7d2b23fde732703c060579a0d2)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:22:59 +01:00
Scott Rifenbark
277a5a969f bitbake: bitbake-user-manual: Added more detail to anonymous Python functions.
Fixes [YOCTO #10093]

Provided much more detail on how these functions work.

(Bitbake rev: dbe25523d899850f85acb6986eca98bf1b0ef52a)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:22:59 +01:00
Scott Rifenbark
3a1ae38966 bitbake: bitbake-user-manual: Formatted all "flags" to be consistent
Fixes [YOCTO #10071]

The use of flags throughout the manual was very inconsistent. I changed all
references to any named flag in the text to be formatted as code and
enclosed in square brackets.

(Bitbake rev: be0fb616e64e54ae3e2420249f21f4edfd97d648)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:22:59 +01:00
Scott Rifenbark
50d78130fd bitbake: bitbake-user-manual: Added detail to [dirs] and [cleandirs] flags
Fixes [YOCTO #10071]

Provided clearer descriptions for these two flags.

(Bitbake rev: c85c9a468dc3ce606a5f8797e6be8b411a9f3bdb)

Signed-off-by: Scott Rifenbark <srifenbark@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:22:59 +01:00
Francisco Pedraza
500ebdda6b bitbake: bb/utils.py: export_proxies add GIT_PROXY_COMMAND
This was added to enable the use of git through proxies.

(Bitbake rev: 449fc52e483a3bf1cec1c5d8cf8c3946ec5292ab)

Signed-off-by: Francisco Pedraza <francisco.j.pedraza.gonzalez@intel.com>
Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:22:58 +01:00
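
A minimal sketch of the idea behind bb.utils.export_proxies() with GIT_PROXY_COMMAND included (the exact variable list and datastore access in BitBake may differ): proxy-related variables are copied into the process environment so that child processes, including git, can pick them up.

    import os

    PROXY_VARS = ["http_proxy", "HTTP_PROXY", "https_proxy", "HTTPS_PROXY",
                  "ftp_proxy", "FTP_PROXY", "no_proxy", "NO_PROXY",
                  "GIT_PROXY_COMMAND"]

    def export_proxies(getvar):
        # getvar: any callable returning a variable's value or None
        # (in BitBake this would consult the datastore).
        for name in PROXY_VARS:
            value = os.environ.get(name) or getvar(name)
            if value:
                os.environ[name] = value
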
Paul Eggleton
8a45291164 bitbake: knotty: don't show number of running tasks in quiet mode
There's not a whole lot of point showing how many tasks are running when
we're in quiet mode; it just looks a bit strange, particularly when it's
not running any tasks.

(Bitbake rev: 5317200d9cd73c6f971bc1b0cfe8692749e27e3a)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:22:58 +01:00
Paul Eggleton
ea0800049d bitbake: knotty: fix task progress bar not starting at 0%
If we have the task number here we need to subtract 1 to get the number
of tasks completed.

(Bitbake rev: 7c78a1cd3f0638ae76f7c7a469b7f667c7c58090)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:22:58 +01:00
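
The off-by-one being fixed is easy to state as a tiny sketch: the event carries the 1-based number of the task now starting, so the count of completed tasks shown by the bar is one less.

    def tasks_completed(current_task_number):
        # Task 1 starting means 0 tasks completed, so the bar starts at 0%.
        return max(current_task_number - 1, 0)

    assert tasks_completed(1) == 0
    assert tasks_completed(5) == 4
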
Paul Eggleton
1b6f701cd9 bitbake: runqueue: fix two minor issues with the initialising tasks progress
A couple of fixes for the "Initialising tasks" progress bar behaviour:
* Properly finish the progress bar when using bitbake -S
* Finish the progress bar before calling BB_HASHCHECK_FUNCTION (so that
  in OE when that shows its own "Checking sstate mirror object
  availability"  progress bar it gets shown on the next line as it
  should).

(Bitbake rev: de6759d8e9990e426e6d6464a2e05381cd4c12d6)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:22:58 +01:00
Ross Burton
407ba77fe2 bitbake: lib/bb/tests/fetch: remove URL that doesn't exist anymore
The CUPS ipptool URL we were checking now redirects to github where the tarball
isn't present, so remove it from the test suite.

(Bitbake rev: 4b50895fb3462b21e3874a2e99c363c8d05e89e6)

Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:22:58 +01:00
David Reyna
40b655db22 bitbake: toaster: update web urls for openembedded-core's special case
The layer index update command has a special case for updating the
'openembedded-core' layer, and it was missing the reading and updating of
the git web URL fields.

[YOCTO #8037]

(Bitbake rev: ce2f990a366d2d939e93e01f67688f12740c5fee)

Signed-off-by: David Reyna <david.reyna@windriver.com>
Signed-off-by: Michael Wood <michael.g.wood@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-08-17 10:22:58 +01:00
571 changed files with 12402 additions and 7703 deletions

View File

@@ -38,7 +38,7 @@ from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports utf-8.\nPython can't change the filesystem locale after loading so we need a utf-8 when python starts or things won't work.")
__version__ = "1.31.0"
__version__ = "1.31.1"
if __name__ == "__main__":
if __version__ != bb.__version__:

View File

@@ -115,9 +115,9 @@ parser.add_option("-t", "--task",
options, args = parser.parse_args(sys.argv)
if options.taskargs:
tinfoil = bb.tinfoil.Tinfoil()
tinfoil.prepare(config_only = True)
find_compare_task(tinfoil, options.taskargs[0], options.taskargs[1])
with bb.tinfoil.Tinfoil() as tinfoil:
tinfoil.prepare(config_only=True)
find_compare_task(tinfoil, options.taskargs[0], options.taskargs[1])
else:
if len(args) == 1:
parser.print_help()
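
The hunk above switches to using Tinfoil as a context manager; a usage sketch of that pattern (assuming BitBake's lib directory is on sys.path and a build environment is configured) looks like this, with shutdown happening automatically on exit.

    import bb.tinfoil

    with bb.tinfoil.Tinfoil() as tinfoil:
        tinfoil.prepare(config_only=True)
        # config_data is available once prepare() has run
        print(tinfoil.config_data.getVar('BBPATH', True))
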

View File

@@ -87,31 +87,34 @@ def main():
plugins = []
tinfoil = tinfoil_init(False)
for path in ([topdir] +
tinfoil.config_data.getVar('BBPATH', True).split(':')):
pluginpath = os.path.join(path, 'lib', 'bblayers')
bb.utils.load_plugins(logger, plugins, pluginpath)
try:
for path in ([topdir] +
tinfoil.config_data.getVar('BBPATH', True).split(':')):
pluginpath = os.path.join(path, 'lib', 'bblayers')
bb.utils.load_plugins(logger, plugins, pluginpath)
registered = False
for plugin in plugins:
if hasattr(plugin, 'register_commands'):
registered = True
plugin.register_commands(subparsers)
if hasattr(plugin, 'tinfoil_init'):
plugin.tinfoil_init(tinfoil)
registered = False
for plugin in plugins:
if hasattr(plugin, 'register_commands'):
registered = True
plugin.register_commands(subparsers)
if hasattr(plugin, 'tinfoil_init'):
plugin.tinfoil_init(tinfoil)
if not registered:
logger.error("No commands registered - missing plugins?")
sys.exit(1)
if not registered:
logger.error("No commands registered - missing plugins?")
sys.exit(1)
args = parser.parse_args(unparsed_args, namespace=global_args)
args = parser.parse_args(unparsed_args, namespace=global_args)
if getattr(args, 'parserecipes', False):
tinfoil.config_data.disableTracking()
tinfoil.parseRecipes()
tinfoil.config_data.enableTracking()
if getattr(args, 'parserecipes', False):
tinfoil.config_data.disableTracking()
tinfoil.parseRecipes()
tinfoil.config_data.enableTracking()
return args.func(args)
return args.func(args)
finally:
tinfoil.shutdown()
if __name__ == "__main__":

View File

@@ -25,31 +25,48 @@ try:
except RuntimeError as exc:
sys.exit(str(exc))
def usage():
print('usage: [BB_SKIP_NETTESTS=yes] %s [-v] [testname1 [testname2]...]' % os.path.basename(sys.argv[0]))
verbosity = 1
tests = sys.argv[1:]
if '-v' in sys.argv:
tests.remove('-v')
verbosity = 2
if tests:
if '--help' in sys.argv[1:]:
usage()
sys.exit(0)
else:
tests = ["bb.tests.codeparser",
"bb.tests.cow",
"bb.tests.data",
"bb.tests.fetch",
"bb.tests.parse",
"bb.tests.utils"]
tests = ["bb.tests.codeparser",
"bb.tests.cow",
"bb.tests.data",
"bb.tests.fetch",
"bb.tests.parse",
"bb.tests.utils"]
for t in tests:
t = '.'.join(t.split('.')[:3])
__import__(t)
unittest.main(argv=["bitbake-selftest"] + tests, verbosity=verbosity)
# Set-up logging
class StdoutStreamHandler(logging.StreamHandler):
"""Special handler so that unittest is able to capture stdout"""
def __init__(self):
# Override __init__() because we don't want to set self.stream here
logging.Handler.__init__(self)
@property
def stream(self):
# We want to dynamically write wherever sys.stdout is pointing to
return sys.stdout
handler = StdoutStreamHandler()
bb.logger.addHandler(handler)
bb.logger.setLevel(logging.DEBUG)
ENV_HELP = """\
Environment variables:
BB_SKIP_NETTESTS set to 'yes' in order to skip tests using network
connection
BB_TMPDIR_NOCLEAN set to 'yes' to preserve test tmp directories
"""
class main(unittest.main):
def _print_help(self, *args, **kwargs):
super(main, self)._print_help(*args, **kwargs)
print(ENV_HELP)
if __name__ == '__main__':
main(defaultTest=tests, buffer=True)
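
The StdoutStreamHandler trick above can be demonstrated standalone: because the handler resolves its stream at emit time, log output follows sys.stdout even after unittest's buffer=True swaps it out. This is a self-contained sketch, not the bitbake-selftest code itself.

    import logging
    import sys

    class StdoutStreamHandler(logging.StreamHandler):
        def __init__(self):
            # Skip StreamHandler.__init__ so no fixed stream is stored.
            logging.Handler.__init__(self)

        @property
        def stream(self):
            # Always write to whatever sys.stdout currently points at.
            return sys.stdout

    log = logging.getLogger("demo")
    log.addHandler(StdoutStreamHandler())
    log.setLevel(logging.DEBUG)
    log.debug("this follows sys.stdout wherever it is redirected")
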

View File

@@ -115,7 +115,7 @@ def sigterm_handler(signum, frame):
os.killpg(0, signal.SIGTERM)
sys.exit()
def fork_off_task(cfg, data, workerdata, fn, task, taskname, appends, taskdepdata, quieterrors=False):
def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, appends, taskdepdata, quieterrors=False):
# We need to setup the environment BEFORE the fork, since
# a fork() or exec*() activates PSEUDO...
@@ -193,15 +193,19 @@ def fork_off_task(cfg, data, workerdata, fn, task, taskname, appends, taskdepdat
if umask:
os.umask(umask)
data.setVar("BB_WORKERCONTEXT", "1")
data.setVar("BB_TASKDEPDATA", taskdepdata)
data.setVar("BUILDNAME", workerdata["buildname"])
data.setVar("DATE", workerdata["date"])
data.setVar("TIME", workerdata["time"])
bb.parse.siggen.set_taskdata(workerdata["sigdata"])
ret = 0
try:
the_data = bb.cache.Cache.loadDataFull(fn, appends, data)
bb_cache = bb.cache.NoCache(databuilder)
(realfn, virtual, mc) = bb.cache.virtualfn2realfn(fn)
the_data = databuilder.mcdata[mc]
the_data.setVar("BB_WORKERCONTEXT", "1")
the_data.setVar("BB_TASKDEPDATA", taskdepdata)
the_data.setVar("BUILDNAME", workerdata["buildname"])
the_data.setVar("DATE", workerdata["date"])
the_data.setVar("TIME", workerdata["time"])
bb.parse.siggen.set_taskdata(workerdata["sigdata"])
ret = 0
the_data = bb_cache.loadDataFull(fn, appends)
the_data.setVar('BB_TASKHASH', workerdata["runq_hash"][task])
bb.utils.set_process_name("%s:%s" % (the_data.getVar("PN", True), taskname.replace("do_", "")))
@@ -371,7 +375,8 @@ class BitbakeWorker(object):
bb.msg.loggerDefaultVerbose = self.workerdata["logdefaultverbose"]
bb.msg.loggerVerboseLogs = self.workerdata["logdefaultverboselogs"]
bb.msg.loggerDefaultDomains = self.workerdata["logdefaultdomain"]
self.data.setVar("PRSERV_HOST", self.workerdata["prhost"])
for mc in self.databuilder.mcdata:
self.databuilder.mcdata[mc].setVar("PRSERV_HOST", self.workerdata["prhost"])
def handle_ping(self, _):
workerlog_write("Handling ping\n")
@@ -389,7 +394,7 @@ class BitbakeWorker(object):
fn, task, taskname, quieterrors, appends, taskdepdata = pickle.loads(data)
workerlog_write("Handling runtask %s %s %s\n" % (task, fn, taskname))
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.workerdata, fn, task, taskname, appends, taskdepdata, quieterrors)
pid, pipein, pipeout = fork_off_task(self.cookercfg, self.data, self.databuilder, self.workerdata, fn, task, taskname, appends, taskdepdata, quieterrors)
self.build_pids[pid] = task
self.build_pipes[pid] = runQueueWorkerPipe(pipein, pipeout)

View File

@@ -33,9 +33,6 @@ webserverKillAll()
while kill -0 $pid 2>/dev/null; do
kill -SIGTERM -$pid 2>/dev/null
sleep 1
# Kill processes if they are still running - may happen
# in interactive shells
ps fux | grep "python.*manage.py runserver" | awk '{print $2}' | xargs kill
done
rm ${pidfile}
fi
@@ -95,10 +92,6 @@ stop_system()
# prevent reentry
if [ $INSTOPSYSTEM -eq 1 ]; then return; fi
INSTOPSYSTEM=1
if [ -f ${BUILDDIR}/.toasterui.pid ]; then
kill `cat ${BUILDDIR}/.toasterui.pid` 2>/dev/null
rm ${BUILDDIR}/.toasterui.pid
fi
webserverKillAll
# unset exported variables
unset TOASTER_DIR
@@ -249,14 +242,6 @@ case $CMD in
fi
fi
# kill Toaster web server if it's alive
if [ -e $BUILDDIR/.toastermain.pid ] && kill -0 `cat $BUILDDIR/.toastermain.pid`; then
echo "Warning: bitbake appears to be dead, but the Toaster web server is running." 1>&2
echo " Something fishy is going on." 1>&2
echo "Cleaning up the web server to start from a clean slate."
webserverKillAll
fi
# Create configuration file
conf=${BUILDDIR}/conf/local.conf
line='INHERIT+="toaster buildhistory"'

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env python
#!/usr/bin/env python3
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#

View File

@@ -596,7 +596,7 @@
"<link linkend='checksums'>Checksums (Signatures)</link>"
section for information).
It is also possible to append extra metadata to the stamp using
the "stamp-extra-info" task flag.
the <filename>[stamp-extra-info]</filename> task flag.
For example, OpenEmbedded uses this flag to make some tasks machine-specific.
</para>
@@ -653,7 +653,8 @@
</itemizedlist>
It is possible to have functions run before and after a task's main
function.
This is done using the "prefuncs" and "postfuncs" flags of the task
This is done using the <filename>[prefuncs]</filename>
and <filename>[postfuncs]</filename> flags of the task
that lists the functions to run.
</para>
</section>
@@ -827,7 +828,7 @@
itself.
The simplest parameter to pass is "none", which causes a
set of signature information to be written out into
<filename>STAMP_DIR</filename>
<filename>STAMPS_DIR</filename>
corresponding to the targets specified.
The other currently available parameter is "printdiff",
which causes BitBake to try to establish the closest

View File

@@ -372,9 +372,9 @@
FOO[a] += "456"
</literallayout>
The variable <filename>FOO</filename> has two flags:
<filename>a</filename> and <filename>b</filename>.
<filename>[a]</filename> and <filename>[b]</filename>.
The flags are immediately set to "abc" and "123", respectively.
The <filename>a</filename> flag becomes "abc 456".
The <filename>[a]</filename> flag becomes "abc 456".
</para>
<para>
@@ -428,9 +428,30 @@
FOO := "${@foo()}"
</literallayout>
</note>
For a different way to set variables with Python code during
parsing, see the
"<link linkend='anonymous-python-functions'>Anonymous Python Functions</link>"
section.
</para>
</section>
<section id='unsetting-variables'>
<title>Unsetting variables</title>
<para>
It is possible to completely remove a variable or a variable flag
from BitBake's internal data dictionary by using the "unset" keyword.
Here is an example:
<literallayout class='monospaced'>
unset DATE
unset do_fetch[noexec]
</literallayout>
These two statements remove the <filename>DATE</filename> variable and the
<filename>do_fetch[noexec]</filename> flag.
</para>
</section>
<section id='providing-pathnames'>
<title>Providing Pathnames</title>
@@ -455,6 +476,53 @@
</section>
</section>
<section id='exporting-variables-to-the-environment'>
<title>Exporting Variables to the Environment</title>
<para>
You can export variables to the environment of running
tasks by using the <filename>export</filename> keyword.
For example, in the following example, the
<filename>do_foo</filename> task prints "value from
the environment" when run:
<literallayout class='monospaced'>
export ENV_VARIABLE
ENV_VARIABLE = "value from the environment"
do_foo() {
bbplain "$ENV_VARIABLE"
}
</literallayout>
<note>
BitBake does not expand <filename>$ENV_VARIABLE</filename>
in this case because it lacks the obligatory
<filename>{}</filename>.
Rather, <filename>$ENV_VARIABLE</filename> is expanded
by the shell.
</note>
It does not matter whether
<filename>export ENV_VARIABLE</filename> appears before or
after assignments to <filename>ENV_VARIABLE</filename>.
</para>
<para>
It is also possible to combine <filename>export</filename>
with setting a value for the variable.
Here is an example:
<literallayout class='monospaced'>
export ENV_VARIABLE = "<replaceable>variable-value</replaceable>"
</literallayout>
In the output of <filename>bitbake -e</filename>, variables
that are exported to the environment are preceded by "export".
</para>
<para>
Among the variables commonly exported to the environment
are <filename>CC</filename> and <filename>CFLAGS</filename>,
which are picked up by many build systems.
</para>
</section>
<section id='conditional-syntax-overrides'>
<title>Conditional Syntax (Overrides)</title>
@@ -553,6 +621,34 @@
KERNEL_FEATURES_append_qemux86-64=" cfg/sound.scc cfg/paravirt_kvm.scc"
</literallayout>
</para></listitem>
<listitem><para><emphasis>Setting a Variable for a Single Task:</emphasis>
BitBake supports setting a variable just for the
duration of a single task.
Here is an example:
<literallayout class='monospaced'>
FOO_task-configure = "val 1"
FOO_task-compile = "val 2"
</literallayout>
In the previous example, <filename>FOO</filename>
has the value "val 1" while the
<filename>do_configure</filename> task is executed,
and the value "val 2" while the
<filename>do_compile</filename> task is executed.
</para>
<para>Internally, this is implemented by prepending
the task (e.g. "task-compile:") to the value of
<link linkend='var-OVERRIDES'><filename>OVERRIDES</filename></link>
for the local datastore of the <filename>do_compile</filename>
task.</para>
<para>You can also use this syntax with other combinations
(e.g. "<filename>_prepend</filename>") as shown in the
following example:
<literallayout class='monospaced'>
EXTRA_OEMAKE_prepend_task-compile = "${PARALLEL_MAKE} "
</literallayout>
</para></listitem>
</itemizedlist>
</para>
</section>
@@ -1063,32 +1159,81 @@
<title>Anonymous Python Functions</title>
<para>
Sometimes it is useful to run some code during
parsing to set variables or to perform other operations
programmatically.
To do this, you can define an anonymous Python function.
Here is an example that conditionally sets a
variable based on the value of another variable:
<literallayout class='monospaced'>
python __anonymous () {
if d.getVar('SOMEVAR', True) == 'value':
d.setVar('ANOTHERVAR', 'value2')
}
</literallayout>
The "__anonymous" function name is optional, so the
following example is functionally equivalent to the above:
Sometimes it is useful to set variables or perform
other operations programmatically during parsing.
To do this, you can define special Python functions,
called anonymous Python functions, that run at the
end of parsing.
For example, the following conditionally sets a variable
based on the value of another variable:
<literallayout class='monospaced'>
python () {
if d.getVar('SOMEVAR', True) == 'value':
d.setVar('ANOTHERVAR', 'value2')
}
</literallayout>
Because unlike other Python functions anonymous
Python functions are executed during parsing, the
"d" variable within an anonymous Python function represents
the datastore for the entire recipe.
Consequently, you can set variable values here and
those values can be picked up by other functions.
An equivalent way to mark a function as an anonymous
function is to give it the name "__anonymous", rather
than no name.
</para>
<para>
Anonymous Python functions always run at the end
of parsing, regardless of where they are defined.
If a recipe contains many anonymous functions, they
run in the same order as they are defined within the
recipe.
As an example, consider the following snippet:
<literallayout class='monospaced'>
python () {
d.setVar('FOO', 'foo 2')
}
FOO = "foo 1"
python () {
d.appendVar('BAR', ' bar 2')
}
BAR = "bar 1"
</literallayout>
The previous example is conceptually equivalent to the
following snippet:
<literallayout class='monospaced'>
FOO = "foo 1"
BAR = "bar 1"
FOO = "foo 2"
BAR += "bar 2"
</literallayout>
<filename>FOO</filename> ends up with the value "foo 2",
and <filename>BAR</filename> with the value "bar 1 bar 2".
Just as in the second snippet, the values set for the
variables within the anonymous functions become available
to tasks, which always run after parsing.
</para>
<para>
Overrides and override-style operators such as
"<filename>_append</filename>" are applied before
anonymous functions run.
In the following example, <filename>FOO</filename> ends
up with the value "foo from anonymous":
<literallayout class='monospaced'>
FOO = "foo"
FOO_append = " from outside"
python () {
d.setVar("FOO", "foo from anonymous")
}
</literallayout>
For methods you can use with anonymous Python functions,
see the
"<link linkend='accessing-datastore-variables-using-python'>Accessing Datastore Variables Using Python</link>"
section.
For a different method to run Python code during parsing,
see the
"<link linkend='inline-python-variable-expansion'>Inline Python Variable Expansion</link>"
section.
</para>
</section>
@@ -1270,7 +1415,7 @@
<para>
If you want dependencies such as these to remain intact, use
the <filename>noexec</filename> varflag to disable the task
the <filename>[noexec]</filename> varflag to disable the task
instead of using the <filename>deltask</filename> command to
delete it:
<literallayout class='monospaced'>
@@ -1393,10 +1538,13 @@
Tasks support a number of these flags which control various
functionality of the task:
<itemizedlist>
<listitem><para><emphasis>cleandirs:</emphasis>
Empty directories that should created before the task runs.
<listitem><para><emphasis><filename>[cleandirs]</filename>:</emphasis>
Empty directories that should be created before the
task runs.
Directories that already exist are removed and recreated
to empty them.
</para></listitem>
<listitem><para><emphasis>depends:</emphasis>
<listitem><para><emphasis><filename>[depends]</filename>:</emphasis>
Controls inter-task dependencies.
See the
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
@@ -1404,7 +1552,7 @@
"<link linkend='inter-task-dependencies'>Inter-Task Dependencies</link>"
section for more information.
</para></listitem>
<listitem><para><emphasis>deptask:</emphasis>
<listitem><para><emphasis><filename>[deptask]</filename>:</emphasis>
Controls task build-time dependencies.
See the
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
@@ -1412,12 +1560,13 @@
"<link linkend='build-dependencies'>Build Dependencies</link>"
section for more information.
</para></listitem>
<listitem><para><emphasis>dirs:</emphasis>
<listitem><para><emphasis><filename>[dirs]</filename>:</emphasis>
Directories that should be created before the task runs.
The last directory listed will be used as the work directory
for the task.
Directories that already exist are left as is.
The last directory listed is used as the
current working directory for the task.
</para></listitem>
<listitem><para><emphasis>lockfiles:</emphasis>
<listitem><para><emphasis><filename>[lockfiles]</filename>:</emphasis>
Specifies one or more lockfiles to lock while the task
executes.
Only one task may hold a lockfile, and any task that
@@ -1426,23 +1575,23 @@
You can use this variable flag to accomplish mutual
exclusion.
</para></listitem>
<listitem><para><emphasis>noexec:</emphasis>
<listitem><para><emphasis><filename>[noexec]</filename>:</emphasis>
Marks the tasks as being empty and no execution required.
The <filename>noexec</filename> flag can be used to set up
The <filename>[noexec]</filename> flag can be used to set up
tasks as dependency placeholders, or to disable tasks defined
elsewhere that are not needed in a particular recipe.
</para></listitem>
<listitem><para><emphasis>nostamp:</emphasis>
<listitem><para><emphasis><filename>[nostamp]</filename>:</emphasis>
Tells BitBake to not generate a stamp file for a task,
which implies the task should always be executed.
</para></listitem>
<listitem><para><emphasis>postfuncs:</emphasis>
<listitem><para><emphasis><filename>[postfuncs]</filename>:</emphasis>
List of functions to call after the completion of the task.
</para></listitem>
<listitem><para><emphasis>prefuncs:</emphasis>
<listitem><para><emphasis><filename>[prefuncs]</filename>:</emphasis>
List of functions to call before the task executes.
</para></listitem>
<listitem><para><emphasis>rdepends:</emphasis>
<listitem><para><emphasis><filename>[rdepends]</filename>:</emphasis>
Controls inter-task runtime dependencies.
See the
<link linkend='var-RDEPENDS'><filename>RDEPENDS</filename></link>
@@ -1452,7 +1601,7 @@
"<link linkend='inter-task-dependencies'>Inter-Task Dependencies</link>"
section for more information.
</para></listitem>
<listitem><para><emphasis>rdeptask:</emphasis>
<listitem><para><emphasis><filename>[rdeptask]</filename>:</emphasis>
Controls task runtime dependencies.
See the
<link linkend='var-RDEPENDS'><filename>RDEPENDS</filename></link>
@@ -1462,12 +1611,12 @@
"<link linkend='runtime-dependencies'>Runtime Dependencies</link>"
section for more information.
</para></listitem>
<listitem><para><emphasis>recideptask:</emphasis>
<listitem><para><emphasis><filename>[recideptask]</filename>:</emphasis>
When set in conjunction with
<filename>recrdeptask</filename>, specifies a task that
should be inspected for additional dependencies.
</para></listitem>
<listitem><para><emphasis>recrdeptask:</emphasis>
<listitem><para><emphasis><filename>[recrdeptask]</filename>:</emphasis>
Controls task recursive runtime dependencies.
See the
<link linkend='var-RDEPENDS'><filename>RDEPENDS</filename></link>
@@ -1477,12 +1626,12 @@
"<link linkend='recursive-dependencies'>Recursive Dependencies</link>"
section for more information.
</para></listitem>
<listitem><para><emphasis>stamp-extra-info:</emphasis>
<listitem><para><emphasis><filename>[stamp-extra-info]</filename>:</emphasis>
Extra stamp information to append to the task's stamp.
As an example, OpenEmbedded uses this flag to allow
machine-specific tasks.
</para></listitem>
<listitem><para><emphasis>umask:</emphasis>
<listitem><para><emphasis><filename>[umask]</filename>:</emphasis>
The umask to run the task under.
</para></listitem>
</itemizedlist>
@@ -1495,7 +1644,7 @@
"<link linkend='checksums'>Checksums (Signatures)</link>"
section.
<itemizedlist>
<listitem><para><emphasis>vardeps:</emphasis>
<listitem><para><emphasis><filename>[vardeps]</filename>:</emphasis>
Specifies a space-separated list of additional
variables to add to a variable's dependencies
for the purposes of calculating its signature.
@@ -1504,17 +1653,17 @@
does not allow BitBake to automatically determine
that the variable is referred to.
</para></listitem>
<listitem><para><emphasis>vardepsexclude:</emphasis>
<listitem><para><emphasis><filename>[vardepsexclude]</filename>:</emphasis>
Specifies a space-separated list of variables
that should be excluded from a variable's dependencies
for the purposes of calculating its signature.
</para></listitem>
<listitem><para><emphasis>vardepvalue:</emphasis>
<listitem><para><emphasis><filename>[vardepvalue]</filename>:</emphasis>
If set, instructs BitBake to ignore the actual
value of the variable and instead use the specified
value when calculating the variable's signature.
</para></listitem>
<listitem><para><emphasis>vardepvalueexclude:</emphasis>
<listitem><para><emphasis><filename>[vardepvalueexclude]</filename>:</emphasis>
Specifies a pipe-separated list of strings to exclude
from the variable's value when calculating the
variable's signature.
@@ -1763,44 +1912,48 @@
<literallayout class='monospaced'>
addtask printdate after do_fetch before do_build
</literallayout>
In this example, the <filename>printdate</filename> task is
depends on the completion of the <filename>do_fetch</filename>
In this example, the <filename>do_printdate</filename>
task depends on the completion of the
<filename>do_fetch</filename> task, and the
<filename>do_build</filename> task depends on the
completion of the <filename>do_printdate</filename>
task.
And, the <filename>do_build</filename> depends on the completion
of the <filename>printdate</filename> task.
<note>
Recipes are built by having their
<filename>do_build</filename> (not to be confused with
<filename>do_compile</filename>) tasks executed.
For a task to run when a recipe is built, the task must
therefore be a direct or indirect dependency of
<filename>do_build</filename>.
For illustration, here are some examples:
<note><para>
For a task to run, it must be a direct or indirect
dependency of some other task that is scheduled to
run.</para>
<para>For illustration, here are some examples:
<itemizedlist>
<listitem><para>
The directive
<filename>addtask mytask before do_build</filename>
causes <filename>mytask</filename> to run when the
recipe is built.
In this example, <filename>mytask</filename> is run
at an unspecified time relative to other tasks within
the recipe, since <filename>after</filename> is not used.
<filename>addtask mytask before do_configure</filename>
causes <filename>do_mytask</filename> to run before
<filename>do_configure</filename> runs.
Be aware that <filename>do_mytask</filename> still only
runs if its <link linkend='checksums'>input checksum</link>
has changed since the last time it was run.
Changes to the input checksum of
<filename>do_mytask</filename> also indirectly cause
<filename>do_configure</filename> to run.
</para></listitem>
<listitem><para>
The directive
<filename>addtask mytask after do_configure</filename>
by itself does not cause <filename>mytask</filename>
to run when the recipe is built.
The task can still be run manually using the following:
by itself never causes <filename>do_mytask</filename>
to run.
<filename>do_mytask</filename> can still be run manually
as follows:
<literallayout class='monospaced'>
$ bitbake <replaceable>recipe</replaceable> -c mytask
</literallayout>
<filename>mytask</filename> could also be declared as
a dependency of some other task.
Regardless, the task is run after
Declaring <filename>do_mytask</filename> as a dependency
of some other task that is scheduled to run also causes
it to run.
Regardless, the task runs after
<filename>do_configure</filename>.
</para></listitem>
</itemizedlist>
</itemizedlist></para>
</note>
</para>
</section>
@@ -1812,7 +1965,8 @@
BitBake uses the
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
variable to manage build time dependencies.
The "deptask" varflag for tasks signifies the task of each
The <filename>[deptask]</filename> varflag for tasks
signifies the task of each
item listed in <filename>DEPENDS</filename> that must
complete before that task can be executed.
Here is an example:
@@ -1841,7 +1995,8 @@
packages.
Each of those packages can have <filename>RDEPENDS</filename> and
<filename>RRECOMMENDS</filename> runtime dependencies.
The "rdeptask" flag for tasks is used to signify the task of each
The <filename>[rdeptask]</filename> flag for tasks is used to
signify the task of each
item runtime dependency which must have completed before that
task can be executed.
<literallayout class='monospaced'>
@@ -1857,7 +2012,7 @@
<title>Recursive Dependencies</title>
<para>
BitBake uses the "recrdeptask" flag to manage
BitBake uses the <filename>[recrdeptask]</filename> flag to manage
recursive task dependencies.
BitBake looks through the build-time and runtime
dependencies of the current recipe, looks through
@@ -1871,7 +2026,8 @@
</para>
<para>
The "recrdeptask" flag is most commonly used in high-level
The <filename>[recrdeptask]</filename> flag is most commonly
used in high-level
recipes that need to wait for some task to finish "globally".
For example, <filename>image.bbclass</filename> has the following:
<literallayout class='monospaced'>
@@ -1901,7 +2057,8 @@
<title>Inter-Task Dependencies</title>
<para>
BitBake uses the "depends" flag in a more generic form
BitBake uses the <filename>[depends]</filename>
flag in a more generic form
to manage inter-task dependencies.
This more generic form allows for inter-dependency
checks for specific tasks rather than checks for
@@ -1917,7 +2074,8 @@
</para>
<para>
The "rdepends" flag works in a similar way but takes targets
The <filename>[rdepends]</filename> flag works in a similar
way but takes targets
in the runtime namespace instead of the build-time dependency
namespace.
</para>

View File

@@ -21,7 +21,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
__version__ = "1.31.0"
__version__ = "1.31.1"
import sys
if sys.version_info < (3, 4, 0):

View File

@@ -633,7 +633,7 @@ def exec_task(fn, task, d, profile = False):
event.fire(failedevent, d)
return 1
def stamp_internal(taskname, d, file_name, baseonly=False):
def stamp_internal(taskname, d, file_name, baseonly=False, noextra=False):
"""
Internal stamp helper function
Makes sure the stamp directory exists
@@ -656,6 +656,8 @@ def stamp_internal(taskname, d, file_name, baseonly=False):
if baseonly:
return stamp
if noextra:
extrainfo = ""
if not stamp:
return
@@ -751,12 +753,12 @@ def write_taint(task, d, file_name = None):
with open(taintfn, 'w') as taintf:
taintf.write(str(uuid.uuid4()))
def stampfile(taskname, d, file_name = None):
def stampfile(taskname, d, file_name = None, noextra=False):
"""
Return the stamp for a given task
(d can be a data dict or dataCache)
"""
return stamp_internal(taskname, d, file_name)
return stamp_internal(taskname, d, file_name, noextra=noextra)
def add_tasks(tasklist, d):
task_deps = d.getVar('_task_deps', False)

View File

@@ -244,14 +244,136 @@ class CoreRecipeInfo(RecipeInfoCommon):
cachedata.fakerootdirs[fn] = self.fakerootdirs
cachedata.extradepsfunc[fn] = self.extradepsfunc
def virtualfn2realfn(virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
"""
mc = ""
if virtualfn.startswith('multiconfig:'):
elems = virtualfn.split(':')
mc = elems[1]
virtualfn = ":".join(elems[2:])
fn = virtualfn
cls = ""
if virtualfn.startswith('virtual:'):
elems = virtualfn.split(':')
cls = ":".join(elems[1:-1])
fn = elems[-1]
return (fn, cls, mc)
def realfn2virtual(realfn, cls, mc):
"""
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if cls:
realfn = "virtual:" + cls + ":" + realfn
if mc:
realfn = "multiconfig:" + mc + ":" + realfn
return realfn
def variant2virtual(realfn, variant):
"""
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if variant == "":
return realfn
if variant.startswith("multiconfig:"):
elems = variant.split(":")
if elems[2]:
return "multiconfig:" + elems[1] + ":virtual:" + ":".join(elems[2:]) + ":" + realfn
return "multiconfig:" + elems[1] + ":" + realfn
return "virtual:" + variant + ":" + realfn
def parse_recipe(bb_data, bbfile, appends, mc=''):
"""
Parse a recipe
"""
chdir_back = False
bb_data.setVar("__BBMULTICONFIG", mc)
# expand tmpdir to include this topdir
bb_data.setVar('TMPDIR', bb_data.getVar('TMPDIR', True) or "")
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
bb.parse.cached_mtime_noerror(bbfile_loc)
# The ConfHandler first looks if there is a TOPDIR and if not
# then it would call getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not bb_data.getVar('TOPDIR', False):
chdir_back = True
bb_data.setVar('TOPDIR', bbfile_loc)
try:
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
bb_data = bb.parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)
return bb_data
except:
if chdir_back:
os.chdir(oldpath)
raise
class Cache(object):
class NoCache(object):
def __init__(self, databuilder):
self.databuilder = databuilder
self.data = databuilder.data
def loadDataFull(self, virtualfn, appends):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
logger.debug(1, "Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = virtualfn2realfn(virtualfn)
bb_data = self.load_bbfile(virtualfn, appends, virtonly=True)
return bb_data[virtual]
def load_bbfile(self, bbfile, appends, virtonly = False):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
if virtonly:
(bbfile, virtual, mc) = virtualfn2realfn(bbfile)
bb_data = self.databuilder.mcdata[mc].createCopy()
bb_data.setVar("__ONLYFINALISE", virtual or "default")
datastores = parse_recipe(bb_data, bbfile, appends, mc)
return datastores
bb_data = self.data.createCopy()
datastores = parse_recipe(bb_data, bbfile, appends)
for mc in self.databuilder.mcdata:
if not mc:
continue
bb_data = self.databuilder.mcdata[mc].createCopy()
newstores = parse_recipe(bb_data, bbfile, appends, mc)
for ns in newstores:
datastores["multiconfig:%s:%s" % (mc, ns)] = newstores[ns]
return datastores
class Cache(NoCache):
"""
BitBake Cache implementation
"""
def __init__(self, data, data_hash, caches_array):
def __init__(self, databuilder, data_hash, caches_array):
super().__init__(databuilder)
data = databuilder.data
# Pass caches_array information into Cache Constructor
# It will be used later for deciding whether we
# need extra cache file dump/load support
@@ -260,7 +382,6 @@ class Cache(object):
self.clean = set()
self.checked = set()
self.depends_cache = {}
self.data = None
self.data_fn = None
self.cacheclean = True
self.data_hash = data_hash
@@ -355,69 +476,33 @@ class Cache(object):
len(self.depends_cache)),
self.data)
@staticmethod
def virtualfn2realfn(virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
"""
fn = virtualfn
cls = ""
if virtualfn.startswith('virtual:'):
elems = virtualfn.split(':')
cls = ":".join(elems[1:-1])
fn = elems[-1]
return (fn, cls)
@staticmethod
def realfn2virtual(realfn, cls):
"""
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if cls == "":
return realfn
return "virtual:" + cls + ":" + realfn
@classmethod
def loadDataFull(cls, virtualfn, appends, cfgData):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
(fn, virtual) = cls.virtualfn2realfn(virtualfn)
logger.debug(1, "Parsing %s (full)", fn)
cfgData.setVar("__ONLYFINALISE", virtual or "default")
bb_data = cls.load_bbfile(fn, appends, cfgData)
return bb_data[virtual]
@classmethod
def parse(cls, filename, appends, configdata, caches_array):
def parse(self, filename, appends):
"""Parse the specified filename, returning the recipe information"""
logger.debug(1, "Parsing %s", filename)
infos = []
datastores = cls.load_bbfile(filename, appends, configdata)
datastores = self.load_bbfile(filename, appends)
depends = []
variants = []
# Process the "real" fn last so we can store variants list
for variant, data in sorted(datastores.items(),
key=lambda i: i[0],
reverse=True):
virtualfn = cls.realfn2virtual(filename, variant)
virtualfn = variant2virtual(filename, variant)
variants.append(variant)
depends = depends + (data.getVar("__depends", False) or [])
if depends and not variant:
data.setVar("__depends", depends)
if virtualfn == filename:
data.setVar("__VARIANTS", " ".join(variants))
info_array = []
for cache_class in caches_array:
for cache_class in self.caches_array:
info = cache_class(filename, data)
info_array.append(info)
infos.append((virtualfn, info_array))
return infos
def load(self, filename, appends, configdata):
def load(self, filename, appends):
"""Obtain the recipe information for the specified filename,
using cached values if available, otherwise parsing.
@@ -431,20 +516,20 @@ class Cache(object):
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
info_array = self.depends_cache[filename]
for variant in info_array[0].variants:
virtualfn = self.realfn2virtual(filename, variant)
virtualfn = variant2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
else:
return self.parse(filename, appends, configdata, self.caches_array)
return cached, infos
def loadData(self, fn, appends, cfgData, cacheData):
def loadData(self, fn, appends, cacheData):
"""Load the recipe info for the specified filename,
parsing and adding to the cache if necessary, and adding
the recipe information to the supplied CacheData instance."""
skipped, virtuals = 0, 0
cached, infos = self.load(fn, appends, cfgData)
cached, infos = self.load(fn, appends)
for virtualfn, info_array in infos:
if info_array[0].skipped:
logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
@@ -552,7 +637,7 @@ class Cache(object):
invalid = False
for cls in info_array[0].variants:
virtualfn = self.realfn2virtual(fn, cls)
virtualfn = variant2virtual(fn, cls)
self.clean.add(virtualfn)
if virtualfn not in self.depends_cache:
logger.debug(2, "Cache: %s is not cached", virtualfn)
@@ -564,7 +649,7 @@ class Cache(object):
# If any one of the variants is not present, mark as invalid for all
if invalid:
for cls in info_array[0].variants:
virtualfn = self.realfn2virtual(fn, cls)
virtualfn = variant2virtual(fn, cls)
if virtualfn in self.clean:
logger.debug(2, "Cache: Removing %s from cache", virtualfn)
self.clean.remove(virtualfn)
@@ -641,49 +726,13 @@ class Cache(object):
Save data we need into the cache
"""
realfn = self.virtualfn2realfn(file_name)[0]
realfn = virtualfn2realfn(file_name)[0]
info_array = []
for cache_class in self.caches_array:
info_array.append(cache_class(realfn, data))
self.add_info(file_name, info_array, cacheData, parsed)
@staticmethod
def load_bbfile(bbfile, appends, config):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
chdir_back = False
from bb import parse
# expand tmpdir to include this topdir
config.setVar('TMPDIR', config.getVar('TMPDIR', True) or "")
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
parse.cached_mtime_noerror(bbfile_loc)
bb_data = config.createCopy()
# The ConfHandler first looks if there is a TOPDIR and if not
# then it would call getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not bb_data.getVar('TOPDIR', False):
chdir_back = True
bb_data.setVar('TOPDIR', bbfile_loc)
try:
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
bb_data = parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)
return bb_data
except:
if chdir_back:
os.chdir(oldpath)
raise
def init(cooker):
"""

View File

@@ -30,7 +30,7 @@ import logging
import multiprocessing
import sre_constants
import threading
from io import StringIO
from io import StringIO, UnsupportedOperation
from contextlib import closing
from functools import wraps
from collections import defaultdict, namedtuple
@@ -141,7 +141,7 @@ class EventWriter:
else:
# init on bb.event.BuildStarted
name = "%s.%s" % (event.__module__, event.__class__.__name__)
if name == "bb.event.BuildStarted":
if name in ("bb.event.BuildStarted", "bb.cooker.CookerExit"):
with open(self.eventfile, "w") as f:
f.write("%s\n" % json.dumps({ "allvariables" : self.cooker.getAllKeysWithFlags(["doc", "func"])}))
@@ -166,7 +166,7 @@ class BBCooker:
"""
def __init__(self, configuration, featureSet=None):
self.recipecache = None
self.recipecaches = None
self.skiplist = {}
self.featureset = CookerFeatures()
if featureSet:
@@ -230,14 +230,17 @@ class BBCooker:
pass
# TOSTOP must not be set or our children will hang when they output
fd = sys.stdout.fileno()
if os.isatty(fd):
import termios
tcattr = termios.tcgetattr(fd)
if tcattr[3] & termios.TOSTOP:
buildlog.info("The terminal had the TOSTOP bit set, clearing...")
tcattr[3] = tcattr[3] & ~termios.TOSTOP
termios.tcsetattr(fd, termios.TCSANOW, tcattr)
try:
fd = sys.stdout.fileno()
if os.isatty(fd):
import termios
tcattr = termios.tcgetattr(fd)
if tcattr[3] & termios.TOSTOP:
buildlog.info("The terminal had the TOSTOP bit set, clearing...")
tcattr[3] = tcattr[3] & ~termios.TOSTOP
termios.tcsetattr(fd, termios.TCSANOW, tcattr)
except UnsupportedOperation:
pass
self.command = bb.command.Command(self)
self.state = state.initial
@@ -521,11 +524,14 @@ class BBCooker:
nice = int(nice) - curnice
buildlog.verbose("Renice to %s " % os.nice(nice))
if self.recipecache:
del self.recipecache
self.recipecache = bb.cache.CacheData(self.caches_array)
if self.recipecaches:
del self.recipecaches
self.multiconfigs = self.databuilder.mcdata.keys()
self.recipecaches = {}
for mc in self.multiconfigs:
self.recipecaches[mc] = bb.cache.CacheData(self.caches_array)
self.handleCollections( self.data.getVar("BBFILE_COLLECTIONS", True) )
self.handleCollections(self.data.getVar("BBFILE_COLLECTIONS", True))
def updateConfigOpts(self, options, environment):
clean = True
@@ -569,8 +575,8 @@ class BBCooker:
def showVersions(self):
pkg_pn = self.recipecache.pkg_pn
(latest_versions, preferred_versions) = bb.providers.findProviders(self.data, self.recipecache, pkg_pn)
pkg_pn = self.recipecaches[''].pkg_pn
(latest_versions, preferred_versions) = bb.providers.findProviders(self.data, self.recipecaches[''], pkg_pn)
logger.plain("%-35s %25s %25s", "Recipe Name", "Latest Version", "Preferred Version")
logger.plain("%-35s %25s %25s\n", "===========", "==============", "=================")
@@ -601,23 +607,25 @@ class BBCooker:
# this showEnvironment() code path doesn't use the cache
self.parseConfiguration()
fn, cls = bb.cache.Cache.virtualfn2realfn(buildfile)
fn, cls, mc = bb.cache.virtualfn2realfn(buildfile)
fn = self.matchFile(fn)
fn = bb.cache.Cache.realfn2virtual(fn, cls)
fn = bb.cache.realfn2virtual(fn, cls, mc)
elif len(pkgs_to_build) == 1:
ignore = self.expanded_data.getVar("ASSUME_PROVIDED", True) or ""
if pkgs_to_build[0] in set(ignore.split()):
bb.fatal("%s is in ASSUME_PROVIDED" % pkgs_to_build[0])
taskdata, runlist, pkgs_to_build = self.buildTaskData(pkgs_to_build, None, self.configuration.abort, allowincomplete=True)
taskdata, runlist = self.buildTaskData(pkgs_to_build, None, self.configuration.abort, allowincomplete=True)
fn = taskdata.build_targets[pkgs_to_build[0]][0]
mc = runlist[0][0]
fn = runlist[0][3]
else:
envdata = self.data
if fn:
try:
envdata = bb.cache.Cache.loadDataFull(fn, self.collection.get_file_appends(fn), self.data)
bb_cache = bb.cache.Cache(self.databuilder, self.data_hash, self.caches_array)
envdata = bb_cache.loadDataFull(fn, self.collection.get_file_appends(fn))
except Exception as e:
parselog.exception("Unable to read %s", fn)
raise
@@ -651,29 +659,43 @@ class BBCooker:
task = self.configuration.cmd
fulltargetlist = self.checkPackages(pkgs_to_build)
taskdata = {}
localdata = {}
localdata = data.createCopy(self.data)
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
taskdata = bb.taskdata.TaskData(abort, skiplist=self.skiplist, allowincomplete=allowincomplete)
for mc in self.multiconfigs:
taskdata[mc] = bb.taskdata.TaskData(abort, skiplist=self.skiplist, allowincomplete=allowincomplete)
localdata[mc] = data.createCopy(self.databuilder.mcdata[mc])
bb.data.update_data(localdata[mc])
bb.data.expandKeys(localdata[mc])
current = 0
runlist = []
for k in fulltargetlist:
mc = ""
if k.startswith("multiconfig:"):
mc = k.split(":")[1]
k = ":".join(k.split(":")[2:])
ktask = task
if ":do_" in k:
k2 = k.split(":do_")
k = k2[0]
ktask = k2[1]
taskdata.add_provider(localdata, self.recipecache, k)
taskdata[mc].add_provider(localdata[mc], self.recipecaches[mc], k)
current += 1
if not ktask.startswith("do_"):
ktask = "do_%s" % ktask
runlist.append([k, ktask])
if k not in taskdata[mc].build_targets or not taskdata[mc].build_targets[k]:
# e.g. in ASSUME_PROVIDED
continue
fn = taskdata[mc].build_targets[k][0]
runlist.append([mc, k, ktask, fn])
bb.event.fire(bb.event.TreeDataPreparationProgress(current, len(fulltargetlist)), self.data)
taskdata.add_unresolved(localdata, self.recipecache)
for mc in self.multiconfigs:
taskdata[mc].add_unresolved(localdata[mc], self.recipecaches[mc])
bb.event.fire(bb.event.TreeDataPreparationCompleted(len(fulltargetlist)), self.data)
return taskdata, runlist, fulltargetlist
return taskdata, runlist
def prepareTreeData(self, pkgs_to_build, task):
"""
@@ -682,7 +704,7 @@ class BBCooker:
# We set abort to False here to prevent unbuildable targets raising
# an exception when we're just generating data
taskdata, runlist, pkgs_to_build = self.buildTaskData(pkgs_to_build, task, False, allowincomplete=True)
taskdata, runlist = self.buildTaskData(pkgs_to_build, task, False, allowincomplete=True)
return runlist, taskdata
@@ -694,10 +716,15 @@ class BBCooker:
information.
"""
runlist, taskdata = self.prepareTreeData(pkgs_to_build, task)
rq = bb.runqueue.RunQueue(self, self.data, self.recipecache, taskdata, runlist)
rq = bb.runqueue.RunQueue(self, self.data, self.recipecaches, taskdata, runlist)
rq.rqdata.prepare()
return self.buildDependTree(rq, taskdata)
@staticmethod
def add_mc_prefix(mc, pn):
if mc:
return "multiconfig:%s.%s" % (mc, pn)
return pn
def buildDependTree(self, rq, taskdata):
seen_fns = []
@@ -710,24 +737,27 @@ class BBCooker:
depend_tree["rdepends-pkg"] = {}
depend_tree["rrecs-pkg"] = {}
depend_tree['providermap'] = {}
depend_tree["layer-priorities"] = self.recipecache.bbfile_config_priorities
depend_tree["layer-priorities"] = self.bbfile_config_priorities
for name, fn in list(taskdata.get_providermap().items()):
pn = self.recipecache.pkg_fn[fn]
if name != pn:
version = "%s:%s-%s" % self.recipecache.pkg_pepvpr[fn]
depend_tree['providermap'][name] = (pn, version)
for mc in taskdata:
for name, fn in list(taskdata[mc].get_providermap().items()):
pn = self.recipecaches[mc].pkg_fn[fn]
pn = self.add_mc_prefix(mc, pn)
if name != pn:
version = "%s:%s-%s" % self.recipecaches[mc].pkg_pepvpr[fn]
depend_tree['providermap'][name] = (pn, version)
for tid in rq.rqdata.runtaskentries:
taskname = bb.runqueue.taskname_from_tid(tid)
fn = bb.runqueue.fn_from_tid(tid)
pn = self.recipecache.pkg_fn[fn]
version = "%s:%s-%s" % self.recipecache.pkg_pepvpr[fn]
(mc, fn, taskname) = bb.runqueue.split_tid(tid)
taskfn = bb.runqueue.taskfn_fromtid(tid)
pn = self.recipecaches[mc].pkg_fn[taskfn]
pn = self.add_mc_prefix(mc, pn)
version = "%s:%s-%s" % self.recipecaches[mc].pkg_pepvpr[taskfn]
if pn not in depend_tree["pn"]:
depend_tree["pn"][pn] = {}
depend_tree["pn"][pn]["filename"] = fn
depend_tree["pn"][pn]["filename"] = taskfn
depend_tree["pn"][pn]["version"] = version
depend_tree["pn"][pn]["inherits"] = self.recipecache.inherits.get(fn, None)
depend_tree["pn"][pn]["inherits"] = self.recipecaches[mc].inherits.get(taskfn, None)
# if we have extra caches, list all attributes they bring in
extra_info = []
@@ -738,36 +768,37 @@ class BBCooker:
# for all attributes stored, add them to the dependency tree
for ei in extra_info:
depend_tree["pn"][pn][ei] = vars(self.recipecache)[ei][fn]
depend_tree["pn"][pn][ei] = vars(self.recipecaches[mc])[ei][taskfn]
for dep in rq.rqdata.runtaskentries[tid].depends:
depfn = bb.runqueue.fn_from_tid(dep)
deppn = self.recipecache.pkg_fn[depfn]
(depmc, depfn, deptaskname) = bb.runqueue.split_tid(dep)
deptaskfn = bb.runqueue.taskfn_fromtid(dep)
deppn = self.recipecaches[mc].pkg_fn[deptaskfn]
dotname = "%s.%s" % (pn, bb.runqueue.taskname_from_tid(tid))
if not dotname in depend_tree["tdepends"]:
depend_tree["tdepends"][dotname] = []
depend_tree["tdepends"][dotname].append("%s.%s" % (deppn, bb.runqueue.taskname_from_tid(dep)))
if fn not in seen_fns:
seen_fns.append(fn)
if taskfn not in seen_fns:
seen_fns.append(taskfn)
packages = []
depend_tree["depends"][pn] = []
for dep in taskdata.depids[fn]:
for dep in taskdata[mc].depids[taskfn]:
depend_tree["depends"][pn].append(dep)
depend_tree["rdepends-pn"][pn] = []
for rdep in taskdata.rdepids[fn]:
for rdep in taskdata[mc].rdepids[taskfn]:
depend_tree["rdepends-pn"][pn].append(rdep)
rdepends = self.recipecache.rundeps[fn]
rdepends = self.recipecaches[mc].rundeps[taskfn]
for package in rdepends:
depend_tree["rdepends-pkg"][package] = []
for rdepend in rdepends[package]:
depend_tree["rdepends-pkg"][package].append(rdepend)
packages.append(package)
rrecs = self.recipecache.runrecs[fn]
rrecs = self.recipecaches[mc].runrecs[taskfn]
for package in rrecs:
depend_tree["rrecs-pkg"][package] = []
for rdepend in rrecs[package]:
@@ -779,7 +810,7 @@ class BBCooker:
if package not in depend_tree["packages"]:
depend_tree["packages"][package] = {}
depend_tree["packages"][package]["pn"] = pn
depend_tree["packages"][package]["filename"] = fn
depend_tree["packages"][package]["filename"] = taskfn
depend_tree["packages"][package]["version"] = version
return depend_tree
@@ -806,44 +837,54 @@ class BBCooker:
cachefields = getattr(cache_class, 'cachefields', [])
extra_info = extra_info + cachefields
for tid in taskdata.taskentries:
fn = bb.runqueue.fn_from_tid(tid)
pn = self.recipecache.pkg_fn[fn]
tids = []
for mc in taskdata:
for tid in taskdata[mc].taskentries:
tids.append(tid)
for tid in tids:
(mc, fn, taskname) = bb.runqueue.split_tid(tid)
taskfn = bb.runqueue.taskfn_fromtid(tid)
pn = self.recipecaches[mc].pkg_fn[taskfn]
pn = self.add_mc_prefix(mc, pn)
if pn not in depend_tree["pn"]:
depend_tree["pn"][pn] = {}
depend_tree["pn"][pn]["filename"] = fn
version = "%s:%s-%s" % self.recipecache.pkg_pepvpr[fn]
depend_tree["pn"][pn]["filename"] = taskfn
version = "%s:%s-%s" % self.recipecaches[mc].pkg_pepvpr[taskfn]
depend_tree["pn"][pn]["version"] = version
rdepends = self.recipecache.rundeps[fn]
rrecs = self.recipecache.runrecs[fn]
depend_tree["pn"][pn]["inherits"] = self.recipecache.inherits.get(fn, None)
rdepends = self.recipecaches[mc].rundeps[taskfn]
rrecs = self.recipecaches[mc].runrecs[taskfn]
depend_tree["pn"][pn]["inherits"] = self.recipecaches[mc].inherits.get(taskfn, None)
# for all extra attributes stored, add them to the dependency tree
for ei in extra_info:
depend_tree["pn"][pn][ei] = vars(self.recipecache)[ei][fn]
depend_tree["pn"][pn][ei] = vars(self.recipecaches[mc])[ei][taskfn]
if fn not in seen_fns:
seen_fns.append(fn)
if taskfn not in seen_fns:
seen_fns.append(taskfn)
depend_tree["depends"][pn] = []
for item in taskdata.depids[fn]:
for item in taskdata[mc].depids[taskfn]:
pn_provider = ""
if dep in taskdata.build_targets and taskdata.build_targets[dep]:
fn_provider = taskdata.build_targets[dep][0]
pn_provider = self.recipecache.pkg_fn[fn_provider]
if dep in taskdata[mc].build_targets and taskdata[mc].build_targets[dep]:
fn_provider = taskdata[mc].build_targets[dep][0]
pn_provider = self.recipecaches[mc].pkg_fn[fn_provider]
else:
pn_provider = item
pn_provider = self.add_mc_prefix(mc, pn_provider)
depend_tree["depends"][pn].append(pn_provider)
depend_tree["rdepends-pn"][pn] = []
for rdep in taskdata.rdepids[fn]:
for rdep in taskdata[mc].rdepids[taskfn]:
pn_rprovider = ""
if rdep in taskdata.run_targets and taskdata.run_targets[rdep]:
fn_rprovider = taskdata.run_targets[rdep][0]
pn_rprovider = self.recipecache.pkg_fn[fn_rprovider]
if rdep in taskdata[mc].run_targets and taskdata[mc].run_targets[rdep]:
fn_rprovider = taskdata[mc].run_targets[rdep][0]
pn_rprovider = self.recipecaches[mc].pkg_fn[fn_rprovider]
else:
pn_rprovider = rdep
pn_rprovider = self.add_mc_prefix(mc, pn_rprovider)
depend_tree["rdepends-pn"][pn].append(pn_rprovider)
depend_tree["rdepends-pkg"].update(rdepends)
@@ -927,7 +968,7 @@ class BBCooker:
# Determine which bbappends haven't been applied
# First get list of recipes, including skipped
recipefns = list(self.recipecache.pkg_fn.keys())
recipefns = list(self.recipecaches[''].pkg_fn.keys())
recipefns.extend(self.skiplist.keys())
# Work out list of bbappends that have been applied
@@ -951,20 +992,21 @@ class BBCooker:
def handlePrefProviders(self):
localdata = data.createCopy(self.data)
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
for mc in self.multiconfigs:
localdata = data.createCopy(self.databuilder.mcdata[mc])
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
# Handle PREFERRED_PROVIDERS
for p in (localdata.getVar('PREFERRED_PROVIDERS', True) or "").split():
try:
(providee, provider) = p.split(':')
except:
providerlog.critical("Malformed option in PREFERRED_PROVIDERS variable: %s" % p)
continue
if providee in self.recipecache.preferred and self.recipecache.preferred[providee] != provider:
providerlog.error("conflicting preferences for %s: both %s and %s specified", providee, provider, self.recipecache.preferred[providee])
self.recipecache.preferred[providee] = provider
# Handle PREFERRED_PROVIDERS
for p in (localdata.getVar('PREFERRED_PROVIDERS', True) or "").split():
try:
(providee, provider) = p.split(':')
except:
providerlog.critical("Malformed option in PREFERRED_PROVIDERS variable: %s" % p)
continue
if providee in self.recipecaches[mc].preferred and self.recipecaches[mc].preferred[providee] != provider:
providerlog.error("conflicting preferences for %s: both %s and %s specified", providee, provider, self.recipecaches[mc].preferred[providee])
self.recipecaches[mc].preferred[providee] = provider
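As a reminder of the variable format this loop consumes, here is a hedged sketch of the same providee:provider split outside the cooker; the variable value is purely illustrative.

# Illustrative value only; real configurations set PREFERRED_PROVIDERS in *.conf files.
preferred_providers = "virtual/kernel:linux-yocto virtual/libc:glibc"

preferred = {}
for p in preferred_providers.split():
    try:
        providee, provider = p.split(':')
    except ValueError:
        # Mirrors the "Malformed option" error path in handlePrefProviders above.
        continue
    preferred[providee] = provider

print(preferred)  # {'virtual/kernel': 'linux-yocto', 'virtual/libc': 'glibc'}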
def findCoreBaseFiles(self, subdir, configfile):
corebase = self.data.getVar('COREBASE', True) or ""
@@ -1059,10 +1101,10 @@ class BBCooker:
"""
pkg_list = []
for pfn in self.recipecache.pkg_fn:
inherits = self.recipecache.inherits.get(pfn, None)
for pfn in self.recipecaches[''].pkg_fn:
inherits = self.recipecaches[''].inherits.get(pfn, None)
if inherits and klass in inherits:
pkg_list.append(self.recipecache.pkg_fn[pfn])
pkg_list.append(self.recipecaches[''].pkg_fn[pfn])
return pkg_list
@@ -1095,10 +1137,10 @@ class BBCooker:
shell.start( self )
def handleCollections( self, collections ):
def handleCollections(self, collections):
"""Handle collections"""
errors = False
self.recipecache.bbfile_config_priorities = []
self.bbfile_config_priorities = []
if collections:
collection_priorities = {}
collection_depends = {}
@@ -1176,7 +1218,7 @@ class BBCooker:
parselog.error("BBFILE_PATTERN_%s \"%s\" is not a valid regular expression", c, regex)
errors = True
continue
self.recipecache.bbfile_config_priorities.append((c, regex, cre, collection_priorities[c]))
self.bbfile_config_priorities.append((c, regex, cre, collection_priorities[c]))
if errors:
# We've already printed the actual error(s)
raise CollectionError("Errors during parsing layer configuration")
@@ -1199,7 +1241,7 @@ class BBCooker:
if bf.startswith("/") or bf.startswith("../"):
bf = os.path.abspath(bf)
self.collection = CookerCollectFiles(self.recipecache.bbfile_config_priorities)
self.collection = CookerCollectFiles(self.bbfile_config_priorities)
filelist, masked = self.collection.collect_bbfiles(self.data, self.expanded_data)
try:
os.stat(bf)
@@ -1249,17 +1291,17 @@ class BBCooker:
if (task == None):
task = self.configuration.cmd
fn, cls = bb.cache.Cache.virtualfn2realfn(buildfile)
fn, cls, mc = bb.cache.virtualfn2realfn(buildfile)
fn = self.matchFile(fn)
self.buildSetVars()
infos = bb.cache.Cache.parse(fn, self.collection.get_file_appends(fn), \
self.data,
self.caches_array)
bb_cache = bb.cache.Cache(self.databuilder, self.data_hash, self.caches_array)
infos = bb_cache.parse(fn, self.collection.get_file_appends(fn))
infos = dict(infos)
fn = bb.cache.Cache.realfn2virtual(fn, cls)
fn = bb.cache.realfn2virtual(fn, cls, mc)
try:
info_array = infos[fn]
except KeyError:
@@ -1268,29 +1310,30 @@ class BBCooker:
if info_array[0].skipped:
bb.fatal("%s was skipped: %s" % (fn, info_array[0].skipreason))
self.recipecache.add_from_recipeinfo(fn, info_array)
self.recipecaches[mc].add_from_recipeinfo(fn, info_array)
# Tweak some variables
item = info_array[0].pn
self.recipecache.ignored_dependencies = set()
self.recipecache.bbfile_priority[fn] = 1
self.recipecaches[mc].ignored_dependencies = set()
self.recipecaches[mc].bbfile_priority[fn] = 1
# Remove external dependencies
self.recipecache.task_deps[fn]['depends'] = {}
self.recipecache.deps[fn] = []
self.recipecache.rundeps[fn] = []
self.recipecache.runrecs[fn] = []
self.recipecaches[mc].task_deps[fn]['depends'] = {}
self.recipecaches[mc].deps[fn] = []
self.recipecaches[mc].rundeps[fn] = []
self.recipecaches[mc].runrecs[fn] = []
# Invalidate task for target if force mode active
if self.configuration.force:
logger.verbose("Invalidate task %s, %s", task, fn)
if not task.startswith("do_"):
task = "do_%s" % task
bb.parse.siggen.invalidate_task(task, self.recipecache, fn)
bb.parse.siggen.invalidate_task(task, self.recipecaches[mc], fn)
# Setup taskdata structure
taskdata = bb.taskdata.TaskData(self.configuration.abort)
taskdata.add_provider(self.data, self.recipecache, item)
taskdata = {}
taskdata[mc] = bb.taskdata.TaskData(self.configuration.abort)
taskdata[mc].add_provider(self.data, self.recipecaches[mc], item)
buildname = self.data.getVar("BUILDNAME", True)
bb.event.fire(bb.event.BuildStarted(buildname, [item]), self.expanded_data)
@@ -1298,9 +1341,9 @@ class BBCooker:
# Execute the runqueue
if not task.startswith("do_"):
task = "do_%s" % task
runlist = [[item, task]]
runlist = [[mc, item, task, fn]]
rq = bb.runqueue.RunQueue(self, self.data, self.recipecache, taskdata, runlist)
rq = bb.runqueue.RunQueue(self, self.data, self.recipecaches, taskdata, runlist)
def buildFileIdle(server, rq, abort):
@@ -1381,23 +1424,20 @@ class BBCooker:
packages = ["%s:%s" % (target, task) for target in targets]
bb.event.fire(bb.event.BuildInit(packages), self.expanded_data)
taskdata, runlist, fulltargetlist = self.buildTaskData(targets, task, self.configuration.abort)
taskdata, runlist = self.buildTaskData(targets, task, self.configuration.abort)
buildname = self.data.getVar("BUILDNAME", False)
# make targets to always look as <target>:do_<task>
ntargets = []
for target in fulltargetlist:
if ":" in target:
if ":do_" not in target:
target = "%s:do_%s" % tuple(target.split(":", 1))
else:
target = "%s:%s" % (target, task)
ntargets.append(target)
for target in runlist:
if target[0]:
ntargets.append("multiconfig:%s:%s:%s" % (target[0], target[1], target[2]))
ntargets.append("%s:%s" % (target[1], target[2]))
bb.event.fire(bb.event.BuildStarted(buildname, ntargets), self.data)
rq = bb.runqueue.RunQueue(self, self.data, self.recipecache, taskdata, runlist)
rq = bb.runqueue.RunQueue(self, self.data, self.recipecaches, taskdata, runlist)
if 'universe' in targets:
rq.rqdata.warn_multi_bb = True
@@ -1512,13 +1552,14 @@ class BBCooker:
if CookerFeatures.SEND_SANITYEVENTS in self.featureset:
bb.event.fire(bb.event.SanityCheck(False), self.data)
ignore = self.expanded_data.getVar("ASSUME_PROVIDED", True) or ""
self.recipecache.ignored_dependencies = set(ignore.split())
for mc in self.multiconfigs:
ignore = self.databuilder.mcdata[mc].getVar("ASSUME_PROVIDED", True) or ""
self.recipecaches[mc].ignored_dependencies = set(ignore.split())
for dep in self.configuration.extra_assume_provided:
self.recipecache.ignored_dependencies.add(dep)
for dep in self.configuration.extra_assume_provided:
self.recipecaches[mc].ignored_dependencies.add(dep)
self.collection = CookerCollectFiles(self.recipecache.bbfile_config_priorities)
self.collection = CookerCollectFiles(self.bbfile_config_priorities)
(filelist, masked) = self.collection.collect_bbfiles(self.data, self.expanded_data)
self.parser = CookerParser(self, filelist, masked)
@@ -1532,13 +1573,15 @@ class BBCooker:
raise bb.BBHandledException()
self.show_appends_with_no_recipes()
self.handlePrefProviders()
self.recipecache.bbfile_priority = self.collection.collection_priorities(self.recipecache.pkg_fn, self.data)
for mc in self.multiconfigs:
self.recipecaches[mc].bbfile_priority = self.collection.collection_priorities(self.recipecaches[mc].pkg_fn, self.data)
self.state = state.running
# Send an event listing all stamps reachable after parsing
# which the metadata may use to clean up stale data
event = bb.event.ReachableStamps(self.recipecache.stamp)
bb.event.fire(event, self.expanded_data)
for mc in self.multiconfigs:
event = bb.event.ReachableStamps(self.recipecaches[mc].stamp)
bb.event.fire(event, self.databuilder.mcdata[mc])
return None
return True
@@ -1557,23 +1600,26 @@ class BBCooker:
parselog.warning("Explicit target \"%s\" is in ASSUME_PROVIDED, ignoring" % pkg)
if 'world' in pkgs_to_build:
bb.providers.buildWorldTargetList(self.recipecache)
pkgs_to_build.remove('world')
for t in self.recipecache.world_target:
pkgs_to_build.append(t)
for mc in self.multiconfigs:
bb.providers.buildWorldTargetList(self.recipecaches[mc])
for t in self.recipecaches[mc].world_target:
if mc:
t = "multiconfig:" + mc + ":" + t
pkgs_to_build.append(t)
if 'universe' in pkgs_to_build:
parselog.warning("The \"universe\" target is only intended for testing and may produce errors.")
parselog.debug(1, "collating packages for \"universe\"")
pkgs_to_build.remove('universe')
for t in self.recipecache.universe_target:
pkgs_to_build.append(t)
for mc in self.multiconfigs:
for t in self.recipecaches[mc].universe_target:
if mc:
t = "multiconfig:" + mc + ":" + t
pkgs_to_build.append(t)
return pkgs_to_build
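A hedged sketch of the target-name expansion performed above for multiconfig world builds; the per-configuration target sets are invented for illustration.

# Invented stand-in for self.recipecaches[mc].world_target per configuration.
world_targets = {
    "":    {"core-image-minimal"},
    "arm": {"busybox"},
}

pkgs_to_build = []
for mc, targets in world_targets.items():
    for t in sorted(targets):
        if mc:
            t = "multiconfig:" + mc + ":" + t
        pkgs_to_build.append(t)

print(pkgs_to_build)  # ['core-image-minimal', 'multiconfig:arm:busybox']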
def pre_serve(self):
# Empty the environment. The environment will be populated as
# necessary from the data store.
@@ -1822,7 +1868,7 @@ class CookerCollectFiles(object):
# Calculate priorities for each file
matched = set()
for p in pkgfns:
realfn, cls = bb.cache.Cache.virtualfn2realfn(p)
realfn, cls, mc = bb.cache.virtualfn2realfn(p)
priorities[p] = self.calc_bbfile_priority(realfn, matched)
# Don't show the warning if the BBFILE_PATTERN did match .bbappend files
@@ -1943,7 +1989,7 @@ class Parser(multiprocessing.Process):
except queue.Full:
pending.append(result)
def parse(self, filename, appends, caches_array):
def parse(self, filename, appends):
try:
# Record the filename we're parsing into any events generated
def parse_filter(self, record):
@@ -1956,7 +2002,7 @@ class Parser(multiprocessing.Process):
bb.event.set_class_handlers(self.handlers.copy())
bb.event.LogHandler.filter = parse_filter
return True, bb.cache.Cache.parse(filename, appends, self.cfg, caches_array)
return True, self.bb_cache.parse(filename, appends)
except Exception as exc:
tb = sys.exc_info()[2]
exc.recipe = filename
@@ -1974,6 +2020,7 @@ class CookerParser(object):
self.cooker = cooker
self.cfgdata = cooker.data
self.cfghash = cooker.data_hash
self.cfgbuilder = cooker.databuilder
# Accounting statistics
self.parsed = 0
@@ -1988,13 +2035,13 @@ class CookerParser(object):
self.current = 0
self.process_names = []
self.bb_cache = bb.cache.Cache(self.cfgdata, self.cfghash, cooker.caches_array)
self.bb_cache = bb.cache.Cache(self.cfgbuilder, self.cfghash, cooker.caches_array)
self.fromcache = []
self.willparse = []
for filename in self.filelist:
appends = self.cooker.collection.get_file_appends(filename)
if not self.bb_cache.cacheValid(filename, appends):
self.willparse.append((filename, appends, cooker.caches_array))
self.willparse.append((filename, appends))
else:
self.fromcache.append((filename, appends))
self.toparse = self.total - len(self.fromcache)
@@ -2012,7 +2059,7 @@ class CookerParser(object):
if self.toparse:
bb.event.fire(bb.event.ParseStarted(self.toparse), self.cfgdata)
def init():
Parser.cfg = self.cfgdata
Parser.bb_cache = self.bb_cache
bb.utils.set_process_name(multiprocessing.current_process().name)
multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, exitpriority=1)
multiprocessing.util.Finalize(None, bb.fetch.fetcher_parse_save, exitpriority=1)
@@ -2083,7 +2130,7 @@ class CookerParser(object):
def load_cached(self):
for filename, appends in self.fromcache:
cached, infos = self.bb_cache.load(filename, appends, self.cfgdata)
cached, infos = self.bb_cache.load(filename, appends)
yield not cached, infos
def parse_generator(self):
@@ -2162,13 +2209,13 @@ class CookerParser(object):
if info_array[0].skipped:
self.skipped += 1
self.cooker.skiplist[virtualfn] = SkippedPackage(info_array[0])
self.bb_cache.add_info(virtualfn, info_array, self.cooker.recipecache,
(fn, cls, mc) = bb.cache.virtualfn2realfn(virtualfn)
self.bb_cache.add_info(virtualfn, info_array, self.cooker.recipecaches[mc],
parsed=parsed, watcher = self.cooker.add_filewatch)
return True
def reparse(self, filename):
infos = self.bb_cache.parse(filename,
self.cooker.collection.get_file_appends(filename),
self.cfgdata, self.cooker.caches_array)
infos = self.bb_cache.parse(filename, self.cooker.collection.get_file_appends(filename))
for vfn, info_array in infos:
self.cooker.recipecache.add_from_recipeinfo(vfn, info_array)
(fn, cls, mc) = bb.cache.virtualfn2realfn(vfn)
self.cooker.recipecaches[mc].add_from_recipeinfo(vfn, info_array)


@@ -237,9 +237,9 @@ class CookerDataBuilder(object):
bb.utils.set_context(bb.utils.clean_context())
bb.event.set_class_handlers(bb.event.clean_class_handlers())
self.data = bb.data.init()
self.basedata = bb.data.init()
if self.tracking:
self.data.enableTracking()
self.basedata.enableTracking()
# Keep a datastore of the initial environment variables and their
# values from when BitBake was launched to enable child processes
@@ -250,15 +250,40 @@ class CookerDataBuilder(object):
self.savedenv.setVar(k, cookercfg.env[k])
filtered_keys = bb.utils.approved_variables()
bb.data.inheritFromOS(self.data, self.savedenv, filtered_keys)
self.data.setVar("BB_ORIGENV", self.savedenv)
bb.data.inheritFromOS(self.basedata, self.savedenv, filtered_keys)
self.basedata.setVar("BB_ORIGENV", self.savedenv)
if worker:
self.data.setVar("BB_WORKERCONTEXT", "1")
self.basedata.setVar("BB_WORKERCONTEXT", "1")
self.data = self.basedata
self.mcdata = {}
def parseBaseConfiguration(self):
try:
self.parseConfigurationFiles(self.prefiles, self.postfiles)
bb.parse.init_parser(self.basedata)
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
if self.data.getVar("BB_WORKERCONTEXT", False) is None:
bb.fetch.fetcher_init(self.data)
bb.codeparser.parser_cache_init(self.data)
bb.event.fire(bb.event.ConfigParsed(), self.data)
if self.data.getVar("BB_INVALIDCONF", False) is True:
self.data.setVar("BB_INVALIDCONF", False)
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
bb.parse.init_parser(self.data)
self.data_hash = self.data.get_hash()
self.mcdata[''] = self.data
multiconfig = (self.data.getVar("BBMULTICONFIG", True) or "").split()
for config in multiconfig:
mcdata = self.parseConfigurationFiles(['conf/multiconfig/%s.conf' % config] + self.prefiles, self.postfiles)
bb.event.fire(bb.event.ConfigParsed(), mcdata)
self.mcdata[config] = mcdata
except SyntaxError:
raise bb.BBHandledException
except bb.data_smart.ExpansionError as e:
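A hedged sketch of what the BBMULTICONFIG loop above produces; the configuration names are invented, while the conf/multiconfig/<name>.conf path and the empty default key come straight from the loop.

# Illustrative only: with BBMULTICONFIG = "arm x86", the loop above ends up with
# one datastore per configuration plus the default one under the empty key.
bbmulticonfig = "arm x86"

mcdata_keys = [''] + bbmulticonfig.split()
conf_files = {mc: 'conf/multiconfig/%s.conf' % mc for mc in bbmulticonfig.split()}
print(mcdata_keys)   # ['', 'arm', 'x86']
print(conf_files)    # {'arm': 'conf/multiconfig/arm.conf', 'x86': 'conf/multiconfig/x86.conf'}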
@@ -272,8 +297,7 @@ class CookerDataBuilder(object):
return findConfigFile("bblayers.conf", data)
def parseConfigurationFiles(self, prefiles, postfiles):
data = self.data
bb.parse.init_parser(data)
data = bb.data.createCopy(self.basedata)
# Parse files for loading *before* bitbake.conf and any includes
for f in prefiles:
@@ -333,23 +357,13 @@ class CookerDataBuilder(object):
# We register any handlers we've found so far here...
for var in data.getVar('__BBHANDLERS', False) or []:
handlerfn = data.getVarFlag(var, "filename", False)
if not handlerfn:
parselog.critical("Undefined event handler function '%s'" % var)
sys.exit(1)
handlerln = int(data.getVarFlag(var, "lineno", False))
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask", True) or "").split(), handlerfn, handlerln)
if data.getVar("BB_WORKERCONTEXT", False) is None:
bb.fetch.fetcher_init(data)
bb.codeparser.parser_cache_init(data)
bb.event.fire(bb.event.ConfigParsed(), data)
if data.getVar("BB_INVALIDCONF", False) is True:
data.setVar("BB_INVALIDCONF", False)
self.parseConfigurationFiles(self.prefiles, self.postfiles)
return
bb.parse.init_parser(data)
data.setVar('BBINCLUDED',bb.parse.get_file_depends(data))
self.data = data
self.data_hash = data.get_hash()
return data


@@ -779,7 +779,7 @@ def localpath(url, d):
fetcher = bb.fetch2.Fetch([url], d)
return fetcher.localpath(url)
def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None):
def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
"""
Run cmd returning the command output
Raise an error if interrupted or cmd fails
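A hedged sketch of what the new workdir keyword replaces throughout the fetchers: instead of wrapping each call in os.chdir(), the working directory is handed down to the subprocess. This is a minimal stand-in, not the real runfetchcmd; the command and directory are illustrative.

import subprocess

def run_in_dir(cmd, workdir=None):
    # Minimal stand-in for runfetchcmd's new behaviour: the working directory is
    # passed to the child process instead of chdir()-ing the whole cooker process.
    return subprocess.run(cmd, shell=True, cwd=workdir,
                          capture_output=True, text=True, check=True).stdout

print(run_in_dir("pwd", workdir="/tmp"))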
@@ -821,7 +821,7 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None):
error_message = ""
try:
(output, errors) = bb.process.run(cmd, log=log, shell=True, stderr=subprocess.PIPE)
(output, errors) = bb.process.run(cmd, log=log, shell=True, stderr=subprocess.PIPE, cwd=workdir)
success = True
except bb.process.NotFoundError as e:
error_message = "Fetch command %s" % (e.command)
@@ -935,8 +935,6 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
return found
return False
os.chdir(ld.getVar("DL_DIR", True))
if not verify_donestamp(ud, ld, origud) or ud.method.need_update(ud, ld):
ud.method.download(ud, ld)
if hasattr(ud.method,"build_mirror_data"):
@@ -1436,17 +1434,11 @@ class FetchMethod(object):
if not cmd:
return
# Change to unpackdir before executing command
save_cwd = os.getcwd();
os.chdir(unpackdir)
path = data.getVar('PATH', True)
if path:
cmd = "PATH=\"%s\" %s" % (path, cmd)
bb.note("Unpacking %s to %s/" % (file, os.getcwd()))
ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True)
os.chdir(save_cwd)
bb.note("Unpacking %s to %s/" % (file, unpackdir))
ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True, cwd=unpackdir)
if ret != 0:
raise UnpackError("Unpack command %s failed with return value %s" % (cmd, ret), urldata.url)
@@ -1514,8 +1506,9 @@ class Fetch(object):
self.connection_cache = connection_cache
fn = d.getVar('FILE', True)
if cache and fn and fn in urldata_cache:
self.ud = urldata_cache[fn]
mc = d.getVar('__BBMULTICONFIG', True) or ""
if cache and fn and mc + fn in urldata_cache:
self.ud = urldata_cache[mc + fn]
for url in urls:
if url not in self.ud:
@@ -1527,7 +1520,7 @@ class Fetch(object):
pass
if fn and cache:
urldata_cache[fn] = self.ud
urldata_cache[mc + fn] = self.ud
def localpath(self, url):
if url not in self.urls:
@@ -1581,8 +1574,6 @@ class Fetch(object):
if premirroronly:
self.d.setVar("BB_NO_NETWORK", "1")
os.chdir(self.d.getVar("DL_DIR", True))
firsterr = None
verified_stamp = verify_donestamp(ud, self.d)
if not localpath and (not verified_stamp or m.need_update(ud, self.d)):


@@ -88,19 +88,15 @@ class Bzr(FetchMethod):
bzrcmd = self._buildbzrcommand(ud, d, "update")
logger.debug(1, "BZR Update %s", ud.url)
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
os.chdir(os.path.join (ud.pkgdir, os.path.basename(ud.path)))
runfetchcmd(bzrcmd, d)
runfetchcmd(bzrcmd, d, workdir=os.path.join(ud.pkgdir, os.path.basename(ud.path)))
else:
bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
logger.debug(1, "BZR Checkout %s", ud.url)
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", bzrcmd)
runfetchcmd(bzrcmd, d)
os.chdir(ud.pkgdir)
runfetchcmd(bzrcmd, d, workdir=ud.pkgdir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
@@ -109,7 +105,8 @@ class Bzr(FetchMethod):
tar_flags = "--exclude='.bzr' --exclude='.bzrtags'"
# tar them up to a defined filename
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(ud.pkgdir)), d, cleanup = [ud.localpath])
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(ud.pkgdir)),
d, cleanup=[ud.localpath], workdir=ud.pkgdir)
def supports_srcrev(self):
return True


@@ -202,11 +202,10 @@ class ClearCase(FetchMethod):
def _remove_view(self, ud, d):
if os.path.exists(ud.viewdir):
os.chdir(ud.ccasedir)
cmd = self._build_ccase_command(ud, 'rmview');
logger.info("cleaning up [VOB=%s label=%s view=%s]", ud.vob, ud.label, ud.viewname)
bb.fetch2.check_network_access(d, cmd, ud.url)
output = runfetchcmd(cmd, d)
output = runfetchcmd(cmd, d, workdir=ud.ccasedir)
logger.info("rmview output: %s", output)
def need_update(self, ud, d):
@@ -241,11 +240,10 @@ class ClearCase(FetchMethod):
raise e
# Set configspec: Setting the configspec effectively fetches the files as defined in the configspec
os.chdir(ud.viewdir)
cmd = self._build_ccase_command(ud, 'setcs');
logger.info("fetching data [VOB=%s label=%s view=%s]", ud.vob, ud.label, ud.viewname)
bb.fetch2.check_network_access(d, cmd, ud.url)
output = runfetchcmd(cmd, d)
output = runfetchcmd(cmd, d, workdir=ud.viewdir)
logger.info("%s", output)
# Copy the configspec to the viewdir so we have it in our source tarball later


@@ -123,22 +123,23 @@ class Cvs(FetchMethod):
pkg = d.getVar('PN', True)
pkgdir = os.path.join(d.getVar('CVSDIR', True), pkg)
moddir = os.path.join(pkgdir, localdir)
workdir = None
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
logger.info("Update " + ud.url)
bb.fetch2.check_network_access(d, cvsupdatecmd, ud.url)
# update sources there
os.chdir(moddir)
workdir = moddir
cmd = cvsupdatecmd
else:
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(pkgdir)
os.chdir(pkgdir)
workdir = pkgdir
logger.debug(1, "Running %s", cvscmd)
bb.fetch2.check_network_access(d, cvscmd, ud.url)
cmd = cvscmd
runfetchcmd(cmd, d, cleanup = [moddir])
runfetchcmd(cmd, d, cleanup=[moddir], workdir=workdir)
if not os.access(moddir, os.R_OK):
raise FetchError("Directory %s was not readable despite sucessful fetch?!" % moddir, ud.url)
@@ -150,15 +151,15 @@ class Cvs(FetchMethod):
tar_flags = "--exclude='CVS'"
# tar them up to a defined filename
workdir = None
if 'fullpath' in ud.parm:
os.chdir(pkgdir)
workdir = pkgdir
cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, localdir)
else:
os.chdir(moddir)
os.chdir('..')
workdir = os.path.dirname(os.path.realpath(moddir))
cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(moddir))
runfetchcmd(cmd, d, cleanup = [ud.localpath])
runfetchcmd(cmd, d, cleanup=[ud.localpath], workdir=workdir)
def clean(self, ud, d):
""" Clean CVS Files and tarballs """


@@ -49,6 +49,10 @@ Supported SRC_URI options are:
referring to commit which is valid in tag instead of branch.
The default is "0", set nobranch=1 if needed.
- usehead
For local git:// urls to use the current branch HEAD as the revision for use with
AUTOREV. Implies nobranch.
"""
#Copyright (C) 2005 Richard Purdie
@@ -153,6 +157,13 @@ class Git(FetchMethod):
ud.nobranch = ud.parm.get("nobranch","0") == "1"
# usehead implies nobranch
ud.usehead = ud.parm.get("usehead","0") == "1"
if ud.usehead:
if ud.proto != "file":
raise bb.fetch2.ParameterError("The usehead option is only for use with local ('protocol=file') git repositories", ud.url)
ud.nobranch = 1
# bareclone implies nocheckout
ud.bareclone = ud.parm.get("bareclone","0") == "1"
if ud.bareclone:
@@ -168,6 +179,9 @@ class Git(FetchMethod):
ud.branches[name] = branch
ud.unresolvedrev[name] = branch
if ud.usehead:
ud.unresolvedrev['default'] = 'HEAD'
ud.basecmd = data.getVar("FETCHCMD_git", d, True) or "git -c core.fsyncobjectfiles=0"
ud.write_tarballs = ((data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True) or "0") != "0") or ud.rebaseable
@@ -205,9 +219,8 @@ class Git(FetchMethod):
def need_update(self, ud, d):
if not os.path.exists(ud.clonedir):
return True
os.chdir(ud.clonedir)
for name in ud.names:
if not self._contains_ref(ud, d, name):
if not self._contains_ref(ud, d, name, ud.clonedir):
return True
if ud.write_tarballs and not os.path.exists(ud.fullmirror):
return True
@@ -228,8 +241,7 @@ class Git(FetchMethod):
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.clonedir) and os.path.exists(ud.fullmirror):
bb.utils.mkdirhier(ud.clonedir)
os.chdir(ud.clonedir)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d, workdir=ud.clonedir)
repourl = self._get_repo_url(ud)
@@ -244,34 +256,32 @@ class Git(FetchMethod):
progresshandler = GitProgressHandler(d)
runfetchcmd(clone_cmd, d, log=progresshandler)
os.chdir(ud.clonedir)
# Update the checkout if needed
needupdate = False
for name in ud.names:
if not self._contains_ref(ud, d, name):
if not self._contains_ref(ud, d, name, ud.clonedir):
needupdate = True
if needupdate:
try:
runfetchcmd("%s remote rm origin" % ud.basecmd, d)
runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
except bb.fetch2.FetchError:
logger.debug(1, "No Origin")
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d)
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d, workdir=ud.clonedir)
fetch_cmd = "LANG=C %s fetch -f --prune --progress %s refs/*:refs/*" % (ud.basecmd, repourl)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
progresshandler = GitProgressHandler(d)
runfetchcmd(fetch_cmd, d, log=progresshandler)
runfetchcmd("%s prune-packed" % ud.basecmd, d)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)
runfetchcmd(fetch_cmd, d, log=progresshandler, workdir=ud.clonedir)
runfetchcmd("%s prune-packed" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d, workdir=ud.clonedir)
try:
os.unlink(ud.fullmirror)
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
os.chdir(ud.clonedir)
for name in ud.names:
if not self._contains_ref(ud, d, name):
if not self._contains_ref(ud, d, name, ud.clonedir):
raise bb.fetch2.FetchError("Unable to find revision %s in branch %s even from upstream" % (ud.revisions[name], ud.branches[name]))
def build_mirror_data(self, ud, d):
@@ -281,10 +291,9 @@ class Git(FetchMethod):
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
os.chdir(ud.clonedir)
logger.info("Creating tarball of git repository")
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, os.path.join(".") ), d)
runfetchcmd("touch %s.done" % (ud.fullmirror), d)
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, os.path.join(".") ), d, workdir=ud.clonedir)
runfetchcmd("touch %s.done" % (ud.fullmirror), d, workdir=ud.clonedir)
def unpack(self, ud, destdir, d):
""" unpack the downloaded src to destdir"""
@@ -307,21 +316,21 @@ class Git(FetchMethod):
cloneflags += " --mirror"
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, cloneflags, ud.clonedir, destdir), d)
os.chdir(destdir)
repourl = self._get_repo_url(ud)
runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, repourl), d)
runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, repourl), d, workdir=destdir)
if not ud.nocheckout:
if subdir != "":
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d)
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d,
workdir=destdir)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d, workdir=destdir)
elif not ud.nobranch:
branchname = ud.branches[ud.names[0]]
runfetchcmd("%s checkout -B %s %s" % (ud.basecmd, branchname, \
ud.revisions[ud.names[0]]), d)
ud.revisions[ud.names[0]]), d, workdir=destdir)
runfetchcmd("%s branch --set-upstream %s origin/%s" % (ud.basecmd, branchname, \
branchname), d)
branchname), d, workdir=destdir)
else:
runfetchcmd("%s checkout %s" % (ud.basecmd, ud.revisions[ud.names[0]]), d)
runfetchcmd("%s checkout %s" % (ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=destdir)
return True
@@ -335,7 +344,7 @@ class Git(FetchMethod):
def supports_srcrev(self):
return True
def _contains_ref(self, ud, d, name):
def _contains_ref(self, ud, d, name, wd):
cmd = ""
if ud.nobranch:
cmd = "%s log --pretty=oneline -n 1 %s -- 2> /dev/null | wc -l" % (
@@ -344,7 +353,7 @@ class Git(FetchMethod):
cmd = "%s branch --contains %s --list %s 2> /dev/null | wc -l" % (
ud.basecmd, ud.revisions[name], ud.branches[name])
try:
output = runfetchcmd(cmd, d, quiet=True)
output = runfetchcmd(cmd, d, quiet=True, workdir=wd)
except bb.fetch2.FetchError:
return False
if len(output.split()) > 1:
@@ -387,7 +396,7 @@ class Git(FetchMethod):
"""
output = self._lsremote(ud, d, "")
# Tags of the form ^{} may not work, need to fallback to other form
if ud.unresolvedrev[name][:5] == "refs/":
if ud.unresolvedrev[name][:5] == "refs/" or ud.usehead:
head = ud.unresolvedrev[name]
tag = ud.unresolvedrev[name]
else:


@@ -34,43 +34,42 @@ class GitANNEX(Git):
"""
return ud.type in ['gitannex']
def uses_annex(self, ud, d):
def uses_annex(self, ud, d, wd):
for name in ud.names:
try:
runfetchcmd("%s rev-list git-annex" % (ud.basecmd), d, quiet=True)
runfetchcmd("%s rev-list git-annex" % (ud.basecmd), d, quiet=True, workdir=wd)
return True
except bb.fetch.FetchError:
pass
return False
def update_annex(self, ud, d):
def update_annex(self, ud, d, wd):
try:
runfetchcmd("%s annex get --all" % (ud.basecmd), d, quiet=True)
runfetchcmd("%s annex get --all" % (ud.basecmd), d, quiet=True, workdir=wd)
except bb.fetch.FetchError:
return False
runfetchcmd("chmod u+w -R %s/annex" % (ud.clonedir), d, quiet=True)
runfetchcmd("chmod u+w -R %s/annex" % (ud.clonedir), d, quiet=True, workdir=wd)
return True
def download(self, ud, d):
Git.download(self, ud, d)
os.chdir(ud.clonedir)
annex = self.uses_annex(ud, d)
annex = self.uses_annex(ud, d, ud.clonedir)
if annex:
self.update_annex(ud, d)
self.update_annex(ud, d, ud.clonedir)
def unpack(self, ud, destdir, d):
Git.unpack(self, ud, destdir, d)
os.chdir(ud.destdir)
try:
runfetchcmd("%s annex init" % (ud.basecmd), d)
runfetchcmd("%s annex init" % (ud.basecmd), d, workdir=ud.destdir)
except bb.fetch.FetchError:
pass
annex = self.uses_annex(ud, d)
annex = self.uses_annex(ud, d, ud.destdir)
if annex:
runfetchcmd("%s annex get" % (ud.basecmd), d)
runfetchcmd("chmod u+w -R %s/.git/annex" % (ud.destdir), d, quiet=True)
runfetchcmd("%s annex get" % (ud.basecmd), d, workdir=ud.destdir)
runfetchcmd("chmod u+w -R %s/.git/annex" % (ud.destdir), d, quiet=True, workdir=ud.destdir)


@@ -43,10 +43,10 @@ class GitSM(Git):
"""
return ud.type in ['gitsm']
def uses_submodules(self, ud, d):
def uses_submodules(self, ud, d, wd):
for name in ud.names:
try:
runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revisions[name]), d, quiet=True)
runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revisions[name]), d, quiet=True, workdir=wd)
return True
except bb.fetch.FetchError:
pass
@@ -107,28 +107,25 @@ class GitSM(Git):
os.mkdir(tmpclonedir)
os.rename(ud.clonedir, gitdir)
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*true/bare = false/'", d)
os.chdir(tmpclonedir)
runfetchcmd(ud.basecmd + " reset --hard", d)
runfetchcmd(ud.basecmd + " checkout " + ud.revisions[ud.names[0]], d)
runfetchcmd(ud.basecmd + " submodule update --init --recursive", d)
runfetchcmd(ud.basecmd + " reset --hard", d, workdir=tmpclonedir)
runfetchcmd(ud.basecmd + " checkout " + ud.revisions[ud.names[0]], d, workdir=tmpclonedir)
runfetchcmd(ud.basecmd + " submodule update --init --recursive", d, workdir=tmpclonedir)
self._set_relative_paths(tmpclonedir)
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*false/bare = true/'", d)
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*false/bare = true/'", d, workdir=tmpclonedir)
os.rename(gitdir, ud.clonedir,)
bb.utils.remove(tmpclonedir, True)
def download(self, ud, d):
Git.download(self, ud, d)
os.chdir(ud.clonedir)
submodules = self.uses_submodules(ud, d)
submodules = self.uses_submodules(ud, d, ud.clonedir)
if submodules:
self.update_submodules(ud, d)
def unpack(self, ud, destdir, d):
Git.unpack(self, ud, destdir, d)
os.chdir(ud.destdir)
submodules = self.uses_submodules(ud, d)
submodules = self.uses_submodules(ud, d, ud.destdir)
if submodules:
runfetchcmd(ud.basecmd + " checkout " + ud.revisions[ud.names[0]], d)
runfetchcmd(ud.basecmd + " submodule update --init --recursive", d)
runfetchcmd(ud.basecmd + " checkout " + ud.revisions[ud.names[0]], d, workdir=ud.destdir)
runfetchcmd(ud.basecmd + " submodule update --init --recursive", d, workdir=ud.destdir)


@@ -169,25 +169,22 @@ class Hg(FetchMethod):
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.pkgdir) and os.path.exists(ud.fullmirror):
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d, workdir=ud.pkgdir)
if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
# Found the source, check whether need pull
updatecmd = self._buildhgcommand(ud, d, "update")
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
try:
runfetchcmd(updatecmd, d)
runfetchcmd(updatecmd, d, workdir=ud.moddir)
except bb.fetch2.FetchError:
# Running pull in the repo

pullcmd = self._buildhgcommand(ud, d, "pull")
logger.info("Pulling " + ud.url)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", pullcmd)
bb.fetch2.check_network_access(d, pullcmd, ud.url)
runfetchcmd(pullcmd, d)
runfetchcmd(pullcmd, d, workdir=ud.moddir)
try:
os.unlink(ud.fullmirror)
except OSError as exc:
@@ -200,17 +197,15 @@ class Hg(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd, ud.url)
runfetchcmd(fetchcmd, d)
runfetchcmd(fetchcmd, d, workdir=ud.pkgdir)
# Even when we clone (fetch), we still need to update as hg's clone
# won't checkout the specified revision if it's on a branch
updatecmd = self._buildhgcommand(ud, d, "update")
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
runfetchcmd(updatecmd, d)
runfetchcmd(updatecmd, d, workdir=ud.moddir)
def clean(self, ud, d):
""" Clean the hg dir """
@@ -246,10 +241,9 @@ class Hg(FetchMethod):
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
os.chdir(ud.pkgdir)
logger.info("Creating tarball of hg repository")
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, ud.module), d)
runfetchcmd("touch %s.done" % (ud.fullmirror), d)
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, ud.module), d, workdir=ud.pkgdir)
runfetchcmd("touch %s.done" % (ud.fullmirror), d, workdir=ud.pkgdir)
def localpath(self, ud, d):
return ud.pkgdir
@@ -269,10 +263,8 @@ class Hg(FetchMethod):
logger.debug(2, "Unpack: creating new hg repository in '" + codir + "'")
runfetchcmd("%s init %s" % (ud.basecmd, codir), d)
logger.debug(2, "Unpack: updating source in '" + codir + "'")
os.chdir(codir)
runfetchcmd("%s pull %s" % (ud.basecmd, ud.moddir), d)
runfetchcmd("%s up -C %s" % (ud.basecmd, revflag), d)
runfetchcmd("%s pull %s" % (ud.basecmd, ud.moddir), d, workdir=codir)
runfetchcmd("%s up -C %s" % (ud.basecmd, revflag), d, workdir=codir)
else:
logger.debug(2, "Unpack: extracting source to '" + codir + "'")
os.chdir(ud.moddir)
runfetchcmd("%s archive -t files %s %s" % (ud.basecmd, revflag, codir), d)
runfetchcmd("%s archive -t files %s %s" % (ud.basecmd, revflag, codir), d, workdir=ud.moddir)


@@ -13,7 +13,7 @@ Usage in the recipe:
- name
- version
npm://registry.npmjs.org/${PN}/-/${PN}-${PV}.tgz would become npm://registry.npmjs.org;name=${PN};ver=${PV}
npm://registry.npmjs.org/${PN}/-/${PN}-${PV}.tgz would become npm://registry.npmjs.org;name=${PN};version=${PV}
The fetcher triggers off the existence of ud.localpath. If that exists and has the ".done" stamp, it's assumed the fetch is good/done
"""
@@ -88,7 +88,7 @@ class Npm(FetchMethod):
ud.localpath = d.expand("${DL_DIR}/npm/%s" % ud.bbnpmmanifest)
self.basecmd = d.getVar("FETCHCMD_wget", True) or "/usr/bin/env wget -O -t 2 -T 30 -nv --passive-ftp --no-check-certificate "
self.basecmd += " --directory-prefix=%s " % prefixdir
ud.prefixdir = prefixdir
ud.write_tarballs = ((data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True) or "0") != "0")
ud.mirrortarball = 'npm_%s-%s.tar.xz' % (ud.pkgname, ud.version)
@@ -102,7 +102,8 @@ class Npm(FetchMethod):
def _runwget(self, ud, d, command, quiet):
logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
bb.fetch2.check_network_access(d, command)
runfetchcmd(command, d, quiet)
dldir = d.getVar("DL_DIR", True)
runfetchcmd(command, d, quiet, workdir=dldir)
def _unpackdep(self, ud, pkg, data, destdir, dldir, d):
file = data[pkg]['tgz']
@@ -113,16 +114,13 @@ class Npm(FetchMethod):
bb.fatal("NPM package %s downloaded not a tarball!" % file)
# Change to subdir before executing command
save_cwd = os.getcwd()
if not os.path.exists(destdir):
os.makedirs(destdir)
os.chdir(destdir)
path = d.getVar('PATH', True)
if path:
cmd = "PATH=\"%s\" %s" % (path, cmd)
bb.note("Unpacking %s to %s/" % (file, os.getcwd()))
ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True)
os.chdir(save_cwd)
bb.note("Unpacking %s to %s/" % (file, destdir))
ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True, cwd=destdir)
if ret != 0:
raise UnpackError("Unpack command %s failed with return value %s" % (cmd, ret), ud.url)
@@ -140,7 +138,12 @@ class Npm(FetchMethod):
workobj = json.load(datafile)
dldir = "%s/%s" % (os.path.dirname(ud.localpath), ud.pkgname)
self._unpackdep(ud, ud.pkgname, workobj, "%s/npmpkg" % destdir, dldir, d)
if 'subdir' in ud.parm:
unpackdir = '%s/%s' % (destdir, ud.parm.get('subdir'))
else:
unpackdir = '%s/npmpkg' % destdir
self._unpackdep(ud, ud.pkgname, workobj, unpackdir, dldir, d)
def _parse_view(self, output):
'''
@@ -184,7 +187,7 @@ class Npm(FetchMethod):
outputurl = pdata['dist']['tarball']
data[pkg] = {}
data[pkg]['tgz'] = os.path.basename(outputurl)
self._runwget(ud, d, "%s %s" % (self.basecmd, outputurl), False)
self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False)
dependencies = pdata.get('dependencies', {})
optionalDependencies = pdata.get('optionalDependencies', {})
@@ -201,8 +204,15 @@ class Npm(FetchMethod):
for dep, version in depsfound.items():
self._getdependencies(dep, data[pkg]['deps'], version, d, ud)
def _getshrinkeddependencies(self, pkg, data, version, d, ud, lockdown, manifest):
def _getshrinkeddependencies(self, pkg, data, version, d, ud, lockdown, manifest, toplevel=True):
logger.debug(2, "NPM shrinkwrap file is %s" % data)
if toplevel:
name = data.get('name', None)
if name and name != pkg:
for obj in data.get('dependencies', []):
if obj == pkg:
self._getshrinkeddependencies(obj, data['dependencies'][obj], data['dependencies'][obj]['version'], d, ud, lockdown, manifest, False)
return
outputurl = "invalid"
if ('resolved' not in data) or (not data['resolved'].startswith('http')):
# will be the case for ${PN}
@@ -211,7 +221,7 @@ class Npm(FetchMethod):
outputurl = runfetchcmd(fetchcmd, d, True)
else:
outputurl = data['resolved']
self._runwget(ud, d, "%s %s" % (self.basecmd, outputurl), False)
self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False)
manifest[pkg] = {}
manifest[pkg]['tgz'] = os.path.basename(outputurl).rstrip()
manifest[pkg]['deps'] = {}
@@ -228,7 +238,7 @@ class Npm(FetchMethod):
if 'dependencies' in data:
for obj in data['dependencies']:
logger.debug(2, "Found dep is %s" % str(obj))
self._getshrinkeddependencies(obj, data['dependencies'][obj], data['dependencies'][obj]['version'], d, ud, lockdown, manifest[pkg]['deps'])
self._getshrinkeddependencies(obj, data['dependencies'][obj], data['dependencies'][obj]['version'], d, ud, lockdown, manifest[pkg]['deps'], False)
def download(self, ud, d):
"""Fetch url"""
@@ -239,10 +249,7 @@ class Npm(FetchMethod):
if not os.listdir(ud.pkgdatadir) and os.path.exists(ud.fullmirror):
dest = d.getVar("DL_DIR", True)
bb.utils.mkdirhier(dest)
save_cwd = os.getcwd()
os.chdir(dest)
runfetchcmd("tar -xJf %s" % (ud.fullmirror), d)
os.chdir(save_cwd)
runfetchcmd("tar -xJf %s" % (ud.fullmirror), d, workdir=dest)
return
shwrf = d.getVar('NPM_SHRINKWRAP', True)
@@ -275,10 +282,8 @@ class Npm(FetchMethod):
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
save_cwd = os.getcwd()
os.chdir(d.getVar("DL_DIR", True))
dldir = d.getVar("DL_DIR", True)
logger.info("Creating tarball of npm data")
runfetchcmd("tar -cJf %s npm/%s npm/%s" % (ud.fullmirror, ud.bbnpmmanifest, ud.pkgname), d)
runfetchcmd("touch %s.done" % (ud.fullmirror), d)
os.chdir(save_cwd)
runfetchcmd("tar -cJf %s npm/%s npm/%s" % (ud.fullmirror, ud.bbnpmmanifest, ud.pkgname), d,
workdir=dldir)
runfetchcmd("touch %s.done" % (ud.fullmirror), d, workdir=dldir)


@@ -88,23 +88,21 @@ class Osc(FetchMethod):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ ud.url)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", oscupdatecmd)
bb.fetch2.check_network_access(d, oscupdatecmd, ud.url)
runfetchcmd(oscupdatecmd, d)
runfetchcmd(oscupdatecmd, d, workdir=ud.moddir)
else:
oscfetchcmd = self._buildosccommand(ud, d, "fetch")
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", oscfetchcmd)
bb.fetch2.check_network_access(d, oscfetchcmd, ud.url)
runfetchcmd(oscfetchcmd, d)
runfetchcmd(oscfetchcmd, d, workdir=ud.pkgdir)
os.chdir(os.path.join(ud.pkgdir + ud.path))
# tar them up to a defined filename
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d, cleanup = [ud.localpath])
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d,
cleanup=[ud.localpath], workdir=os.path.join(ud.pkgdir + ud.path))
def supports_srcrev(self):
return False


@@ -168,15 +168,13 @@ class Perforce(FetchMethod):
bb.utils.remove(ud.pkgdir, True)
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
for afile in filelist:
p4fetchcmd = self._buildp4command(ud, d, 'print', afile)
bb.fetch2.check_network_access(d, p4fetchcmd)
runfetchcmd(p4fetchcmd, d)
runfetchcmd(p4fetchcmd, d, workdir=ud.pkgdir)
os.chdir(ud.pkgdir)
runfetchcmd('tar -czf %s p4' % (ud.localpath), d, cleanup = [ud.localpath])
runfetchcmd('tar -czf %s p4' % (ud.localpath), d, cleanup=[ud.localpath], workdir=ud.pkgdir)
def clean(self, ud, d):
""" Cleanup p4 specific files and dirs"""


@@ -69,15 +69,14 @@ class Repo(FetchMethod):
else:
username = ""
bb.utils.mkdirhier(os.path.join(codir, "repo"))
os.chdir(os.path.join(codir, "repo"))
if not os.path.exists(os.path.join(codir, "repo", ".repo")):
repodir = os.path.join(codir, "repo")
bb.utils.mkdirhier(repodir)
if not os.path.exists(os.path.join(repodir, ".repo")):
bb.fetch2.check_network_access(d, "repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), ud.url)
runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d)
runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d, workdir=repodir)
bb.fetch2.check_network_access(d, "repo sync %s" % ud.url, ud.url)
runfetchcmd("repo sync", d)
os.chdir(codir)
runfetchcmd("repo sync", d, workdir=repodir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
@@ -86,7 +85,7 @@ class Repo(FetchMethod):
tar_flags = "--exclude='.repo' --exclude='.git'"
# Create a cache
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.join(".", "*") ), d)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.join(".", "*") ), d, workdir=codir)
def supports_srcrev(self):
return False


@@ -126,25 +126,22 @@ class Svn(FetchMethod):
if os.access(os.path.join(ud.moddir, '.svn'), os.R_OK):
svnupdatecmd = self._buildsvncommand(ud, d, "update")
logger.info("Update " + ud.url)
# update sources there
os.chdir(ud.moddir)
# We need to attempt to run svn upgrade first in case its an older working format
try:
runfetchcmd(ud.basecmd + " upgrade", d)
runfetchcmd(ud.basecmd + " upgrade", d, workdir=ud.moddir)
except FetchError:
pass
logger.debug(1, "Running %s", svnupdatecmd)
bb.fetch2.check_network_access(d, svnupdatecmd, ud.url)
runfetchcmd(svnupdatecmd, d)
runfetchcmd(svnupdatecmd, d, workdir=ud.moddir)
else:
svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", svnfetchcmd)
bb.fetch2.check_network_access(d, svnfetchcmd, ud.url)
runfetchcmd(svnfetchcmd, d)
runfetchcmd(svnfetchcmd, d, workdir=ud.pkgdir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
@@ -152,9 +149,9 @@ class Svn(FetchMethod):
else:
tar_flags = "--exclude='.svn'"
os.chdir(ud.pkgdir)
# tar them up to a defined filename
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.path_spec), d, cleanup = [ud.localpath])
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.path_spec), d,
cleanup=[ud.localpath], workdir=ud.pkgdir)
def clean(self, ud, d):
""" Clean SVN specific files and dirs """


@@ -108,6 +108,10 @@ class Wget(FetchMethod):
bb.utils.mkdirhier(os.path.dirname(dldir + os.sep + ud.localfile))
fetchcmd += " -O " + dldir + os.sep + ud.localfile
if ud.user:
up = ud.user.split(":")
fetchcmd += " --user=%s --password=%s --auth-no-challenge" % (up[0],up[1])
uri = ud.url.split(";")[0]
if os.path.exists(ud.localpath):
# file exists, but we didn't complete it, trying again
@@ -300,6 +304,13 @@ class Wget(FetchMethod):
uri = ud.url.split(";")[0]
r = urllib.request.Request(uri)
r.get_method = lambda: "HEAD"
if ud.user:
import base64
encodeuser = base64.b64encode(ud.user.encode('utf-8')).decode("utf-8")
authheader = "Basic %s" % encodeuser
r.add_header("Authorization", authheader)
opener.open(r)
except urllib.error.URLError as e:
if try_again:


@@ -69,6 +69,33 @@ class ExportNode(AstNode):
def eval(self, data):
data.setVarFlag(self.var, "export", 1, op = 'exported')
class UnsetNode(AstNode):
def __init__(self, filename, lineno, var):
AstNode.__init__(self, filename, lineno)
self.var = var
def eval(self, data):
loginfo = {
'variable': self.var,
'file': self.filename,
'line': self.lineno,
}
data.delVar(self.var,**loginfo)
class UnsetFlagNode(AstNode):
def __init__(self, filename, lineno, var, flag):
AstNode.__init__(self, filename, lineno)
self.var = var
self.flag = flag
def eval(self, data):
loginfo = {
'variable': self.var,
'file': self.filename,
'line': self.lineno,
}
data.delVarFlag(self.var, self.flag, **loginfo)
class DataNode(AstNode):
"""
Various data related updates. For the sake of sanity
@@ -270,6 +297,12 @@ def handleInclude(statements, filename, lineno, m, force):
def handleExport(statements, filename, lineno, m):
statements.append(ExportNode(filename, lineno, m.group(1)))
def handleUnset(statements, filename, lineno, m):
statements.append(UnsetNode(filename, lineno, m.group(1)))
def handleUnsetFlag(statements, filename, lineno, m):
statements.append(UnsetFlagNode(filename, lineno, m.group(1), m.group(2)))
def handleData(statements, filename, lineno, groupd):
statements.append(DataNode(filename, lineno, groupd))
@@ -311,6 +344,8 @@ def finalize(fn, d, variant = None):
for var in d.getVar('__BBHANDLERS', False) or []:
# try to add the handler
handlerfn = d.getVarFlag(var, "filename", False)
if not handlerfn:
bb.fatal("Undefined event handler function '%s'" % var)
handlerln = int(d.getVarFlag(var, "lineno", False))
bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask", True) or "").split(), handlerfn, handlerln)
@@ -469,9 +504,5 @@ def multi_finalize(fn, d):
except bb.parse.SkipRecipe as e:
datastores[variant].setVar("__SKIPPED", e.args[0])
if len(datastores) > 1:
variants = filter(None, datastores.keys())
safe_d.setVar("__VARIANTS", " ".join(variants))
datastores[""] = d
return datastores


@@ -57,6 +57,8 @@ __config_regexp__ = re.compile( r"""
__include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/]+)$" )
__unset_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/]+)$" )
__unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/]+)\[([a-zA-Z0-9\-_+.${}/]+)\]$" )
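A hedged sketch of the metadata lines these two new patterns are meant to accept, checked with the same regular expressions; the variable and flag names are arbitrary examples.

import re

unset_re = re.compile(r"unset\s+([a-zA-Z0-9\-_+.${}/]+)$")
unset_flag_re = re.compile(r"unset\s+([a-zA-Z0-9\-_+.${}/]+)\[([a-zA-Z0-9\-_+.${}/]+)\]$")

# "unset VAR" deletes the variable; "unset VAR[flag]" deletes just one flag.
assert unset_re.match("unset BAD_RECOMMENDATIONS").group(1) == "BAD_RECOMMENDATIONS"
m = unset_flag_re.match("unset SRC_URI[md5sum]")
assert (m.group(1), m.group(2)) == ("SRC_URI", "md5sum")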
def init(data):
topdir = data.getVar('TOPDIR', False)
@@ -185,6 +187,16 @@ def feeder(lineno, s, fn, statements):
ast.handleExport(statements, fn, lineno, m)
return
m = __unset_regexp__.match(s)
if m:
ast.handleUnset(statements, fn, lineno, m)
return
m = __unset_flag_regexp__.match(s)
if m:
ast.handleUnsetFlag(statements, fn, lineno, m)
return
raise ParseError("unparsed line: '%s'" % s, fn, lineno);
# Add us to the handlers list

File diff suppressed because it is too large.


@@ -144,8 +144,9 @@ class SignatureGeneratorBasic(SignatureGenerator):
def finalise(self, fn, d, variant):
if variant:
fn = "virtual:" + variant + ":" + fn
mc = d.getVar("__BBMULTICONFIG", False) or ""
if variant or mc:
fn = bb.cache.realfn2virtual(fn, variant, mc)
try:
taskdeps = self._build_data(fn, d)
@@ -300,16 +301,18 @@ class SignatureGeneratorBasic(SignatureGenerator):
bb.error("Taskhash mismatch %s versus %s for %s" % (computed_taskhash, self.taskhash[k], k))
def dump_sigs(self, dataCache, options):
def dump_sigs(self, dataCaches, options):
for fn in self.taskdeps:
for task in self.taskdeps[fn]:
tid = fn + ":" + task
(mc, _, _) = bb.runqueue.split_tid(tid)
k = fn + "." + task
if k not in self.taskhash:
continue
if dataCache.basetaskhash[k] != self.basehash[k]:
if dataCaches[mc].basetaskhash[k] != self.basehash[k]:
bb.error("Bitbake's cached basehash does not match the one we just generated (%s)!" % k)
bb.error("The mismatched hashes were %s and %s" % (dataCache.basetaskhash[k], self.basehash[k]))
self.dump_sigtask(fn, task, dataCache.stamp[fn], True)
bb.error("The mismatched hashes were %s and %s" % (dataCaches[mc].basetaskhash[k], self.basehash[k]))
self.dump_sigtask(fn, task, dataCaches[mc].stamp[fn], True)
class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
name = "basichash"
@@ -363,10 +366,12 @@ def clean_basepaths_list(a):
def compare_sigfiles(a, b, recursecb = None):
output = []
p1 = pickle.Unpickler(open(a, "rb"))
a_data = p1.load()
p2 = pickle.Unpickler(open(b, "rb"))
b_data = p2.load()
with open(a, 'rb') as f:
p1 = pickle.Unpickler(f)
a_data = p1.load()
with open(b, 'rb') as f:
p2 = pickle.Unpickler(f)
b_data = p2.load()
def dict_diff(a, b, whitelist=set()):
sa = set(a.keys())
@@ -563,8 +568,9 @@ def calc_taskhash(sigdata):
def dump_sigfile(a):
output = []
p1 = pickle.Unpickler(open(a, "rb"))
a_data = p1.load()
with open(a, 'rb') as f:
p1 = pickle.Unpickler(f)
a_data = p1.load()
output.append("basewhitelist: %s" % (a_data['basewhitelist']))


@@ -360,7 +360,10 @@ class FetcherTest(unittest.TestCase):
def tearDown(self):
os.chdir(self.origdir)
bb.utils.prunedir(self.tempdir)
if os.environ.get("BB_TMPDIR_NOCLEAN") == "yes":
print("Not cleaning up %s. Please remove manually." % self.tempdir)
else:
bb.utils.prunedir(self.tempdir)
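A hedged note on the new escape hatch: exporting BB_TMPDIR_NOCLEAN before running the fetcher tests keeps each temporary directory around for inspection. The invocation below is only one illustrative way to drive the suite.

import os, unittest

# Illustrative: preserve each test's tempdir for post-mortem debugging.
os.environ["BB_TMPDIR_NOCLEAN"] = "yes"
unittest.main(module="bb.tests.fetch", exit=False)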
class MirrorUriTest(FetcherTest):
@@ -585,6 +588,36 @@ class FetcherNetworkTest(FetcherTest):
url1 = url2 = "git://git.openembedded.org/bitbake;rev=270a05b0b4ba0959fe0624d2a4885d7b70426da5;tag=270a05b0b4ba0959fe0624d2a4885d7b70426da5"
self.assertRaises(bb.fetch.FetchError, self.gitfetcher, url1, url2)
def test_gitfetch_localusehead(self):
# Create dummy local Git repo
src_dir = tempfile.mkdtemp(dir=self.tempdir,
prefix='gitfetch_localusehead_')
src_dir = os.path.abspath(src_dir)
bb.process.run("git init", cwd=src_dir)
bb.process.run("git commit --allow-empty -m'Dummy commit'",
cwd=src_dir)
# Use other branch than master
bb.process.run("git checkout -b my-devel", cwd=src_dir)
bb.process.run("git commit --allow-empty -m'Dummy commit 2'",
cwd=src_dir)
stdout = bb.process.run("git rev-parse HEAD", cwd=src_dir)
orig_rev = stdout[0].strip()
# Fetch and check revision
self.d.setVar("SRCREV", "AUTOINC")
url = "git://" + src_dir + ";protocol=file;usehead=1"
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
fetcher.unpack(self.unpackdir)
stdout = bb.process.run("git rev-parse HEAD",
cwd=os.path.join(self.unpackdir, 'git'))
unpack_rev = stdout[0].strip()
self.assertEqual(orig_rev, unpack_rev)
def test_gitfetch_remoteusehead(self):
url = "git://git.openembedded.org/bitbake;usehead=1"
self.assertRaises(bb.fetch.ParameterError, self.gitfetcher, url, url)
def test_gitfetch_premirror(self):
url1 = "git://git.openembedded.org/bitbake"
url2 = "git://someserver.org/bitbake"
@@ -768,7 +801,6 @@ class FetchLatestVersionTest(FetcherTest):
class FetchCheckStatusTest(FetcherTest):
test_wget_uris = ["http://www.cups.org/software/1.7.2/cups-1.7.2-source.tar.bz2",
"http://www.cups.org/software/ipptool/ipptool-20130731-linux-ubuntu-i686.tar.gz",
"http://www.cups.org/",
"http://downloads.yoctoproject.org/releases/sato/sato-engine-0.1.tar.gz",
"http://downloads.yoctoproject.org/releases/sato/sato-engine-0.2.tar.gz",

View File

@@ -68,6 +68,23 @@ C = "3"
with self.assertRaises(bb.parse.ParseError):
d = bb.parse.handle(f.name, self.d)['']
unsettest = """
A = "1"
B = "2"
B[flag] = "3"
unset A
unset B[flag]
"""
def test_parse_unset(self):
f = self.parsehelper(self.unsettest)
d = bb.parse.handle(f.name, self.d)['']
self.assertEqual(d.getVar("A", True), None)
self.assertEqual(d.getVarFlag("A","flag", True), None)
self.assertEqual(d.getVar("B", True), "2")
overridetest = """
RRECOMMENDS_${PN} = "a"
RRECOMMENDS_${PN}_libc = "b"

View File

@@ -59,6 +59,12 @@ class Tinfoil:
def register_idle_function(self, function, data):
pass
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
self.shutdown()
def parseRecipes(self):
sys.stderr.write("Parsing recipes..")
self.logger.setLevel(logging.WARNING)
@@ -74,16 +80,52 @@ class Tinfoil:
self.logger.setLevel(logging.INFO)
sys.stderr.write("done.\n")
self.cooker_data = self.cooker.recipecache
self.cooker_data = self.cooker.recipecaches['']
def prepare(self, config_only = False):
if not self.cooker_data:
if config_only:
self.cooker.parseConfiguration()
self.cooker_data = self.cooker.recipecache
self.cooker_data = self.cooker.recipecaches['']
else:
self.parseRecipes()
def parse_recipe_file(self, fn, appends=True, appendlist=None, config_data=None):
"""
Parse the specified recipe file (with or without bbappends)
and return a datastore object representing the environment
for the recipe.
Parameters:
fn: recipe file to parse - can be a file path or virtual
specification
appends: True to apply bbappends, False otherwise
appendlist: optional list of bbappend files to apply, if you
want to filter them
config_data: custom config datastore to use. NOTE: if you
specify config_data then you cannot use a virtual
specification for fn.
"""
if appends and appendlist == []:
appends = False
if appends:
if appendlist:
appendfiles = appendlist
else:
if not hasattr(self.cooker, 'collection'):
raise Exception('You must call tinfoil.prepare() with config_only=False in order to get bbappends')
appendfiles = self.cooker.collection.get_file_appends(fn)
else:
appendfiles = None
if config_data:
# We have to use a different function here if we're passing in a datastore
localdata = bb.data.createCopy(config_data)
envdata = bb.cache.parse_recipe(localdata, fn, appendfiles)['']
else:
# Use the standard path
parser = bb.cache.NoCache(self.cooker.databuilder)
envdata = parser.loadDataFull(fn, appendfiles)
return envdata
def shutdown(self):
self.cooker.shutdown(force=True)
self.cooker.post_serve()
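
With __enter__/__exit__ added, Tinfoil can now be used as a context manager, and parse_recipe_file() gives external tools a parsed datastore for a single recipe. A hedged usage sketch based only on the API shown above (the recipe path is hypothetical):

    import bb.tinfoil

    with bb.tinfoil.Tinfoil() as tinfoil:
        # Full parse (config_only=False) so that bbappends can be located
        # and applied by parse_recipe_file()
        tinfoil.prepare(config_only=False)
        d = tinfoil.parse_recipe_file('/path/to/meta/recipes-example/example/example_1.0.bb')
        print(d.getVar('PN', True), d.getVar('PV', True))
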

View File

@@ -1127,7 +1127,8 @@ class BuildInfoHelper(object):
abs_file_name = vh['file']
for pp in path_prefixes:
if abs_file_name.startswith(pp + "/"):
vh['file']=abs_file_name[len(pp + "/"):]
# preserve layer name in relative path
vh['file']=abs_file_name[pp.rfind("/")+1:]
break
# save the variables
@@ -1616,7 +1617,10 @@ class BuildInfoHelper(object):
if line.startswith('FILES'):
files_str = line.split(':')[1].strip()
files_str = re.sub(r' {2,}', ' ', files_str)
files = files_str.split(' ')
# ignore lines like "FILES:" with no filenames
if files_str:
files += files_str.split(' ')
return files
def _endswith(self, str_to_test, endings):
@@ -1734,9 +1738,9 @@ class BuildInfoHelper(object):
real_image_name,
'image_license.manifest')
# if image_license.manifest exists, we can read the names of bzImage
# and modules files for this build from it, then look for them
# in the DEPLOY_DIR_IMAGE; note that this file is only produced
# if image_license.manifest exists, we can read the names of
# bzImage, modules etc. files for this build from it, then look for
# them in the DEPLOY_DIR_IMAGE; note that this file is only produced
# if an image file was produced
if os.path.isfile(image_license_manifest_path):
has_files = True

View File

@@ -256,17 +256,22 @@ class TerminalFilter(object):
content = "Waiting for %s running tasks to finish:" % len(activetasks)
print(content)
else:
if not len(activetasks):
if self.quiet:
content = "Running tasks (%s of %s)" % (self.helper.tasknumber_current, self.helper.tasknumber_total)
elif not len(activetasks):
content = "No currently running tasks (%s of %s)" % (self.helper.tasknumber_current, self.helper.tasknumber_total)
else:
content = "Currently %2s running tasks (%s of %s)" % (len(activetasks), self.helper.tasknumber_current, self.helper.tasknumber_total)
maxtask = self.helper.tasknumber_total + 1
maxtask = self.helper.tasknumber_total
if not self.main_progress or self.main_progress.maxval != maxtask:
widgets = [' ', progressbar.Percentage(), ' ', progressbar.Bar()]
self.main_progress = BBProgress("Running tasks", maxtask, widgets=widgets)
self.main_progress.start(False)
self.main_progress.setmessage(content)
self.main_progress.update(self.helper.tasknumber_current)
progress = self.helper.tasknumber_current - 1
if progress < 0:
progress = 0
self.main_progress.update(progress)
print('')
lines = 1 + int(len(content) / (self.columns + 1))
if not self.quiet:
@@ -583,23 +588,23 @@ def main(server, eventHandler, params, tf = TerminalFilter):
tasktype = 'noexec task'
else:
tasktype = 'task'
logger.info("Running %s %s of %s (ID: %s, %s)",
logger.info("Running %s %d of %d (%s)",
tasktype,
event.stats.completed + event.stats.active +
event.stats.failed + 1,
event.stats.total, event.taskid, event.taskstring)
event.stats.total, event.taskstring)
continue
if isinstance(event, bb.runqueue.runQueueTaskFailed):
return_value = 1
taskfailures.append(event.taskstring)
logger.error("Task %s (%s) failed with exit code '%s'",
event.taskid, event.taskstring, event.exitcode)
logger.error("Task (%s) failed with exit code '%s'",
event.taskstring, event.exitcode)
continue
if isinstance(event, bb.runqueue.sceneQueueTaskFailed):
logger.warning("Setscene task %s (%s) failed with exit code '%s' - real task will be run instead",
event.taskid, event.taskstring, event.exitcode)
logger.warning("Setscene task (%s) failed with exit code '%s' - real task will be run instead",
event.taskstring, event.exitcode)
continue
if isinstance(event, bb.event.DepTreeGenerated):

View File

@@ -358,8 +358,8 @@ def main(server, eventHandler, params):
if isinstance(event, bb.runqueue.runQueueTaskFailed):
buildinfohelper.update_and_store_task(event)
taskfailures.append(event.taskstring)
logger.error("Task %s (%s) failed with exit code '%s'",
event.taskid, event.taskstring, event.exitcode)
logger.error("Task (%s) failed with exit code '%s'",
event.taskstring, event.exitcode)
continue
if isinstance(event, (bb.runqueue.sceneQueueTaskCompleted, bb.runqueue.sceneQueueTaskFailed)):

View File

@@ -1081,7 +1081,7 @@ def edit_metadata(meta_lines, variables, varfunc, match_overrides=False):
newlines: list of lines up to this point. You can use
this to prepend lines before this variable setting
if you wish.
and should return a three-element tuple:
and should return a four-element tuple:
newvalue: new value to substitute in, or None to drop
the variable setting entirely. (If the removal
results in two consecutive blank lines, one of the
@@ -1095,6 +1095,8 @@ def edit_metadata(meta_lines, variables, varfunc, match_overrides=False):
multi-line value to continue on the same line as
the assignment, False to indent before the first
element.
To clarify, if you wish not to change the value, then you
would return like this: return origvalue, None, 0, True
match_overrides: True to match items with _overrides on the end,
False otherwise
Returns a tuple:
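
The corrected docstring means every varfunc callback passed to bb.utils.edit_metadata() must return four elements (newvalue, newop, indent, minbreak). A small hedged sketch of a callback that rewrites PR and leaves everything else alone, using only the contract described above (the variable values are illustrative):

    import bb.utils

    def bump_pr(varname, origvalue, op, newlines):
        if varname == 'PR':
            # New value, keep the existing operator and layout
            return 'r1', None, 0, True
        # Unchanged, as the docstring above suggests
        return origvalue, None, 0, True

    lines = ['PN = "example"\n', 'PR = "r0"\n']
    updated, newlines = bb.utils.edit_metadata(lines, ['PR'], bump_pr)
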
@@ -1461,7 +1463,8 @@ def export_proxies(d):
import os
variables = ['http_proxy', 'HTTP_PROXY', 'https_proxy', 'HTTPS_PROXY',
'ftp_proxy', 'FTP_PROXY', 'no_proxy', 'NO_PROXY']
'ftp_proxy', 'FTP_PROXY', 'no_proxy', 'NO_PROXY',
'GIT_PROXY_COMMAND']
exported = False
for v in variables:
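
Adding GIT_PROXY_COMMAND to the list means export_proxies() now pushes that setting from the datastore into the process environment as well, so git invoked by the fetchers picks it up. A hedged sketch of calling it directly (the datastore setup and the command path are illustrative assumptions):

    import os
    import bb.data
    import bb.utils

    d = bb.data.init()
    d.setVar('GIT_PROXY_COMMAND', '/usr/bin/oe-git-proxy')  # illustrative value
    bb.utils.export_proxies(d)
    print(os.environ.get('GIT_PROXY_COMMAND'))
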

View File

@@ -173,7 +173,7 @@ build results (as the layer priority order has effectively changed).
# have come from)
first_regex = None
layerdir = layers[0]
for layername, pattern, regex, _ in self.tinfoil.cooker.recipecache.bbfile_config_priorities:
for layername, pattern, regex, _ in self.tinfoil.cooker.bbfile_config_priorities:
if regex.match(os.path.join(layerdir, 'test')):
first_regex = regex
break

View File

@@ -23,7 +23,7 @@ class QueryPlugin(LayerPlugin):
"""show current configured layers."""
logger.plain("%s %s %s" % ("layer".ljust(20), "path".ljust(40), "priority"))
logger.plain('=' * 74)
for layer, _, regex, pri in self.tinfoil.cooker.recipecache.bbfile_config_priorities:
for layer, _, regex, pri in self.tinfoil.cooker.bbfile_config_priorities:
layerdir = self.bbfile_collections.get(layer, None)
layername = self.get_layer_name(layerdir)
logger.plain("%s %s %d" % (layername.ljust(20), layerdir.ljust(40), pri))
@@ -121,9 +121,9 @@ skipped recipes will also be listed, with a " (skipped)" suffix.
logger.error('No class named %s found in BBPATH', classfile)
sys.exit(1)
pkg_pn = self.tinfoil.cooker.recipecache.pkg_pn
(latest_versions, preferred_versions) = bb.providers.findProviders(self.tinfoil.config_data, self.tinfoil.cooker.recipecache, pkg_pn)
allproviders = bb.providers.allProviders(self.tinfoil.cooker.recipecache)
pkg_pn = self.tinfoil.cooker.recipecaches[''].pkg_pn
(latest_versions, preferred_versions) = bb.providers.findProviders(self.tinfoil.config_data, self.tinfoil.cooker.recipecaches[''], pkg_pn)
allproviders = bb.providers.allProviders(self.tinfoil.cooker.recipecaches[''])
# Ensure we list skipped recipes
# We are largely guessing about PN, PV and the preferred version here,
@@ -170,13 +170,13 @@ skipped recipes will also be listed, with a " (skipped)" suffix.
if len(allproviders[p]) > 1 or not show_multi_provider_only:
pref = preferred_versions[p]
realfn = bb.cache.Cache.virtualfn2realfn(pref[1])
realfn = bb.cache.virtualfn2realfn(pref[1])
preffile = realfn[0]
# We only display once per recipe, we should prefer non extended versions of the
# recipe if present (so e.g. in OpenEmbedded, openssl rather than nativesdk-openssl
# which would otherwise sort first).
if realfn[1] and realfn[0] in self.tinfoil.cooker.recipecache.pkg_fn:
if realfn[1] and realfn[0] in self.tinfoil.cooker.recipecaches[''].pkg_fn:
continue
if inherits:
@@ -200,7 +200,7 @@ skipped recipes will also be listed, with a " (skipped)" suffix.
same_ver = True
provs = []
for prov in allproviders[p]:
provfile = bb.cache.Cache.virtualfn2realfn(prov[1])[0]
provfile = bb.cache.virtualfn2realfn(prov[1])[0]
provlayer = self.get_file_layer(provfile)
provs.append((provfile, provlayer, prov[0]))
if provlayer != preflayer:
@@ -297,7 +297,7 @@ Lists recipes with the bbappends that apply to them as subitems.
def get_appends_for_files(self, filenames):
appended, notappended = [], []
for filename in filenames:
_, cls = bb.cache.Cache.virtualfn2realfn(filename)
_, cls, _ = bb.cache.virtualfn2realfn(filename)
if cls:
continue
@@ -328,7 +328,7 @@ NOTE: .bbappend files can impact the dependencies.
# The bb's DEPENDS and RDEPENDS
for f in pkg_fn:
f = bb.cache.Cache.virtualfn2realfn(f)[0]
f = bb.cache.virtualfn2realfn(f)[0]
# Get the layername that the file is in
layername = self.get_file_layer(f)
@@ -471,7 +471,7 @@ NOTE: .bbappend files can impact the dependencies.
def check_cross_depends(self, keyword, layername, f, needed_file, show_filenames, ignore_layers):
"""Print the DEPENDS/RDEPENDS file that crosses a layer boundary"""
best_realfn = bb.cache.Cache.virtualfn2realfn(needed_file)[0]
best_realfn = bb.cache.virtualfn2realfn(needed_file)[0]
needed_layername = self.get_file_layer(best_realfn)
if needed_layername != layername and not needed_layername in ignore_layers:
if not show_filenames:
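
These bitbake-layers hunks track the API change where virtualfn2realfn()/realfn2virtual() became module-level functions in bb.cache and the virtual-to-real mapping gained a multiconfig element. A hedged round-trip sketch based on the calls visible in this changeset (the virtual filename is hypothetical):

    import bb.cache

    virtualfn = "virtual:native:/path/to/recipes/foo_1.0.bb"  # hypothetical
    realfn, cls, mc = bb.cache.virtualfn2realfn(virtualfn)
    print(realfn, cls, mc)   # expected: the real path, "native", "" (no multiconfig)
    roundtrip = bb.cache.realfn2virtual(realfn, cls, mc)
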

View File

@@ -98,8 +98,12 @@ class LocalhostBEController(BuildEnvironmentController):
# 1. get a list of repos with branches, and map dirpaths for each layer
gitrepos = {}
gitrepos[(bitbake.giturl, bitbake.commit)] = []
gitrepos[(bitbake.giturl, bitbake.commit)].append( ("bitbake", bitbake.dirpath) )
# if we're using a remotely fetched version of bitbake add its git
# details to the list of repos to clone
if bitbake.giturl and bitbake.commit:
gitrepos[(bitbake.giturl, bitbake.commit)] = []
gitrepos[(bitbake.giturl, bitbake.commit)].append(
("bitbake", bitbake.dirpath))
for layer in layers:
# We don't need to git clone the layer for the CustomImageRecipe
@@ -142,18 +146,22 @@ class LocalhostBEController(BuildEnvironmentController):
logger.info("Using pre-checked out source for layer %s", cached_layers)
# 3. checkout the repositories
for giturl, commit in gitrepos.keys():
localdirname = os.path.join(self.be.sourcedir, self.getGitCloneDirectory(giturl, commit))
logger.debug("localhostbecontroller: giturl %s:%s checking out in current directory %s" % (giturl, commit, localdirname))
# make sure our directory is a git repository
# see if our directory is a git repository
if os.path.exists(localdirname):
localremotes = self._shellcmd("git remote -v", localdirname)
if not giturl in localremotes:
raise BuildSetupException("Existing git repository at %s, but with different remotes ('%s', expected '%s'). Toaster will not continue out of fear of damaging something." % (localdirname, ", ".join(localremotes.split("\n")), giturl))
try:
localremotes = self._shellcmd("git remote -v",
localdirname)
if not giturl in localremotes:
raise BuildSetupException("Existing git repository at %s, but with different remotes ('%s', expected '%s'). Toaster will not continue out of fear of damaging something." % (localdirname, ", ".join(localremotes.split("\n")), giturl))
except ShellCmdException:
# our localdirname might not be a git repository
#- that's fine
pass
else:
if giturl in cached_layers:
logger.debug("localhostbecontroller git-copying %s to %s" % (cached_layers[giturl], localdirname))

View File

@@ -84,8 +84,9 @@ class Command(NoArgsCommand):
print("Loading OE-Core configuration")
call_command("loaddata", "oe-core")
if template_conf:
oe_core_path = os.realpath(template_conf +
"/../")
oe_core_path = os.path.realpath(
template_conf +
"/../")
else:
print("TEMPLATECONF not found. You may have to"
" manually configure layer paths")
@@ -94,8 +95,9 @@ class Command(NoArgsCommand):
"layer: ")
# Update the layer instances of openemebedded-core
for layer in Layer.objects.filter(
name="openembedded-core"):
layer.local_source_dir = oe_core_path
name="openembedded-core",
local_source_dir="OE-CORE-LAYER-DIR"):
layer.local_path = oe_core_path
layer.save()
# Import the custom fixture if it's present

View File

@@ -126,13 +126,14 @@ class BuildRequest(models.Model):
# These tables specify the settings for running an actual build.
# They MUST be kept in sync with the tables in orm.models.Project*
class BRLayer(models.Model):
req = models.ForeignKey(BuildRequest)
name = models.CharField(max_length = 100)
giturl = models.CharField(max_length = 254)
req = models.ForeignKey(BuildRequest)
name = models.CharField(max_length=100)
giturl = models.CharField(max_length=254, null=True)
local_source_dir = models.CharField(max_length=254, null=True)
commit = models.CharField(max_length = 254)
dirpath = models.CharField(max_length = 254)
commit = models.CharField(max_length=254, null=True)
dirpath = models.CharField(max_length=254, null=True)
layer_version = models.ForeignKey(Layer_Version, null=True)
class BRBitbake(models.Model):

View File

@@ -1,17 +1,19 @@
<?xml version="1.0" encoding="utf-8"?>
<django-objects version="1.0">
<!-- Set the project default value for DISTRO -->
<object model="orm.toastersetting" pk="1">
<field type="CharField" name="name">DEFCONF_DISTRO</field>
<field type="CharField" name="value">nodistro</field>
</object>
<!-- Bitbake versions which correspond to the metadata release -->
<object model="orm.bitbakeversion" pk="1">
<field type="CharField" name="name">master</field>
<field type="CharField" name="giturl">git://git.openembedded.org/bitbake</field>
<field type="CharField" name="branch">master</field>
<field type="CharField" name="dirpath">bitbake</field>
</object>
<object model="orm.bitbakeversion" pk="2">
<field type="CharField" name="name">HEAD</field>
<field type="CharField" name="giturl">git://git.openembedded.org/bitbake</field>
<field type="CharField" name="branch">HEAD</field>
<field type="CharField" name="dirpath">bitbake</field>
</object>
<!-- Releases available -->
@@ -43,15 +45,15 @@
<!-- TYPE_LOCAL = 0 Layers for the Local release -->
<object model="orm.layer" pk="1">
<field type="CharField" name="name">openembedded-core</field>
<field type="CharField" name="layer_index_url"></field>
<field type="CharField" name="vcs_url">git://git.openembedded.org/openembedded-core</field>
</object>
<object model="orm.layer_version" pk="1">
<field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">2</field>
<field type="CharField" name="local_path">OE-CORE-LAYER-DIR</field>
<field type="CharField" name="branch">HEAD</field>
<field type="CharField" name="commit">HEAD</field>
<field type="CharField" name="dirpath">meta</field>
<field type="IntegerField" name="layer_source">0</field>
</object>
</django-objects>

View File

@@ -1,5 +1,11 @@
<?xml version="1.0" encoding="utf-8"?>
<django-objects version="1.0">
<!-- Set the project default value for DISTRO -->
<object model="orm.toastersetting" pk="1">
<field type="CharField" name="name">DEFCONF_DISTRO</field>
<field type="CharField" name="value">poky</field>
</object>
<!-- Bitbake versions which correspond to the metadata release -->
<object model="orm.bitbakeversion" pk="1">
<field type="CharField" name="name">master</field>

View File

@@ -1,34 +1,31 @@
<?xml version="1.0" encoding="utf-8"?>
<django-objects version="1.0">
<!-- Default project settings -->
<object model="orm.toastersetting" pk="1">
<!-- pk=1 is DISTRO -->
<object model="orm.toastersetting" pk="2">
<field type="CharField" name="name">DEFAULT_RELEASE</field>
<field type="CharField" name="value">master</field>
</object>
<object model="orm.toastersetting" pk="2">
<object model="orm.toastersetting" pk="3">
<field type="CharField" name="name">DEFCONF_PACKAGE_CLASSES</field>
<field type="CharField" name="value">package_rpm</field>
</object>
<object model="orm.toastersetting" pk="3">
<object model="orm.toastersetting" pk="4">
<field type="CharField" name="name">DEFCONF_MACHINE</field>
<field type="CharField" name="value">qemux86</field>
</object>
<object model="orm.toastersetting" pk="4">
<object model="orm.toastersetting" pk="5">
<field type="CharField" name="name">DEFCONF_SSTATE_DIR</field>
<field type="CharField" name="value">${TOPDIR}/../sstate-cache</field>
</object>
<object model="orm.toastersetting" pk="5">
<object model="orm.toastersetting" pk="6">
<field type="CharField" name="name">DEFCONF_IMAGE_INSTALL_append</field>
<field type="CharField" name="value"></field>
</object>
<object model="orm.toastersetting" pk="6">
<object model="orm.toastersetting" pk="7">
<field type="CharField" name="name">DEFCONF_IMAGE_FSTYPES</field>
<field type="CharField" name="value">ext3 jffs2 tar.bz2</field>
</object>
<object model="orm.toastersetting" pk="7">
<field type="CharField" name="name">DEFCONF_DISTRO</field>
<field type="CharField" name="value">poky</field>
</object>
<object model="orm.toastersetting" pk="8">
<field type="CharField" name="name">DEFCONF_DL_DIR</field>
<field type="CharField" name="value">${TOPDIR}/../downloads</field>

View File

@@ -165,6 +165,12 @@ class Command(NoArgsCommand):
# layerindex
oe_core_l.summary = li['summary']
oe_core_l.description = li['description']
oe_core_l.vcs_web_url = li['vcs_web_url']
oe_core_l.vcs_web_tree_base_url = \
li['vcs_web_tree_base_url']
oe_core_l.vcs_web_file_base_url = \
li['vcs_web_file_base_url']
oe_core_l.save()
li_layer_id_to_toaster_layer_id[li['id']] = oe_core_l.pk
self.mini_progress("layers", i, total)

View File

@@ -876,9 +876,10 @@ class Target_Image_File(models.Model):
SUFFIXES = {
'btrfs', 'cpio', 'cpio.gz', 'cpio.lz4', 'cpio.lzma', 'cpio.xz',
'cramfs', 'elf', 'ext2', 'ext2.bz2', 'ext2.gz', 'ext2.lzma', 'ext4',
'ext4.gz', 'ext3', 'ext3.gz', 'hddimg', 'iso', 'jffs2', 'jffs2.sum',
'squashfs', 'squashfs-lzo', 'squashfs-xz', 'tar.bz2', 'tar.lz4',
'tar.xz', 'tartar.gz', 'ubi', 'ubifs', 'vmdk'
'ext4.gz', 'ext3', 'ext3.gz', 'hdddirect', 'hddimg', 'iso', 'jffs2',
'jffs2.sum', 'multiubi', 'qcow2', 'squashfs', 'squashfs-lzo',
'squashfs-xz', 'tar', 'tar.bz2', 'tar.gz', 'tar.lz4', 'tar.xz', 'ubi',
'ubifs', 'vdi', 'vmdk', 'wic', 'wic.bz2', 'wic.gz', 'wic.lzma'
}
target = models.ForeignKey(Target)
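
The extended SUFFIXES set lists the image file extensions Toaster recognises; several of them are compound (tar.bz2, wic.gz), so a plain os.path.splitext() is not enough to classify an artefact. A small hedged sketch of matching a deploy file name against such a set (the set below is abbreviated and the file name is hypothetical):

    SUFFIXES = {'ext4', 'ext4.gz', 'tar', 'tar.bz2', 'wic', 'wic.gz'}  # abbreviated

    def image_suffix(filename):
        # Try the longest candidate suffix first so that
        # "rootfs.tar.bz2" resolves to "tar.bz2" rather than "bz2".
        parts = filename.split('.')
        for i in range(1, len(parts)):
            candidate = '.'.join(parts[i:])
            if candidate in SUFFIXES:
                return candidate
        return None

    print(image_suffix('core-image-minimal-qemux86.tar.bz2'))  # -> tar.bz2
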

View File

@@ -82,15 +82,17 @@ class TestLayerDetailsPage(SeleniumTestCase):
self.get(self.url)
self.click("#add-remove-layer-btn")
self.click("#edit-layer-source")
self.click("#repo")
self.wait_until_visible("#layer-git-repo-url")
# Open every edit box
for btn in self.find_all("dd .glyphicon-edit"):
btn.click()
self.wait_until_visible("dd input")
# Edit each value
for inputs in self.find_all("dd input[type=text]") + \
for inputs in self.find_all("#layer-git input[type=text]") + \
self.find_all("dd textarea"):
# ignore the tt inputs (twitter typeahead input)
if "tt-" in inputs.get_attribute("class"):
@@ -104,16 +106,20 @@ class TestLayerDetailsPage(SeleniumTestCase):
inputs.send_keys("-edited")
# Save the new values
for save_btn in self.find_all(".change-btn"):
save_btn.click()
self.click("#save-changes-for-switch")
self.wait_until_visible("#edit-layer-source")
# Refresh the page to see if the new values are returned
self.get(self.url)
new_values = ["%s-edited" % old_val
for old_val in self.initial_values]
for inputs in self.find_all('dd input[type="text"]') + \
for inputs in self.find_all('#layer-git input[type="text"]') + \
self.find_all('dd textarea'):
# ignore the tt inputs (twitter typeahead input)
if "tt-" in inputs.get_attribute("class"):
@@ -125,6 +131,24 @@ class TestLayerDetailsPage(SeleniumTestCase):
"Expecting any of \"%s\" but got \"%s\"" %
(new_values, value))
# Now convert it to a local layer
self.click("#edit-layer-source")
self.click("#dir")
dir_input = self.wait_until_visible("#layer-dir-path-in-details")
new_dir = "/home/test/my-meta-dir"
dir_input.send_keys(new_dir)
self.click("#save-changes-for-switch")
self.wait_until_visible("#edit-layer-source")
# Refresh the page to see if the new values are returned
self.get(self.url)
dir_input = self.find("#layer-dir-path-in-details")
self.assertTrue(new_dir in dir_input.get_attribute("value"),
"Expected %s in the dir value for layer directory" %
new_dir)
def test_delete_layer(self):
""" Delete the layer """

View File

@@ -16,21 +16,29 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
# Please run flake8 on this file before sending patches
# Temporary home for the UI's misc API
import re
import logging
from orm.models import Project, ProjectTarget, Build, Layer_Version
from orm.models import LayerVersionDependency, LayerSource, ProjectLayer
from orm.models import Recipe, CustomImageRecipe, CustomImagePackage
from orm.models import Layer, Target, Package, Package_Dependency
from bldcontrol.models import BuildRequest
from bldcontrol import bbcontroller
from django.http import HttpResponse, JsonResponse
from django.views.generic import View
from django.core.urlresolvers import reverse
from django.core import serializers
from django.utils import timezone
from django.template.defaultfilters import date
from django.db.models import Q, F
from django.db import Error
from toastergui.templatetags.projecttags import json, sectohms, get_tasks
from toastergui.templatetags.projecttags import filtered_filesizeformat
logger = logging.getLogger("toaster")
def error_response(error):
return JsonResponse({"error": error})
@@ -135,7 +143,8 @@ class XhrLayer(View):
Method: POST
Args:
vcs_url, dirpath, commit, up_branch, summary, description
vcs_url, dirpath, commit, up_branch, summary, description,
local_source_dir
add_dep = append a layerversion_id as a dependency
rm_dep = remove a layerversion_id as a dependency
@@ -166,6 +175,9 @@ class XhrLayer(View):
layer_version.layer.summary = request.POST["summary"]
if "description" in request.POST:
layer_version.layer.description = request.POST["description"]
if "local_source_dir" in request.POST:
layer_version.layer.local_source_dir = \
request.POST["local_source_dir"]
if "add_dep" in request.POST:
lvd = LayerVersionDependency(
@@ -212,6 +224,7 @@ class XhrLayer(View):
"redirect": reverse('project', args=(kwargs['pid'],))
})
class MostRecentBuildsView(View):
def _was_yesterday_or_earlier(self, completed_on):
now = timezone.now()
@@ -226,13 +239,11 @@ class MostRecentBuildsView(View):
"""
Returns a list of builds in JSON format.
"""
mrb_type = 'all'
project = None
project_id = request.GET.get('project_id', None)
if project_id:
try:
mrb_type = 'project'
project = Project.objects.get(pk=project_id)
except:
# if project lookup fails, assume no project
@@ -241,9 +252,6 @@ class MostRecentBuildsView(View):
recent_build_objs = Build.get_recent(project)
recent_builds = []
# for timezone conversion
tz = timezone.get_current_timezone()
for build_obj in recent_build_objs:
dashboard_url = reverse('builddashboard', args=(build_obj.pk,))
buildtime_url = reverse('buildtime', args=(build_obj.pk,))
@@ -262,7 +270,8 @@ class MostRecentBuildsView(View):
build['buildrequest_id'] = buildrequest_id
build['recipes_parsed_percentage'] = \
int((build_obj.recipes_parsed / build_obj.recipes_to_parse) * 100)
int((build_obj.recipes_parsed /
build_obj.recipes_to_parse) * 100)
tasks_complete_percentage = 0
if build_obj.outcome in (Build.SUCCEEDED, Build.FAILED):
@@ -296,7 +305,8 @@ class MostRecentBuildsView(View):
completed_on_template = '%H:%M'
if self._was_yesterday_or_earlier(completed_on):
completed_on_template = '%d/%m/%Y ' + completed_on_template
build['completed_on'] = completed_on.strftime(completed_on_template)
build['completed_on'] = completed_on.strftime(
completed_on_template)
targets = []
target_objs = build_obj.get_sorted_target_list()
@@ -319,3 +329,446 @@ class MostRecentBuildsView(View):
recent_builds.append(build)
return JsonResponse(recent_builds, safe=False)
class XhrCustomRecipe(View):
""" Create a custom image recipe """
def post(self, request, *args, **kwargs):
"""
Custom image recipe REST API
Entry point: /xhr_customrecipe/
Method: POST
Args:
name: name of custom recipe to create
project: target project id of orm.models.Project
base: base recipe id of orm.models.Recipe
Returns:
{"error": "ok",
"url": <url of the created recipe>}
or
{"error": <error message>}
"""
# check if request has all required parameters
for param in ('name', 'project', 'base'):
if param not in request.POST:
return error_response("Missing parameter '%s'" % param)
# get project and baserecipe objects
params = {}
for name, model in [("project", Project),
("base", Recipe)]:
value = request.POST[name]
try:
params[name] = model.objects.get(id=value)
except model.DoesNotExist:
return error_response("Invalid %s id %s" % (name, value))
# create custom recipe
try:
# Only allowed chars in name are a-z, 0-9 and -
if re.search(r'[^a-z|0-9|-]', request.POST["name"]):
return error_response("invalid-name")
custom_images = CustomImageRecipe.objects.all()
# Are there any recipes with this name already in our project?
existing_image_recipes_in_project = custom_images.filter(
name=request.POST["name"], project=params["project"])
if existing_image_recipes_in_project.count() > 0:
return error_response("image-already-exists")
# Are there any recipes with this name which aren't custom
# image recipes?
custom_image_ids = custom_images.values_list('id', flat=True)
existing_non_image_recipes = Recipe.objects.filter(
Q(name=request.POST["name"]) & ~Q(pk__in=custom_image_ids)
)
if existing_non_image_recipes.count() > 0:
return error_response("recipe-already-exists")
# create layer 'Custom layer' and version if needed
layer = Layer.objects.get_or_create(
name=CustomImageRecipe.LAYER_NAME,
summary="Layer for custom recipes",
vcs_url="file:///toaster_created_layer")[0]
# Check if we have a layer version already
# We don't use get_or_create here because the dirpath will change
# and is a required field
lver = Layer_Version.objects.filter(Q(project=params['project']) &
Q(layer=layer) &
Q(build=None)).last()
if lver is None:
lver, created = Layer_Version.objects.get_or_create(
project=params['project'],
layer=layer,
dirpath="toaster_created_layer")
# Add a dependency on our layer to the base recipe's layer
LayerVersionDependency.objects.get_or_create(
layer_version=lver,
depends_on=params["base"].layer_version)
# Add it to our current project if needed
ProjectLayer.objects.get_or_create(project=params['project'],
layercommit=lver,
optional=False)
# Create the actual recipe
recipe, created = CustomImageRecipe.objects.get_or_create(
name=request.POST["name"],
base_recipe=params["base"],
project=params["project"],
layer_version=lver,
is_image=True)
# If we created the object then setup these fields. They may get
# overwritten later on and cause the get_or_create to create a
# duplicate if they've changed.
if created:
recipe.file_path = request.POST["name"]
recipe.license = "MIT"
recipe.version = "0.1"
recipe.save()
except Error as err:
return error_response("Can't create custom recipe: %s" % err)
# Find the package list from the last build of this recipe/target
target = Target.objects.filter(Q(build__outcome=Build.SUCCEEDED) &
Q(build__project=params['project']) &
(Q(target=params['base'].name) |
Q(target=recipe.name))).last()
if target:
# Copy in every package
# We don't want these packages to be linked to anything because
# that underlying data may change e.g. delete a build
for tpackage in target.target_installed_package_set.all():
try:
built_package = tpackage.package
# The package had no recipe information so it is a ghost
# package; skip it
if built_package.recipe is None:
continue
config_package = CustomImagePackage.objects.get(
name=built_package.name)
recipe.includes_set.add(config_package)
except Exception as e:
logger.warning("Error adding package %s %s" %
(tpackage.package.name, e))
pass
return JsonResponse(
{"error": "ok",
"packages": recipe.get_all_packages().count(),
"url": reverse('customrecipe', args=(params['project'].pk,
recipe.id))})
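
XhrCustomRecipe is a plain Django view, so the endpoint can be exercised with any HTTP client as long as Django's CSRF protection is satisfied. A hedged sketch using the requests library against a local Toaster instance (the host, URL prefix, ids and cookie handling are illustrative assumptions, not part of this changeset):

    import requests

    session = requests.Session()
    base = "http://localhost:8000"            # hypothetical Toaster host
    session.get(base + "/toastergui/")        # assume this sets a csrftoken cookie
    resp = session.post(
        base + "/toastergui/xhr_customrecipe/",
        data={"name": "my-custom-image", "project": 1, "base": 42},
        headers={"X-CSRFToken": session.cookies.get("csrftoken", "")})
    print(resp.json())  # {"error": "ok", "url": ...} or {"error": <message>}
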
class XhrCustomRecipeId(View):
"""
Set of ReST API processors working with recipe id.
Entry point: /xhr_customrecipe/<recipe_id>
Methods:
GET - Get details of custom image recipe
DELETE - Delete custom image recipe
Returns:
GET:
{"error": "ok",
"info": dictionary of field name -> value pairs
of the CustomImageRecipe model}
DELETE:
{"error": "ok"}
or
{"error": <error message>}
"""
@staticmethod
def _get_ci_recipe(recipe_id):
""" Get Custom Image recipe or return an error response"""
try:
custom_recipe = \
CustomImageRecipe.objects.get(pk=recipe_id)
return custom_recipe, None
except CustomImageRecipe.DoesNotExist:
return None, error_response("Custom recipe with id=%s "
"not found" % recipe_id)
def get(self, request, *args, **kwargs):
custom_recipe, error = self._get_ci_recipe(kwargs['recipe_id'])
if error:
return error
if request.method == 'GET':
info = {"id": custom_recipe.id,
"name": custom_recipe.name,
"base_recipe_id": custom_recipe.base_recipe.id,
"project_id": custom_recipe.project.id}
return JsonResponse({"error": "ok", "info": info})
def delete(self, request, *args, **kwargs):
custom_recipe, error = self._get_ci_recipe(kwargs['recipe_id'])
if error:
return error
custom_recipe.delete()
return JsonResponse({"error": "ok"})
class XhrCustomRecipePackages(View):
"""
ReST API to add/remove packages to/from custom recipe.
Entry point: /xhr_customrecipe/<recipe_id>/packages/<package_id>
Methods:
PUT - Add package to the recipe
DELETE - Delete package from the recipe
GET - Get package information
Returns:
{"error": "ok"}
or
{"error": <error message>}
"""
@staticmethod
def _get_package(package_id):
try:
package = CustomImagePackage.objects.get(pk=package_id)
return package, None
except Package.DoesNotExist:
return None, error_response("Package with id=%s "
"not found" % package_id)
def _traverse_dependents(self, next_package_id,
rev_deps, all_current_packages, tree_level=0):
"""
Recurse through reverse dependency tree for next_package_id.
Limit the reverse dependency search to packages not already scanned,
that is, not already in rev_deps.
Limit the scan to a depth (tree_level) not exceeding the count of
all packages in the custom image, and if that depth is exceeded
write a warning to the log and return True, popping out of the recursion;
exceeding the depth is unlikely and would suggest a dependency loop
not caught by bitbake.
On return, the input/output arg rev_deps is appended with queryset
dictionary elements, annotated for use in the customimage template.
The list is unsorted, but its elements are unique.
"""
max_dependency_tree_depth = all_current_packages.count()
if tree_level >= max_dependency_tree_depth:
logger.warning(
"The number of reverse dependencies "
"for this package exceeds " + max_dependency_tree_depth +
" and the remaining reverse dependencies will not be removed")
return True
package = CustomImagePackage.objects.get(id=next_package_id)
dependents = \
package.package_dependencies_target.annotate(
name=F('package__name'),
pk=F('package__pk'),
size=F('package__size'),
).values("name", "pk", "size").exclude(
~Q(pk__in=all_current_packages)
)
for pkg in dependents:
if pkg in rev_deps:
# already seen, skip dependent search
continue
rev_deps.append(pkg)
if (self._traverse_dependents(pkg["pk"], rev_deps,
all_current_packages,
tree_level+1)):
return True
return False
def _get_all_dependents(self, package_id, all_current_packages):
"""
Returns sorted list of recursive reverse dependencies for package_id,
as a list of dictionary items, by recursing through dependency
relationships.
"""
rev_deps = []
self._traverse_dependents(package_id, rev_deps, all_current_packages)
rev_deps = sorted(rev_deps, key=lambda x: x["name"])
return rev_deps
def get(self, request, *args, **kwargs):
recipe, error = XhrCustomRecipeId._get_ci_recipe(
kwargs['recipe_id'])
if error:
return error
# If no package_id then list all the current packages
if not kwargs['package_id']:
total_size = 0
packages = recipe.get_all_packages().values("id",
"name",
"version",
"size")
for package in packages:
package['size_formatted'] = \
filtered_filesizeformat(package['size'])
total_size += package['size']
return JsonResponse({"error": "ok",
"packages": list(packages),
"total": len(packages),
"total_size": total_size,
"total_size_formatted":
filtered_filesizeformat(total_size)})
else:
package, error = XhrCustomRecipePackages._get_package(
kwargs['package_id'])
if error:
return error
all_current_packages = recipe.get_all_packages()
# Dependencies for package which aren't satisfied by the
# current packages in the custom image recipe
deps = package.package_dependencies_source.for_target_or_none(
recipe.name)['packages'].annotate(
name=F('depends_on__name'),
pk=F('depends_on__pk'),
size=F('depends_on__size'),
).values("name", "pk", "size").filter(
# There are two depends types; we don't know why
(Q(dep_type=Package_Dependency.TYPE_TRDEPENDS) |
Q(dep_type=Package_Dependency.TYPE_RDEPENDS)) &
~Q(pk__in=all_current_packages)
)
# Reverse dependencies which are needed by packages that are
# in the image. Recursive search providing all dependents,
# not just immediate dependents.
reverse_deps = self._get_all_dependents(kwargs['package_id'],
all_current_packages)
total_size_deps = 0
total_size_reverse_deps = 0
for dep in deps:
dep['size_formatted'] = \
filtered_filesizeformat(dep['size'])
total_size_deps += dep['size']
for dep in reverse_deps:
dep['size_formatted'] = \
filtered_filesizeformat(dep['size'])
total_size_reverse_deps += dep['size']
return JsonResponse(
{"error": "ok",
"id": package.pk,
"name": package.name,
"version": package.version,
"unsatisfied_dependencies": list(deps),
"unsatisfied_dependencies_size": total_size_deps,
"unsatisfied_dependencies_size_formatted":
filtered_filesizeformat(total_size_deps),
"reverse_dependencies": list(reverse_deps),
"reverse_dependencies_size": total_size_reverse_deps,
"reverse_dependencies_size_formatted":
filtered_filesizeformat(total_size_reverse_deps)})
def put(self, request, *args, **kwargs):
recipe, error = XhrCustomRecipeId._get_ci_recipe(kwargs['recipe_id'])
package, error = self._get_package(kwargs['package_id'])
if error:
return error
included_packages = recipe.includes_set.values_list('pk',
flat=True)
# If we're adding back a package which used to be included in this
# image all we need to do is remove it from the excludes
if package.pk in included_packages:
try:
recipe.excludes_set.remove(package)
return {"error": "ok"}
except Package.DoesNotExist:
return error_response("Package %s not found in excludes"
" but was in included list" %
package.name)
else:
recipe.appends_set.add(package)
# Make sure that package is not in the excludes set
try:
recipe.excludes_set.remove(package)
except:
pass
# Add the dependencies we think will be added to the recipe
# as a result of appending this package.
# TODO this should recurse down the entire deps tree
for dep in package.package_dependencies_source.all_depends():
try:
cust_package = CustomImagePackage.objects.get(
name=dep.depends_on.name)
recipe.includes_set.add(cust_package)
try:
# When adding the pre-requisite package, make
# sure it's not in the excluded list from a
# prior removal.
recipe.excludes_set.remove(cust_package)
except package.DoesNotExist:
# Don't care if the package had never been excluded
pass
except:
logger.warning("Could not add package's suggested"
"dependencies to the list")
return JsonResponse({"error": "ok"})
def delete(self, request, *args, **kwargs):
recipe, error = XhrCustomRecipeId._get_ci_recipe(kwargs['recipe_id'])
package, error = self._get_package(kwargs['package_id'])
if error:
return error
try:
included_packages = recipe.includes_set.values_list('pk',
flat=True)
# If we're deleting a package which is included we need to
# Add it to the excludes list.
if package.pk in included_packages:
recipe.excludes_set.add(package)
else:
recipe.appends_set.remove(package)
all_current_packages = recipe.get_all_packages()
reverse_deps_dictlist = self._get_all_dependents(
package.pk,
all_current_packages)
ids = [entry['pk'] for entry in reverse_deps_dictlist]
reverse_deps = CustomImagePackage.objects.filter(id__in=ids)
for r in reverse_deps:
try:
if r.id in included_packages:
recipe.excludes_set.add(r)
else:
recipe.appends_set.remove(r)
except:
pass
return JsonResponse({"error": "ok"})
except CustomImageRecipe.DoesNotExist:
return error_response("Tried to remove package that wasn't"
" present")

View File

@@ -482,7 +482,8 @@ class BuildTasksTable(BuildTablesMixin):
orderable=True)
self.add_column(title="Recipe version",
field_name='recipe__version')
field_name='recipe__version',
hidden=True)
self.add_column(title="Executed",
static_data_name="task_executed",

View File

@@ -161,7 +161,13 @@ dd .glyphicon-edit { margin-left: 5px; }
/* Style the forms and definition lists in the layer details pages */
#change-repo-form .form-control { width: 17em; }
#information { margin-bottom: 5em; }
#information dd > form { margin-bottom: 5px; margin-top: 5px; }
#edit-layer-source-form fieldset { margin-top: 20px; }
#directory-info,
#git-repo-info { margin-top: 20px; }
#layer-dir-path-in-details { width: 55%; }
.add-deps .form-control { width: 15em; }
/* Style the forms and definition lists in the BitBake variables page */
.variable-list { margin-bottom: 20px; }
@@ -176,6 +182,7 @@ dd.variable-list form { margin-top: 10px; }
.scrolling.has-error { border-color: #a94442; }
.help-block.text-danger { color: #a94442; }
.tooltip-inner code { color: #fff; }
.text-danger > code { color: #a94442; }
dd.variable-list .glyphicon-question-sign { font-size: 14px; }
dd.variable-list .glyphicon-edit { font-size: 16px; }
dt .glyphicon-trash { margin-left: 5px; font-size: 16px; }
@@ -196,7 +203,8 @@ h2 { margin-bottom: 25px; }
.tt-suggestion:active { background-color: #f5f5f5; cursor: pointer; }
/* Style the import layer form controls*/
legend { border: none; margin-top: 20px; }
legend { border: none; }
fieldset.fields-apart-from-layer-name { margin-top: 20px; }
.radioLegend { margin-bottom: 0; }
#layer-name-ctrl { margin-top: 20px; }
#import-layer-name,
@@ -316,6 +324,8 @@ h2.panel-title { font-size: 30px; }
/* Make the help in tables invisible until you hover over the right cell */
.hover-help { visibility: hidden; }
#add-remove-layer-btn { margin-bottom: 20px; }
/* Blue highlight animation for tasks and directory structure tables */
.highlight { -webkit-animation: target-fade 15s 1; -moz-animation: target-fade 15s 1; animation: target-fade 15s 1; }
@-webkit-keyframes target-fade { 0% { background-color: #D9EDF7; } 25% { background-color: #D9EDF7; } 100% { background-color: white; } }

View File

@@ -10,11 +10,20 @@ function layerDetailsPageInit (ctx) {
var targetTab = $("#targets-tab");
var machineTab = $("#machines-tab");
var detailsTab = $("#details-tab");
var editLayerSource = $("#edit-layer-source");
var saveSourceChangesBtn = $("#save-changes-for-switch");
var layerGitRefInput = $("#layer-git-ref");
var layerSubDirInput = $('#layer-subdir');
targetTab.on('show.bs.tab', targetsTabShow);
detailsTab.on('show.bs.tab', detailsTabShow);
machineTab.on('show.bs.tab', machinesTabShow);
/* setup the dependencies typeahead */
libtoaster.makeTypeahead(layerDepInput, libtoaster.ctx.layersTypeAheadUrl, { include_added: "true" }, function(item){
libtoaster.makeTypeahead(layerDepInput,
libtoaster.ctx.layersTypeAheadUrl,
{ include_added: "true" }, function(item){
currentLayerDepSelection = item;
layerDepBtn.removeAttr("disabled");
});
@@ -25,20 +34,6 @@ function layerDetailsPageInit (ctx) {
}
});
$(window).on('hashchange', function(e){
switch(window.location.hash){
case '#machines':
machineTab.tab('show');
break;
case '#recipes':
targetTab.tab('show');
break;
default:
detailsTab.tab('show');
break;
}
});
function addRemoveDep(depLayerId, add, doneCb) {
var data = { layer_version_id : ctx.layerVersion.id };
if (add)
@@ -150,6 +145,7 @@ function layerDetailsPageInit (ctx) {
});
});
function defaultAddBtnText(){
var text = " Add the "+ctx.layerVersion.name+" layer to your project";
addRmLayerBtn.text(text);
@@ -157,12 +153,12 @@ function layerDetailsPageInit (ctx) {
addRmLayerBtn.removeClass("btn-danger");
}
detailsTab.on('show', function(){
function detailsTabShow(){
if (!ctx.layerVersion.inCurrentPrj)
defaultAddBtnText();
window.location.hash = "details";
});
window.location.hash = "information";
}
function targetsTabShow(){
if (!ctx.layerVersion.inCurrentPrj){
@@ -216,7 +212,6 @@ function layerDetailsPageInit (ctx) {
});
targetTab.on('show.bs.tab', targetsTabShow);
function machinesTabShow(){
if (!ctx.layerVersion.inCurrentPrj) {
@@ -233,8 +228,6 @@ function layerDetailsPageInit (ctx) {
window.location.hash = "machines";
}
machineTab.on('show.bs.tab', machinesTabShow);
$(".pagesize").change(function(){
var search = libtoaster.parseUrlParams();
search.limit = this.value;
@@ -423,4 +416,101 @@ function layerDetailsPageInit (ctx) {
$(".glyphicon-trash").tooltip();
$(".commit").tooltip();
editLayerSource.click(function() {
/* Kindly bring the git layers imported from layerindex to normal page
* and not this new page :(
*/
$(this).hide();
saveSourceChangesBtn.attr("disabled", "disabled");
$("#git-repo-info, #directory-info").hide();
$("#edit-layer-source-form").fadeIn();
if ($("#layer-dir-path-in-details").val() == "") {
//Local dir path is empty...
$("#repo").prop("checked", true);
$("#layer-git").fadeIn();
$("#layer-dir").hide();
} else {
$("#layer-git").hide();
$("#layer-dir").fadeIn();
}
});
$('input:radio[name="source-location"]').change(function() {
if ($('input[name=source-location]:checked').val() == "repo") {
$("#layer-git").fadeIn();
$("#layer-dir").hide();
if ($("#layer-git-repo-url").val().length === 0 && layerGitRefInput.val().length === 0) {
saveSourceChangesBtn.attr("disabled", "disabled");
}
} else {
$("#layer-dir").fadeIn();
$("#layer-git").hide();
}
});
$("#layer-dir-path-in-details").keyup(function() {
saveSourceChangesBtn.removeAttr("disabled");
});
$("#layer-git-repo-url").keyup(function() {
if ($("#layer-git-repo-url").val().length > 0 && layerGitRefInput.val().length > 0) {
saveSourceChangesBtn.removeAttr("disabled");
}
});
layerGitRefInput.keyup(function() {
if ($("#layer-git-repo-url").val().length > 0 && layerGitRefInput.val().length > 0) {
saveSourceChangesBtn.removeAttr("disabled");
}
});
layerSubDirInput.keyup(function(){
if ($(this).val().length > 0){
saveSourceChangesBtn.removeAttr("disabled");
}
});
$('#cancel-changes-for-switch').click(function() {
$("#edit-layer-source-form").hide();
$("#directory-info, #git-repo-info").fadeIn();
editLayerSource.show();
});
saveSourceChangesBtn.click(function() {
var layerData = {
vcs_url: $('#layer-git-repo-url').val(),
commit: layerGitRefInput.val(),
dirpath: layerSubDirInput.val(),
local_source_dir: $('#layer-dir-path-in-details').val(),
};
if ($('input[name=source-location]:checked').val() == "repo") {
layerData.local_source_dir = "";
} else {
layerData.vcs_url = "";
layerData.git_ref = "";
}
$.ajax({
type: "POST",
url: ctx.xhrUpdateLayerUrl,
data: layerData,
headers: { 'X-CSRFToken' : $.cookie('csrftoken')},
success: function (data) {
if (data.error != "ok") {
console.warn(data.error);
} else {
/* success layer property changed */
window.location.reload();
}
},
error: function (data) {
console.warn("Call failed");
console.warn(data);
}
});
});
}

View File

@@ -113,16 +113,18 @@
<span class="glyphicon glyphicon-question-sign get-help" title="You can provide a Git branch, a tag or a commit SHA as the revision"></span>
</label>
<input type="text" class="form-control" id="layer-git-ref" required>
<span class="help-inline" style="diaply:none;" id="invalid-layer-revision-hint"></span>
<span class="help-inline" style="display:none;" id="invalid-layer-revision-hint"></span>
</div>
</fieldset>
<fieldset class="fields-apart-from-layer-name" id="local-dir" style="display:none;">
<legend>Layer directory information</legend>
<label for="local-dir-path" class="control-label">Enter the absolute path to the layer directory</label>
<input type="text" class="form-control" id="local-dir-path" required/>
<p class="help-block" id="hintError-dir-path-starts-with-slash" style="display:none;">The absolute path must start with "/".</p>
<p class="help-block" id="hintError-dir-path" style="display:none;">The directory path cannot include spaces or any of these characters: . \ ? % * : | " " &lt; &gt;</p>
<div class="form-group">
<label for="local-dir-path" class="control-label">Enter the absolute path to the layer directory</label>
<input type="text" class="form-control" id="local-dir-path" required/>
<p class="help-block" id="hintError-dir-path-starts-with-slash" style="display:none;">The absolute path must start with "/".</p>
<p class="help-block" id="hintError-dir-path" style="display:none;">The directory path cannot include spaces or any of these characters: . \ ? % * : | " " &lt; &gt;</p>
</div>
</fieldset>
<fieldset class="fields-apart-from-layer-name">

View File

@@ -84,11 +84,16 @@
</script>
<div class="page-header">
{% if layerversion.layer.local_source_dir %}
<h1>{{layerversion.layer.name}} <small class="commit" style="display:none;"></small>
</h1>
{% else %}
<h1>{{layerversion.layer.name}} <small class="commit"
{% if layerversion.get_vcs_reference|length > 13 %}
data-toggle="tooltip" title="{{layerversion.get_vcs_reference}}"
{% endif %}>({{layerversion.get_vcs_reference|truncatechars:13}})</small>
</h1>
{% endif %}
</div>
<div class="row">
<!-- container for tabs -->
@@ -97,6 +102,19 @@
<button type="button" class="close" id="dismiss-alert">&times;</button>
<span id="alert-msg"></span>
</div>
{% if layerversion.id not in projectlayers %}
<button id="add-remove-layer-btn" data-directive="add" class="btn btn-default btn-lg btn-block">
<span class="glyphicon glyphicon-plus"></span>
Add the {{layerversion.layer.name}} layer to your project
</button>
{% else %}
<button id="add-remove-layer-btn" data-directive="remove" class="btn btn-default btn-block btn-lg btn-danger">
<span class="glyphicon glyphicon-trash"></span>
Remove the {{layerversion.layer.name}} layer from your project
</button>
{% endif %}
<ul class="nav nav-tabs">
<li class="active">
<a data-toggle="tab" href="#information" id="details-tab">Layer details</a>
@@ -109,23 +127,21 @@
</li>
</ul>
<div class="tab-content">
<span class="button-place">
{% if layerversion.id not in projectlayers %}
<button id="add-remove-layer-btn" data-directive="add" class="btn btn-default btn-lg btn-block">
<span class="glyphicon glyphicon-plus"></span>
Add the {{layerversion.layer.name}} layer to your project
</button>
{% else %}
<button id="add-remove-layer-btn" data-directive="remove" class="btn btn-default btn-block btn-lg btn-danger">
<span class="glyphicon glyphicon-trash"></span>
Remove the {{layerversion.layer.name}} layer from your project
</button>
{% endif %}
</span>
<!-- layer details pane -->
<div id="information" class="tab-pane active">
<dl class="dl-horizontal">
<h3>Layer source code location</h3>
{% if layerversion.layer.local_source_dir %}
<dl class="dl-horizontal" id="directory-info">
<dt>
Path to the layer directory
</dt>
<dd>
<code>{{layerversion.layer.local_source_dir}}</code>
</dd>
</dl>
{% else %}
<dl class="dl-horizontal" id="git-repo-info">
<dt class="">
<span class="glyphicon glyphicon-question-sign get-help" title="Fetch/clone URL of the repository"></span>
Repository URL
@@ -139,11 +155,9 @@
<div class="form-group">
<input type="text" class="form-control" value="{{layerversion.layer.vcs_url}}">
</div>
<button data-layer-prop="vcs_url" class="btn btn-default change-btn" type="button">Save</button>
<a href="#" style="display:none" class="btn btn-link cancel">Cancel</a>
</form>
<span class="glyphicon glyphicon-edit"></span>
</dd>
{% if layerversion.dirpath %}
<dt>
<span class="glyphicon glyphicon-question-sign get-help" title="Subdirectory within the repository where the layer is located, if not in the root (usually only used if the repository contains more than one layer)"></span>
Repository subdirectory
@@ -158,12 +172,9 @@
<div class="form-group">
<input type="text" class="form-control" value="{{layerversion.dirpath}}">
</div>
<button data-layer-prop="dirpath" class="btn btn-default change-btn" type="button">Save</button>
<a href="#" style="display:none" class="btn btn-link cancel">Cancel</a>
</form>
<span id="change-subdir" class="glyphicon glyphicon-edit"></span>
<span class="glyphicon glyphicon-trash delete-current-value" data-toggle="tooltip" title="Delete"></span>
</dd>
{% endif %}
<dt>
<span class="glyphicon glyphicon-question-sign get-help" title="The Git branch, tag or commit"></span>
Git revision
@@ -174,36 +185,96 @@
<div class="form-group">
<input type="text" class="form-control" value="{{layerversion.get_vcs_reference}}">
</div>
<button data-layer-prop="commit" class="btn btn-default change-btn" type="button">Save</button>
<a href="#" style="display:none" class="btn btn-link cancel">Cancel</a>
</form>
<span class="glyphicon glyphicon-edit"></i>
</dd>
<dt>
<span class="glyphicon glyphicon-question-sign get-help" title="Other layers this layer depends upon"></span>
Layer dependencies
</dt>
<dd>
<ul class="list-unstyled current-value" id="layer-deps-list">
{% for ld in layerversion.dependencies.all %}
<li data-layer-id="{{ld.depends_on.id}}">
<a data-toggle="tooltip" title="{{ld.depends_on.layer.vcs_url}} | {{ld.depends_on.get_vcs_reference}}" href="{% url 'layerdetails' project.id ld.depends_on.id %}">{{ld.depends_on.layer.name}}</a>
<span class="glyphicon glyphicon-trash " data-toggle="tooltip" title="Delete"></span>
</li>
{% endfor %}
</ul>
<form class="form-inline add-deps">
<div class="form-group">
<input class="form-control" type="text" autocomplete="off" data-minLength="1" data-autocomplete="off" placeholder="Type a layer name" id="layer-dep-input">
</div>
<a class="btn btn-default" id="add-layer-dependency-btn" disabled="disabled">
Add layer
</a>
<span class="help-block add-deps">You can only add layers Toaster knows about</span>
</form>
</dd>
</dl>
</div>
{% endif %}
{% if layerversion.layer_source == layer_source.TYPE_IMPORTED %}
<button class="btn btn-default btn-lg" id="edit-layer-source" style="margin-left:220px;">Edit layer source code location</button>
{% endif %}
<form id="edit-layer-source-form" style="display:none;">
<fieldset>
<legend class="radioLegend">Where is the layer source code?</legend>
<div class="radio">
<label>
<input type="radio" name="source-location" id="repo" value="repo">
In a <strong>Git repository</strong>
</label>
<p class="help-block" style="margin-left:20px;width:70%;">To build the layer Toaster must be able to access the Git repository, otherwise builds will fail. Toaster will fetch and checkout your chosen Git revision every time you start a build.</p>
</div>
<div class="radio" style="margin-top:15px;">
<label>
<input type="radio" name="source-location" id="dir" value="dir" checked>
In a <strong>directory</strong>
</label>
<p class="help-block" style="margin-left:20px;width:70%;">Use this option for quick layer development, by simply providing the path to the layer source code.</p>
</div>
</fieldset>
<fieldset id="layer-git">
<legend>Git repository information</legend>
<div class="form-group">
<label for="layer-git-repo-url">
Git repository URL
<span class="glyphicon glyphicon-question-sign get-help" title="Fetch/clone URL of the repository. Currently, Toaster only supports Git repositories." ></span>
</label>
<input type="text" id="layer-git-repo-url" class="form-control" value="{{layerversion.layer.vcs_url|default_if_none:''}}">
</div>
<div class="form-group">
<label for="layer-subdir">
Repository subdirectory
<span class="text-muted">(optional)</span>
<span class="glyphicon glyphicon-question-sign get-help" title="Subdirectory within the repository where the layer is located, if not in the root (usually only used if the repository contains more than one layer)"></span>
</label>
<input type="text" class="form-control" id="layer-subdir" value="{{layerversion.dirpath|default_if_none:''}}">
</div>
<div class="form-group" id="layer-revision-ctrl">
<label for="layer-git-ref">Git revision
<span class="glyphicon glyphicon-question-sign get-help" title="You can provide a Git branch, a tag or a commit SHA as the revision"></span>
</label>
<input type="text" class="form-control" id="layer-git-ref" value="{{layerversion.get_vcs_reference|default_if_none:''}}">
<span class="help-inline" style="display:none;" id="invalid-layer-revision-hint"></span>
</div>
</fieldset>
<fieldset id="layer-dir">
<legend>Layer directory information</legend>
<div class="form-group">
<label for="layer-dir-path">
Enter the absolute path to the layer directory
</label>
<input type="text" id="layer-dir-path-in-details" class="form-control" value="{{layerversion.layer.local_source_dir}}" required>
</div>
</fieldset>
<div style="margin-top:25px;">
<a href="#" class="btn btn-primary btn-lg" id="save-changes-for-switch">Save changes</a>
<a href="#" class="btn btn-link btn-lg" id="cancel-changes-for-switch">Cancel</a>
</div>
</form>
<h3 class="top-air">Layer dependencies
<span class="glyphicon glyphicon-question-sign get-help" title="Other layers this layer depends upon"></span>
</h3>
<ul class="list-unstyled current-value lead" id="layer-deps-list">
{% for ld in layerversion.dependencies.all %}
<li data-layer-id="{{ld.depends_on.id}}">
<a data-toggle="tooltip" title="{{ld.depends_on.layer.vcs_url}} | {{ld.depends_on.get_vcs_reference}}" href="{% url 'layerdetails' project.id ld.depends_on.id %}">{{ld.depends_on.layer.name}}</a>
<span class="glyphicon glyphicon-trash " data-toggle="tooltip" title="Delete"></span>
</li>
{% endfor %}
</ul>
<form class="form-inline add-deps">
<div class="form-group">
<input class="form-control" type="text" autocomplete="off" data-minLength="1" data-autocomplete="off" placeholder="Type a layer name" id="layer-dep-input">
</div>
<a class="btn btn-default" id="add-layer-dependency-btn" disabled="disabled">
Add layer
</a>
<span class="help-block add-deps">You can only add layers Toaster knows about</span>
</form>
</div>
<!-- end layerdetails tab -->
<!-- targets tab -->
<div id="recipes" class="tab-pane">

View File

@@ -57,10 +57,7 @@
<p class="lead"><span id="project-machine-name"></span> <span class="glyphicon glyphicon-edit" id="change-machine-toggle"></span></p>
<form id="select-machine-form" style="display:none;" class="form-inline">
<div class="alert alert-info">
<strong>Machine changes have a big impact on build outcome.</strong> You cannot really compare the builds for the new machine with the previous ones.
</div>
<span class="help-block">Machine suggestions come from the list of layers added to your project. If you don't see the machine you are looking for, <a href="{% url 'projectmachines' project.id %}">check the full list of machines</a></span>
<div class="form-group">
<input class="form-control" id="machine-change-input" autocomplete="off" value="" data-provide="typeahead" data-minlength="1" data-autocomplete="off" type="text">
</div>

View File

@@ -251,16 +251,16 @@ function validate_new_variable() {
}
}
var bad_chars = /[^a-zA-Z0-9\-_]/.test(variable);
var bad_chars = /[^a-zA-Z0-9\-_/]/.test(variable);
var has_spaces = (0 <= variable.indexOf(" "));
var only_spaces = (0 < variable.length) && (0 == variable.trim().length);
if (only_spaces) {
error_msg = "A valid variable name cannot include spaces";
} else if (bad_chars && has_spaces) {
error_msg = "A valid variable name can only include letters, numbers, underscores, dashes, and cannot include spaces";
error_msg = "A valid variable name can only include letters, numbers and the special characters <code> _ - /</code>. Variable names cannot include spaces";
} else if (bad_chars) {
error_msg = "A valid variable name can only include letters, numbers, underscores, and dashes";
error_msg = "A valid variable name can only include letters, numbers and the special characters <code>_ - /</code>";
}
if ("" != error_msg) {

View File

@@ -199,19 +199,25 @@ urlpatterns = patterns('toastergui.views',
url(r'^js-unit-tests/$', 'jsunittests', name='js-unit-tests'),
# image customisation functionality
url(r'^xhr_customrecipe/(?P<recipe_id>\d+)/packages/(?P<package_id>\d+|)$',
'xhr_customrecipe_packages', name='xhr_customrecipe_packages'),
url(r'^xhr_customrecipe/(?P<recipe_id>\d+)'
'/packages/(?P<package_id>\d+|)$',
api.XhrCustomRecipePackages.as_view(),
name='xhr_customrecipe_packages'),
url(r'^xhr_customrecipe/(?P<recipe_id>\d+)/packages/$',
'xhr_customrecipe_packages', name='xhr_customrecipe_packages'),
api.XhrCustomRecipePackages.as_view(),
name='xhr_customrecipe_packages'),
url(r'^xhr_customrecipe/(?P<recipe_id>\d+)$', 'xhr_customrecipe_id',
url(r'^xhr_customrecipe/(?P<recipe_id>\d+)$',
api.XhrCustomRecipeId.as_view(),
name='xhr_customrecipe_id'),
url(r'^xhr_customrecipe/', 'xhr_customrecipe',
url(r'^xhr_customrecipe/',
api.XhrCustomRecipe.as_view(),
name='xhr_customrecipe'),
url(r'^xhr_buildrequest/project/(?P<pid>\d+)$',
api.XhrBuildRequest.as_view(),
api.XhrBuildRequest.as_view(),
name='xhr_buildrequest'),
url(r'^mostrecentbuilds$', api.MostRecentBuildsView.as_view(),

View File

@@ -19,43 +19,37 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
# pylint: disable=method-hidden
# Gives E:848, 4: An attribute defined in json.encoder line 162 hides this method (method-hidden)
# which is an invalid warning
import operator,re
import re
from django.db.models import F, Q, Sum, Count, Max
from django.db import IntegrityError, Error
from django.db.models import F, Q, Sum
from django.db import IntegrityError
from django.shortcuts import render, redirect, get_object_or_404
from orm.models import Build, Target, Task, Layer, Layer_Version, Recipe, LogMessage, Variable
from orm.models import Task_Dependency, Recipe_Dependency, Package, Package_File, Package_Dependency
from orm.models import Target_Installed_Package, Target_File, Target_Image_File, CustomImagePackage
from orm.models import TargetKernelFile, TargetSDKFile
from orm.models import Build, Target, Task, Layer, Layer_Version, Recipe
from orm.models import LogMessage, Variable, Package_Dependency, Package
from orm.models import Task_Dependency, Package_File
from orm.models import Target_Installed_Package, Target_File
from orm.models import TargetKernelFile, TargetSDKFile, Target_Image_File
from orm.models import BitbakeVersion, CustomImageRecipe
from bldcontrol import bbcontroller
from django.views.decorators.cache import cache_control
from django.core.urlresolvers import reverse, resolve
from django.core.exceptions import MultipleObjectsReturned, ObjectDoesNotExist
from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
from django.http import HttpResponseBadRequest, HttpResponseNotFound
from django.http import HttpResponseNotFound
from django.utils import timezone
from django.utils.html import escape
from datetime import timedelta, datetime
from django.utils import formats
from toastergui.templatetags.projecttags import json as jsonfilter
from decimal import Decimal
import json
import os
from os.path import dirname
from functools import wraps
import itertools
import mimetypes
import logging
logger = logging.getLogger("toaster")
class MimeTypeFinder(object):
# setting this to False enables additional non-standard mimetypes
# to be included in the guess
@@ -1498,18 +1492,6 @@ if True:
return context
def xhr_response(fun):
"""
Decorator for REST methods.
calls jsonfilter on the returned dictionary and returns result
as HttpResponse object of content_type application/json
"""
@wraps(fun)
def wrapper(*args, **kwds):
return HttpResponse(jsonfilter(fun(*args, **kwds)),
content_type="application/json")
return wrapper
def jsunittests(request):
""" Provides a page for the js unit tests """
bbv = BitbakeVersion.objects.filter(branch="master").first()
@@ -1767,187 +1749,6 @@ if True:
return HttpResponse(jsonfilter(json_response), content_type = "application/json")
@xhr_response
def xhr_customrecipe(request):
"""
Custom image recipe REST API
Entry point: /xhr_customrecipe/
Method: POST
Args:
name: name of custom recipe to create
project: target project id of orm.models.Project
base: base recipe id of orm.models.Recipe
Returns:
{"error": "ok",
"url": <url of the created recipe>}
or
{"error": <error message>}
"""
# check if request has all required parameters
for param in ('name', 'project', 'base'):
if param not in request.POST:
return {"error": "Missing parameter '%s'" % param}
# get project and baserecipe objects
params = {}
for name, model in [("project", Project),
("base", Recipe)]:
value = request.POST[name]
try:
params[name] = model.objects.get(id=value)
except model.DoesNotExist:
return {"error": "Invalid %s id %s" % (name, value)}
# create custom recipe
try:
# Only allowed chars in name are a-z, 0-9 and -
if re.search(r'[^a-z|0-9|-]', request.POST["name"]):
return {"error": "invalid-name"}
custom_images = CustomImageRecipe.objects.all()
# Are there any recipes with this name already in our project?
existing_image_recipes_in_project = custom_images.filter(
name=request.POST["name"], project=params["project"])
if existing_image_recipes_in_project.count() > 0:
return {"error": "image-already-exists"}
# Are there any recipes with this name which aren't custom
# image recipes?
custom_image_ids = custom_images.values_list('id', flat=True)
existing_non_image_recipes = Recipe.objects.filter(
Q(name=request.POST["name"]) & ~Q(pk__in=custom_image_ids)
)
if existing_non_image_recipes.count() > 0:
return {"error": "recipe-already-exists"}
# create layer 'Custom layer' and version if needed
layer = Layer.objects.get_or_create(
name=CustomImageRecipe.LAYER_NAME,
summary="Layer for custom recipes",
vcs_url="file:///toaster_created_layer")[0]
# Check if we have a layer version already
# We don't use get_or_create here because the dirpath will change
# and is a required field
lver = Layer_Version.objects.filter(Q(project=params['project']) &
Q(layer=layer) &
Q(build=None)).last()
if lver == None:
lver, created = Layer_Version.objects.get_or_create(
project=params['project'],
layer=layer,
dirpath="toaster_created_layer")
# Add a dependency on our layer to the base recipe's layer
LayerVersionDependency.objects.get_or_create(
layer_version=lver,
depends_on=params["base"].layer_version)
# Add it to our current project if needed
ProjectLayer.objects.get_or_create(project=params['project'],
layercommit=lver,
optional=False)
# Create the actual recipe
recipe, created = CustomImageRecipe.objects.get_or_create(
name=request.POST["name"],
base_recipe=params["base"],
project=params["project"],
layer_version=lver,
is_image=True)
# If we created the object then set up these fields. They may get
# overwritten later on and cause the get_or_create to create a
# duplicate if they've changed.
if created:
recipe.file_path = request.POST["name"]
recipe.license = "MIT"
recipe.version = "0.1"
recipe.save()
except Error as err:
return {"error": "Can't create custom recipe: %s" % err}
# Find the package list from the last build of this recipe/target
target = Target.objects.filter(Q(build__outcome=Build.SUCCEEDED) &
Q(build__project=params['project']) &
(Q(target=params['base'].name) |
Q(target=recipe.name))).last()
if target:
# Copy in every package
# We don't want these packages to be linked to anything because
# that underlying data may change e.g. delete a build
for tpackage in target.target_installed_package_set.all():
try:
built_package = tpackage.package
# The package had no recipe information, so it is a ghost
# package; skip it
if built_package.recipe == None:
continue;
config_package = CustomImagePackage.objects.get(
name=built_package.name)
recipe.includes_set.add(config_package)
except Exception as e:
logger.warning("Error adding package %s %s" %
(tpackage.package.name, e))
pass
return {"error": "ok",
"packages" : recipe.get_all_packages().count(),
"url": reverse('customrecipe', args=(params['project'].pk,
recipe.id))}
@xhr_response
def xhr_customrecipe_id(request, recipe_id):
"""
Set of ReST API processors working with recipe id.
Entry point: /xhr_customrecipe/<recipe_id>
Methods:
GET - Get details of custom image recipe
DELETE - Delete custom image recipe
Returns:
GET:
{"error": "ok",
"info": dictionary of field name -> value pairs
of the CustomImageRecipe model}
DELETE:
{"error": "ok"}
or
{"error": <error message>}
"""
try:
custom_recipe = CustomImageRecipe.objects.get(id=recipe_id)
except CustomImageRecipe.DoesNotExist:
return {"error": "Custom recipe with id=%s "
"not found" % recipe_id}
if request.method == 'GET':
info = {"id" : custom_recipe.id,
"name" : custom_recipe.name,
"base_recipe_id": custom_recipe.base_recipe.id,
"project_id": custom_recipe.project.id,
}
return {"error": "ok", "info": info}
elif request.method == 'DELETE':
custom_recipe.delete()
return {"error": "ok"}
else:
return {"error": "Method %s is not supported" % request.method}
def customrecipe_download(request, pid, recipe_id):
recipe = get_object_or_404(CustomImageRecipe, pk=recipe_id)
@@ -1960,232 +1761,6 @@ if True:
return response
def _traverse_dependents(next_package_id, rev_deps, all_current_packages, tree_level=0):
"""
Recurse through reverse dependency tree for next_package_id.
Limit the reverse dependency search to packages not already scanned,
that is, not already in rev_deps.
Limit the scan to a depth (tree_level) not exceeding the count of
all packages in the custom image, and if that depth is exceeded
return False, pop out of the recursion, and write a warning
to the log, but this is unlikely, suggesting a dependency loop
not caught by bitbake.
On return, the input/output arg rev_deps is appended with queryset
dictionary elements, annotated for use in the customimage template.
The list has unsorted, but unique elements.
"""
max_dependency_tree_depth = all_current_packages.count()
if tree_level >= max_dependency_tree_depth:
logger.warning(
"The number of reverse dependencies "
"for this package exceeds " + max_dependency_tree_depth +
" and the remaining reverse dependencies will not be removed")
return True
package = CustomImagePackage.objects.get(id=next_package_id)
dependents = \
package.package_dependencies_target.annotate(
name=F('package__name'),
pk=F('package__pk'),
size=F('package__size'),
).values("name", "pk", "size").exclude(
~Q(pk__in=all_current_packages)
)
for pkg in dependents:
if pkg in rev_deps:
# already seen, skip dependent search
continue
rev_deps.append(pkg)
if (_traverse_dependents(
pkg["pk"], rev_deps, all_current_packages, tree_level+1)):
return True
return False
def _get_all_dependents(package_id, all_current_packages):
"""
Returns sorted list of recursive reverse dependencies for package_id,
as a list of dictionary items, by recursing through dependency
relationships.
"""
rev_deps = []
_traverse_dependents(package_id, rev_deps, all_current_packages)
rev_deps = sorted(rev_deps, key=lambda x: x["name"])
return rev_deps
@xhr_response
def xhr_customrecipe_packages(request, recipe_id, package_id):
"""
ReST API to add/remove packages to/from custom recipe.
Entry point: /xhr_customrecipe/<recipe_id>/packages/<package_id>
Methods:
PUT - Add package to the recipe
DELETE - Delete package from the recipe
GET - Get package information
Returns:
{"error": "ok"}
or
{"error": <error message>}
"""
try:
recipe = CustomImageRecipe.objects.get(id=recipe_id)
except CustomImageRecipe.DoesNotExist:
return {"error": "Custom recipe with id=%s "
"not found" % recipe_id}
if package_id:
try:
package = CustomImagePackage.objects.get(id=package_id)
except Package.DoesNotExist:
return {"error": "Package with id=%s "
"not found" % package_id}
if request.method == 'GET':
# If no package_id then list the current packages
if not package_id:
total_size = 0
packages = recipe.get_all_packages().values("id",
"name",
"version",
"size")
for package in packages:
package['size_formatted'] = \
filtered_filesizeformat(package['size'])
total_size += package['size']
return {"error": "ok",
"packages" : list(packages),
"total" : len(packages),
"total_size" : total_size,
"total_size_formatted" :
filtered_filesizeformat(total_size)
}
else:
all_current_packages = recipe.get_all_packages()
# Dependencies for package which aren't satisfied by the
# current packages in the custom image recipe
deps =\
package.package_dependencies_source.for_target_or_none(
recipe.name)['packages'].annotate(
name=F('depends_on__name'),
pk=F('depends_on__pk'),
size=F('depends_on__size'),
).values("name", "pk", "size").filter(
# There are two depends types; we don't know why
(Q(dep_type=Package_Dependency.TYPE_TRDEPENDS) |
Q(dep_type=Package_Dependency.TYPE_RDEPENDS)) &
~Q(pk__in=all_current_packages)
)
# Reverse dependencies which are needed by packages that are
# in the image. Recursive search providing all dependents,
# not just immediate dependents.
reverse_deps = _get_all_dependents(package_id, all_current_packages)
total_size_deps = 0
total_size_reverse_deps = 0
for dep in deps:
dep['size_formatted'] = \
filtered_filesizeformat(dep['size'])
total_size_deps += dep['size']
for dep in reverse_deps:
dep['size_formatted'] = \
filtered_filesizeformat(dep['size'])
total_size_reverse_deps += dep['size']
return {"error": "ok",
"id": package.pk,
"name": package.name,
"version": package.version,
"unsatisfied_dependencies": list(deps),
"unsatisfied_dependencies_size": total_size_deps,
"unsatisfied_dependencies_size_formatted":
filtered_filesizeformat(total_size_deps),
"reverse_dependencies": list(reverse_deps),
"reverse_dependencies_size": total_size_reverse_deps,
"reverse_dependencies_size_formatted":
filtered_filesizeformat(total_size_reverse_deps)}
included_packages = recipe.includes_set.values_list('pk', flat=True)
if request.method == 'PUT':
# If we're adding back a package which used to be included in this
# image all we need to do is remove it from the excludes
if package.pk in included_packages:
try:
recipe.excludes_set.remove(package)
return {"error": "ok"}
except Package.DoesNotExist:
return {"error":
"Package %s not found in excludes but was in "
"included list" % package.name}
else:
recipe.appends_set.add(package)
# Make sure that package is not in the excludes set
try:
recipe.excludes_set.remove(package)
except:
pass
# Add the dependencies we think will be added to the recipe
# as a result of appending this package.
# TODO this should recurse down the entire deps tree
for dep in package.package_dependencies_source.all_depends():
try:
cust_package = CustomImagePackage.objects.get(
name=dep.depends_on.name)
recipe.includes_set.add(cust_package)
try:
# When adding the pre-requisite package, make
# sure it's not in the excluded list from a
# prior removal.
recipe.excludes_set.remove(cust_package)
except Package.DoesNotExist:
# Don't care if the package had never been excluded
pass
except:
logger.warning("Could not add package's suggested"
"dependencies to the list")
return {"error": "ok"}
elif request.method == 'DELETE':
try:
# If we're deleting a package which is included we need to
# Add it to the excludes list.
if package.pk in included_packages:
recipe.excludes_set.add(package)
else:
recipe.appends_set.remove(package)
all_current_packages = recipe.get_all_packages()
reverse_deps_dictlist = _get_all_dependents(package.pk, all_current_packages)
ids = [entry['pk'] for entry in reverse_deps_dictlist]
reverse_deps = CustomImagePackage.objects.filter(id__in=ids)
for r in reverse_deps:
try:
if r.id in included_packages:
recipe.excludes_set.add(r)
else:
recipe.appends_set.remove(r)
except:
pass
return {"error": "ok"}
except CustomImageRecipe.DoesNotExist:
return {"error": "Tried to remove package that wasn't present"}
else:
return {"error": "Method %s is not supported" % request.method}
def importlayer(request, pid):
template = "importlayer.html"
context = {

View File

@@ -3048,7 +3048,7 @@
PV = "1.5.1+git${SRCPV}"
S = "${WORKDIR}/git/"
S = "${WORKDIR}/git"
EXTRA_OEMAKE = "'CC=${CC}' 'RANLIB=${RANLIB}' 'AR=${AR}' 'CFLAGS=${CFLAGS} -I${S}/include -DWITHOUT_XATTR' 'BUILDDIR=${S}'"
@@ -3945,6 +3945,12 @@
<filename>libsecret</filename>, and
<filename>webkit</filename>).
</para></listitem>
<listitem><para>
Using QEMU in usermode might not work properly when
running 64-bit binaries under 32-bit host machines.
In particular, "qemumips64" is known to not work under
i686.
</para></listitem>
</itemizedlist>
</para>
</section>
@@ -8771,19 +8777,19 @@
within a separately started QEMU or any
other virtual machine manager.
</para></listitem>
<listitem><para><emphasis>"GummibootTarget":</emphasis>
Choose "GummibootTarget" if your hardware is
<listitem><para><emphasis>"Systemd-bootTarget":</emphasis>
Choose "Systemd-bootTarget" if your hardware is
an EFI-based machine with
<filename>gummiboot</filename> as bootloader and
<filename>systemd-boot</filename> as bootloader and
<filename>core-image-testmaster</filename>
(or something similar) is installed.
Also, your hardware under test must be in a
DHCP-enabled network that gives it the same IP
address for each reboot.</para>
<para>If you choose "GummibootTarget", there are
<para>If you choose "Systemd-bootTarget", there are
additional requirements and considerations.
See the
"<link linkend='selecting-gummiboottarget'>Selecting GummibootTarget</link>"
"<link linkend='selecting-systemd-boottarget'>Selecting Systemd-bootTarget</link>"
section, which follows, for more information.
</para></listitem>
<listitem><para><emphasis>"BeagleBoneTarget":</emphasis>
@@ -8829,12 +8835,12 @@
</para>
</section>
<section id='selecting-gummiboottarget'>
<title>Selecting GummibootTarget</title>
<section id='selecting-systemd-boottarget'>
<title>Selecting Systemd-bootTarget</title>
<para>
If you did not set <filename>TEST_TARGET</filename> to
"GummibootTarget", then you do not need any information
"Systemd-bootTarget", then you do not need any information
in this section.
You can skip down to the
"<link linkend='qemu-image-running-tests'>Running Tests</link>"
@@ -8843,14 +8849,14 @@
<para>
If you did set <filename>TEST_TARGET</filename> to
"GummibootTarget", you also need to perform a one-time
"Systemd-bootTarget", you also need to perform a one-time
setup of your master image by doing the following:
<orderedlist>
<listitem><para><emphasis>Set <filename>EFI_PROVIDER</filename>:</emphasis>
Be sure that <filename>EFI_PROVIDER</filename>
is as follows:
<literallayout class='monospaced'>
EFI_PROVIDER = "gummiboot"
EFI_PROVIDER = "systemd-boot"
</literallayout>
</para></listitem>
<listitem><para><emphasis>Build the master image:</emphasis>
@@ -8914,7 +8920,7 @@
<para>
The final thing you need to do when setting
<filename>TEST_TARGET</filename> to "GummibootTarget" is
<filename>TEST_TARGET</filename> to "Systemd-bootTarget" is
to set up the test image:
<orderedlist>
<listitem><para><emphasis>Set up your <filename>local.conf</filename> file:</emphasis>
@@ -8923,7 +8929,7 @@
<literallayout class='monospaced'>
IMAGE_FSTYPES += "tar.gz"
INHERIT += "testimage"
TEST_TARGET = "GummibootTarget"
TEST_TARGET = "Systemd-bootTarget"
TEST_TARGET_IP = "192.168.2.3"
</literallayout>
</para></listitem>
@@ -9319,7 +9325,7 @@
The target controller object used to deploy
and start an image on a particular target
(e.g. QemuTarget, SimpleRemote, and
GummibootTarget).
Systemd-bootTarget).
Tests usually use the following:
<itemizedlist>
<listitem><para><emphasis><filename>ip</filename>:</emphasis>

View File

@@ -1099,36 +1099,6 @@
</para>
</section>
<section id='ref-classes-gummiboot'>
<title><filename>gummiboot.bbclass</filename></title>
<para>
The <filename>gummiboot</filename> class provides functions specific
to the gummiboot bootloader for building bootable images.
This is an internal class and is not intended to be
used directly.
Set the
<link linkend='var-EFI_PROVIDER'><filename>EFI_PROVIDER</filename></link>
variable to "gummiboot" to use this class.
</para>
<para>
For information on more variables used and supported in this class,
see the
<link linkend='var-GUMMIBOOT_CFG'><filename>GUMMIBOOT_CFG</filename></link>,
<link linkend='var-GUMMIBOOT_ENTRIES'><filename>GUMMIBOOT_ENTRIES</filename></link>,
and
<link linkend='var-GUMMIBOOT_TIMEOUT'><filename>GUMMIBOOT_TIMEOUT</filename></link>
variables.
</para>
<para>
You can also see the
<ulink url='http://freedesktop.org/wiki/Software/gummiboot/'>Gummiboot documentation</ulink>
for more information.
</para>
</section>
<section id='ref-classes-gzipnative'>
<title><filename>gzipnative.bbclass</filename></title>
@@ -1388,22 +1358,6 @@
</para>
</section>
<section id='ref-classes-image-swab'>
<title><filename>image-swab.bbclass</filename></title>
<para>
The <filename>image-swab</filename> class enables the
<ulink url='&YOCTO_HOME_URL;/tools-resources/projects/swabber'>Swabber</ulink>
tool in order to detect and log accesses to the host system during
the OpenEmbedded build process.
<note>
This class is currently unmaintained.
The <filename>strace</filename> package needs to be installed
in the build host as a dependency for this tool.
</note>
</para>
</section>
<section id='ref-classes-image-vm'>
<title><filename>image-vm.bbclass</filename></title>
@@ -3331,6 +3285,43 @@
</para>
</section>
<section id='ref-classes-systemd-boot'>
<title><filename>systemd-boot.bbclass</filename></title>
<para>
The <filename>systemd-boot</filename> class provides functions specific
to the systemd-boot bootloader for building bootable images.
This is an internal class and is not intended to be used directly.
<note>
The <filename>systemd-boot</filename> class is a result from
merging the <filename>gummiboot</filename> class used in previous
Yocto Project releases with the <filename>systemd</filename>
project.
</note>
Set the
<link linkend='var-EFI_PROVIDER'><filename>EFI_PROVIDER</filename></link>
variable to "systemd-boot" to use this class.
Doing so creates a standalone EFI bootloader that is not dependent
on systemd.
</para>
<para>
For information on more variables used and supported in this class,
see the
<link linkend='var-SYSTEMD_BOOT_CFG'><filename>SYSTEMD_BOOT_CFG</filename></link>,
<link linkend='var-SYSTEMD_BOOT_ENTRIES'><filename>SYSTEMD_BOOT_ENTRIES</filename></link>,
and
<link linkend='var-SYSTEMD_BOOT_TIMEOUT'><filename>SYSTEMD_BOOT_TIMEOUT</filename></link>
variables.
</para>
<para>
You can also see the
<ulink url='https://www.freedesktop.org/wiki/Software/systemd/'>Systemd documentation</ulink>
for more information.
</para>
</section>
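For illustration, selecting the class for an EFI image comes down to one assignment in a configuration file such as local.conf; the timeout override is optional and the value shown is only an example:

    EFI_PROVIDER = "systemd-boot"
    # optional: shorten the boot menu timeout handled by the class
    SYSTEMD_BOOT_TIMEOUT = "5"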
<section id='ref-classes-terminal'>
<title><filename>terminal.bbclass</filename></title>

View File

@@ -47,7 +47,7 @@
<para>
The default behavior of this task is to run the
<filename>oe_runmake</filename> task if a makefile
<filename>oe_runmake</filename> function if a makefile
(<filename>Makefile</filename>, <filename>makefile</filename>,
or <filename>GNUmakefile</filename>) is found.
If no such file is found, the <filename>do_compile</filename>
@@ -260,6 +260,15 @@
This task runs with the current working directory set to
<filename>${</filename><link linkend='var-B'><filename>B</filename></link><filename>}</filename>,
which is the compilation directory.
The <filename>do_install</filename> task, as well as other tasks
that either directly or indirectly depend on the installed files
(e.g.
<link linkend='ref-tasks-package'><filename>do_package</filename></link>,
<link linkend='ref-tasks-package_write_deb'><filename>do_package_write_*</filename></link>,
and
<link linkend='ref-tasks-rootfs'><filename>do_rootfs</filename></link>),
runs under
<link linkend='fakeroot-and-pseudo'>fakeroot</link>.
<note>
<title>Caution</title>

View File

@@ -1359,28 +1359,25 @@
<glossentry id='var-BPN'><glossterm>BPN</glossterm>
<info>
BPN[doc] = "The bare name of the recipe. This variable is a version of the PN variable but removes common suffixes and prefixes."
BPN[doc] = "This variable is a version of the PN variable but removes common suffixes and prefixes."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
The bare name of the recipe.
This variable is a version of the
<link linkend='var-PN'><filename>PN</filename></link>
variable but removes common suffixes such as
<filename>-native</filename> and
<filename>-cross</filename> as well
as removes common prefixes such as multilib's
variable with common prefixes and suffixes
removed, such as <filename>nativesdk-</filename>,
<filename>-cross</filename>,
<filename>-native</filename>, and multilib's
<filename>lib64-</filename> and
<filename>lib32-</filename>.
The exact list of suffixes removed is specified by the
<link linkend='var-SPECIAL_PKGSUFFIX'><filename>SPECIAL_PKGSUFFIX</filename></link>
variable.
The exact list of prefixes removed is specified by the
The exact lists of prefixes and suffixes removed are
specified by the
<link linkend='var-MLPREFIX'><filename>MLPREFIX</filename></link>
variable.
Prefixes are removed for <filename>multilib</filename>
and <filename>nativesdk-</filename> cases.
and
<link linkend='var-SPECIAL_PKGSUFFIX'><filename>SPECIAL_PKGSUFFIX</filename></link>
variables, respectively.
</para>
</glossdef>
</glossentry>
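Two concrete illustrations of the stripping behaviour described above (the recipe names are only examples):

    # PN = "zlib-native"    ->  BPN = "zlib"     (-native stripped via SPECIAL_PKGSUFFIX)
    # PN = "lib32-ncurses"  ->  BPN = "ncurses"  (lib32- stripped via MLPREFIX)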
@@ -2599,6 +2596,11 @@
<literallayout class='monospaced'>
${<link linkend='var-WORKDIR'>WORKDIR</link>}/image
</literallayout>
<note><title>Caution</title>
Tasks that read from or write to this directory should
run under
<link linkend='fakeroot-and-pseudo'>fakeroot</link>.
</note>
</para>
</glossdef>
</glossentry>
@@ -3139,15 +3141,15 @@
by UUID to allow the kernel to locate the root device
even if the device name changes due to differences in
hardware configuration.
By default, <filename>SYSLINUX_ROOT</filename> is set
By default, <filename>ROOT_VM</filename> is set
as follows:
<literallayout class='monospaced'>
SYSLINUX_ROOT = "root=/dev/sda2"
ROOT_VM ?= "root=/dev/sda2"
</literallayout>
However, you can change this to locate the root device
using the disk signature instead:
<literallayout class='monospaced'>
SYSLINUX_ROOT = "root=PARTUUID=${DISK_SIGNATURE}-02"
ROOT_VM = "root=PARTUUID=${DISK_SIGNATURE}-02"
</literallayout>
</para>
@@ -3530,13 +3532,13 @@
<link linkend='var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></link>),
the <filename>EFI_PROVIDER</filename> variable specifies
the EFI bootloader to use.
The default is "grub-efi", but "gummiboot" can be used
The default is "grub-efi", but "systemd-boot" can be used
instead.
</para>
<para>
See the
<link linkend='ref-classes-gummiboot'><filename>gummiboot</filename></link>
<link linkend='ref-classes-systemd-boot'><filename>systemd-boot</filename></link>
class for more information.
</para>
</glossdef>
@@ -3960,6 +3962,27 @@
</glossdef>
</glossentry>
<glossentry id='var-EXTRANATIVEPATH'><glossterm>EXTRANATIVEPATH</glossterm>
<info>
EXTRANATIVEPATH[doc] = "A list of subdirectories of ${STAGING_BINDIR_NATIVE} added to the beginning of the environment variable PATH."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
A list of subdirectories of
<filename>${</filename><link linkend='var-STAGING_BINDIR_NATIVE'><filename>STAGING_BINDIR_NATIVE</filename></link><filename>}</filename>
added to the beginning of the environment variable
<filename>PATH</filename>.
As an example, the following prepends
"${STAGING_BINDIR_NATIVE}/foo:${STAGING_BINDIR_NATIVE}/bar:"
to <filename>PATH</filename>:
<literallayout class='monospaced'>
EXTRANATIVEPATH = "foo bar"
</literallayout>
</para>
</glossdef>
</glossentry>
<glossentry id='var-EXTRA_OECMAKE'><glossterm>EXTRA_OECMAKE</glossterm>
<info>
EXTRA_OECMAKE[doc] = "Additional cmake options."
@@ -3999,6 +4022,15 @@
"", you need to set the variable to specify any required
GNU options.
</para>
<para>
<link linkend='var-PARALLEL_MAKE'><filename>PARALLEL_MAKE</filename></link>
and
<link linkend='var-PARALLEL_MAKEINST'><filename>PARALLEL_MAKEINST</filename></link>
also make use of
<filename>EXTRA_OEMAKE</filename> to pass the required
flags.
</para>
</glossdef>
</glossentry>
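A minimal recipe sketch of the mechanism described above; the option strings are only examples, and the comment reflects the behaviour stated in the text:

    EXTRA_OEMAKE = "'CC=${CC}' 'CFLAGS=${CFLAGS}'"

    do_compile () {
        # oe_runmake invokes make with ${EXTRA_OEMAKE}, so the options above
        # (and, per the text, the parallel-make flags) reach make
        oe_runmake
    }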
@@ -4128,12 +4160,16 @@
<glossentry id='var-FILES'><glossterm>FILES</glossterm>
<info>
FILES[doc] = "The list of directories or files that are placed in packages."
FILES[doc] = "The list of directories or files that are placed in a package."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
The list of directories or files that are placed in packages.
The list of files and directories that are placed in a
package.
The
<link linkend='var-PACKAGES'><filename>PACKAGES</filename></link>
variable lists the packages generated by a recipe.
</para>
<para>
@@ -4144,7 +4180,7 @@
resulting package.
Here is an example:
<literallayout class='monospaced'>
FILES_${PN} += "${bindir}/mydir1/ ${bindir}/mydir2/myfile"
FILES_${PN} += "${bindir}/mydir1 ${bindir}/mydir2/myfile"
</literallayout>
</para>
@@ -4159,6 +4195,8 @@
You can find a list of these variables at the top of the
<filename>meta/conf/bitbake.conf</filename> file in the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
You will also find the default values of the various
<filename>FILES_*</filename> variables in this file.
</note>
<para>
@@ -4173,7 +4211,6 @@
variable for information on how to identify these files to
the PMS.
</para>
</glossdef>
</glossentry>
@@ -4637,92 +4674,6 @@
</glossdef>
</glossentry>
<glossentry id='var-GUMMIBOOT_CFG'><glossterm>GUMMIBOOT_CFG</glossterm>
<info>
GUMMIBOOT_CFG[doc] = "When EFI_PROVIDER is set to "gummiboot", the GUMMIBOOT_CFG variable specifies the configuration file that should be used."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
When
<link linkend='var-EFI_PROVIDER'><filename>EFI_PROVIDER</filename></link>
is set to "gummiboot", the
<filename>GUMMIBOOT_CFG</filename> variable specifies the
configuration file that should be used.
By default, the
<link linkend='ref-classes-gummiboot'><filename>gummiboot</filename></link>
class sets the <filename>GUMMIBOOT_CFG</filename> as
follows:
<literallayout class='monospaced'>
GUMMIBOOT_CFG ?= "${<link linkend='var-S'>S</link>}/loader.conf"
</literallayout>
</para>
<para>
For information on Gummiboot, see the
<ulink url='http://freedesktop.org/wiki/Software/gummiboot/'>Gummiboot documentation</ulink>.
</para>
</glossdef>
</glossentry>
<glossentry id='var-GUMMIBOOT_ENTRIES'><glossterm>GUMMIBOOT_ENTRIES</glossterm>
<info>
GUMMIBOOT_ENTRIES[doc] = "When EFI_PROVIDER is set to "gummiboot", the GUMMIBOOT_ENTRIES variable specifies a list of entry files (*.conf) to be installed containing one boot entry per file."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
When
<link linkend='var-EFI_PROVIDER'><filename>EFI_PROVIDER</filename></link>
is set to "gummiboot", the
<filename>GUMMIBOOT_ENTRIES</filename> variable specifies
a list of entry files
(<filename>*.conf</filename>) to be installed
containing one boot entry per file.
By default, the
<link linkend='ref-classes-gummiboot'><filename>gummiboot</filename></link>
class sets the <filename>GUMMIBOOT_ENTRIES</filename> as
follows:
<literallayout class='monospaced'>
GUMMIBOOT_ENTRIES ?= ""
</literallayout>
</para>
<para>
For information on Gummiboot, see the
<ulink url='http://freedesktop.org/wiki/Software/gummiboot/'>Gummiboot documentation</ulink>.
</para>
</glossdef>
</glossentry>
<glossentry id='var-GUMMIBOOT_TIMEOUT'><glossterm>GUMMIBOOT_TIMEOUT</glossterm>
<info>
GUMMIBOOT_TIMEOUT[doc] = "When EFI_PROVIDER is set to "gummiboot", the GUMMIBOOT_TIMEOUT variable specifies the boot menu timeout in seconds."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
When
<link linkend='var-EFI_PROVIDER'><filename>EFI_PROVIDER</filename></link>
is set to "gummiboot", the
<filename>GUMMIBOOT_TIMEOUT</filename> variable specifies
the boot menu timeout in seconds.
By default, the
<link linkend='ref-classes-gummiboot'><filename>gummiboot</filename></link>
class sets the <filename>GUMMIBOOT_TIMEOUT</filename> as
follows:
<literallayout class='monospaced'>
GUMMIBOOT_TIMEOUT ?= "10"
</literallayout>
</para>
<para>
For information on Gummiboot, see the
<ulink url='http://freedesktop.org/wiki/Software/gummiboot/'>Gummiboot documentation</ulink>.
</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-h'><title>H</title>
@@ -6073,72 +6024,108 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
<link linkend='var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></link>
variable.
</para>
<para>
The default value of this variable, which is set in the
<filename>meta/conf/bitbake.conf</filename> configuration
file in the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>,
is "cpio.gz".
The Linux kernel's initramfs mechanism, as opposed to the
initial RAM disk
<ulink url='https://en.wikipedia.org/wiki/Initrd'>initrd</ulink>
mechanism, expects an optionally compressed cpio
archive.
</para>
</glossdef>
</glossentry>
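As a sketch, a configuration file could override the default compression of the cpio archive; the choice of xz here is purely illustrative:

    INITRAMFS_FSTYPES = "cpio.xz"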
<glossentry id='var-INITRAMFS_IMAGE'><glossterm>INITRAMFS_IMAGE</glossterm>
<info>
INITRAMFS_IMAGE[doc] = "Causes the OpenEmbedded build system to build an additional recipe as a dependency to your root filesystem recipe (e.g. core-image-sato)."
INITRAMFS_IMAGE[doc] = "Specifies the PROVIDES name of an image recipe that is used to build an initial RAM disk (initramfs) image."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
Causes the OpenEmbedded build system to build an additional
recipe as a dependency to your root filesystem recipe
(e.g. <filename>core-image-sato</filename>).
The additional recipe is used to create an initial RAM disk
(initramfs) that might be needed during the initial boot of
the target system to accomplish such things as loading
kernel modules prior to mounting the root file system.
Specifies the
<link linkend='var-PROVIDES'><filename>PROVIDES</filename></link>
name of an image recipe that is used to build an initial
RAM disk (initramfs) image.
An initramfs provides a temporary root filesystem used for
early system initialization (e.g. loading of modules
needed to locate and mount the "real" root filesystem).
The specified recipe is added as a dependency of the root
filesystem recipe (e.g.
<filename>core-image-sato</filename>).
See the <filename>meta/recipes-core/images/core-image-minimal-initramfs.bb</filename>
recipe in the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
for an example initramfs recipe.
To select this recipe to provide the initramfs,
set <filename>INITRAMFS_IMAGE</filename> to
"core-image-minimal-initramfs".
<note>
The initramfs image recipe should set
<link linkend='var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></link>
to
<link linkend='var-INITRAMFS_FSTYPES'><filename>INITRAMFS_FSTYPES</filename></link>.
</note>
</para>
<para>
When you set the variable, specify the name of the
initramfs you want created.
The following example, which is set in the
<filename>local.conf</filename> configuration file, causes
a separate recipe to be created that results in an
initramfs image named
<filename>core-image-sato-initramfs.bb</filename> to be
created:
<literallayout class='monospaced'>
INITRAMFS_IMAGE = "core-image-minimal-initramfs"
</literallayout>
By default, the
You can also find more information by referencing the
<filename>meta-poky/conf/local.conf.sample.extended</filename>
configuration file in the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>,
the
<link linkend='ref-classes-image'><filename>image</filename></link>
class, and the
<link linkend='ref-classes-kernel'><filename>kernel</filename></link>
class sets this variable to a null string as follows:
<literallayout class='monospaced'>
INITRAMFS_IMAGE = ""
</literallayout>
class to see how to use the
<filename>INITRAMFS_IMAGE</filename> variable.
</para>
<para>
See the
<ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/tree/meta-poky/conf/local.conf.sample.extended'><filename>local.conf.sample.extended</filename></ulink>
file for additional information.
You can also reference the
<ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/tree/meta/classes/kernel.bbclass'><filename>kernel.bbclass</filename></ulink>
file to see how the variable is used.
If <filename>INITRAMFS_IMAGE</filename> is empty, which is
the default, then no initramfs is built.
</para>
<para>
Finally, for more information you can also see the
<link linkend='var-INITRAMFS_IMAGE_BUNDLE'><filename>INITRAMFS_IMAGE_BUNDLE</filename></link>
variable, which allows the generated image to be bundled
inside the kernel image.
</para>
</glossdef>
</glossentry>
<glossentry id='var-INITRAMFS_IMAGE_BUNDLE'><glossterm>INITRAMFS_IMAGE_BUNDLE</glossterm>
<info>
INITRAMFS_IMAGE_BUNDLE[doc] = "Controls whether or not the image recipe specified by INITRAMFS_IMAGE is run through an extra pass during kernel compilation in order to build a single binary that contains both the kernel image and the initial RAM disk (initramfs)."
INITRAMFS_IMAGE_BUNDLE[doc] = "Controls whether or not the image recipe specified by INITRAMFS_IMAGE is run through an extra pass (do_bundle_initramfs) during kernel compilation in order to build a single binary that contains both the kernel image and the initial RAM disk (initramfs)."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
Controls whether or not the image recipe specified by
<link linkend='var-INITRAMFS_IMAGE'><filename>INITRAMFS_IMAGE</filename></link>
is run through an extra pass during kernel compilation
in order to build a single binary that contains both the
kernel image and the initial RAM disk (initramfs).
Using an extra compilation pass ensures that when a kernel
attempts to use an initramfs, it does not encounter
circular dependencies should the initramfs include kernel
modules.
is run through an extra pass
(<link linkend='ref-tasks-bundle_initramfs'><filename>do_bundle_initramfs</filename></link>)
during kernel compilation in order to build a single binary
that contains both the kernel image and the initial RAM disk
(initramfs).
This makes use of the
<link linkend='var-CONFIG_INITRAMFS_SOURCE'><filename>CONFIG_INITRAMFS_SOURCE</filename></link>
kernel feature.
<note>
Using an extra compilation pass to bundle the initramfs
avoids a circular dependency between the kernel recipe and
the initramfs recipe should the initramfs include kernel
modules.
Should that be the case, the initramfs recipe depends on
the kernel for the kernel modules, and the kernel depends
on the initramfs recipe since the initramfs is bundled
inside the kernel image.
</note>
</para>
<para>
@@ -6149,9 +6136,11 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
</para>
<para>
Setting the variable to "1" in a configuration file causes
the OpenEmbedded build system to make the extra pass during
kernel compilation:
Setting the variable to "1" in a configuration file causes the
OpenEmbedded build system to generate a kernel image with the
initramfs specified in
<link linkend='var-INITRAMFS_IMAGE'><filename>INITRAMFS_IMAGE</filename></link>
bundled within:
<literallayout class='monospaced'>
INITRAMFS_IMAGE_BUNDLE = "1"
</literallayout>
@@ -6159,7 +6148,7 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
<link linkend='ref-classes-kernel'><filename>kernel</filename></link>
class sets this variable to a null string as follows:
<literallayout class='monospaced'>
INITRAMFS_IMAGE_BUNDLE = ""
INITRAMFS_IMAGE_BUNDLE ?= ""
</literallayout>
<note>
You must set the
@@ -9111,6 +9100,19 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
${PN}-dbg ${PN}-staticdev ${PN}-dev ${PN}-doc ${PN}-locale ${PACKAGE_BEFORE_PN} ${PN}
</literallayout>
</para>
<para>
During packaging, the
<link linkend='ref-tasks-package'><filename>do_package</filename></link>
task goes through <filename>PACKAGES</filename> and uses
the
<link linkend='var-FILES'><filename>FILES</filename></link>
variable corresponding to each package to assign files to
the package.
If a file matches the <filename>FILES</filename> variable
for more than one package in <filename>PACKAGES</filename>,
it will be assigned to the earliest (leftmost) package.
</para>
</glossdef>
</glossentry>
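A sketch of the ordering rule just described, using a hypothetical ${PN}-tools package: because ${PACKAGE_BEFORE_PN} expands ahead of ${PN} in the default PACKAGES value, the file goes into ${PN}-tools even though the default FILES_${PN} would also match it:

    PACKAGE_BEFORE_PN = "${PN}-tools"
    FILES_${PN}-tools = "${bindir}/helper"
    # ${bindir}/helper matches FILES for both ${PN}-tools and ${PN};
    # the leftmost matching package in PACKAGES (${PN}-tools) receives the file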
@@ -9195,6 +9197,14 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
where <replaceable>x</replaceable> represents the maximum
number of parallel threads <filename>make</filename> can
run.
<note><title>Caution</title>
In order for <filename>PARALLEL_MAKE</filename> to be
effective, <filename>make</filename> must be called
with
<filename>${</filename><link linkend='var-EXTRA_OEMAKE'><filename>EXTRA_OEMAKE</filename></link><filename>}</filename>.
An easy way to ensure this is to use the
<filename>oe_runmake</filename> function.
</note>
</para>
<para>
@@ -9241,16 +9251,24 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
task in order to specify parallel installation.
This variable defaults to the value of
<link linkend='var-PARALLEL_MAKE'><filename>PARALLEL_MAKE</filename></link>.
<note>
If the software being built experiences dependency
issues during the
<note><title>Notes and Cautions</title>
<para>In order for <filename>PARALLEL_MAKEINST</filename>
to be
effective, <filename>make</filename> must be called
with
<filename>${</filename><link linkend='var-EXTRA_OEMAKE'><filename>EXTRA_OEMAKE</filename></link><filename>}</filename>.
An easy way to ensure this is to use the
<filename>oe_runmake</filename> function.</para>
<para>If the software being built experiences
dependency issues during the
<filename>do_install</filename> task that result in
race conditions, you can clear the
<filename>PARALLEL_MAKEINST</filename> variable within
the recipe as a workaround.
For information on addressing race conditions, see the
"<ulink url='&YOCTO_DOCS_DEV_URL;#debugging-parallel-make-races'>Debugging Parallel Make Races</ulink>"
section in the Yocto Project Development Manual.
section in the Yocto Project Development Manual.</para>
</note>
</para>
</glossdef>
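In recipe terms, the workaround mentioned in the note is simply:

    PARALLEL_MAKEINST = ""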
@@ -9328,6 +9346,12 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
versioning scheme changes in some backwards incompatible
way.
</para>
<para>
<filename>PE</filename> is the default value of the
<link linkend='var-PKGE'><filename>PKGE</filename></link>
variable.
</para>
</glossdef>
</glossentry>
@@ -9513,13 +9537,12 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
<glossentry id='var-PKGE'><glossterm>PKGE</glossterm>
<info>
PKGE[doc] = "The epoch of the output package built by the OpenEmbedded build system."
PKGE[doc] = "The epoch of the package(s) built by the recipe."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
The epoch of the output package built by the
OpenEmbedded build system.
The epoch of the package(s) built by the recipe.
By default, <filename>PKGE</filename> is set to
<link linkend='var-PE'><filename>PE</filename></link>.
</para>
@@ -9528,13 +9551,12 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
<glossentry id='var-PKGR'><glossterm>PKGR</glossterm>
<info>
PKGR[doc] = "The revision of the output package built by the OpenEmbedded build system."
PKGR[doc] = "The revision of the package(s) built by the recipe."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
The revision of the output package built by the
OpenEmbedded build system.
The revision of the package(s) built by the recipe.
By default, <filename>PKGR</filename> is set to
<link linkend='var-PR'><filename>PR</filename></link>.
</para>
@@ -9543,13 +9565,13 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
<glossentry id='var-PKGV'><glossterm>PKGV</glossterm>
<info>
PKGV[doc] = "The version of the output package built by the OpenEmbedded build system."
PKGV[doc] = "The version of the package(s) built by the recipe."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
The version of the output package built by the
OpenEmbedded build system.
The version of the package(s) built by the
recipe.
By default, <filename>PKGV</filename> is set to
<link linkend='var-PV'><filename>PV</filename></link>.
</para>
@@ -9698,9 +9720,11 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
The OpenEmbedded build system does not need the aid of
<filename>PR</filename> to know when to rebuild a
recipe.
The build system uses
<link linkend='var-STAMP'><filename>STAMP</filename></link>
and the
The build system uses the task
<ulink url='&YOCTO_DOCS_BB_URL;#checksums'>input checksums</ulink>
along with the
<link linkend='structure-build-tmp-stamps'>stamp</link>
and
<link linkend='shared-state-cache'>shared state cache</link>
mechanisms.
</note>
@@ -9951,6 +9975,51 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
The <filename>PROVIDES</filename> statement results in
the "libav" recipe also being known as "libpostproc".
</para>
<para>
In addition to providing recipes under alternate names,
the <filename>PROVIDES</filename> mechanism is also used
to implement virtual targets.
A virtual target is a name that corresponds to some
particular functionality (e.g. a Linux kernel).
Recipes that provide the functionality in question list the
virtual target in <filename>PROVIDES</filename>.
Recipes that depend on the functionality in question can
include the virtual target in
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
to leave the choice of provider open.
</para>
<para>
Conventionally, virtual targets have names of the form
"virtual/function" (e.g. "virtual/kernel").
The slash is simply part of the name and has no
syntactical significance.
</para>
<para>
The
<link linkend='var-PREFERRED_PROVIDER'><filename>PREFERRED_PROVIDER</filename></link>
variable is used to select which particular recipe
provides a virtual target.
<note>
<para>A corresponding mechanism for virtual runtime
dependencies (packages) exists.
However, the mechanism does not depend on any special
functionality beyond ordinary variable assignments.
For example,
<filename>VIRTUAL-RUNTIME_dev_manager</filename>
refers to the package of the component that manages
the <filename>/dev</filename> directory.</para>
<para>Setting the "preferred provider" for runtime
dependencies is as simple as using the following
assignment in a configuration file:</para>
<literallayout class='monospaced'>
VIRTUAL-RUNTIME_dev_manager = "udev"
</literallayout>
</note>
</para>
</glossdef>
</glossentry>
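A sketch of the virtual-target pattern described above; "linux-yocto" is just one possible provider name:

    # in a kernel recipe, offer the virtual target
    PROVIDES += "virtual/kernel"

    # in a recipe that needs a kernel at build time, depend on the target
    DEPENDS += "virtual/kernel"

    # in a configuration file, pick which recipe satisfies it
    PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"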
@@ -10013,12 +10082,19 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
The version of the recipe.
The version is normally extracted from the recipe filename.
For example, if the recipe is named
<filename>expat_2.0.1.bb</filename>, then the default value of <filename>PV</filename>
will be "2.0.1".
<filename>expat_2.0.1.bb</filename>, then the default value
of <filename>PV</filename> will be "2.0.1".
<filename>PV</filename> is generally not overridden within
a recipe unless it is building an unstable (i.e. development) version from a source code repository
a recipe unless it is building an unstable (i.e.
development) version from a source code repository
(e.g. Git or Subversion).
</para>
<para>
<filename>PV</filename> is the default value of the
<link linkend='var-PKGV'><filename>PKGV</filename></link>
variable.
</para>
</glossdef>
</glossentry>
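For the development-version case mentioned above, a recipe building from Git typically sets PV explicitly; the version string here is only an example:

    PV = "1.5.1+git${SRCPV}"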
@@ -11526,15 +11602,24 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
<glossentry id='var-SERIAL_CONSOLES_CHECK'><glossterm>SERIAL_CONSOLES_CHECK</glossterm>
<info>
SERIAL_CONSOLES_CHECK[doc] = "Similar to SERIAL_CONSOLES except the device is checked for existence before attempting to enable it. Supported only by SysVinit."
SERIAL_CONSOLES_CHECK[doc] = "Selected SERIAL_CONSOLES to check against /proc/console before enabling using getty. Supported only by SysVinit."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
Similar to
<link linkend='var-SERIAL_CONSOLES'><filename>SERIAL_CONSOLES</filename></link>
except the device is checked for existence before attempting
to enable it.
Specifies serial consoles, which must be listed in
<link linkend='var-SERIAL_CONSOLES'><filename>SERIAL_CONSOLES</filename></link>,
to check against <filename>/proc/console</filename>
before enabling them using getty.
This variable allows aliasing in the format:
&lt;device&gt;:&lt;alias&gt;.
If a device was listed as "sclp_line0"
in <filename>/dev/</filename> and "ttyS0" was listed
in <filename>/proc/console</filename>, you would do the
following:
<literallayout class='monospaced'>
SERIAL_CONSOLES_CHECK = "slcp_line0:ttyS0"
</literallayout>
This variable is currently only supported with SysVinit
(i.e. not with systemd).
</para>
@@ -12765,6 +12850,92 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR = "${INC_PR}.3"
</glossdef>
</glossentry>
<glossentry id='var-SYSTEMD_BOOT_CFG'><glossterm>SYSTEMD_BOOT_CFG</glossterm>
<info>
SYSTEMD_BOOT_CFG[doc] = "When EFI_PROVIDER is set to "systemd-boot", the SYSTEMD_BOOT_CFG variable specifies the configuration file that should be used."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
When
<link linkend='var-EFI_PROVIDER'><filename>EFI_PROVIDER</filename></link>
is set to "systemd-boot", the
<filename>SYSTEMD_BOOT_CFG</filename> variable specifies the
configuration file that should be used.
By default, the
<link linkend='ref-classes-systemd-boot'><filename>systemd-boot</filename></link>
class sets the <filename>SYSTEMD_BOOT_CFG</filename> as
follows:
<literallayout class='monospaced'>
SYSTEMD_BOOT_CFG ?= "${<link linkend='var-S'>S</link>}/loader.conf"
</literallayout>
</para>
<para>
For information on Systemd-boot, see the
<ulink url='http://freedesktop.org/wiki/Software/systemd-boot/'>Systemd-boot documentation</ulink>.
</para>
</glossdef>
</glossentry>
<glossentry id='var-SYSTEMD_BOOT_ENTRIES'><glossterm>SYSTEMD_BOOT_ENTRIES</glossterm>
<info>
SYSTEMD_BOOT_ENTRIES[doc] = "When EFI_PROVIDER is set to "systemd-boot", the SYSTEMD_BOOT_ENTRIES variable specifies a list of entry files (*.conf) to be installed containing one boot entry per file."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
When
<link linkend='var-EFI_PROVIDER'><filename>EFI_PROVIDER</filename></link>
is set to "systemd-boot", the
<filename>SYSTEMD_BOOT_ENTRIES</filename> variable specifies
a list of entry files
(<filename>*.conf</filename>) to be installed
containing one boot entry per file.
By default, the
<link linkend='ref-classes-systemd-boot'><filename>systemd-boot</filename></link>
class sets the <filename>SYSTEMD_BOOT_ENTRIES</filename> as
follows:
<literallayout class='monospaced'>
SYSTEMD_BOOT_ENTRIES ?= ""
</literallayout>
</para>
<para>
For information on Systemd-boot, see the
<ulink url='http://freedesktop.org/wiki/Software/systemd-boot/'>Systemd-boot documentation</ulink>.
</para>
</glossdef>
</glossentry>
<glossentry id='var-SYSTEMD_BOOT_TIMEOUT'><glossterm>SYSTEMD_BOOT_TIMEOUT</glossterm>
<info>
SYSTEMD_BOOT_TIMEOUT[doc] = "When EFI_PROVIDER is set to "systemd-boot", the SYSTEMD_BOOT_TIMEOUT variable specifies the boot menu timeout in seconds."
</info>
<glossdef>
<para role="glossdeffirst">
<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
When
<link linkend='var-EFI_PROVIDER'><filename>EFI_PROVIDER</filename></link>
is set to "systemd-boot", the
<filename>SYSTEMD_BOOT_TIMEOUT</filename> variable specifies
the boot menu timeout in seconds.
By default, the
<link linkend='ref-classes-systemd-boot'><filename>systemd-boot</filename></link>
class sets the <filename>SYSTEMD_BOOT_TIMEOUT</filename> as
follows:
<literallayout class='monospaced'>
SYSTEMD_BOOT_TIMEOUT ?= "10"
</literallayout>
</para>
<para>
For information on Systemd-boot, see the
<ulink url='http://freedesktop.org/wiki/Software/systemd-boot/'>Systemd-boot documentation</ulink>.
</para>
</glossdef>
</glossentry>
<glossentry id='var-SYSTEMD_PACKAGES'><glossterm>SYSTEMD_PACKAGES</glossterm>
<info>
SYSTEMD_PACKAGES[doc] = "For recipes that inherit the systemd class, this variable locates the systemd unit files when they are not found in the main recipe's package."

View File

@@ -901,56 +901,15 @@
<title>Debugging</title>
<para>
When things go wrong, debugging needs to be straightforward.
Because of this, the Yocto Project includes strong debugging
tools:
<itemizedlist>
<listitem><para>Whenever a shared state package is written
out into the
<link linkend='var-SSTATE_DIR'><filename>SSTATE_DIR</filename></link>,
a corresponding <filename>.siginfo</filename> file is
also written.
This file contains a pickled Python database of all
the Metadata that went into creating the hash for a
given shared state package.
Whenever a stamp is written into the stamp directory
<link linkend='var-STAMP'><filename>STAMP</filename></link>,
a corresponding <filename>.sigdata</filename> file
is created that contains the same hash data that
represented the executed task.
</para></listitem>
<listitem><para>You can use BitBake to dump out the
signature construction information without executing
tasks by using either of the following BitBake
command-line options:
<literallayout class='monospaced'>
&dash;&dash;dump-signatures=<replaceable>SIGNATURE_HANDLER</replaceable>
-S <replaceable>SIGNATURE_HANDLER</replaceable>
</literallayout>
<note>
Two common values for
<replaceable>SIGNATURE_HANDLER</replaceable> are
"none" and "printdiff" to only dump the signature
or to compare the dumped signature with the
cached one, respectively.
</note>
Using BitBake with either of these options causes
BitBake to dump out <filename>.sigdata</filename> files
in the stamp directory for every task it would have
executed instead of building the specified target
package.
</para></listitem>
<listitem><para>There is a
<filename>bitbake-diffsigs</filename> command that
can process <filename>.sigdata</filename> and
<filename>.siginfo</filename> files.
If you specify one of these files, BitBake dumps out
the dependency information in the file.
If you specify two files, BitBake compares the two
files and dumps out the differences between the two.
This more easily helps answer the question of "What
changed between X and Y?"</para></listitem>
</itemizedlist>
Seeing what metadata went into creating the input signature
of a shared state (sstate) task can be a useful debugging aid.
This information is available in signature information
(<filename>siginfo</filename>) files in
<link linkend='var-SSTATE_DIR'><filename>SSTATE_DIR</filename></link>.
For information on how to view and interpret information in
<filename>siginfo</filename> files, see the
"<link linkend='usingpoky-viewing-task-variable-dependencies'>Viewing Task Variable Dependencies</link>"
section.
</para>
</section>
@@ -1020,6 +979,78 @@
</section>
</section>
<section id='fakeroot-and-pseudo'>
<title>Fakeroot and Pseudo</title>
<para>
Some tasks are easier to implement when allowed to perform certain
operations that are normally reserved for the root user.
For example, the
<link linkend='ref-tasks-install'><filename>do_install</filename></link>
task benefits from being able to set the UID and GID of installed files
to arbitrary values.
</para>
<para>
One approach to allowing tasks to perform root-only operations
would be to require BitBake to run as root.
However, this method is cumbersome and has security issues.
The approach that is actually used is to run tasks that benefit from
root privileges in a "fake" root environment.
Within this environment, the task and its child processes believe that
they are running as the root user, and see an internally consistent
view of the filesystem.
As long as generating the final output (e.g. a package or an image)
does not require root privileges, the fact that some earlier steps ran
in a fake root environment does not cause problems.
</para>
<para>
The capability to run tasks in a fake root environment is known as
"fakeroot", which is derived from the BitBake keyword/variable
flag that requests a fake root environment for a task.
In current versions of the OpenEmbedded build system,
the program that implements fakeroot is known as Pseudo.
</para>
<para>
Pseudo overrides system calls through the
<filename>LD_PRELOAD</filename> mechanism to give the
illusion of running as root.
To keep track of "fake" file ownership and permissions resulting from
operations that require root permissions, an sqlite3
database is used.
This database is stored in
<filename>${</filename><link linkend='var-WORKDIR'><filename>WORKDIR</filename></link><filename>}/pseudo/files.db</filename>
for individual recipes.
Storing the database in a file as opposed to in memory
gives persistence between tasks, and even between builds.
<note><title>Caution</title>
If you add your own task that manipulates the same files or
directories as a fakeroot task, then that task should also run
under fakeroot.
Otherwise, the task will not be able to run root-only operations,
and will not see the fake file ownership and permissions set by the
other task.
You should also add a dependency on
<filename>virtual/fakeroot-native:do_populate_sysroot</filename>,
giving the following:
<literallayout class='monospaced'>
fakeroot do_mytask () {
...
}
do_mytask[depends] += "virtual/fakeroot-native:do_populate_sysroot"
</literallayout>
</note>
For more information, see the
<ulink url='&YOCTO_DOCS_BB_URL;#var-FAKEROOT'><filename>FAKEROOT*</filename></ulink>
variables in the BitBake User Manual.
You can also reference this
<ulink url='http://www.ibm.com/developerworks/opensource/library/os-aapseudo1/index.html'>Pseudo</ulink>
article.
</para>
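<para>
As a slightly fuller sketch of the pattern shown in the note
above (the task name, package name, and ownership values are
hypothetical), a custom fakeroot task that adjusts the
ownership of files installed by
<filename>do_install</filename> might look like the
following in a recipe:
<literallayout class='monospaced'>
fakeroot do_fix_perms () {
    # Runs under Pseudo, so the ownership change is recorded
    # in the files.db database rather than applied for real.
    chown -R 1000:1000 ${D}${datadir}/myapp
}
addtask fix_perms after do_install before do_package
do_fix_perms[depends] += "virtual/fakeroot-native:do_populate_sysroot"
</literallayout>
</para>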
</section>
<section id='x32'>
<title>x32</title>


@@ -119,8 +119,8 @@
</para>
</section>
<section id='usingpoky-debugging'>
<title>Debugging Build Failures</title>
<section id='usingpoky-debugging-tools-and-techniques'>
<title>Debugging Tools and Techniques</title>
<para>
The exact method for debugging build failures depends on the nature of
@@ -163,23 +163,306 @@
<ulink url='&YOCTO_DOCS_BB_URL;#bitbake-user-manual'>BitBake User Manual</ulink>.
</note>
<section id='usingpoky-debugging-viewing-logs-from-failed-tasks'>
<title>Viewing Logs from Failed Tasks</title>
<section id='usingpoky-debugging-taskfailures'>
<title>Task Failures</title>
<para>The log file for shell tasks is available in
<filename>${WORKDIR}/temp/log.do_<replaceable>taskname</replaceable>.pid</filename>.
For example, the <filename>do_compile</filename> task for the QEMU minimal image for the x86
machine (<filename>qemux86</filename>) might be
<filename>tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/temp/log.do_compile.20830</filename>.
To see what
<ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink>
runs to generate that log, look at the corresponding
<filename>run.do_<replaceable>taskname</replaceable>.pid</filename> file located in the same directory.
<para>
You can find the log for a task in the file
<filename>${</filename><link linkend='var-WORKDIR'><filename>WORKDIR</filename></link><filename>}/temp/log.do_</filename><replaceable>taskname</replaceable>.
For example, the log for the
<link linkend='ref-tasks-compile'><filename>do_compile</filename></link>
task of the QEMU minimal image for the x86 machine
(<filename>qemux86</filename>) might be in
<filename>tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/temp/log.do_compile</filename>.
To see the commands
<ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink> ran
to generate a log, look at the corresponding
<filename>run.do_</filename><replaceable>taskname</replaceable>
file in the same directory.
</para>
<para>
Presently, the output from Python tasks is sent directly to the console.
<filename>log.do_</filename><replaceable>taskname</replaceable> and
<filename>run.do_</filename><replaceable>taskname</replaceable>
are actually symbolic links to
<filename>log.do_</filename><replaceable>taskname</replaceable><filename>.</filename><replaceable>pid</replaceable>
and
<filename>run.do_</filename><replaceable>taskname</replaceable><filename>.</filename><replaceable>pid</replaceable>,
where <replaceable>pid</replaceable> is the PID the task had when
it ran.
The symlinks always point to the files corresponding to the most
recent run.
</para>
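<para>
For example, using the <filename>core-image-minimal</filename>
paths shown above, you could inspect the most recent compile
log and the script that produced it directly from a shell:
<literallayout class='monospaced'>
$ cd tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/temp
$ less log.do_compile
$ less run.do_compile
</literallayout>
</para>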
</section>
<section id='usingpoky-debugging-viewing-variable-values'>
<title>Viewing Variable Values</title>
<para>
BitBake's <filename>-e</filename> option is used to display
variable values after parsing.
The following command displays the variable values after the
configuration files (i.e. <filename>local.conf</filename>,
<filename>bblayers.conf</filename>,
<filename>bitbake.conf</filename> and so forth) have been
parsed:
<literallayout class='monospaced'>
$ bitbake -e
</literallayout>
The following command displays variable values after a specific
recipe has been parsed.
The variables include those from the configuration as well:
<literallayout class='monospaced'>
$ bitbake -e recipename
</literallayout>
<note><para>
Each recipe has its own private set of variables (datastore).
Internally, after parsing the configuration, a copy of the
resulting datastore is made prior to parsing each recipe.
This copying implies that variables set in one recipe will
not be visible to other recipes.</para>
<para>Likewise, each task within a recipe gets a private
datastore based on the recipe datastore, which means that
variables set within one task will not be visible to
other tasks.</para>
</note>
</para>
<para>
In the output of <filename>bitbake -e</filename>, each variable is
preceded by a description of how the variable got its value,
including temporary values that were later overridden.
This description also includes variable flags (varflags) set on
the variable.
The output can be very helpful during debugging.
</para>
<para>
Variables that are exported to the environment are preceded by
<filename>export</filename> in the output of
<filename>bitbake -e</filename>.
See the following example:
<literallayout class='monospaced'>
export CC="i586-poky-linux-gcc -m32 -march=i586 --sysroot=/home/ulf/poky/build/tmp/sysroots/qemux86"
</literallayout>
</para>
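<para>
Because the output of <filename>bitbake -e</filename> is
long, it is usually filtered or paged.
For example, the following commands (the recipe name is only
illustrative) show the final value of a single variable and
browse the full output, respectively:
<literallayout class='monospaced'>
$ bitbake -e busybox | grep ^SRC_URI=
$ bitbake -e busybox | less
</literallayout>
</para>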
</section>
<section id='usingpoky-viewing-dependencies-between-recipes-and-tasks'>
<title>Viewing Dependencies Between Recipes and Tasks</title>
<para>
Sometimes it can be hard to see why BitBake wants to build other
recipes before the one you have specified.
Dependency information can help you understand why a recipe is
built.
</para>
<para>
To generate dependency information for a recipe, run the following
command:
<literallayout class='monospaced'>
$ bitbake -g <replaceable>recipename</replaceable>
</literallayout>
This command writes the following files in the current directory:
<itemizedlist>
<listitem><para>
<filename>pn-buildlist</filename>: A list of
recipes/targets involved in building
<replaceable>recipename</replaceable>.
"Involved" here means that at least one task from the
recipe needs to run when building
<replaceable>recipename</replaceable> from scratch.
Targets that are in
<link linkend='var-ASSUME_PROVIDED'><filename>ASSUME_PROVIDED</filename></link>
are not listed.
</para></listitem>
<listitem><para>
<filename>pn-depends.dot</filename>: A graph showing
dependencies between build-time targets (recipes).
</para></listitem>
<listitem><para>
<filename>package-depends.dot</filename>: A graph showing
known dependencies between runtime targets.
</para></listitem>
<listitem><para>
<filename>task-depends.dot</filename>: A graph showing
dependencies between tasks.
</para></listitem>
</itemizedlist>
</para>
<para>
The graphs are in
<ulink url='https://en.wikipedia.org/wiki/DOT_%28graph_description_language%29'>DOT</ulink>
format and can be converted to images (e.g. using the
<filename>dot</filename> tool from
<ulink url='http://www.graphviz.org/'>Graphviz</ulink>).
<note><title>Notes</title>
<itemizedlist>
<listitem><para>
DOT files use a plain text format.
The graphs generated using the
<filename>bitbake -g</filename> command are often so
large as to be difficult to read without special
pruning (e.g. with BitBake's
<filename>-I</filename> option) and processing.
Despite the form and size of the graphs, the
corresponding <filename>.dot</filename> files are still
readable and can provide useful information.
</para>
<para>As an example, the
<filename>task-depends.dot</filename> file contains
lines such as the following:
<literallayout class='monospaced'>
"libxslt.do_configure" -> "libxml2.do_populate_sysroot"
</literallayout>
The above example line reveals that the
<link linkend='ref-tasks-configure'><filename>do_configure</filename></link>
task in <filename>libxslt</filename> depends on the
<link linkend='ref-tasks-populate_sysroot'><filename>do_populate_sysroot</filename></link>
task in <filename>libxml2</filename>, which is a normal
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
dependency between the two recipes.
</para></listitem>
<listitem><para>
For an example of how <filename>.dot</filename> files
can be processed, see the
<filename>scripts/contrib/graph-tool</filename> Python
script, which finds and displays paths between graph
nodes.
</para></listitem>
</itemizedlist>
</note>
</para>
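<para>
For example, using <filename>core-image-minimal</filename>
as the target and assuming Graphviz is installed on the
build host, you could generate the graphs and render the
task dependency graph as an image with:
<literallayout class='monospaced'>
$ bitbake -g core-image-minimal
$ dot -Tsvg task-depends.dot -o task-depends.svg
</literallayout>
You can also search the <filename>.dot</filename> files
directly, for example with
<filename>grep libxml2 task-depends.dot</filename>.
</para>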
<para>
You can use a different method to view dependency information
by using the following command:
<literallayout class='monospaced'>
$ bitbake -g -u depexp <replaceable>recipename</replaceable>
</literallayout>
This command displays a GUI window from which you can view
build-time and runtime dependencies for the recipes involved in
building <replaceable>recipename</replaceable>.
</para>
</section>
<section id='usingpoky-viewing-task-variable-dependencies'>
<title>Viewing Task Variable Dependencies</title>
<para>
As mentioned in the
"<ulink url='&YOCTO_DOCS_BB_URL;#checksums'>Checksums (Signatures)</ulink>"
section of the BitBake User Manual, BitBake tries to automatically
determine what variables a task depends on so that it can rerun
the task if any values of the variables change.
This determination is usually reliable.
However, if you do things like construct variable names at runtime,
then you might have to manually declare dependencies on those
variables using <filename>vardeps</filename> as described in the
"<ulink url='&YOCTO_DOCS_BB_URL;#variable-flags'>Variable Flags</ulink>"
section of the BitBake User Manual.
</para>
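<para>
As a minimal sketch (the task and variable names are
hypothetical), a Python task that constructs a variable name
at runtime could declare the extra dependencies explicitly:
<literallayout class='monospaced'>
python do_report () {
    # The variable name is built at runtime, so BitBake cannot
    # infer which STATUS_* variable the task actually reads.
    name = "STATUS_" + d.getVar("MACHINE", True)
    bb.plain("%s = %s" % (name, d.getVar(name, True)))
}
addtask report after do_configure

do_report[vardeps] += "STATUS_qemux86 STATUS_qemuarm"
</literallayout>
</para>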
<para>
If you are unsure whether a variable dependency is being picked up
automatically for a given task, you can list the variable
dependencies BitBake has determined by doing the following:
<orderedlist>
<listitem><para>
Build the recipe containing the task:
<literallayout class='monospaced'>
$ bitbake <replaceable>recipename</replaceable>
</literallayout>
</para></listitem>
<listitem><para>
Inside the
<link linkend='var-STAMPS_DIR'><filename>STAMPS_DIR</filename></link>
directory, find the signature data
(<filename>sigdata</filename>) file that corresponds to the
task.
The <filename>sigdata</filename> files contain a pickled
Python database of all the metadata that went into creating
the input checksum for the task.
As an example, for the
<link linkend='ref-tasks-fetch'><filename>do_fetch</filename></link>
task of the <filename>db</filename> recipe, the
<filename>sigdata</filename> file might be found in the
following location:
<literallayout class='monospaced'>
${BUILDDIR}/tmp/stamps/i586-poky-linux/db/6.0.30-r1.do_fetch.sigdata.7c048c18222b16ff0bcee2000ef648b1
</literallayout>
For tasks that are accelerated through the shared state
(<link linkend='shared-state-cache'>sstate</link>)
cache, an additional <filename>siginfo</filename> file is
written into
<link linkend='var-SSTATE_DIR'><filename>SSTATE_DIR</filename></link>
along with the cached task output.
The <filename>siginfo</filename> files contain exactly the
same information as <filename>sigdata</filename> files.
</para></listitem>
<listitem><para>
Run <filename>bitbake-dumpsig</filename> on the
<filename>sigdata</filename> or
<filename>siginfo</filename> file.
Here is an example:
<literallayout class='monospaced'>
$ bitbake-dumpsig ${BUILDDIR}/tmp/stamps/i586-poky-linux/db/6.0.30-r1.do_fetch.sigdata.7c048c18222b16ff0bcee2000ef648b1
</literallayout>
In the output of the above command, you will find a line
like the following, which lists all the (inferred) variable
dependencies for the task.
This list also includes indirect dependencies from
variables depending on other variables, recursively.
<literallayout class='monospaced'>
Task dependencies: ['PV', 'SRCREV', 'SRC_URI', 'SRC_URI[md5sum]', 'SRC_URI[sha256sum]', 'base_do_fetch']
</literallayout>
<note>
Functions (e.g. <filename>base_do_fetch</filename>)
also count as variable dependencies.
These functions in turn depend on the variables they
reference.
</note>
The output of <filename>bitbake-dumpsig</filename> also includes
the value each variable had, a list of dependencies for each
variable, and
<ulink url='&YOCTO_DOCS_BB_URL;#var-BB_HASHBASE_WHITELIST'><filename>BB_HASHBASE_WHITELIST</filename></ulink>
information.
</para></listitem>
</orderedlist>
</para>
<para>
There is also a <filename>bitbake-diffsigs</filename> command for
comparing two <filename>siginfo</filename> or
<filename>sigdata</filename> files.
This command can be helpful when trying to figure out what changed
between two versions of a task.
If you call <filename>bitbake-diffsigs</filename> with just one
file, the command behaves like
<filename>bitbake-dumpsig</filename>.
</para>
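<para>
For example, to compare the fetch signatures from two builds
of the <filename>db</filename> recipe discussed above, you
could run something like the following, where the two hashes
identify the older and newer versions of the task:
<literallayout class='monospaced'>
$ bitbake-diffsigs \
    tmp/stamps/i586-poky-linux/db/6.0.30-r1.do_fetch.sigdata.<replaceable>old_hash</replaceable> \
    tmp/stamps/i586-poky-linux/db/6.0.30-r1.do_fetch.sigdata.<replaceable>new_hash</replaceable>
</literallayout>
</para>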
<para>
You can also use BitBake to dump out the signature construction
information without executing tasks by using either of the
following BitBake command-line options:
<literallayout class='monospaced'>
&dash;&dash;dump-signatures=<replaceable>SIGNATURE_HANDLER</replaceable>
-S <replaceable>SIGNATURE_HANDLER</replaceable>
</literallayout>
<note>
Two common values for
<replaceable>SIGNATURE_HANDLER</replaceable> are "none" and
"printdiff", which dump only the signature or compare the
dumped signature with the cached one, respectively.
</note>
Using BitBake with either of these options causes BitBake to dump
out <filename>sigdata</filename> files in the
<filename>stamps</filename> directory for every task it would have
executed instead of building the specified target package.
</para>
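<para>
For example, the following command (using
<filename>core-image-minimal</filename> as an illustrative
target) dumps the signatures and prints how they differ from
the cached ones, without running any tasks:
<literallayout class='monospaced'>
$ bitbake -S printdiff core-image-minimal
</literallayout>
</para>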
</section>
@@ -187,7 +470,7 @@
<title>Running Specific Tasks</title>
<para>
Any given package consists of a set of tasks.
Any given recipe consists of a set of tasks.
The standard BitBake behavior in most cases is:
<filename>do_fetch</filename>,
<filename>do_unpack</filename>,
@@ -319,27 +602,6 @@
</section>
<section id='usingpoky-debugging-dependencies'>
<title>Dependency Graphs</title>
<para>
Sometimes it can be hard to see why BitBake wants to build
other packages before building a given package you have specified.
The <filename>bitbake -g <replaceable>targetname</replaceable></filename> command
creates the <filename>pn-buildlist</filename>,
<filename>pn-depends.dot</filename>,
<filename>package-depends.dot</filename>, and
<filename>task-depends.dot</filename> files in the current
directory.
These files show what will be built and the package and task
dependencies, which are useful for debugging problems.
You can use the
<filename>bitbake -g -u depexp <replaceable>targetname</replaceable></filename>
command to display the results in a more human-readable form.
</para>
</section>
<section id='usingpoky-debugging-bitbake'>
<title>General BitBake Problems</title>
@@ -410,23 +672,6 @@
</para>
</section>
<section id='usingpoky-debugging-variables'>
<title>Variables</title>
<para>
You can use the <filename>-e</filename> BitBake option to
display the parsing environment for a configuration.
The following displays the general parsing environment:
<literallayout class='monospaced'>
$ bitbake -e
</literallayout>
This next example shows the parsing environment for a specific
recipe:
<literallayout class='monospaced'>
$ bitbake -e <replaceable>recipename</replaceable>
</literallayout>
</para>
</section>
<section id='recipe-logging-mechanisms'>
<title>Recipe Logging Mechanisms</title>
<para>


@@ -0,0 +1,877 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
<appendix id='sdk-appendix-mars'>
<title>Using Eclipse Mars</title>
<para>
This release of the Yocto Project supports both the Neon and Mars
versions of the Eclipse IDE.
This appendix presents information that describes how to obtain and
configure the Mars version of Eclipse.
It also provides a basic project example that you can work through
from start to finish.
For general information on using the Eclipse IDE and the Yocto
Project Eclipse Plug-In, see the
"<link linkend='sdk-developing-applications-using-eclipse'>Developing Applications Using <trademark class='trade'>Eclipse</trademark></link>"
section.
</para>
<section id='mars-setting-up-the-eclipse-ide'>
<title>Setting Up the Mars Version of the Eclipse IDE</title>
<para>
To develop within the Eclipse IDE, you need to do the following:
<orderedlist>
<listitem><para>Install the Mars version of the Eclipse
IDE.</para></listitem>
<listitem><para>Configure the Eclipse IDE.
</para></listitem>
<listitem><para>Install the Eclipse Yocto Plug-in.
</para></listitem>
<listitem><para>Configure the Eclipse Yocto Plug-in.
</para></listitem>
</orderedlist>
<note>
Do not install Eclipse from your distribution's package
repository.
Be sure to install Eclipse from the official Eclipse
download site as directed in the next section.
</note>
</para>
<section id='mars-installing-eclipse-ide'>
<title>Installing the Mars Eclipse IDE</title>
<para>
Follow these steps to locate, install, and configure
Mars Eclipse:
<orderedlist>
<listitem><para><emphasis>Locate the Mars Download:</emphasis>
Open a browser and go to
<ulink url='http://www.eclipse.org/mars/'>http://www.eclipse.org/mars/</ulink>.
</para></listitem>
<listitem><para><emphasis>Download the Tarball:</emphasis>
Click the "Download" button and then use the "Linux
for Eclipse IDE for C++ Developers"
appropriate for your development system
(e.g.
<ulink url='http://www.eclipse.org/downloads/download.php?file=/technology/epp/downloads/release/mars/2/eclipse-cpp-mars-2-linux-gtk-x86_64.tar.gz'>64-bit under Linux for Eclipse IDE for C++ Developers</ulink>
if your development system is a Linux 64-bit machine.
</para></listitem>
<listitem><para><emphasis>Unpack the Tarball:</emphasis>
Move to a clean directory and unpack the tarball.
Here is an example:
<literallayout class='monospaced'>
$ cd ~
$ tar -xzvf ~/Downloads/eclipse-cpp-mars-2-linux-gtk-x86_64.tar.gz
</literallayout>
Everything unpacks into a folder named "Eclipse".
</para></listitem>
<listitem><para><emphasis>Launch Eclipse:</emphasis>
Double click the "Eclipse" file in the folder to
launch Eclipse.
</para></listitem>
</orderedlist>
</para>
</section>
<section id='mars-configuring-the-mars-eclipse-ide'>
<title>Configuring the Mars Eclipse IDE</title>
<para>
Follow these steps to configure the Mars Eclipse IDE.
<note>
Depending on how you installed Eclipse and what you have
already done, some of the options will not appear.
If you cannot find an option as directed by the manual,
it has already been installed.
</note>
<orderedlist>
<listitem><para>Be sure Eclipse is running and
you are in your workbench.
</para></listitem>
<listitem><para>Select "Install New Software" from
the "Help" pull-down menu.
</para></listitem>
<listitem><para>Select
"Mars - http://download.eclipse.org/releases/mars"
from the "Work with:" pull-down menu.
</para></listitem>
<listitem><para>Expand the box next to
"Linux Tools" and select "C/C++ Remote
(Over TCF/TE) Run/Debug Launcher" and
"TM Terminal".
</para></listitem>
<listitem><para>Expand the box next to "Mobile and
Device Development" and select the following
boxes:
<literallayout class='monospaced'>
C/C++ Remote (Over TCF/TE) Run/Debug Launcher
Remote System Explorer User Actions
TM Terminal
TCF Remote System Explorer add-in
TCF Target Explorer
</literallayout>
</para></listitem>
<listitem><para>Expand the box next to
"Programming Languages" and select the
following boxes:
<literallayout class='monospaced'>
C/C++ Autotools Support
C/C++ Development Tools SDK
</literallayout>
</para></listitem>
<listitem><para>
Complete the installation by clicking through
appropriate "Next" and "Finish" buttons.
</para></listitem>
</orderedlist>
</para>
</section>
<section id='mars-installing-the-eclipse-yocto-plug-in'>
<title>Installing or Accessing the Mars Eclipse Yocto Plug-in</title>
<para>
You can install the Eclipse Yocto Plug-in into the Eclipse
IDE one of two ways: use the Yocto Project's Eclipse
Update site to install the pre-built plug-in or build and
install the plug-in from the latest source code.
</para>
<section id='mars-new-software'>
<title>Installing the Pre-built Plug-in from the Yocto Project Eclipse Update Site</title>
<para>
To install the Mars Eclipse Yocto Plug-in from the update
site, follow these steps:
<orderedlist>
<listitem><para>Start up the Eclipse IDE.
</para></listitem>
<listitem><para>In Eclipse, select "Install New
Software" from the "Help" menu.
</para></listitem>
<listitem><para>Click "Add..." in the "Work with:"
area.
</para></listitem>
<listitem><para>Enter
<filename>&ECLIPSE_DL_PLUGIN_URL;/mars</filename>
in the URL field and provide a meaningful name
in the "Name" field.
</para></listitem>
<listitem><para>Click "OK" to have the entry added
to the "Work with:" drop-down list.
</para></listitem>
<listitem><para>Select the entry for the plug-in
from the "Work with:" drop-down list.
</para></listitem>
<listitem><para>Check the boxes next to the following:
<literallayout class='monospaced'>
Yocto Project SDK Plug-in
Yocto Project Documentation plug-in
</literallayout>
</para></listitem>
<listitem><para>Complete the remaining software
installation steps and then restart the Eclipse
IDE to finish the installation of the plug-in.
<note>
You can click "OK" when prompted about
installing software that contains unsigned
content.
</note>
</para></listitem>
</orderedlist>
</para>
</section>
<section id='mars-zip-file-method'>
<title>Installing the Plug-in Using the Latest Source Code</title>
<para>
To install the Mars Eclipse Yocto Plug-in from the latest
source code, follow these steps:
<orderedlist>
<listitem><para>Be sure your development system
has JDK 1.7 or greater installed.
</para></listitem>
<listitem><para>Install X11-related packages:
<literallayout class='monospaced'>
$ sudo apt-get install xauth
</literallayout>
</para></listitem>
<listitem><para>In a new terminal shell, create a Git
repository with:
<literallayout class='monospaced'>
$ cd ~
$ git clone git://git.yoctoproject.org/eclipse-poky
</literallayout>
</para></listitem>
<listitem><para>Use Git to checkout the correct
tag:
<note><title>Developer's Note</title>
<para role='writernotes'>
Because the 2.2 tag will not exist until after
the release, I must first do the following
before running the
<filename>git checkout mars/yocto-&DISTRO;</filename>
command in this step:
<literallayout class='monospaced'>
$ git tag mars/yocto-2.2 origin/mars-master
</literallayout></para>
</note>
<literallayout class='monospaced'>
$ cd ~/eclipse-poky
$ git checkout mars/yocto-&DISTRO;
</literallayout>
This puts you in a detached HEAD state, which
is fine since you are only going to be building
and not developing.
</para></listitem>
<listitem><para>Change to the
<filename>scripts</filename>
directory within the Git repository:
<literallayout class='monospaced'>
$ cd scripts
</literallayout>
</para></listitem>
<listitem><para>Set up the local build environment
by running the setup script:
<literallayout class='monospaced'>
$ ./setup.sh
</literallayout>
When the script finishes execution,
it prompts you with instructions on how to run
the <filename>build.sh</filename> script, which
is also in the <filename>scripts</filename>
directory of the Git repository created
earlier.
</para></listitem>
<listitem><para>Run the <filename>build.sh</filename>
script as directed.
Be sure to provide the tag name, documentation
branch, and a release name.</para>
<para>
Following is an example:
<literallayout class='monospaced'>
$ ECLIPSE_HOME=/home/scottrif/eclipse-poky/scripts/eclipse ./build.sh -l mars/yocto-&DISTRO; master yocto-&DISTRO; 2>&amp;1 | tee build.log
</literallayout>
The previous example command adds the tag you
need for <filename>mars/yocto-&DISTRO;</filename>
to <filename>HEAD</filename>, then tells the
build script to use the local (-l) Git checkout
for the build.
After running the script, the file
<filename>org.yocto.sdk-</filename><replaceable>release</replaceable><filename>-</filename><replaceable>date</replaceable><filename>-archive.zip</filename>
is in the current directory.
</para></listitem>
<listitem><para>If necessary, start the Eclipse IDE
and be sure you are in the Workbench.
</para></listitem>
<listitem><para>Select "Install New Software" from
the "Help" pull-down menu.
</para></listitem>
<listitem><para>Click "Add".
</para></listitem>
<listitem><para>Provide anything you want in the
"Name" field.
</para></listitem>
<listitem><para>Click "Archive" and browse to the
ZIP file you built earlier.
This ZIP file should not be "unzipped", and must
be the <filename>*archive.zip</filename> file
created by running the
<filename>build.sh</filename> script.
</para></listitem>
<listitem><para>Click the "OK" button.
</para></listitem>
<listitem><para>Check the boxes that appear in
the installation window to install the
following:
<note><title>Developer's Note</title>
<para role='writernotes'>
Right now, a check box for BitBake Commander
appears.
This probably needs to be removed.
Do not check this box.</para>
</note>
<literallayout class='monospaced'>
Yocto Project SDK Plug-in
Yocto Project Documentation plug-in
</literallayout>
</para></listitem>
<listitem><para>Finish the installation by clicking
through the appropriate buttons.
You can click "OK" when prompted about
installing software that contains unsigned
content.
</para></listitem>
<listitem><para>Restart the Eclipse IDE if
necessary.
</para></listitem>
</orderedlist>
</para>
<para>
At this point you should be able to configure the
Eclipse Yocto Plug-in as described in the
"<link linkend='mars-configuring-the-eclipse-yocto-plug-in'>Configuring the Mars Eclipse Yocto Plug-in</link>"
section.</para>
</section>
</section>
<section id='mars-configuring-the-eclipse-yocto-plug-in'>
<title>Configuring the Mars Eclipse Yocto Plug-in</title>
<para>
Configuring the Mars Eclipse Yocto Plug-in involves setting the
Cross Compiler options and the Target options.
The configurations you choose become the default settings
for all projects.
You do have opportunities to change them later when
you configure the project (see the following section).
</para>
<para>
To start, you need to do the following from within the
Eclipse IDE:
<itemizedlist>
<listitem><para>Choose "Preferences" from the
"Window" menu to display the Preferences Dialog.
</para></listitem>
<listitem><para>Click "Yocto Project SDK" to display
the configuration screen.
</para></listitem>
</itemizedlist>
The following sub-sections describe how to configure
the plug-in.
<note>
Throughout the descriptions, a start-to-finish example for
preparing a QEMU image for use with Eclipse is referenced
as the "wiki" and is linked to the example on the
<ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'> Cookbook guide to Making an Eclipse Debug Capable Image</ulink>
wiki page.
</note>
</para>
<section id='mars-configuring-the-cross-compiler-options'>
<title>Configuring the Cross-Compiler Options</title>
<para>
Cross Compiler options enable Eclipse to use your specific
cross compiler toolchain.
To configure these options, you must select
the type of toolchain, point to the toolchain, specify
the sysroot location, and select the target
architecture.
<itemizedlist>
<listitem><para><emphasis>Selecting the Toolchain Type:</emphasis>
Choose between
<filename>Standalone pre-built toolchain</filename>
and
<filename>Build system derived toolchain</filename>
for Cross Compiler Options.
<itemizedlist>
<listitem><para><emphasis>
<filename>Standalone Pre-built Toolchain:</filename></emphasis>
Select this type when you are using
a stand-alone cross-toolchain.
For example, suppose you are an
application developer and do not
need to build a target image.
Instead, you just want to use an
architecture-specific toolchain on
an existing kernel and target root
filesystem.
In other words, you have downloaded
and installed a pre-built toolchain
for an existing image.
</para></listitem>
<listitem><para><emphasis>
<filename>Build System Derived Toolchain:</filename></emphasis>
Select this type if you built the
toolchain as part of the
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
When you select
<filename>Build system derived toolchain</filename>,
you are using the toolchain built and
bundled inside the Build Directory.
For example, suppose you created a
suitable image using the steps in the
<ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>wiki</ulink>.
In this situation, you would select the
<filename>Build system derived toolchain</filename>.
</para></listitem>
</itemizedlist>
</para></listitem>
<listitem><para><emphasis>Specify the Toolchain Root Location:</emphasis>
If you are using a stand-alone pre-built
toolchain, you should be pointing to where it is
installed (e.g.
<filename>/opt/poky/&DISTRO;</filename>).
See the
"<link linkend='sdk-installing-the-sdk'>Installing the SDK</link>"
section for information about how the SDK is
installed.</para>
<para>If you are using a build system derived
toolchain, the path you provide for the
<filename>Toolchain Root Location</filename>
field is the
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
from which you run the
<filename>bitbake</filename> command (e.g.
<filename>/home/scottrif/poky/build</filename>).</para>
<para>For more information, see the
"<link linkend='sdk-building-an-sdk-installer'>Building an SDK Installer</link>"
section.
</para></listitem>
<listitem><para><emphasis>Specify Sysroot Location:</emphasis>
This location is where the root filesystem for
the target hardware resides.
</para>
<para>This location depends on where you
separately extracted and installed the target
filesystem.
As an example, suppose you prepared an image
using the steps in the
<ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>wiki</ulink>.
If so, the <filename>MY_QEMU_ROOTFS</filename>
directory is found in the
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
and you would browse to and select that directory
(e.g. <filename>/home/scottrif/build/MY_QEMU_ROOTFS</filename>).
</para>
<para>For more information on how to install the
toolchain and on how to extract and install the
sysroot filesystem, see the
"<link linkend='sdk-building-an-sdk-installer'>Building an SDK Installer</link>"
section.
</para></listitem>
<listitem><para><emphasis>Select the Target Architecture:</emphasis>
The target architecture is the type of hardware
you are going to use or emulate.
Use the pull-down
<filename>Target Architecture</filename> menu
to make your selection.
The pull-down menu should have the supported
architectures.
If the architecture you need is not listed in
the menu, you will need to build the image.
See the
"<ulink url='&YOCTO_DOCS_QS_URL;#qs-building-images'>Building Images</ulink>"
section of the Yocto Project Quick Start for
more information.
You can also see the
<ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>wiki</ulink>.
</para></listitem>
</itemizedlist>
</para>
</section>
<section id='mars-configuring-the-target-options'>
<title>Configuring the Target Options</title>
<para>
You can choose to emulate hardware using the QEMU
emulator, or you can choose to run your image on actual
hardware.
<itemizedlist>
<listitem><para><emphasis>QEMU:</emphasis>
Select this option if you will be using the
QEMU emulator.
If you are using the emulator, you also need to
locate the kernel and specify any custom
options.</para>
<para>If you selected the
<filename>Build system derived toolchain</filename>,
the target kernel you built will be located in
the
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
in
<filename>tmp/deploy/images/<replaceable>machine</replaceable></filename>
directory.
As an example, suppose you performed the steps in
the
<ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>wiki</ulink>.
In this case, you specify your Build Directory path
followed by the image (e.g.
<filename>/home/scottrif/poky/tmp/deploy/images/qemux86/bzImage-qemux86.bin</filename>).
</para>
<para>If you selected the standalone pre-built
toolchain, the pre-built image you downloaded is
located in the directory you specified when you
downloaded the image.</para>
<para>Most custom options are for advanced QEMU
users to further customize their QEMU instance.
These options are specified between paired
angled brackets.
Some options must be specified outside the
brackets.
In particular, the options
<filename>serial</filename>,
<filename>nographic</filename>, and
<filename>kvm</filename> must all be outside the
brackets.
Use the <filename>man qemu</filename> command
to get help on all the options and their use.
The following is an example:
<literallayout class='monospaced'>
serial &lt;-m 256 -full-screen&gt;
</literallayout></para>
<para>
Regardless of the mode, Sysroot is already
defined as part of the Cross-Compiler Options
configuration in the
<filename>Sysroot Location:</filename> field.
</para></listitem>
<listitem><para><emphasis>External HW:</emphasis>
Select this option if you will be using actual
hardware.</para></listitem>
</itemizedlist>
</para>
<para>
Click the "Apply" and "OK" to save your plug-in
configurations.
</para>
</section>
</section>
</section>
<section id='mars-creating-the-project'>
<title>Creating the Project</title>
<para>
You can create two types of projects: Autotools-based or
Makefile-based.
This section describes how to create Autotools-based projects
from within the Eclipse IDE.
For information on creating Makefile-based projects in a
terminal window, see the
"<link linkend='makefile-based-projects'>Makefile-Based Projects</link>"
section.
<note>
Do not use special characters in project names
(e.g. spaces, underscores, etc.). Doing so can
cause configuration to fail.
</note>
</para>
<para>
To create a project based on a Yocto template and then display
the source code, follow these steps:
<orderedlist>
<listitem><para>Select "C Project" from the "File -> New" menu.
</para></listitem>
<listitem><para>Expand <filename>Yocto Project SDK Autotools Project</filename>.
</para></listitem>
<listitem><para>Select <filename>Hello World ANSI C Autotools Projects</filename>.
This is an Autotools-based project based on a Yocto
template.
</para></listitem>
<listitem><para>Put a name in the <filename>Project name:</filename>
field.
Do not use hyphens as part of the name
(e.g. <filename>hello</filename>).
</para></listitem>
<listitem><para>Click "Next".
</para></listitem>
<listitem><para>Add appropriate information in the various
fields.
</para></listitem>
<listitem><para>Click "Finish".
</para></listitem>
<listitem><para>If the "open perspective" prompt appears,
click "Yes" so that you in the C/C++ perspective.
</para></listitem>
<listitem><para>The left-hand navigation pane shows your
project.
You can display your source by double clicking the
project's source file.
</para></listitem>
</orderedlist>
</para>
</section>
<section id='mars-configuring-the-cross-toolchains'>
<title>Configuring the Cross-Toolchains</title>
<para>
The earlier section,
"<link linkend='mars-configuring-the-eclipse-yocto-plug-in'>Configuring the Mars Eclipse Yocto Plug-in</link>",
sets up the default project configurations.
You can override these settings for a given project by following
these steps:
<orderedlist>
<listitem><para>Select "Yocto Project Settings" from
the "Project -> Properties" menu.
This selection brings up the Yocto Project Settings
Dialog and allows you to make changes specific to an
individual project.</para>
<para>By default, the Cross Compiler Options and Target
Options for a project are inherited from settings you
provided using the Preferences Dialog as described
earlier in the
"<link linkend='mars-configuring-the-eclipse-yocto-plug-in'>Configuring the Mars Eclipse Yocto Plug-in</link>" section.
The Yocto Project Settings Dialog allows you to override
those default settings for a given project.
</para></listitem>
<listitem><para>Make or verify your configurations for the
project and click "OK".
</para></listitem>
<listitem><para>Right-click in the navigation pane and
select "Reconfigure Project" from the pop-up menu.
This selection reconfigures the project by running
<filename>autogen.sh</filename> in the workspace for
your project.
The script also runs <filename>libtoolize</filename>,
<filename>aclocal</filename>,
<filename>autoconf</filename>,
<filename>autoheader</filename>,
<filename>automake -a</filename>, and
<filename>./configure</filename>
(an equivalent manual sequence is sketched after this list).
Click on the "Console" tab beneath your source code to
see the results of reconfiguring your project.
</para></listitem>
</orderedlist>
</para>
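<para>
Outside of Eclipse, an equivalent manual sequence run from
the project's source directory would look roughly like the
following (a sketch that assumes a standard Autotools layout;
the workspace path is hypothetical):
<literallayout class='monospaced'>
$ cd ~/workspace/hello
$ libtoolize
$ aclocal
$ autoconf
$ autoheader
$ automake -a
$ ./configure
</literallayout>
</para>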
</section>
<section id='mars-building-the-project'>
<title>Building the Project</title>
<para>
To build the project select "Build All" from the
"Project" menu.
The console should update and you can note the cross-compiler
you are using.
<note>
When building "Yocto Project SDK Autotools" projects, the
Eclipse IDE might display error messages for
functions, symbols, or types that cannot be "resolved", even when
the related include file is listed in the project navigator and
the project is able to build.
For these cases only, it is recommended that you add a new linked
folder to the appropriate sysroot.
Use these steps to add the linked folder:
<orderedlist>
<listitem><para>
Select the project.
</para></listitem>
<listitem><para>
Select "Folder" from the
<filename>File > New</filename> menu.
</para></listitem>
<listitem><para>
In the "New Folder" Dialog, select "Link to alternate
location (linked folder)".
</para></listitem>
<listitem><para>
Click "Browse" to navigate to the include folder inside
the same sysroot location selected in the Yocto Project
configuration preferences.
</para></listitem>
<listitem><para>
Click "OK".
</para></listitem>
<listitem><para>
Click "Finish" to save the linked folder.
</para></listitem>
</orderedlist>
</note>
</para>
</section>
<section id='mars-starting-qemu-in-user-space-nfs-mode'>
<title>Starting QEMU in User-Space NFS Mode</title>
<para>
To start the QEMU emulator from within Eclipse, follow these
steps:
<note>
See the
"<ulink url='&YOCTO_DOCS_DEV_URL;#dev-manual-qemu'>Using the Quick EMUlator (QEMU)</ulink>"
chapter in the Yocto Project Development Manual
for more information on using QEMU.
</note>
<orderedlist>
<listitem><para>Expose and select "External Tools
Configurations ..." from the "Run -> External Tools" menu.
</para></listitem>
<listitem><para>
Locate and select your image in the navigation panel to
the left (e.g. <filename>qemu_i586-poky-linux</filename>).
</para></listitem>
<listitem><para>
Click "Run" to launch QEMU.
<note>
The host on which you are running QEMU must have
the <filename>rpcbind</filename> utility running to be
able to make RPC calls on a server on that machine.
If QEMU does not launch and you receive error messages
involving <filename>rpcbind</filename>, follow the
suggestions to get the service running.
As an example, on a new Ubuntu 16.04 LTS installation,
you must do the following in order to get QEMU to
launch:
<literallayout class='monospaced'>
$ sudo apt-get install rpcbind
</literallayout>
After installing <filename>rpcbind</filename>, you
need to edit the
<filename>/etc/init.d/rpcbind</filename> file to
include the following line:
<literallayout class='monospaced'>
OPTIONS="-i -w"
</literallayout>
After modifying the file, you need to start the
service:
<literallayout class='monospaced'>
$ sudo service portmap restart
</literallayout>
</note>
</para></listitem>
<listitem><para>If needed, enter your host root password in
the shell window at the prompt.
This sets up a <filename>Tap 0</filename> connection
needed for running in user-space NFS mode.
</para></listitem>
<listitem><para>Wait for QEMU to launch.
</para></listitem>
<listitem><para>Once QEMU launches, you can begin operating
within that environment.
One useful task at this point is to determine the
IP address of the QEMU machine for the user-space NFS
session by using the <filename>ifconfig</filename> command.
The address appears in the xterm window and is the address
you use later when setting up the debug connection.
</para></listitem>
</orderedlist>
</para>
</section>
<section id='mars-deploying-and-debugging-the-application'>
<title>Deploying and Debugging the Application</title>
<para>
Once the QEMU emulator is running the image, you can deploy
your application using the Eclipse IDE and then use
the emulator to perform debugging.
Follow these steps to deploy the application.
<note>
Currently, Eclipse does not support SSH port forwarding.
Consequently, if you need to run or debug a remote
application using the host display, you must create a
tunneling connection from outside Eclipse and keep
that connection alive during your work.
For example, in a new terminal, run the following:
<literallayout class='monospaced'>
$ ssh -XY <replaceable>user_name</replaceable>@<replaceable>remote_host_ip</replaceable>
</literallayout>
Using the above form, here is an example:
<literallayout class='monospaced'>
$ ssh -XY root@192.168.7.2
</literallayout>
After running the command, add the command to be executed
in Eclipse's run configuration before the application
as follows:
<literallayout class='monospaced'>
export DISPLAY=:10.0
</literallayout>
Be sure to not destroy the connection during your QEMU
session (i.e. do not
exit out of or close that shell).
</note>
<orderedlist>
<listitem><para>Select "Debug Configurations..." from the
"Run" menu.</para></listitem>
<listitem><para>In the left area, expand
<filename>C/C++ Remote Application</filename>.
</para></listitem>
<listitem><para>Locate your project and select it to bring
up a new tabbed view in the Debug Configurations Dialog.
</para></listitem>
<listitem><para>Click on the "Debugger" tab to see the
cross-tool debugger you are using.
Be sure to change to the debugger perspective in Eclipse.
</para></listitem>
<listitem><para>Click on the "Main" tab.
</para></listitem>
<listitem><para>Create a new connection to the QEMU instance
by clicking on "new".</para></listitem>
<listitem><para>Select <filename>SSH</filename>, which means
Secure Shell.
Optionally, you can select a TCF connection instead.
</para></listitem>
<listitem><para>Click "Next".
</para></listitem>
<listitem><para>Clear out the "host name" field and enter
the IP Address determined earlier (e.g. 192.168.7.2).
</para></listitem>
<listitem><para>Click "Finish" to close the
New Connections Dialog.
</para></listitem>
<listitem><para>If necessary, use the drop-down menu now in the
"Connection" field and pick the IP Address you entered.
</para></listitem>
<listitem><para>Assuming you are connecting as the root user,
which is the default for QEMU x86-64 SDK images provided by
the Yocto Project, in the "Remote Absolute File Path for
C/C++ Application" field, browse to
<filename>/home/root</filename>.
You could also browse to any other path you have write
access to on the target such as
<filename>/usr/bin</filename>.
This location is where your application will be located on
the QEMU system.
If you fail to browse to and specify an appropriate
location, QEMU will not know what to launch remotely.
Eclipse is helpful in that it auto-fills the application
name for you, provided you browsed to a directory.
<note>
If you are prompted to provide a username and to
optionally set a password, be sure you provide
"root" as the username and you leave the password
field blank.
</note>
</para></listitem>
<listitem><para>
Be sure you change to the "Debug" perspective in Eclipse.
</para></listitem>
<listitem><para>Click "Debug"
</para></listitem>
<listitem><para>Accept the debug perspective.
</para></listitem>
</orderedlist>
</para>
</section>
<section id='mars-using-Linuxtools'>
<title>Using Linuxtools</title>
<para>
As mentioned earlier in the manual, performance tools exist
(Linuxtools) that enhance your development experience.
These tools are aids in developing and debugging applications and
images.
You can run these tools from within the Eclipse IDE through the
"Linuxtools" menu.
</para>
<para>
For information on how to configure and use these tools, see
<ulink url='http://www.eclipse.org/linuxtools/'>http://www.eclipse.org/linuxtools/</ulink>.
</para>
</section>
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->


@@ -20,7 +20,7 @@
<para>
You can find SDK installers here:
<itemizedlist>
<listitem><para><emphasis>Standard SDK Installers</emphasis>
<listitem><para><emphasis>Standard SDK Installers:</emphasis>
Go to <ulink url='&YOCTO_TOOLCHAIN_DL_URL;'></ulink>
and find the folder that matches your host development system
(i.e. <filename>i686</filename> for 32-bit machines or
@@ -39,9 +39,14 @@
poky-glibc-x86_64-core-image-sato-i586-toolchain-&DISTRO;.sh
</literallayout>
</para></listitem>
<listitem><para><emphasis>Extensible SDK Installers</emphasis>
Installers for the extensible SDK are in
<listitem><para><emphasis>Extensible SDK Installers:</emphasis>
Installers for the extensible SDK are also located in
<ulink url='&YOCTO_TOOLCHAIN_DL_URL;'></ulink>.
These installers have the string
<filename>ext</filename> as part of their names:
<literallayout class='monospaced'>
poky-glibc-x86_64-core-image-sato-core2-64-toolchain-ext-&DISTRO;.sh
</literallayout>
</para></listitem>
</itemizedlist>
</para>
@@ -86,20 +91,30 @@
When the <filename>bitbake</filename> command completes, the toolchain
installer will be in
<filename>tmp/deploy/sdk</filename> in the Build Directory.
<note>
By default, this toolchain does not build static binaries.
If you want to use the toolchain to build these types of libraries,
you need to be sure your image has the appropriate static
development libraries.
Use the
<ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_INSTALL'><filename>IMAGE_INSTALL</filename></ulink>
variable inside your <filename>local.conf</filename> file to
install the appropriate library packages.
Following is an example using <filename>glibc</filename> static
development libraries:
<literallayout class='monospaced'>
<note><title>Notes</title>
<itemizedlist>
<listitem><para>
By default, this toolchain does not build static binaries.
If you want to use the toolchain to build these types of
libraries, you need to be sure your image has the
appropriate static development libraries.
Use the
<ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_INSTALL'><filename>IMAGE_INSTALL</filename></ulink>
variable inside your <filename>local.conf</filename> file
to install the appropriate library packages.
Following is an example using <filename>glibc</filename>
static development libraries:
<literallayout class='monospaced'>
IMAGE_INSTALL_append = " glibc-staticdev"
</literallayout>
</literallayout>
</para></listitem>
<listitem><para>
For additional information on building the installer,
see the
<ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>Cookbook guide to Making an Eclipse Debug Capable Image</ulink>
wiki page.
</para></listitem>
</itemizedlist>
</note>
</para>
</section>
@@ -191,7 +206,7 @@
is the directory where the SDK is installed.
By default, this directory is <filename>/opt/poky/</filename>.
And, <replaceable>version</replaceable> represents the specific
snapshot of the SDK (e.g. <filename>&DISTRO;+snapshot</filename>).
snapshot of the SDK (e.g. <filename>&DISTRO;</filename>).
Furthermore, <replaceable>target</replaceable> represents the target
architecture (e.g. <filename>i586</filename>) and
<replaceable>host</replaceable> represents the development system's


@@ -53,14 +53,32 @@
<listitem><para><emphasis>Build Tools and Build System:</emphasis>
The extensible SDK installer performs additional tasks as
compared to the standard SDK installer.
The extensible SDK installer extracts build tools specific
to the SDK and the installer also prepares the internal build
system within the SDK.
You can find pre-built extensible SDK installers in the same
<ulink url='http://downloads.yoctoproject.org/releases/yocto/yocto-&DISTRO;/toolchain/'>toolchain</ulink>
location as the pre-built standard SDK installers.
For extensible SDK installers, the
<filename>ext</filename> string is part of the name.
Here is an example:
<literallayout class='monospaced'>
poky-glibc-x86_64-core-image-sato-core2-64-toolchain-ext-&DISTRO;.sh
</literallayout>
<note>
As an alternative to downloading an SDK, you can build the toolchain
installer.
For information on building the installer, see the
"<link linkend='sdk-building-an-sdk-installer'>Building an SDK Installer</link>"
section.
Another helpful resource for building an installer is the
<ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>Cookbook guide to Making an Eclipse Debug Capable Image</ulink>
wiki page.
</note>
Here is example output for running the extensible SDK
installer:
<literallayout class='monospaced'>
$ ./poky-glibc-x86_64-core-image-minimal-core2-64-toolchain-ext-2.1+snapshot.sh
Poky (Yocto Project Reference Distro) Extensible SDK installer version 2.1+snapshot
$ ./poky-glibc-x86_64-core-image-minimal-core2-64-toolchain-ext-&DISTRO;.sh
Poky (Yocto Project Reference Distro) Extensible SDK installer version &DISTRO;
===================================================================================
Enter target directory for SDK (default: ~/poky_sdk):
You are about to install the SDK to "/home/scottrif/poky_sdk". Proceed[Y/n]? Y
@@ -80,8 +98,9 @@
<para>
After installing the SDK, you need to run the SDK environment setup
script.
Here is the output:
Here is the output from an example run:
<literallayout class='monospaced'>
$ cd /home/scottrif/poky_sdk
$ source environment-setup-core2-64-poky-linux
SDK environment now set up; additionally you may now run devtool to perform development tasks.
Run devtool --help for further details.
@@ -217,7 +236,8 @@
and needs to be extracted to some
local area - this time outside of the default
workspace.
As always, if required <filename>devtool</filename> creates
If required, <filename>devtool</filename>
always creates
a Git repository locally during the extraction.
Furthermore, the first positional argument
<replaceable>srctree</replaceable> in this case
@@ -782,7 +802,8 @@
much to ensure that these Makefiles build correctly.
It is very common, for example, to explicitly call
<filename>gcc</filename> instead of using the
<filename>CC</filename> variable.
<ulink url='&YOCTO_DOCS_REF_URL;#var-CC'><filename>CC</filename></ulink>
variable.
Usually, in a cross-compilation environment,
<filename>gcc</filename> is the compiler for the build host
and the cross-compiler is named something similar to
@@ -976,7 +997,7 @@
with the <filename>inherit</filename> directive, leaving the recipe
to describe just the things that are specific to the software to be
built.
A <ulink url='ref-classes-base'><filename>base</filename></ulink>
A <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-base'><filename>base</filename></ulink>
class exists that is implicitly inherited by all recipes and provides
the functionality that most typical recipes need.
</para>
@@ -1009,7 +1030,7 @@
<itemizedlist>
<listitem><para><filename>image/</filename>:
Contains all of the files installed at the
<ulink url='&YOCTO_DOCS_REF_URL;ref-tasks-install'><filename>do_install</filename></ulink>
<ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-install'><filename>do_install</filename></ulink>
stage.
Within a recipe, this directory is referred to by the
expression


@@ -113,8 +113,9 @@
of the SDK but is rather available for use as part of the
development process.
</para></listitem>
<listitem><para>Various user-space tools that greatly enhance
your application development experience.
<listitem><para>Various performance-related
<ulink url='http://www.eclipse.org/linuxtools/index.php'>tools</ulink>
that can enhance your development experience.
These tools are also separate from the actual SDK but can be
independently obtained and used in the development process.
</para></listitem>
@@ -196,9 +197,16 @@
These extensions allow for cross-compilation, deployment, and
execution of your output into a QEMU emulation session.
You can also perform cross-debugging and profiling.
The environment also supports a suite of tools that allows you to
perform remote profiling, tracing, collection of power data,
collection of latency data, and collection of performance data.
The environment also supports many performance-related
<ulink url='http://www.eclipse.org/linuxtools/index.php'>tools</ulink>
that enhance your development experience.
<note>
Previous releases of the Eclipse Yocto Plug-in supported
"user-space tools" (i.e. LatencyTOP, PowerTOP, Perf, SystemTap,
and Lttng-ust) that also added to the development experience.
These tools have been deprecated beginning with this release
of the plug-in.
</note>
</para>
<para>
@@ -210,54 +218,15 @@
</para>
</section>
<section id='user-space-tools'>
<title>User-Space Tools</title>
<section id='performance-enhancing-tools'>
<title>Performance Enhancing Tools</title>
<para>
User-space tools, which are available as part of the SDK
development environment, can be helpful.
The tools include LatencyTOP, PowerTOP, Perf, SystemTap,
and Lttng-ust.
These tools are common development tools for the Linux platform.
<itemizedlist>
<listitem><para><emphasis>LatencyTOP:</emphasis> LatencyTOP
focuses on latency that causes skips in audio, stutters in
your desktop experience, or situations that overload your
server even when you have plenty of CPU power left.
</para></listitem>
<listitem><para><emphasis>PowerTOP:</emphasis> Helps you
determine what software is using the most power.
You can find out more about PowerTOP at
<ulink url='https://01.org/powertop/'></ulink>.</para></listitem>
<listitem><para><emphasis>Perf:</emphasis> Performance counters
for Linux used to keep track of certain types of hardware
and software events.
For more information on these types of counters see
<ulink url='https://perf.wiki.kernel.org/'></ulink>.
For examples on how to setup and use this tool, see the
"<ulink url='&YOCTO_DOCS_PROF_URL;#profile-manual-perf'>perf</ulink>"
section in the Yocto Project Profiling and Tracing Manual.
</para></listitem>
<listitem><para><emphasis>SystemTap:</emphasis> A free software
infrastructure that simplifies information gathering about
a running Linux system.
This information helps you diagnose performance or
functional problems.
SystemTap is not available as a user-space tool through
the Eclipse IDE Yocto Plug-in.
See <ulink url='http://sourceware.org/systemtap'></ulink>
for more information on SystemTap.
For examples on how to setup and use this tool, see the
"<ulink url='&YOCTO_DOCS_PROF_URL;#profile-manual-systemtap'>SystemTap</ulink>"
section in the Yocto Project Profiling and Tracing Manual.
</para></listitem>
<listitem><para><emphasis>Lttng-ust:</emphasis> A User-space
Tracer designed to provide detailed information on
user-space activity.
See <ulink url='http://lttng.org/ust'></ulink> for more
information on Lttng-ust.
</para></listitem>
</itemizedlist>
Supported performance enhancing tools are available that let you
profile, debug, and perform tracing on your projects developed
using Eclipse.
For information on these tools see
<ulink url='http://www.eclipse.org/linuxtools/'>http://www.eclipse.org/linuxtools/</ulink>.
</para>
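<para>
To give a feel for the command-line side of these tools, here is a
minimal, hedged <filename>perf</filename> sketch; the application name
is hypothetical and <filename>perf</filename> must be present in the
target image:
<literallayout class='monospaced'>
$ perf record -g ./myapp
$ perf report
</literallayout>
The first command records call-graph samples while the application
runs; the second summarizes the recorded data interactively.
</para>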
</section>
</section>

View File

@@ -74,6 +74,8 @@
<xi:include href="sdk-appendix-customizing.xml"/>
<xi:include href="sdk-appendix-mars.xml"/>
<!-- <index id='index'>
<title>Index</title>
</index>

View File

@@ -96,6 +96,17 @@
<literallayout class='monospaced'>
poky-glibc-x86_64-core-image-sato-i586-toolchain-&DISTRO;.sh
</literallayout>
<note>
As an alternative to downloading an SDK, you can build the toolchain
installer.
For information on building the installer, see the
"<link linkend='sdk-building-an-sdk-installer'>Building an SDK Installer</link>"
section.
Another helpful resource for building an installer is the
<ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>Cookbook guide to Making an Eclipse Debug Capable Image</ulink>
wiki page.
This wiki page focuses on development when using the Eclipse IDE.
</note>
</para>
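<para>
In practice, building the installer instead of downloading it is a
single BitBake invocation; the following is a hedged sketch in which
the image name is only an example and the output location assumes a
default build configuration:
<literallayout class='monospaced'>
$ bitbake core-image-sato -c populate_sdk
</literallayout>
The resulting <filename>*-toolchain-*.sh</filename> installer is then
written under <filename>tmp/deploy/sdk/</filename> in the build
directory.
</para>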
<para>
@@ -107,7 +118,7 @@
You must change the permissions on the toolchain
installer script so that it is executable:
<literallayout class='monospaced'>
$ chmod +x poky-glibc-x86_64-core-image-sato-i586-toolchain-2.1.sh
$ chmod +x poky-glibc-x86_64-core-image-sato-i586-toolchain-&DISTRO;.sh
</literallayout>
</note>
</para>
@@ -126,16 +137,16 @@
run the installer again.
</note>
<literallayout class='monospaced'>
$ ./poky-glibc-x86_64-core-image-sato-i586-toolchain-2.1.sh
$ ./poky-glibc-x86_64-core-image-sato-i586-toolchain-&DISTRO;.sh
Poky (Yocto Project Reference Distro) SDK installer version 2.0
===============================================================
Enter target directory for SDK (default: /opt/poky/2.1):
You are about to install the SDK to "/opt/poky/2.1". Proceed[Y/n]? Y
Enter target directory for SDK (default: /opt/poky/&DISTRO;):
You are about to install the SDK to "/opt/poky/&DISTRO;". Proceed[Y/n]? Y
Extracting SDK.......................................................................done
Setting it up...done
SDK has been successfully set up and is ready to be used.
Each time you wish to use the SDK in a new shell session, you need to source the environment setup script e.g.
$ . /opt/poky/2.1/environment-setup-i586-poky-linux
$ . /opt/poky/&DISTRO;/environment-setup-i586-poky-linux
</literallayout>
</para>
@@ -256,14 +267,15 @@
</itemizedlist></para></listitem>
<listitem><para><emphasis>Source the cross-toolchain
environment setup file:</emphasis>
Installation of the cross-toolchain creates a cross-toolchain
As described earlier in the manual, installing the
cross-toolchain creates a cross-toolchain
environment setup script in the directory where the SDK
was installed.
Before you can use the tools to develop your project, you must
source this setup script.
The script begins with the string "environment-setup" and contains
the machine architecture, which is followed by the string
"poky-linux".
Before you can use the tools to develop your project,
you must source this setup script.
The script begins with the string "environment-setup" and
contains the machine architecture, which is followed by the
string "poky-linux".
Here is an example that sources a script from the
default SDK installation directory that uses the
32-bit Intel x86 Architecture and the
@@ -482,11 +494,9 @@
See the
"<ulink url='&YOCTO_DOCS_DEV_URL;#patching-the-kernel'>Patching the Kernel</ulink>"
section in the Yocto Project Development
manual for an example.</para></listitem>
</itemizedlist></para>
<para>For information on pre-built kernel image naming schemes for images
that can run on the QEMU emulator, see the
<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-manual'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
manual for an example.
</para></listitem>
</itemizedlist>
</para></listitem>
<listitem><para><emphasis>Install the SDK</emphasis>:
The SDK provides a target-specific cross-development toolchain, the root filesystem,
@@ -495,15 +505,18 @@
"<link linkend='sdk-installing-the-sdk'>Installing the SDK</link>"
section.
</para></listitem>
<listitem><para><emphasis>Secure the target root filesystem
<listitem><para><emphasis>
Secure the target root filesystem
and the Cross-development toolchain</emphasis>:
You need to find and download the appropriate root filesystem and
the cross-development toolchain.</para>
<para>You can find the tarballs for the root filesystem in the same area used
for the kernel image.
Depending on the type of image you are running, the root filesystem you need differs.
For example, if you are developing an application that runs on an image that
supports Sato, you need to get a root filesystem that supports Sato.</para>
You need to find and download the appropriate root
filesystem and the cross-development toolchain.</para>
<para>You can find the tarballs for the root filesystem in
the same area used for the kernel image.
Depending on the type of image you are running, the root
filesystem you need differs.
For example, if you are developing an application that
runs on an image that supports Sato, you need to get a
root filesystem that supports Sato.</para>
<para>You can find the cross-development toolchains at
<ulink url='&YOCTO_TOOLCHAIN_DL_URL;'><filename>toolchains</filename></ulink>.
Be sure to get the correct toolchain for your development host and your
@@ -512,6 +525,17 @@
section for information and the
"<link linkend='sdk-installing-the-sdk'>Installing the SDK</link>"
section for installation information.
<note>
As an alternative to downloading an SDK, you can build
the toolchain installer.
For information on building the installer, see the
"<link linkend='sdk-building-an-sdk-installer'>Building an SDK Installer</link>"
section.
Another helpful resource for building an installer is
the
<ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>Cookbook guide to Making an Eclipse Debug Capable Image</ulink>
wiki page.
</note>
</para></listitem>
<listitem><para><emphasis>Create and build your application</emphasis>:
At this point, you need to have source files for your application.
@@ -519,13 +543,12 @@
project.
If you are not using Eclipse, you need to use the cross-development tools you have
installed to create the image.</para></listitem>
<listitem><para><emphasis>Deploy the image with the application</emphasis>:
If you are using the Eclipse IDE, you can deploy your image to the hardware or to
QEMU through the project's preferences.
If you are not using the Eclipse IDE, then you need to deploy the application
to the hardware using other methods.
Or, if you are using QEMU, you need to use that tool and
load your image in for testing.
<listitem><para>
<emphasis>Deploy the image with the application</emphasis>:
Using the Eclipse IDE, you can deploy your image to the
hardware or to QEMU through the project's preferences.
You can also use Eclipse to load and test your image under
QEMU (a minimal command-line sketch follows this list).
See the
"<ulink url='&YOCTO_DOCS_DEV_URL;#dev-manual-qemu'>Using the Quick EMUlator (QEMU)</ulink>"
chapter in the Yocto Project Development Manual
@@ -533,10 +556,10 @@
</para></listitem>
<listitem><para><emphasis>Test and debug the application</emphasis>:
Once your application is deployed, you need to test it.
Within the Eclipse IDE, you can use the debugging environment along with the
set of installed user-space tools to debug your application.
Of course, the same user-space tools are available separately if you choose
not to use the Eclipse IDE.</para></listitem>
Within the Eclipse IDE, you can use the debugging
environment along with supported performance enhancing
<ulink url='http://www.eclipse.org/linuxtools/'>tools</ulink>.
</para></listitem>
</orderedlist>
</para>
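<para>
As a hedged sketch of the QEMU route mentioned in the deployment step
above, booting a freshly built image from the build directory typically
needs only the <filename>runqemu</filename> helper; the machine and
image names here are examples:
<literallayout class='monospaced'>
$ runqemu qemux86 core-image-sato
</literallayout>
The QEMU chapter referenced above describes the available
<filename>runqemu</filename> options in detail.
</para>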
</section>
@@ -544,6 +567,11 @@
<section id='adt-eclipse'>
<title>Working Within Eclipse</title>
<para role="writernotes">
This section needs to be updated to use Eclipse Neon throughout.
It is out of date at the moment.
</para>
<para>
The Eclipse IDE is a popular development environment and it fully
supports development using the Yocto Project.
@@ -565,9 +593,11 @@
execution of your output into a QEMU emulation session as well as
actual target hardware.
You can also perform cross-debugging and profiling.
The environment also supports a suite of tools that allows you
to perform remote profiling, tracing, collection of power data,
collection of latency data, and collection of performance data.
The environment also supports performance enhancing
<ulink url='http://www.eclipse.org/linuxtools/'>tools</ulink> that
allow you to perform remote profiling, tracing, collection of
power data, collection of latency data, and collection of
performance data.
</para>
<para>
@@ -1317,144 +1347,18 @@
</para>
</section>
<section id='running-user-space-tools'>
<title>Running User-Space Tools</title>
<section id='running-performance-tools'>
<title>Running Performance Tools</title>
<para>
As mentioned earlier in the manual, several tools exist that
enhance your development experience.
These tools are aids in developing and debugging applications
and images.
You can run these user-space tools from within the Eclipse
You can run these tools from within the Eclipse
IDE through the "YoctoProjectTools" menu.
</para>
<para>
Once you pick a tool, you need to configure it for the remote
target.
Every tool needs to have the connection configured.
You must select an existing TCF-based RSE connection to the
remote target.
If one does not exist, click "New" to create one.
</para>
<para>
Here are some specifics about the remote tools:
<itemizedlist>
<listitem><para><emphasis><filename>Lttng2.0 trace import</filename>:</emphasis>
Selecting this tool transfers the remote target's
<filename>Lttng</filename> tracing data back to the
local host machine and uses the Lttng Eclipse plug-in
to graphically display the output.
For information on how to use Lttng to trace an
application,
see <ulink url='http://lttng.org/documentation'></ulink>
and the
"<ulink url='&YOCTO_DOCS_PROF_URL;#lttng-linux-trace-toolkit-next-generation'>LTTng (Linux Trace Toolkit, next generation)</ulink>"
section, which is in the Yocto Project Profiling and
Tracing Manual.
<note>Do not use
<filename>Lttng-user space (legacy)</filename> tool.
This tool no longer has any upstream support.</note>
</para>
<para>Before you use the
<filename>Lttng2.0 trace import</filename> tool,
you need to setup the Lttng Eclipse plug-in and create a
Tracing project.
Do the following:
<orderedlist>
<listitem><para>Select "Open Perspective" from the
"Window" menu and then select "Other..." to
bring up a menu of other perspectives.
Choose "Tracing".
</para></listitem>
<listitem><para>Click "OK" to change the Eclipse
perspective into the Tracing perspective.
</para></listitem>
<listitem><para>Create a new Tracing project by
selecting "Project" from the "File -> New" menu.
</para></listitem>
<listitem><para>Choose "Tracing Project" from the
"Tracing" menu and click "Next".
</para></listitem>
<listitem><para>Provide a name for your tracing
project and click "Finish".
</para></listitem>
<listitem><para>Generate your tracing data on the
remote target.</para></listitem>
<listitem><para>Select "Lttng2.0 trace import"
from the "Yocto Project Tools" menu to
start the data import process.</para></listitem>
<listitem><para>Specify your remote connection name.
</para></listitem>
<listitem><para>For the Ust directory path, specify
the location of your remote tracing data.
Make sure the location ends with
<filename>ust</filename> (e.g.
<filename>/usr/mysession/ust</filename>).
</para></listitem>
<listitem><para>Click "OK" to complete the import
process.
The data is now in the local tracing project
you created.</para></listitem>
<listitem><para>Right click on the data and then use
the menu to Select "Generic CTF Trace" from the
"Trace Type... -> Common Trace Format" menu to
map the tracing type.</para></listitem>
<listitem><para>Right click the mouse and select
"Open" to bring up the Eclipse Lttng Trace
Viewer so you view the tracing data.
</para></listitem>
</orderedlist></para></listitem>
<listitem><para><emphasis><filename>PowerTOP</filename>:</emphasis>
Selecting this tool runs PowerTOP on the remote target
machine and displays the results in a new view called
PowerTOP.</para>
<para>The "Time to gather data(sec):" field is the time
passed in seconds before data is gathered from the
remote target for analysis.</para>
<para>The "show pids in wakeups list:" field corresponds
to the <filename>-p</filename> argument passed to
<filename>PowerTOP</filename>.</para></listitem>
<listitem><para><emphasis><filename>LatencyTOP and Perf</filename>:</emphasis>
LatencyTOP identifies system latency, while
Perf monitors the system's performance counter
registers.
Selecting either of these tools causes an RSE terminal
view to appear from which you can run the tools.
Both tools refresh the entire screen to display results
while they run.
For more information on setting up and using
<filename>perf</filename>, see the
"<ulink url='&YOCTO_DOCS_PROF_URL;#profile-manual-perf'>perf</ulink>"
section in the Yocto Project Profiling and Tracing
Manual.
</para></listitem>
<listitem><para><emphasis><filename>SystemTap</filename>:</emphasis>
Systemtap is a tool that lets you create and reuse
scripts to examine the activities of a live Linux
system.
You can easily extract, filter, and summarize data
that helps you diagnose complex performance or
functional problems.
For more information on setting up and using
<filename>SystemTap</filename>, see the
<ulink url='https://sourceware.org/systemtap/documentation.html'>SystemTap Documentation</ulink>.
</para></listitem>
<listitem><para><emphasis><filename>yocto-bsp</filename>:</emphasis>
The <filename>yocto-bsp</filename> tool lets you
quickly set up a Board Support Package (BSP) layer.
The tool requires a Metadata location, build location,
BSP name, BSP output location, and a kernel
architecture.
For more information on the
<filename>yocto-bsp</filename> tool outside of Eclipse,
see the
"<ulink url='&YOCTO_DOCS_BSP_URL;#creating-a-new-bsp-layer-using-the-yocto-bsp-script'>Creating a new BSP Layer Using the yocto-bsp Script</ulink>"
section in the Yocto Project Board Support Package
(BSP) Developer's Guide.
</para></listitem>
</itemizedlist>
For more information on these tools, see
<ulink url='http://www.eclipse.org/linuxtools/'>http://www.eclipse.org/linuxtools/</ulink>.
</para>
</section>
</section>

View File

@@ -51,8 +51,8 @@ RECIPE_MAINTAINER_pn-attr = "Chen Qi <Qi.Chen@windriver.com>"
RECIPE_MAINTAINER_pn-autoconf = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-autogen-native = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-automake = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-avahi = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-avahi-ui = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-avahi = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-avahi-ui = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-babeltrace = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-base-files = "Ross Burton <ross.burton@intel.com>"
RECIPE_MAINTAINER_pn-base-passwd = "Ross Burton <ross.burton@intel.com>"
@@ -62,14 +62,14 @@ RECIPE_MAINTAINER_pn-bc = "Alejandro Hernandez <alejandro.hernandez@linux.intel.
RECIPE_MAINTAINER_pn-bdwgc = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-beecrypt = "Chen Qi <Qi.Chen@windriver.com>"
RECIPE_MAINTAINER_pn-bigreqsproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-bind = "Kai Kang <kai.kang@windriver.com>"
RECIPE_MAINTAINER_pn-bind = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-binutils = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-binutils-cross = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-binutils-cross-canadian = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-binutils-crosssdk = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-bison = "Chen Qi <Qi.Chen@windriver.com>"
RECIPE_MAINTAINER_pn-bjam-native = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-blktool = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-blktool = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-blktrace = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-bluez5 = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-bmap-tools = "Ed Bartosh <ed.bartosh@linux.intel.com>"
@@ -88,11 +88,11 @@ RECIPE_MAINTAINER_pn-ca-certificates = "Alexander Kanavin <alexander.kanavin@int
RECIPE_MAINTAINER_pn-cairo = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-calibrateproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-cantarell-fonts = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-ccache = "Wenzong Fan <wenzong.fan@windriver.com>"
RECIPE_MAINTAINER_pn-cdrtools-native = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-chkconfig = "Wenzong Fan <wenzong.fan@windriver.com>"
RECIPE_MAINTAINER_pn-chkconfig-alternatives-native = "Wenzong Fan <wenzong.fan@windriver.com>"
RECIPE_MAINTAINER_pn-chrpath = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-ccache = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-cdrtools-native = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-chkconfig = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-chkconfig-alternatives-native = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-chrpath = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-clutter-1.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-clutter-gst-3.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-clutter-gtk-1.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
@@ -154,21 +154,21 @@ RECIPE_MAINTAINER_pn-diffutils = "Chen Qi <Qi.Chen@windriver.com>"
RECIPE_MAINTAINER_pn-directfb = "Hongxu Jia <hongxu.jia@windriver.com>"
RECIPE_MAINTAINER_pn-directfb-examples = "Hongxu Jia <hongxu.jia@windriver.com>"
RECIPE_MAINTAINER_pn-distcc = "Hongxu Jia <hongxu.jia@windriver.com>"
RECIPE_MAINTAINER_pn-distcc-config = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-distcc-config = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-dmidecode = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-dmxproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-docbook-dsssl-stylesheets-native = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-docbook-sgml-dtd-3.1-native = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-docbook-sgml-dtd-4.1-native = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-docbook-sgml-dtd-4.5-native = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-docbook-utils-native = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-docbook-xml-dtd4 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-docbook-xsl-stylesheets = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-dosfstools = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-docbook-dsssl-stylesheets-native = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-docbook-sgml-dtd-3.1-native = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-docbook-sgml-dtd-4.1-native = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-docbook-sgml-dtd-4.5-native = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-docbook-utils-native = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-docbook-xml-dtd4 = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-docbook-xsl-stylesheets = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-dosfstools = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-dpkg = "Aníbal Limón <anibal.limon@linux.intel.com>"
RECIPE_MAINTAINER_pn-dri2proto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-dri3proto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-dropbear = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-dropbear = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-dtc = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-e2fsprogs = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-ed = "Alexander Kanavin <alexander.kanavin@intel.com>"
@@ -183,7 +183,7 @@ RECIPE_MAINTAINER_pn-encodings = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-epiphany = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-ethtool = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-eudev = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
RECIPE_MAINTAINER_pn-expat = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-expat = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-expect = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-ffmpeg = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-file = "Robert Yang <liezhi.yang@windriver.com>"
@@ -218,8 +218,8 @@ RECIPE_MAINTAINER_pn-gdb-cross = "Richard Purdie <richard.purdie@linuxfoundation
RECIPE_MAINTAINER_pn-gdb-cross-canadian = "Richard Purdie <richard.purdie@linuxfoundation.org>"
RECIPE_MAINTAINER_pn-gdbm = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-gdk-pixbuf = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-gettext = "Wenzong Fan <wenzong.fan@windriver.com>"
RECIPE_MAINTAINER_pn-gettext-minimal-native = "Hongxu Jia <hongxu.jia@windriver.com>"
RECIPE_MAINTAINER_pn-gettext = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-gettext-minimal-native = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-ghostscript = "Hongxu Jia <hongxu.jia@windriver.com>"
RECIPE_MAINTAINER_pn-git = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-glew = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
@@ -243,11 +243,11 @@ RECIPE_MAINTAINER_pn-gnupg = "Hongxu Jia <hongxu.jia@windriver.com>"
RECIPE_MAINTAINER_pn-gnutls = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-gobject-introspection = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-gperf = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-gpgme = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-gpgme = "Hongxu Jia <hongxu.jia@windriver.com>"
RECIPE_MAINTAINER_pn-gptfdisk = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-grep = "Chen Qi <Qi.Chen@windriver.com>"
RECIPE_MAINTAINER_pn-groff = "Hongxu Jia <hongxu.jia@windriver.com>"
RECIPE_MAINTAINER_pn-grub = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-grub = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-grub-efi = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-gsettings-desktop-schemas = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-gst-player = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
@@ -262,7 +262,7 @@ RECIPE_MAINTAINER_pn-gstreamer1.0-plugins-ugly = "Maxin B. John <maxin.john@inte
RECIPE_MAINTAINER_pn-gstreamer1.0-rtsp-server = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-gtk+ = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-gtk+3 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-gtk-doc-stub = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-gtk-doc = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-gtk-engines = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-gtk-icon-utils-native = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-gtk-sato-engine = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
@@ -300,8 +300,8 @@ RECIPE_MAINTAINER_pn-irda-utils = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-iso-codes = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-iw = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-libjpeg-turbo = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-json-c = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-json-glib = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-json-c = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-json-glib = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-kbd = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-kbproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-kconfig-frontends = "Alexander Kanavin <alexander.kanavin@intel.com>"
@@ -356,7 +356,7 @@ RECIPE_MAINTAINER_pn-libgcrypt = "Hongxu Jia <hongxu.jia@windriver.com>"
RECIPE_MAINTAINER_pn-libgfortran = "Richard Purdie <richard.purdie@linuxfoundation.org>"
RECIPE_MAINTAINER_pn-libglade = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-libglu = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-libgpg-error = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-libgpg-error = "Hongxu Jia <hongxu.jia@windriver.com>"
RECIPE_MAINTAINER_pn-libgudev = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-libi18n-collate-perl = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
RECIPE_MAINTAINER_pn-libical = "Maxin B. John <maxin.john@intel.com>"
@@ -390,8 +390,8 @@ RECIPE_MAINTAINER_pn-libproxy = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-libpthread-stubs = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-librsvg = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-libsamplerate0 = "Tanu Kaskinen <tanuk@iki.fi>"
RECIPE_MAINTAINER_pn-libsdl = "Kai Kang <kai.kang@windriver.com>"
RECIPE_MAINTAINER_pn-libsdl2 = "Kai Kang <kai.kang@windriver.com>"
RECIPE_MAINTAINER_pn-libsdl = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-libsdl2 = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-libsecret = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-libsm = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-libsndfile1 = "Tanu Kaskinen <tanuk@iki.fi>"
@@ -415,7 +415,7 @@ RECIPE_MAINTAINER_pn-libvorbis = "Tanu Kaskinen <tanuk@iki.fi>"
RECIPE_MAINTAINER_pn-libwebp = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-libwnck3 = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-libx11 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-libx11-diet = "Kai Kang <kai.kang@windriver.com>"
RECIPE_MAINTAINER_pn-libx11-diet = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-libxau = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-libxcalibrate = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-libxcb = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
@@ -463,14 +463,14 @@ RECIPE_MAINTAINER_pn-linux-yocto = "Bruce Ashfield <bruce.ashfield@windriver.com
RECIPE_MAINTAINER_pn-linux-yocto-dev = "Bruce Ashfield <bruce.ashfield@windriver.com>"
RECIPE_MAINTAINER_pn-linux-yocto-rt = "Bruce Ashfield <bruce.ashfield@windriver.com>"
RECIPE_MAINTAINER_pn-linux-yocto-tiny = "Bruce Ashfield <bruce.ashfield@windriver.com>"
RECIPE_MAINTAINER_pn-linuxdoc-tools-native = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-linuxdoc-tools-native = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-logrotate = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-lrzsz = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-lsb = "Hongxu Jia <hongxu.jia@windriver.com>"
RECIPE_MAINTAINER_pn-lsbinitscripts = "Ross Burton <ross.burton@intel.com>"
RECIPE_MAINTAINER_pn-lsbtest = "Yi Zhao <yi.zhao@windriver.com>"
RECIPE_MAINTAINER_pn-lsbtest = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-lsof = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-ltp = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-ltp = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-lttng-modules = "Richard Purdie <richard.purdie@linuxfoundation.org>"
RECIPE_MAINTAINER_pn-lttng-tools = "Richard Purdie <richard.purdie@linuxfoundation.org>"
RECIPE_MAINTAINER_pn-lttng-ust = "Richard Purdie <richard.purdie@linuxfoundation.org>"
@@ -479,7 +479,7 @@ RECIPE_MAINTAINER_pn-lzo = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-lzop = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-m4 = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-m4-native = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-mailx = "Kai Kang <kai.kang@windriver.com>"
RECIPE_MAINTAINER_pn-mailx = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-make = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-makedepend = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-makedevs = "Chen Qi <Qi.Chen@windriver.com>"
@@ -506,7 +506,7 @@ RECIPE_MAINTAINER_pn-meta-environment-extsdk = "Richard Purdie <richard.purdie@l
RECIPE_MAINTAINER_pn-meta-ide-support = "Cristian Iorga <cristian.iorga@intel.com>"
RECIPE_MAINTAINER_pn-meta-toolchain = "Cristian Iorga <cristian.iorga@intel.com>"
RECIPE_MAINTAINER_pn-meta-world-pkgdata = "Richard Purdie <richard.purdie@linuxfoundation.org>"
RECIPE_MAINTAINER_pn-mingetty = "Kai Kang <kai.kang@windriver.com>"
RECIPE_MAINTAINER_pn-mingetty = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-mini-x-session = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-minicom = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-mkelfimage = "Alexander Kanavin <alexander.kanavin@intel.com>"
@@ -546,9 +546,9 @@ RECIPE_MAINTAINER_pn-nss = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-nss-myhostname = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-ofono = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-oh-puzzles = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-openjade-native = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-opensp = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-openssh = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-openjade-native = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-opensp = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-openssh = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-openssl = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-opkg = "Alejandro del Castillo <alejandro.delcastillo@ni.com>"
RECIPE_MAINTAINER_pn-opkg-arch-config = "Alejandro del Castillo <alejandro.delcastillo@ni.com>"
@@ -606,7 +606,7 @@ RECIPE_MAINTAINER_pn-pm-utils = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-pointercal = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-pointercal-xinput = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-pong-clock = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-popt = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-popt = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-portmap = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-powertop = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-ppp = "Hongxu Jia <hongxu.jia@windriver.com>"
@@ -616,7 +616,7 @@ RECIPE_MAINTAINER_pn-presentproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-procps = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-pseudo = "Mark Hatle <mark.hatle@windriver.com>"
RECIPE_MAINTAINER_pn-psmisc = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-psplash = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-psplash = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-ptest-runner = "Aníbal Limón <anibal.limon@linux.intel.com>"
RECIPE_MAINTAINER_pn-pulseaudio = "Tanu Kaskinen <tanuk@iki.fi>"
RECIPE_MAINTAINER_pn-pulseaudio-client-conf-sato = "Tanu Kaskinen <tanuk@iki.fi>"
@@ -690,17 +690,17 @@ RECIPE_MAINTAINER_pn-sed = "Chen Qi <Qi.Chen@windriver.com>"
RECIPE_MAINTAINER_pn-serf = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-setserial = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-settings-daemon = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-sgml-common = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-sgml-common-native = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-sgmlspl = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-sgmlspl-native = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-sgml-common = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-sgml-common-native = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-sgmlspl = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-sgmlspl-native = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-shadow = "Chen Qi <Qi.Chen@windriver.com>"
RECIPE_MAINTAINER_pn-shadow-securetty = "Chen Qi <Qi.Chen@windriver.com>"
RECIPE_MAINTAINER_pn-shadow-sysroot = "Chen Qi <Qi.Chen@windriver.com>"
RECIPE_MAINTAINER_pn-shared-mime-info = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-shutdown-desktop = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-signing-keys = "Richard Purdie <richard.purdie@linuxfoundation.org>"
RECIPE_MAINTAINER_pn-slang = "Kai Kang <kai.kang@windriver.com>"
RECIPE_MAINTAINER_pn-slang = "Robert Yang <liezhi.yang@windriver.com>"
RECIPE_MAINTAINER_pn-socat = "Hongxu Jia <hongxu.jia@windriver.com>"
RECIPE_MAINTAINER_pn-speex = "Tanu Kaskinen <tanuk@iki.fi>"
RECIPE_MAINTAINER_pn-speexdsp = "Tanu Kaskinen <tanuk@iki.fi>"
@@ -840,7 +840,7 @@ RECIPE_MAINTAINER_pn-xvideo-tests = "Maxin B. John <maxin.john@intel.com>"
RECIPE_MAINTAINER_pn-xvinfo = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-xwininfo = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-xz = "Chen Qi <Qi.Chen@windriver.com>"
RECIPE_MAINTAINER_pn-yasm = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
RECIPE_MAINTAINER_pn-yasm = "Dengke Du <dengke.du@windriver.com>"
RECIPE_MAINTAINER_pn-zip = "Chen Qi <Qi.Chen@windriver.com>"
RECIPE_MAINTAINER_pn-zisofs-tools-native = "Alexander Kanavin <alexander.kanavin@intel.com>"
RECIPE_MAINTAINER_pn-zlib = "Chen Qi <Qi.Chen@windriver.com>"

View File

@@ -21,13 +21,13 @@ POKY_DEFAULT_EXTRA_RRECOMMENDS = "kernel-module-af-packet"
DISTRO_FEATURES ?= "${DISTRO_FEATURES_DEFAULT} ${DISTRO_FEATURES_LIBC} ${POKY_DEFAULT_DISTRO_FEATURES}"
PREFERRED_VERSION_linux-yocto ?= "4.4%"
PREFERRED_VERSION_linux-yocto_qemux86 ?= "4.4%"
PREFERRED_VERSION_linux-yocto_qemux86-64 ?= "4.4%"
PREFERRED_VERSION_linux-yocto_qemuarm ?= "4.4%"
PREFERRED_VERSION_linux-yocto_qemumips ?= "4.4%"
PREFERRED_VERSION_linux-yocto_qemumips64 ?= "4.4%"
PREFERRED_VERSION_linux-yocto_qemuppc ?= "4.4%"
PREFERRED_VERSION_linux-yocto ?= "4.8%"
PREFERRED_VERSION_linux-yocto_qemux86 ?= "4.8%"
PREFERRED_VERSION_linux-yocto_qemux86-64 ?= "4.8%"
PREFERRED_VERSION_linux-yocto_qemuarm ?= "4.8%"
PREFERRED_VERSION_linux-yocto_qemumips ?= "4.8%"
PREFERRED_VERSION_linux-yocto_qemumips64 ?= "4.8%"
PREFERRED_VERSION_linux-yocto_qemuppc ?= "4.8%"
SDK_NAME = "${DISTRO}-${TCLIBC}-${SDK_ARCH}-${IMAGE_BASENAME}-${TUNE_PKGARCH}"
SDKPATH = "/opt/${DISTRO}/${SDK_VERSION}"
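# Illustration only (not part of poky.conf): the lines above only move the
# weak default; a build can still pin the previous kernel series from
# local.conf or a distro include, for example:
#   PREFERRED_VERSION_linux-yocto_qemux86 = "4.4%"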

View File

@@ -152,8 +152,7 @@ EXTRA_IMAGE_FEATURES ?= "debug-tweaks"
# - 'image-swab' to perform host system intrusion detection
# NOTE: if listing mklibs & prelink both, then make sure mklibs is before prelink
# NOTE: mklibs also needs to be explicitly enabled for a given image, see local.conf.extended
# image-prelink disabled for now due to issues with IFUNC symbol relocation
USER_CLASSES ?= "buildstats image-mklibs"
USER_CLASSES ?= "buildstats image-mklibs image-prelink"
#
# Runtime testing of images

View File

@@ -7,8 +7,8 @@ KBRANCH_mpc8315e-rdb = "standard/fsl-mpc8315e-rdb"
KMACHINE_genericx86 ?= "common-pc"
KMACHINE_genericx86-64 ?= "common-pc-64"
SRCREV_machine_genericx86 ?= "44af900716206d4cae283aa74e92f4118720724a"
SRCREV_machine_genericx86-64 ?= "44af900716206d4cae283aa74e92f4118720724a"
SRCREV_machine_genericx86 ?= "0847ed67f89b5a8bbaaa0a7b6cfa2a99ef34834f"
SRCREV_machine_genericx86-64 ?= "0847ed67f89b5a8bbaaa0a7b6cfa2a99ef34834f"
SRCREV_machine_edgerouter ?= "403eda4633e9037fb715d0d1e8ae847b2bd0651a"
SRCREV_machine_beaglebone ?= "859aecf685fcd9d30490a6da459fb76b48947075"
SRCREV_machine_mpc8315e-rdb ?= "403eda4633e9037fb715d0d1e8ae847b2bd0651a"
@@ -19,5 +19,5 @@ COMPATIBLE_MACHINE_edgerouter = "edgerouter"
COMPATIBLE_MACHINE_beaglebone = "beaglebone"
COMPATIBLE_MACHINE_mpc8315e-rdb = "mpc8315e-rdb"
LINUX_VERSION_genericx86 = "4.1.28"
LINUX_VERSION_genericx86-64 = "4.1.28"
LINUX_VERSION_genericx86 = "4.1.30"
LINUX_VERSION_genericx86-64 = "4.1.30"

View File

@@ -7,8 +7,8 @@ KBRANCH_edgerouter = "standard/edgerouter"
KBRANCH_beaglebone = "standard/beaglebone"
KBRANCH_mpc8315e-rdb = "standard/fsl-mpc8315e-rdb"
SRCREV_machine_genericx86 ?= "ddab242999407fadae68e7ee5381b0ec6679d443"
SRCREV_machine_genericx86-64 ?= "ddab242999407fadae68e7ee5381b0ec6679d443"
SRCREV_machine_genericx86 ?= "0a0c93f29c0d65c00abdd2f6d1eb89134fae9525"
SRCREV_machine_genericx86-64 ?= "0a0c93f29c0d65c00abdd2f6d1eb89134fae9525"
SRCREV_machine_edgerouter ?= "628bf627561c6285d99fb978e11d4c15fc29324b"
SRCREV_machine_beaglebone ?= "628bf627561c6285d99fb978e11d4c15fc29324b"
SRCREV_machine_mpc8315e-rdb ?= "94ac8da44990afd2d43c0ccd713420fb1cfa0792"
@@ -19,5 +19,5 @@ COMPATIBLE_MACHINE_edgerouter = "edgerouter"
COMPATIBLE_MACHINE_beaglebone = "beaglebone"
COMPATIBLE_MACHINE_mpc8315e-rdb = "mpc8315e-rdb"
LINUX_VERSION_genericx86 = "4.4.15"
LINUX_VERSION_genericx86-64 = "4.4.15"
LINUX_VERSION_genericx86 = "4.4.18"
LINUX_VERSION_genericx86-64 = "4.4.18"

View File

@@ -19,8 +19,6 @@ def autotools_dep_prepend(d):
return deps + 'gnu-config-native '
EXTRA_OEMAKE = ""
DEPENDS_prepend = "${@autotools_dep_prepend(d)} "
inherit siteinfo
@@ -131,6 +129,8 @@ autotools_postconfigure(){
EXTRACONFFUNCS ??= ""
EXTRA_OECONF_append = " ${PACKAGECONFIG_CONFARGS}"
do_configure[prefuncs] += "autotools_preconfigure autotools_copy_aclocals ${EXTRACONFFUNCS}"
do_configure[postfuncs] += "autotools_postconfigure"

View File

@@ -431,12 +431,6 @@ python () {
appendVar('RDEPENDS_${PN}', extrardeps)
appendVar('PACKAGECONFIG_CONFARGS', extraconf)
# TODO: once all recipes/classes abusing EXTRA_OECONF
# to get PACKAGECONFIG options are fixed to use PACKAGECONFIG_CONFARGS
# move this appendVar to autotools.bbclass.
if not bb.data.inherits_class('cmake', d):
appendVar('EXTRA_OECONF', extraconf)
pn = d.getVar('PN', True)
license = d.getVar('LICENSE', True)
if license == "INVALID":
@@ -543,6 +537,19 @@ python () {
if pn in incompatwl:
bb.note("INCLUDING " + pn + " as buildable despite INCOMPATIBLE_LICENSE because it has been whitelisted")
# Try to verify per-package (LICENSE_<pkg>) values. LICENSE should be a
# superset of all per-package licenses. We do not do advanced (pattern)
# matching of license expressions - just check that all license strings
# in LICENSE_<pkg> are found in LICENSE.
license_set = oe.license.list_licenses(license)
for pkg in d.getVar('PACKAGES', True).split():
pkg_license = d.getVar('LICENSE_' + pkg, True)
if pkg_license:
unlisted = oe.license.list_licenses(pkg_license) - license_set
if unlisted:
bb.warn("LICENSE_%s includes licenses (%s) that are not "
"listed in LICENSE" % (pkg, ' '.join(unlisted)))
needsrcrev = False
srcuri = d.getVar('SRC_URI', True)
for uri in srcuri.split():

View File

@@ -1,3 +1,5 @@
DEPENDS_append_class-target = " bash-completion"
PACKAGES += "${PN}-bash-completion"
FILES_${PN}-bash-completion = "${datadir}/bash-completion ${sysconfdir}/bash_completion.d"

View File

@@ -4,5 +4,3 @@ CCACHE_DISABLE[unexport] = "1"
do_configure[dirs] =+ "${CCACHE_DIR}"
do_kernel_configme[dirs] =+ "${CCACHE_DIR}"
do_clean[cleandirs] += "${CCACHE_DIR}"

View File

@@ -108,18 +108,19 @@ cmake_do_configure() {
${OECMAKE_SITEFILE} \
${OECMAKE_SOURCEPATH} \
-DCMAKE_INSTALL_PREFIX:PATH=${prefix} \
-DCMAKE_INSTALL_BINDIR:PATH=${bindir} \
-DCMAKE_INSTALL_SBINDIR:PATH=${sbindir} \
-DCMAKE_INSTALL_LIBEXECDIR:PATH=${libexecdir} \
-DCMAKE_INSTALL_BINDIR:PATH=${@os.path.relpath(d.getVar('bindir', True), d.getVar('prefix', True))} \
-DCMAKE_INSTALL_SBINDIR:PATH=${@os.path.relpath(d.getVar('sbindir', True), d.getVar('prefix', True))} \
-DCMAKE_INSTALL_LIBEXECDIR:PATH=${@os.path.relpath(d.getVar('libexecdir', True), d.getVar('prefix', True))} \
-DCMAKE_INSTALL_SYSCONFDIR:PATH=${sysconfdir} \
-DCMAKE_INSTALL_SHAREDSTATEDIR:PATH=${sharedstatedir} \
-DCMAKE_INSTALL_SHAREDSTATEDIR:PATH=${@os.path.relpath(d.getVar('sharedstatedir', True), d.getVar('prefix', True))} \
-DCMAKE_INSTALL_LOCALSTATEDIR:PATH=${localstatedir} \
-DCMAKE_INSTALL_LIBDIR:PATH=${libdir} \
-DCMAKE_INSTALL_INCLUDEDIR:PATH=${includedir} \
-DCMAKE_INSTALL_DATAROOTDIR:PATH=${datadir} \
-DCMAKE_INSTALL_LIBDIR:PATH=${@os.path.relpath(d.getVar('libdir', True), d.getVar('prefix', True))} \
-DCMAKE_INSTALL_INCLUDEDIR:PATH=${@os.path.relpath(d.getVar('includedir', True), d.getVar('prefix', True))} \
-DCMAKE_INSTALL_DATAROOTDIR:PATH=${@os.path.relpath(d.getVar('datadir', True), d.getVar('prefix', True))} \
-DCMAKE_INSTALL_SO_NO_EXE=0 \
-DCMAKE_TOOLCHAIN_FILE=${WORKDIR}/toolchain.cmake \
-DCMAKE_VERBOSE_MAKEFILE=1 \
-DCMAKE_NO_SYSTEM_FROM_IMPORTED=1 \
${EXTRA_OECMAKE} \
-Wno-dev
}
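# Illustration only (not part of the class): the relpath expressions above
# turn the absolute directory variables into prefix-relative values, which
# CMake then resolves against CMAKE_INSTALL_PREFIX, e.g. with the default
# layout:
#   os.path.relpath('/usr/bin', '/usr') -> 'bin'
#   os.path.relpath('/usr/lib', '/usr') -> 'lib'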

View File

@@ -15,7 +15,8 @@ STAGING_BINDIR_TOOLCHAIN = "${STAGING_DIR_NATIVE}${bindir_native}/${SDK_ARCH}${S
# Update BASE_PACKAGE_ARCH and PACKAGE_ARCHS
#
PACKAGE_ARCH = "${SDK_ARCH}-${SDKPKGSUFFIX}"
CANADIANEXTRAOS = "linux-uclibc linux-musl"
BASECANADIANEXTRAOS ?= "linux-uclibc linux-musl"
CANADIANEXTRAOS = "${BASECANADIANEXTRAOS}"
CANADIANEXTRAVENDOR = ""
MODIFYTOS ??= "1"
python () {
@@ -34,8 +35,13 @@ python () {
tos = d.getVar("TARGET_OS", True)
whitelist = []
extralibcs = [""]
if "uclibc" in d.getVar("BASECANADIANEXTRAOS", True):
extralibcs.append("uclibc")
if "musl" in d.getVar("BASECANADIANEXTRAOS", True):
extralibcs.append("musl")
for variant in ["", "spe", "x32", "eabi", "n32"]:
for libc in ["", "uclibc", "musl"]:
for libc in extralibcs:
entry = "linux"
if variant and libc:
entry = entry + "-" + libc + variant
@@ -59,14 +65,20 @@ python () {
if tarch == "x86_64":
d.setVar("LIBCEXTENSION", "")
d.setVar("ABIEXTENSION", "")
d.appendVar("CANADIANEXTRAOS", " linux-gnux32 linux-uclibcx32 linux-muslx32")
d.appendVar("CANADIANEXTRAOS", " linux-gnux32")
for extraos in d.getVar("BASECANADIANEXTRAOS", True).split():
d.appendVar("CANADIANEXTRAOS", " " + extraos + "x32")
elif tarch == "powerpc":
# PowerPC can build "linux" and "linux-gnuspe"
d.setVar("LIBCEXTENSION", "")
d.setVar("ABIEXTENSION", "")
d.appendVar("CANADIANEXTRAOS", " linux-gnuspe linux-uclibcspe linux-muslspe")
d.appendVar("CANADIANEXTRAOS", " linux-gnuspe")
for extraos in d.getVar("BASECANADIANEXTRAOS", True).split():
d.appendVar("CANADIANEXTRAOS", " " + extraos + "spe")
elif tarch == "mips64":
d.appendVar("CANADIANEXTRAOS", " linux-gnun32 linux-uclibcn32 linux-musln32")
d.appendVar("CANADIANEXTRAOS", " linux-gnun32")
for extraos in d.getVar("BASECANADIANEXTRAOS", True).split():
d.appendVar("CANADIANEXTRAOS", " " + extraos + "n32")
if tarch == "arm" or tarch == "armeb":
d.appendVar("CANADIANEXTRAOS", " linux-musleabi linux-uclibceabi")
d.setVar("TARGET_OS", "linux-gnueabi")

View File

@@ -1,5 +1,3 @@
EXTRA_OEMAKE = ""
export STAGING_INCDIR
export STAGING_LIBDIR

View File

@@ -59,7 +59,7 @@ efi_iso_populate() {
cp $iso_dir/${EFIDIR}/* ${EFIIMGDIR}${EFIDIR}
cp $iso_dir/vmlinuz ${EFIIMGDIR}
EFIPATH=$(echo "${EFIDIR}" | sed 's/\//\\/g')
echo "fs0:${EFIPATH}\\${GRUB_IMAGE}" > ${EFIIMGDIR}/startup.nsh
printf 'fs0:%s\%s\n' "$EFIPATH" "$GRUB_IMAGE" > ${EFIIMGDIR}/startup.nsh
if [ -f "$iso_dir/initrd" ] ; then
cp $iso_dir/initrd ${EFIIMGDIR}
fi

View File

@@ -1,25 +1,69 @@
# Helper class to pull in the right gtk-doc dependencies and disable
# gtk-doc.
# Helper class to pull in the right gtk-doc dependencies and configure
# gtk-doc to enable or disable documentation building (which requires the
# use of usermode qemu).
# This variable is set to True if api-documentation is in
# DISTRO_FEATURES and qemu-usermode is in MACHINE_FEATURES, and False otherwise.
#
# Long-term it would be great if this class could be toggled between
# gtk-doc-stub-native and the real gtk-doc-native, which would enable
# re-generation of documentation. For now, we'll make do with this which
# packages up any existing documentation (so from tarball builds).
# It should be used in recipes to determine whether gtk-doc based documentation should be built,
# so that qemu use can be avoided when necessary.
GTKDOC_ENABLED = "${@bb.utils.contains('DISTRO_FEATURES', 'api-documentation', \
bb.utils.contains('MACHINE_FEATURES', 'qemu-usermode', 'True', 'False', d), 'False', d)}"
EXTRA_OECONF_prepend_class-target = "${@bb.utils.contains('GTKDOC_ENABLED', 'True', '--enable-gtk-doc --enable-gtk-doc-html --disable-gtk-doc-pdf', \
'--disable-gtk-doc', d)} "
# When building native recipes, disable gtkdoc, as it is not necessary,
# pulls in additional dependencies, and makes build times longer
EXTRA_OECONF_prepend_class-native = "--disable-gtk-doc "
EXTRA_OECONF_prepend_class-nativesdk = "--disable-gtk-doc "
DEPENDS_append_class-target = " gtk-doc-native qemu-native"
# Even though gtkdoc is disabled on -native, gtk-doc package is still
# needed for m4 macros.
DEPENDS_append_class-native = " gtk-doc-native"
DEPENDS_append_class-nativesdk = " gtk-doc-native"
# The documentation directory, where the infrastructure will be copied.
# gtkdocize has a default of "." so to handle out-of-tree builds set this to $S.
GTKDOC_DOCDIR ?= "${S}"
DEPENDS_append = " gtk-doc-stub-native"
EXTRA_OECONF_append = "\
--disable-gtk-doc \
--disable-gtk-doc-html \
--disable-gtk-doc-pdf \
"
do_configure_prepend () {
( cd ${S}; gtkdocize --docdir ${GTKDOC_DOCDIR} )
( cd ${S}; gtkdocize --docdir ${GTKDOC_DOCDIR} || true )
}
inherit qemu
export STAGING_DIR_HOST
do_compile_prepend_class-target () {
# Write out a qemu wrapper that will be given to gtkdoc-scangobj so that it
# can run target helper binaries through that.
qemu_binary="${@qemu_wrapper_cmdline(d, '$STAGING_DIR_HOST', ['\$GIR_EXTRA_LIBS_PATH','$STAGING_DIR_HOST/${libdir}','$STAGING_DIR_HOST/${base_libdir}'])}"
cat > ${B}/gtkdoc-qemuwrapper << EOF
#!/bin/sh
# Use a modules directory which doesn't exist so we don't load random things
# which may then get deleted (or their dependencies) and potentially segfault
export GIO_MODULE_DIR=${STAGING_LIBDIR}/gio/modules-dummy
GIR_EXTRA_LIBS_PATH=\`find ${B} -name .libs| tr '\n' ':'\`\$GIR_EXTRA_LIBS_PATH
if test -d ".libs"; then
$qemu_binary ".libs/\$@"
else
$qemu_binary "\$@"
fi
if [ \$? -ne 0 ]; then
echo "If the above error message is about missing .so libraries, then setting up GIR_EXTRA_LIBS_PATH in the recipe should help."
echo "(typically like this: GIR_EXTRA_LIBS_PATH=\"$""{B}/something/.libs\" )"
exit 1
fi
EOF
chmod +x ${B}/gtkdoc-qemuwrapper
}
inherit pkgconfig
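# Illustration only (hypothetical recipe fragment, not part of this class):
# a recipe that inherits gtk-doc can key optional behaviour off
# GTKDOC_ENABLED, for example shipping an extra documentation package only
# when the docs were actually generated:
#   PACKAGES += "${@bb.utils.contains('GTKDOC_ENABLED', 'True', '${PN}-apidoc', '', d)}"
#   FILES_${PN}-apidoc = "${datadir}/gtk-doc"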

View File

@@ -43,7 +43,7 @@ ROOT_LIVE ?= "root=/dev/ram0"
INITRD_IMAGE_LIVE ?= "core-image-minimal-initramfs"
INITRD_LIVE ?= "${DEPLOY_DIR_IMAGE}/${INITRD_IMAGE_LIVE}-${MACHINE}.cpio.gz"
ROOTFS ?= "${DEPLOY_DIR_IMAGE}/${IMAGE_LINK_NAME}.ext4"
ROOTFS ?= "${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.ext4"
IMAGE_TYPEDEP_live = "ext4"
IMAGE_TYPEDEP_iso = "ext4"
@@ -144,14 +144,14 @@ build_iso() {
if [ "${PCBIOS}" = "1" ] && [ "${EFI}" != "1" ] ; then
# PCBIOS only media
mkisofs -V ${BOOTIMG_VOLUME_ID} \
-o ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.iso \
-o ${IMGDEPLOYDIR}/${IMAGE_NAME}.iso \
-b ${ISO_BOOTIMG} -c ${ISO_BOOTCAT} \
$mkisofs_compress_opts \
${MKISOFS_OPTIONS} $mkisofs_iso_level ${ISODIR}
else
# EFI only OR EFI+PCBIOS
mkisofs -A ${BOOTIMG_VOLUME_ID} -V ${BOOTIMG_VOLUME_ID} \
-o ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.iso \
-o ${IMGDEPLOYDIR}/${IMAGE_NAME}.iso \
-b ${ISO_BOOTIMG} -c ${ISO_BOOTCAT} \
$mkisofs_compress_opts ${MKISOFS_OPTIONS} $mkisofs_iso_level \
-eltorito-alt-boot -eltorito-platform efi \
@@ -160,7 +160,7 @@ build_iso() {
isohybrid_args="-u"
fi
isohybrid $isohybrid_args ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.iso
isohybrid $isohybrid_args ${IMGDEPLOYDIR}/${IMAGE_NAME}.iso
}
build_fat_img() {
@@ -252,13 +252,13 @@ build_hddimg() {
fi
fi
build_fat_img ${HDDDIR} ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.hddimg
build_fat_img ${HDDDIR} ${IMGDEPLOYDIR}/${IMAGE_NAME}.hddimg
if [ "${PCBIOS}" = "1" ]; then
syslinux_hddimg_install
fi
chmod 644 ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.hddimg
chmod 644 ${IMGDEPLOYDIR}/${IMAGE_NAME}.hddimg
fi
}

View File

@@ -33,14 +33,14 @@ IMAGE_TYPEDEP_hdddirect = "${VM_ROOTFS_TYPE}"
IMAGE_TYPES_MASKED += "vmdk vdi qcow2 hdddirect"
VM_ROOTFS_TYPE ?= "ext4"
ROOTFS ?= "${DEPLOY_DIR_IMAGE}/${IMAGE_LINK_NAME}.${VM_ROOTFS_TYPE}"
ROOTFS ?= "${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.${VM_ROOTFS_TYPE}"
# Used by bootloader
LABELS_VM ?= "boot"
ROOT_VM ?= "root=/dev/sda2"
# Using an initramfs is optional. Enable it by setting INITRD_IMAGE_VM.
INITRD_IMAGE_VM ?= ""
INITRD_VM ?= "${@'${DEPLOY_DIR_IMAGE}/${INITRD_IMAGE_VM}-${MACHINE}.cpio.gz' if '${INITRD_IMAGE_VM}' else ''}"
INITRD_VM ?= "${@'${IMGDEPLOYDIR}/${INITRD_IMAGE_VM}-${MACHINE}.cpio.gz' if '${INITRD_IMAGE_VM}' else ''}"
do_bootdirectdisk[depends] += "${@'${INITRD_IMAGE_VM}:do_image_complete' if '${INITRD_IMAGE_VM}' else ''}"
BOOTDD_VOLUME_ID ?= "boot"
@@ -52,7 +52,7 @@ DISK_SIGNATURE[vardepsexclude] = "DISK_SIGNATURE_GENERATED"
build_boot_dd() {
HDDDIR="${S}/hdd/boot"
HDDIMG="${S}/hdd.image"
IMAGE=${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.hdddirect
IMAGE=${IMGDEPLOYDIR}/${IMAGE_NAME}.hdddirect
populate_kernel $HDDDIR
@@ -104,9 +104,13 @@ build_boot_dd() {
dd if=$HDDIMG of=$IMAGE conv=notrunc seek=1 bs=512
dd if=${ROOTFS} of=$IMAGE conv=notrunc seek=$OFFSET bs=512
cd ${DEPLOY_DIR_IMAGE}
rm -f ${DEPLOY_DIR_IMAGE}/${IMAGE_LINK_NAME}.hdddirect
ln -s ${IMAGE_NAME}.hdddirect ${DEPLOY_DIR_IMAGE}/${IMAGE_LINK_NAME}.hdddirect
cd ${IMGDEPLOYDIR}
if [ "${RM_OLD_IMAGE}" = "1" ] && [ -L ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.hdddirect ]; then
rm -f $(readlink -f ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.hdddirect)
fi
ln -sf ${IMAGE_NAME}.hdddirect ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.hdddirect
}
python do_bootdirectdisk() {
@@ -141,8 +145,13 @@ DISK_SIGNATURE_GENERATED := "${@generate_disk_signature()}"
run_qemu_img (){
type="$1"
qemu-img convert -O $type ${DEPLOY_DIR_IMAGE}/${IMAGE_LINK_NAME}.hdddirect ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.$type
ln -sf ${IMAGE_NAME}.$type ${DEPLOY_DIR_IMAGE}/${IMAGE_LINK_NAME}.$type
qemu-img convert -O $type ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.hdddirect ${IMGDEPLOYDIR}/${IMAGE_NAME}.$type
if [ "${RM_OLD_IMAGE}" = "1" ] && [ -L ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.$type ]; then
rm -f $(readlink -f ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.$type)
fi
ln -sf ${IMAGE_NAME}.$type ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.$type
}
create_vmdk_image () {
run_qemu_img vmdk

View File

@@ -74,6 +74,8 @@ IMAGE_INSTALL[type] = "list"
export PACKAGE_INSTALL ?= "${IMAGE_INSTALL} ${ROOTFS_BOOTSTRAP_INSTALL} ${FEATURE_INSTALL}"
PACKAGE_INSTALL_ATTEMPTONLY ?= "${FEATURE_INSTALL_OPTIONAL}"
IMGDEPLOYDIR = "${WORKDIR}/deploy-${PN}-image-complete"
# Images are generally built explicitly, do not need to be part of world.
EXCLUDE_FROM_WORLD = "1"
@@ -118,7 +120,7 @@ def rootfs_variables(d):
'IMAGE_ROOTFS_MAXSIZE','IMAGE_NAME','IMAGE_LINK_NAME','IMAGE_MANIFEST','DEPLOY_DIR_IMAGE','RM_OLD_IMAGE','IMAGE_FSTYPES','IMAGE_INSTALL_COMPLEMENTARY','IMAGE_LINGUAS',
'MULTILIBRE_ALLOW_REP','MULTILIB_TEMP_ROOTFS','MULTILIB_VARIANTS','MULTILIBS','ALL_MULTILIB_PACKAGE_ARCHS','MULTILIB_GLOBAL_VARIANTS','BAD_RECOMMENDATIONS','NO_RECOMMENDATIONS',
'PACKAGE_ARCHS','PACKAGE_CLASSES','TARGET_VENDOR','TARGET_ARCH','TARGET_OS','OVERRIDES','BBEXTENDVARIANT','FEED_DEPLOYDIR_BASE_URI','INTERCEPT_DIR','USE_DEVFS',
'COMPRESSIONTYPES', 'IMAGE_GEN_DEBUGFS', 'ROOTFS_RO_UNNEEDED']
'CONVERSIONTYPES', 'IMAGE_GEN_DEBUGFS', 'ROOTFS_RO_UNNEEDED', 'IMGDEPLOYDIR']
variables.extend(rootfs_command_variables(d))
variables.extend(variable_depends(d))
return " ".join(variables)
@@ -249,7 +251,7 @@ fakeroot python do_rootfs () {
progress_reporter.finish()
}
do_rootfs[dirs] = "${TOPDIR}"
do_rootfs[cleandirs] += "${S}"
do_rootfs[cleandirs] += "${S} ${IMGDEPLOYDIR}"
do_rootfs[umask] = "022"
addtask rootfs before do_build
@@ -273,6 +275,11 @@ fakeroot python do_image_complete () {
}
do_image_complete[dirs] = "${TOPDIR}"
do_image_complete[umask] = "022"
SSTATETASKS += "do_image_complete"
SSTATE_SKIP_CREATION_task-image-complete = '1'
do_image_complete[sstate-inputdirs] = "${IMGDEPLOYDIR}"
do_image_complete[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
do_image_complete[stamp-extra-info] = "${MACHINE}"
addtask do_image_complete after do_image before do_build
# Add image-level QA/sanity checks to IMAGE_QA_COMMANDS
@@ -343,7 +350,7 @@ python setup_debugfs () {
python () {
vardeps = set()
# We allow COMPRESSIONTYPES to have duplicates. That avoids breaking
# We allow CONVERSIONTYPES to have duplicates. That avoids breaking
# derived distros when OE-core or some other layer independently adds
# the same type. There is still only one command for each type, but
# presumably the commands will do the same when the type is the same,
@@ -351,7 +358,7 @@ python () {
#
# Without de-duplication, gen_conversion_cmds() below
# would create the same compression command multiple times.
ctypes = set(d.getVar('COMPRESSIONTYPES', True).split())
ctypes = set(d.getVar('CONVERSIONTYPES', True).split())
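# Illustration only (hypothetical layer fragment, not part of this class):
# with COMPRESSIONTYPES renamed to CONVERSIONTYPES, a layer would register
# an extra conversion roughly like this, and re-adding an already known
# type stays harmless because of the set() de-duplication above:
#   CONVERSIONTYPES += "gz"
#   CONVERSION_CMD_gz = "gzip -f -9 -c ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.gz"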
old_overrides = d.getVar('OVERRIDES', 0)
def _image_base_type(type):
@@ -440,7 +447,7 @@ python () {
cmds.append("\t" + image_cmd)
else:
bb.fatal("No IMAGE_CMD defined for IMAGE_FSTYPES entry '%s' - possibly invalid type name or missing support class" % t)
cmds.append(localdata.expand("\tcd ${DEPLOY_DIR_IMAGE}"))
cmds.append(localdata.expand("\tcd ${IMGDEPLOYDIR}"))
# Since a copy of IMAGE_CMD_xxx will be inlined within do_image_xxx,
# prevent a redundant copy of IMAGE_CMD_xxx being emitted as a function.
@@ -456,9 +463,10 @@ python () {
# Create input image first.
gen_conversion_cmds(type)
localdata.setVar('type', type)
cmd = "\t" + localdata.getVar("COMPRESS_CMD_" + ctype, True)
cmd = "\t" + (localdata.getVar("CONVERSION_CMD_" + ctype, True) or localdata.getVar("COMPRESS_CMD_" + ctype, True))
if cmd not in cmds:
cmds.append(cmd)
vardeps.add('CONVERSION_CMD_' + ctype)
vardeps.add('COMPRESS_CMD_' + ctype)
subimage = type + "." + ctype
if subimage not in subimages:
@@ -557,7 +565,7 @@ python set_image_size () {
#
python create_symlinks() {
deploy_dir = d.getVar('DEPLOY_DIR_IMAGE', True)
deploy_dir = d.getVar('IMGDEPLOYDIR', True)
img_name = d.getVar('IMAGE_NAME', True)
link_name = d.getVar('IMAGE_LINK_NAME', True)
manifest_name = d.getVar('IMAGE_MANIFEST', True)

View File

@@ -29,6 +29,7 @@ def imagetypes_getdepends(d):
for typedepends in (d.getVar("IMAGE_TYPEDEP_%s" % basetype, True) or "").split():
adddep(d.getVar('IMAGE_DEPENDS_%s' % typedepends, True) , deps)
for ctype in resttypes:
adddep(d.getVar("CONVERSION_DEPENDS_%s" % ctype, True), deps)
adddep(d.getVar("COMPRESS_DEPENDS_%s" % ctype, True), deps)
# Sort the set so that ordering is consistant
@@ -41,9 +42,9 @@ XZ_THREADS ?= "-T 0"
ZIP_COMPRESSION_LEVEL ?= "-9"
JFFS2_SUM_EXTRA_ARGS ?= ""
IMAGE_CMD_jffs2 = "mkfs.jffs2 --root=${IMAGE_ROOTFS} --faketime --output=${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.jffs2 ${EXTRA_IMAGECMD}"
IMAGE_CMD_jffs2 = "mkfs.jffs2 --root=${IMAGE_ROOTFS} --faketime --output=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.jffs2 ${EXTRA_IMAGECMD}"
IMAGE_CMD_cramfs = "mkfs.cramfs ${IMAGE_ROOTFS} ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.cramfs ${EXTRA_IMAGECMD}"
IMAGE_CMD_cramfs = "mkfs.cramfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.cramfs ${EXTRA_IMAGECMD}"
oe_mkext234fs () {
fstype=$1
@@ -63,8 +64,8 @@ oe_mkext234fs () {
eval COUNT=\"$MIN_COUNT\"
fi
# Create a sparse image block
dd if=/dev/zero of=${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype seek=$ROOTFS_SIZE count=$COUNT bs=1024
mkfs.$fstype -F $extra_imagecmd ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype -d ${IMAGE_ROOTFS}
dd if=/dev/zero of=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype seek=$ROOTFS_SIZE count=$COUNT bs=1024
mkfs.$fstype -F $extra_imagecmd ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype -d ${IMAGE_ROOTFS}
}
IMAGE_CMD_ext2 = "oe_mkext234fs ext2 ${EXTRA_IMAGECMD}"
@@ -74,16 +75,16 @@ IMAGE_CMD_ext4 = "oe_mkext234fs ext4 ${EXTRA_IMAGECMD}"
MIN_BTRFS_SIZE ?= "16384"
IMAGE_CMD_btrfs () {
if [ ${ROOTFS_SIZE} -gt ${MIN_BTRFS_SIZE} ]; then
dd if=/dev/zero of=${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.btrfs count=${ROOTFS_SIZE} bs=1024
mkfs.btrfs ${EXTRA_IMAGECMD} -r ${IMAGE_ROOTFS} ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.btrfs
dd if=/dev/zero of=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.btrfs count=${ROOTFS_SIZE} bs=1024
mkfs.btrfs ${EXTRA_IMAGECMD} -r ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.btrfs
else
bbfatal "Rootfs is too small for BTRFS (Rootfs Actual Size: ${ROOTFS_SIZE}, BTRFS Minimum Size: ${MIN_BTRFS_SIZE})"
fi
}
IMAGE_CMD_squashfs = "mksquashfs ${IMAGE_ROOTFS} ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs ${EXTRA_IMAGECMD} -noappend"
IMAGE_CMD_squashfs-xz = "mksquashfs ${IMAGE_ROOTFS} ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-xz ${EXTRA_IMAGECMD} -noappend -comp xz"
IMAGE_CMD_squashfs-lzo = "mksquashfs ${IMAGE_ROOTFS} ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-lzo ${EXTRA_IMAGECMD} -noappend -comp lzo"
IMAGE_CMD_squashfs = "mksquashfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs ${EXTRA_IMAGECMD} -noappend"
IMAGE_CMD_squashfs-xz = "mksquashfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-xz ${EXTRA_IMAGECMD} -noappend -comp xz"
IMAGE_CMD_squashfs-lzo = "mksquashfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-lzo ${EXTRA_IMAGECMD} -noappend -comp lzo"
# By default, tar from the host is used, which can be quite old. If
# you need special parameters (like --xattrs) which are only supported
@@ -96,11 +97,11 @@ IMAGE_CMD_squashfs-lzo = "mksquashfs ${IMAGE_ROOTFS} ${DEPLOY_DIR_IMAGE}/${IMAGE
# In practice, it turned out to be not needed when creating archives and
# required when extracting, but it seems prudent to use it in both cases.
IMAGE_CMD_TAR ?= "tar"
IMAGE_CMD_tar = "${IMAGE_CMD_TAR} -cvf ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.tar -C ${IMAGE_ROOTFS} ."
IMAGE_CMD_tar = "${IMAGE_CMD_TAR} -cvf ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.tar -C ${IMAGE_ROOTFS} ."
do_image_cpio[cleandirs] += "${WORKDIR}/cpio_append"
IMAGE_CMD_cpio () {
(cd ${IMAGE_ROOTFS} && find . | cpio -o -H newc >${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.cpio)
(cd ${IMAGE_ROOTFS} && find . | cpio -o -H newc >${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.cpio)
# We only need the /init symlink if we're building the real
# image. The -dbg image doesn't need it! By being clever
# about this we also avoid 'touch' below failing, as it
@@ -113,7 +114,7 @@ IMAGE_CMD_cpio () {
else
touch ${WORKDIR}/cpio_append/init
fi
(cd ${WORKDIR}/cpio_append && echo ./init | cpio -oA -H newc -F ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.cpio)
(cd ${WORKDIR}/cpio_append && echo ./init | cpio -oA -H newc -F ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.cpio)
fi
fi
}
@@ -122,8 +123,8 @@ ELF_KERNEL ?= "${DEPLOY_DIR_IMAGE}/${KERNEL_IMAGETYPE}"
ELF_APPEND ?= "ramdisk_size=32768 root=/dev/ram0 rw console="
IMAGE_CMD_elf () {
test -f ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.elf && rm -f ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.elf
mkelfImage --kernel=${ELF_KERNEL} --initrd=${DEPLOY_DIR_IMAGE}/${IMAGE_LINK_NAME}.cpio.gz --output=${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.elf --append='${ELF_APPEND}' ${EXTRA_IMAGECMD}
test -f ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.elf && rm -f ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.elf
mkelfImage --kernel=${ELF_KERNEL} --initrd=${DEPLOY_DIR_IMAGE}/${IMAGE_LINK_NAME}.cpio.gz --output=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.elf --append='${ELF_APPEND}' ${EXTRA_IMAGECMD}
}
IMAGE_TYPEDEP_elf = "cpio.gz"
@@ -141,20 +142,20 @@ multiubi_mkfs() {
echo \[ubifs\] > ubinize${vname}-${IMAGE_NAME}.cfg
echo mode=ubi >> ubinize${vname}-${IMAGE_NAME}.cfg
echo image=${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubifs >> ubinize${vname}-${IMAGE_NAME}.cfg
echo image=${IMGDEPLOYDIR}/${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubifs >> ubinize${vname}-${IMAGE_NAME}.cfg
echo vol_id=0 >> ubinize${vname}-${IMAGE_NAME}.cfg
echo vol_type=dynamic >> ubinize${vname}-${IMAGE_NAME}.cfg
echo vol_name=${UBI_VOLNAME} >> ubinize${vname}-${IMAGE_NAME}.cfg
echo vol_flags=autoresize >> ubinize${vname}-${IMAGE_NAME}.cfg
mkfs.ubifs -r ${IMAGE_ROOTFS} -o ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubifs ${mkubifs_args}
ubinize -o ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubi ${ubinize_args} ubinize${vname}-${IMAGE_NAME}.cfg
mkfs.ubifs -r ${IMAGE_ROOTFS} -o ${IMGDEPLOYDIR}/${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubifs ${mkubifs_args}
ubinize -o ${IMGDEPLOYDIR}/${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubi ${ubinize_args} ubinize${vname}-${IMAGE_NAME}.cfg
# Cleanup cfg file
mv ubinize${vname}-${IMAGE_NAME}.cfg ${DEPLOY_DIR_IMAGE}/
mv ubinize${vname}-${IMAGE_NAME}.cfg ${IMGDEPLOYDIR}/
# Create own symlinks for 'named' volumes
if [ -n "$vname" ]; then
cd ${DEPLOY_DIR_IMAGE}
cd ${IMGDEPLOYDIR}
if [ -e ${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubifs ]; then
ln -sf ${IMAGE_NAME}${vname}${IMAGE_NAME_SUFFIX}.ubifs \
${IMAGE_LINK_NAME}${vname}.ubifs
@@ -181,7 +182,7 @@ IMAGE_CMD_ubi () {
multiubi_mkfs "${MKUBIFS_ARGS}" "${UBINIZE_ARGS}"
}
IMAGE_CMD_ubifs = "mkfs.ubifs -r ${IMAGE_ROOTFS} -o ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.ubifs ${MKUBIFS_ARGS}"
IMAGE_CMD_ubifs = "mkfs.ubifs -r ${IMAGE_ROOTFS} -o ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.ubifs ${MKUBIFS_ARGS}"
WKS_FILE ?= "${IMAGE_BASENAME}.${MACHINE}.wks"
WKS_FILES ?= "${WKS_FILE} ${IMAGE_BASENAME}.wks"
@@ -201,7 +202,7 @@ def wks_search(files, search_path):
WIC_CREATE_EXTRA_ARGS ?= ""
IMAGE_CMD_wic () {
out="${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}"
out="${IMGDEPLOYDIR}/${IMAGE_NAME}"
wks="${WKS_FULL_PATH}"
if [ -z "$wks" ]; then
bbfatal "No kickstart files from WKS_FILES were found: ${WKS_FILES}. Please set WKS_FILE or WKS_FILES appropriately."
@@ -214,7 +215,7 @@ IMAGE_CMD_wic () {
IMAGE_CMD_wic[vardepsexclude] = "WKS_FULL_PATH WKS_FILES"
# Rebuild when the wks file or vars in WICVARS change
USING_WIC = "${@bb.utils.contains_any('IMAGE_FSTYPES', 'wic ' + ' '.join('wic.%s' % c for c in '${COMPRESSIONTYPES}'.split()), '1', '', d)}"
USING_WIC = "${@bb.utils.contains_any('IMAGE_FSTYPES', 'wic ' + ' '.join('wic.%s' % c for c in '${CONVERSIONTYPES}'.split()), '1', '', d)}"
WKS_FILE_CHECKSUM = "${@'${WKS_FULL_PATH}:%s' % os.path.exists('${WKS_FULL_PATH}') if '${USING_WIC}' else ''}"
do_image_wic[file-checksums] += "${WKS_FILE_CHECKSUM}"
@@ -316,29 +317,35 @@ IMAGE_TYPES = " \
wic wic.gz wic.bz2 wic.lzma \
"
COMPRESSIONTYPES = "gz bz2 lzma xz lz4 zip sum md5sum sha1sum sha224sum sha256sum sha384sum sha512sum bmap"
COMPRESS_CMD_lzma = "lzma -k -f -7 ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
COMPRESS_CMD_gz = "gzip -f -9 -c ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.gz"
COMPRESS_CMD_bz2 = "pbzip2 -f -k ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
COMPRESS_CMD_xz = "xz -f -k -c ${XZ_COMPRESSION_LEVEL} ${XZ_THREADS} --check=${XZ_INTEGRITY_CHECK} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.xz"
COMPRESS_CMD_lz4 = "lz4c -9 -c ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.lz4"
COMPRESS_CMD_zip = "zip ${ZIP_COMPRESSION_LEVEL} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.zip ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
COMPRESS_CMD_sum = "sumtool -i ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} -o ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sum ${JFFS2_SUM_EXTRA_ARGS}"
COMPRESS_CMD_md5sum = "md5sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.md5sum"
COMPRESS_CMD_sha1sum = "sha1sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha1sum"
COMPRESS_CMD_sha224sum = "sha224sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha224sum"
COMPRESS_CMD_sha256sum = "sha256sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha256sum"
COMPRESS_CMD_sha384sum = "sha384sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha384sum"
COMPRESS_CMD_sha512sum = "sha512sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha512sum"
COMPRESS_CMD_bmap = "bmaptool create ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} -o ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.bmap"
COMPRESS_DEPENDS_lzma = "xz-native"
COMPRESS_DEPENDS_gz = ""
COMPRESS_DEPENDS_bz2 = "pbzip2-native"
COMPRESS_DEPENDS_xz = "xz-native"
COMPRESS_DEPENDS_lz4 = "lz4-native"
COMPRESS_DEPENDS_zip = "zip-native"
COMPRESS_DEPENDS_sum = "mtd-utils-native"
COMPRESS_DEPENDS_bmap = "bmap-tools-native"
# Compression is a special case of conversion. The old variable
# names are still supported for backward-compatibility. When defining
# new compression or conversion commands, use CONVERSIONTYPES and
# CONVERSION_CMD/DEPENDS.
COMPRESSIONTYPES ?= ""
CONVERSIONTYPES = "gz bz2 lzma xz lz4 zip sum md5sum sha1sum sha224sum sha256sum sha384sum sha512sum bmap ${COMPRESSIONTYPES}"
CONVERSION_CMD_lzma = "lzma -k -f -7 ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
CONVERSION_CMD_gz = "gzip -f -9 -c ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.gz"
CONVERSION_CMD_bz2 = "pbzip2 -f -k ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
CONVERSION_CMD_xz = "xz -f -k -c ${XZ_COMPRESSION_LEVEL} ${XZ_THREADS} --check=${XZ_INTEGRITY_CHECK} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.xz"
CONVERSION_CMD_lz4 = "lz4c -9 -c ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.lz4"
CONVERSION_CMD_zip = "zip ${ZIP_COMPRESSION_LEVEL} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.zip ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
CONVERSION_CMD_sum = "sumtool -i ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} -o ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sum ${JFFS2_SUM_EXTRA_ARGS}"
CONVERSION_CMD_md5sum = "md5sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.md5sum"
CONVERSION_CMD_sha1sum = "sha1sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha1sum"
CONVERSION_CMD_sha224sum = "sha224sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha224sum"
CONVERSION_CMD_sha256sum = "sha256sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha256sum"
CONVERSION_CMD_sha384sum = "sha384sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha384sum"
CONVERSION_CMD_sha512sum = "sha512sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha512sum"
CONVERSION_CMD_bmap = "bmaptool create ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} -o ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.bmap"
CONVERSION_DEPENDS_lzma = "xz-native"
CONVERSION_DEPENDS_gz = ""
CONVERSION_DEPENDS_bz2 = "pbzip2-native"
CONVERSION_DEPENDS_xz = "xz-native"
CONVERSION_DEPENDS_lz4 = "lz4-native"
CONVERSION_DEPENDS_zip = "zip-native"
CONVERSION_DEPENDS_sum = "mtd-utils-native"
CONVERSION_DEPENDS_bmap = "bmap-tools-native"
RUNNABLE_IMAGE_TYPES ?= "ext2 ext3 ext4"
RUNNABLE_MACHINE_PATTERNS ?= "qemu"
@@ -354,4 +361,4 @@ IMAGE_TYPES_MASKED ?= ""
# The WICVARS variable is used to define list of bitbake variables used in wic code
# variables from this list is written to <image>.env file
WICVARS ?= "BBLAYERS DEPLOY_DIR_IMAGE HDDDIR IMAGE_BASENAME IMAGE_BOOT_FILES IMAGE_LINK_NAME IMAGE_ROOTFS INITRAMFS_FSTYPES INITRD ISODIR MACHINE_ARCH ROOTFS_SIZE STAGING_DATADIR STAGING_DIR_NATIVE STAGING_LIBDIR TARGET_SYS"
WICVARS ?= "BBLAYERS IMGDEPLOYDIR DEPLOY_DIR_IMAGE HDDDIR IMAGE_BASENAME IMAGE_BOOT_FILES IMAGE_LINK_NAME IMAGE_ROOTFS INITRAMFS_FSTYPES INITRD ISODIR MACHINE_ARCH ROOTFS_SIZE STAGING_DATADIR STAGING_DIR_NATIVE STAGING_LIBDIR TARGET_SYS"

View File

@@ -2,25 +2,25 @@ inherit image_types kernel-arch
oe_mkimage () {
mkimage -A ${UBOOT_ARCH} -O linux -T ramdisk -C $2 -n ${IMAGE_NAME} \
-d ${DEPLOY_DIR_IMAGE}/$1 ${DEPLOY_DIR_IMAGE}/$1.u-boot
-d ${IMGDEPLOYDIR}/$1 ${IMGDEPLOYDIR}/$1.u-boot
if [ x$3 = x"clean" ]; then
rm $1
fi
}
COMPRESSIONTYPES += "gz.u-boot bz2.u-boot lzma.u-boot u-boot"
CONVERSIONTYPES += "gz.u-boot bz2.u-boot lzma.u-boot u-boot"
COMPRESS_DEPENDS_u-boot = "u-boot-mkimage-native"
COMPRESS_CMD_u-boot = "oe_mkimage ${IMAGE_NAME}.rootfs.${type} none"
CONVERSION_DEPENDS_u-boot = "u-boot-mkimage-native"
CONVERSION_CMD_u-boot = "oe_mkimage ${IMAGE_NAME}.rootfs.${type} none"
COMPRESS_DEPENDS_gz.u-boot = "u-boot-mkimage-native"
COMPRESS_CMD_gz.u-boot = "${COMPRESS_CMD_gz}; oe_mkimage ${IMAGE_NAME}.rootfs.${type}.gz gzip clean"
CONVERSION_DEPENDS_gz.u-boot = "u-boot-mkimage-native"
CONVERSION_CMD_gz.u-boot = "${CONVERSION_CMD_gz}; oe_mkimage ${IMAGE_NAME}.rootfs.${type}.gz gzip clean"
COMPRESS_DEPENDS_bz2.u-boot = "u-boot-mkimage-native"
COMPRESS_CMD_bz2.u-boot = "${COMPRESS_CMD_bz2}; oe_mkimage ${IMAGE_NAME}.rootfs.${type}.bz2 bzip2 clean"
CONVERSION_DEPENDS_bz2.u-boot = "u-boot-mkimage-native"
CONVERSION_CMD_bz2.u-boot = "${CONVERSION_CMD_bz2}; oe_mkimage ${IMAGE_NAME}.rootfs.${type}.bz2 bzip2 clean"
COMPRESS_DEPENDS_lzma.u-boot = "u-boot-mkimage-native"
COMPRESS_CMD_lzma.u-boot = "${COMPRESS_CMD_lzma}; oe_mkimage ${IMAGE_NAME}.rootfs.${type}.lzma lzma clean"
CONVERSION_DEPENDS_lzma.u-boot = "u-boot-mkimage-native"
CONVERSION_CMD_lzma.u-boot = "${CONVERSION_CMD_lzma}; oe_mkimage ${IMAGE_NAME}.rootfs.${type}.lzma lzma clean"
IMAGE_TYPES += "ext2.u-boot ext2.gz.u-boot ext2.bz2.u-boot ext2.lzma.u-boot ext3.gz.u-boot ext4.gz.u-boot cpio.gz.u-boot"

View File

@@ -179,9 +179,14 @@ def package_qa_get_machine_dict(d):
return machdata
def package_qa_clean_path(path,d):
""" Remove the common prefix from the path. In this case it is the TMPDIR"""
return path.replace(d.getVar("TMPDIR", True) + "/", "")
def package_qa_clean_path(path, d, pkg=None):
"""
Remove redundant paths from the path for display. If pkg isn't set then
TMPDIR is stripped, otherwise PKGDEST/pkg is stripped.
"""
if pkg:
path = path.replace(os.path.join(d.getVar("PKGDEST", True), pkg), "/")
return path.replace(d.getVar("TMPDIR", True), "/").replace("//", "/")
def package_qa_write_error(type, error, d):
logfile = d.getVar('QA_LOGFILE', True)
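The reworked package_qa_clean_path above strips PKGDEST/<pkg> when a package is given and TMPDIR otherwise, then collapses the leftover double slash. A standalone sketch with hypothetical paths (the real helper reads TMPDIR and PKGDEST from the datastore):

import os

def clean_path(path, tmpdir, pkgdest=None, pkg=None):
    # Mirror of the logic above, with explicit arguments instead of d.getVar().
    if pkg and pkgdest:
        path = path.replace(os.path.join(pkgdest, pkg), "/")
    return path.replace(tmpdir, "/").replace("//", "/")

print(clean_path("/tmp/build/pkgdest/foo/usr/lib/libfoo.so",
                 tmpdir="/tmp/build", pkgdest="/tmp/build/pkgdest", pkg="foo"))
# -> /usr/lib/libfoo.so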

View File

@@ -36,12 +36,6 @@ python split_kernel_module_packages () {
import re
modinfoexp = re.compile("([^=]+)=(.*)")
kerverrexp = re.compile('^(.*-hh.*)[\.\+].*$')
depmodpat0 = re.compile("^(.*\.k?o):..*$")
depmodpat1 = re.compile("^(.*\.k?o):\s*(.*\.k?o)\s*$")
depmodpat2 = re.compile("^(.*\.k?o):\s*(.*\.k?o)\s*\\\$")
depmodpat3 = re.compile("^\t(.*\.k?o)\s*\\\$")
depmodpat4 = re.compile("^\t(.*\.k?o)\s*$")
def extract_modinfo(file):
import tempfile, subprocess
@@ -63,68 +57,6 @@ python split_kernel_module_packages () {
vals[m.group(1)] = m.group(2)
return vals
def parse_depmod():
dvar = d.getVar('PKGD', True)
kernelver = d.getVar('KERNEL_VERSION', True)
kernelver_stripped = kernelver
m = kerverrexp.match(kernelver)
if m:
kernelver_stripped = m.group(1)
staging_kernel_dir = d.getVar("STAGING_KERNEL_BUILDDIR", True)
system_map_file = "%s/boot/System.map-%s" % (dvar, kernelver)
if not os.path.exists(system_map_file):
system_map_file = "%s/System.map-%s" % (staging_kernel_dir, kernelver)
if not os.path.exists(system_map_file):
bb.fatal("System.map-%s does not exist in '%s/boot' nor STAGING_KERNEL_BUILDDIR '%s'" % (kernelver, dvar, staging_kernel_dir))
cmd = "depmod -n -a -b %s -F %s %s" % (dvar, system_map_file, kernelver_stripped)
f = os.popen(cmd, 'r')
deps = {}
line = f.readline()
while line:
if not depmodpat0.match(line):
line = f.readline()
continue
m1 = depmodpat1.match(line)
if m1:
deps[m1.group(1)] = m1.group(2).split()
else:
m2 = depmodpat2.match(line)
if m2:
deps[m2.group(1)] = m2.group(2).split()
line = f.readline()
m3 = depmodpat3.match(line)
while m3:
deps[m2.group(1)].extend(m3.group(1).split())
line = f.readline()
m3 = depmodpat3.match(line)
m4 = depmodpat4.match(line)
deps[m2.group(1)].extend(m4.group(1).split())
line = f.readline()
f.close()
return deps
def get_dependencies(file, pattern, format):
# file no longer includes PKGD
file = file.replace(d.getVar('PKGD', True) or '', '', 1)
# instead is prefixed with /lib/modules/${KERNEL_VERSION}
file = file.replace("/lib/modules/%s/" % d.getVar('KERNEL_VERSION', True) or '', '', 1)
if file in module_deps:
dependencies = []
for i in module_deps[file]:
m = re.match(pattern, os.path.basename(i))
if not m:
continue
on = legitimize_package_name(m.group(1))
dependency_pkg = format % on
dependencies.append(dependency_pkg)
return dependencies
return []
def frob_metadata(file, pkg, pattern, format, basename):
vals = extract_modinfo(file)
@@ -173,7 +105,13 @@ python split_kernel_module_packages () {
d.setVar('DESCRIPTION_' + pkg, old_desc + "; " + vals["description"])
rdepends = bb.utils.explode_dep_versions2(d.getVar('RDEPENDS_' + pkg, True) or "")
for dep in get_dependencies(file, pattern, format):
modinfo_deps = []
if "depends" in vals and vals["depends"] != "":
for dep in vals["depends"].split(","):
on = legitimize_package_name(dep)
dependency_pkg = format % on
modinfo_deps.append(dependency_pkg)
for dep in modinfo_deps:
if not dep in rdepends:
rdepends[dep] = []
d.setVar('RDEPENDS_' + pkg, bb.utils.join_deps(rdepends, commasep=False))
@@ -181,7 +119,6 @@ python split_kernel_module_packages () {
# Avoid automatic -dev recommendations for modules ending with -dev.
d.setVarFlag('RRECOMMENDS_' + pkg, 'nodeprrecs', 1)
module_deps = parse_depmod()
module_regex = '^(.*)\.k?o$'
module_pattern_prefix = d.getVar('KERNEL_MODULE_PACKAGE_PREFIX', True)
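With the depmod-based scan removed, inter-module RDEPENDS now come from the "depends=" field that modinfo reports for each module (see the frob_metadata hunk above). A standalone sketch of that mapping; legitimize_package_name is simplified here, the real OE helper does more normalisation:

def legitimize_package_name(name):
    # simplified stand-in for the real helper
    return name.lower().replace("_", "-")

def modinfo_rdepends(depends_field, fmt="kernel-module-%s"):
    deps = []
    if depends_field:
        for dep in depends_field.split(","):
            deps.append(fmt % legitimize_package_name(dep.strip()))
    return deps

print(modinfo_rdepends("usbcore,mii"))   # ['kernel-module-usbcore', 'kernel-module-mii']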

View File

@@ -119,77 +119,53 @@ do_kernel_metadata() {
patches="${@" ".join(find_patches(d))}"
feat_dirs="${@" ".join(find_kernel_feature_dirs(d))}"
# add any explicitly referenced features onto the end of the feature
# list that is passed to the kernel build scripts.
if [ -n "${KERNEL_FEATURES}" ]; then
for feat in ${KERNEL_FEATURES}; do
addon_features="$addon_features --feature $feat"
done
fi
# check for feature directories/repos/branches that were part of the
# SRC_URI. If they were supplied, we convert them into include directives
# for the update part of the process
if [ -n "${feat_dirs}" ]; then
for f in ${feat_dirs}; do
for f in ${feat_dirs}; do
if [ -d "${WORKDIR}/$f/meta" ]; then
includes="$includes -I${WORKDIR}/$f/meta"
elif [ -d "${WORKDIR}/$f" ]; then
includes="$includes -I${WORKDIR}/$f"
includes="$includes -I${WORKDIR}/$f/kernel-meta"
elif [ -d "${WORKDIR}/$f" ]; then
includes="$includes -I${WORKDIR}/$f"
fi
done
done
for s in ${sccs} ${patches}; do
sdir=$(dirname $s)
includes="$includes -I${sdir}"
# if a SRC_URI passed patch or .scc has a subdir of "kernel-meta",
# then we add it to the search path
if [ -d "${sdir}/kernel-meta" ]; then
includes="$includes -I${sdir}/kernel-meta"
fi
done
# expand kernel features into their full path equivalents
bsp_definition=$(spp ${includes} --find -DKMACHINE=${KMACHINE} -DKTYPE=${LINUX_KERNEL_TYPE})
meta_dir=$(kgit --meta)
# run1: pull all the configuration fragments, no matter where they come from
elements="`echo -n ${bsp_definition} ${sccs} ${patches} ${KERNEL_FEATURES}`"
if [ -n "${elements}" ]; then
scc --force -o ${S}/${meta_dir}:cfg,meta ${includes} ${bsp_definition} ${sccs} ${patches} ${KERNEL_FEATURES}
fi
# updates or generates the target description
updateme ${updateme_flags} -DKDESC=${KMACHINE}:${LINUX_KERNEL_TYPE} \
${includes} ${addon_features} ${ARCH} ${KMACHINE} ${sccs} ${patches}
if [ $? -ne 0 ]; then
bbfatal_log "Could not update ${machine_branch}"
# run2: only generate patches for elements that have been passed on the SRC_URI
elements="`echo -n ${sccs} ${patches} ${KERNEL_FEATURES}`"
if [ -n "${elements}" ]; then
scc --force -o ${S}/${meta_dir}:patch --cmds patch ${includes} ${sccs} ${patches} ${KERNEL_FEATURES}
fi
}
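Part of the reworked do_kernel_metadata above adds the directory of every SRC_URI patch or .scc file (plus a kernel-meta/ subdirectory, when one exists) to the scc search path. A rough Python rendering of that loop, using an illustrative helper name:

import os

def scc_include_dirs(srcs):
    # srcs: paths of SRC_URI patches and .scc files
    includes = []
    for s in srcs:
        sdir = os.path.dirname(s)
        includes.append("-I" + sdir)
        if os.path.isdir(os.path.join(sdir, "kernel-meta")):
            includes.append("-I" + os.path.join(sdir, "kernel-meta"))
    return includes

print(scc_include_dirs(["/work/patches/fix-foo.patch"]))   # ['-I/work/patches'] plus the kernel-meta dir if present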
do_patch() {
cd ${S}
# executes and modifies the source tree as required
patchme ${KMACHINE}
if [ $? -ne 0 ]; then
bberror "Could not apply patches for ${KMACHINE}."
bbfatal_log "Patch failures can be resolved in the linux source directory ${S})"
fi
# check to see if the specified SRCREV is reachable from the final branch.
# if it wasn't something wrong has happened, and we should error.
machine_srcrev="${SRCREV_machine}"
if [ -z "${machine_srcrev}" ]; then
# fallback to SRCREV if a non machine_meta tree is being built
machine_srcrev="${SRCREV}"
# if SRCREV cannot be reached something is wrong.
if [ -z "${machine_srcrev}" ]; then
bbfatal "Neither SRCREV_machine or SRCREV was specified!"
fi
fi
if [ -n "${KMETA_AUDIT}" ]; then
current_branch=`git rev-parse --abbrev-ref HEAD`
machine_branch="${@ get_machine_branch(d, "${KBRANCH}" )}"
if [ "${current_branch}" != "${machine_branch}" ]; then
bbwarn "After meta data application, the kernel tree branch is ${current_branch}."
bbwarn "The SRC_URI specified branch ${machine_branch}."
bbwarn ""
bbwarn "The branch will be forced to ${machine_branch}, but this means the board meta data"
bbwarn "(.scc files) do not match the SRC_URI specification."
bbwarn ""
bbwarn "The meta data and branch ${machine_branch} should be inspected to ensure the proper"
bbwarn "kernel is being built."
git checkout -f ${machine_branch}
fi
fi
if [ "${machine_srcrev}" != "AUTOINC" ]; then
if ! [ "$(git rev-parse --verify ${machine_srcrev}~0)" = "$(git merge-base ${machine_srcrev} HEAD)" ]; then
bberror "SRCREV ${machine_srcrev} was specified, but is not reachable"
bbfatal "Check the BSP description for incorrect branch selection, or other errors."
meta_dir=$(kgit --meta)
(cd ${meta_dir}; ln -sf patch.queue series)
if [ -f "${meta_dir}/series" ]; then
kgit-s2q --gen -v --patches .kernel-meta/
if [ $? -ne 0 ]; then
bberror "Could not apply patches for ${KMACHINE}."
bbfatal_log "Patch failures can be resolved in the linux source directory ${S})"
fi
fi
}
@@ -258,26 +234,37 @@ do_kernel_metadata[depends] = "kern-tools-native:do_populate_sysroot"
do_kernel_configme[dirs] += "${S} ${B}"
do_kernel_configme() {
bbnote "kernel configme"
export KMETA=${KMETA}
set +e
if [ -n "${KCONFIG_MODE}" ]; then
configmeflags=${KCONFIG_MODE}
else
# If a defconfig was passed, use =n as the baseline, which is achieved
# via --allnoconfig
# translate the kconfig_mode into something that merge_config.sh
# understands
case ${KCONFIG_MODE} in
*allnoconfig)
config_flags="-n"
;;
*alldefconfig)
config_flags=""
;;
*)
if [ -f ${WORKDIR}/defconfig ]; then
configmeflags="--allnoconfig"
config_flags="-n"
fi
fi
;;
esac
cd ${S}
PATH=${PATH}:${S}/scripts/util
configme ${configmeflags} --reconfig --output ${B} ${LINUX_KERNEL_TYPE} ${KMACHINE}
meta_dir=$(kgit --meta)
configs="$(scc --configs -o ${meta_dir})"
if [ -z "${configs}" ]; then
bbfatal_log "Could not find configuration queue (${meta_dir}/config.queue)"
fi
CFLAGS="${CFLAGS} ${TOOLCHAIN_OPTIONS}" ARCH=${ARCH} merge_config.sh -O ${B} ${config_flags} ${configs} > ${meta_dir}/cfg/merge_config_build.log 2>&1
if [ $? -ne 0 ]; then
bbfatal_log "Could not configure ${KMACHINE}-${LINUX_KERNEL_TYPE}"
fi
echo "# Global settings from linux recipe" >> ${B}/.config
echo "CONFIG_LOCALVERSION="\"${LINUX_VERSION_EXTENSION}\" >> ${B}/.config
}
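The new do_kernel_configme above translates KCONFIG_MODE into the flag that merge_config.sh understands: "-n" selects an allnoconfig baseline, an empty flag selects alldefconfig, and a defconfig supplied via SRC_URI also implies the allnoconfig baseline. A rough rendering of that case statement:

def merge_config_flags(kconfig_mode, have_defconfig=False):
    # illustrative helper mirroring the shell case statement above
    if kconfig_mode.endswith("allnoconfig"):
        return "-n"
    if kconfig_mode.endswith("alldefconfig"):
        return ""
    return "-n" if have_defconfig else ""

print(merge_config_flags("--allnoconfig"))          # -n
print(merge_config_flags("", have_defconfig=True))  # -n (defconfig passed in SRC_URI)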
@@ -295,36 +282,23 @@ python do_kernel_configcheck() {
kmeta = "." + kmeta
pathprefix = "export PATH=%s:%s; " % (d.getVar('PATH', True), "${S}/scripts/util/")
cmd = d.expand("cd ${S}; kconf_check -config %s/meta-series ${S} ${B}" % kmeta)
cmd = d.expand("scc --configs -o ${S}/.kernel-meta")
ret, configs = oe.utils.getstatusoutput("%s%s" % (pathprefix, cmd))
cmd = d.expand("cd ${S}; kconf_check --report -o ${S}/%s/cfg/ ${B}/.config ${S} %s" % (kmeta,configs))
ret, result = oe.utils.getstatusoutput("%s%s" % (pathprefix, cmd))
config_check_visibility = int(d.getVar( "KCONF_AUDIT_LEVEL", True ) or 0)
bsp_check_visibility = int(d.getVar( "KCONF_BSP_AUDIT_LEVEL", True ) or 0)
# if config check visibility is non-zero, report dropped configuration values
mismatch_file = "${S}/" + kmeta + "/" + "mismatch.cfg"
mismatch_file = d.expand("${S}/%s/cfg/mismatch.txt" % kmeta)
if os.path.exists(mismatch_file):
if config_check_visibility:
with open (mismatch_file, "r") as myfile:
results = myfile.read()
bb.warn( "[kernel config]: specified values did not make it into the kernel's final configuration:\n\n%s" % results)
# if config check visibility is level 2 or higher, report non-hardware options
nonhw_file = "${S}/" + kmeta + "/" + "nonhw_report.cfg"
if os.path.exists(nonhw_file):
if config_check_visibility > 1:
with open (nonhw_file, "r") as myfile:
results = myfile.read()
bb.warn( "[kernel config]: BSP specified non-hw configuration:\n\n%s" % results)
bsp_desc = "${S}/" + kmeta + "/" + "top_tgt"
if os.path.exists(bsp_desc) and bsp_check_visibility > 1:
with open (bsp_desc, "r") as myfile:
bsp_tgt = myfile.read()
m = re.match("^(.*)scratch.obj(.*)$", bsp_tgt)
if not m is None:
bb.warn( "[kernel]: An auto generated BSP description was used, this normally indicates a misconfiguration.\n" +
"Check that your machine (%s) has an associated kernel description." % "${MACHINE}" )
}
# Ensure that the branches (BSP and meta) are on the locations specified by

View File

@@ -156,10 +156,6 @@ UBOOT_LOADADDRESS ?= "${UBOOT_ENTRYPOINT}"
# Some Linux kernel configurations need additional parameters on the command line
KERNEL_EXTRA_ARGS ?= ""
# For the kernel, we don't want the '-e MAKEFLAGS=' in EXTRA_OEMAKE.
# We don't want to override kernel Makefile variables from the environment
EXTRA_OEMAKE = ""
KERNEL_ALT_IMAGETYPE ??= ""
copy_initramfs() {
@@ -364,6 +360,14 @@ do_shared_workdir () {
cp .config $kerneldir/
mkdir -p $kerneldir/include/config
cp include/config/kernel.release $kerneldir/include/config/kernel.release
if [ -e certs/signing_key.pem ]; then
# The signing_key.* files are stored in the certs/ dir in
# newer Linux kernels
mkdir -p $kerneldir/certs
cp certs/signing_key.* $kerneldir/certs/
elif [ -e signing_key.priv ]; then
cp signing_key.* $kerneldir/
fi
# We can also copy over all the generated files and avoid special cases
# like version.h, but we've opted to keep this small until file creep starts
@@ -434,6 +438,7 @@ kernel_do_configure() {
}
do_savedefconfig() {
bbplain "Saving defconfig to:\n${B}/defconfig"
oe_runmake -C ${B} savedefconfig
}
do_savedefconfig[nostamp] = "1"
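The do_shared_workdir addition above copies the kernel's module signing keys into the shared work directory, handling both the newer layout (certs/signing_key.*) and the older top-level layout. A standalone sketch of that fallback, assuming plain directory paths:

import glob, os, shutil

def copy_signing_keys(srctree, kerneldir):
    if os.path.exists(os.path.join(srctree, "certs", "signing_key.pem")):
        # newer kernels keep the keys under certs/
        os.makedirs(os.path.join(kerneldir, "certs"), exist_ok=True)
        for f in glob.glob(os.path.join(srctree, "certs", "signing_key.*")):
            shutil.copy(f, os.path.join(kerneldir, "certs"))
    elif os.path.exists(os.path.join(srctree, "signing_key.priv")):
        # older kernels keep them at the top of the build tree
        for f in glob.glob(os.path.join(srctree, "signing_key.*")):
            shutil.copy(f, kerneldir)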

View File

@@ -387,20 +387,6 @@ def find_license_files(d):
import oe.license
from collections import defaultdict, OrderedDict
pn = d.getVar('PN', True)
for package in d.getVar('PACKAGES', True):
if d.getVar('LICENSE_' + package, True):
license_types = license_types + ' & ' + \
d.getVar('LICENSE_' + package, True)
#If we get here with no license types, then that means we have a recipe
#level license. If so, we grab only those.
try:
license_types
except NameError:
# All the license types at the recipe level
license_types = d.getVar('LICENSE', True)
# All the license files for the package
lic_files = d.getVar('LIC_FILES_CHKSUM', True)
pn = d.getVar('PN', True)
@@ -498,7 +484,7 @@ def find_license_files(d):
v = FindVisitor()
try:
v.visit_string(license_types)
v.visit_string(d.getVar('LICENSE', True))
except oe.license.InvalidLicense as exc:
bb.fatal('%s: %s' % (d.getVar('PF', True), exc))
except SyntaxError:

View File

@@ -8,6 +8,15 @@ EXTRA_OEMAKE += "KERNEL_SRC=${STAGING_KERNEL_DIR}"
MODULES_INSTALL_TARGET ?= "modules_install"
python __anonymous () {
depends = d.getVar('DEPENDS', True)
extra_symbols = []
for dep in depends.split():
if dep.startswith("kernel-module-"):
extra_symbols.append("${STAGING_INCDIR}/" + dep + "/Module.symvers")
d.setVar('KBUILD_EXTRA_SYMBOLS', " ".join(extra_symbols))
}
module_do_compile() {
unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS
oe_runmake KERNEL_PATH=${STAGING_KERNEL_DIR} \
@@ -15,6 +24,7 @@ module_do_compile() {
CC="${KERNEL_CC}" LD="${KERNEL_LD}" \
AR="${KERNEL_AR}" \
O=${STAGING_KERNEL_BUILDDIR} \
KBUILD_EXTRA_SYMBOLS="${KBUILD_EXTRA_SYMBOLS}" \
${MAKE_TARGETS}
}
@@ -24,6 +34,11 @@ module_do_install() {
CC="${KERNEL_CC}" LD="${KERNEL_LD}" \
O=${STAGING_KERNEL_BUILDDIR} \
${MODULES_INSTALL_TARGET}
install -d -m0755 ${D}${includedir}/${BPN}
cp -a --no-preserve=ownership ${B}/Module.symvers ${D}${includedir}/${BPN}
# it doesn't actually seem to matter which path is specified here
sed -e 's:${B}/::g' -i ${D}${includedir}/${BPN}/Module.symvers
}
EXPORT_FUNCTIONS do_compile do_install
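The anonymous python added above lets an out-of-tree module recipe depend on other kernel-module-* recipes: for each such DEPENDS entry it points KBUILD_EXTRA_SYMBOLS at the Module.symvers that the dependency installs under ${includedir} (see module_do_install above). A standalone sketch of the collection step, with an illustrative stand-in value for ${STAGING_INCDIR}:

def kbuild_extra_symbols(depends, staging_incdir="/sysroot/usr/include"):
    symbols = []
    for dep in depends.split():
        if dep.startswith("kernel-module-"):
            symbols.append("%s/%s/Module.symvers" % (staging_incdir, dep))
    return " ".join(symbols)

print(kbuild_extra_symbols("virtual/kernel kernel-module-foo"))
# /sysroot/usr/include/kernel-module-foo/Module.symvers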

View File

@@ -1,4 +1,5 @@
DEPENDS_prepend = "nodejs-native "
RDEPENDS_${PN}_prepend = "nodejs "
S = "${WORKDIR}/npmpkg"
NPM_INSTALLDIR = "${D}${libdir}/node_modules/${PN}"
@@ -15,6 +16,10 @@ def npm_oe_arch_map(target_arch, d):
NPM_ARCH ?= "${@npm_oe_arch_map(d.getVar('TARGET_ARCH', True), d)}"
npm_do_compile() {
# Copy in any additionally fetched modules
if [ -d ${WORKDIR}/node_modules ] ; then
cp -a ${WORKDIR}/node_modules ${S}/
fi
# changing the home directory to the working directory, the .npmrc will
# be created in this directory
export HOME=${WORKDIR}

View File

@@ -226,7 +226,8 @@ def package_compare_impl(pkgtype, d):
else:
bb.plain('Not copying packages for recipe %s' % pn)
do_cleanall_append() {
do_cleansstate[postfuncs] += "pfs_cleanpkgs"
python pfs_cleanpkgs () {
import errno
for pkgclass in (d.getVar('PACKAGE_CLASSES', True) or '').split():
if pkgclass.startswith('package_'):

View File

@@ -5,6 +5,9 @@ QUILTRCFILE ?= "${STAGING_ETCDIR_NATIVE}/quiltrc"
PATCHDEPENDENCY = "${PATCHTOOL}-native:do_populate_sysroot"
PATCH_GIT_USER_NAME ?= "OpenEmbedded"
PATCH_GIT_USER_EMAIL ?= "oe.patch@oe"
inherit terminal
def src_patches(d, all = False ):

Some files were not shown because too many files have changed in this diff.