sphinx: test-manual code block, link, and format update

(From yocto-docs rev: 928c212ec4ad6e09524fdf8147aa7daf244677b0)

Signed-off-by: Mark Morton <mark.morton@windriver.com>
Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
This commit is contained in:
Mark Morton
2020-09-14 16:11:31 -07:00
committed by Richard Purdie
parent fc876832cb
commit 74f212d2e1
3 changed files with 256 additions and 183 deletions


@@ -23,26 +23,25 @@ project core.
Currently, the Yocto Project Test Environment Manual has no projected
release date. This manual is a work-in-progress and is being initially
loaded with information from the `README <>`__ files and notes from key
loaded with information from the README files and notes from key
engineers:
- *``yocto-autobuilder2``:* This
```README.md`http://git.yoctoproject.org/clean/cgit.cgi/yocto-autobuilder2/tree/README.md
- *yocto-autobuilder2:* This
:yocto_git:`README.md </cgit.cgi/yocto-autobuilder2/tree/README.md>`
is the main README which details how to set up the Yocto Project
Autobuilder. The ``yocto-autobuilder2`` repository represents the
Yocto Project's console UI plugin to Buildbot and the configuration
necessary to configure Buildbot to perform the testing the project
requires.
- *``yocto-autobuilder-helper``:* This
```README`http://git.yoctoproject.org/clean/cgit.cgi/yocto-autobuilder-helper/tree/README
- *yocto-autobuilder-helper:* This :yocto_git:`README </cgit.cgi/yocto-autobuilder-helper/tree/README/>`
and repository contains Yocto Project Autobuilder Helper scripts and
configuration. The ``yocto-autobuilder-helper`` repository contains
the "glue" logic that defines which tests to run and how to run them.
As a result, it can be used by any Continuous Improvement (CI) system
to run builds, support getting the correct code revisions, configure
builds and layers, run builds, and collect results. The code is
independent of any CI system, which means the code can work Buildbot,
independent of any CI system, which means the code can work `Buildbot <https://docs.buildbot.net/0.9.15.post1/>`__,
Jenkins, or others. This repository has a branch per release of the
project defining the tests to run on a per release basis.
@@ -124,21 +123,22 @@ the following types of tests:
ensure we have no single point of failure and can ensure the
different distros work effectively.
- *eSDK Testing:* Image tests initiated through the following command:
$ bitbake image -c testsdkext The tests utilize the ``testsdkext``
class and the ``do_testsdkext`` task.
- *eSDK Testing:* Image tests initiated through the following command::
$ bitbake image -c testsdkext
The tests utilize the ``testsdkext`` class and the ``do_testsdkext`` task.
- *Feature Testing:* Various scenario-based tests are run through the
`OpenEmbedded
Self-Test <&YOCTO_DOCS_REF_URL;#testing-and-quality-assurance>`__
(oe-selftest). We test oe-selftest on each of the main distrubutions
:ref:`OpenEmbedded Self-Test (oe-selftest) <ref-manual/ref-release-process:Testing and Quality Assurance>`. We test oe-selftest on each of the main distributions
we support.
- *Image Testing:* Image tests initiated through the following command:
$ bitbake image -c testimage The tests utilize the
:ref:`testimage* <ref-classes-testimage*>`
classes and the
:ref:`ref-tasks-testimage` task.
- *Image Testing:* Image tests initiated through the following command::
$ bitbake image -c testimage
The tests utilize the :ref:`testimage* <ref-classes-testimage*>`
classes and the :ref:`ref-tasks-testimage` task.
- *Layer Testing:* The Autobuilder has the possibility to test whether
specific layers work with the test of the system. The layers tested
@@ -147,20 +147,22 @@ the following types of tests:
- *Package Testing:* A Package Test (ptest) runs tests against packages
built by the OpenEmbedded build system on the target machine. See the
"`Testing Packages With
ptest <&YOCTO_DOCS_DEV_URL;#testing-packages-with-ptest>`__" section
:ref:`Testing Packages With
ptest <dev-manual/dev-manual-common-tasks:Testing Packages With ptest>` section
in the Yocto Project Development Tasks Manual and the
":yocto_wiki:`Ptest </wiki/Ptest>`" Wiki page for more
information on Ptest.
- *SDK Testing:* Image tests initiated through the following command: $
bitbake image -c testsdk The tests utilize the
:ref:`testsdk <ref-classes-testsdk>` class and
- *SDK Testing:* Image tests initiated through the following command::
$ bitbake image -c testsdk
The tests utilize the :ref:`testsdk <ref-classes-testsdk>` class and
the ``do_testsdk`` task.
- *Unit Testing:* Unit tests on various components of the system run
through ``oe-selftest`` and
```bitbake-selftest`` <&YOCTO_DOCS_REF_URL;#testing-and-quality-assurance>`__.
through :ref:`bitbake-selftest <ref-manual/ref-release-process:Testing and Quality Assurance>` and
:ref:`oe-selftest <ref-manual/ref-release-process:Testing and Quality Assurance>`.
- *Automatic Upgrade Helper:* This target tests whether new versions of
software are available and whether we can automatically upgrade to
@@ -174,36 +176,43 @@ How Tests Map to Areas of Code
Tests map into the codebase as follows:
- *bitbake-selftest*:
- *bitbake-selftest:*
These tests are self-contained and test BitBake as well as its APIs,
which include the fetchers. The tests are located in
``bitbake/lib/*/tests``.
From within the BitBake repository, run the following: $
bitbake-selftest
From within the BitBake repository, run the following::
$ bitbake-selftest
To skip tests that access the Internet, use the ``BB_SKIP_NETTEST``
variable when running "bitbake-selftest" as follows: $
BB_SKIP_NETTEST=yes bitbake-selftest
variable when running "bitbake-selftest" as follows::
$ BB_SKIP_NETTEST=yes bitbake-selftest
The default output is quiet and just prints a summary of what was
run. To see more information, there is a verbose option:$
bitbake-selftest -v
run. To see more information, there is a verbose option::
$ bitbake-selftest -v
Use this option when you wish to skip tests that access the network,
which are mostly necessary to test the fetcher modules. To specify
individual test modules to run, append the test module name to the
"bitbake-selftest" command. For example, to specify the tests for the
bb.data.module, run: $ bitbake-selftest bb.test.data.moduleYou can
also specify individual tests by defining the full name and module
plus the class path of the test, for example: $ bitbake-selftest
bb.tests.data.TestOverrides.test_one_override
bb.data.module, run::
$ bitbake-selftest bb.test.data.module
You can also specify individual tests by defining the full name and module
plus the class path of the test, for example::
$ bitbake-selftest bb.tests.data.TestOverrides.test_one_override
The tests are based on `Python
unittest <https://docs.python.org/3/library/unittest.html>`__.
- *oe-selftest*:
- *oe-selftest:*
- These tests use OE to test the workflows, which include testing
specific features, behaviors of tasks, and API unit tests.
@@ -219,31 +228,40 @@ Tests map into the codebase as follows:
- The code for the tests resides in
``meta/lib/oeqa/selftest/cases/``.
- To run all the tests, enter the following command: $ oe-selftest
-a
- To run all the tests, enter the following command::
$ oe-selftest -a
- To run a specific test, use the following command form where
testname is the name of the specific test: $ oe-selftest -r
testname For example, the following command would run the tinfoil
getVar API test:$ oe-selftest -r
tinfoil.TinfoilTests.test_getvarIt is also possible to run a set
testname is the name of the specific test::
$ oe-selftest -r <testname>
For example, the following command would run the tinfoil
getVar API test::
$ oe-selftest -r tinfoil.TinfoilTests.test_getvar
It is also possible to run a set
of tests. For example the following command will run all of the
tinfoil tests:$ oe-selftest -r tinfoil
tinfoil tests::
$ oe-selftest -r tinfoil
- *testimage:*
- These tests build an image, boot it, and run tests against the
image's content.
- The code for these tests resides in
``meta/lib/oeqa/runtime/cases/``.
- The code for these tests resides in ``meta/lib/oeqa/runtime/cases/``.
- You need to set the
:term:`IMAGE_CLASSES`
variable as follows: IMAGE_CLASSES += "testimage"
- You need to set the :term:`IMAGE_CLASSES` variable as follows::
- Run the tests using the following command form: $ bitbake image -c
testimage
IMAGE_CLASSES += "testimage"
- Run the tests using the following command form::
$ bitbake image -c testimage
- *testsdk:*
@@ -252,8 +270,9 @@ Tests map into the codebase as follows:
- The code for these tests resides in ``meta/lib/oeqa/sdk/cases/``.
- Run the test using the following command form: $ bitbake image -c
testsdk
- Run the test using the following command form::
$ bitbake image -c testsdk
- *testsdk_ext:*
@@ -262,8 +281,9 @@ Tests map into the codebase as follows:
- The code for these tests resides in ``meta/lib/oeqa/esdk``.
- To run the tests, use the following command form: $ bitbake image
-c testsdkext
- To run the tests, use the following command form::
$ bitbake image -c testsdkext
- *oe-build-perf-test:*
@@ -272,8 +292,11 @@ Tests map into the codebase as follows:
- The code for these tests resides in ``meta/lib/oeqa/buildperf``.
- To run the tests, use the following command form: $
oe-build-perf-test optionsThe command takes a number of options,
- To run the tests, use the following command form::
$ oe-build-perf-test <options>
The command takes a number of options,
such as where to place the test results. The Autobuilder Helper
Scripts include the ``build-perf-test-wrapper`` script with
examples of how to use the oe-build-perf-test from the command
@@ -285,9 +308,9 @@ Tests map into the codebase as follows:
Use the ``oe-build-perf-report`` command to generate text reports
and HTML reports with graphs of the performance data. For
examples, see
`http://downloads.yoctoproject.org/releases/yocto/yocto-2.7/testresults/buildperf-centos7/perf-centos7.yoctoproject.org_warrior_20190414204758_0e39202.html <#>`__
:yocto_dl:`/releases/yocto/yocto-2.7/testresults/buildperf-centos7/perf-centos7.yoctoproject.org_warrior_20190414204758_0e39202.html`
and
`http://downloads.yoctoproject.org/releases/yocto/yocto-2.7/testresults/buildperf-centos7/perf-centos7.yoctoproject.org_warrior_20190414204758_0e39202.txt <#>`__.
:yocto_dl:`/releases/yocto/yocto-2.7/testresults/buildperf-centos7/perf-centos7.yoctoproject.org_warrior_20190414204758_0e39202.txt`.
- The tests are contained in ``lib/oeqa/buildperf/test_basic.py``.
@@ -295,7 +318,7 @@ Test Examples
=============
This section provides example tests for each of the tests listed in the
`How Tests Map to Areas of Code <#test-test-mapping>`__ section.
:ref:`test-manual/test-manual-intro:How Tests Map to Areas of Code` section.
For oeqa tests, the testcases for each area reside in the main test
directory, ``meta/lib/oeqa/selftest/cases``.
@@ -308,14 +331,20 @@ directory.
``bitbake-selftest``
--------------------
A simple test example from ``lib/bb/tests/data.py`` is: class
DataExpansions(unittest.TestCase): def setUp(self): self.d =
bb.data.init() self.d["foo"] = "value_of_foo" self.d["bar"] =
"value_of_bar" self.d["value_of_foo"] = "value_of_'value_of_foo'" def
test_one_var(self): val = self.d.expand("${foo}")
self.assertEqual(str(val), "value_of_foo")
A simple test example from ``lib/bb/tests/data.py`` is::
In this example, a ```DataExpansions`` <>`__ class of tests is created,
   class DataExpansions(unittest.TestCase):
       def setUp(self):
           self.d = bb.data.init()
           self.d["foo"] = "value_of_foo"
           self.d["bar"] = "value_of_bar"
           self.d["value_of_foo"] = "value_of_'value_of_foo'"

       def test_one_var(self):
           val = self.d.expand("${foo}")
           self.assertEqual(str(val), "value_of_foo")
In this example, a ``DataExpansions`` class of tests is created,
derived from standard python unittest. The class has a common ``setUp``
function which is shared by all the tests in the class. A simple test is
then added to test that when a variable is expanded, the correct value
@@ -323,7 +352,7 @@ is found.
Bitbake selftests are straightforward python unittest. Refer to the
Python unittest documentation for additional information on writing
these tests at: `https://docs.python.org/3/library/unittest.html <#>`__.
these tests at: https://docs.python.org/3/library/unittest.html.
.. _oe-selftest-example:
@@ -334,14 +363,15 @@ These tests are more complex due to the setup required behind the scenes
for full builds. Rather than directly using Python's unittest, the code
wraps most of the standard objects. The tests can be simple, such as
testing a command from within the OE build environment using the
following example:class BitbakeLayers(OESelftestTestCase): def
test_bitbakelayers_showcrossdepends(self): result =
runCmd('bitbake-layers show-cross-depends') self.assertTrue('aspell' in
result.output, msg = "No dependencies were shown. bitbake-layers
show-cross-depends output: %s"% result.output)
following example::
   class BitbakeLayers(OESelftestTestCase):
       def test_bitbakelayers_showcrossdepends(self):
           result = runCmd('bitbake-layers show-cross-depends')
           self.assertTrue('aspell' in result.output,
                           msg="No dependencies were shown. bitbake-layers show-cross-depends output: %s" % result.output)
This example, taken from ``meta/lib/oeqa/selftest/cases/bblayers.py``,
creates a testcase from the ```OESelftestTestCase`` <>`__ class, derived
creates a testcase from the ``OESelftestTestCase`` class, derived
from ``unittest.TestCase``, which runs the ``bitbake-layers`` command
and checks the output to ensure it contains something we know should be
here.
@@ -367,7 +397,7 @@ so tests within a given test class should always run in the same build,
while tests in different classes or modules may be split into different
builds. There is no data store available for these tests since the tests
launch the ``bitbake`` command and exist outside of its context. As a
result, common bitbake library functions (bb.*) are also unavailable.
result, common bitbake library functions (bb.\*) are also unavailable.
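The class-level split can be pictured with a small, purely illustrative sketch; ``bucket_for`` and the md5-based scheme below are hypothetical, not the actual oe-selftest scheduling code:

```python
import hashlib

def bucket_for(test_id, num_buckets):
    """Assign a test to a build bucket by its class name, so every
    test method in the same class lands in the same bucket."""
    # test_id looks like "module.Class.test_method"; drop the method
    class_name = test_id.rsplit(".", 1)[0]
    digest = hashlib.md5(class_name.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets

# Two methods of one class always share a bucket, so they share a build
a = bucket_for("tinfoil.TinfoilTests.test_getvar", 4)
b = bucket_for("tinfoil.TinfoilTests.test_expand", 4)
assert a == b
```

Any deterministic function of the class name gives the same guarantee: same class, same build; different classes may land in different builds.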
.. _testimage-example:
@@ -378,14 +408,18 @@ These tests are run once an image is up and running, either on target
hardware or under QEMU. As a result, they are assumed to be running in a
target image environment, as opposed to a host build environment. A
simple example from ``meta/lib/oeqa/runtime/cases/python.py`` contains
the following:class PythonTest(OERuntimeTestCase):
@OETestDepends(['ssh.SSHTest.test_ssh']) @OEHasPackage(['python3-core'])
def test_python3(self): cmd = "python3 -c \\"import codecs;
print(codecs.encode('Uryyb, jbeyq', 'rot13'))\"" status, output =
self.target.run(cmd) msg = 'Exit status was not 0. Output: %s' % output
self.assertEqual(status, 0, msg=msg)
the following::
In this example, the ```OERuntimeTestCase`` <>`__ class wraps
   class PythonTest(OERuntimeTestCase):
       @OETestDepends(['ssh.SSHTest.test_ssh'])
       @OEHasPackage(['python3-core'])
       def test_python3(self):
           cmd = "python3 -c \"import codecs; print(codecs.encode('Uryyb, jbeyq', 'rot13'))\""
           status, output = self.target.run(cmd)
           msg = 'Exit status was not 0. Output: %s' % output
           self.assertEqual(status, 0, msg=msg)
In this example, the ``OERuntimeTestCase`` class wraps
``unittest.TestCase``. Within the test, ``self.target`` represents the
target system, where commands can be run on it using the ``run()``
method.
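The ROT13 one-liner the test sends to the target can be tried on any host with plain Python, which makes the expected output easy to verify:

```python
import codecs

# ROT13-decode the string the runtime test uses
print(codecs.encode('Uryyb, jbeyq', 'rot13'))  # prints "Hello, world"
```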
@@ -402,19 +436,30 @@ the image.
These tests are run against built extensible SDKs (eSDKs). The tests can
assume that the eSDK environment has already been setup. An example from
``meta/lib/oeqa/sdk/cases/devtool.py`` contains the following:class
DevtoolTest(OESDKExtTestCase): @classmethod def setUpClass(cls):
myapp_src = os.path.join(cls.tc.esdk_files_dir, "myapp") cls.myapp_dst =
os.path.join(cls.tc.sdk_dir, "myapp") shutil.copytree(myapp_src,
cls.myapp_dst) subprocess.check_output(['git', 'init', '.'],
cwd=cls.myapp_dst) subprocess.check_output(['git', 'add', '.'],
cwd=cls.myapp_dst) subprocess.check_output(['git', 'commit', '-m',
"'test commit'"], cwd=cls.myapp_dst) @classmethod def
tearDownClass(cls): shutil.rmtree(cls.myapp_dst) def
\_test_devtool_build(self, directory): self._run('devtool add myapp %s'
% directory) try: self._run('devtool build myapp') finally:
self._run('devtool reset myapp') def test_devtool_build_make(self):
self._test_devtool_build(self.myapp_dst)In this example, the ``devtool``
``meta/lib/oeqa/sdk/cases/devtool.py`` contains the following::
   class DevtoolTest(OESDKExtTestCase):
       @classmethod
       def setUpClass(cls):
           myapp_src = os.path.join(cls.tc.esdk_files_dir, "myapp")
           cls.myapp_dst = os.path.join(cls.tc.sdk_dir, "myapp")
           shutil.copytree(myapp_src, cls.myapp_dst)
           subprocess.check_output(['git', 'init', '.'], cwd=cls.myapp_dst)
           subprocess.check_output(['git', 'add', '.'], cwd=cls.myapp_dst)
           subprocess.check_output(['git', 'commit', '-m', "'test commit'"], cwd=cls.myapp_dst)

       @classmethod
       def tearDownClass(cls):
           shutil.rmtree(cls.myapp_dst)

       def _test_devtool_build(self, directory):
           self._run('devtool add myapp %s' % directory)
           try:
               self._run('devtool build myapp')
           finally:
               self._run('devtool reset myapp')

       def test_devtool_build_make(self):
           self._test_devtool_build(self.myapp_dst)
In this example, the ``devtool``
command is tested to see whether a sample application can be built with
the ``devtool build`` command within the eSDK.
@@ -426,14 +471,20 @@ the ``devtool build`` command within the eSDK.
These tests are run against built SDKs. The tests can assume that an SDK
has already been extracted and its environment file has been sourced. A
simple example from ``meta/lib/oeqa/sdk/cases/python2.py`` contains the
following:class Python3Test(OESDKTestCase): def setUp(self): if not
(self.tc.hasHostPackage("nativesdk-python3-core") or
self.tc.hasHostPackage("python3-core-native")): raise
unittest.SkipTest("No python3 package in the SDK") def
test_python3(self): cmd = "python3 -c \\"import codecs;
print(codecs.encode('Uryyb, jbeyq', 'rot13'))\"" output = self._run(cmd)
self.assertEqual(output, "Hello, world\n")In this example, if
nativesdk-python3-core has been installed into the SDK, the code runs
following::
   class Python3Test(OESDKTestCase):
       def setUp(self):
           if not (self.tc.hasHostPackage("nativesdk-python3-core") or
                   self.tc.hasHostPackage("python3-core-native")):
               raise unittest.SkipTest("No python3 package in the SDK")

       def test_python3(self):
           cmd = "python3 -c \"import codecs; print(codecs.encode('Uryyb, jbeyq', 'rot13'))\""
           output = self._run(cmd)
           self.assertEqual(output, "Hello, world\n")
In this example, if nativesdk-python3-core has been installed into the SDK, the code runs
the python3 interpreter with a basic command to check it is working
correctly. The test would only run if python3 is installed in the SDK.
@@ -444,17 +495,25 @@ correctly. The test would only run if python3 is installed in the SDK.
The performance tests usually measure how long operations take and the
resource utilisation as that happens. An example from
``meta/lib/oeqa/buildperf/test_basic.py`` contains the following:class
Test3(BuildPerfTestCase): def test3(self): """Bitbake parsing (bitbake
-p)""" # Drop all caches and parse self.rm_cache()
oe.path.remove(os.path.join(self.bb_vars['TMPDIR'], 'cache'), True)
self.measure_cmd_resources(['bitbake', '-p'], 'parse_1', 'bitbake -p (no
caches)') # Drop tmp/cache
oe.path.remove(os.path.join(self.bb_vars['TMPDIR'], 'cache'), True)
self.measure_cmd_resources(['bitbake', '-p'], 'parse_2', 'bitbake -p (no
tmp/cache)') # Parse with fully cached data
self.measure_cmd_resources(['bitbake', '-p'], 'parse_3', 'bitbake -p
(cached)')This example shows how three specific parsing timings are
``meta/lib/oeqa/buildperf/test_basic.py`` contains the following::
   class Test3(BuildPerfTestCase):
       def test3(self):
           """Bitbake parsing (bitbake -p)"""
           # Drop all caches and parse
           self.rm_cache()
           oe.path.remove(os.path.join(self.bb_vars['TMPDIR'], 'cache'), True)
           self.measure_cmd_resources(['bitbake', '-p'], 'parse_1',
                                      'bitbake -p (no caches)')
           # Drop tmp/cache
           oe.path.remove(os.path.join(self.bb_vars['TMPDIR'], 'cache'), True)
           self.measure_cmd_resources(['bitbake', '-p'], 'parse_2',
                                      'bitbake -p (no tmp/cache)')
           # Parse with fully cached data
           self.measure_cmd_resources(['bitbake', '-p'], 'parse_3',
                                      'bitbake -p (cached)')
This example shows how three specific parsing timings are
measured, with and without various caches, to show how BitBake's parsing
performance trends over time.
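The core of such a measurement is timing a child process and recording its resource usage. The sketch below is a simplified stand-in, not the real ``measure_cmd_resources`` implementation:

```python
import resource
import subprocess
import sys
import time

def measure_cmd(cmd):
    """Run a command, returning its wall-clock time and the peak
    resident set size of child processes (a simplified illustration)."""
    start = time.time()
    subprocess.check_call(cmd)
    elapsed = time.time() - start
    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    return elapsed, usage.ru_maxrss

# Example: measure a trivial interpreter start-up
elapsed, maxrss = measure_cmd([sys.executable, "-c", "pass"])
```

Collecting these numbers per named step ('parse_1', 'parse_2', ...) over successive builds is what makes the performance trend graphs possible.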


@@ -23,11 +23,11 @@ We have two broad categories of test builds, including "full" and
"quick". On the Autobuilder, these can be seen as "a-quick" and
"a-full", simply for ease of sorting in the UI. Use our Autobuilder
console view to see where we manage most test-related items, available
at: `https://autobuilder.yoctoproject.org/typhoon/#/console <#>`__.
at: :yocto_ab:`/typhoon/#/console`.
Builds are triggered manually when the test branches are ready. The
builds are monitored by the SWAT team. For additional information, see
`https://wiki.yoctoproject.org/wiki/Yocto_Build_Failure_Swat_Team <#>`__.
:yocto_wiki:`/wiki/Yocto_Build_Failure_Swat_Team`.
If successful, the changes would usually be merged to the ``master``
branch. If not successful, someone would respond to the changes on the
mailing list explaining that there was a failure in testing. The choice
@@ -37,9 +37,9 @@ which the result was required.
The Autobuilder does build the ``master`` branch once daily for several
reasons, in particular, to ensure the current ``master`` branch does
build, but also to keep ``yocto-testresults``
(`http://git.yoctoproject.org/cgit.cgi/yocto-testresults/ <#>`__),
(:yocto_git:`/cgit.cgi/yocto-testresults/`),
buildhistory
(`http://git.yoctoproject.org/cgit.cgi/poky-buildhistory/ <#>`__), and
(:yocto_git:`/cgit.cgi/poky-buildhistory/`), and
our sstate up to date. On the weekend, there is a master-next build
instead to ensure the test results are updated for the less frequently
run targets.
@@ -47,7 +47,7 @@ run targets.
Performance builds (buildperf-\* targets in the console) are triggered
separately every six hours and automatically push their results to the
buildstats repository at:
`http://git.yoctoproject.org/cgit.cgi/yocto-buildstats/ <#>`__.
:yocto_git:`/cgit.cgi/yocto-buildstats/`.
The 'quick' targets have been selected to be the ones which catch the
most failures or give the most valuable data. We run 'fast' ptests in
@@ -56,8 +56,6 @@ target doesn't include \*-lsb builds for all architectures, some world
builds and doesn't trigger performance tests or ltp testing. The full
build includes all these things and is slower but more comprehensive.
.. _test-yocto-project-autobuilder-overview:
Release Builds
==============
@@ -76,7 +74,7 @@ box to "generate an email to QA" is also checked.
When the build completes, an email is sent out using the send-qa-email
script in the ``yocto-autobuilder-helper`` repository to the list of
people configured for that release. Release builds are placed into a
directory in `https://autobuilder.yocto.io/pub/releases <#>`__ on the
directory in https://autobuilder.yocto.io/pub/releases on the
Autobuilder which is included in the email. The process from here is
more manual and control is effectively passed to release engineering.
The next steps include:


@@ -10,7 +10,7 @@ Execution Flow within the Autobuilder
The “a-full” and “a-quick” targets are the usual entry points into the
Autobuilder and it makes sense to follow the process through the system
starting there. This is best visualised from the Autobuilder Console
view (`https://autobuilder.yoctoproject.org/typhoon/#/console <#>`__).
view (:yocto_ab:`/typhoon/#/console`).
Each item along the top of that view represents some “target build” and
these targets are all run in parallel. The full build will trigger the
@@ -20,32 +20,48 @@ each of those targets on a separate buildbot worker. To understand the
configuration, you need to look at the entry on ``config.json`` file
within the ``yocto-autobuilder-helper`` repository. The targets are
defined in the overrides section, a quick example could be qemux86-64
which looks like:"qemux86-64" : { "MACHINE" : "qemux86-64", "TEMPLATE" :
"arch-qemu", "step1" : { "extravars" : [ "IMAGE_FSTYPES_append = ' wic
wic.bmap'" ] } },And to expand that, you need the “arch-qemu” entry from
the “templates” section, which looks like:"arch-qemu" : { "BUILDINFO" :
true, "BUILDHISTORY" : true, "step1" : { "BBTARGETS" : "core-image-sato
core-image-sato-dev core-image-sato-sdk core-image-minimal
core-image-minimal-dev core-image-sato:do_populate_sdk", "SANITYTARGETS"
: "core-image-minimal:do_testimage core-image-sato:do_testimage
core-image-sato-sdk:do_testimage core-image-sato:do_testsdk" }, "step2"
: { "SDKMACHINE" : "x86_64", "BBTARGETS" :
"core-image-sato:do_populate_sdk core-image-minimal:do_populate_sdk_ext
core-image-sato:do_populate_sdk_ext", "SANITYTARGETS" :
"core-image-sato:do_testsdk core-image-minimal:do_testsdkext
core-image-sato:do_testsdkext" }, "step3" : { "BUILDHISTORY" : false,
"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; DISPLAY=:1 oe-selftest
${HELPERSTMACHTARGS} -j 15"], "ADDLAYER" :
["${BUILDDIR}/../meta-selftest"] } },Combining these two entries you can
see that “qemux86-64” is a three step build where the
``bitbake BBTARGETS`` would be run, then ``bitbake
SANITYTARGETS`` for each step; all for
which looks like::
   "qemux86-64" : {
       "MACHINE" : "qemux86-64",
       "TEMPLATE" : "arch-qemu",
       "step1" : {
           "extravars" : [
               "IMAGE_FSTYPES_append = ' wic wic.bmap'"
           ]
       }
   },
And to expand that, you need the “arch-qemu” entry from
the “templates” section, which looks like::
   "arch-qemu" : {
       "BUILDINFO" : true,
       "BUILDHISTORY" : true,
       "step1" : {
           "BBTARGETS" : "core-image-sato core-image-sato-dev core-image-sato-sdk core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk",
           "SANITYTARGETS" : "core-image-minimal:do_testimage core-image-sato:do_testimage core-image-sato-sdk:do_testimage core-image-sato:do_testsdk"
       },
       "step2" : {
           "SDKMACHINE" : "x86_64",
           "BBTARGETS" : "core-image-sato:do_populate_sdk core-image-minimal:do_populate_sdk_ext core-image-sato:do_populate_sdk_ext",
           "SANITYTARGETS" : "core-image-sato:do_testsdk core-image-minimal:do_testsdkext core-image-sato:do_testsdkext"
       },
       "step3" : {
           "BUILDHISTORY" : false,
           "EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; DISPLAY=:1 oe-selftest ${HELPERSTMACHTARGS} -j 15"],
           "ADDLAYER" : ["${BUILDDIR}/../meta-selftest"]
       }
   },
Combining these two entries you can see that “qemux86-64” is a three step build where the
``bitbake BBTARGETS`` would be run, then ``bitbake SANITYTARGETS`` for each step; all for
``MACHINE="qemux86-64"`` but with differing ``SDKMACHINE`` settings. In step
1 an extra variable is added to the ``auto.conf`` file to enable wic
image generation.
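Combining a target entry with its template amounts to a layered dictionary merge. The ``expand`` function below is a hypothetical illustration; the real merge logic lives in the ``yocto-autobuilder-helper`` scripts and may differ:

```python
# Cut-down versions of the config.json entries above (illustrative only)
templates = {
    "arch-qemu": {
        "BUILDINFO": True,
        "step1": {"BBTARGETS": "core-image-sato core-image-minimal"},
    }
}
target = {
    "MACHINE": "qemux86-64",
    "TEMPLATE": "arch-qemu",
    "step1": {"extravars": ["IMAGE_FSTYPES_append = ' wic wic.bmap'"]},
}

def expand(target, templates):
    """Overlay a target entry onto its TEMPLATE entry; step dictionaries
    are merged key-by-key rather than replaced wholesale."""
    base = templates[target["TEMPLATE"]]
    merged = {**base, **{k: v for k, v in target.items() if k != "TEMPLATE"}}
    for key in base:
        if key.startswith("step") and key in target:
            merged[key] = {**base[key], **target[key]}
    return merged

config = expand(target, templates)
```

After expansion, ``config`` carries both the template's ``BBTARGETS`` and the target's ``extravars`` for step 1, which is how one template can serve many machines.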
While not every detail of this is covered here, you can see how the
templating mechanism allows quite complex configurations to be built up
template mechanism allows quite complex configurations to be built up
yet allows duplication and repetition to be kept to a minimum.
The different build targets are designed to allow for parallelisation,
@@ -66,13 +82,13 @@ For each given target in a build, the Autobuilder executes several
steps. These are configured in ``yocto-autobuilder2/builders.py`` and
roughly consist of:
1. *Run ``clobberdir``*
#. *Run clobberdir*.
This cleans out any previous build. Old builds are left around to
allow easier debugging of failed builds. For additional information,
see ```clobberdir`` <#test-clobberdir>`__.
see :ref:`test-manual/test-manual-understand-autobuilder:clobberdir`.
2. *Obtain yocto-autobuilder-helper*
#. *Obtain yocto-autobuilder-helper*
This step clones the ``yocto-autobuilder-helper`` git repository.
This is necessary to prevent the requirement to maintain all the
@@ -80,12 +96,12 @@ roughly consist of:
matches the release being built so we can support older releases and
still make changes in newer ones.
3. *Write layerinfo.json*
#. *Write layerinfo.json*
This transfers data in the Buildbot UI when the build was configured
to the Helper.
4. *Call scripts/shared-repo-unpack*
#. *Call scripts/shared-repo-unpack*
This is a call into the Helper scripts to set up a checkout of all
the pieces this build might need. It might clone the BitBake
@@ -94,7 +110,7 @@ roughly consist of:
from the ``layerinfo.json`` file to help understand the
configuration. It will also use a local cache of repositories to
speed up the clone checkouts. For additional information, see
`Autobuilder Clone Cache <#test-autobuilder-clone-cache>`__.
:ref:`test-manual/test-manual-understand-autobuilder:Autobuilder Clone Cache`.
This step has two possible modes of operation. If the build is part
of a parent build, it's possible that all the repositories needed may
@@ -114,7 +130,7 @@ roughly consist of:
available, it would clone from the cache and the upstreams as
necessary. This is considered the fallback mode.
5. *Call scripts/run-config*
#. *Call scripts/run-config*
This is another call into the Helper scripts where it's expected that
the main functionality of this target will be executed.
@@ -137,8 +153,7 @@ special script that moves files to a special location, rather than
deleting them. Files in this location are deleted by an ``rm`` command,
which is run under ``ionice -c 3``. For example, the deletion only
happens when there is idle IO capacity on the Worker. The Autobuilder
Worker Janitor runs this deletion. See `Autobuilder Worker
Janitor <#test-autobuilder-worker-janitor>`__.
Worker Janitor runs this deletion. See :ref:`test-manual/test-manual-understand-autobuilder:Autobuilder Worker Janitor`.
.. _test-autobuilder-clone-cache:
@@ -150,8 +165,7 @@ on the Autobuilder. We therefore have a stash of commonly used
repositories pre-cloned on the Workers. Data is fetched from these
during clones first, then "topped up" with later revisions from any
upstream when necessary. The cache is maintained by the Autobuilder
Worker Janitor. See `Autobuilder Worker
Janitor <#test-autobuilder-worker-janitor>`__.
Worker Janitor. See :ref:`test-manual/test-manual-understand-autobuilder:Autobuilder Worker Janitor`.
.. _test-autobuilder-worker-janitor:
@@ -159,8 +173,7 @@ Autobuilder Worker Janitor
--------------------------
This is a process running on each Worker that performs two basic
operations, including background file deletion at IO idle (see `Target
Execution: clobberdir <#test-list-tgt-exec-clobberdir>`__) and
operations, including background file deletion at IO idle (see :ref:`test-manual/test-manual-understand-autobuilder:Autobuilder Target Execution Overview`: Run clobberdir) and
maintenance of a cache of cloned repositories to improve the speed at
which the system can check out repositories.
@@ -181,7 +194,7 @@ Shared SSTATE_DIR
The Workers are all connected over NFS which allows the ``sstate``
directory to be shared between them. This means once a Worker has built
an artefact, all the others can benefit from it. Usage of the directory
an artifact, all the others can benefit from it. Usage of the directory
within the directory is designed for sharing over NFS.
.. _test-resulttool:
@@ -198,7 +211,7 @@ Resulttool is part of OpenEmbedded-Core and is used to manipulate these
json results files. It has the ability to merge files together, display
reports of the test results and compare different result files.
For details, see `https://wiki.yoctoproject.org/wiki/Resulttool <#>`__.
For details, see :yocto_wiki:`/wiki/Resulttool`.
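Conceptually, merging result files is a keyed dictionary merge. The sketch below is a hypothetical illustration of the idea, not resulttool's actual code or on-disk file format:

```python
def merge_results(*result_sets):
    """Merge several {test_id: outcome} mappings; later files win
    when the same test appears more than once."""
    merged = {}
    for results in result_sets:
        for test_id, outcome in results.items():
            merged[test_id] = outcome
    return merged

# Two hypothetical runs: the second run's outcome overrides the first
run_a = {"ptest.busybox": "PASSED"}
run_b = {"oeselftest.tinfoil": "PASSED", "ptest.busybox": "FAILED"}
merged = merge_results(run_a, run_b)
```

Report generation and comparison then operate over merged mappings like this one.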
.. _test-run-config-tgt-execution:
@@ -209,50 +222,46 @@ The ``scripts/run-config`` execution is where most of the work within
the Autobuilder happens. It runs through a number of steps; the first
are general setup steps that are run once and include:
1. Set up any ``buildtools-tarball`` if configured.
#. Set up any ``buildtools-tarball`` if configured.
2. Call "buildhistory-init" if buildhistory is configured.
#. Call "buildhistory-init" if buildhistory is configured.
For each step that is configured in ``config.json``, it will perform the
following:
## WRITER's question: What does "logging in as stepXa" and others refer
to below? ##
1. Add any layers that are specified using the
#. Add any layers that are specified using the
``bitbake-layers add-layer`` command (logging as stepXa)
2. Call the ``scripts/setup-config`` script to generate the necessary
#. Call the ``scripts/setup-config`` script to generate the necessary
``auto.conf`` configuration file for the build
3. Run the ``bitbake BBTARGETS`` command (logging as stepXb)
#. Run the ``bitbake BBTARGETS`` command (logging as stepXb)
4. Run the ``bitbake SANITYTARGETS`` command (logging as stepXc)
#. Run the ``bitbake SANITYTARGETS`` command (logging as stepXc)
5. Run the ``EXTRACMDS`` command, which are run within the BitBake build
#. Run the ``EXTRACMDS`` command, which are run within the BitBake build
environment (logging as stepXd)
6. Run the ``EXTRAPLAINCMDS`` command(s), which are run outside the
#. Run the ``EXTRAPLAINCMDS`` command(s), which are run outside the
BitBake build environment (logging as stepXd)
7. Remove any layers added in `step
1 <#test-run-config-add-layers-step>`__ using the
``bitbake-layers remove-layer`` command (logging as stepXa)
#. Remove any layers added in step
1 using the ``bitbake-layers remove-layer`` command (logging as stepXa)
Once the execution steps above complete, ``run-config`` executes a set
of post-build steps, including:
1. Call ``scripts/publish-artifacts`` to collect any output which is to
#. Call ``scripts/publish-artifacts`` to collect any output which is to
be saved from the build.
2. Call ``scripts/collect-results`` to collect any test results to be
#. Call ``scripts/collect-results`` to collect any test results to be
saved from the build.
3. Call ``scripts/upload-error-reports`` to send any error reports
#. Call ``scripts/upload-error-reports`` to send any error reports
generated to the remote server.
4. Cleanup the build directory using
```clobberdir`` <#test-clobberdir>`__ if the build was successful,
#. Cleanup the build directory using
:ref:`test-manual/test-manual-understand-autobuilder:clobberdir` if the build was successful,
else rename it to “build-renamed” for potential future debugging.
.. _test-deploying-yp-autobuilder:
@@ -279,11 +288,18 @@ The standard ``config.json`` minimally attempts to allow substitution of
the paths. The Helper script repository includes a
``local-example.json`` file to show how you could override these from a
separate configuration file. Pass the following into the environment of
the Autobuilder:$ ABHELPER_JSON="config.json local-example.json"As
another example, you could also pass the following into the
environment:$ ABHELPER_JSON="config.json /some/location/local.json"One
issue users often run into is validation of the ``config.json`` files. A
the Autobuilder::
$ ABHELPER_JSON="config.json local-example.json"
As another example, you could also pass the following into the
environment::
$ ABHELPER_JSON="config.json /some/location/local.json"
One issue users often run into is validation of the ``config.json`` files. A
tip for minimizing issues from invalid json files is to use a Git
``pre-commit-hook.sh`` script to verify the JSON file before committing
it. Create a symbolic link as follows:$ ln -s
../../scripts/pre-commit-hook.sh .git/hooks/pre-commit
it. Create a symbolic link as follows::
$ ln -s ../../scripts/pre-commit-hook.sh .git/hooks/pre-commit
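A minimal stand-in for what such a hook checks can be written in Python (illustrative only; the real ``pre-commit-hook.sh`` is a shell script in the Helper repository):

```python
import json
import sys

def validate(path):
    """Return 0 if the file at path parses as JSON, 1 otherwise,
    mirroring the exit-code convention a pre-commit hook relies on."""
    try:
        with open(path) as f:
            json.load(f)
    except json.JSONDecodeError as e:
        print("Invalid JSON in %s: %s" % (path, e), file=sys.stderr)
        return 1
    return 0
```

A non-zero return from the hook aborts the commit, so malformed ``config.json`` changes never reach the repository.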