sphinx: initial sphinx support

This commit was autogenerated with pandoc to produce an initial set
of reST files based on the DocBook XML files.

A .rst file is generated for each .xml file in all manuals with this
command:

cd <manual>
for i in *.xml; do
  pandoc -f docbook -t rst --shift-heading-level-by=-1 \
    $i -o $(basename $i .xml).rst
done

The conversion was done with: pandoc 2.9.2.1-91 (Arch Linux).

Also created an initial top level index file for each document, and
added all 'books' to the top level index.rst file.

The YP manuals layout is organized as:

Book
  Chapter
    Section
      Section
        Section

Sphinx uses section headers to create the document structure.
reStructuredText defines section headers as follows:

   To break longer text up into sections, you use section headers. These
   are a single line of text (one or more words) with adornment: an
   underline alone, or an underline and an overline together, in dashes
   "-----", equals "======", tildes "~~~~~~" or any of the
   non-alphanumeric characters = - ` : ' " ~ ^ _ * + # < > that you feel
   comfortable with. An underline-only adornment is distinct from an
   overline-and-underline adornment using the same character. The
   underline/overline must be at least as long as the title text. Be
   consistent, since all sections marked with the same adornment style
   are deemed to be at the same level:

Let's define the following convention when converting from Docbook:

Book                => overline ===   (Title)
  Chapter           => overline ***   (1.)
    Section         => ====           (1.1)
      Section       => ----           (1.1.1)
        Section     => ~~~~           (1.1.1.1)
          Section   => ^^^^           (1.1.1.1.1)
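Under this convention, the opening of a converted book looks like the
following sketch (the titles are illustrative, not from any actual
manual):

```rst
===========
Book Title
===========

*********
Chapter 1
*********

Section 1.1
===========

Section 1.1.1
-------------
```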

During the conversion with pandoc, we used --shift-heading-level-by=-1
to convert most of the DocBook headings automatically. However with this
setting, the Chapter header was removed, so I added it back
manually. Without this setting all headings were off by one, which was
more difficult to manually fix.

At least with this change, we now have the same TOC with Sphinx and
DocBook.

(From yocto-docs rev: 3c73d64a476d4423ee4c6808c685fa94d88d7df8)

Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
This commit is contained in:
Nicolas Dechesne
2020-06-26 19:10:51 +02:00
committed by Richard Purdie
parent c40a8d5904
commit 9bd69b1f1d
66 changed files with 49599 additions and 0 deletions


@@ -0,0 +1,178 @@
**********************
Using the Command Line
**********************
Recall that earlier the manual discussed how to use an existing
toolchain tarball that had been installed into the default installation
directory, ``/opt/poky/DISTRO``, which is outside of the `Build
Directory <&YOCTO_DOCS_DEV_URL;#build-directory>`__ (see the
"`Using a Cross-Toolchain
Tarball <#using-an-existing-toolchain-tarball>`__" section). Recall also
that sourcing your architecture-specific environment setup script
initializes a suitable cross-toolchain development environment.
During this setup, locations for the compiler, QEMU scripts, QEMU
binary, a special version of ``pkgconfig``, and other useful utilities
are added to the ``PATH`` variable. Variables to assist
``pkgconfig`` and ``autotools`` are also defined so that, for example,
``configure.sh`` can find pre-generated test results for tests that need
target hardware on which to run. See the "`Setting Up the
Cross-Development
Environment <#setting-up-the-cross-development-environment>`__" section
for the list of cross-toolchain environment variables established by the
script.
Collectively, these conditions allow you to easily use the toolchain
outside of the OpenEmbedded build environment on both Autotools-based
projects and Makefile-based projects. This chapter provides information
for both these types of projects.
Autotools-Based Projects
========================
Once you have a suitable cross-toolchain installed, it is very easy to
develop a project outside of the OpenEmbedded build system. This section
presents a simple "Helloworld" example that shows how to set up,
compile, and run the project.
Creating and Running a Project Based on GNU Autotools
-----------------------------------------------------
Follow these steps to create a simple Autotools-based project:
1. *Create your directory:* Create a clean directory for your project
   and then make that directory your working location::

      $ mkdir $HOME/helloworld
      $ cd $HOME/helloworld

2. *Populate the directory:* Create ``hello.c``, ``Makefile.am``, and
   ``configure.in`` files as follows:

   - For ``hello.c``, include these lines::

        #include <stdio.h>

        main()
        {
           printf("Hello World!\n");
        }

   - For ``Makefile.am``, include these lines::

        bin_PROGRAMS = hello
        hello_SOURCES = hello.c

   - For ``configure.in``, include these lines::

        AC_INIT(hello.c)
        AM_INIT_AUTOMAKE(hello,0.1)
        AC_PROG_CC
        AC_PROG_INSTALL
        AC_OUTPUT(Makefile)

3. *Source the cross-toolchain environment setup file:* Installation of
   the cross-toolchain creates a cross-toolchain environment setup
   script in the directory in which the ADT was installed. Before you
   can use the tools to develop your project, you must source this
   setup script. The script begins with the string "environment-setup"
   and contains the machine architecture, which is followed by the
   string "poky-linux". Here is an example that sources a script from
   the default ADT installation directory that uses the 32-bit Intel
   x86 Architecture and the DISTRO_NAME Yocto Project release::

      $ source /opt/poky/DISTRO/environment-setup-i586-poky-linux

4. *Generate the local aclocal.m4 files and create the configure
   script:* The following GNU Autotools commands generate the local
   ``aclocal.m4`` files and create the configure script::

      $ aclocal
      $ autoconf

5. *Generate files needed by GNU coding standards:* GNU coding
   standards require certain files in order for the project to be
   compliant. This command creates those files::

      $ touch NEWS README AUTHORS ChangeLog

6. *Generate the configure file:* This command generates the
   ``configure`` script::

      $ automake -a

7. *Cross-compile the project:* This command compiles the project using
   the cross-compiler. The
   ```CONFIGURE_FLAGS`` <&YOCTO_DOCS_REF_URL;#var-CONFIGURE_FLAGS>`__
   environment variable provides the minimal arguments for GNU
   configure::

      $ ./configure ${CONFIGURE_FLAGS}

8. *Make and install the project:* These two commands generate and
   install the project into the destination directory::

      $ make
      $ make install DESTDIR=./tmp

9. *Verify the installation:* This command is a simple way to verify
   the installation of your project. Running the command prints the
   architecture on which the binary file can run. This architecture
   should be the same architecture that the installed cross-toolchain
   supports::

      $ file ./tmp/usr/local/bin/hello

10. *Execute your project:* To execute the project in the shell, simply
    enter the name. You could also copy the binary to the actual target
    hardware and run the project there as well::

       $ ./hello

    As expected, the project displays the "Hello World!" message.
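The three files from step 2 above can also be written out from the
shell in one go. A minimal sketch; the file contents match the listings
above, and only the use of heredocs is an addition:

```shell
# Create the example project directory and files from step 2.
mkdir -p $HOME/helloworld
cd $HOME/helloworld

cat > hello.c <<'EOF'
#include <stdio.h>

main()
{
   printf("Hello World!\n");
}
EOF

cat > Makefile.am <<'EOF'
bin_PROGRAMS = hello
hello_SOURCES = hello.c
EOF

cat > configure.in <<'EOF'
AC_INIT(hello.c)
AM_INIT_AUTOMAKE(hello,0.1)
AC_PROG_CC
AC_PROG_INSTALL
AC_OUTPUT(Makefile)
EOF
```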
Passing Host Options
--------------------
For an Autotools-based project, you can use the cross-toolchain by just
passing the appropriate host option to ``configure.sh``. The host option
you use is derived from the name of the environment setup script found
in the directory in which you installed the cross-toolchain. For
example, the host option for an ARM-based target that uses the GNU EABI
is ``armv5te-poky-linux-gnueabi``. You will notice that the name of the
script is ``environment-setup-armv5te-poky-linux-gnueabi``. Thus, the
following command works to update your project and rebuild it using the
appropriate cross-toolchain tools::

   $ ./configure --host=armv5te-poky-linux-gnueabi \
     --with-libtool-sysroot=sysroot_dir
.. note::

   If the ``configure`` script results in problems recognizing the
   ``--with-libtool-sysroot=sysroot-dir`` option, regenerate the
   script to enable the support by doing the following and then run
   the script again::

      $ libtoolize --automake
      $ aclocal -I ${OECORE_NATIVE_SYSROOT}/usr/share/aclocal \
        [-I dir_containing_your_project-specific_m4_macros]
      $ autoconf
      $ autoheader
      $ automake -a
Makefile-Based Projects
=======================
For Makefile-based projects, the cross-toolchain environment variables
established by running the cross-toolchain environment setup script are
subject to general ``make`` rules.
To illustrate this, consider the following four cross-toolchain
environment variables:

- `CC <&YOCTO_DOCS_REF_URL;#var-CC>`__\ =i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/1.8/sysroots/i586-poky-linux
- `LD <&YOCTO_DOCS_REF_URL;#var-LD>`__\ =i586-poky-linux-ld --sysroot=/opt/poky/1.8/sysroots/i586-poky-linux
- `CFLAGS <&YOCTO_DOCS_REF_URL;#var-CFLAGS>`__\ =-O2 -pipe -g -feliminate-unused-debug-types
- `CXXFLAGS <&YOCTO_DOCS_REF_URL;#var-CXXFLAGS>`__\ =-O2 -pipe -g -feliminate-unused-debug-types

Now, consider the following three cases:
- *Case 1 - No Variables Set in the ``Makefile``:* Because these
variables are not specifically set in the ``Makefile``, the variables
retain their values based on the environment.
- *Case 2 - Variables Set in the ``Makefile``:* Specifically setting
variables in the ``Makefile`` during the build results in the
environment settings of the variables being overwritten.
- *Case 3 - Variables Set when the ``Makefile`` is Executed from the
  Command Line:* Executing the ``Makefile`` from the command line
  results in the variables being overwritten with command-line content
  regardless of what is being set in the ``Makefile``. In this case,
  environment variables are not considered unless you use the "-e" flag
  during the build::

     $ make -e file

  If you use this flag, then the environment values of the variables
  override any variables specifically set in the ``Makefile``.
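The three cases can be checked with a toy ``Makefile`` on the
development host. A sketch, using a throwaway flag value rather than
the real cross-toolchain settings:

```shell
# Demonstrate make variable precedence with a minimal Makefile.
# The value "-O0" is an illustrative placeholder.
printf 'CFLAGS = -O0\nall:\n\t@echo $(CFLAGS)\n' > Makefile

CFLAGS=-O2 make       # Case 1/2: the Makefile setting wins, prints -O0
CFLAGS=-O2 make -e    # with -e, the environment wins, prints -O2
make CFLAGS=-O3       # the command line always wins, prints -O3
```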
.. note::

   For the list of variables set up by the cross-toolchain environment
   setup script, see the "`Setting Up the Cross-Development
   Environment <#setting-up-the-cross-development-environment>`__"
   section.


@@ -0,0 +1,136 @@
*****************************************
The Application Development Toolkit (ADT)
*****************************************
Part of the Yocto Project development solution is an Application
Development Toolkit (ADT). The ADT provides you with a custom-built,
cross-development platform suited for developing a user-targeted product
application.
Fundamentally, the ADT consists of the following:
- An architecture-specific cross-toolchain and matching sysroot both
built by the `OpenEmbedded build
system <&YOCTO_DOCS_DEV_URL;#build-system-term>`__. The toolchain and
sysroot are based on a `Metadata <&YOCTO_DOCS_DEV_URL;#metadata>`__
configuration and extensions, which allows you to cross-develop on
the host machine for the target hardware.
- The Eclipse IDE Yocto Plug-in.
- The Quick EMUlator (QEMU), which lets you simulate target hardware.
- Various user-space tools that greatly enhance your application
development experience.
The Cross-Development Toolchain
===============================
The `Cross-Development
Toolchain <&YOCTO_DOCS_DEV_URL;#cross-development-toolchain>`__ consists
of a cross-compiler, cross-linker, and cross-debugger that are used to
develop user-space applications for targeted hardware. This toolchain is
created either by running the ADT Installer script, a toolchain
installer script, or through a `Build
Directory <&YOCTO_DOCS_DEV_URL;#build-directory>`__ that is based on
your Metadata configuration or extension for your targeted device. The
cross-toolchain works with a matching target sysroot.
Sysroot
=======
The matching target sysroot contains needed headers and libraries for
generating binaries that run on the target architecture. The sysroot is
based on the target root filesystem image that is built by the
OpenEmbedded build system and uses the same Metadata configuration used
to build the cross-toolchain.
.. _eclipse-overview:
Eclipse Yocto Plug-in
=====================
The Eclipse IDE is a popular development environment and it fully
supports development using the Yocto Project. When you install and
configure the Eclipse Yocto Project Plug-in into the Eclipse IDE, you
maximize your Yocto Project experience. Installing and configuring the
Plug-in results in an environment that has extensions specifically
designed to let you more easily develop software. These extensions allow
for cross-compilation, deployment, and execution of your output into a
QEMU emulation session. You can also perform cross-debugging and
profiling. The environment also supports a suite of tools that allows
you to perform remote profiling, tracing, collection of power data,
collection of latency data, and collection of performance data.
For information about the application development workflow that uses the
Eclipse IDE and for a detailed example of how to install and configure
the Eclipse Yocto Project Plug-in, see the "`Working Within
Eclipse <&YOCTO_DOCS_DEV_URL;#adt-eclipse>`__" section of the Yocto
Project Development Manual.
The QEMU Emulator
=================
The QEMU emulator allows you to simulate your hardware while running
your application or image. QEMU is made available in a number of ways:
- If you use the ADT Installer script to install ADT, you can specify
whether or not to install QEMU.
- If you have cloned the ``poky`` Git repository to create a `Source
Directory <&YOCTO_DOCS_DEV_URL;#source-directory>`__ and you have
sourced the environment setup script, QEMU is installed and
automatically available.
- If you have downloaded a Yocto Project release and unpacked it to
create a `Source Directory <&YOCTO_DOCS_DEV_URL;#source-directory>`__
and you have sourced the environment setup script, QEMU is installed
and automatically available.
- If you have installed the cross-toolchain tarball and you have
sourced the toolchain's setup environment script, QEMU is also
installed and automatically available.
User-Space Tools
================
User-space tools are included as part of the Yocto Project. You will
find these tools helpful during development. The tools include
LatencyTOP, PowerTOP, OProfile, Perf, SystemTap, and Lttng-ust. These
tools are common development tools for the Linux platform.
- *LatencyTOP:* LatencyTOP focuses on latency that causes skips in
audio, stutters in your desktop experience, or situations that
overload your server even when you have plenty of CPU power left.
- *PowerTOP:* Helps you determine what software is using the most
  power. You can find out more about PowerTOP at
  https://01.org/powertop/.
- *OProfile:* A system-wide profiler for Linux systems that is capable
  of profiling all running code at low overhead. You can find out more
  about OProfile at http://oprofile.sourceforge.net/about/. For
  examples on how to set up and use this tool, see the
  "`OProfile <&YOCTO_DOCS_PROF_URL;#profile-manual-oprofile>`__"
  section in the Yocto Project Profiling and Tracing Manual.
- *Perf:* Performance counters for Linux used to keep track of certain
  types of hardware and software events. For more information on these
  types of counters see https://perf.wiki.kernel.org/. For
  examples on how to set up and use this tool, see the
  "`perf <&YOCTO_DOCS_PROF_URL;#profile-manual-perf>`__" section in the
  Yocto Project Profiling and Tracing Manual.
- *SystemTap:* A free software infrastructure that simplifies
  information gathering about a running Linux system. This information
  helps you diagnose performance or functional problems. SystemTap is
  not available as a user-space tool through the Eclipse IDE Yocto
  Plug-in. See http://sourceware.org/systemtap for more
  information on SystemTap. For examples on how to set up and use this
  tool, see the
  "`SystemTap <&YOCTO_DOCS_PROF_URL;#profile-manual-systemtap>`__"
  section in the Yocto Project Profiling and Tracing Manual.
- *Lttng-ust:* A User-space Tracer designed to provide detailed
  information on user-space activity. See http://lttng.org/ust
  for more information on Lttng-ust.


@@ -0,0 +1,22 @@
************
Introduction
************
Welcome to the Yocto Project Application Developer's Guide. This manual
provides information that lets you begin developing applications using
the Yocto Project.
The Yocto Project provides an application development environment based
on an Application Development Toolkit (ADT) and the availability of
stand-alone cross-development toolchains and other tools. This manual
describes the ADT and how you can configure and install it, how to
access and use the cross-development toolchains, how to customize the
development packages installation, how to use command-line development
for both Autotools-based and Makefile-based projects, and an
introduction to the Eclipse IDE Yocto Plug-in.
.. note::
The ADT is distribution-neutral and does not require the Yocto
Project reference distribution, which is called Poky. This manual,
however, uses examples that use the Poky distribution.


@@ -0,0 +1,13 @@
===========================================
Yocto Project Application Developer's Guide
===========================================
.. toctree::
   :caption: Table of Contents
   :numbered:

   adt-manual-intro
   adt-intro
   adt-prepare
   adt-package
   adt-command


@@ -0,0 +1,68 @@
************************************************************
Optionally Customizing the Development Packages Installation
************************************************************
Because the Yocto Project is suited for embedded Linux development, it
is likely that you will need to customize your development packages
installation. For example, if you are developing a minimal image, then
you might not need certain packages (e.g. graphics support packages).
Thus, you would like to be able to remove those packages from your
target sysroot.
Package Management Systems
==========================
The OpenEmbedded build system supports the generation of sysroot files
using three different Package Management Systems (PMS):
- *OPKG:* A less well known PMS whose use originated in the
  OpenEmbedded and OpenWrt embedded Linux projects. This PMS works with
  files packaged in an ``.ipk`` format. See
  http://en.wikipedia.org/wiki/Opkg for more information about OPKG.
- *RPM:* A more widely known PMS intended for GNU/Linux distributions.
  This PMS works with files packaged in an ``.rpm`` format. The build
  system currently installs through this PMS by default. See
  http://en.wikipedia.org/wiki/RPM_Package_Manager for more
  information about RPM.
- *Debian:* The PMS for Debian-based systems is built on many PMS
  tools. The lower-level PMS tool ``dpkg`` forms the base of the Debian
  PMS. For information on dpkg see http://en.wikipedia.org/wiki/Dpkg.
Configuring the PMS
===================
Whichever PMS you are using, you need to be sure that the
```PACKAGE_CLASSES`` <&YOCTO_DOCS_REF_URL;#var-PACKAGE_CLASSES>`__
variable in the ``conf/local.conf`` file is set to reflect that system.
The first value you choose for the variable specifies the package file
format for the root filesystem at sysroot. Additional values specify
additional formats for convenience or testing. See the
``conf/local.conf`` configuration file for details.
.. note::

   For build performance information related to the PMS, see the
   "``package.bbclass``" section in the Yocto Project Reference Manual.
As an example, consider a scenario where you are using OPKG and you want
to add the ``libglade`` package to the target sysroot.
First, you should generate the IPK file for the ``libglade`` package and
add it into a working ``opkg`` repository. Use these commands::

   $ bitbake libglade
   $ bitbake package-index
Next, source the cross-toolchain environment setup script found in the
`Source Directory <&YOCTO_DOCS_DEV_URL;#source-directory>`__. Follow
that by setting up the installation destination to point to your sysroot
as sysroot_dir. Finally, have an OPKG configuration file conf_file that
corresponds to the ``opkg`` repository you have just created. The
following command forms should now work::

   $ opkg-cl -f conf_file -o sysroot_dir update
   $ opkg-cl -f conf_file -o sysroot_dir \
     --force-overwrite install libglade
   $ opkg-cl -f conf_file -o sysroot_dir \
     --force-overwrite install libglade-dbg
   $ opkg-cl -f conf_file -o sysroot_dir \
     --force-overwrite install libglade-dev
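The conf_file referenced above is a plain ``opkg`` configuration file.
A minimal sketch; the feed name, URL, and architecture line here are
illustrative assumptions, not values from this manual:

```shell
# Write a minimal opkg configuration file. The repository URL and
# architecture entries below are illustrative placeholders.
cat > conf_file <<'EOF'
src/gz local-feed http://example.com/ipk/all
arch all 1
EOF
```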


@@ -0,0 +1,753 @@
*************************************
Preparing for Application Development
*************************************
In order to develop applications, you need to set up your host development
system. Several ways exist that allow you to install cross-development
tools, QEMU, the Eclipse Yocto Plug-in, and other tools. This chapter
describes how to prepare for application development.
.. _installing-the-adt:
Installing the ADT and Toolchains
=================================
The following list describes installation methods that set up varying
degrees of tool availability on your system. Regardless of the
installation method you choose, you must ``source`` the cross-toolchain
environment setup script, which establishes several key environment
variables, before you use a toolchain. See the "`Setting Up the
Cross-Development
Environment <#setting-up-the-cross-development-environment>`__" section
for more information.
.. note::
Avoid mixing installation methods when installing toolchains for
different architectures. For example, avoid using the ADT Installer
to install some toolchains and then hand-installing cross-development
toolchains by running the toolchain installer for different
architectures. Mixing installation methods can result in situations
where the ADT Installer becomes unreliable and might not install the
toolchain.
If you must mix installation methods, you might avoid problems by
deleting ``/var/lib/opkg``, thus purging the ``opkg`` package
metadata.
- *Use the ADT installer script:* This method is the recommended way to
install the ADT because it automates much of the process for you. For
example, you can configure the installation to install the QEMU
emulator and the user-space NFS, specify which root filesystem
profiles to download, and define the target sysroot location.
- *Use an existing toolchain:* Using this method, you select and
download an architecture-specific toolchain installer and then run
the script to hand-install the toolchain. If you use this method, you
just get the cross-toolchain and QEMU - you do not get any of the
other mentioned benefits had you run the ADT Installer script.
- *Use the toolchain from within the Build Directory:* If you already
have a `Build Directory <&YOCTO_DOCS_DEV_URL;#build-directory>`__,
you can build the cross-toolchain within the directory. However, like
the previous method mentioned, you only get the cross-toolchain and
QEMU - you do not get any of the other benefits without taking
separate steps.
Using the ADT Installer
-----------------------
To run the ADT Installer, you need to get the ADT Installer tarball, be
sure you have the necessary host development packages that support the
ADT Installer, and then run the ADT Installer Script.
For a list of the host packages needed to support ADT installation and
use, see the "ADT Installer Extras" lists in the "`Required Packages for
the Host Development
System <&YOCTO_DOCS_REF_URL;#required-packages-for-the-host-development-system>`__"
section of the Yocto Project Reference Manual.
Getting the ADT Installer Tarball
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ADT Installer is contained in the ADT Installer tarball. You can get
the tarball using either of these methods:
- *Download the Tarball:* You can download the tarball from the
  `ADT Installer download area <&YOCTO_ADTINSTALLER_DL_URL;>`__ into
  any directory.
- *Build the Tarball:* You can use
`BitBake <&YOCTO_DOCS_DEV_URL;#bitbake-term>`__ to generate the
tarball inside an existing `Build
Directory <&YOCTO_DOCS_DEV_URL;#build-directory>`__.
If you use BitBake to generate the ADT Installer tarball, you must
``source`` the environment setup script
(```oe-init-build-env`` <&YOCTO_DOCS_REF_URL;#structure-core-script>`__ or
```oe-init-build-env-memres`` <&YOCTO_DOCS_REF_URL;#structure-memres-core-script>`__)
located in the Source Directory before running the ``bitbake``
command that creates the tarball.
The following example commands establish the `Source
Directory <&YOCTO_DOCS_DEV_URL;#source-directory>`__, check out the
current release branch, set up the build environment while also
creating the default Build Directory, and run the ``bitbake`` command
that results in the tarball
``poky/build/tmp/deploy/sdk/adt_installer.tar.bz2``:
.. note::

   Before using BitBake to build the ADT tarball, be sure your
   ``local.conf`` file is properly configured. See the "User
   Configuration" section in the Yocto Project Reference Manual for
   general configuration information.

::

   $ cd ~
   $ git clone git://git.yoctoproject.org/poky
   $ cd poky
   $ git checkout -b DISTRO_NAME origin/DISTRO_NAME
   $ source OE_INIT_FILE
   $ bitbake adt-installer
Configuring and Running the ADT Installer Script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before running the ADT Installer script, you need to unpack the tarball.
You can unpack the tarball in any directory you wish. For example, this
command copies the ADT Installer tarball from where it was built into
the home directory and then unpacks the tarball into a top-level
directory named ``adt-installer``::

   $ cd ~
   $ cp poky/build/tmp/deploy/sdk/adt_installer.tar.bz2 $HOME
   $ tar -xjf adt_installer.tar.bz2

Unpacking it creates the directory ``adt-installer``, which contains
the ADT Installer script (``adt_installer``) and its configuration file
(``adt_installer.conf``).
Before you run the script, however, you should examine the ADT Installer
configuration file and be sure you are going to get what you want. Your
configurations determine which kernel and filesystem image are
downloaded.
The following list describes the configurations you can define for the
ADT Installer. For configuration values and restrictions, see the
comments in the ``adt-installer.conf`` file:
- ``YOCTOADT_REPO``: This area includes the IPKG-based packages and the
root filesystem upon which the installation is based. If you want to
set up your own IPKG repository pointed to by ``YOCTOADT_REPO``, you
need to be sure that the directory structure follows the same layout
as the reference directory set up at
http://adtrepo.yoctoproject.org. Also, your repository needs
to be accessible through HTTP.
- ``YOCTOADT_TARGETS``: The machine target architectures for which you
want to set up cross-development environments.
- ``YOCTOADT_QEMU``: Indicates whether or not to install the emulator
QEMU.
- ``YOCTOADT_NFS_UTIL``: Indicates whether or not to install user-mode
NFS. If you plan to use the Eclipse IDE Yocto plug-in against QEMU,
you should install NFS.
.. note::

   To boot QEMU images using our userspace NFS server, you need to be
   running ``portmap`` or ``rpcbind``. If you are running ``rpcbind``,
   you will also need to add the ``-i`` option when ``rpcbind`` starts
   up. Please make sure you understand the security implications of
   doing this. You might also have to modify your firewall settings to
   allow NFS booting to work.
- ``YOCTOADT_ROOTFS_``\ arch: The root filesystem images you want to
download from the ``YOCTOADT_IPKG_REPO`` repository.
- ``YOCTOADT_TARGET_SYSROOT_IMAGE_``\ arch: The particular root
filesystem used to extract and create the target sysroot. The value
of this variable must have been specified with
``YOCTOADT_ROOTFS_``\ arch. For example, if you downloaded both
``minimal`` and ``sato-sdk`` images by setting
``YOCTOADT_ROOTFS_``\ arch to "minimal sato-sdk", then
``YOCTOADT_TARGET_SYSROOT_IMAGE_``\ arch must be set to either
"minimal" or "sato-sdk".
- ``YOCTOADT_TARGET_SYSROOT_LOC_``\ arch: The location on the
development host where the target sysroot is created.
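Taken together, the settings above form a shell-style configuration
fragment. A sketch for a single hypothetical x86 target, in which every
value is an illustrative assumption rather than a value prescribed by
this manual:

```shell
# Illustrative adt_installer.conf fragment; all values are placeholders.
YOCTOADT_REPO="http://adtrepo.yoctoproject.org"
YOCTOADT_TARGETS="x86"
YOCTOADT_QEMU="Y"
YOCTOADT_NFS_UTIL="Y"
YOCTOADT_ROOTFS_x86="minimal sato-sdk"
YOCTOADT_TARGET_SYSROOT_IMAGE_x86="sato-sdk"
YOCTOADT_TARGET_SYSROOT_LOC_x86="$HOME/test-yocto/x86"
```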
After you have configured the ``adt_installer.conf`` file, run the
installer using the following command::

   $ cd adt-installer
   $ ./adt_installer

Once the installer begins to run, you are asked to enter
the location for cross-toolchain installation. The default location is
``/opt/poky/``\ release. After either accepting the default location or
selecting your own location, you are prompted to run the installation
script interactively or in silent mode. If you want to closely monitor
the installation, choose "I" for interactive mode rather than "S" for
silent mode. Follow the prompts from the script to complete the
installation.
Once the installation completes, the ADT, which includes the
cross-toolchain, is installed in the selected installation directory.
You will notice environment setup files for the cross-toolchain in the
installation directory and image tarballs in the ``adt-installer``
directory, according to your installer configurations. The target
sysroot is located according to the ``YOCTOADT_TARGET_SYSROOT_LOC_``\ arch
variable, also set in your configuration file.
.. _using-an-existing-toolchain-tarball:
Using a Cross-Toolchain Tarball
-------------------------------
If you want to simply install a cross-toolchain by hand, you can do so
by running the toolchain installer. The installer includes the pre-built
cross-toolchain, the ``runqemu`` script, and support files. If you use
this method to install the cross-toolchain, you might still need to
install the target sysroot by installing and extracting it separately.
For information on how to install the sysroot, see the "`Extracting the
Root Filesystem <#extracting-the-root-filesystem>`__" section.
Follow these steps:
1. *Get your toolchain installer using one of the following methods:*
- Go to the `toolchain download area <&YOCTO_TOOLCHAIN_DL_URL;>`__ and
find the folder that matches your host development system (i.e.
``i686`` for 32-bit machines or ``x86_64`` for 64-bit machines).
Go into that folder and download the toolchain installer whose
name includes the appropriate target architecture. The toolchains
provided by the Yocto Project are based off of the
``core-image-sato`` image and contain libraries appropriate for
developing against that image. For example, if your host
development system is a 64-bit x86 system and you are going to use
your cross-toolchain for a 32-bit x86 target, go into the
``x86_64`` folder and download the following installer::

   poky-glibc-x86_64-core-image-sato-i586-toolchain-DISTRO.sh
- Build your own toolchain installer. For cases where you cannot use
an installer from the download area, you can build your own as
described in the "`Optionally Building a Toolchain
Installer <#optionally-building-a-toolchain-installer>`__"
section.
2. *Once you have the installer, run it to install the toolchain:*

   .. note::

      You must change the permissions on the toolchain installer script
      so that it is executable.

   The following command shows how to run the installer given a
   toolchain tarball for a 64-bit x86 development host system and a
   32-bit x86 target architecture. The example assumes the toolchain
   installer is located in ``~/Downloads/``::

      $ ~/Downloads/poky-glibc-x86_64-core-image-sato-i586-toolchain-DISTRO.sh
The first thing the installer prompts you for is the directory into
which you want to install the toolchain. The default directory used
is ``/opt/poky/DISTRO``. If you do not have write permissions for the
directory into which you are installing the toolchain, the toolchain
installer notifies you and exits. Be sure you have write permissions
in the directory and run the installer again.
When the script finishes, the cross-toolchain is installed. You will
notice environment setup files for the cross-toolchain in the
installation directory.
.. _using-the-toolchain-from-within-the-build-tree:
Using BitBake and the Build Directory
-------------------------------------
A final way of making the cross-toolchain available is to use BitBake to
generate the toolchain within an existing `Build
Directory <&YOCTO_DOCS_DEV_URL;#build-directory>`__. This method does
not install the toolchain into the default ``/opt`` directory. As with
the previous method, if you need to install the target sysroot, you must
do that separately as well.
Follow these steps to generate the toolchain into the Build Directory:
1. *Set up the Build Environment:* Source the OpenEmbedded build
   environment setup script (i.e.
   ```oe-init-build-env`` <&YOCTO_DOCS_REF_URL;#structure-core-script>`__
   or
   ```oe-init-build-env-memres`` <&YOCTO_DOCS_REF_URL;#structure-memres-core-script>`__)
   located in the `Source
   Directory <&YOCTO_DOCS_DEV_URL;#source-directory>`__.
2. *Check your Local Configuration File:* At this point, you should be
sure that the ```MACHINE`` <&YOCTO_DOCS_REF_URL;#var-MACHINE>`__
variable in the ``local.conf`` file found in the ``conf`` directory
of the Build Directory is set for the target architecture. Comments
within the ``local.conf`` file list the values you can use for the
``MACHINE`` variable. If you do not change the ``MACHINE`` variable,
the OpenEmbedded build system uses ``qemux86`` as the default target
machine when building the cross-toolchain.
.. note::

   You can populate the Build Directory with the cross-toolchains for
   more than a single architecture. You just need to edit the
   ``MACHINE`` variable in the ``local.conf`` file and re-run the
   ``bitbake`` command.
3. *Make Sure Your Layers are Enabled:* Examine the
``conf/bblayers.conf`` file and make sure that you have enabled all
the compatible layers for your target machine. The OpenEmbedded build
system needs to be aware of each layer you want included when
building images and cross-toolchains. For information on how to
enable a layer, see the "`Enabling Your
Layer <&YOCTO_DOCS_DEV_URL;#enabling-your-layer>`__" section in the
Yocto Project Development Manual.
4. *Generate the Cross-Toolchain:* Run ``bitbake meta-ide-support`` to
complete the cross-toolchain generation. Once the ``bitbake`` command
finishes, the cross-toolchain is generated and populated within the
Build Directory. You will notice environment setup files for the
cross-toolchain that contain the string "``environment-setup``" in
the Build Directory's ``tmp`` folder.
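As a quick way to confirm the generation step worked, you can look for the "environment-setup" files the text mentions. A minimal sketch, assuming the ``build/tmp`` layout described above:

```shell
# List cross-toolchain environment setup scripts in a directory.
# The "build/tmp" path below is an assumption from the text; point it
# at your actual Build Directory.
find_setup_scripts() {
    find "$1" -maxdepth 1 -name 'environment-setup-*' 2>/dev/null || true
}

find_setup_scripts build/tmp
```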
Be aware that when you use this method to install the toolchain, you
still need to separately extract and install the sysroot filesystem.
For information on how to do this, see the "`Extracting the Root
Filesystem <#extracting-the-root-filesystem>`__" section.
Setting Up the Cross-Development Environment
============================================
Before you can develop using the cross-toolchain, you need to set up the
cross-development environment by sourcing the toolchain's environment
setup script. If you used the ADT Installer or hand-installed
cross-toolchain, then you can find this script in the directory you
chose for installation. For this release, the default installation
directory is ``/opt/poky/DISTRO``. If you installed the toolchain in the
`Build Directory <&YOCTO_DOCS_DEV_URL;#build-directory>`__, you can find
the environment setup script for the toolchain in the Build Directory's
``tmp`` directory.
Be sure to run the environment setup script that matches the
architecture for which you are developing. Environment setup scripts
begin with the string "``environment-setup``" and include as part of
their name the architecture. For example, the toolchain environment
setup script for a 64-bit IA-based architecture installed in the default
installation directory would be the following::

   YOCTO_ADTPATH_DIR/environment-setup-x86_64-poky-linux

When you run the setup script, many environment variables are defined:
-  ```SDKTARGETSYSROOT`` <&YOCTO_DOCS_REF_URL;#var-SDKTARGETSYSROOT>`__ -
   The path to the sysroot used for cross-compilation

-  ```PKG_CONFIG_PATH`` <&YOCTO_DOCS_REF_URL;#var-PKG_CONFIG_PATH>`__ -
   The path to the target pkg-config files

-  ```CONFIG_SITE`` <&YOCTO_DOCS_REF_URL;#var-CONFIG_SITE>`__ - A GNU
   autoconf site file preconfigured for the target

-  ```CC`` <&YOCTO_DOCS_REF_URL;#var-CC>`__ - The minimal command and
   arguments to run the C compiler

-  ```CXX`` <&YOCTO_DOCS_REF_URL;#var-CXX>`__ - The minimal command and
   arguments to run the C++ compiler

-  ```CPP`` <&YOCTO_DOCS_REF_URL;#var-CPP>`__ - The minimal command and
   arguments to run the C preprocessor

-  ```AS`` <&YOCTO_DOCS_REF_URL;#var-AS>`__ - The minimal command and
   arguments to run the assembler

-  ```LD`` <&YOCTO_DOCS_REF_URL;#var-LD>`__ - The minimal command and
   arguments to run the linker

-  ```GDB`` <&YOCTO_DOCS_REF_URL;#var-GDB>`__ - The minimal command and
   arguments to run the GNU Debugger

-  ```STRIP`` <&YOCTO_DOCS_REF_URL;#var-STRIP>`__ - The minimal command
   and arguments to run 'strip', which strips symbols

-  ```RANLIB`` <&YOCTO_DOCS_REF_URL;#var-RANLIB>`__ - The minimal
   command and arguments to run 'ranlib'

-  ```OBJCOPY`` <&YOCTO_DOCS_REF_URL;#var-OBJCOPY>`__ - The minimal
   command and arguments to run 'objcopy'

-  ```OBJDUMP`` <&YOCTO_DOCS_REF_URL;#var-OBJDUMP>`__ - The minimal
   command and arguments to run 'objdump'

-  ```AR`` <&YOCTO_DOCS_REF_URL;#var-AR>`__ - The minimal command and
   arguments to run 'ar'

-  ```NM`` <&YOCTO_DOCS_REF_URL;#var-NM>`__ - The minimal command and
   arguments to run 'nm'

-  ```TARGET_PREFIX`` <&YOCTO_DOCS_REF_URL;#var-TARGET_PREFIX>`__ - The
   toolchain binary prefix for the target tools

-  ```CROSS_COMPILE`` <&YOCTO_DOCS_REF_URL;#var-CROSS_COMPILE>`__ - The
   toolchain binary prefix for the target tools

-  ```CONFIGURE_FLAGS`` <&YOCTO_DOCS_REF_URL;#var-CONFIGURE_FLAGS>`__ -
   The minimal arguments for GNU configure

-  ```CFLAGS`` <&YOCTO_DOCS_REF_URL;#var-CFLAGS>`__ - Suggested C flags

-  ```CXXFLAGS`` <&YOCTO_DOCS_REF_URL;#var-CXXFLAGS>`__ - Suggested C++
   flags

-  ```LDFLAGS`` <&YOCTO_DOCS_REF_URL;#var-LDFLAGS>`__ - Suggested
   linker flags when you use CC to link

-  ```CPPFLAGS`` <&YOCTO_DOCS_REF_URL;#var-CPPFLAGS>`__ - Suggested
   preprocessor flags
Securing Kernel and Filesystem Images
=====================================
You will need to have a kernel and filesystem image to boot using your
hardware or the QEMU emulator. Furthermore, if you plan on booting your
image using NFS or you want to use the root filesystem as the target
sysroot, you need to extract the root filesystem.
Getting the Images
------------------
To get the kernel and filesystem images, you either have to build them
or download pre-built versions. For an example of how to build these
images, see the "`Building
Images <&YOCTO_DOCS_QS_URL;#qs-buiding-images>`__" section of the Yocto
Project Quick Start. For an example of downloading pre-built versions,
see the "`Example Using Pre-Built Binaries and
QEMU <#using-pre-built>`__" section.
The Yocto Project ships basic kernel and filesystem images for several
architectures (``x86``, ``x86-64``, ``mips``, ``powerpc``, and ``arm``)
that you can use unaltered in the QEMU emulator. These kernel images
reside in the release area at ` <&YOCTO_MACHINES_DL_URL;>`__ and are
ideal for experimentation using the Yocto Project. For information on the
image types you can build using the OpenEmbedded build system, see the
"`Images <&YOCTO_DOCS_REF_URL;#ref-images>`__" chapter in the Yocto
Project Reference Manual.
If you are planning on developing against your image and you are not
building or using one of the Yocto Project development images (e.g.
``core-image-*-dev``), you must be sure to include the development
packages as part of your image recipe.
If you plan on remotely deploying and debugging your application from
within the Eclipse IDE, you must have an image that contains the Yocto
Target Communication Framework (TCF) agent (``tcf-agent``). You can do
this by including the ``eclipse-debug`` image feature.
.. note::

   See the "Image Features" section in the Yocto Project Reference
   Manual for information on image features.
To include the ``eclipse-debug`` image feature, modify your
``local.conf`` file in the `Build
Directory <&YOCTO_DOCS_DEV_URL;#build-directory>`__ so that the
```EXTRA_IMAGE_FEATURES`` <&YOCTO_DOCS_REF_URL;#var-EXTRA_IMAGE_FEATURES>`__
variable includes the "eclipse-debug" feature. After modifying the
configuration file, you can rebuild the image. Once the image is
rebuilt, the ``tcf-agent`` will be included in the image and is launched
automatically after the boot.
Extracting the Root Filesystem
------------------------------
If you install your toolchain by hand or build it using BitBake and you
need a root filesystem, you need to extract it separately. If you use
the ADT Installer to install the ADT, the root filesystem is
automatically extracted and installed.
Here are some cases where you need to extract the root filesystem:
- You want to boot the image using NFS.
- You want to use the root filesystem as the target sysroot. For
example, the Eclipse IDE environment with the Eclipse Yocto Plug-in
installed allows you to use QEMU to boot under NFS.
- You want to develop your target application using the root filesystem
as the target sysroot.
To extract the root filesystem, first ``source`` the cross-development
environment setup script to establish necessary environment variables.
If you built the toolchain in the Build Directory, you will find the
toolchain environment script in the ``tmp`` directory. If you installed
the toolchain by hand, the environment setup script is located in
``/opt/poky/DISTRO``.
After sourcing the environment script, use the ``runqemu-extract-sdk``
command and provide the filesystem image.

Following is an example. The second command sets up the environment. In
this case, the setup script is located in the ``/opt/poky/DISTRO``
directory. The third command extracts the root filesystem from a
previously built filesystem that is located in the ``~/Downloads``
directory. Furthermore, this command extracts the root filesystem into
the ``qemux86-sato`` directory::

   $ cd ~
   $ source /opt/poky/DISTRO/environment-setup-i586-poky-linux
   $ runqemu-extract-sdk \
        ~/Downloads/core-image-sato-sdk-qemux86-2011091411831.rootfs.tar.bz2 \
        $HOME/qemux86-sato

You could now point to the target sysroot at ``qemux86-sato``.
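A quick sanity check after extraction can be sketched as a helper. The directory names checked are the standard top-level entries of a Linux root filesystem; the helper is illustrative and not part of ``runqemu-extract-sdk``:

```shell
# Report whether DIR looks like an extracted root filesystem by
# checking for a few standard top-level entries.
looks_like_rootfs() {
    for entry in bin etc usr; do
        [ -e "$1/$entry" ] || { echo no; return 0; }
    done
    echo yes
}

looks_like_rootfs "$HOME/qemux86-sato"   # example target from above
```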
Optionally Building a Toolchain Installer
=========================================
As an alternative to locating and downloading a toolchain installer, you
can build the toolchain installer if you have a `Build
Directory <&YOCTO_DOCS_DEV_URL;#build-directory>`__.
.. note::

   Although not the preferred method, it is also possible to use
   ``bitbake meta-toolchain`` to build the toolchain installer. If you
   do use this method, you must separately install and extract the
   target sysroot. For information on how to install the sysroot, see
   the "`Extracting the Root
   Filesystem <#extracting-the-root-filesystem>`__" section.
To build the toolchain installer and populate the SDK image, use the
following command::

   $ bitbake image -c populate_sdk

The command results in a toolchain installer that contains the sysroot
that matches your target root filesystem.
Another powerful feature is that the toolchain is completely
self-contained. The binaries are linked against their own copy of
``libc``, which results in no dependencies on the target system. To
achieve this, the pointer to the dynamic loader is configured at install
time since that path cannot be dynamically altered. This is the reason
for a wrapper around the ``populate_sdk`` archive.
Another feature is that only one set of cross-canadian toolchain
binaries is produced per architecture. This feature takes advantage of
the fact that the target hardware can be passed to ``gcc`` as a set of
compiler options. Those options are set up by the environment script and
contained in variables such as ```CC`` <&YOCTO_DOCS_REF_URL;#var-CC>`__
and ```LD`` <&YOCTO_DOCS_REF_URL;#var-LD>`__. This reduces the space
needed for the tools. Understand, however, that a sysroot is still
needed for every target since those binaries are target-specific.
Remember, before using any BitBake command, you must source the build
environment setup script (i.e.
```oe-init-build-env`` <&YOCTO_DOCS_REF_URL;#structure-core-script>`__ or
```oe-init-build-env-memres`` <&YOCTO_DOCS_REF_URL;#structure-memres-core-script>`__)
located in the Source Directory and you must make sure your
``conf/local.conf`` variables are correct. In particular, you need to be
sure the ```MACHINE`` <&YOCTO_DOCS_REF_URL;#var-MACHINE>`__ variable
matches the architecture for which you are building and that the
```SDKMACHINE`` <&YOCTO_DOCS_REF_URL;#var-SDKMACHINE>`__ variable is
correctly set if you are building a toolchain designed to run on an
architecture that differs from your current development host machine
(i.e. the build machine).
When the ``bitbake`` command completes, the toolchain installer will be
in ``tmp/deploy/sdk`` in the Build Directory.
.. note::

   By default, this toolchain does not build static binaries. If you
   want to use the toolchain to build these types of libraries, you
   need to be sure your image has the appropriate static development
   libraries. Use the ``IMAGE_INSTALL`` variable inside your
   ``local.conf`` file to install the appropriate library packages.
   Following is an example using ``glibc`` static development
   libraries::

      IMAGE_INSTALL_append = " glibc-staticdev"
Optionally Using an External Toolchain
======================================
You might want to use an external toolchain as part of your development.
If this is the case, the fundamental steps you need to accomplish are as
follows:
- Understand where the installed toolchain resides. For cases where you
need to build the external toolchain, you would need to take separate
steps to build and install the toolchain.
- Make sure you add the layer that contains the toolchain to your
``bblayers.conf`` file through the
```BBLAYERS`` <&YOCTO_DOCS_REF_URL;#var-BBLAYERS>`__ variable.
- Set the
```EXTERNAL_TOOLCHAIN`` <&YOCTO_DOCS_REF_URL;#var-EXTERNAL_TOOLCHAIN>`__
variable in your ``local.conf`` file to the location in which you
installed the toolchain.
A good example of an external toolchain used with the Yocto Project is
the Mentor Graphics Sourcery G++ Toolchain. You can see information on
how to use that particular layer in the ``README`` file at
http://github.com/MentorEmbedded/meta-sourcery/. You can find
further information by reading about the
```TCMODE`` <&YOCTO_DOCS_REF_URL;#var-TCMODE>`__ variable in the Yocto
Project Reference Manual's variable glossary.
.. _using-pre-built:
Example Using Pre-Built Binaries and QEMU
=========================================
If hardware, libraries and services are stable, you can get started by
using a pre-built binary of the filesystem image, kernel, and toolchain
and run it using the QEMU emulator. This scenario is useful for
developing application software.
|Using a Pre-Built Image|
For this scenario, you need to do several things:
- Install the appropriate stand-alone toolchain tarball.
- Download the pre-built image that will boot with QEMU. You need to be
  sure to get the QEMU image that matches your target machine's
  architecture (e.g. x86, ARM, etc.).
- Download the filesystem image for your target machine's architecture.
- Set up the environment to emulate the hardware and then start the
QEMU emulator.
Installing the Toolchain
------------------------
You can download a tarball installer, which includes the pre-built
toolchain, the ``runqemu`` script, and support files from the
appropriate directory under ` <&YOCTO_TOOLCHAIN_DL_URL;>`__. Toolchains
are available for 32-bit and 64-bit x86 development systems from the
``i686`` and ``x86_64`` directories, respectively. The toolchains the
Yocto Project provides are based off the ``core-image-sato`` image and
contain libraries appropriate for developing against that image. Each
type of development system supports five or more target architectures.
The names of the tarball installer scripts are such that a string
representing the host system appears first in the filename and then is
immediately followed by a string representing the target architecture::

   poky-glibc-host_system-image_type-arch-toolchain-release_version.sh

Where:

-  *host_system* is a string representing your development system:
   ``i686`` or ``x86_64``.

-  *image_type* is a string representing the image you wish to develop
   a Software Development Toolkit (SDK) for use against. The Yocto
   Project builds toolchain installers using the following BitBake
   command::

      $ bitbake core-image-sato -c populate_sdk

-  *arch* is a string representing the tuned target architecture:
   ``i586``, ``x86_64``, ``powerpc``, ``mips``, ``armv7a`` or
   ``armv5te``.

-  *release_version* is a string representing the release number of the
   Yocto Project: DISTRO, DISTRO+snapshot
For example, the following toolchain installer is for a 64-bit
development host system and an i586-tuned target architecture based off
the SDK for ``core-image-sato``::

   poky-glibc-x86_64-core-image-sato-i586-toolchain-DISTRO.sh
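The naming convention can be checked mechanically. The following sketch pulls the components back out of the example filename using POSIX parameter expansion; the fixed ``poky-glibc-`` prefix is assumed, as in the pattern above:

```shell
# Deconstruct a toolchain installer filename into its components.
installer="poky-glibc-x86_64-core-image-sato-i586-toolchain-DISTRO.sh"

rest=${installer#poky-glibc-}     # x86_64-core-image-sato-i586-toolchain-DISTRO.sh
host_system=${rest%%-*}           # x86_64
rest=${rest#*-}                   # core-image-sato-i586-toolchain-DISTRO.sh
rest=${rest%.sh}                  # core-image-sato-i586-toolchain-DISTRO
release_version=${rest##*-}       # DISTRO
rest=${rest%-toolchain-*}         # core-image-sato-i586
arch=${rest##*-}                  # i586
image_type=${rest%-"$arch"}       # core-image-sato

echo "$host_system $image_type $arch $release_version"
# x86_64 core-image-sato i586 DISTRO
```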
Toolchains are self-contained and by default are installed into
``/opt/poky``. However, when you run the toolchain installer, you can
choose an installation directory.
The following command shows how to run the installer given a toolchain
tarball for a 64-bit x86 development host system and a 32-bit x86 target
architecture. You must change the permissions on the toolchain installer
script so that it is executable.
The example assumes the toolchain installer is located in
``~/Downloads/``.
.. note::

   If you do not have write permissions for the directory into which
   you are installing the toolchain, the toolchain installer notifies
   you and exits. Be sure you have write permissions in the directory
   and run the installer again.

::

   $ ~/Downloads/poky-glibc-x86_64-core-image-sato-i586-toolchain-DISTRO.sh
For more information on how to install tarballs, see the "`Using a
Cross-Toolchain
Tarball <&YOCTO_DOCS_ADT_URL;#using-an-existing-toolchain-tarball>`__"
and "`Using BitBake and the Build
Directory <&YOCTO_DOCS_ADT_URL;#using-the-toolchain-from-within-the-build-tree>`__"
sections in the Yocto Project Application Developer's Guide.
Downloading the Pre-Built Linux Kernel
--------------------------------------
You can download the pre-built Linux kernel suitable for running in the
QEMU emulator from ` <&YOCTO_QEMU_DL_URL;>`__. Be sure to use the kernel
that matches the architecture you want to simulate. Download areas exist
for the five supported machine architectures: ``qemuarm``, ``qemumips``,
``qemuppc``, ``qemux86``, and ``qemux86-64``.
Most kernel files have one of the following forms::

   *zImage-qemuarch.bin
   vmlinux-qemuarch.bin

Where *arch* is a string representing the target architecture: x86,
x86-64, ppc, mips, or arm.
You can learn more about downloading a Yocto Project kernel in the
"`Yocto Project Kernel <&YOCTO_DOCS_DEV_URL;#local-kernel-files>`__"
bulleted item in the Yocto Project Development Manual.
Downloading the Filesystem
--------------------------
You can also download the filesystem image suitable for your target
architecture from ` <&YOCTO_QEMU_DL_URL;>`__. Again, be sure to use the
filesystem that matches the architecture you want to simulate.
The filesystem image has two tarball forms: ``ext3`` and ``tar``. You
must use the ``ext3`` form when booting an image using the QEMU
emulator. The ``tar`` form can be flattened out in your host development
system and used for build purposes with the Yocto Project.
::

   core-image-profile-qemuarch.ext3
   core-image-profile-qemuarch.tar.bz2

Where:

-  *profile* is the filesystem image's profile: lsb, lsb-dev, lsb-sdk,
   lsb-qt3, minimal, minimal-dev, sato, sato-dev, or sato-sdk. For
   information on these types of image profiles, see the
   "`Images <&YOCTO_DOCS_REF_URL;#ref-images>`__" chapter in the Yocto
   Project Reference Manual.

-  *arch* is a string representing the target architecture: x86,
   x86-64, ppc, mips, or arm.
Setting Up the Environment and Starting the QEMU Emulator
---------------------------------------------------------
Before you start the QEMU emulator, you need to set up the emulation
environment. The following command form sets up the emulation
environment::

   $ source YOCTO_ADTPATH_DIR/environment-setup-arch-poky-linux-if

Where:

-  *arch* is a string representing the target architecture: i586,
   x86_64, ppc603e, mips, or armv5te.

-  *if* is a string representing an embedded application binary
   interface. Not all setup scripts include this string.

Finally, this command form invokes the QEMU emulator::

   $ runqemu qemuarch kernel-image filesystem-image

Where:

-  *qemuarch* is a string representing the target architecture:
   qemux86, qemux86-64, qemuppc, qemumips, or qemuarm.

-  *kernel-image* is the architecture-specific kernel image.

-  *filesystem-image* is the ``.ext3`` filesystem image.
Continuing with the example, the following commands set up the
emulation environment and launch QEMU. This example assumes the root
filesystem (``.ext3`` file) and the pre-built kernel image file both
reside in your home directory. The kernel and filesystem are for a
32-bit target architecture::

   $ cd $HOME
   $ source YOCTO_ADTPATH_DIR/environment-setup-i586-poky-linux
   $ runqemu qemux86 bzImage-qemux86.bin \
        core-image-sato-qemux86.ext3
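The file names in the example follow directly from the QEMU machine string, so they can be derived rather than typed. A small sketch, with the naming pattern taken from the example above (no files are touched):

```shell
# Derive the example kernel and filesystem names for a QEMU machine.
qemuarch=qemux86
kernel_image="bzImage-${qemuarch}.bin"
fs_image="core-image-sato-${qemuarch}.ext3"

echo "runqemu $qemuarch $kernel_image $fs_image"
# runqemu qemux86 bzImage-qemux86.bin core-image-sato-qemux86.ext3
```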
The environment in which QEMU launches varies depending on the
filesystem image and on the target architecture. For example, if you
source the environment for the ARM target architecture and then boot the
minimal QEMU image, the emulator comes up in a new shell in command-line
mode. However, if you boot the SDK image, QEMU comes up with a GUI.
.. note::
Booting the PPC image results in QEMU launching in the same shell in
command-line mode.
.. |Using a Pre-Built Image| image:: figures/using-a-pre-built-image.png

=========================
Yocto Project Quick Build
=========================
Welcome!
========
Welcome! This short document steps you through the process for a typical
image build using the Yocto Project. The document also introduces how to
configure a build for specific hardware. You will use Yocto Project to
build a reference embedded OS called Poky.
.. note::

   -  The examples in this paper assume you are using a native Linux
      system running a recent Ubuntu Linux distribution. If the machine
      you want to use the Yocto Project on to build an image (`build
      host <&YOCTO_DOCS_REF_URL;#hardware-build-system-term>`__) is not
      a native Linux system, you can still perform these steps by using
      CROss PlatformS (CROPS) and setting up a Poky container. See the
      "`Setting Up to Use CROss PlatformS
      (CROPS) <&YOCTO_DOCS_DEV_URL;#setting-up-to-use-crops>`__" section
      in the Yocto Project Development Tasks Manual for more
      information.

   -  You may use Windows Subsystem For Linux v2 to set up a build host
      using Windows 10.

      .. note::

         The Yocto Project is not compatible with WSLv1. It is
         compatible with, but not officially supported or validated on,
         WSLv2. If you still decide to use WSL, please upgrade to
         WSLv2.

      See the "`Setting Up to Use Windows Subsystem For
      Linux <&YOCTO_DOCS_DEV_URL;#setting-up-to-use-wsl>`__" section in
      the Yocto Project Development Tasks Manual for more information.
If you want more conceptual or background information on the Yocto
Project, see the `Yocto Project Overview and Concepts
Manual <&YOCTO_DOCS_OM_URL;>`__.
Compatible Linux Distribution
=============================
Make sure your `build
host <&YOCTO_DOCS_REF_URL;#hardware-build-system-term>`__ meets the
following requirements:
- 50 Gbytes of free disk space
- Runs a supported Linux distribution (i.e. recent releases of Fedora,
openSUSE, CentOS, Debian, or Ubuntu). For a list of Linux
distributions that support the Yocto Project, see the "`Supported
Linux
Distributions <&YOCTO_DOCS_REF_URL;#detailed-supported-distros>`__"
section in the Yocto Project Reference Manual. For detailed
information on preparing your build host, see the "`Preparing the
Build Host <&YOCTO_DOCS_DEV_URL;#dev-preparing-the-build-host>`__"
section in the Yocto Project Development Tasks Manual.
-  Git 1.8.3.1 or greater

-  tar 1.28 or greater

-  Python 3.5.0 or greater

-  gcc 5.0 or greater

If your build host does not meet any of these listed version
requirements, you can take steps to prepare the system so that you
can still use the Yocto Project. See the "`Required Git, tar, Python
and gcc
Versions <&YOCTO_DOCS_REF_URL;#required-git-tar-python-and-gcc-versions>`__"
section in the Yocto Project Reference Manual for information.
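To see where your host stands against these minimums, you can print the installed versions. This is a convenience sketch only; it does not parse or compare the version numbers:

```shell
# Print the first version line of each required tool, skipping any
# that are not installed.
for tool in git tar python3 gcc; do
    if command -v "$tool" >/dev/null 2>&1; then
        "$tool" --version | head -n 1
    fi
done
```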
Build Host Packages
===================
You must install essential host packages on your build host. The
following command installs the host packages based on an Ubuntu
distribution:
.. note::

   For host package requirements on all supported Linux distributions,
   see the "Required Packages for the Build Host" section in the Yocto
   Project Reference Manual.

::

   $ sudo apt-get install UBUNTU_HOST_PACKAGES_ESSENTIAL
Use Git to Clone Poky
=====================
Once you complete the setup instructions for your machine, you need to
get a copy of the Poky repository on your build host. Use the following
commands to clone the Poky repository::

   $ git clone git://git.yoctoproject.org/poky
   Cloning into 'poky'...
   remote: Counting objects: 432160, done.
   remote: Compressing objects: 100% (102056/102056), done.
   remote: Total 432160 (delta 323116), reused 432037 (delta 323000)
   Receiving objects: 100% (432160/432160), 153.81 MiB | 8.54 MiB/s, done.
   Resolving deltas: 100% (323116/323116), done.
   Checking connectivity... done.

Move to the ``poky`` directory and take a look at the tags::

   $ cd poky
   $ git fetch --tags
   $ git tag
   1.1_M1.final
   1.1_M1.rc1
   1.1_M1.rc2
   1.1_M2.final
   1.1_M2.rc1
   .
   .
   .
   yocto-2.5
   yocto-2.5.1
   yocto-2.5.2
   yocto-2.6
   yocto-2.6.1
   yocto-2.6.2
   yocto-2.7
   yocto_1.5_M5.rc8

For this example, check out the branch based on the DISTRO_REL_TAG
release::

   $ git checkout tags/DISTRO_REL_TAG -b my-DISTRO_REL_TAG
   Switched to a new branch 'my-DISTRO_REL_TAG'

The previous Git checkout command creates a local branch named
my-DISTRO_REL_TAG. The files available to you in that branch exactly
match the repository's files in the "DISTRO_NAME_NO_CAP" development
branch at the time of the Yocto Project DISTRO_REL_TAG release.
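You can confirm that the checkout landed on the new branch. A minimal sketch wrapping the check in a helper; ``git symbolic-ref`` reports the current branch even before any local commits exist:

```shell
# Print the branch currently checked out in the repository at DIR.
current_branch() {
    git -C "$1" symbolic-ref --short HEAD
}

current_branch poky 2>/dev/null || echo "not inside a poky checkout"
```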
Building Your Image
===================
Use the following steps to build your image. The build process creates
an entire Linux distribution, including the toolchain, from source.
.. note::

   -  If you are working behind a firewall and your build host is not
      set up for proxies, you could encounter problems with the build
      process when fetching source code (e.g. fetcher failures or Git
      failures).

   -  If you do not know your proxy settings, consult your local network
      infrastructure resources and get that information. A good starting
      point could also be to check your web browser settings. Finally,
      you can find more information on the "`Working Behind a Network
      Proxy <https://wiki.yoctoproject.org/wiki/Working_Behind_a_Network_Proxy>`__"
      page of the Yocto Project Wiki.
1. *Initialize the Build Environment:* From within the ``poky``
   directory, run the
   ```oe-init-build-env`` <&YOCTO_DOCS_REF_URL;#structure-core-script>`__
   environment setup script to define Yocto Project's build environment
   on your build host::

      $ cd ~/poky
      $ source OE_INIT_FILE
      You had no conf/local.conf file. This configuration file has therefore been
      created for you with some default values. You may wish to edit it to, for
      example, select a different MACHINE (target hardware). See conf/local.conf
      for more information as common configuration options are commented.

      You had no conf/bblayers.conf file. This configuration file has therefore
      been created for you with some default values. To add additional metadata
      layers into your configuration please add entries to conf/bblayers.conf.

      The Yocto Project has extensive documentation about OE including a
      reference manual which can be found at:
          http://yoctoproject.org/documentation

      For more information about OpenEmbedded see their website:
          http://www.openembedded.org/

      ### Shell environment set up for builds. ###

      You can now run 'bitbake <target>'

      Common targets are:
          core-image-minimal
          core-image-sato
          meta-toolchain
          meta-ide-support

      You can also run generated qemu images with a command like
      'runqemu qemux86-64'

   Among other things, the script creates the `Build
   Directory <&YOCTO_DOCS_REF_URL;#build-directory>`__, which is
   ``build`` in this case and is located in the `Source
   Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__. After the
   script runs, your current working directory is set to the Build
   Directory. Later, when the build completes, the Build Directory
   contains all the files created during the build.
2. *Examine Your Local Configuration File:* When you set up the build
environment, a local configuration file named ``local.conf`` becomes
available in a ``conf`` subdirectory of the Build Directory. For this
example, the defaults are set to build for a ``qemux86`` target,
which is suitable for emulation. The package manager used is set to
the RPM package manager.
   .. tip::

      You can significantly speed up your build and guard against
      fetcher failures by using mirrors. To use mirrors, add these
      lines to your ``local.conf`` file in the Build directory::

         SSTATE_MIRRORS = "\
         file://.* http://sstate.yoctoproject.org/dev/PATH;downloadfilename=PATH \n \
         file://.* http://sstate.yoctoproject.org/YOCTO_DOC_VERSION_MINUS_ONE/PATH;downloadfilename=PATH \n \
         file://.* http://sstate.yoctoproject.org/YOCTO_DOC_VERSION/PATH;downloadfilename=PATH \n \
         "

      The previous examples showed how to add sstate paths for Yocto
      Project YOCTO_DOC_VERSION_MINUS_ONE, YOCTO_DOC_VERSION, and a
      development area. For a complete index of sstate locations, see
      .
3. *Start the Build:* Continue with the following command to build an OS
   image for the target, which is ``core-image-sato`` in this example::

      $ bitbake core-image-sato

   For information on using the ``bitbake`` command, see the
   "`BitBake <&YOCTO_DOCS_OM_URL;#usingpoky-components-bitbake>`__"
   section in the Yocto Project Overview and Concepts Manual, or see the
   "`BitBake
   Command <&YOCTO_DOCS_BB_URL;#bitbake-user-manual-command>`__" section
   in the BitBake User Manual.
4. *Simulate Your Image Using QEMU:* Once this particular image is
   built, you can start QEMU, which is a Quick EMUlator that ships with
   the Yocto Project::

      $ runqemu qemux86-64

   If you want to learn more about running QEMU, see the "`Using the
   Quick EMUlator (QEMU) <&YOCTO_DOCS_DEV_URL;#dev-manual-qemu>`__"
   chapter in the Yocto Project Development Tasks Manual.

5. *Exit QEMU:* Exit QEMU by either clicking on the shutdown icon or by
   typing ``Ctrl-C`` in the QEMU transcript window from which you
   invoked QEMU.
Customizing Your Build for Specific Hardware
============================================
So far, all you have done is quickly built an image suitable for
emulation only. This section shows you how to customize your build for
specific hardware by adding a hardware layer into the Yocto Project
development environment.
In general, layers are repositories that contain related sets of
instructions and configurations that tell the Yocto Project what to do.
Isolating related metadata into functionally specific layers facilitates
modular development and makes it easier to reuse the layer metadata.
.. note::

   By convention, layer names start with the string "meta-".
Follow these steps to add a hardware layer:
1. *Find a Layer:* Lots of hardware layers exist. The Yocto Project
`Source Repositories <&YOCTO_GIT_URL;>`__ has many hardware layers.
This example adds the
`meta-altera <https://github.com/kraj/meta-altera>`__ hardware layer.
2. *Clone the Layer:* Use Git to make a local copy of the layer on your
   machine. You can put the copy in the top level of the copy of the
   Poky repository created earlier::

      $ cd ~/poky
      $ git clone https://github.com/kraj/meta-altera.git
      Cloning into 'meta-altera'...
      remote: Counting objects: 25170, done.
      remote: Compressing objects: 100% (350/350), done.
      remote: Total 25170 (delta 645), reused 719 (delta 538), pack-reused 24219
      Receiving objects: 100% (25170/25170), 41.02 MiB | 1.64 MiB/s, done.
      Resolving deltas: 100% (13385/13385), done.
      Checking connectivity... done.

   The hardware layer now exists with other layers inside the Poky
   reference repository on your build host as ``meta-altera`` and
   contains all the metadata needed to support hardware from Altera,
   which is owned by Intel.
3. *Change the Configuration to Build for a Specific Machine:* The
   `MACHINE <&YOCTO_DOCS_REF_URL;#var-MACHINE>`__ variable in the
   ``local.conf`` file specifies the machine for the build. For this
   example, set the ``MACHINE`` variable to "cyclone5". These
   configurations are used:
   `cyclone5.conf <https://github.com/kraj/meta-altera/blob/master/conf/machine/cyclone5.conf>`__.
.. note::

   See the "Examine Your Local Configuration File" step earlier for
   more information on configuring the build.
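In ``local.conf``, this amounts to a one-line change; a sketch (the
file's default assignment uses the weak ``??=`` operator, so a plain
``=`` assignment takes precedence)::

   MACHINE = "cyclone5"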
4. *Add Your Layer to the Layer Configuration File:* Before you can use
a layer during a build, you must add it to your ``bblayers.conf``
file, which is found in the `Build
Directory's <&YOCTO_DOCS_REF_URL;#build-directory>`__ ``conf``
directory.
Use the ``bitbake-layers add-layer`` command to add the layer to the
configuration file::

   $ cd ~/poky/build
   $ bitbake-layers add-layer ../meta-altera
   NOTE: Starting bitbake server...
   Parsing recipes: 100% |##################################################################| Time: 0:00:32
   Parsing of 918 .bb files complete (0 cached, 918 parsed). 1401 targets, 123 skipped, 0 masked, 0 errors.

You can find more information on adding layers in the "`Adding a Layer
Using the ``bitbake-layers``
Script <&YOCTO_DOCS_DEV_URL;#adding-a-layer-using-the-bitbake-layers-script>`__"
section.
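After the command completes, the layer's path appears in
``conf/bblayers.conf``. A sketch of the resulting entry (the absolute
paths depend on where you cloned Poky; ``/home/user`` is a placeholder)::

   BBLAYERS ?= " \
     /home/user/poky/meta \
     /home/user/poky/meta-poky \
     /home/user/poky/meta-yocto-bsp \
     /home/user/poky/meta-altera \
     "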
Completing these steps has added the ``meta-altera`` layer to your Yocto
Project development environment and configured it to build for the
"cyclone5" machine.
.. note::

   The previous steps are for demonstration purposes only. If you were
   to attempt to build an image for the "cyclone5" machine, you should
   read the Altera ``README``.
Creating Your Own General Layer
===============================
Maybe you have an application or specific set of behaviors you need to
isolate. You can create your own general layer using the
``bitbake-layers create-layer`` command. The tool automates layer
creation by setting up a subdirectory with a ``layer.conf``
configuration file, a ``recipes-example`` subdirectory that contains an
``example.bb`` recipe, a licensing file, and a ``README``.
The following commands run the tool to create a layer named
``meta-mylayer`` in the ``poky`` directory::

   $ cd ~/poky
   $ bitbake-layers create-layer meta-mylayer
   NOTE: Starting bitbake server...
   Add your new layer with 'bitbake-layers add-layer meta-mylayer'

For more information on layers and how to create them, see the
"`Creating a General Layer Using the ``bitbake-layers``
Script <&YOCTO_DOCS_DEV_URL;#creating-a-general-layer-using-the-bitbake-layers-script>`__"
section in the Yocto Project Development Tasks Manual.
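The generated ``conf/layer.conf`` is what wires the new layer into
BitBake. A sketch of its typical contents (exact values and comments
vary by release)::

   # We have a conf and classes directory, add to BBPATH
   BBPATH .= ":${LAYERDIR}"

   # We have recipes-* directories, add to BBFILES
   BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
               ${LAYERDIR}/recipes-*/*/*.bbappend"

   BBFILE_COLLECTIONS += "meta-mylayer"
   BBFILE_PATTERN_meta-mylayer = "^${LAYERDIR}/"
   BBFILE_PRIORITY_meta-mylayer = "6"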
Where To Go Next
================
Now that you have experienced using the Yocto Project, you might be
asking yourself "What now?" The Yocto Project has many sources of
information including the website, wiki pages, and user manuals:
- *Website:* The `Yocto Project Website <&YOCTO_HOME_URL;>`__ provides
background information, the latest builds, breaking news, full
development documentation, and access to a rich Yocto Project
Development Community into which you can tap.
- *Developer Screencast:* The `Getting Started with the Yocto Project -
New Developer Screencast Tutorial <http://vimeo.com/36450321>`__
provides a 30-minute video created for users unfamiliar with the
Yocto Project but familiar with Linux build hosts. While this
screencast is somewhat dated, the introductory and fundamental
concepts are useful for the beginner.
- *Yocto Project Overview and Concepts Manual:* The `Yocto Project
Overview and Concepts Manual <&YOCTO_DOCS_OM_URL;>`__ is a great
place to start to learn about the Yocto Project. This manual
introduces you to the Yocto Project and its development environment.
The manual also provides conceptual information for various aspects
of the Yocto Project.
- *Yocto Project Wiki:* The `Yocto Project Wiki <&YOCTO_WIKI_URL;>`__
provides additional information on where to go next when ramping up
with the Yocto Project, release information, project planning, and QA
information.
- *Yocto Project Mailing Lists:* Related mailing lists provide a forum
for discussion, patch submission and announcements. Several mailing
lists exist and are grouped according to areas of concern. See the
"`Mailing lists <&YOCTO_DOCS_REF_URL;#resources-mailinglist>`__"
section in the Yocto Project Reference Manual for a complete list of
Yocto Project mailing lists.
- *Comprehensive List of Links and Other Documentation:* The "`Links
and Related
Documentation <&YOCTO_DOCS_REF_URL;#resources-links-and-related-documentation>`__"
section in the Yocto Project Reference Manual provides a
comprehensive list of all related links and other user documentation.

=====================================================
Yocto Project Board Support Package Developer's Guide
=====================================================
.. toctree::
:caption: Table of Contents
:numbered:
bsp

******************************************
The Yocto Project Development Tasks Manual
******************************************
.. _dev-welcome:
Welcome
=======
Welcome to the Yocto Project Development Tasks Manual! This manual
provides relevant procedures necessary for developing in the Yocto
Project environment (i.e. developing embedded Linux images and
user-space applications that run on targeted devices). The manual groups
related procedures into higher-level sections. Procedures can consist of
high-level steps or low-level steps depending on the topic.
This manual provides the following:
- Procedures that help you get going with the Yocto Project. For
example, procedures that show you how to set up a build host and work
with the Yocto Project source repositories.
- Procedures that show you how to submit changes to the Yocto Project.
Changes can be improvements, new features, or bug fixes.
- Procedures related to "everyday" tasks you perform while developing
images and applications using the Yocto Project. For example,
procedures to create a layer, customize an image, write a new recipe,
and so forth.
This manual does not provide the following:
- Redundant Step-by-step Instructions: For example, the `Yocto Project
Application Development and the Extensible Software Development Kit
(eSDK) <&YOCTO_DOCS_SDK_URL;>`__ manual contains detailed
instructions on how to install an SDK, which is used to develop
applications for target hardware.
- Reference or Conceptual Material: This type of material resides in an
appropriate reference manual. For example, system variables are
documented in the `Yocto Project Reference
Manual <&YOCTO_DOCS_REF_URL;>`__.
- Detailed Public Information Not Specific to the Yocto Project: For
example, exhaustive information on how to use the Source Control
Manager Git is better covered with Internet searches and official Git
Documentation than through the Yocto Project documentation.
Other Information
=================
Because this manual presents information for many different topics,
supplemental information is recommended for full comprehension. For
introductory information on the Yocto Project, see the `Yocto Project
Website <&YOCTO_HOME_URL;>`__. If you want to build an image with no
knowledge of Yocto Project as a way of quickly testing it out, see the
`Yocto Project Quick Build <&YOCTO_DOCS_BRIEF_URL;>`__ document.
For a comprehensive list of links and other documentation, see the
"`Links and Related
Documentation <&YOCTO_DOCS_REF_URL;#resources-links-and-related-documentation>`__"
section in the Yocto Project Reference Manual.

*******************************
Using the Quick EMUlator (QEMU)
*******************************
The Yocto Project uses an implementation of the Quick EMUlator (QEMU)
Open Source project as part of the Yocto Project development "tool set".
This chapter provides both procedures that show you how to use the Quick
EMUlator (QEMU) and other QEMU information helpful for development
purposes.
.. _qemu-dev-overview:
Overview
========
Within the context of the Yocto Project, QEMU is an emulator and
virtualization machine that allows you to run a complete image you have
built using the Yocto Project as just another task on your build system.
QEMU is useful for running and testing images and applications on
supported Yocto Project architectures without having actual hardware.
Among other things, the Yocto Project uses QEMU to run automated Quality
Assurance (QA) tests on final images shipped with each release.
.. note::
This implementation is not the same as QEMU in general.
This section provides a brief reference for the Yocto Project
implementation of QEMU.
For official information and documentation on QEMU in general, see the
following references:
- `QEMU Website <http://wiki.qemu.org/Main_Page>`__\ *:* The official
website for the QEMU Open Source project.
- `Documentation <http://wiki.qemu.org/Manual>`__\ *:* The QEMU user
manual.
.. _qemu-running-qemu:
Running QEMU
============
To use QEMU, you need to have QEMU installed and initialized as well as
have the proper artifacts (i.e. image files and root filesystems)
available. Follow these general steps to run QEMU:
1. *Install QEMU:* QEMU is made available with the Yocto Project in a
   number of ways. One method is to install a Software Development Kit
(SDK). See "`The QEMU
Emulator <&YOCTO_DOCS_SDK_URL;#the-qemu-emulator>`__" section in the
Yocto Project Application Development and the Extensible Software
Development Kit (eSDK) manual for information on how to install QEMU.
2. *Set Up the Environment:* How you set up the QEMU environment
   depends on how you installed QEMU:
- If you cloned the ``poky`` repository or you downloaded and
  unpacked a Yocto Project release tarball, you can source the build
  environment script (i.e.
  `oe-init-build-env <&YOCTO_DOCS_REF_URL;#structure-core-script>`__)::

     $ cd ~/poky
     $ source oe-init-build-env
- If you installed a cross-toolchain, you can run the script that
  initializes the toolchain. For example, the following command runs
  the initialization script from the default ``poky_sdk`` directory::

     $ . ~/poky_sdk/environment-setup-core2-64-poky-linux
3. *Ensure the Artifacts are in Place:* You need to be sure you have a
   pre-built kernel that will boot in QEMU. You also need the target
   root filesystem for your target machine's architecture:
- If you have previously built an image for QEMU (e.g. ``qemux86``,
``qemuarm``, and so forth), then the artifacts are in place in
your `Build Directory <&YOCTO_DOCS_REF_URL;#build-directory>`__.
- If you have not built an image, you can go to the
`machines/qemu <&YOCTO_MACHINES_DL_URL;>`__ area and download a
pre-built image that matches your architecture and can be run on
QEMU.
See the "`Extracting the Root
Filesystem <&YOCTO_DOCS_SDK_URL;#sdk-extracting-the-root-filesystem>`__"
section in the Yocto Project Application Development and the
Extensible Software Development Kit (eSDK) manual for information on
how to extract a root filesystem.
4. *Run QEMU:* The basic ``runqemu`` command syntax is as follows::

      $ runqemu [option ] [...]

   Based on what you provide on the command line, ``runqemu`` does a
   good job of figuring out what you are trying to do. For example, by
   default, QEMU looks for the most recently built image according to
   the timestamp when it needs to look for an image. Minimally, through
   the use of options, you must provide either a machine name, a
   virtual machine image (``*wic.vmdk``), or a kernel image
   (``*.bin``).
Here are some additional examples to further illustrate QEMU:

- This example starts QEMU with MACHINE set to "qemux86-64".
  Assuming a standard `Build
  Directory <&YOCTO_DOCS_REF_URL;#build-directory>`__, ``runqemu``
  automatically finds the ``bzImage-qemux86-64.bin`` image file and
  the ``core-image-minimal-qemux86-64-20200218002850.rootfs.ext4``
  file (assuming the current build created a ``core-image-minimal``
  image).

  .. note::

     When more than one image with the same name exists, QEMU finds
     and uses the most recently built image according to the
     timestamp.

  ::

     $ runqemu qemux86-64
- This example produces the exact same results as the previous
  example. This command, however, specifically provides the image
  and root filesystem type::

     $ runqemu qemux86-64 core-image-minimal ext4
- This example specifies to boot an initial RAM disk image and to
  enable audio in QEMU. For this case, ``runqemu`` sets the internal
  variable ``FSTYPE`` to "cpio.gz". Also, for audio to be enabled,
  an appropriate driver must be installed (see the previous
  description for the ``audio`` option for more information)::

     $ runqemu qemux86-64 ramfs audio
- This example does not provide enough information for QEMU to
  launch. While the command does provide a root filesystem type, it
  must also minimally provide a MACHINE, KERNEL, or VM option::

     $ runqemu ext4
- This example specifies to boot a virtual machine image
  (``.wic.vmdk`` file). From the ``.wic.vmdk``, ``runqemu``
  determines the QEMU architecture (MACHINE) to be "qemux86-64" and
  the root filesystem type to be "vmdk"::

     $ runqemu /home/scott-lenovo/vm/core-image-minimal-qemux86-64.wic.vmdk
Switching Between Consoles
==========================
When booting or running QEMU, you can switch between supported consoles
by using Ctrl+Alt+number. For example, Ctrl+Alt+3 switches you to the
serial console as long as that console is enabled. Being able to switch
consoles is helpful, for example, if the main QEMU console breaks for
some reason.
.. note::
Usually, "2" gets you to the main console and "3" gets you to the
serial console.
Removing the Splash Screen
==========================
You can remove the splash screen when QEMU is booting by using Alt+left.
Removing the splash screen allows you to see what is happening in the
background.
Disabling the Cursor Grab
=========================
The default QEMU integration captures the cursor within the main window.
It does this since standard mouse devices only provide relative input
and not absolute coordinates. You then have to break out of the grab
using the "Ctrl+Alt" key combination. However, the Yocto Project's
integration of QEMU enables the wacom USB touch pad driver by default to
allow input of absolute coordinates. This default means that the mouse
can enter and leave the main window without the grab taking effect
leading to a better user experience.
.. _qemu-running-under-a-network-file-system-nfs-server:
Running Under a Network File System (NFS) Server
================================================
One method for running QEMU is to run it on an NFS server. This is
useful when you need to access the same file system from both the build
and the emulated system at the same time. It is also worth noting that
the system does not need root privileges to run. It uses a user space
NFS server to avoid that. Follow these steps to set up for running QEMU
using an NFS server.
1. *Extract a Root Filesystem:* Once you are able to run QEMU in your
environment, you can use the ``runqemu-extract-sdk`` script, which is
located in the ``scripts`` directory along with the ``runqemu``
script.
The ``runqemu-extract-sdk`` script takes a root filesystem tarball and
extracts it into a location that you specify. Here is an example that
takes a file system and extracts it to a directory named
``test-nfs``::

   runqemu-extract-sdk ./tmp/deploy/images/qemux86-64/core-image-sato-qemux86-64.tar.bz2 test-nfs
2. *Start QEMU:* Once you have extracted the file system, you can run
   ``runqemu`` normally with the additional location of the file system.
   You can then also make changes to the files within ``./test-nfs`` and
   see those changes appear in the image in real time. Here is an
   example using the ``qemux86-64`` image::

      runqemu qemux86-64 ./test-nfs
.. note::

   Should you need to start, stop, or restart the NFS share, you can use
   the following commands:

   - The following command starts the NFS share::

        runqemu-export-rootfs start file-system-location

   - The following command stops the NFS share::

        runqemu-export-rootfs stop file-system-location

   - The following command restarts the NFS share::

        runqemu-export-rootfs restart file-system-location
.. _qemu-kvm-cpu-compatibility:
QEMU CPU Compatibility Under KVM
================================
By default, the QEMU build compiles for and targets 64-bit x86 Intel
Core2 Duo processors and 32-bit x86 Intel Pentium II processors. QEMU
builds for and targets these CPU types because they display a broad
range of CPU feature compatibility with many commonly used CPUs.
Despite this broad range of compatibility, the CPUs could support a
feature that your host CPU does not support. Although this situation is
not a problem when QEMU uses software emulation of the feature, it can
be a problem when QEMU is running with KVM enabled. Specifically,
software compiled with a certain CPU feature crashes when run on a CPU
under KVM that does not support that feature. To work around this
problem, you can override QEMU's runtime CPU setting by changing the
``QB_CPU_KVM`` variable in ``qemuboot.conf`` in the `Build
Directory's <&YOCTO_DOCS_REF_URL;#build-directory>`__ ``deploy/images``
directory. This setting specifies a ``-cpu`` option passed into QEMU in
the ``runqemu`` script. Running ``qemu -cpu help`` returns a list of
available supported CPU types.
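As a sketch, the override is a single line in the image's
``qemuboot.conf``. On recent releases the generated file is ini-style
with lowercased keys (an assumption worth verifying against your build
output), and "Nehalem" here is only an example CPU model; pick one
listed by ``qemu -cpu help``::

   [config_bsp]
   qb_cpu_kvm = -cpu Nehalem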
.. _qemu-dev-performance:
QEMU Performance
================
Using QEMU to emulate your hardware can result in speed issues depending
on the target and host architecture mix. For example, using the
``qemux86`` image in the emulator on an Intel-based 32-bit (x86) host
machine is fast because the target and host architectures match. On the
other hand, using the ``qemuarm`` image on the same Intel-based host can
be slower. However, you still achieve faithful emulation of
ARM-specific issues.
To speed things up, the QEMU images support using ``distcc`` to call a
cross-compiler outside the emulated system. If you used ``runqemu`` to
start QEMU, and the ``distccd`` application is present on the host
system, any BitBake cross-compiling toolchain available from the build
system is automatically used from within QEMU simply by calling
``distcc``. You can accomplish this by defining the cross-compiler
variable (e.g. ``export CC="distcc"``). Alternatively, if you are using
a suitable SDK image or the appropriate stand-alone toolchain is
present, the toolchain is also automatically used.
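Inside the emulated system, the setup is minimal; a sketch (the ``CXX``
pairing is an assumption, adjust it for the toolchain in use)::

   # Route compilations through distcc so the cross-compiler on the
   # build host does the actual work.
   export CC="distcc"
   export CXX="distcc g++"
   echo "CC=$CC"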
.. note::
Several mechanisms exist that let you connect to the system running
on the QEMU emulator:
- QEMU provides a framebuffer interface that makes standard consoles
available.
- Generally, headless embedded devices have a serial port. If so,
you can configure the operating system of the running image to use
that port to run a console. The connection uses standard IP
networking.
- SSH servers exist in some QEMU images. The ``core-image-sato``
QEMU image has a Dropbear secure shell (SSH) server that runs with
the root password disabled. The ``core-image-full-cmdline`` and
``core-image-lsb`` QEMU images have OpenSSH instead of Dropbear.
Including these SSH servers allows you to use standard ``ssh`` and
``scp`` commands. The ``core-image-minimal`` QEMU image, however,
contains no SSH server.
- You can use a provided, user-space NFS server to boot the QEMU
session using a local copy of the root filesystem on the host. In
order to make this connection, you must extract a root filesystem
tarball by using the ``runqemu-extract-sdk`` command. After
running the command, you must then point the ``runqemu`` script to
the extracted directory instead of a root filesystem image file.
See the "`Running Under a Network File System (NFS)
Server <#qemu-running-under-a-network-file-system-nfs-server>`__"
section for more information.
.. _qemu-dev-command-line-syntax:
QEMU Command-Line Syntax
========================
The basic ``runqemu`` command syntax is as follows::

   $ runqemu [option ] [...]

Based on what you provide on the command line, ``runqemu`` does a
good job of figuring out what you are trying to do. For example, by
default, QEMU looks for the most recently built image according to the
timestamp when it needs to look for an image. Minimally, through the use
of options, you must provide either a machine name, a virtual machine
image (``*wic.vmdk``), or a kernel image (``*.bin``).
Following is the command-line help output for the ``runqemu`` command::

   $ runqemu --help
   Usage: you can run this script with any valid combination
   of the following environment variables (in any order):
     KERNEL - the kernel image file to use
     ROOTFS - the rootfs image file or nfsroot directory to use
     MACHINE - the machine name (optional, autodetected from KERNEL filename if unspecified)
   Simplified QEMU command-line options can be passed with:
     nographic - disable video console
     serial - enable a serial console on /dev/ttyS0
     slirp - enable user networking, no root privileges is required
     kvm - enable KVM when running x86/x86_64 (VT-capable CPU required)
     kvm-vhost - enable KVM with vhost when running x86/x86_64 (VT-capable CPU required)
     publicvnc - enable a VNC server open to all hosts
     audio - enable audio
     [*/]ovmf* - OVMF firmware file or base name for booting with UEFI
     tcpserial=<port> - specify tcp serial port number
     biosdir=<dir> - specify custom bios dir
     biosfilename=<filename> - specify bios filename
     qemuparams=<xyz> - specify custom parameters to QEMU
     bootparams=<xyz> - specify custom kernel parameters during boot
     help, -h, --help: print this text
   Examples:
     runqemu
     runqemu qemuarm
     runqemu tmp/deploy/images/qemuarm
     runqemu tmp/deploy/images/qemux86/<qemuboot.conf>
     runqemu qemux86-64 core-image-sato ext4
     runqemu qemux86-64 wic-image-minimal wic
     runqemu path/to/bzImage-qemux86.bin path/to/nfsrootdir/ serial
     runqemu qemux86 iso/hddimg/wic.vmdk/wic.qcow2/wic.vdi/ramfs/cpio.gz...
     runqemu qemux86 qemuparams="-m 256"
     runqemu qemux86 bootparams="psplash=false"
     runqemu path/to/<image>-<machine>.wic
     runqemu path/to/<image>-<machine>.wic.vmdk
.. _qemu-dev-runqemu-command-line-options:
``runqemu`` Command-Line Options
================================
Following is a description of ``runqemu`` options you can provide on the
command line:
.. note::

   If you provide some "illegal" option combination or perhaps do not
   provide enough options, ``runqemu`` provides appropriate error
   messaging to help you correct the problem.
- QEMUARCH: The QEMU machine architecture, which must be "qemuarm",
"qemuarm64", "qemumips", "qemumips64", "qemuppc", "qemux86", or
"qemux86-64".
- ``VM``: The virtual machine image, which must be a ``.wic.vmdk``
file. Use this option when you want to boot a ``.wic.vmdk`` image.
The image filename you provide must contain one of the following
strings: "qemux86-64", "qemux86", "qemuarm", "qemumips64",
"qemumips", "qemuppc", or "qemush4".
- ROOTFS: A root filesystem that has one of the following filetype
  extensions: "ext2", "ext3", "ext4", "jffs2", "nfs", or "btrfs". If
  the filename you provide for this option uses "nfs", you must provide
  an explicit root filesystem path.
- KERNEL: A kernel image, which is a ``.bin`` file. When you provide a
``.bin`` file, ``runqemu`` detects it and assumes the file is a
kernel image.
- MACHINE: The architecture of the QEMU machine, which must be one of
the following: "qemux86", "qemux86-64", "qemuarm", "qemuarm64",
"qemumips", "qemumips64", or "qemuppc". The MACHINE and QEMUARCH
options are basically identical. If you do not provide a MACHINE
option, ``runqemu`` tries to determine it based on other options.
- ``ramfs``: Indicates you are booting an initial RAM disk (initramfs)
image, which means the ``FSTYPE`` is ``cpio.gz``.
- ``iso``: Indicates you are booting an ISO image, which means the
``FSTYPE`` is ``.iso``.
- ``nographic``: Disables the video console, which sets the console to
"ttys0". This option is useful when you have logged into a server and
you do not want to disable forwarding from the X Window System (X11)
to your workstation or laptop.
- ``serial``: Enables a serial console on ``/dev/ttyS0``.
- ``biosdir``: Establishes a custom directory for BIOS, VGA BIOS and
keymaps.
- ``biosfilename``: Establishes a custom BIOS name.
- ``qemuparams=\"xyz\"``: Specifies custom QEMU parameters. Use this
option to pass options other than the simple "kvm" and "serial"
options.
- ``bootparams=\"xyz\"``: Specifies custom boot parameters for the
kernel.
- ``audio``: Enables audio in QEMU. The MACHINE option must be either
"qemux86" or "qemux86-64" in order for audio to be enabled.
Additionally, the ``snd_intel8x0`` or ``snd_ens1370`` driver must be
installed in the Linux guest.
- ``slirp``: Enables "slirp" networking, which is a different way of
networking that does not need root access but also is not as easy to
use or comprehensive as the default.
- ``kvm``: Enables KVM when running "qemux86" or "qemux86-64" QEMU
  architectures. For KVM to work, all the following conditions must be
  met:

  - Your MACHINE must be either "qemux86" or "qemux86-64".

  - Your build host has to have the KVM modules installed, which
    provide ``/dev/kvm``.

  - The build host ``/dev/kvm`` device has to be both writable and
    readable.
- ``kvm-vhost``: Enables KVM with VHOST support when running "qemux86"
  or "qemux86-64" QEMU architectures. For KVM with VHOST to work, the
  following conditions must be met:

  - The `kvm <#kvm-cond>`__ option conditions must be met.

  - Your build host has to have the virtio net device, which is
    ``/dev/vhost-net``.

  - The build host ``/dev/vhost-net`` device has to be either
    readable or writable and "slirp-enabled".
- ``publicvnc``: Enables a VNC server open to all hosts.
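The KVM conditions above can be verified with a quick shell sketch (a
hypothetical helper, not part of ``runqemu`` itself)::

   # Check the KVM prerequisites before passing the "kvm" option to
   # runqemu. /dev/kvm is a character device created by the KVM kernel
   # modules; it must be both readable and writable by the user
   # running runqemu.
   if [ -c /dev/kvm ] && [ -r /dev/kvm ] && [ -w /dev/kvm ]; then
       kvm_status="ok"
   else
       kvm_status="unavailable"
   fi
   echo "kvm: $kvm_status"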

***********************************
Setting Up to Use the Yocto Project
***********************************
This chapter provides guidance on how to prepare to use the Yocto
Project. You can learn about creating a team environment that develops
using the Yocto Project, how to set up a `build
host <&YOCTO_DOCS_REF_URL;#hardware-build-system-term>`__, how to locate
Yocto Project source repositories, and how to create local Git
repositories.
.. _usingpoky-changes-collaborate:
Creating a Team Development Environment
=======================================
It might not be immediately clear how you can use the Yocto Project in a
team development environment, or how to scale it for a large team of
developers. You can adapt the Yocto Project to many different use cases
and scenarios; however, this flexibility could cause difficulties if you
are trying to create a working setup that scales effectively.
To help you understand how to set up this type of environment, this
section presents a procedure that gives you information that can help
you get the results you want. The procedure is high-level and presents
some of the project's most successful experiences, practices, solutions,
and available technologies that have proved to work well in the past;
however, keep in mind, the procedure here is simply a starting point.
You can build off these steps and customize the procedure to fit any
particular working environment and set of practices.
1. *Determine Who is Going to be Developing:* You first need to
understand who is going to be doing anything related to the Yocto
Project and determine their roles. Making this determination is
essential to completing subsequent steps, which are to get your
equipment together and set up your development environment's
hardware topology.
The following roles exist:
- *Application Developer:* This type of developer does application
level work on top of an existing software stack.
- *Core System Developer:* This type of developer works on the
contents of the operating system image itself.
- *Build Engineer:* This type of developer manages Autobuilders and
releases. Depending on the specifics of the environment, not all
situations might need a Build Engineer.
- *Test Engineer:* This type of developer creates and manages
automated tests that are used to ensure all application and core
system development meets desired quality standards.
2. *Gather the Hardware:* Based on the size and make-up of the team,
get the hardware together. Ideally, any development, build, or test
engineer uses a system that runs a supported Linux distribution.
These systems, in general, should be high performance (e.g. dual,
six-core Xeons with 24 Gbytes of RAM and plenty of disk space). You
can help ensure efficiency by having any machines used for testing
or that run Autobuilders be as high performance as possible.
.. note::
Given sufficient processing power, you might also consider
building Yocto Project development containers to be run under
Docker, which is described later.
3. *Understand the Hardware Topology of the Environment:* Once you
understand the hardware involved and the make-up of the team, you
can understand the hardware topology of the development environment.
You can get a visual idea of the machines and their roles across the
development environment.
4. *Use Git as Your Source Control Manager (SCM):* Keeping your
`Metadata <&YOCTO_DOCS_REF_URL;#metadata>`__ (i.e. recipes,
configuration files, classes, and so forth) and any software you are
developing under the control of an SCM system that is compatible
with the OpenEmbedded build system is advisable. Of all of the SCMs
supported by BitBake, the Yocto Project team strongly recommends
using `Git <&YOCTO_DOCS_OM_URL;#git>`__. Git is a distributed system
that is easy to back up, allows you to work remotely, and then
connects back to the infrastructure.
.. note::
For information about BitBake, see the
BitBake User Manual
.
It is relatively easy to set up Git services and create
infrastructure like
`http://git.yoctoproject.org <&YOCTO_GIT_URL;>`__, which is based on
server software called ``gitolite`` with ``cgit`` being used to
generate the web interface that lets you view the repositories. The
``gitolite`` software identifies users using SSH keys and allows
branch-based access controls to repositories that you can control as
little or as much as necessary.
.. note::
The setup of these services is beyond the scope of this manual.
However, sites such as the following exist that describe how to
perform setup:
- `Git documentation <http://git-scm.com/book/ch4-8.html>`__:
Describes how to install ``gitolite`` on the server.
- `Gitolite <http://gitolite.com>`__: Information for
``gitolite``.
- `Interfaces, frontends, and
tools <https://git.wiki.kernel.org/index.php/Interfaces,_frontends,_and_tools>`__:
Documentation on how to create interfaces and frontends for
Git.
5. *Set up the Application Development Machines:* As mentioned earlier,
application developers are creating applications on top of existing
software stacks. Following are some best practices for setting up
machines used for application development:
- Use a pre-built toolchain that contains the software stack
itself. Then, develop the application code on top of the stack.
This method works well for small numbers of relatively isolated
applications.
- Keep your cross-development toolchains updated. You can do this
through provisioning either as new toolchain downloads or as
updates through a package update mechanism using ``opkg`` to
provide updates to an existing toolchain. The exact mechanics of
how and when to do this depend on local policy.
- Use multiple toolchains installed locally into different
locations to allow development across versions.
6. *Set up the Core Development Machines:* As mentioned earlier, core
developers work on the contents of the operating system itself.
Following are some best practices for setting up machines used for
developing images:
- Have the `OpenEmbedded build
system <&YOCTO_DOCS_REF_URL;#build-system-term>`__ available on
the developer workstations so developers can run their own builds
and directly rebuild the software stack.
- Keep the core system unchanged as much as possible and do your
work in layers on top of the core system. Doing so gives you a
greater level of portability when upgrading to new versions of
the core system or Board Support Packages (BSPs).
- Share layers amongst the developers of a particular project. These
  layers can contain the policy configuration that defines the
  project.
7. *Set up an Autobuilder:* Autobuilders are often the core of the
development environment. It is here that changes from individual
developers are brought together and centrally tested. Based on this
automated build and test environment, subsequent decisions about
releases can be made. Autobuilders also allow for "continuous
integration" style testing of software components and regression
identification and tracking.
See "`Yocto Project
Autobuilder <http://autobuilder.yoctoproject.org>`__" for more
information and links to buildbot. The Yocto Project team has found
this implementation works well in this role. A public example of
this is the Yocto Project Autobuilders, which the Yocto Project team
uses to test the overall health of the project.
The features of this system are:
- Highlights when commits break the build.
- Populates an `sstate
cache <&YOCTO_DOCS_OM_URL;#shared-state-cache>`__ from which
developers can pull rather than requiring local builds.
- Allows commit hook triggers, which trigger builds when commits
are made.
- Allows triggering of automated image booting and testing under
the QuickEMUlator (QEMU).
- Supports incremental build testing and from-scratch builds.
- Shares output that allows developer testing and historical
regression investigation.
- Creates output that can be used for releases.
- Allows scheduling of builds so that resources can be used
efficiently.
8. *Set up Test Machines:* Use a small number of shared, high
performance systems for testing purposes. Developers can use these
systems for wider, more extensive testing while they continue to
develop locally using their primary development system.
9. *Document Policies and Change Flow:* The Yocto Project uses a
hierarchical structure and a pull model. Scripts exist to create and
send pull requests (i.e. ``create-pull-request`` and
``send-pull-request``). This model is in line with other open source
projects where maintainers are responsible for specific areas of the
project and a single maintainer handles the final "top-of-tree"
merges.
.. note::

   You can also use a more collective push model. The ``gitolite``
   software supports both the push and pull models quite easily.
As with any development environment, it is important to document the
policy used as well as any main project guidelines so they are
understood by everyone. It is also a good idea to have
well-structured commit messages, which are usually a part of a
project's guidelines. Good commit messages are essential when
looking back in time and trying to understand why changes were made.
If you discover that changes are needed to the core layer of the
project, it is worth sharing those with the community as soon as
possible. Chances are if you have discovered the need for changes,
someone else in the community needs them also.
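Good commit messages can be reinforced mechanically. The following is a minimal, self-contained sketch, using a throwaway repository and placeholder identity values, showing how ``git commit -s`` appends the "Signed-off-by:" line that pull-model projects such as the Yocto Project typically expect:

```shell
# Self-contained demo in a throwaway repository (assumes git is
# installed; all names, paths, and messages are placeholders).
if command -v git >/dev/null 2>&1; then
    tmp=$(mktemp -d)
    cd "$tmp"
    git init -q .
    git config user.email you@example.com
    git config user.name "Your Name"
    echo demo > file.txt
    git add file.txt
    # -s appends a "Signed-off-by:" line to the commit message.
    git commit -q -s -m "demo: add file.txt

Explain why the change is needed, not just what it does."
    git log -1 --format=%B
else
    echo "git not found"
fi
```

The first line of the message is the short summary; the body explains the motivation, which is what makes the history useful when looking back in time.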
10. *Development Environment Summary:* Aside from the previous steps,
some best practices exist within the Yocto Project development
environment. Consider the following:
- Use `Git <&YOCTO_DOCS_OM_URL;#git>`__ as the source control
system.
- Maintain your Metadata in layers that make sense for your
situation. See the "`The Yocto Project Layer
Model <&YOCTO_DOCS_OM_URL;#the-yocto-project-layer-model>`__"
section in the Yocto Project Overview and Concepts Manual and the
"`Understanding and Creating
Layers <#understanding-and-creating-layers>`__" section for more
information on layers.
- Separate the project's Metadata and code by using separate Git
repositories. See the "`Yocto Project Source
Repositories <&YOCTO_DOCS_OM_URL;#yocto-project-repositories>`__"
section in the Yocto Project Overview and Concepts Manual for
information on these repositories. See the "`Locating Yocto
Project Source Files <#locating-yocto-project-source-files>`__"
section for information on how to set up local Git repositories
for related upstream Yocto Project Git repositories.
- Set up the directory for the shared state cache
(```SSTATE_DIR`` <&YOCTO_DOCS_REF_URL;#var-SSTATE_DIR>`__) where
it makes sense. For example, set up the sstate cache on a system
used by developers in the same organization and share the same
source directories on their machines.
- Set up an Autobuilder and have it populate the sstate cache and
source directories.
- The Yocto Project community encourages you to send patches to the
project to fix bugs or add features. If you do submit patches,
follow the project commit guidelines for writing good commit
messages. See the "`Submitting a Change to the Yocto
Project <#how-to-submit-a-change>`__" section.
- Send changes to the core sooner than later as others are likely
to run into the same issues. For some guidance on mailing lists
to use, see the list in the "`Submitting a Change to the Yocto
Project <#how-to-submit-a-change>`__" section. For a description
of the available mailing lists, see the "`Mailing
Lists <&YOCTO_DOCS_REF_URL;#resources-mailinglist>`__" section in
the Yocto Project Reference Manual.
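As a concrete illustration of the shared-state setup described above, the sketch below creates a shared sstate directory and appends an ``SSTATE_DIR`` assignment to a stand-in for ``build/conf/local.conf``. All paths here are hypothetical placeholders; in practice the cache directory would be one visible to all developers in the organization:

```shell
# Hypothetical shared location for the sstate cache.
mkdir -p "$HOME/shared/sstate-cache"

# Stand-in for your build's conf/local.conf.
conf=$(mktemp)
cat >> "$conf" <<'EOF'
SSTATE_DIR ?= "${HOME}/shared/sstate-cache"
EOF
cat "$conf"
```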
.. _dev-preparing-the-build-host:
Preparing the Build Host
========================
This section provides procedures to set up a system to be used as your
`build host <&YOCTO_DOCS_REF_URL;#hardware-build-system-term>`__ for
development using the Yocto Project. Your build host can be a native
Linux machine (recommended), it can be a machine (Linux, Mac, or
Windows) that uses `CROPS <https://github.com/crops/poky-container>`__,
which leverages `Docker Containers <https://www.docker.com/>`__ or it
can be a Windows machine capable of running Windows Subsystem For Linux
v2 (WSL).
.. note::

   The Yocto Project is not compatible with Windows Subsystem for Linux
   v1. It is compatible but not officially supported nor validated with
   WSLv2. If you still decide to use WSL please upgrade to WSLv2.
Once your build host is set up to use the Yocto Project, further steps
are necessary depending on what you want to accomplish. See the
following references for information on how to prepare for Board Support
Package (BSP) development and kernel development:
- *BSP Development:* See the "`Preparing Your Build Host to Work With
BSP
Layers <&YOCTO_DOCS_BSP_URL;#preparing-your-build-host-to-work-with-bsp-layers>`__"
section in the Yocto Project Board Support Package (BSP) Developer's
Guide.
- *Kernel Development:* See the "`Preparing the Build Host to Work on
the
Kernel <&YOCTO_DOCS_KERNEL_DEV_URL;#preparing-the-build-host-to-work-on-the-kernel>`__"
section in the Yocto Project Linux Kernel Development Manual.
Setting Up a Native Linux Host
------------------------------
Follow these steps to prepare a native Linux machine as your Yocto
Project Build Host:
1. *Use a Supported Linux Distribution:* You should have a reasonably
current Linux-based host system. You will have the best results with
a recent release of Fedora, openSUSE, Debian, Ubuntu, RHEL or CentOS
as these releases are frequently tested against the Yocto Project and
officially supported. For a list of the distributions under
validation and their status, see the "`Supported Linux
Distributions <&YOCTO_DOCS_REF_URL;#detailed-supported-distros>`__"
section in the Yocto Project Reference Manual and the wiki page at
`Distribution
Support <&YOCTO_WIKI_URL;/wiki/Distribution_Support>`__.
2. *Have Enough Free Disk Space:* Your system should have at least 50
   Gbytes of free disk space for building images.
3. *Meet Minimal Version Requirements:* The OpenEmbedded build system
   should be able to run on any modern distribution that has the
   following versions of Git, tar, Python, and gcc:

   -  Git 1.8.3.1 or greater
   -  tar 1.28 or greater
   -  Python 3.5.0 or greater
   -  gcc 5.0 or greater
   If your build host does not meet any of these listed version
   requirements, you can take steps to prepare the system so that you
can still use the Yocto Project. See the "`Required Git, tar, Python
and gcc
Versions <&YOCTO_DOCS_REF_URL;#required-git-tar-python-and-gcc-versions>`__"
section in the Yocto Project Reference Manual for information.
4. *Install Development Host Packages:* Required development host
packages vary depending on your build host and what you want to do
with the Yocto Project. Collectively, the number of required packages
is large if you want to be able to cover all cases.
For lists of required packages for all scenarios, see the "`Required
Packages for the Build
Host <&YOCTO_DOCS_REF_URL;#required-packages-for-the-build-host>`__"
section in the Yocto Project Reference Manual.
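The minimum versions from step 3 can be checked with a small shell sketch such as the one below. ``version_ge`` is a helper defined here, not a standard tool, and tools that are not installed are simply reported:

```shell
# True if $1 >= $2 when compared as dotted version strings.
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

for pair in "git 1.8.3.1" "tar 1.28" "python3 3.5.0" "gcc 5.0"; do
    set -- $pair
    tool=$1 min=$2
    if ! command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: not installed"
        continue
    fi
    # Extract the first dotted version number from the tool's banner.
    ver=$("$tool" --version 2>/dev/null | head -n 1 \
          | grep -oE '[0-9]+(\.[0-9]+)+' | head -n 1)
    if version_ge "$ver" "$min"; then
        echo "$tool $ver: OK (need >= $min)"
    else
        echo "$tool $ver: too old (need >= $min)"
    fi
done
```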
Once you have completed the previous steps, you are ready to continue
using a given development path on your native Linux machine. If you are
going to use BitBake, see the "`Cloning the ``poky``
Repository <#cloning-the-poky-repository>`__" section. If you are going
to use the Extensible SDK, see the "`Using the Extensible
SDK <&YOCTO_DOCS_SDK_URL;#sdk-extensible>`__" Chapter in the Yocto
Project Application Development and the Extensible Software Development
Kit (eSDK) manual. If you want to work on the kernel, see the `Yocto
Project Linux Kernel Development
Manual <&YOCTO_DOCS_KERNEL_DEV_URL;>`__. If you are going to use
Toaster, see the "`Setting Up and Using
Toaster <&YOCTO_DOCS_TOAST_URL;#toaster-manual-setup-and-use>`__"
section in the Toaster User Manual.
.. _setting-up-to-use-crops:
Setting Up to Use CROss PlatformS (CROPS)
-----------------------------------------
With `CROPS <https://github.com/crops/poky-container>`__, which
leverages `Docker Containers <https://www.docker.com/>`__, you can
create a Yocto Project development environment that is operating system
agnostic. You can set up a container in which you can develop using the
Yocto Project on a Windows, Mac, or Linux machine.
Follow these general steps to prepare a Windows, Mac, or Linux machine
as your Yocto Project build host:
1. *Determine What Your Build Host Needs:*
`Docker <https://www.docker.com/what-docker>`__ is a software
container platform that you need to install on the build host.
Depending on your build host, you might have to install different
software to support Docker containers. Go to the Docker installation
   page and read about the platform requirements in "`Supported
   Platforms <https://docs.docker.com/install/#supported-platforms>`__"
   to learn what your build host needs to run containers.
2. *Choose What To Install:* Depending on whether or not your build host
meets system requirements, you need to install "Docker CE Stable" or
the "Docker Toolbox". Most situations call for Docker CE. However, if
you have a build host that does not meet requirements (e.g.
Pre-Windows 10 or Windows 10 "Home" version), you must install Docker
Toolbox instead.
3. *Go to the Install Site for Your Platform:* Click the link for the
Docker edition associated with your build host's native software. For
example, if your build host is running Microsoft Windows Version 10
and you want the Docker CE Stable edition, click that link under
"Supported Platforms".
4. *Install the Software:* Once you have understood all the
pre-requisites, you can download and install the appropriate
software. Follow the instructions for your specific machine and the
type of the software you need to install:
- Install `Docker CE for
Windows <https://docs.docker.com/docker-for-windows/install/#install-docker-for-windows-desktop-app>`__
for Windows build hosts that meet requirements.
- Install `Docker CE for
Macs <https://docs.docker.com/docker-for-mac/install/#install-and-run-docker-for-mac>`__
for Mac build hosts that meet requirements.
- Install `Docker Toolbox for
Windows <https://docs.docker.com/toolbox/toolbox_install_windows/>`__
for Windows build hosts that do not meet Docker requirements.
- Install `Docker Toolbox for
MacOS <https://docs.docker.com/toolbox/toolbox_install_mac/>`__
for Mac build hosts that do not meet Docker requirements.
- Install `Docker CE for
CentOS <https://docs.docker.com/install/linux/docker-ce/centos/>`__
for Linux build hosts running the CentOS distribution.
- Install `Docker CE for
Debian <https://docs.docker.com/install/linux/docker-ce/debian/>`__
for Linux build hosts running the Debian distribution.
- Install `Docker CE for
Fedora <https://docs.docker.com/install/linux/docker-ce/fedora/>`__
for Linux build hosts running the Fedora distribution.
- Install `Docker CE for
Ubuntu <https://docs.docker.com/install/linux/docker-ce/ubuntu/>`__
for Linux build hosts running the Ubuntu distribution.
5. *Optionally Orient Yourself With Docker:* If you are unfamiliar with
   Docker and the container concept, you can learn more at
   https://docs.docker.com/get-started/.
6. *Launch Docker or Docker Toolbox:* You should be able to launch
Docker or the Docker Toolbox and have a terminal shell on your
development host.
7. *Set Up the Containers to Use the Yocto Project:* Go to
   https://github.com/crops/docker-win-mac-docs/wiki and follow
the directions for your particular build host (i.e. Linux, Mac, or
Windows).
Once you complete the setup instructions for your machine, you have
the Poky, Extensible SDK, and Toaster containers available. You can
click those links from the page and learn more about using each of
those containers.
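Once Docker is installed, you can sanity-check it from a terminal. The container invocation echoed below is the typical CROPS pattern of mounting a host workspace into the ``crops/poky`` container; the ``$HOME/workspace`` path is a placeholder:

```shell
# Report the installed Docker version, if any.
if command -v docker >/dev/null 2>&1; then
    docker --version
else
    echo "docker not found - install Docker first"
fi

# Typical CROPS invocation (printed here rather than executed):
echo 'docker run --rm -it -v "$HOME/workspace":/workdir crops/poky --workdir=/workdir'
```

Inside the container, BitBake runs against the mounted ``/workdir`` just as it would on a native Linux host.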
Once you have a container set up, everything is in place to develop just
as if you were running on a native Linux machine. If you are going to
use the Poky container, see the "`Cloning the ``poky``
Repository <#cloning-the-poky-repository>`__" section. If you are going
to use the Extensible SDK container, see the "`Using the Extensible
SDK <&YOCTO_DOCS_SDK_URL;#sdk-extensible>`__" Chapter in the Yocto
Project Application Development and the Extensible Software Development
Kit (eSDK) manual. If you are going to use the Toaster container, see
the "`Setting Up and Using
Toaster <&YOCTO_DOCS_TOAST_URL;#toaster-manual-setup-and-use>`__"
section in the Toaster User Manual.
.. _setting-up-to-use-wsl:
Setting Up to Use Windows Subsystem For Linux (WSLv2)
-----------------------------------------------------
With `Windows Subsystem for Linux
(WSLv2) <https://docs.microsoft.com/en-us/windows/wsl/wsl2-about>`__,
you can create a Yocto Project development environment that allows you
to build on Windows. You can set up a Linux distribution inside Windows
in which you can develop using the Yocto Project.
Follow these general steps to prepare a Windows machine using WSLv2 as
your Yocto Project build host:
1. *Make sure your Windows 10 machine is capable of running WSLv2:*
   WSLv2 is only available for Windows 10 builds > 18917. To check which
   build version you are running, open a command prompt on Windows and
   execute the "ver" command:
   ::

      C:\Users\myuser> ver

      Microsoft Windows [Version 10.0.19041.153]

   If your build is capable of running WSLv2 you may continue. For more
   information on this subject or for instructions on how to upgrade to
   WSLv2, visit `Windows 10
   WSLv2 <https://docs.microsoft.com/en-us/windows/wsl/wsl2-install>`__.
2. *Install the Linux distribution of your choice inside Windows 10:*
Once you know your version of Windows 10 supports WSLv2, you can
install the distribution of your choice from the Microsoft Store.
Open the Microsoft Store and search for Linux. While there are
several Linux distributions available, the assumption is that your
pick will be one of the distributions supported by the Yocto Project
as stated on the instructions for using a native Linux host. After
making your selection, simply click "Get" to download and install the
distribution.
3. *Check your Linux distribution is using WSLv2:* Open a Windows
   PowerShell and run:
   ::

      C:\WINDOWS\system32> wsl -l -v
        NAME      STATE      VERSION
      * Ubuntu    Running    2

   Note the version column, which shows the WSL version being used by
   your distribution; on compatible systems, this can be changed back at
   any point in time.
4. *Optionally Orient Yourself on WSL:* If you are unfamiliar with WSL,
   you can learn more at
   https://docs.microsoft.com/en-us/windows/wsl/wsl2-about.
5. *Launch your WSL Distribution:* From the Windows start menu, simply
   launch your WSL distribution just like any other application.
6. *Optimize your WSLv2 storage often:* Due to the way storage is
   handled on WSLv2, the storage space used by the underlying Linux
   distribution is not reflected immediately. Since BitBake heavily uses
   storage, after several builds you may be unaware that you are running
   out of space. WSLv2 uses a VHDX file for storage; this issue can be
   easily avoided by manually optimizing the file often, which can be
   done in the following way:
   1. *Find the location of your VHDX file:* First you need to find the
      distro app package directory. To achieve this, open a Windows
      PowerShell as Administrator and run:
      ::

         C:\WINDOWS\system32> Get-AppxPackage -Name "*Ubuntu*" | Select PackageFamilyName

         PackageFamilyName
         -----------------
         CanonicalGroupLimited.UbuntuonWindows_79abcdefgh

      You should now replace the PackageFamilyName and your user on the
      following path to find your VHDX file:
      ``C:\Users\user\AppData\Local\Packages\PackageFamilyName\LocalState\``
      For example:
      ::

         ls C:\Users\myuser\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79abcdefgh\LocalState\

         Mode                 LastWriteTime         Length Name
         -a----         3/14/2020   9:52 PM    57418973184 ext4.vhdx

      Your VHDX file path is:
      ``C:\Users\myuser\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79abcdefgh\LocalState\ext4.vhdx``
   2. *Optimize your VHDX file:* Open a Windows PowerShell as
      Administrator to optimize your VHDX file, shutting down WSL first:
      ::

         C:\WINDOWS\system32> wsl --shutdown
         C:\WINDOWS\system32> optimize-vhd -Path C:\Users\myuser\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79abcdefgh\LocalState\ext4.vhdx -Mode full

      A progress bar should be shown while optimizing the VHDX file, and
      storage should now be reflected correctly on the Windows Explorer.
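Related to step 3 above, a distribution can be converted between WSL versions with ``wsl --set-version``. The guarded sketch below only runs the real command on a Windows host; the distribution name "Ubuntu" is a placeholder:

```shell
# wsl.exe exists only on Windows hosts; elsewhere, just show the command.
if command -v wsl.exe >/dev/null 2>&1; then
    wsl.exe --set-version Ubuntu 2
else
    echo "not on Windows: run 'wsl --set-version Ubuntu 2' from PowerShell"
fi
```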
.. note::

   The current implementation of WSLv2 does not have out-of-the-box
   access to external devices such as those connected through a USB
   port, but it automatically mounts your ``C:`` drive on ``/mnt/c/``
   (and others), which you can use to share deploy artifacts to be later
   flashed on hardware through Windows, but your build directory should
   not reside inside this mountpoint.
Once you have WSLv2 set up, everything is in place to develop just as if
you were running on a native Linux machine. If you are going to use the
Extensible SDK container, see the "`Using the Extensible
SDK <&YOCTO_DOCS_SDK_URL;#sdk-extensible>`__" Chapter in the Yocto
Project Application Development and the Extensible Software Development
Kit (eSDK) manual. If you are going to use the Toaster container, see
the "`Setting Up and Using
Toaster <&YOCTO_DOCS_TOAST_URL;#toaster-manual-setup-and-use>`__"
section in the Toaster User Manual.
Locating Yocto Project Source Files
===================================
This section shows you how to locate, fetch and configure the source
files you'll need to work with the Yocto Project.
.. note::
- For concepts and introductory information about Git as it is used
in the Yocto Project, see the "`Git <&YOCTO_DOCS_OM_URL;#git>`__"
section in the Yocto Project Overview and Concepts Manual.
- For concepts on Yocto Project source repositories, see the "`Yocto
Project Source
Repositories <&YOCTO_DOCS_OM_URL;#yocto-project-repositories>`__"
section in the Yocto Project Overview and Concepts Manual."
Accessing Source Repositories
-----------------------------
Working from a copy of the upstream Yocto Project `Source
Repositories <&YOCTO_DOCS_OM_URL;#source-repositories>`__ is the
preferred method for obtaining and using a Yocto Project release. You
can view the Yocto Project Source Repositories at
` <&YOCTO_GIT_URL;>`__. In particular, you can find the ``poky``
repository at http://git.yoctoproject.org/cgit/cgit.cgi/poky/.
Use the following procedure to locate the latest upstream copy of the
``poky`` Git repository:
1. *Access Repositories:* Open a browser and go to
` <&YOCTO_GIT_URL;>`__ to access the GUI-based interface into the
Yocto Project source repositories.
2. *Select the Repository:* Click on the repository in which you are
interested (e.g. ``poky``).
3. *Find the URL Used to Clone the Repository:* At the bottom of the
page, note the URL used to
`clone <&YOCTO_DOCS_OM_URL;#git-commands-clone>`__ that repository
(e.g. ``YOCTO_GIT_URL/poky``).
   .. note::

      For information on cloning a repository, see the "Cloning the
      ``poky`` Repository" section.
Accessing Index of Releases
---------------------------
Yocto Project maintains an Index of Releases area that contains related
files that contribute to the Yocto Project. Rather than Git
repositories, these files are tarballs that represent snapshots in time
of a given component.
.. note::
The recommended method for accessing Yocto Project components is to
use Git to clone the upstream repository and work from within that
locally cloned repository. The procedure in this section exists
should you desire a tarball snapshot of any given component.
Follow these steps to locate and download a particular tarball:
1. *Access the Index of Releases:* Open a browser and go to
` <&YOCTO_DL_URL;/releases>`__ to access the Index of Releases. The
list represents released components (e.g. ``bitbake``, ``sato``, and
so on).
   .. note::

      The ``yocto`` directory contains the full array of released Poky
      tarballs. The ``poky`` directory in the Index of Releases was
      historically used for very early releases and exists now only for
      retroactive completeness.
2. *Select a Component:* Click on any released component in which you
are interested (e.g. ``yocto``).
3. *Find the Tarball:* Drill down to find the associated tarball. For
example, click on ``yocto-DISTRO`` to view files associated with the
Yocto Project DISTRO release (e.g.
``poky-DISTRO_NAME_NO_CAP-POKYVERSION.tar.bz2``, which is the
released Poky tarball).
4. *Download the Tarball:* Click the tarball to download and save a
snapshot of the given component.
Using the Downloads Page
------------------------
The `Yocto Project Website <&YOCTO_HOME_URL;>`__ uses a "DOWNLOADS" page
from which you can locate and download tarballs of any Yocto Project
release. Rather than Git repositories, these files represent snapshot
tarballs similar to the tarballs located in the Index of Releases
described in the "`Accessing Index of
Releases <#accessing-index-of-releases>`__" section.
.. note::
The recommended method for accessing Yocto Project components is to
use Git to clone a repository and work from within that local
repository. The procedure in this section exists should you desire a
tarball snapshot of any given component.
1. *Go to the Yocto Project Website:* Open The `Yocto Project
Website <&YOCTO_HOME_URL;>`__ in your browser.
2. *Get to the Downloads Area:* Select the "DOWNLOADS" item from the
pull-down "SOFTWARE" tab menu near the top of the page.
3. *Select a Yocto Project Release:* Use the menu next to "RELEASE" to
display and choose a recent or past supported Yocto Project release
(e.g. DISTRO_NAME_NO_CAP, DISTRO_NAME_NO_CAP_MINUS_ONE, and so
forth).
   .. note::

      For a "map" of Yocto Project releases to version numbers, see the
      Releases wiki page.
You can use the "RELEASE ARCHIVE" link to reveal a menu of all Yocto
Project releases.
4. *Download Tools or Board Support Packages (BSPs):* From the
"DOWNLOADS" page, you can download tools or BSPs as well. Just scroll
down the page and look for what you need.
Accessing Nightly Builds
------------------------
Yocto Project maintains an area for nightly builds that contains tarball
releases at ` <&YOCTO_AB_NIGHTLY_URL;>`__. These builds include Yocto
Project releases ("poky"), toolchains, and builds for supported
machines.
Should you ever want to access a nightly build of a particular Yocto
Project component, use the following procedure:
1. *Locate the Index of Nightly Builds:* Open a browser and go to
` <&YOCTO_AB_NIGHTLY_URL;>`__ to access the Nightly Builds.
2. *Select a Date:* Click on the date in which you are interested. If
you want the latest builds, use "CURRENT".
3. *Select a Build:* Choose the area in which you are interested. For
example, if you are looking for the most recent toolchains, select
the "toolchain" link.
4. *Find the Tarball:* Drill down to find the associated tarball.
5. *Download the Tarball:* Click the tarball to download and save a
snapshot of the given component.
Cloning and Checking Out Branches
=================================
To use the Yocto Project for development, you need a release locally
installed on your development system. This locally installed set of
files is referred to as the `Source
Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__ in the Yocto
Project documentation.
The preferred method of creating your Source Directory is by using
`Git <&YOCTO_DOCS_OM_URL;#git>`__ to clone a local copy of the upstream
``poky`` repository. Working from a cloned copy of the upstream
repository allows you to contribute back into the Yocto Project or to
simply work with the latest software on a development branch. Because
Git maintains and creates an upstream repository with a complete history
of changes and you are working with a local clone of that repository,
you have access to all the Yocto Project development branches and tag
names used in the upstream repository.
Cloning the ``poky`` Repository
-------------------------------
Follow these steps to create a local version of the upstream
```poky`` <&YOCTO_DOCS_REF_URL;#poky>`__ Git repository.
1. *Set Your Directory:* Change your working directory to where you want
to create your local copy of ``poky``.
2. *Clone the Repository:* The following example command clones the
   ``poky`` repository and uses the default name "poky" for your local
   repository:
   ::

      $ git clone git://git.yoctoproject.org/poky
      Cloning into 'poky'...
      remote: Counting objects: 432160, done.
      remote: Compressing objects: 100% (102056/102056), done.
      remote: Total 432160 (delta 323116), reused 432037 (delta 323000)
      Receiving objects: 100% (432160/432160), 153.81 MiB | 8.54 MiB/s, done.
      Resolving deltas: 100% (323116/323116), done.
      Checking connectivity... done.

   Unless you specify a specific development branch or tag name, Git
   clones the "master" branch, which results in a snapshot of the latest
   development changes for "master". For information on how to check out
   a specific development branch or on how to check out a local branch
   based on a tag name, see the "`Checking Out By Branch in
   Poky <#checking-out-by-branch-in-poky>`__" and "`Checking Out By Tag
   in Poky <#checkout-out-by-tag-in-poky>`__" sections, respectively.

   Once the local repository is created, you can change to that
   directory and check its status. Here, the single "master" branch
   exists on your system and by default, it is checked out:
   ::

      $ cd ~/poky
      $ git status
      On branch master
      Your branch is up-to-date with 'origin/master'.
      nothing to commit, working directory clean
      $ git branch
      * master

   Your local repository of poky is identical to the upstream poky
   repository at the time from which it was cloned. As you work with the
   local branch, you can periodically use the ``git pull --rebase``
   command to be sure you are up-to-date with the upstream branch.
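The clone-then-update flow can also be exercised end to end against a throwaway local "upstream" repository, which is handy for practicing before touching the real ``poky`` repository. All paths, names, and identity values below are placeholders:

```shell
# Throwaway demo of the clone-then-update flow (assumes git is installed).
if command -v git >/dev/null 2>&1; then
    tmp=$(mktemp -d)
    git init -q "$tmp/upstream"
    (cd "$tmp/upstream" \
        && git config user.email you@example.com \
        && git config user.name "You" \
        && echo hello > file \
        && git add file \
        && git commit -qm "initial commit")
    git clone -q "$tmp/upstream" "$tmp/clone"
    cd "$tmp/clone"
    git pull --rebase     # a no-op here; keeps the local branch current
    git log --oneline | head -n 1
else
    echo "git not found"
fi
```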
Checking Out by Branch in Poky
------------------------------
When you clone the upstream poky repository, you have access to all its
development branches. Each development branch in a repository is unique
as it forks off the "master" branch. To see and use the files of a
particular development branch locally, you need to know the branch name
and then specifically check out that development branch.
.. note::

   Checking out an active development branch by branch name gives you a
   snapshot of that particular branch at the time you check it out.
   Further development on top of the branch can occur after you have
   checked it out.
1. *Switch to the Poky Directory:* If you have a local poky Git
repository, switch to that directory. If you do not have the local
copy of poky, see the "`Cloning the ``poky``
Repository <#cloning-the-poky-repository>`__" section.
2. *Determine Existing Branch Names:*
   ::

      $ git branch -a
      * master
        remotes/origin/1.1_M1
        remotes/origin/1.1_M2
        remotes/origin/1.1_M3
        remotes/origin/1.1_M4
        remotes/origin/1.2_M1
        remotes/origin/1.2_M2
        remotes/origin/1.2_M3
        .
        .
        .
        remotes/origin/thud
        remotes/origin/thud-next
        remotes/origin/warrior
        remotes/origin/warrior-next
        remotes/origin/zeus
        remotes/origin/zeus-next
        ... and so on ...
3. *Check out the Branch:* Check out the development branch in which you
   want to work. For example, to access the files for the Yocto Project
   DISTRO Release (DISTRO_NAME), use the following command:
   ::

      $ git checkout -b DISTRO_NAME_NO_CAP origin/DISTRO_NAME_NO_CAP
      Branch DISTRO_NAME_NO_CAP set up to track remote branch DISTRO_NAME_NO_CAP from origin.
      Switched to a new branch 'DISTRO_NAME_NO_CAP'

   The previous command checks out the "DISTRO_NAME_NO_CAP" development
   branch and reports that the branch is tracking the upstream
   "origin/DISTRO_NAME_NO_CAP" branch.

   The following command displays the branches that are now part of your
   local poky repository. The asterisk character indicates the branch
   that is currently checked out for work:
   ::

      $ git branch
        master
      * DISTRO_NAME_NO_CAP
.. _checkout-out-by-tag-in-poky:
Checking Out by Tag in Poky
---------------------------
Similar to branches, the upstream repository uses tags to mark specific
commits associated with significant points in a development branch (i.e.
a release point or stage of a release). You might want to set up a local
branch based on one of those points in the repository. The process is
similar to checking out by branch name except you use tag names.
.. note::
Checking out a branch based on a tag gives you a stable set of files
not affected by development on the branch above the tag.
1. *Switch to the Poky Directory:* If you have a local poky Git
repository, switch to that directory. If you do not have the local
copy of poky, see the "`Cloning the ``poky``
Repository <#cloning-the-poky-repository>`__" section.
2. *Fetch the Tag Names:* To check out a branch based on a tag name,
   you need to fetch the upstream tags into your local repository:
   ::

      $ git fetch --tags
      $

3. *List the Tag Names:* You can list the tag names now:
   ::

      $ git tag
      1.1_M1.final
      1.1_M1.rc1
      1.1_M1.rc2
      1.1_M2.final
      1.1_M2.rc1
      .
      .
      .
      yocto-2.5
      yocto-2.5.1
      yocto-2.5.2
      yocto-2.5.3
      yocto-2.6
      yocto-2.6.1
      yocto-2.6.2
      yocto-2.7
      yocto_1.5_M5.rc8
4. *Check out the Branch:*
   ::

      $ git checkout tags/DISTRO_REL_TAG -b my_yocto_DISTRO
      Switched to a new branch 'my_yocto_DISTRO'
      $ git branch
        master
      * my_yocto_DISTRO

   The previous command creates and checks out a local branch named
   "my_yocto_DISTRO", which is based on the commit in the upstream poky
   repository that has the same tag. In this example, the files you have
   available locally as a result of the ``checkout`` command are a
   snapshot of the "DISTRO_NAME_NO_CAP" development branch at the point
   where Yocto Project DISTRO was released.
======================================
Yocto Project Development Tasks Manual
======================================
.. toctree::
:caption: Table of Contents
:numbered:
dev-manual-intro
dev-manual-start
dev-manual-common-tasks
dev-manual-qemu
=============================================
Welcome to The Yocto Project's documentation!
=============================================
.. toctree::
:maxdepth: 1
brief-yoctoprojectqs/brief-yoctoprojectqs
overview-manual/overview-manual
bsp-guide/bsp-guide
ref-manual/ref-manual
dev-manual/dev-manual
adt-manual/adt-manual
kernel-dev/kernel-dev
profile-manual/profile-manual
sdk-manual/sdk-manual
toaster-manual/toaster-manual
test-manual/test-manual
*******************************************************
Working with Advanced Metadata (``yocto-kernel-cache``)
*******************************************************

.. _kernel-dev-advanced-overview:

Overview
========

In addition to supporting configuration fragments and patches, the Yocto
Project kernel tools also support rich
`Metadata <&YOCTO_DOCS_REF_URL;#metadata>`__ that you can use to define
complex policies and Board Support Package (BSP) support. The purpose of
the Metadata and the tools that manage it is to help you manage the
complexity of the configuration and sources used to support multiple
BSPs and Linux kernel types.
Kernel Metadata exists in many places. One area in the Yocto Project
`Source Repositories <&YOCTO_DOCS_OM_URL;#source-repositories>`__ is the
``yocto-kernel-cache`` Git repository. You can find this repository
grouped under the "Yocto Linux Kernel" heading in the `Yocto Project
Source Repositories <&YOCTO_GIT_URL;>`__.
Kernel development tools ("kern-tools") also exist in the Yocto Project
Source Repositories under the "Yocto Linux Kernel" heading in the
``yocto-kernel-tools`` Git repository. The recipe that builds these
tools is ``meta/recipes-kernel/kern-tools/kern-tools-native_git.bb`` in
the `Source Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__ (e.g.
``poky``).

Using Kernel Metadata in a Recipe
=================================

As mentioned in the introduction, the Yocto Project contains kernel
Metadata, which is located in the ``yocto-kernel-cache`` Git repository.
This Metadata defines Board Support Packages (BSPs) that correspond to
definitions in linux-yocto recipes for corresponding BSPs. A BSP
consists of an aggregation of kernel policy and enabled
hardware-specific features. The BSP can be influenced from within the
linux-yocto recipe.

.. note::

   A Linux kernel recipe that contains kernel Metadata (e.g. inherits
   from the ``linux-yocto.inc`` file) is said to be a "linux-yocto
   style" recipe.

Every linux-yocto style recipe must define the
```KMACHINE`` <&YOCTO_DOCS_REF_URL;#var-KMACHINE>`__ variable. This
variable is typically set to the same value as the ``MACHINE`` variable,
which is used by `BitBake <&YOCTO_DOCS_REF_URL;#bitbake-term>`__.
However, in some cases, the variable might instead refer to the
underlying platform of the ``MACHINE``.
Multiple BSPs can reuse the same ``KMACHINE`` name if they are built
using the same BSP description. For example, multiple Corei7-based BSPs
could share the same "intel-corei7-64" value for ``KMACHINE``. It is
important to
realize that ``KMACHINE`` is just for kernel mapping, while ``MACHINE``
is the machine type within a BSP Layer. Even with this distinction,
however, these two variables can hold the same value. See the `BSP
Descriptions <#bsp-descriptions>`__ section for more information.
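As a minimal sketch of that distinction (the machine name "mymachine"
is hypothetical), a BSP configuration could map its own ``MACHINE``
onto a shared kernel description like this:

::

   # hypothetical BSP: MACHINE is "mymachine", but the kernel
   # description is shared with other Corei7-based boards
   KMACHINE_mymachine = "intel-corei7-64"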
Every linux-yocto style recipe must also indicate the Linux kernel
source repository branch used to build the Linux kernel. The
```KBRANCH`` <&YOCTO_DOCS_REF_URL;#var-KBRANCH>`__ variable must be set
to indicate the branch.

.. note::

   You can use the ``KBRANCH`` value to define an alternate branch
   typically with a machine override as shown here from the
   ``meta-yocto-bsp`` layer:

   ::

      KBRANCH_edgerouter = "standard/edgerouter"

The linux-yocto style recipes can optionally define the following
variables:

- ``KERNEL_FEATURES``

- ``LINUX_KERNEL_TYPE``

```LINUX_KERNEL_TYPE`` <&YOCTO_DOCS_REF_URL;#var-LINUX_KERNEL_TYPE>`__
defines the kernel type to be used in assembling the configuration. If
you do not specify a ``LINUX_KERNEL_TYPE``, it defaults to "standard".
Together with ``KMACHINE``, ``LINUX_KERNEL_TYPE`` defines the search
arguments used by the kernel tools to find the appropriate description
within the kernel Metadata with which to build out the sources and
configuration. The linux-yocto recipes define "standard", "tiny", and
"preempt-rt" kernel types. See the "`Kernel Types <#kernel-types>`__"
section for more information on kernel types.
During the build, the kern-tools search for the BSP description file
that most closely matches the ``KMACHINE`` and ``LINUX_KERNEL_TYPE``
variables passed in from the recipe. The tools use the first BSP
description they find that matches both variables. If the tools cannot
find a match, they issue a warning.
The tools first search for the ``KMACHINE`` and then for the
``LINUX_KERNEL_TYPE``. If the tools cannot find a partial match, they
will use the sources from the ``KBRANCH`` and any configuration
specified in the ```SRC_URI`` <&YOCTO_DOCS_REF_URL;#var-SRC_URI>`__.
You can use the
```KERNEL_FEATURES`` <&YOCTO_DOCS_REF_URL;#var-KERNEL_FEATURES>`__
variable to include features (configuration fragments, patches, or both)
that are not already included by the ``KMACHINE`` and
``LINUX_KERNEL_TYPE`` variable combination. For example, to include a
feature specified as "features/netfilter/netfilter.scc", specify:
::

   KERNEL_FEATURES += "features/netfilter/netfilter.scc"

To include a feature called "cfg/sound.scc" just for the ``qemux86``
machine, specify:

::

   KERNEL_FEATURES_append_qemux86 = " cfg/sound.scc"

The value of the entries in ``KERNEL_FEATURES`` is dependent on their
location within the kernel Metadata itself. The examples here are taken
from the ``yocto-kernel-cache`` repository. Each branch of this
repository contains "features" and "cfg" subdirectories at the top
level. For more information, see the "`Kernel Metadata
Syntax <#kernel-metadata-syntax>`__" section.

Kernel Metadata Syntax
======================

The kernel Metadata consists of three primary types of files: ``scc``
[1]_ description files, configuration fragments, and patches. The
``scc`` files define variables and include or otherwise reference any of
the three file types. The description files are used to aggregate all
types of kernel Metadata into what ultimately describes the sources and
the configuration required to build a Linux kernel tailored to a
specific machine.
The ``scc`` description files are used to define two fundamental types
of kernel Metadata:

- Features

- Board Support Packages (BSPs)

Features aggregate sources in the form of patches and configuration
fragments into a modular reusable unit. You can use features to
implement conceptually separate kernel Metadata descriptions such as
pure configuration fragments, simple patches, complex features, and
kernel types. `Kernel types <#kernel-types>`__ define general kernel
features and policy to be reused in the BSPs.
BSPs define hardware-specific features and aggregate them with kernel
types to form the final description of what will be assembled and built.
While the kernel Metadata syntax does not enforce any logical separation
of configuration fragments, patches, features or kernel types, best
practices dictate a logical separation of these types of Metadata. The
following Metadata file hierarchy is recommended:

::

   base/
       bsp/
       cfg/
       features/
       ktypes/
       patches/

The ``bsp`` directory contains the `BSP
descriptions <#bsp-descriptions>`__. The remaining directories all
contain "features". Separating ``bsp`` from the rest of the structure
aids conceptualizing intended usage.
Use these guidelines to help place your ``scc`` description files within
the structure:
- If your file contains only configuration fragments, place the file
  in the ``cfg`` directory.

- If your file contains only source-code fixes, place the file in the
  ``patches`` directory.

- If your file encapsulates a major feature, often combining sources
  and configurations, place the file in the ``features`` directory.

- If your file aggregates non-hardware configuration and patches in
  order to define a base kernel policy or major kernel type to be
  reused across multiple BSPs, place the file in the ``ktypes``
  directory.

These distinctions can easily become blurred - especially as out-of-tree
features slowly merge upstream over time. Also, remember that how the
description files are placed is a purely logical organization and has no
impact on the functionality of the kernel Metadata. There is no impact
because all of ``cfg``, ``features``, ``patches``, and ``ktypes``,
contain "features" as far as the kernel tools are concerned.
Paths used in kernel Metadata files are relative to base, which is
either
```FILESEXTRAPATHS`` <&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS>`__ if
you are creating Metadata in `recipe-space <#recipe-space-metadata>`__,
or the top level of
```yocto-kernel-cache`` <&YOCTO_GIT_URL;/cgit/cgit.cgi/yocto-kernel-cache/tree/>`__
if you are creating `Metadata outside of the
recipe-space <#metadata-outside-the-recipe-space>`__.

Configuration
-------------

The simplest unit of kernel Metadata is the configuration-only feature.
This feature consists of one or more Linux kernel configuration
parameters in a configuration fragment file (``.cfg``) and a ``.scc``
file that describes the fragment.
As an example, consider the Symmetric Multi-Processing (SMP) fragment
used with the ``linux-yocto-4.12`` kernel as defined outside of the
recipe space (i.e. ``yocto-kernel-cache``). This Metadata consists of
two files: ``smp.scc`` and ``smp.cfg``. You can find these files in the
``cfg`` directory of the ``yocto-4.12`` branch in the
``yocto-kernel-cache`` Git repository:

::

   cfg/smp.scc:
      define KFEATURE_DESCRIPTION "Enable SMP for 32 bit builds"
      define KFEATURE_COMPATIBILITY all

      kconf hardware smp.cfg

   cfg/smp.cfg:
      CONFIG_SMP=y
      CONFIG_SCHED_SMT=y
      # Increase default NR_CPUS from 8 to 64 so that platform with
      # more than 8 processors can be all activated at boot time
      CONFIG_NR_CPUS=64
      # The following is needed when setting NR_CPUS to something
      # greater than 8 on x86 architectures, it should be automatically
      # disregarded by Kconfig when using a different arch
      CONFIG_X86_BIGSMP=y

You can find general information on configuration fragment files in the
"`Creating Configuration Fragments <#creating-config-fragments>`__"
section.
Within the ``smp.scc`` file, the
```KFEATURE_DESCRIPTION`` <&YOCTO_DOCS_REF_URL;#var-KFEATURE_DESCRIPTION>`__
statement provides a short description of the fragment. Higher level
kernel tools use this description.
Also within the ``smp.scc`` file, the ``kconf`` command includes the
actual configuration fragment in an ``.scc`` file, and the "hardware"
keyword identifies the fragment as being hardware enabling, as opposed
to general policy, which would use the "non-hardware" keyword. The
distinction is made for the benefit of the configuration validation
tools, which warn you if a hardware fragment overrides a policy set by a
non-hardware fragment.

.. note::

   The description file can include multiple ``kconf`` statements, one
   per fragment.

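For instance, a description file covering two related fragments might
look like the following (the fragment names here are hypothetical):

::

   define KFEATURE_DESCRIPTION "Enable sound and USB audio"
   define KFEATURE_COMPATIBILITY all

   kconf hardware snd_hda.cfg
   kconf non-hardware usb-audio-policy.cfg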
As described in the "`Validating
Configuration <#validating-configuration>`__" section, you can use the
following BitBake command to audit your configuration:

::

   $ bitbake linux-yocto -c kernel_configcheck -f

Patches
-------

Patch descriptions are very similar to configuration fragment
descriptions, which are described in the previous section. However,
instead of a ``.cfg`` file, these descriptions work with source patches
(i.e. ``.patch`` files).
A typical patch includes a description file and the patch itself. As an
example, consider the build patches used with the ``linux-yocto-4.12``
kernel as defined outside of the recipe space (i.e.
``yocto-kernel-cache``). This Metadata consists of several files:
``build.scc`` and a set of ``*.patch`` files. You can find these files
in the ``patches/build`` directory of the ``yocto-4.12`` branch in the
``yocto-kernel-cache`` Git repository.
The following listings show the ``build.scc`` file and part of the
``modpost-mask-trivial-warnings.patch`` file:

::

   patches/build/build.scc:
      patch arm-serialize-build-targets.patch
      patch powerpc-serialize-image-targets.patch
      patch kbuild-exclude-meta-directory-from-distclean-processi.patch

      # applied by kgit
      # patch kbuild-add-meta-files-to-the-ignore-li.patch

      patch modpost-mask-trivial-warnings.patch
      patch menuconfig-check-lxdiaglog.sh-Allow-specification-of.patch

   patches/build/modpost-mask-trivial-warnings.patch:
      From bd48931bc142bdd104668f3a062a1f22600aae61 Mon Sep 17 00:00:00 2001
      From: Paul Gortmaker <paul.gortmaker@windriver.com>
      Date: Sun, 25 Jan 2009 17:58:09 -0500
      Subject: [PATCH] modpost: mask trivial warnings

      Newer HOSTCC will complain about various stdio fcns because
      .
      .
      .
      char *dump_write = NULL, *files_source = NULL;
      int opt;
      --
      2.10.1

      generated by cgit v0.10.2 at 2017-09-28 15:23:23 (GMT)

The description file can include multiple patch statements where each
statement handles a single patch. In the example ``build.scc`` file,
five patch statements exist for the five patches in the directory.
You can create a typical ``.patch`` file using ``diff -Nurp`` or
``git format-patch`` commands. For information on how to create patches,
see the "`Using ``devtool`` to Patch the
Kernel <#using-devtool-to-patch-the-kernel>`__" and "`Using Traditional
Kernel Development to Patch the
Kernel <#using-traditional-kernel-development-to-patch-the-kernel>`__"
sections.
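As a quick sketch (the file name and commit message below are
hypothetical), a patch suitable for a ``patch`` statement can be
produced from a local commit:

::

   $ git add drivers/misc/mydriver.c
   $ git commit -s -m "mydriver: handle probe deferral"
   $ git format-patch -1 HEAD
   0001-mydriver-handle-probe-deferral.patch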

Features
--------

Features are complex kernel Metadata types that consist of configuration
fragments, patches, and possibly other feature description files. As an
example, consider the following generic listing:

::

   features/myfeature.scc
      define KFEATURE_DESCRIPTION "Enable myfeature"

      patch 0001-myfeature-core.patch
      patch 0002-myfeature-interface.patch

      include cfg/myfeature_dependency.scc

      kconf non-hardware myfeature.cfg

This example shows how the ``patch`` and ``kconf`` commands are used as
well as how an additional feature description file is included with the
``include`` command.
Typically, features are less granular than configuration fragments and
are more likely than configuration fragments and patches to be the types
of things you want to specify in the ``KERNEL_FEATURES`` variable of the
Linux kernel recipe. See the "`Using Kernel Metadata in a
Recipe <#using-kernel-metadata-in-a-recipe>`__" section earlier in the
manual.

Kernel Types
------------

A kernel type defines a high-level kernel policy by aggregating
non-hardware configuration fragments with patches you want to use when
building a Linux kernel of a specific type (e.g. a real-time kernel).
Syntactically, kernel types are no different than features as described
in the "`Features <#features>`__" section. The
```LINUX_KERNEL_TYPE`` <&YOCTO_DOCS_REF_URL;#var-LINUX_KERNEL_TYPE>`__
variable in the kernel recipe selects the kernel type. For example, in
the ``linux-yocto_4.12.bb`` kernel recipe found in
``poky/meta/recipes-kernel/linux``, a
```require`` <&YOCTO_DOCS_BB_URL;#require-inclusion>`__ directive
includes the ``poky/meta/recipes-kernel/linux/linux-yocto.inc`` file,
which has the following statement that defines the default kernel type:

::

   LINUX_KERNEL_TYPE ??= "standard"
Another example would be the real-time kernel (i.e.
``linux-yocto-rt_4.12.bb``). This kernel recipe directly sets the kernel
type as follows:

::

   LINUX_KERNEL_TYPE = "preempt-rt"

.. note::

   You can find kernel recipes in the ``meta/recipes-kernel/linux``
   directory of the `Source
   Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__ (e.g.
   ``poky/meta/recipes-kernel/linux/linux-yocto_4.12.bb``). See the
   "`Using Kernel Metadata in a
   Recipe <#using-kernel-metadata-in-a-recipe>`__" section for more
   information.

Three kernel types ("standard", "tiny", and "preempt-rt") are supported
for Linux Yocto kernels:

- "standard": Includes the generic Linux kernel policy of the Yocto
  Project linux-yocto kernel recipes. This policy includes, among other
  things, which file systems, networking options, core kernel features,
  and debugging and tracing options are supported.

- "preempt-rt": Applies the ``PREEMPT_RT`` patches and the
  configuration options required to build a real-time Linux kernel.
  This kernel type inherits from the "standard" kernel type.

- "tiny": Defines a bare minimum configuration meant to serve as a base
  for very small Linux kernels. The "tiny" kernel type is independent
  from the "standard" configuration. Although the "tiny" kernel type
  does not currently include any source changes, it might in the
  future.

For any given kernel type, the Metadata is defined by the ``.scc`` file
(e.g. ``standard.scc``). Here is a partial listing for the
``standard.scc`` file, which is found in the ``ktypes/standard``
directory of the ``yocto-kernel-cache`` Git repository:

::

   # Include this kernel type fragment to get the standard features and
   # configuration values.

   # Note: if only the features are desired, but not the configuration
   #       then this should be included as:
   #       include ktypes/standard/standard.scc nocfg
   # if no chained configuration is desired, include it as:
   #       include ktypes/standard/standard.scc nocfg inherit

   include ktypes/base/base.scc
   branch standard

   kconf non-hardware standard.cfg

   include features/kgdb/kgdb.scc
   .
   .
   .
   include cfg/net/ip6_nf.scc
   include cfg/net/bridge.scc

   include cfg/systemd.scc

   include features/rfkill/rfkill.scc

As with any ``.scc`` file, a kernel type definition can aggregate other
``.scc`` files with ``include`` commands. These definitions can also
directly pull in configuration fragments and patches with the ``kconf``
and ``patch`` commands, respectively.

.. note::

   It is not strictly necessary to create a kernel type ``.scc`` file.
   The Board Support Package (BSP) file can implicitly define the
   kernel type using a ``define KTYPE myktype`` line. See the "`BSP
   Descriptions <#bsp-descriptions>`__" section for more information.


BSP Descriptions
----------------

BSP descriptions (i.e. ``*.scc`` files) combine kernel types with
hardware-specific features. The hardware-specific Metadata is typically
defined independently in the BSP layer, and then aggregated with each
supported kernel type.

.. note::

   For BSPs supported by the Yocto Project, the BSP description files
   are located in the ``bsp`` directory of the ``yocto-kernel-cache``
   repository organized under the "Yocto Linux Kernel" heading in the
   `Yocto Project Source Repositories <&YOCTO_GIT_URL;>`__.

This section overviews the BSP description structure, the aggregation
concepts, and presents a detailed example using a BSP supported by the
Yocto Project (i.e. BeagleBone Board). For complete information on BSP
layer file hierarchy, see the `Yocto Project Board Support Package (BSP)
Developer's Guide <&YOCTO_DOCS_BSP_URL;>`__.

.. _bsp-description-file-overview:

Overview
~~~~~~~~

For simplicity, consider the following root BSP layer description files
for the BeagleBone board. These files employ both a structure and naming
convention for consistency. The naming convention for the file is as
follows:

::

   bsp_root_name-kernel_type.scc

Here are some example root layer BSP filenames for the BeagleBone Board
BSP, which is supported by the Yocto Project:

::

   beaglebone-standard.scc
   beaglebone-preempt-rt.scc

Each file uses the BSP root name (i.e. "beaglebone") followed by the
kernel type.
Examine the ``beaglebone-standard.scc`` file:

::

   define KMACHINE beaglebone
   define KTYPE standard
   define KARCH arm

   include ktypes/standard/standard.scc
   branch beaglebone

   include beaglebone.scc

   # default policy for standard kernels
   include features/latencytop/latencytop.scc
   include features/profiling/profiling.scc

Every top-level BSP description file should define the
```KMACHINE`` <&YOCTO_DOCS_REF_URL;#var-KMACHINE>`__,
```KTYPE`` <&YOCTO_DOCS_REF_URL;#var-KTYPE>`__, and
```KARCH`` <&YOCTO_DOCS_REF_URL;#var-KARCH>`__ variables. These
variables allow the OpenEmbedded build system to identify the
description as meeting the criteria set by the recipe being built. This
example supports the "beaglebone" machine for the "standard" kernel and
the "arm" architecture.
Be aware that a hard link between the ``KTYPE`` variable and a kernel
type description file does not exist. Thus, if you do not have the
kernel type defined in your kernel Metadata as it is here, you only need
to ensure that the
```LINUX_KERNEL_TYPE`` <&YOCTO_DOCS_REF_URL;#var-LINUX_KERNEL_TYPE>`__
variable in the kernel recipe and the ``KTYPE`` variable in the BSP
description file match.
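A minimal sketch of that match, using a hypothetical "mytype" kernel
type and a hypothetical "mybsp" board, is the following pair of
settings:

::

   mybsp-mytype.scc:
      define KMACHINE mybsp
      define KTYPE mytype
      define KARCH arm

   linux-mybsp_4.12.bb:
      LINUX_KERNEL_TYPE = "mytype"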
To separate your kernel policy from your hardware configuration, you
include a kernel type (``ktype``), such as "standard". In the previous
example, this is done using the following:

::

   include ktypes/standard/standard.scc

This file aggregates all the configuration fragments, patches, and
features that make up your standard kernel policy. See the "`Kernel
Types <#kernel-types>`__" section for more information.
To aggregate common configurations and features specific to the kernel
for mybsp, use the following:

::

   include mybsp.scc

You can see that in the BeagleBone example with the following:

::

   include beaglebone.scc

For information on how to break a complete ``.config`` file into the
various configuration fragments, see the "`Creating Configuration
Fragments <#creating-config-fragments>`__" section.
Finally, if you have any configurations specific to the hardware that
are not in a ``*.scc`` file, you can include them as follows:

::

   kconf hardware mybsp-extra.cfg

The BeagleBone example does not include these types of configurations.
However, the Malta 32-bit board does ("mti-malta32"). Here is the
``mti-malta32-le-standard.scc`` file:

::

   define KMACHINE mti-malta32-le
   define KMACHINE qemumipsel
   define KTYPE standard
   define KARCH mips

   include ktypes/standard/standard.scc
   branch mti-malta32

   include mti-malta32.scc
   kconf hardware mti-malta32-le.cfg

.. _bsp-description-file-example-minnow:

Example
~~~~~~~

Many real-world examples are more complex. Like any other ``.scc`` file,
BSP descriptions can aggregate features. Consider the Minnow BSP
definition from the ``linux-yocto-4.4`` branch of the
``yocto-kernel-cache`` (i.e.
``yocto-kernel-cache/bsp/minnow/minnow.scc``):

.. note::

   Although the Minnow Board BSP is unused, the Metadata remains and is
   being used here just as an example.

::

   include cfg/x86.scc
   include features/eg20t/eg20t.scc
   include cfg/dmaengine.scc
   include features/power/intel.scc
   include cfg/efi.scc
   include features/usb/ehci-hcd.scc
   include features/usb/ohci-hcd.scc
   include features/usb/usb-gadgets.scc
   include features/usb/touchscreen-composite.scc
   include cfg/timer/hpet.scc
   include features/leds/leds.scc
   include features/spi/spidev.scc
   include features/i2c/i2cdev.scc
   include features/mei/mei-txe.scc

   # Earlyprintk and port debug requires 8250
   kconf hardware cfg/8250.cfg

   kconf hardware minnow.cfg
   kconf hardware minnow-dev.cfg
The ``minnow.scc`` description file includes a hardware configuration
fragment (``minnow.cfg``) specific to the Minnow BSP as well as several
more general configuration fragments and features enabling hardware
found on the machine. This ``minnow.scc`` description file is then
included in each of the three "minnow" description files for the
supported kernel types (i.e. "standard", "preempt-rt", and "tiny").
Consider the "minnow" description for the "standard" kernel type (i.e.
``minnow-standard.scc``):

::

   define KMACHINE minnow
   define KTYPE standard
   define KARCH i386

   include ktypes/standard

   include minnow.scc

   # Extra minnow configs above the minimal defined in minnow.scc
   include cfg/efi-ext.scc
   include features/media/media-all.scc
   include features/sound/snd_hda_intel.scc

   # The following should really be in standard.scc
   # USB live-image support
   include cfg/usb-mass-storage.scc
   include cfg/boot-live.scc

   # Basic profiling
   include features/latencytop/latencytop.scc
   include features/profiling/profiling.scc

   # Requested drivers that don't have an existing scc
   kconf hardware minnow-drivers-extra.cfg

The ``include`` command midway through the file includes the
``minnow.scc`` description that defines all enabled hardware for the BSP
that is common to all kernel types. Using this command significantly
reduces duplication.
Now consider the "minnow" description for the "tiny" kernel type (i.e.
``minnow-tiny.scc``):

::

   define KMACHINE minnow
   define KTYPE tiny
   define KARCH i386

   include ktypes/tiny

   include minnow.scc

As you might expect, the "tiny" description includes quite a bit less.
In fact, it includes only the minimal policy defined by the "tiny"
kernel type and the hardware-specific configuration required for booting
the machine along with the most basic functionality of the system as
defined in the base "minnow" description file.
Notice again the three critical variables:
```KMACHINE`` <&YOCTO_DOCS_REF_URL;#var-KMACHINE>`__,
```KTYPE`` <&YOCTO_DOCS_REF_URL;#var-KTYPE>`__, and
```KARCH`` <&YOCTO_DOCS_REF_URL;#var-KARCH>`__. Of these variables, only
``KTYPE`` has changed to specify the "tiny" kernel type.

Kernel Metadata Location
========================

Kernel Metadata always exists outside of the kernel tree either defined
in a kernel recipe (recipe-space) or outside of the recipe. Where you
choose to define the Metadata depends on what you want to do and how you
intend to work. Regardless of where you define the kernel Metadata, the
syntax used applies equally.
If you are unfamiliar with the Linux kernel and only wish to apply a
configuration and possibly a couple of patches provided to you by
others, the recipe-space method is recommended. This method is also a
good approach if you are working with Linux kernel sources you do not
control or if you just do not want to maintain a Linux kernel Git
repository on your own. For partial information on how you can define
kernel Metadata in the recipe-space, see the "`Modifying an Existing
Recipe <#modifying-an-existing-recipe>`__" section.
Conversely, if you are actively developing a kernel and are already
maintaining a Linux kernel Git repository of your own, you might find it
more convenient to work with kernel Metadata kept outside the
recipe-space. Working with Metadata in this area can make iterative
development of the Linux kernel more efficient outside of the BitBake
environment.

Recipe-Space Metadata
---------------------

When stored in recipe-space, the kernel Metadata files reside in a
directory hierarchy below
```FILESEXTRAPATHS`` <&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS>`__. For
a linux-yocto recipe or for a Linux kernel recipe derived by copying and
modifying
``oe-core/meta-skeleton/recipes-kernel/linux/linux-yocto-custom.bb`` to
a recipe in your layer, ``FILESEXTRAPATHS`` is typically set to
``${``\ ```THISDIR`` <&YOCTO_DOCS_REF_URL;#var-THISDIR>`__\ ``}/${``\ ```PN`` <&YOCTO_DOCS_REF_URL;#var-PN>`__\ ``}``.
See the "`Modifying an Existing
Recipe <#modifying-an-existing-recipe>`__" section for more information.
Here is an example that shows a trivial tree of kernel Metadata stored
in recipe-space within a BSP layer:

::

   meta-my_bsp_layer/
   `-- recipes-kernel
       `-- linux
           `-- linux-yocto
               |-- bsp-standard.scc
               |-- bsp.cfg
               `-- standard.cfg
When the Metadata is stored in recipe-space, you must take steps to
ensure BitBake has the necessary information to decide what files to
fetch and when they need to be fetched again. It is only necessary to
specify the ``.scc`` files on the
```SRC_URI`` <&YOCTO_DOCS_REF_URL;#var-SRC_URI>`__. BitBake parses them
and fetches any files referenced in the ``.scc`` files by the
``include``, ``patch``, or ``kconf`` commands. Because of this, it is
necessary to bump the recipe ```PR`` <&YOCTO_DOCS_REF_URL;#var-PR>`__
value when changing the content of files not explicitly listed in the
``SRC_URI``.
If the BSP description is in recipe space, you cannot simply list the
``*.scc`` in the ``SRC_URI`` statement. You need to use the following
form from your kernel append file:

::

   SRC_URI_append_myplatform = " \
       file://myplatform;type=kmeta;destsuffix=myplatform \
       "

Metadata Outside the Recipe-Space
---------------------------------

When stored outside of the recipe-space, the kernel Metadata files
reside in a separate repository. The OpenEmbedded build system adds the
Metadata to the build as a "type=kmeta" repository through the
```SRC_URI`` <&YOCTO_DOCS_REF_URL;#var-SRC_URI>`__ variable. As an
example, consider the following ``SRC_URI`` statement from the
``linux-yocto_4.12.bb`` kernel recipe:

::

   SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.12.git;name=machine;branch=${KBRANCH}; \
              git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.12;destsuffix=${KMETA}"
``${KMETA}``, in this context, is simply used to name the directory into
which the Git fetcher places the Metadata. This behavior is no different
than any multi-repository ``SRC_URI`` statement used in a recipe (e.g.
see the previous section).
You can keep kernel Metadata in a "kernel-cache", which is a directory
containing configuration fragments. As with any Metadata kept outside
the recipe-space, you simply need to use the ``SRC_URI`` statement with
the "type=kmeta" attribute. Doing so makes the kernel Metadata available
during the configuration phase.
If you modify the Metadata, you must not forget to update the ``SRCREV``
statements in the kernel's recipe. In particular, you need to update the
``SRCREV_meta`` variable to match the commit in the ``KMETA`` branch you
wish to use. Changing the data in these branches and not updating the
``SRCREV`` statements to match will cause the build to fetch an older
commit.
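As a sketch, the revision updates look like the following (the hash
values shown are placeholders, not real commits):

::

   # pick the commit on the machine branch of the kernel repository
   SRCREV_machine = "1111111111111111111111111111111111111111"
   # pick the matching commit on the KMETA branch of the kernel-cache
   SRCREV_meta = "2222222222222222222222222222222222222222"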

Organizing Your Source
======================

Many recipes based on the ``linux-yocto-custom.bb`` recipe use Linux
kernel sources that have only a single branch - "master". This type of
repository structure is fine for linear development supporting a single
machine and architecture. However, if you work with multiple boards and
architectures, a kernel source repository with multiple branches is more
efficient. For example, suppose you need a series of patches for one
board to boot. Sometimes, these patches are works-in-progress or
fundamentally wrong, yet they are still necessary for specific boards.
In these situations, you most likely do not want to include these
patches in every kernel you build (i.e. have the patches as part of the
lone "master" branch). It is situations like these that give rise to
multiple branches used within a Linux kernel sources Git repository.
Repository organization strategies exist that maximize source reuse,
remove redundancy, and logically order your changes. This section
presents strategies for the following cases:

- Encapsulating patches in a feature description and only including
  the patches in the BSP descriptions of the applicable boards.

- Creating a machine branch in your kernel source repository and
  applying the patches on that branch only.

- Creating a feature branch in your kernel source repository and
  merging that branch into your BSP when needed.

The approach you take is entirely up to you and depends on what works
best for your development model.

Encapsulating Patches
---------------------

If you are reusing patches from an external tree and are not working on
the patches, you might find the encapsulated feature to be appropriate.
Given this scenario, you do not need to create any branches in the
source repository. Rather, you just take the static patches you need and
encapsulate them within a feature description. Once you have the feature
description, you simply include that into the BSP description as
described in the "`BSP Descriptions <#bsp-descriptions>`__" section.
You can find information on how to create patches and BSP descriptions
in the "`Patches <#patches>`__" and "`BSP
Descriptions <#bsp-descriptions>`__" sections.

Machine Branches
----------------

When you have multiple machines and architectures to support, or you are
actively working on board support, it is more efficient to create
branches in the repository based on individual machines. Having machine
branches allows common source to remain in the "master" branch with any
features specific to a machine stored in the appropriate machine branch.
This organization method frees you from continually reintegrating your
patches into a feature.
Once you have a new branch, you can set up your kernel Metadata to use
the branch a couple of different ways. In the recipe, you can specify
the new branch as the ``KBRANCH`` to use for the board as follows:

::

   KBRANCH = "mynewbranch"

Another method is to use the ``branch`` command in the BSP description:

::

   mybsp.scc:
      define KMACHINE mybsp
      define KTYPE standard
      define KARCH i386
      include standard.scc

      branch mynewbranch

      include mybsp-hw.scc
If you find yourself with numerous branches, you might consider using a
hierarchical branching system similar to what the Yocto Linux Kernel Git
repositories use:

::

   common/kernel_type/machine
If you had two kernel types, "standard" and "small" for instance, three
machines, and common as ``mydir``, the branches in your Git repository
might look like this:

::

   mydir/base
   mydir/standard/base
   mydir/standard/machine_a
   mydir/standard/machine_b
   mydir/standard/machine_c
   mydir/small/base
   mydir/small/machine_a
This organization can help clarify the branch relationships. In this
case, ``mydir/standard/machine_a`` includes everything in ``mydir/base``
and ``mydir/standard/base``. The "standard" and "small" branches add
sources specific to those kernel types that for whatever reason are not
appropriate for the other branches.
.. note::

   The "base" branches are an artifact of the way Git manages its data
   internally on the filesystem: Git will not allow you to use both
   ``mydir/standard`` and ``mydir/standard/machine_a`` because it would
   have to create a file and a directory named "standard".
Feature Branches
----------------
When you are actively developing new features, it can be more efficient
to work with that feature as a branch, rather than as a set of patches
that have to be regularly updated. The Yocto Project Linux kernel tools
provide for this with the ``git merge`` command.
To merge a feature branch into a BSP, insert the ``git merge`` command
after any ``branch`` commands::

   mybsp.scc:
      define KMACHINE mybsp
      define KTYPE standard
      define KARCH i386
      include standard.scc
      branch mynewbranch
      git merge myfeature

      include mybsp-hw.scc
.. _scc-reference:
SCC Description File Reference
==============================
This section provides a brief reference for the commands you can use
within an SCC description file (``.scc``):
- ``branch [ref]``: Creates a new branch relative to the current branch
  (typically ``${KTYPE}``) using the currently checked-out branch, or
  "ref" if specified.

- ``define``: Defines variables, such as
  ```KMACHINE`` <&YOCTO_DOCS_REF_URL;#var-KMACHINE>`__,
  ```KTYPE`` <&YOCTO_DOCS_REF_URL;#var-KTYPE>`__,
  ```KARCH`` <&YOCTO_DOCS_REF_URL;#var-KARCH>`__, and
  ```KFEATURE_DESCRIPTION`` <&YOCTO_DOCS_REF_URL;#var-KFEATURE_DESCRIPTION>`__.

- ``include SCC_FILE``: Includes an SCC file in the current file. The
  file is parsed as if you had inserted it inline.

- ``kconf [hardware|non-hardware] CFG_FILE``: Queues a configuration
  fragment for merging into the final Linux ``.config`` file.

- ``git merge GIT_BRANCH``: Merges the feature branch into the current
  branch.

- ``patch PATCH_FILE``: Applies the patch to the current Git branch.
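As a combined illustration, a small feature description might use several of these commands together. The file names, feature name, and patch name below are hypothetical:

```
# myfeature.scc (hypothetical example)
define KFEATURE_DESCRIPTION "Enable the foo subsystem"

# Queue a configuration fragment for merging into the final .config:
kconf non-hardware myfeature.cfg

# Apply a patch to the current Git branch:
patch 0001-foo-enable-extra-debug.patch
```

Such a file could then be pulled into a BSP description with ``include myfeature.scc``, as described in the preceding sections.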
.. [1]
   ``scc`` stands for Series Configuration Control, but the naming has
   less significance in the current implementation of the tooling than
   it had in the past. Consider ``scc`` files to be description files.

************************
Advanced Kernel Concepts
************************
.. _kernel-big-picture:
Yocto Project Kernel Development and Maintenance
================================================
Kernels available through the Yocto Project (Yocto Linux kernels), like
other kernels, are based off the Linux kernel releases from
http://www.kernel.org. At the beginning of a major Linux kernel
development cycle, the Yocto Project team chooses a Linux kernel based
on factors such as release timing, the anticipated release timing of
final upstream ``kernel.org`` versions, and Yocto Project feature
requirements. Typically, the Linux kernel chosen is in the final stages
of development by the Linux community. In other words, the Linux kernel
is in the release candidate or "rc" phase and has yet to reach final
release. But, by being in the final stages of external development, the
team knows that the ``kernel.org`` final release will clearly be within
the early stages of the Yocto Project development window.
This balance allows the Yocto Project team to deliver the most
up-to-date Yocto Linux kernel possible, while still ensuring that the
team has a stable official release for the baseline Linux kernel
version.
As implied earlier, the ultimate source for Yocto Linux kernels are
released kernels from ``kernel.org``. In addition to a foundational
kernel from ``kernel.org``, the available Yocto Linux kernels contain a
mix of important new mainline developments, non-mainline developments
(when no alternative exists), Board Support Package (BSP) developments,
and custom features. These additions result in a commercially released
Yocto Project Linux kernel that caters to specific embedded designer
needs for targeted hardware.
You can find a web interface to the Yocto Linux kernels in the `Source
Repositories <&YOCTO_DOCS_OM_URL;#source-repositories>`__ at
&YOCTO_GIT_URL;. If you look at the interface, you will see to
the left a grouping of Git repositories titled "Yocto Linux Kernel".
Within this group, you will find several Linux Yocto kernels developed
and included with Yocto Project releases:
- *``linux-yocto-4.1``:* The stable Yocto Project kernel to use with
  the Yocto Project Release 2.0. This kernel is based on the Linux 4.1
  released kernel.

- *``linux-yocto-4.4``:* The stable Yocto Project kernel to use with
  the Yocto Project Release 2.1. This kernel is based on the Linux 4.4
  released kernel.

- *``linux-yocto-4.6``:* A temporary kernel that is not tied to any
  Yocto Project release.

- *``linux-yocto-4.8``:* The stable Yocto Project kernel to use with
  the Yocto Project Release 2.2.

- *``linux-yocto-4.9``:* The stable Yocto Project kernel to use with
  the Yocto Project Release 2.3. This kernel is based on the Linux 4.9
  released kernel.

- *``linux-yocto-4.10``:* The default stable Yocto Project kernel to
  use with the Yocto Project Release 2.3. This kernel is based on the
  Linux 4.10 released kernel.

- *``linux-yocto-4.12``:* The default stable Yocto Project kernel to
  use with the Yocto Project Release 2.4. This kernel is based on the
  Linux 4.12 released kernel.

- *``yocto-kernel-cache``:* The ``yocto-kernel-cache`` contains patches
  and configurations for the linux-yocto kernel tree. This repository
  is useful when working on the linux-yocto kernel. For more
  information on this "Advanced Kernel Metadata", see the "`Working
  With Advanced Metadata
  (``yocto-kernel-cache``) <#kernel-dev-advanced>`__" chapter.

- *``linux-yocto-dev``:* A development kernel based on the latest
  upstream release candidate available.
.. note::

   Long Term Support Initiative (LTSI) for Yocto Linux kernels is as
   follows:

   - For Yocto Project releases 1.7, 1.8, and 2.0, the LTSI kernel is
     ``linux-yocto-3.14``.

   - For Yocto Project releases 2.1, 2.2, and 2.3, the LTSI kernel is
     ``linux-yocto-4.1``.

   - For Yocto Project release 2.4, the LTSI kernel is
     ``linux-yocto-4.9``.

   - ``linux-yocto-4.4`` is an LTS kernel.
Once a Yocto Linux kernel is officially released, the Yocto Project team
goes into their next development cycle, or upward revision (uprev)
cycle, while still continuing maintenance on the released kernel. It is
important to note that the most sustainable and stable way to include
feature development upstream is through a kernel uprev process.
Back-porting hundreds of individual fixes and minor features from
various kernel versions is not sustainable and can easily compromise
quality.
During the uprev cycle, the Yocto Project team uses an ongoing analysis
of Linux kernel development, BSP support, and release timing to select
the best possible ``kernel.org`` Linux kernel version on which to base
subsequent Yocto Linux kernel development. The team continually monitors
Linux community kernel development to look for significant features of
interest. The team does consider back-porting large features if they
have a significant advantage. User or community demand can also trigger
a back-port or creation of new functionality in the Yocto Project
baseline kernel during the uprev cycle.
Generally speaking, every new Linux kernel both adds features and
introduces new bugs. These consequences are the basic properties of
upstream Linux kernel development and are managed by the Yocto Project
team's Yocto Linux kernel development strategy. It is the Yocto Project
team's policy to not back-port minor features to the released Yocto
Linux kernel. They only consider back-porting significant technological
jumps, and that is done only after a complete gap analysis. The reason
for this policy is that back-porting any small to medium sized change
from an evolving Linux kernel can easily create mismatches,
incompatibilities, and very subtle errors.
The policies described in this section result in both a stable and a
cutting edge Yocto Linux kernel that mixes forward ports of existing
Linux kernel features and significant and critical new functionality.
Forward porting Linux kernel functionality into the Yocto Linux kernels
available through the Yocto Project can be thought of as a "micro
uprev." The many “micro uprevs” produce a Yocto Linux kernel version
with a mix of important new mainline, non-mainline, BSP developments and
feature integrations. This Yocto Linux kernel gives insight into new
features and allows focused amounts of testing to be done on the kernel,
which prevents surprises when selecting the next major uprev. The
quality of these cutting edge Yocto Linux kernels is evolving and the
kernels are used in leading edge feature and BSP development.
Yocto Linux Kernel Architecture and Branching Strategies
========================================================
As mentioned earlier, a key goal of the Yocto Project is to present the
developer with a kernel that has a clear and continuous history that is
visible to the user. The architecture and mechanisms, in particular the
branching strategies, used to achieve that goal operate in a manner
similar to upstream Linux kernel development in ``kernel.org``.
You can think of a Yocto Linux kernel as consisting of a baseline Linux
kernel with added features logically structured on top of the baseline.
The features are tagged and organized by way of a branching strategy
implemented by the Yocto Project team using the Source Code Manager
(SCM) Git.
.. note::

   - Git is the obvious SCM for meeting the Yocto Linux kernel
     organizational and structural goals described in this section. Not
     only is Git the SCM for Linux kernel development in
     ``kernel.org``, but Git also continues to grow in popularity and
     supports many different work flows, front-ends, and management
     techniques.

   - You can find documentation on Git at
     http://git-scm.com/documentation. You can also get an
     introduction to Git as it applies to the Yocto Project in the
     "`Git <&YOCTO_DOCS_OM_URL;#git>`__" section in the Yocto Project
     Overview and Concepts Manual. The latter reference provides an
     overview of Git and presents a minimal set of Git commands that
     allows you to be functional using Git. You can use as much, or as
     little, of what Git has to offer to accomplish what you need for
     your project. You do not have to be a "Git Expert" in order to use
     it with the Yocto Project.
Using Git's tagging and branching features, the Yocto Project team
creates kernel branches at points where functionality is no longer
shared and thus, needs to be isolated. For example, board-specific
incompatibilities would require different functionality and would
require a branch to separate the features. Likewise, for specific kernel
features, the same branching strategy is used.
This "tree-like" architecture results in a structure that has features
organized to be specific for particular functionality, single kernel
types, or a subset of kernel types. Thus, the user has the ability to
see the added features and the commits that make up those features. In
addition to being able to see added features, the user can also view the
history of what made up the baseline Linux kernel.
Another benefit of this strategy is that the team does not have to
store the same feature twice internally in the tree. Rather, the kernel
team stores the unique differences required to apply the feature onto
the kernel type in question.
.. note::

   The Yocto Project team strives to place features in the tree such
   that features can be shared by all boards and kernel types where
   possible. However, during development cycles or when large features
   are merged, the team cannot always follow this practice. In those
   cases, the team uses isolated branches to merge features.
BSP-specific code additions are handled in a similar manner to
kernel-specific additions. Some BSPs only make sense given certain
kernel types. So, for these types, the team creates branches off the end
of that kernel type for all of the BSPs that are supported on that
kernel type. From the perspective of the tools that create the BSP
branch, the BSP is really no different than a feature. Consequently, the
same branching strategy applies to BSPs as it does to kernel features.
So again, rather than store the BSP twice, the team only stores the
unique differences for the BSP across the supported multiple kernels.
While this strategy can result in a tree with a significant number of
branches, it is important to realize that from the developer's point of
view, there is a linear path that travels from the baseline
``kernel.org``, through a select group of features and ends with their
BSP-specific commits. In other words, the divisions of the kernel are
transparent and are not relevant to the developer on a day-to-day basis.
From the developer's perspective, this path is the "master" branch in
Git terms. The developer does not need to be aware of the existence of
any other branches at all. Of course, value exists in having these
branches in the tree, should a person decide to explore them. For
example, a comparison between two BSPs at either the commit level or at
the line-by-line code ``diff`` level is now a trivial operation.
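As a rough sketch of why such comparisons are trivial, the following creates two toy "BSP" branches in a scratch repository and compares them at both the commit level and the line-by-line level. All names (``cmpdemo``, ``bsp/machine_a``, ``board.cfg``) are illustrative, not actual Yocto branches or files:

```shell
# Illustrative sketch: comparing two toy BSP branches with Git.
git init -q cmpdemo && cd cmpdemo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "shared baseline"
git branch bsp/machine_a
git checkout -q -b bsp/machine_b
echo "machine_b-specific tweak" > board.cfg
git add board.cfg
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "machine_b: add board.cfg"
# Commit-level comparison between the two BSPs:
git log --oneline bsp/machine_a..bsp/machine_b
# Line-by-line comparison:
git diff bsp/machine_a bsp/machine_b
```

Both comparisons are single commands because the two BSPs share the same baseline history.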
The following illustration shows the conceptual Yocto Linux kernel.
In the illustration, the "Kernel.org Branch Point" marks the specific
spot (or Linux kernel release) from which the Yocto Linux kernel is
created. From this point forward in the tree, features and differences
are organized and tagged.
The "Yocto Project Baseline Kernel" contains functionality that is
common to every kernel type and BSP that is organized further along in
the tree. Placing these common features in the tree this way means
features do not have to be duplicated along individual branches of the
tree structure.
From the "Yocto Project Baseline Kernel", branch points represent
specific functionality for individual Board Support Packages (BSPs) as
well as real-time kernels. The illustration represents this through
three BSP-specific branches and a real-time kernel branch. Each branch
represents some unique functionality for the BSP or for a real-time
Yocto Linux kernel.
In this example structure, the "Real-time (rt) Kernel" branch has common
features for all real-time Yocto Linux kernels and contains more
branches for individual BSP-specific real-time kernels. The illustration
shows three branches as an example. Each branch points the way to
specific, unique features for a respective real-time kernel as they
apply to a given BSP.
The resulting tree structure presents a clear path of markers (or
branches) to the developer that, for all practical purposes, is the
Yocto Linux kernel needed for any given set of requirements.
.. note::

   Keep in mind the figure does not take into account all the supported
   Yocto Linux kernels, but rather shows a single generic kernel just
   for conceptual purposes. Also keep in mind that this structure
   represents the Yocto Project `Source
   Repositories <&YOCTO_DOCS_OM_URL;#source-repositories>`__ that are
   either pulled from during the build or established on the host
   development system prior to the build by either cloning a particular
   kernel's Git repository or by downloading and unpacking a tarball.
Working with the kernel as a structured tree follows recognized
community best practices. In particular, the kernel as shipped with the
product, should be considered an "upstream source" and viewed as a
series of historical and documented modifications (commits). These
modifications represent the development and stabilization done by the
Yocto Project kernel development team.
Because commits only change at significant release points in the product
life cycle, developers can work on a branch created from the last
relevant commit in the shipped Yocto Project Linux kernel. As mentioned
previously, the structure is transparent to the developer because the
kernel tree is left in this state after cloning and building the kernel.
Kernel Build File Hierarchy
===========================
Upstream storage of all the available kernel source code is one thing,
while representing and using the code on your host development system is
another. Conceptually, you can think of the kernel source repositories
as all the source files necessary for all the supported Yocto Linux
kernels. As a developer, you are just interested in the source files for
the kernel on which you are working. And, furthermore, you need them
available on your host system.
Kernel source code is available on your host system several different
ways:
- *Files Accessed While Using ``devtool``:* ``devtool``, which is
  available with the Yocto Project, is the preferred method by which to
  modify the kernel. See the "`Kernel Modification
  Workflow <#kernel-modification-workflow>`__" section.

- *Cloned Repository:* If you are working in the kernel all the time,
  you probably would want to set up your own local Git repository of
  the Yocto Linux kernel tree. For information on how to clone a Yocto
  Linux kernel Git repository, see the "`Preparing the Build Host to
  Work on the
  Kernel <#preparing-the-build-host-to-work-on-the-kernel>`__" section.

- *Temporary Source Files from a Build:* If you just need to make some
  patches to the kernel using a traditional BitBake workflow (i.e. not
  using ``devtool``), you can access temporary kernel source files that
  were extracted and used during a kernel build.
The temporary kernel source files resulting from a build using BitBake
have a particular hierarchy. When you build the kernel on your
development system, all files needed for the build are taken from the
source repositories pointed to by the
```SRC_URI`` <&YOCTO_DOCS_REF_URL;#var-SRC_URI>`__ variable and gathered
in a temporary work area where they are subsequently used to create the
unique kernel. Thus, in a sense, the process constructs a local source
tree specific to your kernel from which to generate the new kernel
image.
The following figure shows the temporary file structure created on your
host system when you build the kernel using BitBake. This `Build
Directory <&YOCTO_DOCS_REF_URL;#build-directory>`__ contains all the
source files used during the build.
Again, for additional information on the Yocto Project kernel's
architecture and its branching strategy, see the "`Yocto Linux Kernel
Architecture and Branching
Strategies <#yocto-linux-kernel-architecture-and-branching-strategies>`__"
section. You can also reference the "`Using ``devtool`` to Patch the
Kernel <#using-devtool-to-patch-the-kernel>`__" and "`Using Traditional
Kernel Development to Patch the
Kernel <#using-traditional-kernel-development-to-patch-the-kernel>`__"
sections for detailed examples that modify the kernel.
Determining Hardware and Non-Hardware Features for the Kernel Configuration Audit Phase
=======================================================================================
This section describes part of the kernel configuration audit phase that
most developers can ignore. For general information on kernel
configuration including ``menuconfig``, ``defconfig`` files, and
configuration fragments, see the "`Configuring the
Kernel <#configuring-the-kernel>`__" section.
During this part of the audit phase, the contents of the final
``.config`` file are compared against the fragments specified by the
system. These fragments can be system fragments, distro fragments, or
user-specified configuration elements. Regardless of their origin, the
OpenEmbedded build system warns the user if a specific option is not
included in the final kernel configuration.
By default, in order to not overwhelm the user with configuration
warnings, the system only reports missing "hardware" options as they
could result in a boot failure or indicate that important hardware is
not available.
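The warning logic can be approximated by hand: each option requested by a fragment is checked for presence in the final ``.config``. The sketch below uses made-up file contents purely for illustration and is not the build system's actual implementation:

```shell
# Sketch (made-up contents): report fragment options missing from the
# final kernel configuration, mimicking the audit's warnings.
printf 'CONFIG_SERIAL_8250=y\nCONFIG_MISSING_OPT=y\n' > fragment.cfg
printf 'CONFIG_SERIAL_8250=y\n' > final.config
while read -r opt; do
    # -x: match the whole line; -F: treat the option as a fixed string
    grep -qxF "$opt" final.config || echo "warning: $opt not in final .config"
done < fragment.cfg
```

Here only ``CONFIG_MISSING_OPT=y`` triggers a warning, since ``CONFIG_SERIAL_8250=y`` made it into the final configuration.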
To determine whether or not a given option is "hardware" or
"non-hardware", the kernel Metadata in ``yocto-kernel-cache`` contains
files that classify individual or groups of options as either hardware
or non-hardware. To better show this, consider a situation where the
``yocto-kernel-cache`` contains the following files::

   yocto-kernel-cache/features/drm-psb/hardware.cfg
   yocto-kernel-cache/features/kgdb/hardware.cfg
   yocto-kernel-cache/ktypes/base/hardware.cfg
   yocto-kernel-cache/bsp/mti-malta32/hardware.cfg
   yocto-kernel-cache/bsp/qemu-ppc32/hardware.cfg
   yocto-kernel-cache/bsp/qemuarma9/hardware.cfg
   yocto-kernel-cache/bsp/mti-malta64/hardware.cfg
   yocto-kernel-cache/bsp/arm-versatile-926ejs/hardware.cfg
   yocto-kernel-cache/bsp/common-pc/hardware.cfg
   yocto-kernel-cache/bsp/common-pc-64/hardware.cfg
   yocto-kernel-cache/features/rfkill/non-hardware.cfg
   yocto-kernel-cache/ktypes/base/non-hardware.cfg
   yocto-kernel-cache/features/aufs/non-hardware.kcf
   yocto-kernel-cache/features/ocf/non-hardware.kcf
   yocto-kernel-cache/ktypes/base/non-hardware.kcf
   yocto-kernel-cache/ktypes/base/hardware.kcf
   yocto-kernel-cache/bsp/qemu-ppc32/hardware.kcf

The following list provides explanations for the various files:
- ``hardware.kcf``: Specifies a list of kernel Kconfig files that
  contain hardware options only.

- ``non-hardware.kcf``: Specifies a list of kernel Kconfig files that
  contain non-hardware options only.

- ``hardware.cfg``: Specifies a list of kernel ``CONFIG_`` options that
  are hardware, regardless of whether or not they are within a Kconfig
  file specified by a hardware or non-hardware Kconfig file (i.e.
  ``hardware.kcf`` or ``non-hardware.kcf``).

- ``non-hardware.cfg``: Specifies a list of kernel ``CONFIG_`` options
  that are not hardware, regardless of whether or not they are within a
  Kconfig file specified by a hardware or non-hardware Kconfig file
  (i.e. ``hardware.kcf`` or ``non-hardware.kcf``).
Here is a specific example using the
``kernel-cache/bsp/mti-malta32/hardware.cfg`` file::

   CONFIG_SERIAL_8250
   CONFIG_SERIAL_8250_CONSOLE
   CONFIG_SERIAL_8250_NR_UARTS
   CONFIG_SERIAL_8250_PCI
   CONFIG_SERIAL_CORE
   CONFIG_SERIAL_CORE_CONSOLE
   CONFIG_VGA_ARB

The kernel configuration audit automatically detects these files (hence
the names must be exactly the ones discussed here), and uses them as
inputs when generating warnings about the final ``.config`` file.
A user-specified kernel Metadata repository, or recipe space feature,
can use these same files to classify options that are found within its
``.cfg`` files as hardware or non-hardware, to prevent the OpenEmbedded
build system from producing an error or warning when an option is not in
the final ``.config`` file.
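For example, a recipe-space feature might place a classification file alongside its fragment. The layer and file names below are hypothetical, shown only to illustrate the layout:

```
meta-mylayer/recipes-kernel/linux/linux-yocto/
    myfeature.scc
    myfeature.cfg        # CONFIG_ options the feature enables
    non-hardware.cfg     # options from myfeature.cfg that are policy
                         # rather than hardware, so the audit does not
                         # warn when a kernel cannot set them
```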

**********************
Kernel Development FAQ
**********************
.. _kernel-dev-faq-section:
Common Questions and Solutions
==============================
The following lists some solutions for common questions.

How do I use my own Linux kernel ``.config`` file?
   Refer to the "`Changing the
   Configuration <#changing-the-configuration>`__" section for
   information.

How do I create configuration fragments?
   Refer to the "`Creating Configuration
   Fragments <#creating-config-fragments>`__" section for information.

How do I use my own Linux kernel sources?
   Refer to the "`Working With Your Own
   Sources <#working-with-your-own-sources>`__" section for
   information.

How do I install/not-install the kernel image on the rootfs?
   The kernel image (e.g. ``vmlinuz``) is provided by the
   ``kernel-image`` package. Image recipes depend on ``kernel-base``.
   To specify whether or not the kernel image is installed in the
   generated root filesystem, override ``RDEPENDS_kernel-base`` to
   include or not include "kernel-image". See the "`Using .bbappend
   Files in Your Layer <&YOCTO_DOCS_DEV_URL;#using-bbappend-files>`__"
   section in the Yocto Project Development Tasks Manual for
   information on how to use an append file to override metadata.

How do I install a specific kernel module?
   Linux kernel modules are packaged individually. To ensure a specific
   kernel module is included in an image, include it in the appropriate
   machine ```RRECOMMENDS`` <&YOCTO_DOCS_REF_URL;#var-RRECOMMENDS>`__
   variable. These other variables are useful for installing specific
   modules:

   - ```MACHINE_ESSENTIAL_EXTRA_RDEPENDS`` <&YOCTO_DOCS_REF_URL;#var-MACHINE_ESSENTIAL_EXTRA_RDEPENDS>`__
   - ```MACHINE_ESSENTIAL_EXTRA_RRECOMMENDS`` <&YOCTO_DOCS_REF_URL;#var-MACHINE_ESSENTIAL_EXTRA_RRECOMMENDS>`__
   - ```MACHINE_EXTRA_RDEPENDS`` <&YOCTO_DOCS_REF_URL;#var-MACHINE_EXTRA_RDEPENDS>`__
   - ```MACHINE_EXTRA_RRECOMMENDS`` <&YOCTO_DOCS_REF_URL;#var-MACHINE_EXTRA_RRECOMMENDS>`__

   For example, set the following in the ``qemux86.conf`` file to
   include the ``ab123`` kernel modules with images built for the
   ``qemux86`` machine::

      MACHINE_EXTRA_RRECOMMENDS += "kernel-module-ab123"

   For more information, see the "`Incorporating Out-of-Tree
   Modules <#incorporating-out-of-tree-modules>`__" section.

How do I change the Linux kernel command line?
   The Linux kernel command line is typically specified in the machine
   config using the ``APPEND`` variable. For example, you can add some
   helpful debug information doing the following::

      APPEND += "printk.time=y initcall_debug debug"
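As a sketch of the rootfs-related answer above, a hypothetical append file in your own layer could keep the kernel image out of the generated root filesystem. The file name follows the usual ``.bbappend`` wildcard convention; treat it as an illustration, not the only valid form:

```
# linux-yocto_%.bbappend (hypothetical file in your layer): drop the
# "kernel-image" dependency so the image is not installed in the rootfs.
RDEPENDS_kernel-base = ""
```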

************
Introduction
************
.. _kernel-dev-overview:
Overview
========
Regardless of how you intend to make use of the Yocto Project, chances
are you will work with the Linux kernel. This manual describes how to
set up your build host to support kernel development, introduces the
kernel development process, provides background information on the Yocto
Linux kernel `Metadata <&YOCTO_DOCS_REF_URL;#metadata>`__, describes
common tasks you can perform using the kernel tools, shows you how to
use the kernel Metadata needed to work with the kernel inside the Yocto
Project, and provides insight into how the Yocto Project team develops
and maintains Yocto Linux kernel Git repositories and Metadata.
Each Yocto Project release has a set of Yocto Linux kernel recipes,
whose Git repositories you can view in the Yocto `Source
Repositories <&YOCTO_GIT_URL;>`__ under the "Yocto Linux Kernel"
heading. New recipes for the release track the latest Linux kernel
upstream developments from http://www.kernel.org and introduce
newly-supported platforms. Previous recipes in the release are refreshed
and supported for at least one additional Yocto Project release. As they
align, these previous releases are updated to include the latest from
the Long Term Support Initiative (LTSI) project. You can learn more
about Yocto Linux kernels and LTSI in the "`Yocto Project Kernel
Development and Maintenance <#kernel-big-picture>`__" section.
Also included is a Yocto Linux kernel development recipe
(``linux-yocto-dev.bb``) should you want to work with the very latest in
upstream Yocto Linux kernel development and kernel Metadata development.
.. note::

   For more on Yocto Linux kernels, see the "`Yocto Project Kernel
   Development and Maintenance <#kernel-big-picture>`__" section.
The Yocto Project also provides a powerful set of kernel tools for
managing Yocto Linux kernel sources and configuration data. You can use
these tools to make a single configuration change, apply multiple
patches, or work with your own kernel sources.
In particular, the kernel tools allow you to generate configuration
fragments that specify only what you must, and nothing more.
Configuration fragments only need to contain the highest level visible
``CONFIG`` options as presented by the Yocto Linux kernel ``menuconfig``
system. Contrast this against a complete Yocto Linux kernel ``.config``
file, which includes all the automatically selected ``CONFIG`` options.
This efficiency reduces your maintenance effort and allows you to
further separate your configuration in ways that make sense for your
project. A common split separates policy and hardware. For example, all
your kernels might support the ``proc`` and ``sys`` filesystems, but
only specific boards require sound, USB, or specific drivers. Specifying
these configurations individually allows you to aggregate them together
as needed, but maintains them in only one place. Similar logic applies
to separating source changes.
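A minimal sketch of such a policy/hardware split, with hypothetical fragment names and a small set of options, might look like this:

```
# policy/filesystems.cfg -- shared by every kernel (hypothetical)
CONFIG_PROC_FS=y
CONFIG_SYSFS=y

# hardware/sound.cfg -- only for boards that need audio (hypothetical)
CONFIG_SOUND=y
CONFIG_SND=y
```

Each board's configuration then aggregates only the fragments it needs, while every option stays defined in exactly one place.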
If you do not maintain your own kernel sources and need to make only
minimal changes to the sources, the released recipes provide a vetted
base upon which to layer your changes. Doing so allows you to benefit
from the continual kernel integration and testing performed during
development of the Yocto Project.
If, instead, you have a very specific Linux kernel source tree and are
unable to align with one of the official Yocto Linux kernel recipes, an
alternative exists by which you can use the Yocto Project Linux kernel
tools with your own kernel sources.
The remainder of this manual provides instructions for completing
specific Linux kernel development tasks. These instructions assume you
are comfortable working with
`BitBake <http://openembedded.org/wiki/Bitbake>`__ recipes and basic
open-source development tools. Understanding these concepts will
facilitate the process of working with the kernel recipes. If you find
you need some additional background, please be sure to review and
understand the following documentation:
- `Yocto Project Quick Build <&YOCTO_DOCS_BRIEF_URL;>`__ document.

- `Yocto Project Overview and Concepts
  Manual <&YOCTO_DOCS_OM_URL;>`__.

- ```devtool``
  workflow <&YOCTO_DOCS_SDK_URL;#using-devtool-in-your-sdk-workflow>`__
  as described in the Yocto Project Application Development and the
  Extensible Software Development Kit (eSDK) manual.

- The "`Understanding and Creating
  Layers <&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers>`__"
  section in the Yocto Project Development Tasks Manual.

- The "`Kernel Modification
  Workflow <#kernel-modification-workflow>`__" section.
Kernel Modification Workflow
============================
Kernel modification involves changing the Yocto Project kernel, which
could involve changing configuration options as well as adding new
kernel recipes. Configuration changes can be added in the form of
configuration fragments, while recipe modification comes through the
kernel's ``recipes-kernel`` area in a kernel layer you create.
This section presents a high-level overview of the Yocto Project kernel
modification workflow. The illustration and accompanying list provide
general information and references for further information.
1. *Set up Your Host Development System to Support Development Using the
Yocto Project*: See the "`Setting Up the Development Host to Use the
Yocto Project <&YOCTO_DOCS_DEV_URL;#dev-manual-start>`__" section in
the Yocto Project Development Tasks Manual for options on how to get
a build host ready to use the Yocto Project.
2. *Set Up Your Host Development System for Kernel Development:* It is
recommended that you use ``devtool`` and an extensible SDK for kernel
development. Alternatively, you can use traditional kernel
development methods with the Yocto Project. Either way, there are
steps you need to take to get the development environment ready.
Using ``devtool`` and the eSDK requires that you have a clean build
of the image and that you are set up with the appropriate eSDK. For
more information, see the "`Getting Ready to Develop Using
``devtool`` <#getting-ready-to-develop-using-devtool>`__" section.
Using traditional kernel development requires that you have the
kernel source available in an isolated local Git repository. For more
information, see the "`Getting Ready for Traditional Kernel
Development <#getting-ready-for-traditional-kernel-development>`__"
section.
3. *Make Changes to the Kernel Source Code if applicable:* Modifying the
kernel does not always mean directly changing source files. However,
if you have to do this, you make the changes to the files in the
eSDK's Build Directory if you are using ``devtool``. For more
information, see the "`Using ``devtool`` to Patch the
Kernel <#using-devtool-to-patch-the-kernel>`__" section.
If you are using traditional kernel development, you edit the source
files in the kernel's local Git repository. For more information, see
the "`Using Traditional Kernel Development to Patch the
Kernel <#using-traditional-kernel-development-to-patch-the-kernel>`__"
section.
4. *Make Kernel Configuration Changes if Applicable:* If your situation
calls for changing the kernel's configuration, you can use
```menuconfig`` <#using-menuconfig>`__, which allows you to
interactively develop and test the configuration changes you are
making to the kernel. Saving changes you make with ``menuconfig``
updates the kernel's ``.config`` file.

.. note::

   Try to resist the temptation to directly edit an existing ``.config``
   file, which is found in the Build Directory among the source code
   used for the build. Doing so can produce unexpected results when the
   OpenEmbedded build system regenerates the configuration file.

Once you are satisfied with the configuration changes made using
``menuconfig`` and you have saved them, you can directly compare the
resulting ``.config`` file against an existing original and gather
those changes into a `configuration fragment
file <#creating-config-fragments>`__ to be referenced from within the
kernel's ``.bbappend`` file.
Additionally, if you are working in a BSP layer and need to modify
the BSP's kernel's configuration, you can use ``menuconfig``.
5. *Rebuild the Kernel Image With Your Changes:* Rebuilding the kernel
image applies your changes. Depending on your target hardware, you
can verify your changes on actual hardware or perhaps QEMU.
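
The configuration-fragment flow in step 4 can be sketched as follows. The
layer path, fragment name, and configuration options here are hypothetical
examples, not part of any shipped layer::

   # meta-mylayer/recipes-kernel/linux/linux-yocto/enable-ftrace.cfg
   CONFIG_FTRACE=y
   CONFIG_FUNCTION_TRACER=y

   # meta-mylayer/recipes-kernel/linux/linux-yocto_%.bbappend
   FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
   SRC_URI += "file://enable-ftrace.cfg"

With a fragment such as this in place, rebuilding the kernel recipe folds
the listed options into the final ``.config``.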
The remainder of this developer's guide covers common tasks typically
used during kernel development, advanced Metadata usage, and Yocto Linux
kernel maintenance concepts.
******************
Kernel Maintenance
******************

Tree Construction
=================

This section describes construction of the Yocto Project kernel source
repositories as accomplished by the Yocto Project team to create Yocto
Linux kernel repositories. These kernel repositories are found under the
heading "Yocto Linux Kernel" at `YOCTO_GIT_URL <&YOCTO_GIT_URL;>`__ and
are shipped as part of a Yocto Project release. The team creates these
repositories by compiling and executing the set of feature descriptions
for every BSP and feature in the product. Those feature descriptions
list all necessary patches, configurations, branches, tags, and feature
divisions found in a Yocto Linux kernel. Thus, the Yocto Project Linux
kernel repository (or tree) and accompanying Metadata in the
``yocto-kernel-cache`` are built.
The existence of these repositories allows you to access and clone a
particular Yocto Project Linux kernel repository and use it to build
images based on its configurations and features.
You can find the files used to describe all the valid features and BSPs
in the Yocto Project Linux kernel in any clone of the Yocto Project
Linux kernel source repository and ``yocto-kernel-cache`` Git trees. For
example, the following commands clone the Yocto Project baseline Linux
kernel that branches off ``linux.org`` version 4.12 and the
``yocto-kernel-cache``, which contains stores of kernel Metadata::

   $ git clone git://git.yoctoproject.org/linux-yocto-4.12
   $ git clone git://git.yoctoproject.org/linux-kernel-cache

For more information on how to set up a local Git repository of the
Yocto Project Linux kernel files, see the "`Preparing the Build Host to
Work on the
Kernel <#preparing-the-build-host-to-work-on-the-kernel>`__" section.
Once you have cloned the kernel Git repository and the cache of Metadata
on your local machine, you can discover the branches that are available
in the repository using the following Git command::

   $ git branch -a

Checking out a branch allows you to work with a particular Yocto Linux
kernel. For example, the following commands check out the
"standard/beagleboard" branch of the Yocto Linux kernel repository and
the "yocto-4.12" branch of the ``yocto-kernel-cache`` repository::

   $ cd ~/linux-yocto-4.12
   $ git checkout -b my-kernel-4.12 remotes/origin/standard/beagleboard
   $ cd ~/linux-kernel-cache
   $ git checkout -b my-4.12-metadata remotes/origin/yocto-4.12

.. note::

   Branches in the ``yocto-kernel-cache`` repository correspond to
   Yocto Linux kernel versions (e.g. "yocto-4.12", "yocto-4.10",
   "yocto-4.9", and so forth).

Once you have checked out and switched to appropriate branches, you can
see a snapshot of all the kernel source files used to build that
particular Yocto Linux kernel for a particular board.
To see the features and configurations for a particular Yocto Linux
kernel, you need to examine the ``yocto-kernel-cache`` Git repository.
As mentioned, branches in the ``yocto-kernel-cache`` repository
correspond to Yocto Linux kernel versions (e.g. ``yocto-4.12``).
Branches contain descriptions in the form of ``.scc`` and ``.cfg``
files.
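
For illustration only (the board name and file contents are invented), a
BSP description in the ``yocto-kernel-cache`` pairs an ``.scc`` file with
the configuration fragments and patches it references::

   # bsp/myboard/myboard-standard.scc
   define KMACHINE myboard
   define KTYPE standard
   kconf hardware myboard.cfg
   patch 0001-myboard-enable-uart.patch

   # bsp/myboard/myboard.cfg
   CONFIG_SERIAL_8250=y
   CONFIG_SERIAL_8250_CONSOLE=y
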
You should realize, however, that browsing your local
``yocto-kernel-cache`` repository for feature descriptions and patches
is not an effective way to determine what is in a particular kernel
branch. Instead, you should use Git directly to discover the changes in
a branch. Using Git is an efficient and flexible way to inspect changes
to the kernel.
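
To make the "use Git directly" suggestion concrete, the following
self-contained sketch builds a throwaway repository that stands in for a
kernel tree (all names are invented); in real use you would run the last
two commands inside your ``linux-yocto`` clone with real branch names:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "base: initial commit"
git branch -q standard/base
git checkout -q -b standard/beagleboard
echo "board tweak" > board.cfg
git add board.cfg
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "beagleboard: add board.cfg"
# Commits unique to the BSP branch, and the files they touch:
git log --oneline standard/base..standard/beagleboard
git diff --stat standard/base..standard/beagleboard
```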

.. note::

   Ground up reconstruction of the complete kernel tree is an action
   only taken by the Yocto Project team during an active development
   cycle. When you create a clone of the kernel Git repository, you are
   simply making it efficiently available for building and development.

The following steps describe what happens when the Yocto Project Team
constructs the Yocto Project kernel source Git repository (or tree)
found at ` <&YOCTO_GIT_URL;>`__ given the introduction of a new
top-level kernel feature or BSP. The following actions effectively
provide the Metadata and create the tree that includes the new feature,
patch, or BSP:

1. *Pass Feature to the OpenEmbedded Build System:* A top-level kernel
feature is passed to the kernel build subsystem. Normally, this
feature is a BSP for a particular kernel type.
2. *Locate Feature:* The file that describes the top-level feature is
located by searching these system directories:

- The in-tree kernel-cache directories, which are located in the
```yocto-kernel-cache`` <&YOCTO_GIT_URL;/cgit/cgit.cgi/yocto-kernel-cache/tree/bsp>`__
repository organized under the "Yocto Linux Kernel" heading in the
`Yocto Project Source
Repositories <http://git.yoctoproject.org/cgit/cgit.cgi>`__.
- Areas pointed to by ``SRC_URI`` statements found in kernel recipes

For a typical build, the target of the search is a feature description
in an ``.scc`` file whose name follows this format (e.g.
``beaglebone-standard.scc`` and ``beaglebone-preempt-rt.scc``)::

   bsp_root_name-kernel_type.scc

3. *Expand Feature:* Once located, the feature description is either
expanded into a simple script of actions, or into an existing
equivalent script that is already part of the shipped kernel.
4. *Append Extra Features:* Extra features are appended to the top-level
feature description. These features can come from the
```KERNEL_FEATURES`` <&YOCTO_DOCS_REF_URL;#var-KERNEL_FEATURES>`__
variable in recipes.
5. *Locate, Expand, and Append Each Feature:* Each extra feature is
located, expanded and appended to the script as described in step
three.
6. *Execute the Script:* The script is executed to produce the ``.scc``
and ``.cfg`` files in appropriate directories of the
``yocto-kernel-cache`` repository. These files are descriptions of
all the branches, tags, patches and configurations that need to be
applied to the base Git repository to completely create the source
(build) branch for the new BSP or feature.
7. *Clone Base Repository:* The base repository is cloned, and the
actions listed in the ``yocto-kernel-cache`` directories are applied
to the tree.
8. *Perform Cleanup:* The Git repositories are left with the desired
branches checked out and any required branching, patching and tagging
has been performed.
The kernel tree and cache are ready for developer consumption to be
locally cloned, configured, and built into a Yocto Project kernel
specific to some target hardware.

.. note::

   -  The generated ``yocto-kernel-cache`` repository adds to the kernel
      as shipped with the Yocto Project release. Any add-ons and
      configuration data are applied to the end of an existing branch.
      The full repository generation that is found in the official Yocto
      Project kernel repositories at
      `http://git.yoctoproject.org <&YOCTO_GIT_URL;>`__ is the
      combination of all supported boards and configurations.

   -  The technique the Yocto Project team uses is flexible and allows
      for seamless blending of an immutable history with additional
      patches specific to a deployment. Any additions to the kernel
      become an integrated part of the branches.

   -  The full kernel tree that you see on ` <&YOCTO_GIT_URL;>`__ is
      generated through repeating the above steps for all valid BSPs.
      The end result is a branched, clean history tree that makes up the
      kernel for a given release. You can see the script (``kgit-scc``)
      responsible for this in the
      ```yocto-kernel-tools`` <&YOCTO_GIT_URL;/cgit.cgi/yocto-kernel-tools/tree/tools>`__
      repository.

   -  The steps used to construct the full kernel tree are the same
      steps that BitBake uses when it builds a kernel image.


Build Strategy
==============

Once you have cloned a Yocto Linux kernel repository and the cache
repository (``yocto-kernel-cache``) onto your development system, you
can consider the compilation phase of kernel development, which is
building a kernel image. Some prerequisites exist that are validated by
the build process before compilation starts:
- The ```SRC_URI`` <&YOCTO_DOCS_REF_URL;#var-SRC_URI>`__ points to the
kernel Git repository.
- A BSP build branch with Metadata exists in the ``yocto-kernel-cache``
repository. The branch is based on the Yocto Linux kernel version and
has configurations and features grouped under the
``yocto-kernel-cache/bsp`` directory. For example, features and
configurations for the BeagleBone Board assuming a ``linux-yocto_4.12``
kernel reside in the following area of the ``yocto-kernel-cache``
repository::

   yocto-kernel-cache/bsp/beaglebone

.. note::

   In the previous example, the "yocto-4.12" branch is checked out in
   the ``yocto-kernel-cache`` repository.

The OpenEmbedded build system makes sure these conditions exist before
attempting compilation. Other means, however, do exist, such as
bootstrapping a BSP.
Before building a kernel, the build process verifies the tree and
configures the kernel by processing all of the configuration "fragments"
specified by feature descriptions in the ``.scc`` files. As the features
are compiled, associated kernel configuration fragments are noted and
recorded in the series of directories in their compilation order. The
fragments are migrated, pre-processed and passed to the Linux Kernel
Configuration subsystem (``lkc``) as raw input in the form of a
``.config`` file. The ``lkc`` uses its own internal dependency
constraints to do the final processing of that information and generates
the final ``.config`` file that is used during compilation.
Using the board's architecture and other relevant values from the
board's template, kernel compilation is started and a kernel image is
produced.
The other thing that you notice once you configure a kernel is that the
build process generates a build tree that is separate from your kernel's
local Git source repository tree. This build tree has a name that uses
the following form, where ``${MACHINE}`` is the metadata name of the
machine (BSP) and "kernel_type" is one of the Yocto Project supported
kernel types (e.g. "standard")::

   linux-${MACHINE}-kernel_type-build

The existing support in the ``kernel.org`` tree achieves this default
functionality.
This behavior means that all the generated files for a particular
machine or BSP are now in the build tree directory. The files include
the final ``.config`` file, all the ``.o`` files, the ``.a`` files, and
so forth. Since each machine or BSP has its own separate `Build
Directory <&YOCTO_DOCS_REF_URL;#build-directory>`__ in its own separate
branch of the Git repository, you can easily switch between different
builds.
=============================================
Yocto Project Linux Kernel Development Manual
=============================================

.. toctree::
   :caption: Table of Contents
   :numbered:

   kernel-dev-intro
   kernel-dev-common
   kernel-dev-advanced
   kernel-dev-concepts-appx
   kernel-dev-maint-appx
   kernel-dev-faq
*****************************************
The Yocto Project Development Environment
*****************************************

This chapter takes a look at the Yocto Project development environment.
The chapter provides Yocto Project Development environment concepts that
help you understand how work is accomplished in an open source
environment, which is very different as compared to work accomplished in
a closed, proprietary environment.
Specifically, this chapter addresses open source philosophy, source
repositories, workflows, Git, and licensing.

Open Source Philosophy
======================

Open source philosophy is characterized by software development directed
by peer production and collaboration through an active community of
developers. Contrast this to the more standard centralized development
models used by commercial software companies where a finite set of
developers produces a product for sale using a defined set of procedures
that ultimately result in an end product whose architecture and source
material are closed to the public.
Open source projects conceptually have differing concurrent agendas,
approaches, and production. These facets of the development process can
come from anyone in the public (community) who has a stake in the
software project. The open source environment contains new copyright,
licensing, domain, and consumer issues that differ from the more
traditional development environment. In an open source environment, the
end product, source material, and documentation are all available to the
public at no cost.
A benchmark example of an open source project is the Linux kernel, which
was initially conceived and created by Finnish computer science student
Linus Torvalds in 1991. Conversely, a good example of a non-open source
project is the Windows family of operating systems developed by
Microsoft Corporation.
Wikipedia has a good historical description of the Open Source
Philosophy `here <http://en.wikipedia.org/wiki/Open_source>`__. You can
also find helpful information on how to participate in the Linux
Community
`here <http://ldn.linuxfoundation.org/book/how-participate-linux-community>`__.

.. _gs-the-development-host:

The Development Host
====================

A development host or `build
host <&YOCTO_DOCS_REF_URL;#hardware-build-system-term>`__ is key to
using the Yocto Project. Because the goal of the Yocto Project is to
develop images or applications that run on embedded hardware,
development of those images and applications generally takes place on a
system not intended to run the software - the development host.
You need to set up a development host in order to use it with the Yocto
Project. Most find that it is best to have a native Linux machine
function as the development host. However, it is possible to use a
system that does not run Linux as its operating system as your
development host. When you have a Mac or Windows-based system, you can
set it up as the development host by using
`CROPS <https://github.com/crops/poky-container>`__, which leverages
`Docker Containers <https://www.docker.com/>`__. Once you take the steps
to set up a CROPS machine, you effectively have access to a shell
environment that is similar to what you see when using a Linux-based
development host. For the steps needed to set up a system using CROPS,
see the "`Setting Up to Use CROss PlatformS
(CROPS) <&YOCTO_DOCS_DEV_URL;#setting-up-to-use-crops>`__" section in
the Yocto Project Development Tasks Manual.
If your development host is going to be a system that runs a Linux
distribution, steps still exist that you must take to prepare the system
for use with the Yocto Project. You need to be sure that the Linux
distribution on the system is one that supports the Yocto Project. You
also need to be sure that the correct set of host packages are installed
that allow development using the Yocto Project. For the steps needed to
set up a development host that runs Linux, see the "`Setting Up a Native
Linux Host <&YOCTO_DOCS_DEV_URL;#setting-up-a-native-linux-host>`__"
section in the Yocto Project Development Tasks Manual.
Once your development host is set up to use the Yocto Project, several
methods exist for you to do work in the Yocto Project environment:

- *Command Lines, BitBake, and Shells:* Traditional development in the
Yocto Project involves using the `OpenEmbedded build
system <&YOCTO_DOCS_REF_URL;#build-system-term>`__, which uses
BitBake, in a command-line environment from a shell on your
development host. You can accomplish this from a host that is a
native Linux machine or from a host that has been set up with CROPS.
Either way, you create, modify, and build images and applications all
within a shell-based environment using components and tools available
through your Linux distribution and the Yocto Project.
For a general flow of the build procedures, see the "`Building a
Simple Image <&YOCTO_DOCS_DEV_URL;#dev-building-a-simple-image>`__"
section in the Yocto Project Development Tasks Manual.
- *Board Support Package (BSP) Development:* Development of BSPs
involves using the Yocto Project to create and test layers that allow
easy development of images and applications targeted for specific
hardware. To develop BSPs, you need to take some additional steps
beyond what was described in setting up a development host.
The `Yocto Project Board Support Package (BSP) Developer's
Guide <&YOCTO_DOCS_BSP_URL;>`__ provides BSP-related development
information. For specifics on development host preparation, see the
"`Preparing Your Build Host to Work With BSP
Layers <&YOCTO_DOCS_BSP_URL;#preparing-your-build-host-to-work-with-bsp-layers>`__"
section in the Yocto Project Board Support Package (BSP) Developer's
Guide.
- *Kernel Development:* If you are going to be developing kernels using
the Yocto Project you likely will be using ``devtool``. A workflow
using ``devtool`` makes kernel development quicker by reducing
iteration cycle times.
The `Yocto Project Linux Kernel Development
Manual <&YOCTO_DOCS_KERNEL_DEV_URL;>`__ provides kernel-related
development information. For specifics on development host
preparation, see the "`Preparing the Build Host to Work on the
Kernel <&YOCTO_DOCS_KERNEL_DEV_URL;#preparing-the-build-host-to-work-on-the-kernel>`__"
section in the Yocto Project Linux Kernel Development Manual.
- *Using Toaster:* The other Yocto Project development method that
involves an interface that effectively puts the Yocto Project into
the background is Toaster. Toaster provides an interface to the
OpenEmbedded build system. The interface enables you to configure and
run your builds. Information about builds is collected and stored in
a database. You can use Toaster to configure and start builds on
multiple remote build servers.
For steps that show you how to set up your development host to use
Toaster and on how to use Toaster in general, see the `Toaster User
Manual <&YOCTO_DOCS_TOAST_URL;>`__.

.. _yocto-project-repositories:

Yocto Project Source Repositories
=================================

The Yocto Project team maintains complete source repositories for all
Yocto Project files at ` <&YOCTO_GIT_URL;>`__. This web-based source
code browser is organized into categories by function such as IDE
Plugins, Matchbox, Poky, Yocto Linux Kernel, and so forth. From the
interface, you can click on any particular item in the "Name" column and
see the URL at the bottom of the page that you need to clone a Git
repository for that particular item. Having a local Git repository of
the `Source Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__, which
is usually named "poky", allows you to make changes, contribute to the
history, and ultimately enhance the Yocto Project's tools, Board Support
Packages, and so forth.
For any supported release of Yocto Project, you can also go to the
`Yocto Project Website <&YOCTO_HOME_URL;>`__ and select the "DOWNLOADS"
item from the "SOFTWARE" menu and get a released tarball of the ``poky``
repository, any supported BSP tarball, or Yocto Project tools. Unpacking
these tarballs gives you a snapshot of the released files.

.. note::

   -  The recommended method for setting up the Yocto Project `Source
      Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__ and the files
      for supported BSPs (e.g., ``meta-intel``) is to use `Git <#git>`__
      to create a local copy of the upstream repositories.

   -  Be sure to always work in matching branches for both the selected
      BSP repository and the Source Directory (i.e. ``poky``)
      repository. For example, if you have checked out the "master"
      branch of ``poky`` and you are going to use ``meta-intel``, be
      sure to checkout the "master" branch of ``meta-intel``.

In summary, here is where you can get the project files needed for
development:

- `Source Repositories: <&YOCTO_GIT_URL;>`__ This area contains IDE
Plugins, Matchbox, Poky, Poky Support, Tools, Yocto Linux Kernel, and
Yocto Metadata Layers. You can create local copies of Git
repositories for each of these areas.
For steps on how to view and access these upstream Git repositories,
see the "`Accessing Source
Repositories <&YOCTO_DOCS_DEV_URL;#accessing-source-repositories>`__"
Section in the Yocto Project Development Tasks Manual.
- `Index of /releases: <&YOCTO_DL_URL;/releases/>`__ This is an index
of releases such as Poky, Pseudo, installers for cross-development
toolchains, miscellaneous support and all released versions of Yocto
Project in the form of images or tarballs. Downloading and extracting
these files does not produce a local copy of the Git repository but
rather a snapshot of a particular release or image.
For steps on how to view and access these files, see the "`Accessing
Index of
Releases <&YOCTO_DOCS_DEV_URL;#accessing-index-of-releases>`__"
section in the Yocto Project Development Tasks Manual.
- *"DOWNLOADS" page for the*\ `Yocto Project
Website <&YOCTO_HOME_URL;>`__\ *:*
The Yocto Project website includes a "DOWNLOADS" page accessible
through the "SOFTWARE" menu that allows you to download any Yocto
Project release, tool, and Board Support Package (BSP) in tarball
form. The tarballs are similar to those found in the `Index of
/releases: <&YOCTO_DL_URL;/releases/>`__ area.
For steps on how to use the "DOWNLOADS" page, see the "`Using the
Downloads Page <&YOCTO_DOCS_DEV_URL;#using-the-downloads-page>`__"
section in the Yocto Project Development Tasks Manual.

.. _gs-git-workflows-and-the-yocto-project:

Git Workflows and the Yocto Project
===================================

Developing using the Yocto Project likely requires the use of
`Git <#git>`__. Git is a free, open source distributed version control
system used as part of many collaborative design environments. This
section provides workflow concepts using the Yocto Project and Git. In
particular, the information covers basic practices that describe roles
and actions in a collaborative development environment.

.. note::

   If you are familiar with this type of development environment, you
   might not want to read this section.

The Yocto Project files are maintained using Git in "branches" whose Git
histories track every change and whose structures provide branches for
all diverging functionality. Although there is no need to use Git, many
open source projects do so.
For the Yocto Project, a key individual called the "maintainer" is
responsible for the integrity of the "master" branch of a given Git
repository. The "master" branch is the “upstream” repository from which
final or most recent builds of a project occur. The maintainer is
responsible for accepting changes from other developers and for
organizing the underlying branch structure to reflect release strategies
and so forth.

.. note::

   For information on finding out who is responsible for (maintains) a
   particular area of code in the Yocto Project, see the "Submitting a
   Change to the Yocto Project" section of the Yocto Project Development
   Tasks Manual.

The Yocto Project ``poky`` Git repository also has an upstream
contribution Git repository named ``poky-contrib``. You can see all the
branches in this repository using the web interface of the `Source
Repositories <&YOCTO_GIT_URL;>`__ organized within the "Poky Support"
area. These branches hold changes (commits) to the project that have
been submitted or committed by the Yocto Project development team and by
community members who contribute to the project. The maintainer
determines if the changes are qualified to be moved from the "contrib"
branches into the "master" branch of the Git repository.
Developers (including contributing community members) create and
maintain cloned repositories of upstream branches. The cloned
repositories are local to their development platforms and are used to
develop changes. When a developer is satisfied with a particular feature
or change, they "push" the change to the appropriate "contrib"
repository.
Developers are responsible for keeping their local repository up-to-date
with whatever upstream branch they are working against. They are also
responsible for straightening out any conflicts that might arise within
files that are being worked on simultaneously by more than one person.
All this work is done locally on the development host before anything is
pushed to a "contrib" area and examined at the maintainer's level.
A somewhat formal method exists by which developers commit changes and
push them into the "contrib" area and subsequently request that the
maintainer include them into an upstream branch. This process is called
“submitting a patch” or "submitting a change." For information on
submitting patches and changes, see the "`Submitting a Change to the
Yocto Project <&YOCTO_DOCS_DEV_URL;#how-to-submit-a-change>`__" section
in the Yocto Project Development Tasks Manual.
In summary, a single point of entry exists for changes into a "master"
or development branch of the Git repository, which is controlled by the
project's maintainer. In addition, a set of developers exists who
independently develop, test, and submit changes to "contrib" areas for
the maintainer to examine. The maintainer then chooses which changes
become a permanent part of the project.
While each development environment is unique, there are some best
practices or methods that help development run smoothly. The following
list describes some of these practices. For more information about Git
workflows, see the workflow topics in the `Git Community
Book <http://book.git-scm.com>`__.

- *Make Small Changes:* It is best to keep the changes you commit small
as compared to bundling many disparate changes into a single commit.
This practice not only keeps things manageable but also allows the
maintainer to more easily include or refuse changes.
- *Make Complete Changes:* It is also good practice to leave the
repository in a state that allows you to still successfully build
your project. In other words, do not commit half of a feature, then
add the other half as a separate, later commit. Each commit should
take you from one buildable project state to another buildable state.
- *Use Branches Liberally:* It is very easy to create, use, and delete
local branches in your working Git repository on the development
host. You can name these branches anything you like. It is helpful to
give them names associated with the particular feature or change on
which you are working. Once you are done with a feature or change and
have merged it into your local master branch, simply discard the
temporary branch.
- *Merge Changes:* The ``git merge`` command allows you to take the
changes from one branch and fold them into another branch. This
process is especially helpful when more than a single developer might
be working on different parts of the same feature. Merging changes
also automatically identifies any collisions or "conflicts" that
might happen as a result of the same lines of code being altered by
two different developers.
- *Manage Branches:* Because branches are easy to use, you should use a
system where branches indicate varying levels of code readiness. For
example, you can have a "work" branch to develop in, a "test" branch
where the code or change is tested, a "stage" branch where changes
are ready to be committed, and so forth. As your project develops,
you can merge code across the branches to reflect ever-increasing
stable states of the development.
- *Use Push and Pull:* The push-pull workflow is based on the concept
of developers "pushing" local commits to a remote repository, which
is usually a contribution repository. This workflow is also based on
developers "pulling" known states of the project down into their
local development repositories. The workflow easily allows you to
pull changes submitted by other developers from the upstream
repository into your work area ensuring that you have the most recent
software on which to develop. The Yocto Project has two scripts named
``create-pull-request`` and ``send-pull-request`` that ship with the
release to facilitate this workflow. You can find these scripts in
the ``scripts`` folder of the `Source
Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__. For information
on how to use these scripts, see the "`Using Scripts to Push a Change
Upstream and Request a
Pull <&YOCTO_DOCS_DEV_URL;#pushing-a-change-upstream>`__" section in
the Yocto Project Development Tasks Manual.
- *Patch Workflow:* This workflow allows you to notify the maintainer
through an email that you have a change (or patch) you would like
considered for the "master" branch of the Git repository. To send
this type of change, you format the patch and then send the email
using the Git commands ``git format-patch`` and ``git send-email``.
For information on how to use these scripts, see the "`Submitting a
Change to the Yocto
Project <&YOCTO_DOCS_DEV_URL;#how-to-submit-a-change>`__" section in
the Yocto Project Development Tasks Manual.
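
The mechanics of the patch workflow can be sketched with a throwaway
repository (repository contents, names, and addresses are invented);
``git send-email`` itself is shown commented out because it requires a
configured mail setup:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name="Demo Dev" -c user.email=dev@example.com \
    commit -q --allow-empty -m "initial commit"
echo "fix" > file.txt
git add file.txt
git -c user.name="Demo Dev" -c user.email=dev@example.com \
    commit -q -m "file: add a small fix"
# Turn the latest commit into an emailable patch file:
git format-patch -1 HEAD
ls 0001-*.patch
# git send-email --to maintainer@example.com 0001-*.patch
```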

Git
===

The Yocto Project makes extensive use of Git, which is a free, open
source distributed version control system. Git supports distributed
development, non-linear development, and can handle large projects. It
is best that you have some fundamental understanding of how Git tracks
projects and how to work with Git if you are going to use the Yocto
Project for development. This section provides a quick overview of how
Git works and provides you with a summary of some essential Git
commands.

.. note::

   -  For more information on Git, see
      ` <http://git-scm.com/documentation>`__.

   -  If you need to download Git, it is recommended that you add Git to
      your system through your distribution's "software store" (e.g. for
      Ubuntu, use the Ubuntu Software feature). For the Git download
      page, see ` <http://git-scm.com/download>`__.

   -  For information beyond the introductory nature of this section,
      see the "`Locating Yocto Project Source
      Files <&YOCTO_DOCS_DEV_URL;#locating-yocto-project-source-files>`__"
      section in the Yocto Project Development Tasks Manual.


Repositories, Tags, and Branches
--------------------------------

As mentioned briefly in the previous section and also in the "`Git
Workflows and the Yocto
Project <#gs-git-workflows-and-the-yocto-project>`__" section, the Yocto
Project maintains source repositories at &YOCTO_GIT_URL;. If you
look at the web interface for these repositories, you see that each
item listed is a separate Git repository.
Git repositories use branching techniques that track content change (not
files) within a project (e.g. a new feature or updated documentation).
Creating a tree-like structure based on project divergence allows for
excellent historical information over the life of a project. This
methodology also allows for an environment from which you can do lots of
local experimentation on projects as you develop changes or new
features.
A Git repository represents all development efforts for a given project.
For example, the Git repository ``poky`` contains all changes and
developments for that repository over the course of its entire life.
That means that all changes that make up all releases are captured. The
repository maintains a complete history of changes.
You can create a local copy of any repository by "cloning" it with the
``git clone`` command. When you clone a Git repository, you end up with
an identical copy of the repository on your development system. Once you
have a local copy of a repository, you can take steps to develop
locally. For examples on how to clone Git repositories, see the
"`Locating Yocto Project Source
Files <&YOCTO_DOCS_DEV_URL;#locating-yocto-project-source-files>`__"
section in the Yocto Project Development Tasks Manual.
It is important to understand that Git tracks content change and not
files. Git uses "branches" to organize different development efforts.
For example, the ``poky`` repository has several branches that include
the current "DISTRO_NAME_NO_CAP" branch, the "master" branch, and many
branches for past Yocto Project releases. You can see all the branches
by going to &YOCTO_GIT_URL;/cgit.cgi/poky/ and clicking on the
``[...]`` link beneath the "Branch" heading.
Each of these branches represents a specific area of development. The
"master" branch represents the current or most recent development. All
other branches represent offshoots of the "master" branch.
When you create a local copy of a Git repository, the copy has the same
set of branches as the original. This means you can use Git to create a
local working area (also called a branch) that tracks a specific
development branch from the upstream source Git repository. In other
words, you can define your local Git environment to work on any
development branch in the repository. To help illustrate, consider the
following example Git commands::

   $ cd ~
   $ git clone git://git.yoctoproject.org/poky
   $ cd poky
   $ git checkout -b DISTRO_NAME_NO_CAP origin/DISTRO_NAME_NO_CAP

In the previous example, after moving to the home directory, the
``git clone`` command creates a local copy of the upstream ``poky`` Git
repository. By default, Git checks out the "master" branch for your
work. After changing the working directory to the new local repository
(i.e. ``poky``), the ``git checkout`` command creates and checks out a
local branch named "DISTRO_NAME_NO_CAP", which tracks the upstream
"origin/DISTRO_NAME_NO_CAP" branch. Changes you make while in this
branch would ultimately affect the upstream "DISTRO_NAME_NO_CAP" branch
of the ``poky`` repository.
It is important to understand that when you create and checkout a local
working branch based on a branch name, your local environment matches
the "tip" of that particular development branch at the time you created
your local branch, which could be different from the files in the
"master" branch of the upstream repository. In other words, creating and
checking out a local branch based on the "DISTRO_NAME_NO_CAP" branch
name is not the same as checking out the "master" branch in the
repository. Keep reading to see how you create a local snapshot of a
Yocto Project Release.
Git uses "tags" to mark specific changes in a repository branch
structure. Typically, a tag is used to mark a special point such as the
final change (or commit) before a project is released. You can see the
tags used with the ``poky`` Git repository by going to
&YOCTO_GIT_URL;/cgit.cgi/poky/ and clicking on the ``[...]`` link
beneath the "Tag" heading.
Some key tags for the ``poky`` repository are ``jethro-14.0.3``,
``morty-16.0.1``, ``pyro-17.0.0``, and
``DISTRO_NAME_NO_CAP-POKYVERSION``. These tags represent Yocto Project
releases.
When you create a local copy of the Git repository, you also have access
to all the tags in the upstream repository. Similar to branches, you can
create and checkout a local working Git branch based on a tag name. When
you do this, you get a snapshot of the Git repository that reflects
the state of the files at the commit associated with that tag.
The most common use is to checkout a working branch that matches a
specific Yocto Project release. Here is an example::

   $ cd ~
   $ git clone git://git.yoctoproject.org/poky
   $ cd poky
   $ git fetch --tags
   $ git checkout tags/rocko-18.0.0 -b my_rocko-18.0.0

In this example, the name of the top-level directory of your local Yocto
Project repository is ``poky``. After moving to the ``poky`` directory,
the ``git fetch`` command makes all the upstream tags available locally
in your repository. Finally, the ``git checkout`` command creates and
checks out a branch named "my_rocko-18.0.0" that is based on the
upstream branch whose "HEAD" matches the commit in the repository
associated with the "rocko-18.0.0" tag. The files in your repository now
exactly match that particular Yocto Project release as it is tagged in
the upstream Git repository. It is important to understand that when you
create and checkout a local working branch based on a tag, your
environment matches a specific point in time and not the entire
development branch (i.e. from the "tip" of the branch backwards).
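You can try the tag mechanics described above in a throwaway local
repository without cloning anything. The following is a minimal sketch;
the repository, tag, and branch names are invented stand-ins for real
release tags such as "rocko-18.0.0":

```shell
# Scratch repository; all names here are illustrative only.
git init tag-demo
cd tag-demo
git config user.email "you@example.com"
git config user.name "Example User"

# First commit, tagged as if it were a release.
echo "v1" > VERSION
git add VERSION
git commit -m "Release 1.0"
git tag release-1.0

# Development continues past the tag.
echo "v2" > VERSION
git add VERSION
git commit -m "Post-release development"

# Create a local branch that snapshots the tagged release.
git checkout tags/release-1.0 -b my_release-1.0
cat VERSION                # the file matches the tagged state ("v1")
```

After the final ``git checkout``, the working tree reflects the commit
the tag marks, not the tip of the development branch.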
Basic Commands
--------------
Git has an extensive set of commands that lets you manage changes and
perform collaboration over the life of a project. Conveniently though,
you can manage with a small set of basic operations and workflows once
you understand the basic philosophy behind Git. You do not have to be an
expert in Git to be functional. A good place to look for instruction on
a minimal set of Git commands is
`here <http://git-scm.com/documentation>`__.
The following list of Git commands briefly describes some basic Git
operations as a way to get started. As with any set of commands, this
list (in most cases) simply shows the base command and omits the many
arguments it supports. See the Git documentation for complete
descriptions and strategies on how to use these commands:
- *``git init``:* Initializes an empty Git repository. You cannot use
Git commands unless you have a ``.git`` repository.
- *``git clone``:* Creates a local clone of a Git repository that is on
equal footing with a fellow developer's Git repository or an upstream
repository.
- *``git add``:* Locally stages updated file contents to the index that
Git uses to track changes. You must stage all files that have changed
before you can commit them.
- *``git commit``:* Creates a local "commit" that documents the changes
you made. Only changes that have been staged can be committed.
Commits are used for historical purposes, for determining if a
maintainer of a project will allow the change, and for ultimately
pushing the change from your local Git repository into the project's
upstream repository.
- *``git status``:* Reports any modified files that possibly need to be
staged and gives you a status of where you stand regarding local
commits as compared to the upstream repository.
- *``git checkout`` branch-name:* Changes your local working branch and
in this form assumes the local branch already exists. This command is
analogous to "cd".
- *``git checkout -b`` working-branch upstream-branch:* Creates and
checks out a working branch on your local machine. The local branch
tracks the upstream branch. You can use your local branch to isolate
your work. It is a good idea to use local branches when adding
specific features or changes. Using isolated branches facilitates
easy removal of changes if they do not work out.
- *``git branch``:* Displays the existing local branches associated
with your local repository. The branch that you have currently
checked out is noted with an asterisk character.
- *``git branch -D`` branch-name:* Deletes an existing local branch.
You need to be in a local branch other than the one you are deleting
in order to delete branch-name.
- *``git pull --rebase``:* Retrieves information from an upstream Git
repository and places it in your local Git repository. You use this
command to make sure you are synchronized with the repository from
which you are basing changes (e.g. the "master" branch). The
"--rebase" option ensures that any local commits you have in your
branch are preserved at the top of your local branch.
- *``git push`` repo-name local-branch\ ``:``\ upstream-branch:* Sends
all your committed local changes to the upstream Git repository that
your local repository is tracking (e.g. a contribution repository).
The maintainer of the project draws from these repositories to merge
changes (commits) into the appropriate branch of the project's upstream
repository.
- *``git merge``:* Combines or adds changes from one local branch of
your repository with another branch. When you create a local Git
repository, the default branch is named "master". A typical workflow
is to create a temporary branch that is based off "master" that you
would use for isolated work. You would make your changes in that
isolated branch, stage and commit them locally, switch to the
"master" branch, and then use the ``git merge`` command to apply the
changes from your isolated branch into the currently checked out
branch (e.g. "master"). After the merge is complete and if you are
done with working in that isolated branch, you can safely delete the
isolated branch.
- *``git cherry-pick`` commits:* Chooses and applies specific commits from
one branch into another branch. There are times when you might not be
able to merge all the changes in one branch with another but need to
pick out certain ones.
- *``gitk``:* Provides a GUI view of the branches and changes in your
local Git repository. This command is a good way to graphically see
where things have diverged in your local repository.
.. note::

   You need to install the ``gitk`` package on your development system
   to use this command.
- *``git log``:* Reports a history of your commits to the repository.
This report lists all commits regardless of whether you have pushed
them upstream or not.
- *``git diff``:* Displays line-by-line differences between a local
working file and the same file as understood by Git. This command is
useful to see what you have changed in any given file.
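The commands above can be exercised together in a throwaway local
repository. The following hedged sketch (the repository, branch, and
file names are invented for illustration) walks through the
isolated-branch workflow described under ``git merge``:

```shell
# Create a scratch repository; names here are illustrative only.
git init demo-repo
cd demo-repo
git config user.email "you@example.com"
git config user.name "Example User"

# Initial commit on the default branch.
echo "hello" > README
git add README
git commit -m "Add README"

# Create and check out an isolated working branch.
git checkout -b my-feature
echo "a change" >> README
git add README
git commit -m "Update README"
git status                 # reports a clean tree after the commit

# Switch back to the default branch and merge the isolated work.
git checkout -
git merge my-feature
git branch -D my-feature   # the branch is safe to delete after the merge
git log --oneline          # history now contains both commits
```

Because the feature branch was merged before deletion, no commits are
lost when it is removed.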
Licensing
=========
Because open source projects are open to the public, they have different
licensing structures in place. License evolution for both Open Source
and Free Software has an interesting history. If you are interested in
this history, you can find basic information here:
- `Open source license
history <http://en.wikipedia.org/wiki/Open-source_license>`__
- `Free software license
history <http://en.wikipedia.org/wiki/Free_software_license>`__
In general, the Yocto Project is broadly licensed under the
Massachusetts Institute of Technology (MIT) License. MIT licensing
permits the reuse of software within proprietary software as long as the
license is distributed with that software. MIT is also compatible with
the GNU General Public License (GPL). Patches to the Yocto Project
follow the upstream licensing scheme. You can find information on the
MIT license
`here <http://www.opensource.org/licenses/mit-license.php>`__. You can
find information on the GNU GPL
`here <http://www.opensource.org/licenses/GPL-3.0>`__.
When you build an image using the Yocto Project, the build process uses
a known list of licenses to ensure compliance. You can find this list in
the `Source Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__ at
``meta/files/common-licenses``. Once the build completes, the list of
all licenses found and used during that build are kept in the `Build
Directory <&YOCTO_DOCS_REF_URL;#build-directory>`__ at
``tmp/deploy/licenses``.
If a module requires a license that is not in the base list, the build
process generates a warning during the build. These tools make it easier
for a developer to be certain of the licenses with which their shipped
products must comply. However, even with these tools it is still up to
the developer to resolve potential licensing issues.
The base list of licenses used by the build process is a combination of
the Software Package Data Exchange (SPDX) list and the Open Source
Initiative (OSI) projects. `SPDX Group <http://spdx.org>`__ is a working
group of the Linux Foundation that maintains a specification for a
standard format for communicating the components, licenses, and
copyrights associated with a software package.
`OSI <http://opensource.org>`__ is a corporation dedicated to the Open
Source Definition and the effort for reviewing and approving licenses
that conform to the Open Source Definition (OSD).
You can find a list of the combined SPDX and OSI licenses that the Yocto
Project uses in the ``meta/files/common-licenses`` directory in your
`Source Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__.
For information that can help you maintain compliance with various open
source licensing during the lifecycle of a product created using the
Yocto Project, see the "`Maintaining Open Source License Compliance
During Your Product's
Lifecycle <&YOCTO_DOCS_DEV_URL;#maintaining-open-source-license-compliance-during-your-products-lifecycle>`__"
section in the Yocto Project Development Tasks Manual.

**********************************************
The Yocto Project Overview and Concepts Manual
**********************************************
.. _overview-manual-welcome:
Welcome
=======
Welcome to the Yocto Project Overview and Concepts Manual! This manual
introduces the Yocto Project by providing concepts, software overviews,
best-known-methods (BKMs), and any other high-level introductory
information suitable for a new Yocto Project user.
The following list describes what you can get from this manual:
- `Introducing the Yocto Project <#overview-yp>`__\ *:* This chapter
provides an introduction to the Yocto Project. You will learn about
features and challenges of the Yocto Project, the layer model,
components and tools, development methods, the
`Poky <&YOCTO_DOCS_REF_URL;#poky>`__ reference distribution, the
OpenEmbedded build system workflow, and some basic Yocto terms.
- `The Yocto Project Development
Environment <#overview-development-environment>`__\ *:* This chapter
helps you get started understanding the Yocto Project development
environment. You will learn about open source, development hosts,
Yocto Project source repositories, workflows using Git and the Yocto
Project, a Git primer, and information about licensing.
- `Yocto Project Concepts <#overview-manual-concepts>`__\ *:* This
chapter presents various concepts regarding the Yocto Project. You
can find conceptual information about components, development,
cross-toolchains, and so forth.
This manual does not give you the following:
- *Step-by-step Instructions for Development Tasks:* Instructional
procedures reside in other manuals within the Yocto Project
documentation set. For example, the `Yocto Project Development Tasks
Manual <&YOCTO_DOCS_DEV_URL;>`__ provides examples on how to perform
various development tasks. As another example, the `Yocto Project
Application Development and the Extensible Software Development Kit
(eSDK) <&YOCTO_DOCS_SDK_URL;>`__ manual contains detailed
instructions on how to install an SDK, which is used to develop
applications for target hardware.
- *Reference Material:* This type of material resides in an appropriate
reference manual. For example, system variables are documented in the
`Yocto Project Reference Manual <&YOCTO_DOCS_REF_URL;>`__. As another
example, the `Yocto Project Board Support Package (BSP) Developer's
Guide <&YOCTO_DOCS_BSP_URL;>`__ contains reference information on
BSPs.
- *Detailed Public Information Not Specific to the Yocto Project:* For
example, exhaustive information on how to use the Source Control
Manager Git is better covered with Internet searches and official Git
Documentation than through the Yocto Project documentation.
.. _overview-manual-other-information:
Other Information
=================
Because this manual presents information for many different topics,
supplemental information is recommended for full comprehension. For
additional introductory information on the Yocto Project, see the `Yocto
Project Website <&YOCTO_HOME_URL;>`__. If you want to build an image
with no knowledge of Yocto Project as a way of quickly testing it out,
see the `Yocto Project Quick Build <&YOCTO_DOCS_BRIEF_URL;>`__ document.
For a comprehensive list of links and other documentation, see the
"`Links and Related
Documentation <&YOCTO_DOCS_REF_URL;#resources-links-and-related-documentation>`__"
section in the Yocto Project Reference Manual.

*****************************
Introducing the Yocto Project
*****************************
What is the Yocto Project?
==========================
The Yocto Project is an open source collaboration project that helps
developers create custom Linux-based systems that are designed for
embedded products regardless of the product's hardware architecture.
Yocto Project provides a flexible toolset and a development environment
that allows embedded device developers across the world to collaborate
through shared technologies, software stacks, configurations, and best
practices used to create these tailored Linux images.
Thousands of developers worldwide have discovered that Yocto Project
provides advantages in both systems and applications development,
archival and management benefits, and customizations used for speed,
footprint, and memory utilization. The project is a standard when it
comes to delivering embedded software stacks. The project allows
software customizations and build interchange for multiple hardware
platforms as well as software stacks that can be maintained and scaled.
For further introductory information on the Yocto Project, you might be
interested in this
`article <https://www.embedded.com/electronics-blogs/say-what-/4458600/Why-the-Yocto-Project-for-my-IoT-Project->`__
by Drew Moseley and in this short introductory
`video <https://www.youtube.com/watch?v=utZpKM7i5Z4>`__.
The remainder of this section overviews advantages and challenges tied
to the Yocto Project.
.. _gs-features:
Features
--------
The following list describes features and advantages of the Yocto
Project:
- *Widely Adopted Across the Industry:* Semiconductor, operating
system, software, and service vendors exist whose products and
services adopt and support the Yocto Project. For a look at the Yocto
Project community and the companies involved with the Yocto Project,
see the "COMMUNITY" and "ECOSYSTEM" tabs on the `Yocto
Project <&YOCTO_HOME_URL;>`__ home page.
- *Architecture Agnostic:* Yocto Project supports Intel, ARM, MIPS,
AMD, PPC and other architectures. Most ODMs, OSVs, and chip vendors
create and supply BSPs that support their hardware. If you have
custom silicon, you can create a BSP that supports that architecture.
Aside from lots of architecture support, the Yocto Project fully
supports a wide range of device emulation through the Quick EMUlator
(QEMU).
- *Images and Code Transfer Easily:* Yocto Project output can easily
move between architectures without moving to new development
environments. Additionally, if you have used the Yocto Project to
create an image or application and you find yourself not able to
support it, commercial Linux vendors such as Wind River, Mentor
Graphics, Timesys, and ENEA could take it and provide ongoing
support. These vendors have offerings that are built using the Yocto
Project.
- *Flexibility:* Corporations use the Yocto Project many different
ways. One example is to create an internal Linux distribution as a
code base the corporation can use across multiple product groups.
Through customization and layering, a project group can leverage the
base Linux distribution to create a distribution that works for their
product needs.
- *Ideal for Constrained Embedded and IoT devices:* Unlike a full Linux
distribution, you can use the Yocto Project to create exactly what
you need for embedded devices. You only add the feature support or
packages that you absolutely need for the device. For devices that
have display hardware, you can use available system components such
as X11, GTK+, Qt, Clutter, and SDL (among others) to create a rich
user experience. For devices that do not have a display or where you
want to use alternative UI frameworks, you can choose to not install
these components.
- *Comprehensive Toolchain Capabilities:* Toolchains for supported
architectures satisfy most use cases. However, if your hardware
supports features that are not part of a standard toolchain, you can
easily customize that toolchain through specification of
platform-specific tuning parameters. And, should you need to use a
third-party toolchain, mechanisms built into the Yocto Project allow
for that.
- *Mechanism Rules Over Policy:* Focusing on mechanism rather than
policy ensures that you are free to set policies based on the needs
of your design instead of adopting decisions enforced by some system
software provider.
- *Uses a Layer Model:* The Yocto Project `layer
infrastructure <#the-yocto-project-layer-model>`__ groups related
functionality into separate bundles. You can incrementally add these
grouped functionalities to your project as needed. Using layers to
isolate and group functionality reduces project complexity and
redundancy, allows you to easily extend the system, make
customizations, and keep functionality organized.
- *Supports Partial Builds:* You can build and rebuild individual
packages as needed. Yocto Project accomplishes this through its
`shared-state cache <#shared-state-cache>`__ (sstate) scheme. Being
able to build and debug components individually eases project
development.
- *Releases According to a Strict Schedule:* Major releases occur on a
`six-month cycle <&YOCTO_DOCS_REF_URL;#ref-release-process>`__
predictably in October and April. The most recent two releases
support point releases to address common vulnerabilities and
exposures. This predictability is crucial for projects based on the
Yocto Project and allows development teams to plan activities.
- *Rich Ecosystem of Individuals and Organizations:* For open source
projects, the value of community is very important. Support forums,
expertise, and active developers who continue to push the Yocto
Project forward are readily available.
- *Binary Reproducibility:* The Yocto Project allows you to be very
specific about dependencies and achieves very high percentages of
binary reproducibility (e.g. 99.8% for ``core-image-minimal``). When
distributions are not specific about which packages are pulled in and
in what order to support dependencies, other build systems can
arbitrarily include packages.
- *License Manifest:* The Yocto Project provides a `license
manifest <&YOCTO_DOCS_DEV_URL;#maintaining-open-source-license-compliance-during-your-products-lifecycle>`__
for review by people who need to track the use of open source
licenses (e.g. legal teams).
.. _gs-challenges:
Challenges
----------
The following list presents challenges you might encounter when
developing using the Yocto Project:
- *Steep Learning Curve:* The Yocto Project has a steep learning curve
and has many different ways to accomplish similar tasks. It can be
difficult to choose how to proceed when varying methods exist by
which to accomplish a given task.
- *Understanding What Changes You Need to Make For Your Design Requires
Some Research:* Beyond the simple tutorial stage, understanding what
changes need to be made for your particular design can require a
significant amount of research and investigation. For information
that helps you transition from trying out the Yocto Project to using
it for your project, see the "`What I wish I'd
Known <&YOCTO_DOCS_URL;/what-i-wish-id-known/>`__" and
"`Transitioning to a Custom Environment for Systems
Development <&YOCTO_DOCS_URL;/transitioning-to-a-custom-environment/>`__"
documents on the Yocto Project website.
- *Project Workflow Could Be Confusing:* The `Yocto Project
workflow <#overview-development-environment>`__ could be confusing if
you are used to traditional desktop and server software development.
In a desktop development environment, mechanisms exist to easily pull
and install new packages, which are typically pre-compiled binaries
from servers accessible over the Internet. Using the Yocto Project,
you must modify your configuration and rebuild to add additional
packages.
- *Working in a Cross-Build Environment Can Feel Unfamiliar:* When
developing code to run on a target, compilation, execution, and
testing done on the actual target can be faster than running a
BitBake build on a development host and then deploying binaries to
the target for test. While the Yocto Project does support development
tools on the target, the additional step of integrating your changes
back into the Yocto Project build environment would be required.
Yocto Project supports an intermediate approach that involves making
changes on the development system within the BitBake environment and
then deploying only the updated packages to the target.
The Yocto Project `OpenEmbedded build
system <&YOCTO_DOCS_REF_URL;#build-system-term>`__ produces packages
in standard formats (i.e. RPM, DEB, IPK, and TAR). You can deploy
these packages into the running system on the target by using
utilities on the target such as ``rpm`` or ``opkg``.
- *Initial Build Times Can be Significant:* Long initial build times
are unfortunately unavoidable due to the large number of packages
initially built from scratch for a fully functioning Linux system.
Once that initial build is completed, however, the shared-state
(sstate) cache mechanism Yocto Project uses keeps the system from
rebuilding packages that have not been "touched" since the last
build. The sstate mechanism significantly reduces times for
successive builds.
The Yocto Project Layer Model
=============================
The Yocto Project's "Layer Model" is a development model for embedded
and IoT Linux creation that distinguishes the Yocto Project from other
simple build systems. The Layer Model simultaneously supports
collaboration and customization. Layers are repositories that contain
related sets of instructions that tell the `OpenEmbedded build
system <&YOCTO_DOCS_REF_URL;#build-system-term>`__ what to do. You can
collaborate, share, and reuse layers.
Layers can contain changes to previous instructions or settings at any
time. This powerful override capability is what allows you to customize
previously supplied collaborative or community layers to suit your
product requirements.
You use different layers to logically separate information in your
build. As an example, you could have BSP, GUI, distro configuration,
middleware, or application layers. Putting your entire build into one
layer limits and complicates future customization and reuse. Isolating
information into layers, on the other hand, helps simplify future
customizations and reuse. You might find it tempting to keep everything
in one layer when working on a single project. However, the more modular
your Metadata, the easier it is to cope with future changes.
.. note::

   -  Use Board Support Package (BSP) layers from silicon vendors when
      possible.

   -  Familiarize yourself with the `Yocto Project curated layer
      index <https://www.yoctoproject.org/software-overview/layers/>`__
      or the `OpenEmbedded layer
      index <http://layers.openembedded.org/layerindex/branch/master/layers/>`__.
      The latter contains more layers but they are less universally
      validated.

   -  Layers support the inclusion of technologies, hardware
      components, and software components. The `Yocto Project
      Compatible <&YOCTO_DOCS_DEV_URL;#making-sure-your-layer-is-compatible-with-yocto-project>`__
      designation provides a minimum level of standardization that
      contributes to a strong ecosystem. "YP Compatible" is applied to
      appropriate products and software components such as BSPs, other
      OE-compatible layers, and related open-source projects, allowing
      the producer to use Yocto Project badges and branding assets.
To illustrate how layers are used to keep things modular, consider
machine customizations. These types of customizations typically reside
in a special layer, rather than a general layer, called a BSP Layer.
Furthermore, the machine customizations should be isolated from recipes
and Metadata that support a new GUI environment, for example. This
situation gives you a couple of layers: one for the machine
configurations, and one for the GUI environment. It is important to
understand, however, that the BSP layer can still make machine-specific
additions to recipes within the GUI environment layer without polluting
the GUI layer itself with those machine-specific changes. You can
accomplish this through a recipe that is a BitBake append
(``.bbappend``) file, which is described later in this section.
.. note::

   For general information on BSP layer structure, see the Yocto
   Project Board Support Packages (BSP) Developer's Guide.
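To make the ``.bbappend`` mechanism concrete, here is a minimal sketch
of a machine-specific append file. The layer name, machine name, recipe
name, and configuration file are all hypothetical, invented for
illustration only; they do not come from any actual layer:

```
# Hypothetical file:
# meta-mybsp/recipes-graphics/example-gui/example-gui_%.bbappend
#
# Prepend this layer's directory to the file search path so that files
# shipped alongside this append file are found first.
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

# Add a settings file only when building for the hypothetical machine
# "mymachine"; the base recipe's install steps must handle the file.
SRC_URI_append_mymachine = " file://custom-settings.conf"
```

In this way the BSP layer layers its machine-specific change on top of
the GUI layer's recipe without modifying the GUI layer itself.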
The `Source Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__
contains both general layers and BSP layers right out of the box. You
can easily identify layers that ship with a Yocto Project release in the
Source Directory by their names. Layers typically have names that begin
with the string ``meta-``.
.. note::

   It is not a requirement that a layer name begin with the prefix
   ``meta-``, but it is a commonly accepted standard in the Yocto
   Project community.
For example, if you were to examine the `tree
view <https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/>`__ of the
``poky`` repository, you would see several layers: ``meta``,
``meta-skeleton``, ``meta-selftest``, ``meta-poky``, and
``meta-yocto-bsp``. Each of these repositories represents a distinct
layer.
For procedures on how to create layers, see the "`Understanding and
Creating
Layers <&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers>`__"
section in the Yocto Project Development Tasks Manual.
Components and Tools
====================
The Yocto Project employs a collection of components and tools used by
the project itself, by project developers, and by those using the Yocto
Project. These components and tools are open source projects and
metadata that are separate from the reference distribution
(`Poky <&YOCTO_DOCS_REF_URL;#poky>`__) and the `OpenEmbedded build
system <&YOCTO_DOCS_REF_URL;#build-system-term>`__. Most of the
components and tools are downloaded separately.
This section provides brief overviews of the components and tools
associated with the Yocto Project.
.. _gs-development-tools:
Development Tools
-----------------
The following list consists of tools that help you develop images and
applications using the Yocto Project:
- *CROPS:* `CROPS <https://github.com/crops/poky-container/>`__ is an
open source, cross-platform development framework that leverages
`Docker Containers <https://www.docker.com/>`__. CROPS provides an
easily managed, extensible environment that allows you to build
binaries for a variety of architectures on Windows, Linux and Mac OS
X hosts.
- *``devtool``:* This command-line tool is available as part of the
extensible SDK (eSDK) and is its cornerstone. You can use ``devtool``
to help build, test, and package software within the eSDK. You can
use the tool to optionally integrate what you build into an image
built by the OpenEmbedded build system.
The ``devtool`` command employs a number of sub-commands that allow
you to add, modify, and upgrade recipes. As with the OpenEmbedded
build system, “recipes” represent software packages within
``devtool``. When you use ``devtool add``, a recipe is automatically
created. When you use ``devtool modify``, the specified existing
recipe is used in order to determine where to get the source code and
how to patch it. In both cases, an environment is set up so that when
you build the recipe a source tree that is under your control is used
in order to allow you to make changes to the source as desired. By
default, both new recipes and the source go into a “workspace”
directory under the eSDK. The ``devtool upgrade`` command updates an
existing recipe so that you can build it for an updated set of source
files.
You can read about the ``devtool`` workflow in the Yocto Project
Application Development and Extensible Software Development Kit
(eSDK) Manual in the "`Using ``devtool`` in Your SDK
Workflow <&YOCTO_DOCS_SDK_URL;#using-devtool-in-your-sdk-workflow>`__"
section.
- *Extensible Software Development Kit (eSDK):* The eSDK provides a
cross-development toolchain and libraries tailored to the contents of
a specific image. The eSDK makes it easy to add new applications and
libraries to an image, modify the source for an existing component,
test changes on the target hardware, and integrate into the rest of
the OpenEmbedded build system. The eSDK gives you a toolchain
experience supplemented with the powerful set of ``devtool`` commands
tailored for the Yocto Project environment.
For information on the eSDK, see the `Yocto Project Application
Development and the Extensible Software Development Kit
(eSDK) <&YOCTO_DOCS_SDK_URL;>`__ Manual.
- *Toaster:* Toaster is a web interface to the Yocto Project
OpenEmbedded build system. Toaster allows you to configure, run, and
view information about builds. For information on Toaster, see the
`Toaster User Manual <&YOCTO_DOCS_TOAST_URL;>`__.
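To make the ``devtool`` workflow above concrete, here is an illustrative round trip within the eSDK (the recipe name, source URL, target address, and layer name are all hypothetical)::

   # Create a new recipe from an external source tree
   $ devtool add example https://example.com/example-1.0.tar.gz
   # Build the recipe from the workspace source tree
   $ devtool build example
   # Deploy the result to a running target for testing
   $ devtool deploy-target example root@192.168.7.2
   # Merge the finished recipe into a layer of your choice
   $ devtool finish example meta-mylayer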
.. _gs-production-tools:
Production Tools
----------------
The following list consists of tools that help with production-related
activities using the Yocto Project:
- *Auto Upgrade Helper:* This utility, when used in conjunction with the
`OpenEmbedded build
system <&YOCTO_DOCS_REF_URL;#build-system-term>`__ (BitBake and
OE-Core), automatically generates upgrades for recipes based
on new versions of the recipes published upstream.
- *Recipe Reporting System:* The Recipe Reporting System tracks recipe
versions available for the Yocto Project. The main purpose of the system
is to help you manage the recipes you maintain and to offer a dynamic
overview of the project. The Recipe Reporting System is built on top
of the `OpenEmbedded Layer
Index <http://layers.openembedded.org/layerindex/layers/>`__, which
is a website that indexes OpenEmbedded-Core layers.
- *Patchwork:* `Patchwork <http://jk.ozlabs.org/projects/patchwork/>`__
is a fork of a project originally started by
`OzLabs <http://ozlabs.org/>`__. The project is a web-based tracking
system designed to streamline the process of bringing contributions
into a project. The Yocto Project uses Patchwork as an organizational
tool to handle patches, which number in the thousands for every
release.
- *AutoBuilder:* AutoBuilder is a project that automates build tests
and quality assurance (QA). By using the public AutoBuilder, anyone
can determine the status of the current "master" branch of Poky.
.. note::

   AutoBuilder is based on buildbot.
A goal of the Yocto Project is to lead the open source industry with
a project that automates testing and QA procedures. In doing so, the
project encourages a development community that publishes QA and test
plans, publicly demonstrates QA and test plans, and encourages
development of tools that automate test and QA procedures for the
benefit of the development community.
You can learn more about the AutoBuilder used by the Yocto Project
`here <&YOCTO_AB_URL;>`__.
- *Cross-Prelink:* Prelinking is the process of pre-computing the load
addresses and link tables generated by the dynamic linker as compared
to doing this at runtime. Doing this ahead of time results in
performance improvements when the application is launched and reduced
memory usage for libraries shared by many applications.
Historically, cross-prelink is a variant of prelink, which was
conceived by `Jakub
Jelínek <http://people.redhat.com/jakub/prelink.pdf>`__ a number of
years ago. Both prelink and cross-prelink are maintained in the same
repository albeit on separate branches. By providing an emulated
runtime dynamic linker (i.e. ``glibc``-derived ``ld.so`` emulation),
the cross-prelink project extends the prelink software's ability to
prelink a sysroot environment. Additionally, the cross-prelink
software is able to work in sysroot-style environments.
The dynamic linker determines standard load address calculations
based on a variety of factors such as mapping addresses, library
usage, and library function conflicts. The prelink tool uses this
information, from the dynamic linker, to determine unique load
addresses for executable and linkable format (ELF) binaries that are
shared libraries and dynamically linked. The prelink tool modifies
these ELF binaries with the pre-computed information. The result is
faster loading and often lower memory consumption because more of the
library code can be re-used from shared Copy-On-Write (COW) pages.
The original upstream prelink project only supports running prelink
on the end target device due to the reliance on the target device's
dynamic linker. This restriction causes issues when developing a
cross-compiled system. Cross-prelink adds a synthesized dynamic
loader that runs on the host, thus permitting cross-prelinking
without ever having to run on a read-write target filesystem.
- *Pseudo:* Pseudo is the Yocto Project implementation of
`fakeroot <http://man.he.net/man1/fakeroot>`__, which is used to run
commands in an environment that seemingly has root privileges.
During a build, it can be necessary to perform operations that
require system administrator privileges. For example, file ownership
or permissions might need definition. Pseudo is a tool that you can
either use directly or through the environment variable
``LD_PRELOAD``. Either method allows these operations to succeed as
if system administrator privileges exist even when they do not.
You can read more about Pseudo in the "`Fakeroot and
Pseudo <#fakeroot-and-pseudo>`__" section.
.. _gs-openembedded-build-system:
OpenEmbedded Build System Components
-------------------------------------
The following list consists of components associated with the
`OpenEmbedded build system <&YOCTO_DOCS_REF_URL;#build-system-term>`__:
- *BitBake:* BitBake is a core component of the Yocto Project and is
used by the OpenEmbedded build system to build images. While BitBake
is key to the build system, BitBake is maintained separately from the
Yocto Project.
BitBake is a generic task execution engine that allows shell and
Python tasks to be run efficiently and in parallel while working
within complex inter-task dependency constraints. In short, BitBake
is a build engine that works through recipes written in a specific
format in order to perform sets of tasks.
You can learn more about BitBake in the `BitBake User
Manual <&YOCTO_DOCS_BB_URL;>`__.
- *OpenEmbedded-Core:* OpenEmbedded-Core (OE-Core) is a common layer of
metadata (i.e. recipes, classes, and associated files) used by
OpenEmbedded-derived systems, which includes the Yocto Project. The
Yocto Project and the OpenEmbedded Project both maintain the
OpenEmbedded-Core. You can find the OE-Core metadata in the Yocto
Project `Source
Repositories <&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/tree/meta>`__.
Historically, the Yocto Project integrated the OE-Core metadata
throughout the Yocto Project source repository reference system
(Poky). After Yocto Project Version 1.0, the Yocto Project and
OpenEmbedded agreed to work together and share a common core set of
metadata (OE-Core), which contained much of the functionality
previously found in Poky. This collaboration achieved a long-standing
OpenEmbedded objective for having a more tightly controlled and
quality-assured core. The results also fit well with the Yocto
Project objective of achieving a smaller number of fully featured
tools as compared to many different ones.
Sharing a core set of metadata results in Poky as an integration
layer on top of OE-Core. You can see that in this
`figure <#yp-key-dev-elements>`__. The Yocto Project combines various
components such as BitBake, OE-Core, script “glue”, and documentation
for its build system.
.. _gs-reference-distribution-poky:
Reference Distribution (Poky)
-----------------------------
Poky is the Yocto Project reference distribution. It contains the
`OpenEmbedded build system <&YOCTO_DOCS_REF_URL;#build-system-term>`__
(BitBake and OE-Core) as well as a set of metadata to get you started
building your own distribution. See the
`figure <#what-is-the-yocto-project>`__ in "What is the Yocto Project?"
section for an illustration that shows Poky and its relationship with
other parts of the Yocto Project.
To use the Yocto Project tools and components, you can download
(``clone``) Poky and use it to bootstrap your own distribution.
.. note::
Poky does not contain binary files. It is a working example of how to
build your own custom Linux distribution from source.
You can read more about Poky in the "`Reference Embedded Distribution
(Poky) <#reference-embedded-distribution>`__" section.
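For example, bootstrapping a build from Poky generally follows this pattern (``core-image-minimal`` is just one of the images Poky provides)::

   $ git clone git://git.yoctoproject.org/poky
   $ cd poky
   # Set up the build environment; creates and enters a "build" directory
   $ source oe-init-build-env
   # Build a minimal bootable image
   $ bitbake core-image-minimal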
.. _gs-packages-for-finished-targets:
Packages for Finished Targets
-----------------------------
The following lists components associated with packages for finished
targets:
- *Matchbox:* Matchbox is an Open Source, base environment for the X
Window System running on non-desktop, embedded platforms such as
handhelds, set-top boxes, kiosks, and anything else for which screen
space, input mechanisms, or system resources are limited.
Matchbox consists of a number of interchangeable and optional
applications that you can tailor to a specific, non-desktop platform
to enhance usability in constrained environments.
You can find the Matchbox source in the Yocto Project `Source
Repositories <&YOCTO_GIT_URL;>`__.
- *Opkg:* Open PacKaGe management (opkg) is a lightweight package
management system based on the itsy package (ipkg) management system.
Opkg is written in C and resembles Advanced Package Tool (APT) and
Debian Package (dpkg) in operation.
Opkg is intended for use on embedded Linux devices and is used in
this capacity in the
`OpenEmbedded <http://www.openembedded.org/wiki/Main_Page>`__ and
`OpenWrt <https://openwrt.org/>`__ projects, as well as the Yocto
Project.
.. note::
As best it can, opkg maintains backwards compatibility with ipkg
and conforms to a subset of Debian's policy manual regarding
control files.
.. _gs-archived-components:
Archived Components
-------------------
The Build Appliance is a virtual machine image that enables you to build
and boot a custom embedded Linux image with the Yocto Project using a
non-Linux development system.
Historically, the Build Appliance was the second of three methods by
which you could use the Yocto Project on a system that was not native to
Linux.
1. *Hob:* Hob, which is now deprecated and no longer available since
the 2.1 release of the Yocto Project, provided a rudimentary,
GUI-based interface to the Yocto Project. Toaster has fully replaced
Hob.
2. *Build Appliance:* Post Hob, the Build Appliance became available. It
was never recommended that you use the Build Appliance as a
day-to-day production development environment with the Yocto Project.
Build Appliance was useful as a way to try out development in the
Yocto Project environment.
3. *CROPS:* The final and best solution available now for developing
using the Yocto Project on a system not native to Linux is with
`CROPS <#gs-crops-overview>`__.
.. _gs-development-methods:
Development Methods
===================
The Yocto Project development environment usually involves a `Build
Host <&YOCTO_DOCS_REF_URL;#hardware-build-system-term>`__ and target
hardware. You use the Build Host to build images and develop
applications, while you use the target hardware to test deployed
software.
This section provides an introduction to the choices of development
methods you have when setting up your Build Host. Depending on your
particular workflow preference and the type of operating system your
Build Host runs, several choices exist that allow you to use the Yocto
Project.
.. note::

   For additional detail about the Yocto Project development
   environment, see the "The Yocto Project Development Environment"
   chapter.
- *Native Linux Host:* By far the best option for a Build Host. A
system running Linux as its native operating system allows you to
develop software by directly using the
`BitBake <&YOCTO_DOCS_REF_URL;#bitbake-term>`__ tool. You can
accomplish all aspects of development from a familiar shell of a
supported Linux distribution.
For information on how to set up a Build Host on a system running
Linux as its native operating system, see the "`Setting Up a Native
Linux Host <&YOCTO_DOCS_DEV_URL;#setting-up-a-native-linux-host>`__"
section in the Yocto Project Development Tasks Manual.
- *CROss PlatformS (CROPS):* Typically, you use
`CROPS <https://github.com/crops/poky-container/>`__, which leverages
`Docker Containers <https://www.docker.com/>`__, to set up a Build
Host that is not running Linux (e.g. Microsoft Windows or macOS).
.. note::
You can, however, use CROPS on a Linux-based system.
CROPS is an open source, cross-platform development framework that
provides an easily managed, extensible environment for building
binaries targeted for a variety of architectures on Windows, macOS,
or Linux hosts. Once the Build Host is set up using CROPS, you can
prepare a shell environment to mimic that of a shell being used on a
system natively running Linux.
For information on how to set up a Build Host with CROPS, see the
"`Setting Up to Use CROss PlatformS
(CROPS) <&YOCTO_DOCS_DEV_URL;#setting-up-to-use-crops>`__" section in
the Yocto Project Development Tasks Manual.
- *Windows Subsystem For Linux (WSLv2):* You may use Windows Subsystem
For Linux v2 to set up a build host using Windows 10.
.. note::

   The Yocto Project is not compatible with WSLv1. It is compatible
   with, but not officially supported or validated on, WSLv2. If you
   still decide to use WSL, please upgrade to WSLv2.
The Windows Subsystem For Linux allows Windows 10 to run a real Linux
kernel inside of a lightweight utility virtual machine (VM) using
virtualization technology.
For information on how to set up a Build Host with WSLv2, see the
"`Setting Up to Use Windows Subsystem For
Linux <&YOCTO_DOCS_DEV_URL;#setting-up-to-use-wsl>`__" section in the
Yocto Project Development Tasks Manual.
- *Toaster:* Regardless of what your Build Host is running, you can use
Toaster to develop software using the Yocto Project. Toaster is a web
interface to the Yocto Project's `OpenEmbedded build
system <&YOCTO_DOCS_REF_URL;#build-system-term>`__. The interface
enables you to configure and run your builds. Information about
builds is collected and stored in a database. You can use Toaster to
configure and start builds on multiple remote build servers.
For information about and how to use Toaster, see the `Toaster User
Manual <&YOCTO_DOCS_TOAST_URL;>`__.
.. _reference-embedded-distribution:
Reference Embedded Distribution (Poky)
======================================
"Poky", which is pronounced *Pock*-ee, is the name of the Yocto
Project's reference distribution or Reference OS Kit. Poky contains the
`OpenEmbedded Build System <&YOCTO_DOCS_REF_URL;#build-system-term>`__
(`BitBake <&YOCTO_DOCS_REF_URL;#bitbake-term>`__ and
`OpenEmbedded-Core <&YOCTO_DOCS_REF_URL;#oe-core>`__) as well as a set
of `metadata <&YOCTO_DOCS_REF_URL;#metadata>`__ to get you started
building your own distro. In other words, Poky is a base specification
of the functionality needed for a typical embedded system as well as the
components from the Yocto Project that allow you to build a distribution
into a usable binary image.
Poky is a combined repository of BitBake, OpenEmbedded-Core (which is
found in ``meta``), ``meta-poky``, ``meta-yocto-bsp``, and documentation
provided all together and known to work well together. You can view
these items that make up the Poky repository in the `Source
Repositories <&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/tree/>`__.
.. note::

   If you are interested in all the contents of the ``poky`` Git
   repository, see the "Top-Level Core Components" section in the
   Yocto Project Reference Manual.
The following figure illustrates what generally comprises Poky:
- BitBake is a task executor and scheduler that is the heart of the
OpenEmbedded build system.
- ``meta-poky``, which is Poky-specific metadata.
- ``meta-yocto-bsp``, which are Yocto Project-specific Board Support
Packages (BSPs).
- OpenEmbedded-Core (OE-Core) metadata, which includes shared
configurations, global variable definitions, shared classes,
packaging, and recipes. Classes define the encapsulation and
inheritance of build logic. Recipes are the logical units of software
and images to be built.
- Documentation, which contains the Yocto Project source files used to
make the set of user manuals.
.. note::
While Poky is a "complete" distribution specification and is tested
and put through QA, you cannot use it as a product "out of the box"
in its current form.
To use the Yocto Project tools, you can use Git to clone (download) the
Poky repository, then use your local copy of the reference distribution
to bootstrap your own distribution.
.. note::
Poky does not contain binary files. It is a working example of how to
build your own custom Linux distribution from source.
Poky has a regular, well-established, six-month release cycle under its
own version. Major Poky releases occur at the same time as major Yocto
Project releases (point releases), which typically happen in the Spring
and Fall. For more information on the Yocto Project release schedule and
cadence, see the "`Yocto Project Releases and the Stable Release
Process <&YOCTO_DOCS_REF_URL;#ref-release-process>`__" chapter in the
Yocto Project Reference Manual.
Much has been said about Poky being a "default configuration." A default
configuration provides a starting image footprint. You can use Poky out
of the box to create an image ranging from a shell-accessible minimal
image all the way up to a Linux Standard Base-compliant image that uses
a GNOME Mobile and Embedded (GMAE) based reference user interface called
Sato.
One of the most powerful properties of Poky is that every aspect of a
build is controlled by the metadata. You can use metadata to augment
these base image types by adding metadata
`layers <#the-yocto-project-layer-model>`__ that extend functionality.
These layers can provide, for example, an additional software stack for
an image type, add a board support package (BSP) for additional
hardware, or even create a new image type.
Metadata is loosely grouped into configuration files or package recipes.
A recipe is a collection of non-executable metadata used by BitBake to
set variables or define additional build-time tasks. A recipe contains
fields such as the recipe description, the recipe version, the license
of the package and the upstream source repository. A recipe might also
indicate that the build process uses autotools, make, distutils or any
other build process, in which case the basic functionality can be
defined by the classes it inherits from the OE-Core layer's class
definitions in ``./meta/classes``. Within a recipe you can also define
additional tasks as well as task prerequisites. Recipe syntax through
BitBake also supports both ``_prepend`` and ``_append`` operators as a
method of extending task functionality. These operators inject code into
the beginning or end of a task. For information on these BitBake
operators, see the "`Appending and Prepending (Override Style
Syntax) <&YOCTO_DOCS_BB_URL;#appending-and-prepending-override-style-syntax>`__"
section in the BitBake User's Manual.
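As a sketch of the override-style operators described above (the recipe and file names are hypothetical), a ``.bbappend`` file might inject extra steps at the end of an existing task::

   # example_%.bbappend -- hypothetical append file for an "example" recipe
   FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
   SRC_URI_append = " file://example.conf"

   # Inject code at the end of the existing do_install task
   do_install_append() {
       install -d ${D}${sysconfdir}
       install -m 0644 ${WORKDIR}/example.conf ${D}${sysconfdir}/example.conf
   }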
.. _openembedded-build-system-workflow:
The OpenEmbedded Build System Workflow
======================================
The `OpenEmbedded build
system <&YOCTO_DOCS_REF_URL;#build-system-term>`__ uses a "workflow" to
accomplish image and SDK generation. The following figure overviews that
workflow. Below is a brief summary of the workflow:
1. Developers specify architecture, policies, patches and configuration
details.
2. The build system fetches and downloads the source code from the
specified location. The build system supports standard methods such
as tarballs or source code repositories such as Git.
3. Once source code is downloaded, the build system extracts the sources
into a local work area where patches are applied and common steps for
configuring and compiling the software are run.
4. The build system then installs the software into a temporary staging
area where the binary package format you select (DEB, RPM, or IPK) is
used to roll up the software.
5. Different QA and sanity checks run throughout the entire build process.
6. After the binaries are created, the build system generates a binary
package feed that is used to create the final root filesystem image.
7. The build system generates the file system image and a customized
Extensible SDK (eSDK) for application development in parallel.
For a very detailed look at this workflow, see the "`OpenEmbedded Build
System Concepts <#openembedded-build-system-build-concepts>`__" section.
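Assuming the default Build Directory layout and a ``qemux86-64`` machine (both assumptions for illustration), the workflow's intermediate and final outputs can be observed on disk after a build::

   $ bitbake core-image-minimal
   $ ls downloads/                        # fetched source archives (step 2)
   $ ls tmp/work/                         # per-recipe work areas (steps 3-4)
   $ ls tmp/deploy/images/qemux86-64/     # final images and artifacts (steps 6-7)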
Some Basic Terms
================
It helps to understand some fundamental terms when learning the
Yocto Project. Although a list of terms exists in the "`Yocto Project
Terms <&YOCTO_DOCS_REF_URL;#ref-terms>`__" section of the Yocto Project
Reference Manual, this section provides the definitions of some terms
helpful for getting started:
- *Configuration Files:* Files that hold global definitions of
variables, user-defined variables, and hardware configuration
information. These files tell the `OpenEmbedded build
system <&YOCTO_DOCS_REF_URL;#build-system-term>`__ what to build and
what to put into the image to support a particular platform.
- *Extensible Software Development Kit (eSDK):* A custom SDK for
application developers. This eSDK allows developers to incorporate
their library and programming changes back into the image to make
their code available to other application developers. For information
on the eSDK, see the `Yocto Project Application Development and the
Extensible Software Development Kit (eSDK) <&YOCTO_DOCS_SDK_URL;>`__
manual.
- *Layer:* A collection of related recipes. Layers allow you to
consolidate related metadata to customize your build. Layers also
isolate information used when building for multiple architectures.
Layers are hierarchical in their ability to override previous
specifications. You can include any number of available layers from
the Yocto Project and customize the build by adding your layers after
them. You can search the Layer Index for layers used within Yocto
Project.
For more detailed information on layers, see the "`Understanding and
Creating
Layers <&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers>`__"
section in the Yocto Project Development Tasks Manual. For a
discussion specifically on BSP Layers, see the "`BSP
Layers <&YOCTO_DOCS_BSP_URL;#bsp-layers>`__" section in the Yocto
Project Board Support Packages (BSP) Developer's Guide.
- *Metadata:* A key element of the Yocto Project is the Metadata that
is used to construct a Linux distribution and is contained in the
files that the OpenEmbedded build system parses when building an
image. In general, Metadata includes recipes, configuration files,
and other information that refers to the build instructions
themselves, as well as the data used to control what things get built
and the effects of the build. Metadata also includes commands and
data used to indicate what versions of software are used, from where
they are obtained, and changes or additions to the software itself
(patches or auxiliary files) that are used to fix bugs or customize
the software for use in a particular situation. OpenEmbedded-Core is
an important set of validated metadata.
- *OpenEmbedded Build System:* The terms "BitBake" and "build system"
are sometimes used for the OpenEmbedded Build System.
BitBake is a task scheduler and execution engine that parses
instructions (i.e. recipes) and configuration data. After a parsing
phase, BitBake creates a dependency tree to order the compilation,
schedules the compilation of the included code, and finally executes
the building of the specified custom Linux image (distribution).
BitBake is similar to the ``make`` tool.
During a build process, the build system tracks dependencies and
performs a native or cross-compilation of the package. As a first
step in a cross-build setup, the framework attempts to create a
cross-compiler toolchain (i.e. Extensible SDK) suited for the target
platform.
- *OpenEmbedded-Core (OE-Core):* OE-Core is metadata comprised of
foundation recipes, classes, and associated files that are meant to
be common among many different OpenEmbedded-derived systems,
including the Yocto Project. OE-Core is a curated subset of an
original repository developed by the OpenEmbedded community that has
been pared down into a smaller, core set of continuously validated
recipes. The result is a tightly controlled and quality-assured core
set of recipes.
You can see the Metadata in the ``meta`` directory of the Yocto
Project `Source
Repositories <http://git.yoctoproject.org/cgit/cgit.cgi>`__.
- *Packages:* In the context of the Yocto Project, this term refers to
a recipe's packaged output produced by BitBake (i.e. a "baked
recipe"). A package is generally the compiled binaries produced from
the recipe's sources. You "bake" something by running it through
BitBake.
It is worth noting that the term "package" can, in general, have
subtle meanings. For example, the packages referred to in the
"`Required Packages for the Build
Host <&YOCTO_DOCS_REF_URL;#required-packages-for-the-build-host>`__"
section in the Yocto Project Reference Manual are compiled binaries
that, when installed, add functionality to your Linux distribution.
Another point worth noting is that historically within the Yocto
Project, recipes were referred to as packages - thus, the existence
of several BitBake variables that are seemingly mis-named (e.g.
```PR`` <&YOCTO_DOCS_REF_URL;#var-PR>`__,
```PV`` <&YOCTO_DOCS_REF_URL;#var-PV>`__, and
```PE`` <&YOCTO_DOCS_REF_URL;#var-PE>`__).
- *Poky:* Poky is a reference embedded distribution and a reference
test configuration. Poky provides the following:
- A base-level functional distro used to illustrate how to customize
a distribution.
- A means by which to test the Yocto Project components (i.e. Poky
is used to validate the Yocto Project).
- A vehicle through which you can download the Yocto Project.
Poky is not a product level distro. Rather, it is a good starting
point for customization.
.. note::
Poky is an integration layer on top of OE-Core.
- *Recipe:* The most common form of metadata. A recipe contains a list
of settings and tasks (i.e. instructions) for building packages that
are then used to build the binary image. A recipe describes where you
get source code and which patches to apply. Recipes describe
dependencies for libraries or for other recipes as well as
configuration and compilation options. Related recipes are
consolidated into a layer.
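To make the recipe term concrete, here is a minimal, hypothetical recipe sketch (``example_0.1.bb``) that compiles and installs a single C file; the license checksum shown assumes the MIT license text shipped in OE-Core's common-licenses directory::

   SUMMARY = "Hypothetical hello-world example"
   LICENSE = "MIT"
   LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

   SRC_URI = "file://hello.c"
   S = "${WORKDIR}"

   do_compile() {
       ${CC} ${CFLAGS} ${LDFLAGS} hello.c -o hello
   }

   do_install() {
       install -d ${D}${bindir}
       install -m 0755 hello ${D}${bindir}
   }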

==========================================
Yocto Project Overview and Concepts Manual
==========================================
.. toctree::
:caption: Table of Contents
:numbered:
overview-manual-intro
overview-manual-yp-intro
overview-manual-development-environment
overview-manual-concepts

*************************************************************
Overall Architecture of the Linux Tracing and Profiling Tools
*************************************************************
Architecture of the Tracing and Profiling Tools
===============================================
It may seem surprising to see a section covering an 'overall
architecture' for what seems to be a random collection of tracing tools
that together make up the Linux tracing and profiling space. The fact
is, however, that in recent years this seemingly disparate set of tools
has started to converge on a 'core' set of underlying mechanisms:
- static tracepoints
- dynamic tracepoints
- kprobes
- uprobes
- the perf_events subsystem
- debugfs
.. container:: informalexample
Tying it Together:
Rather than enumerating here how each tool makes use of these common
mechanisms, textboxes like this will make note of the specific usages
in each tool as they come up in the course of the text.

*******************
Real-World Examples
*******************
This chapter contains real-world examples.
Slow Write Speed on Live Images
===============================
In one of our previous releases (denzil), users noticed that booting off
of a live image and writing to disk was noticeably slower. This included
the boot itself, especially the first one, since first boots tend to do
a significant amount of writing due to certain post-install scripts.
The problem (and solution) was discovered by using the Yocto tracing
tools, in this case 'perf stat', 'perf script', 'perf record' and 'perf
report'.
See all the unvarnished details of how this bug was diagnosed and solved
here: Yocto Bug #3049
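As a quick illustration of the kind of commands involved in such an investigation (run on the target; standard perf usage)::

   # Record system-wide samples for 10 seconds, then inspect them
   $ perf record -a sleep 10
   $ perf report
   # Count hardware/software events for a single command
   $ perf stat ls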

******************************************
Yocto Project Profiling and Tracing Manual
******************************************
.. _profile-intro:
Introduction
============
Yocto bundles a number of tracing and profiling tools - this 'HOWTO'
describes their basic usage and shows by example how to make use of them
to examine application and system behavior.
The tools presented are for the most part completely open-ended and have
quite good and/or extensive documentation of their own which can be used
to solve just about any problem you might come across in Linux. Each
section that describes a particular tool has links to that tool's
documentation and website.
The purpose of this 'HOWTO' is to present a set of common and generally
useful tracing and profiling idioms along with their application (as
appropriate) to each tool, in the context of a general-purpose
'drill-down' methodology that can be applied to solving a large number
(90%?) of problems. For help with more advanced usages and problems,
please see the documentation and/or websites listed for each tool.
The final section of this 'HOWTO' is a collection of real-world examples
which we'll be continually adding to as we solve more problems using the
tools - feel free to add your own examples to the list!
.. _profile-manual-general-setup:
General Setup
=============
Most of the tools are available only in 'sdk' images or in images built
after adding 'tools-profile' to your local.conf. So, in order to be able
to access all of the tools described here, please first build and boot
an 'sdk' image, e.g.::

   $ bitbake core-image-sato-sdk

or alternatively, add 'tools-profile' to the EXTRA_IMAGE_FEATURES line
in your local.conf::

   EXTRA_IMAGE_FEATURES = "debug-tweaks tools-profile"

If you use the 'tools-profile' method, you don't need to build an sdk
image - the tracing and profiling tools will be included in non-sdk
images as well, e.g.::

   $ bitbake core-image-sato
.. note::

   By default, the Yocto build system strips symbols from the binaries
   it packages, which makes it difficult to use some of the tools.

   You can prevent that by setting the
   ```INHIBIT_PACKAGE_STRIP`` <&YOCTO_DOCS_REF_URL;#var-INHIBIT_PACKAGE_STRIP>`__
   variable to "1" in your ``local.conf`` when you build the image::

      INHIBIT_PACKAGE_STRIP = "1"

   The above setting will noticeably increase the size of your image.
If you've already built a stripped image, you can generate debug
packages (xxx-dbg) which you can manually install as needed.

To generate debug info for packages, you can add dbg-pkgs to
EXTRA_IMAGE_FEATURES in local.conf. For example: ::

   EXTRA_IMAGE_FEATURES = "debug-tweaks tools-profile dbg-pkgs"

Additionally, in order to generate the right type of debuginfo, we also
need to set
```PACKAGE_DEBUG_SPLIT_STYLE`` <&YOCTO_DOCS_REF_URL;#var-PACKAGE_DEBUG_SPLIT_STYLE>`__
in the ``local.conf`` file: ::

   PACKAGE_DEBUG_SPLIT_STYLE = 'debug-file-directory'
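Taken together, the settings discussed in this section can live in one
``local.conf`` fragment; the following is only a consolidated sketch of
the options already described above:

```
# Include profiling tools and debug packages in the image
EXTRA_IMAGE_FEATURES = "debug-tweaks tools-profile dbg-pkgs"

# Keep symbols in the packaged binaries (noticeably increases image size)
INHIBIT_PACKAGE_STRIP = "1"

# Generate debuginfo in the layout the tools expect
PACKAGE_DEBUG_SPLIT_STYLE = 'debug-file-directory'
```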

==========================================
Yocto Project Profiling and Tracing Manual
==========================================
.. toctree::
   :caption: Table of Contents
   :numbered:

   profile-manual-intro
   profile-manual-arch
   profile-manual-usage
   profile-manual-examples

***
FAQ
***
**Q:** How does Poky differ from `OpenEmbedded <&OE_HOME_URL;>`__?
**A:** The term "`Poky <#>`__" refers to the specific reference build
system that the Yocto Project provides. Poky is based on
`OE-Core <#oe-core>`__ and `BitBake <#bitbake-term>`__. Thus, the
generic term used here for the build system is the "OpenEmbedded build
system." Development in the Yocto Project using Poky is closely tied to
OpenEmbedded, with changes always being merged to OE-Core or BitBake
first before being pulled back into Poky. This practice benefits both
projects immediately.
**Q:** My development system does not meet the required Git, tar, and
Python versions. In particular, I do not have Python 3.5.0 or greater.
Can I still use the Yocto Project?
**A:** You can get the required tools on your host development system a
couple different ways (i.e. building a tarball or downloading a
tarball). See the "`Required Git, tar, Python and gcc
Versions <#required-git-tar-python-and-gcc-versions>`__" section for
steps on how to update your build tools.
**Q:** How can you claim Poky / OpenEmbedded-Core is stable?
**A:** There are three areas that help with stability:

-  The Yocto Project team keeps `OE-Core <#oe-core>`__ small and
   focused, containing around 830 recipes as opposed to the thousands
   available in other OpenEmbedded community layers. Keeping it small
   makes it easy to test and maintain.

-  The Yocto Project team runs manual and automated tests using a small,
   fixed set of reference hardware as well as emulated targets.

-  The Yocto Project uses an autobuilder, which provides continuous
   build and integration tests.
**Q:** How do I get support for my board added to the Yocto Project?
**A:** Support for an additional board is added by creating a Board
Support Package (BSP) layer for it. For more information on how to
create a BSP layer, see the "`Understanding and Creating
Layers <&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers>`__"
section in the Yocto Project Development Tasks Manual and the `Yocto
Project Board Support Package (BSP) Developer's
Guide <&YOCTO_DOCS_BSP_URL;>`__.
Usually, if the board is not completely exotic, adding support in the
Yocto Project is fairly straightforward.
**Q:** Are there any products built using the OpenEmbedded build system?
**A:** The software running on the `Vernier
LabQuest <http://vernier.com/labquest/>`__ is built using the
OpenEmbedded build system. See the `Vernier
LabQuest <http://www.vernier.com/products/interfaces/labq/>`__ website
for more information. There are a number of pre-production devices using
the OpenEmbedded build system and the Yocto Project team announces them
as soon as they are released.
**Q:** What does the OpenEmbedded build system produce as output?
**A:** Because you can use the same set of recipes to create output of
various formats, the output of an OpenEmbedded build depends on how you
start it. Usually, the output is a flashable image ready for the target
device.
**Q:** How do I add my package to the Yocto Project?
**A:** To add a package, you need to create a BitBake recipe. For
information on how to create a BitBake recipe, see the "`Writing a New
Recipe <&YOCTO_DOCS_DEV_URL;#new-recipe-writing-a-new-recipe>`__"
section in the Yocto Project Development Tasks Manual.
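As a rough illustration of what such a recipe looks like, here is a
minimal, hypothetical ``hello_1.0.bb`` (the file name, the single
``hello.c`` source file, and the license checksum layout are
illustrative assumptions, not taken from this manual):

```
SUMMARY = "Minimal example application"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

# Source file shipped alongside the recipe in a files/ directory
SRC_URI = "file://hello.c"
S = "${WORKDIR}"

do_compile() {
    ${CC} ${CFLAGS} ${LDFLAGS} hello.c -o hello
}

do_install() {
    install -d ${D}${bindir}
    install -m 0755 hello ${D}${bindir}/hello
}
```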
**Q:** Do I have to reflash my entire board with a new Yocto Project
image when recompiling a package?
**A:** The OpenEmbedded build system can build packages in various
formats such as IPK for OPKG, Debian package (``.deb``), or RPM. You can
then upgrade the packages using the package tools on the device, much
like on a desktop distribution such as Ubuntu or Fedora. However,
package management on the target is entirely optional.
**Q:** I see the error
'``chmod: XXXXX new permissions are r-xrwxrwx, not r-xr-xr-x``'. What is
wrong?
**A:** You are probably running the build on an NTFS filesystem. Use
``ext2``, ``ext3``, or ``ext4`` instead.
**Q:** I see lots of 404 responses for files when the OpenEmbedded build
system is trying to download sources. Is something wrong?
**A:** Nothing is wrong. The OpenEmbedded build system checks any
configured source mirrors before downloading from the upstream sources.
The build system does this searching for both source archives and
pre-checked out versions of SCM-managed software. These checks help in
large installations because they can reduce load on the SCM servers
themselves. The address above is one of the default mirrors configured
into the build system. Consequently, if an upstream source disappears,
the team can place sources there so builds continue to work.
**Q:** I have machine-specific data in a package for one machine only
but the package is being marked as machine-specific in all cases, how do
I prevent this?
**A:** Set ``SRC_URI_OVERRIDES_PACKAGE_ARCH`` = "0" in the ``.bb`` file
but make sure the package is manually marked as machine-specific for the
case that needs it. The code that handles
``SRC_URI_OVERRIDES_PACKAGE_ARCH`` is in the
``meta/classes/base.bbclass`` file.
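A sketch of how those two pieces might fit together in a recipe; the
machine name ``mymachine`` is a placeholder, not an example from this
manual:

```
# Do not let a machine-specific file in SRC_URI force the package
# architecture for every machine.
SRC_URI_OVERRIDES_PACKAGE_ARCH = "0"

# ...but manually mark the one machine that really needs it.
PACKAGE_ARCH_mymachine = "${MACHINE_ARCH}"
```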
**Q:** I'm behind a firewall and need to use a proxy server. How do I do
that?
**A:** Most source fetching by the OpenEmbedded build system is done by
``wget`` and you therefore need to specify the proxy settings in a
``.wgetrc`` file, which can be in your home directory if you are a
single user or can be in ``/usr/local/etc/wgetrc`` as a global user
file.
Following is the applicable code for setting various proxy types in the
``.wgetrc`` file. By default, these settings are disabled with comments.
To use them, remove the comments: ::

   # You can set the default proxies for Wget to use for http, https, and ftp.
   # They will override the value in the environment.
   #https_proxy = http://proxy.yoyodyne.com:18023/
   #http_proxy = http://proxy.yoyodyne.com:18023/
   #ftp_proxy = http://proxy.yoyodyne.com:18023/

   # If you do not want to use proxy at all, set this to off.
   #use_proxy = on

The Yocto Project also includes a ``meta-poky/conf/site.conf.sample``
file that shows how to configure CVS and Git proxy servers if needed.
For more information on setting up various proxy types and configuring
proxy servers, see the "`Working Behind a Network
Proxy <&YOCTO_WIKI_URL;/wiki/Working_Behind_a_Network_Proxy>`__" Wiki
page.
**Q:** What's the difference between target and target\ ``-native``?
**A:** The ``*-native`` targets are designed to run on the system being
used for the build. These are usually tools that are needed to assist
the build in some way such as ``quilt-native``, which is used to apply
patches. The non-native version is the one that runs on the target
device.
**Q:** I'm seeing random build failures. Help?!
**A:** If the same build is failing in totally different and random
ways, the most likely explanation is:
-  The hardware you are running the build on has some problem.

-  You are running the build under virtualization, in which case the
   virtualization probably has bugs.
The OpenEmbedded build system processes a massive amount of data that
causes lots of network, disk and CPU activity and is sensitive to even
single-bit failures in any of these areas. True random failures have
always been traced back to hardware or virtualization issues.
**Q:** When I try to build a native recipe, the build fails with
``iconv.h`` problems.
**A:** If you get an error message that indicates GNU ``libiconv`` is
not in use but ``iconv.h`` has been included from ``libiconv``, you need
to check to see if you have a previously installed version of the header
file in ``/usr/local/include``: ::

   #error GNU libiconv not in use but included iconv.h is from libiconv

If you find a previously installed file, you should either uninstall it
or temporarily rename it and try the build again.
This issue is just a single manifestation of "system leakage" issues
caused when the OpenEmbedded build system finds and uses previously
installed files during a native build. This type of issue might not be
limited to ``iconv.h``. Be sure that leakage cannot occur from
``/usr/local/include`` and ``/opt`` locations.
**Q:** What do we need to ship for license compliance?
**A:** This is a difficult question and you need to consult your lawyer
for the answer for your specific case. It is worth bearing in mind that
for GPL compliance, there needs to be enough information shipped to
allow someone else to rebuild and produce the same end result you are
shipping. This means sharing the source code, any patches applied to it,
and also any configuration information about how that package was
configured and built.
You can find more information on licensing in the
"`Licensing <&YOCTO_DOCS_OM_URL;#licensing>`__" section in the Yocto
Project Overview and Concepts Manual and also in the "`Maintaining Open
Source License Compliance During Your Product's
Lifecycle <&YOCTO_DOCS_DEV_URL;#maintaining-open-source-license-compliance-during-your-products-lifecycle>`__"
section in the Yocto Project Development Tasks Manual.
**Q:** How do I disable the cursor on my touchscreen device?
**A:** You need to create a form factor file as described in the
"`Miscellaneous BSP-Specific Recipe
Files <&YOCTO_DOCS_BSP_URL;#bsp-filelayout-misc-recipes>`__" section in
the Yocto Project Board Support Packages (BSP) Developer's Guide. Set
the ``HAVE_TOUCHSCREEN`` variable equal to one as follows: ::

   HAVE_TOUCHSCREEN=1
**Q:** How do I make sure connected network interfaces are brought up by
default?
**A:** The default interfaces file provided by the netbase recipe does
not automatically bring up network interfaces. Therefore, you will need
to add a BSP-specific netbase that includes an interfaces file. See the
"`Miscellaneous BSP-Specific Recipe
Files <&YOCTO_DOCS_BSP_URL;#bsp-filelayout-misc-recipes>`__" section in
the Yocto Project Board Support Packages (BSP) Developer's Guide for
information on creating these types of miscellaneous recipe files.
For example, add the following files to your layer: ::

   meta-MACHINE/recipes-bsp/netbase/netbase/MACHINE/interfaces
   meta-MACHINE/recipes-bsp/netbase/netbase_5.0.bbappend
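For reference, a minimal ``interfaces`` file that brings up the loopback
device and a DHCP-configured ``eth0`` might look like the following;
this is purely illustrative and must be adapted to your hardware:

```
# /etc/network/interfaces sketch: loopback plus DHCP on eth0
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
```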
**Q:** How do I create images with more free space?
**A:** By default, the OpenEmbedded build system creates images that are
1.3 times the size of the populated root filesystem. To affect the image
size, you need to set various configurations:
-  *Image Size:* The OpenEmbedded build system uses the
   ```IMAGE_ROOTFS_SIZE`` <#var-IMAGE_ROOTFS_SIZE>`__ variable to define
   the size of the image in Kbytes. The build system determines the size
   by taking into account the initial root filesystem size before any
   modifications such as requested size for the image and any requested
   additional free disk space to be added to the image.

-  *Overhead:* Use the
   ```IMAGE_OVERHEAD_FACTOR`` <#var-IMAGE_OVERHEAD_FACTOR>`__ variable
   to define the multiplier that the build system applies to the initial
   image size, which is 1.3 by default.

-  *Additional Free Space:* Use the
   ```IMAGE_ROOTFS_EXTRA_SPACE`` <#var-IMAGE_ROOTFS_EXTRA_SPACE>`__
   variable to add additional free space to the image. The build system
   adds this space to the image after it determines its
   ``IMAGE_ROOTFS_SIZE``.
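For example, a ``local.conf`` fragment that raises the overhead
multiplier and reserves an extra 512 MB of free space might look like
this (the specific values are illustrative, not recommendations):

```
# Multiply the initial root filesystem size by 1.5 instead of the
# default 1.3
IMAGE_OVERHEAD_FACTOR = "1.5"

# Add 512 MB of extra free space on top (the value is in Kbytes)
IMAGE_ROOTFS_EXTRA_SPACE = "524288"
```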
**Q:** Why don't you support directories with spaces in the pathnames?
**A:** The Yocto Project team has tried to do this before but too many
of the tools the OpenEmbedded build system depends on, such as
``autoconf``, break when they find spaces in pathnames. Until that
situation changes, the team will not support spaces in pathnames.
**Q:** How do I use an external toolchain?
**A:** The toolchain configuration is very flexible and customizable. It
is primarily controlled with the ``TCMODE`` variable. This variable
controls which ``tcmode-*.inc`` file to include from the
``meta/conf/distro/include`` directory within the `Source
Directory <#source-directory>`__.
The default value of ``TCMODE`` is "default", which tells the
OpenEmbedded build system to use its internally built toolchain (i.e.
``tcmode-default.inc``). However, other patterns are accepted. In
particular, "external-*" refers to external toolchains. One example is
the Sourcery G++ Toolchain. The support for this toolchain resides in
the separate ``meta-sourcery`` layer at
http://github.com/MentorEmbedded/meta-sourcery/.
In addition to the toolchain configuration, you also need a
corresponding toolchain recipe file. This recipe file needs to package
up any pre-built objects in the toolchain such as ``libgcc``,
``libstdc++``, any locales, and ``libc``.
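A hypothetical ``local.conf`` fragment selecting an external toolchain
might look like the following; the exact ``TCMODE`` value and any
additional variables depend on the toolchain layer you use, and
``external-sourcery`` here is only an assumed example matching the
"external-*" pattern described above:

```
# Selects meta/conf/distro/include/tcmode-external-sourcery.inc
# (provided by the meta-sourcery layer) instead of the internal
# toolchain configuration.
TCMODE = "external-sourcery"
```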
**Q:** How does the OpenEmbedded build system obtain source code and
will it work behind my firewall or proxy server?
**A:** The way the build system obtains source code is highly
configurable. You can set up the build system to get source code in most
environments if HTTP transport is available.
When the build system searches for source code, it first tries the local
download directory. If that location fails, Poky tries
```PREMIRRORS`` <#var-PREMIRRORS>`__, the upstream source, and then
```MIRRORS`` <#var-MIRRORS>`__ in that order.
Assuming your distribution is "poky", the OpenEmbedded build system uses
the Yocto Project source ``PREMIRRORS`` by default for SCM-based
sources, upstreams for normal tarballs, and then falls back to a number
of other mirrors including the Yocto Project source mirror if those
fail.
As an example, you could add a specific server for the build system to
attempt before any others by adding something like the following to the
``local.conf`` configuration file: ::

   PREMIRRORS_prepend = "\
        git://.*/.*   http://www.yoctoproject.org/sources/ \n \
        ftp://.*/.*   http://www.yoctoproject.org/sources/ \n \
        http://.*/.*  http://www.yoctoproject.org/sources/ \n \
        https://.*/.* http://www.yoctoproject.org/sources/ \n"
These changes cause the build system to intercept Git, FTP, HTTP, and
HTTPS requests and direct them to the ``http://`` sources mirror. You
can use ``file://`` URLs to point to local directories or network shares
as well.
Aside from the previous technique, these options also exist: ::

   BB_NO_NETWORK = "1"

This statement tells BitBake to issue an error instead of trying to
access the Internet. This technique is useful if you want to ensure code
builds only from local sources.

Here is another technique: ::

   BB_FETCH_PREMIRRORONLY = "1"

This statement limits the build system to pulling source from the
``PREMIRRORS`` only. Again, this technique is useful for reproducing
builds.

Here is another technique: ::

   BB_GENERATE_MIRROR_TARBALLS = "1"

This statement tells the build system to generate mirror tarballs. This
technique is useful if you want to create a mirror server. If not,
however, the technique can simply waste time during the build.
Finally, consider an example where you are behind an HTTP-only firewall.
You could make the following changes to the ``local.conf`` configuration
file as long as the ``PREMIRRORS`` server is current: ::

   PREMIRRORS_prepend = "\
        ftp://.*/.*   http://www.yoctoproject.org/sources/ \n \
        http://.*/.*  http://www.yoctoproject.org/sources/ \n \
        https://.*/.* http://www.yoctoproject.org/sources/ \n"
   BB_FETCH_PREMIRRORONLY = "1"

These changes would cause the build system to successfully fetch source
over HTTP and any network accesses to anything other than the
``PREMIRRORS`` would fail.
The build system also honors the standard shell environment variables
``http_proxy``, ``ftp_proxy``, ``https_proxy``, and ``all_proxy`` to
redirect requests through proxy servers.
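As a quick sketch, exporting these variables in the shell that launches
the build routes fetcher traffic through a proxy; the host and port
below are placeholders, not a real server:

```shell
# Point the standard proxy variables at a proxy server for this shell
# session (proxy.example.com:8080 is a placeholder).
export http_proxy="http://proxy.example.com:8080"
export https_proxy="http://proxy.example.com:8080"
export ftp_proxy="http://proxy.example.com:8080"
export all_proxy="http://proxy.example.com:8080"
```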
.. note::

   You can find more information on the "Working Behind a Network
   Proxy" Wiki page.
**Q:** Can I get rid of build output so I can start over?
**A:** Yes - you can easily do this. When you use BitBake to build an
image, all the build output goes into the directory created when you run
the build environment setup script (i.e.
`oe-init-build-env <#structure-core-script>`__). By default, this `Build
Directory <#build-directory>`__ is named ``build`` but can be named
anything you want.
Within the Build Directory is the ``tmp`` directory. To remove all the
build output yet preserve any source code or downloaded files from
previous builds, simply remove the ``tmp`` directory.
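A minimal sketch of that cleanup, using a throwaway directory layout in
place of a real Build Directory: only ``tmp`` is removed, while the
download and shared-state directories from previous builds survive.

```shell
# Illustrative Build Directory layout; a real one is created by the
# environment setup script.
mkdir -p build/tmp/work build/downloads build/sstate-cache

# Remove all build output, keeping downloads/ and sstate-cache/.
rm -rf build/tmp
```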
**Q:** Why do ``${bindir}`` and ``${libdir}`` have strange values for
``-native`` recipes?
**A:** Executables and libraries might need to be used from a directory
other than the directory into which they were initially installed.
Complicating this situation is the fact that sometimes these executables
and libraries are compiled with the expectation of being run from that
initial installation target directory. If this is the case, moving them
causes problems.
This scenario is a fundamental problem for package maintainers of
mainstream Linux distributions as well as for the OpenEmbedded build
system. As such, a well-established solution exists. Makefiles,
Autotools configuration scripts, and other build systems are expected to
respect environment variables such as ``bindir``, ``libdir``, and
``sysconfdir`` that indicate where executables, libraries, and data
reside when a program is actually run. They are also expected to respect
a ``DESTDIR`` environment variable, which is prepended to all the other
variables when the build system actually installs the files. It is
understood that the program does not actually run from within
``DESTDIR``.
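The mechanism can be sketched with plain shell: the program is staged
under ``DESTDIR`` at install time while ``bindir`` still records the
runtime location. The paths and program below are scratch examples, not
anything this manual defines:

```shell
# bindir is where the program will live at runtime; DESTDIR is
# prepended only while installing into a staging area.
bindir=/usr/bin
DESTDIR="$PWD/stage"

# A trivial stand-in for a compiled program.
printf '#!/bin/sh\necho hello\n' > myprog

# Staged install: the file lands in stage/usr/bin, not /usr/bin.
install -D -m 0755 myprog "$DESTDIR$bindir/myprog"
```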
When the OpenEmbedded build system uses a recipe to build a
target-architecture program (i.e. one that is intended for inclusion on
the image being built), that program eventually runs from the root file
system of that image. Thus, the build system provides a value of
"/usr/bin" for ``bindir``, a value of "/usr/lib" for ``libdir``, and so
forth.
Meanwhile, ``DESTDIR`` is a path within the `Build
Directory <#build-directory>`__. However, when the recipe builds a
native program (i.e. one that is intended to run on the build machine),
that program is never installed directly to the build machine's root
file system. Consequently, the build system uses paths within the Build
Directory for ``DESTDIR``, ``bindir`` and related variables. To better
understand this, consider the following two paths where the first is
relatively normal and the second is not:
.. note::

   Due to these lengthy examples, the paths are artificially broken
   across lines for readability.

::

   /home/maxtothemax/poky-bootchart2/build/tmp/work/i586-poky-linux/zlib/
      1.2.8-r0/sysroot-destdir/usr/bin

   /home/maxtothemax/poky-bootchart2/build/tmp/work/x86_64-linux/
      zlib-native/1.2.8-r0/sysroot-destdir/home/maxtothemax/poky-bootchart2/
      build/tmp/sysroots/x86_64-linux/usr/bin

Even if the paths look unusual, they both are correct - the first for a
target and the second for a native recipe. These paths are a consequence
of the ``DESTDIR`` mechanism and while they appear strange, they are
correct and in practice very effective.
**Q:** The files provided by my ``*-native`` recipe do not appear to be
available to other recipes. Files are missing from the native sysroot,
my recipe is installing to the wrong place, or I am getting permissions
errors during the do_install task in my recipe! What is wrong?
**A:** This situation results when a build system does not recognize the
environment variables supplied to it by `BitBake <#bitbake-term>`__. The
incident that prompted this FAQ entry involved a Makefile that used an
environment variable named ``BINDIR`` instead of the more standard
variable ``bindir``. The makefile's hardcoded default value of
"/usr/bin" worked most of the time, but not for the recipe's ``-native``
variant. For another example, permissions errors might be caused by a
Makefile that ignores ``DESTDIR`` or uses a different name for that
environment variable. Check the build system to see if these kinds
of issues exist.

***************************
``devtool`` Quick Reference
***************************
The ``devtool`` command-line tool provides a number of features that
help you build, test, and package software. This command is available
alongside the ``bitbake`` command. Additionally, the ``devtool`` command
is a key part of the extensible SDK.
This chapter provides a Quick Reference for the ``devtool`` command. For
more information on how to apply the command when using the extensible
SDK, see the "`Using the Extensible
SDK <&YOCTO_DOCS_SDK_URL;#sdk-extensible>`__" chapter in the Yocto
Project Application Development and the Extensible Software Development
Kit (eSDK) manual.
.. _devtool-getting-help:

Getting Help
============
The ``devtool`` command line is organized similarly to Git in that it
has a number of sub-commands for each function. You can run
``devtool --help`` to see all the commands: ::

   $ devtool -h
   NOTE: Starting bitbake server...
   usage: devtool [--basepath BASEPATH] [--bbpath BBPATH] [-d] [-q]
                  [--color COLOR] [-h]
                  <subcommand> ...

   OpenEmbedded development tool

   options:
     --basepath BASEPATH   Base directory of SDK / build directory
     --bbpath BBPATH       Explicitly specify the BBPATH, rather than
                           getting it from the metadata
     -d, --debug           Enable debug output
     -q, --quiet           Print only errors
     --color COLOR         Colorize output (where COLOR is auto, always, never)
     -h, --help            show this help message and exit

   subcommands:
     Beginning work on a recipe:
       add                   Add a new recipe
       modify                Modify the source for an existing recipe
       upgrade               Upgrade an existing recipe
     Getting information:
       status                Show workspace status
       search                Search available recipes
       latest-version        Report the latest version of an existing recipe
       check-upgrade-status  Report upgradability for multiple (or all) recipes
     Working on a recipe in the workspace:
       build                 Build a recipe
       rename                Rename a recipe file in the workspace
       edit-recipe           Edit a recipe file
       find-recipe           Find a recipe file
       configure-help        Get help on configure script options
       update-recipe         Apply changes from external source tree to recipe
       reset                 Remove a recipe from your workspace
       finish                Finish working on a recipe in your workspace
     Testing changes on target:
       deploy-target         Deploy recipe output files to live target machine
       undeploy-target       Undeploy recipe output files in live target machine
       build-image           Build image including workspace recipe packages
     Advanced:
       create-workspace      Set up workspace in an alternative location
       export                Export workspace into a tar archive
       import                Import exported tar archive into workspace
       extract               Extract the source for an existing recipe
       sync                  Synchronize the source tree for an existing recipe
   Use devtool <subcommand> --help to get help on a specific command

As directed in the general help output, you can get more syntax on a
specific command by providing the command name and using "--help": ::

   $ devtool add --help
   NOTE: Starting bitbake server...
   usage: devtool add [-h] [--same-dir | --no-same-dir] [--fetch URI]
                      [--fetch-dev] [--version VERSION] [--no-git]
                      [--srcrev SRCREV | --autorev] [--srcbranch SRCBRANCH]
                      [--binary] [--also-native] [--src-subdir SUBDIR]
                      [--mirrors] [--provides PROVIDES]
                      [recipename] [srctree] [fetchuri]

   Adds a new recipe to the workspace to build a specified source tree.
   Can optionally fetch a remote URI and unpack it to create the source
   tree.

   arguments:
     recipename            Name for new recipe to add (just name - no
                           version, path or extension). If not specified,
                           will attempt to auto-detect it.
     srctree               Path to external source tree. If not specified,
                           a subdirectory of
                           /home/scottrif/poky/build/workspace/sources will
                           be used.
     fetchuri              Fetch the specified URI and extract it to create
                           the source tree

   options:
     -h, --help            show this help message and exit
     --same-dir, -s        Build in same directory as source
     --no-same-dir         Force build in a separate build directory
     --fetch URI, -f URI   Fetch the specified URI and extract it to create
                           the source tree (deprecated - pass as positional
                           argument instead)
     --fetch-dev           For npm, also fetch devDependencies
     --version VERSION, -V VERSION
                           Version to use within recipe (PV)
     --no-git, -g          If fetching source, do not set up source tree as
                           a git repository
     --srcrev SRCREV, -S SRCREV
                           Source revision to fetch if fetching from an SCM
                           such as git (default latest)
     --autorev, -a         When fetching from a git repository, set SRCREV
                           in the recipe to a floating revision instead of
                           fixed
     --srcbranch SRCBRANCH, -B SRCBRANCH
                           Branch in source repository if fetching from an
                           SCM such as git (default master)
     --binary, -b          Treat the source tree as something that should
                           be installed verbatim (no compilation, same
                           directory structure). Useful with binary
                           packages e.g. RPMs.
     --also-native         Also add native variant (i.e. support building
                           recipe for the build host as well as the target
                           machine)
     --src-subdir SUBDIR   Specify subdirectory within source tree to use
     --mirrors             Enable PREMIRRORS and MIRRORS for source tree
                           fetching (disable by default).
     --provides PROVIDES, -p PROVIDES
                           Specify an alias for the item provided by the
                           recipe. E.g. virtual/libgl
.. _devtool-the-workspace-layer-structure:

The Workspace Layer Structure
=============================
``devtool`` uses a "Workspace" layer in which to accomplish builds. This
layer is not specific to any single ``devtool`` command but is rather a
common working area used across the tool.
The following figure shows the workspace structure: ::

   attic - A directory created if devtool believes it must preserve
      anything when you run "devtool reset". For example, if you run
      "devtool add", make changes to the recipe, and then run "devtool
      reset", devtool takes notice that the file has been changed and
      moves it into the attic should you still want the recipe.

   README - Provides information on what is in workspace layer and how
      to manage it.

   .devtool_md5 - A checksum file used by devtool.

   appends - A directory that contains *.bbappend files, which point to
      external source.

   conf - A configuration directory that contains the layer.conf file.

   recipes - A directory containing recipes. This directory contains a
      folder for each directory added whose name matches that of the
      added recipe. devtool places the recipe.bb file within that
      sub-directory.

   sources - A directory containing a working copy of the source files
      used when building the recipe. This is the default directory used
      as the location of the source tree when you do not provide a
      source tree path. This directory contains a folder for each set
      of source files matched to a corresponding recipe.
.. _devtool-adding-a-new-recipe-to-the-workspace:

Adding a New Recipe to the Workspace Layer
==========================================
Use the ``devtool add`` command to add a new recipe to the workspace
layer. The recipe you add should not exist - ``devtool`` creates it for
you. The source files the recipe uses should exist in an external area.
The following example creates and adds a new recipe named ``jackson`` to
a workspace layer the tool creates. The source code built by the recipe
resides in ``/home/user/sources/jackson``: ::

   $ devtool add jackson /home/user/sources/jackson
If you add a recipe and the workspace layer does not exist, the command
creates the layer and populates it as described in the "`The Workspace
Layer Structure <#devtool-the-workspace-layer-structure>`__" section.
Running ``devtool add`` when the workspace layer exists causes the tool
to add the recipe, append files, and source files into the existing
workspace layer. The ``.bbappend`` file is created to point to the
external source tree.
.. note::

   If your recipe has runtime dependencies defined, you must be sure
   that these packages exist on the target hardware before attempting to
   run your application. If dependent packages (e.g. libraries) do not
   exist on the target, your application, when run, will fail to find
   those functions. For more information, see the "Deploying Your
   Software on the Target Machine" section.
By default, ``devtool add`` uses the latest revision (i.e. master) when
unpacking files from a remote URI. In some cases, you might want to
specify a source revision by branch, tag, or commit hash. You can
specify these options when using the ``devtool add`` command:
-  To specify a source branch, use the ``--srcbranch`` option: ::

      $ devtool add --srcbranch DISTRO_NAME_NO_CAP jackson /home/user/sources/jackson

   In the previous example, you are checking out the DISTRO_NAME_NO_CAP
   branch.

-  To specify a specific tag or commit hash, use the ``--srcrev``
   option: ::

      $ devtool add --srcrev DISTRO_REL_TAG jackson /home/user/sources/jackson
      $ devtool add --srcrev some_commit_hash /home/user/sources/jackson

   The previous examples check out the DISTRO_REL_TAG tag and the commit
   associated with the some_commit_hash hash.
.. note::

   If you prefer to use the latest revision every time the recipe is
   built, use the ``--autorev`` or ``-a`` options.
.. _devtool-extracting-the-source-for-an-existing-recipe:

Extracting the Source for an Existing Recipe
============================================
Use the ``devtool extract`` command to extract the source for an
existing recipe. When you use this command, you must supply the root
name of the recipe (i.e. no version, paths, or extensions), and you must
supply the directory to which you want the source extracted.
Additional command options let you control the name of a development
branch into which you can checkout the source and whether or not to keep
a temporary directory, which is useful for debugging.
.. _devtool-synchronizing-a-recipes-extracted-source-tree:

Synchronizing a Recipe's Extracted Source Tree
==============================================
Use the ``devtool sync`` command to synchronize a previously extracted
source tree for an existing recipe. When you use this command, you must
supply the root name of the recipe (i.e. no version, paths, or
extensions), and you must supply the directory to which you want the
source extracted.
Additional command options let you control the name of a development
branch into which you can checkout the source and whether or not to keep
a temporary directory, which is useful for debugging.
.. _devtool-modifying-a-recipe:

Modifying an Existing Recipe
============================
Use the ``devtool modify`` command to begin modifying the source of an
existing recipe. This command is very similar to the
```add`` <#devtool-adding-a-new-recipe-to-the-workspace>`__ command
except that it does not physically create the recipe in the workspace
layer because the recipe already exists in another layer.
The ``devtool modify`` command extracts the source for a recipe, sets it
up as a Git repository if the source had not already been fetched from
Git, checks out a branch for development, and applies any patches from
the recipe as commits on top. You can use the following command to
check out the source files: ::

   $ devtool modify recipe

Using the above command form, ``devtool`` uses the existing recipe's
```SRC_URI`` <#var-SRC_URI>`__ statement to locate the upstream source
and extracts the source into the default sources location in the
workspace. The default development branch used is "devtool".
.. _devtool-edit-an-existing-recipe:

Edit an Existing Recipe
=======================
Use the ``devtool edit-recipe`` command to run the default editor, which
is identified using the ``EDITOR`` variable, on the specified recipe.

When you use the ``devtool edit-recipe`` command, you must supply the
root name of the recipe (i.e. no version, paths, or extensions). Also,
the recipe file itself must reside in the workspace as a result of the
``devtool add`` or ``devtool upgrade`` commands. However, you can
override that requirement by using the "-a" or "--any-recipe" option.
Using either of these options allows you to edit any recipe regardless
of its location.
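
For example (the recipe name here is hypothetical), to open the
workspace copy of the ``mtr`` recipe in the editor named by your
``EDITOR`` variable:

::

   $ devtool edit-recipe mtr
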

.. _devtool-updating-a-recipe:

Updating a Recipe
=================

Use the ``devtool update-recipe`` command to update your recipe with
patches that reflect changes you make to the source files. For example,
if you know you are going to work on some code, you could first use the
```devtool modify`` <#devtool-modifying-a-recipe>`__ command to extract
the code and set up the workspace. After that, you could modify,
compile, and test the code.

When you are satisfied with the results and you have committed your
changes to the Git repository, you can then run the
``devtool update-recipe`` command to create the patches and update the
recipe:

::

   $ devtool update-recipe recipe

If you run ``devtool update-recipe`` without committing your changes,
the command ignores the changes.

Often, you might want to apply customizations made to your software in
your own layer rather than apply them to the original recipe. If so, you
can use the ``-a`` or ``--append`` option with the
``devtool update-recipe`` command. These options allow you to specify
the layer into which to write an append file:

::

   $ devtool update-recipe recipe -a base-layer-directory

The ``*.bbappend`` file is created at the appropriate path within the
specified layer directory, which may or may not be in your
``bblayers.conf`` file. If an append file already exists, the command
updates it appropriately.

.. _devtool-checking-on-the-upgrade-status-of-a-recipe:

Checking on the Upgrade Status of a Recipe
==========================================

Upstream recipes change over time. Consequently, you might find that you
need to determine if you can upgrade a recipe to a newer version.

To check on the upgrade status of a recipe, use the
``devtool check-upgrade-status`` command. The command displays a table
of your current recipe versions, the latest upstream versions, the email
address of the recipe's maintainer, and any additional information such
as commit hash strings and reasons you might not be able to upgrade a
particular recipe.
.. note::

   -  For the ``oe-core`` layer, recipe maintainers come from the
      ```maintainers.inc`` <http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/conf/distro/include/maintainers.inc>`__
      file.

   -  If the recipe is using the `Git
      fetcher <&YOCTO_DOCS_BB_URL;#git-fetcher>`__ rather than a
      tarball, the commit hash points to the commit that matches the
      recipe's latest version tag.
As with all ``devtool`` commands, you can get help on the individual
command:

::

   $ devtool check-upgrade-status -h
   NOTE: Starting bitbake server...
   usage: devtool check-upgrade-status [-h] [--all] [recipe [recipe ...]]

   Prints a table of recipes together with versions currently provided by
   recipes, and latest upstream versions, when there is a later version
   available

   arguments:
     recipe      Name of the recipe to report (omit to report upgrade info for
                 all recipes)

   options:
     -h, --help  show this help message and exit
     --all, -a   Show all recipes, not just recipes needing upgrade
Unless you provide a specific recipe name on the command line, the
command checks all recipes in all configured layers.

Following is a partial example table that reports on all the recipes.
Notice the reported reason for not upgrading the ``base-passwd`` recipe.
In this example, while a new version is available upstream, you do not
want to use it because the dependency on ``cdebconf`` is not easily
satisfied.
.. note::

   When a reason for not upgrading displays, the reason is usually
   written into the recipe using the ``RECIPE_NO_UPDATE_REASON``
   variable. See the ``base-passwd.bb`` recipe for an example.
::

   $ devtool check-upgrade-status
   ...
   NOTE: acpid             2.0.30   2.0.31   Ross Burton <ross.burton@intel.com>
   NOTE: u-boot-fw-utils   2018.11  2019.01  Marek Vasut <marek.vasut@gmail.com>  d3689267f92c5956e09cc7d1baa4700141662bff
   NOTE: u-boot-tools      2018.11  2019.01  Marek Vasut <marek.vasut@gmail.com>  d3689267f92c5956e09cc7d1baa4700141662bff
   .
   .
   .
   NOTE: base-passwd       3.5.29   3.5.45   Anuj Mittal <anuj.mittal@intel.com>  cannot be updated due to: Version 3.5.38 requires cdebconf for update-passwd utility
   NOTE: busybox           1.29.2   1.30.0   Andrej Valek <andrej.valek@siemens.com>
   NOTE: dbus-test         1.12.10  1.12.12  Chen Qi <Qi.Chen@windriver.com>

.. _devtool-upgrading-a-recipe:

Upgrading a Recipe
==================

As software matures, upstream recipes are upgraded to newer versions. As
a developer, you need to keep your local recipes up-to-date with the
upstream version releases. Several methods exist by which you can
upgrade recipes. You can read about them in the "`Upgrading
Recipes <&YOCTO_DOCS_DEV_URL;#gs-upgrading-recipes>`__" section of the
Yocto Project Development Tasks Manual. This section overviews the
``devtool upgrade`` command.
.. note::

   Before you upgrade a recipe, you can check on its upgrade status. See
   the "`Checking on the Upgrade Status of a
   Recipe <#devtool-checking-on-the-upgrade-status-of-a-recipe>`__"
   section for more information.
The ``devtool upgrade`` command upgrades an existing recipe to a more
recent version of the recipe upstream. The command puts the upgraded
recipe file along with any associated files into a "workspace" and, if
necessary, extracts the source tree to a specified location. During the
upgrade, patches associated with the recipe are rebased or added as
needed.

When you use the ``devtool upgrade`` command, you must supply the root
name of the recipe (i.e. no version, paths, or extensions), and you must
supply the directory to which you want the source extracted. Additional
command options let you control things such as the version number to
which you want to upgrade (i.e. the ```PV`` <#var-PV>`__), the source
revision to which you want to upgrade (i.e. the
```SRCREV`` <#var-SRCREV>`__), whether or not to apply patches, and so
forth.
You can read more on the ``devtool upgrade`` workflow in the "`Use
``devtool upgrade`` to Create a Version of the Recipe that Supports a
Newer Version of the
Software <&YOCTO_DOCS_SDK_URL;#sdk-devtool-use-devtool-upgrade-to-create-a-version-of-the-recipe-that-supports-a-newer-version-of-the-software>`__"
section in the Yocto Project Application Development and the Extensible
Software Development Kit (eSDK) manual. You can also see an example of
how to use ``devtool upgrade`` in the "`Using
``devtool upgrade`` <&YOCTO_DOCS_DEV_URL;#gs-using-devtool-upgrade>`__"
section in the Yocto Project Development Tasks Manual.

.. _devtool-resetting-a-recipe:

Resetting a Recipe
==================

Use the ``devtool reset`` command to remove a recipe and its
configuration (e.g. the corresponding ``.bbappend`` file) from the
workspace layer. Realize that this command deletes the recipe and the
append file. The command does not physically move them for you.
Consequently, you must be sure to physically relocate your updated
recipe and the append file outside of the workspace layer before running
the ``devtool reset`` command.

If the ``devtool reset`` command detects that the recipe or the append
files have been modified, the command preserves the modified files in a
separate "attic" subdirectory under the workspace layer.
Here is an example that resets the workspace directory that contains the
``mtr`` recipe:

::

   $ devtool reset mtr
   NOTE: Cleaning sysroot for recipe mtr...
   NOTE: Leaving source tree /home/scottrif/poky/build/workspace/sources/mtr as-is; if you no
   longer need it then please delete it manually
   $

.. _devtool-building-your-recipe:

Building Your Recipe
====================

Use the ``devtool build`` command to build your recipe. The
``devtool build`` command is equivalent to the
``bitbake -c populate_sysroot`` command.

When you use the ``devtool build`` command, you must supply the root
name of the recipe (i.e. do not provide versions, paths, or extensions).
You can use either the "-s" or the "--disable-parallel-make" option to
disable parallel makes during the build. Here is an example:

::

   $ devtool build recipe

.. _devtool-building-your-image:

Building Your Image
===================

Use the ``devtool build-image`` command to build an image, extending it
to include packages from recipes in the workspace. Using this command is
useful when you want an image that is ready for immediate deployment
onto a device for testing. For proper integration into a final image,
you need to edit your custom image recipe appropriately.

When you use the ``devtool build-image`` command, you must supply the
name of the image. This command has no command line options:

::

   $ devtool build-image image

.. _devtool-deploying-your-software-on-the-target-machine:

Deploying Your Software on the Target Machine
=============================================

Use the ``devtool deploy-target`` command to deploy the recipe's build
output to the live target machine:

::

   $ devtool deploy-target recipe target

The target is the address of the target machine, which must be running
an SSH server (i.e. ``user@hostname[:destdir]``).
This command deploys all files installed during the
```do_install`` <#ref-tasks-install>`__ task. Furthermore, you do not
need to have package management enabled within the target machine. If
you do, the package manager is bypassed.
.. note::

   The ``deploy-target`` functionality is for development only. You
   should never use it to update an image that will be used in
   production.
Some conditions exist that could prevent a deployed application from
behaving as expected. When both of the following conditions exist, your
application has the potential to not behave correctly when run on the
target:

-  You are deploying a new application to the target and the recipe you
   used to build the application had correctly defined runtime
   dependencies.

-  The target does not physically have the packages on which the
   application depends installed.

If both of these conditions exist, your application will not behave as
expected. The reason for this misbehavior is that the
``devtool deploy-target`` command does not deploy the packages (e.g.
libraries) on which your new application depends. The assumption is that
the packages are already on the target. Consequently, when a runtime
call is made in the application for a dependent function (e.g. a library
call), the function cannot be found.

To be sure you have all the dependencies local to the target, you need
to be sure that the packages are pre-deployed (installed) on the target
before attempting to run your application.

.. _devtool-removing-your-software-from-the-target-machine:

Removing Your Software from the Target Machine
==============================================

Use the ``devtool undeploy-target`` command to remove deployed build
output from the target machine. For the ``devtool undeploy-target``
command to work, you must have previously used the
```devtool deploy-target`` <#devtool-deploying-your-software-on-the-target-machine>`__
command:

::

   $ devtool undeploy-target recipe target

The target is the address of the target machine, which must be running
an SSH server (i.e. ``user@hostname``).

.. _devtool-creating-the-workspace:

Creating the Workspace Layer in an Alternative Location
=======================================================

Use the ``devtool create-workspace`` command to create a new workspace
layer in your `Build Directory <#build-directory>`__. When you create a
new workspace layer, it is populated with the ``README`` file and the
``conf`` directory only.

The following example creates a new workspace layer in your current
working directory and by default names the workspace layer "workspace":

::

   $ devtool create-workspace

You can create a workspace layer anywhere by supplying a pathname with
the command. The following command creates a new workspace layer named
"new-workspace":

::

   $ devtool create-workspace /home/scottrif/new-workspace

.. _devtool-get-the-status-of-the-recipes-in-your-workspace:

Get the Status of the Recipes in Your Workspace
===============================================

Use the ``devtool status`` command to list the recipes currently in your
workspace. Information includes the paths to their respective external
source trees.

The ``devtool status`` command has no command-line options:

::

   $ devtool status

Following is sample output after using
```devtool add`` <#devtool-adding-a-new-recipe-to-the-workspace>`__ to
create and add the ``mtr_0.86.bb`` recipe to the ``workspace``
directory:

::

   $ devtool status
   mtr: /home/scottrif/poky/build/workspace/sources/mtr (/home/scottrif/poky/build/workspace/recipes/mtr/mtr_0.86.bb)
   $

.. _devtool-search-for-available-target-recipes:

Search for Available Target Recipes
===================================

Use the ``devtool search`` command to search for available target
recipes. The command matches the recipe name, package name, description,
and installed files. The command displays the recipe name as a result of
a match.

When you use the ``devtool search`` command, you must supply a keyword.
The command uses the keyword when searching for a match.

********
Features
********

This chapter provides a reference of shipped machine and distro features
you can include as part of your image, a reference on image features you
can select, and a reference on feature backfilling.

Features provide a mechanism for working out which packages should be
included in the generated images. Distributions can select which
features they want to support through the ``DISTRO_FEATURES`` variable,
which is set or appended to in a distribution's configuration file such
as ``poky.conf``, ``poky-tiny.conf``, ``poky-lsb.conf`` and so forth.
Machine features are set in the ``MACHINE_FEATURES`` variable, which is
set in the machine configuration file and specifies the hardware
features for a given machine.

These two variables combine to work out which kernel modules, utilities,
and other packages to include. A given distribution can support a
selected subset of features so some machine features might not be
included if the distribution itself does not support them.
One method you can use to determine which recipes are checking to see if
a particular feature is contained or not is to ``grep`` through the
`Metadata <#metadata>`__ for the feature. Here is an example that
discovers the recipes whose build is potentially changed based on a
given feature:

::

   $ cd poky
   $ git grep 'contains.*MACHINE_FEATURES.*feature'
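
Inside a recipe, such a check is typically written with
``bb.utils.contains()``. The following hypothetical fragment (the
``alsa`` feature and the ``PACKAGECONFIG`` values are purely
illustrative) sketches the pattern the ``grep`` above would find:

::

   # Enable the "alsa" PACKAGECONFIG option only when the machine
   # declares the corresponding hardware feature.
   PACKAGECONFIG ??= "${@bb.utils.contains('MACHINE_FEATURES', 'alsa', 'alsa', '', d)}"
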

.. _ref-features-machine:

Machine Features
================

The items below are features you can use with
```MACHINE_FEATURES`` <#var-MACHINE_FEATURES>`__. Features do not have a
one-to-one correspondence to packages, and they can go beyond simply
controlling the installation of a package or packages. Sometimes a
feature can influence how certain recipes are built. For example, a
feature might determine whether a particular configure option is
specified within the ```do_configure`` <#ref-tasks-configure>`__ task
for a particular recipe.

This feature list only represents features as shipped with the Yocto
Project metadata:

- *acpi:* Hardware has ACPI (x86/x86_64 only)
- *alsa:* Hardware has ALSA audio drivers
- *apm:* Hardware uses APM (or APM emulation)
- *bluetooth:* Hardware has integrated BT
- *efi:* Support for booting through EFI
- *ext2:* Hardware HDD or Microdrive
- *keyboard:* Hardware has a keyboard
- *pcbios:* Support for booting through BIOS
- *pci:* Hardware has a PCI bus
- *pcmcia:* Hardware has PCMCIA or CompactFlash sockets
- *phone:* Mobile phone (voice) support
- *qvga:* Machine has a QVGA (320x240) display
- *rtc:* Machine has a Real-Time Clock
- *screen:* Hardware has a screen
- *serial:* Hardware has serial support (usually RS232)
- *touchscreen:* Hardware has a touchscreen
- *usbgadget:* Hardware is USB gadget device capable
- *usbhost:* Hardware is USB Host capable
- *vfat:* FAT file system support
- *wifi:* Hardware has integrated WiFi

.. _ref-features-distro:

Distro Features
===============

The items below are features you can use with
```DISTRO_FEATURES`` <#var-DISTRO_FEATURES>`__ to enable features across
your distribution. Features do not have a one-to-one correspondence to
packages, and they can go beyond simply controlling the installation of
a package or packages. In most cases, the presence or absence of a
feature translates to the appropriate option supplied to the configure
script during the ```do_configure`` <#ref-tasks-configure>`__ task for
the recipes that optionally support the feature.

Some distro features are also machine features. These select features
make sense to be controlled both at the machine and distribution
configuration level. See the
```COMBINED_FEATURES`` <#var-COMBINED_FEATURES>`__ variable for more
information.

This list only represents features as shipped with the Yocto Project
metadata:

- *alsa:* Include ALSA support (OSS compatibility kernel modules
installed if available).
- *api-documentation:* Enables generation of API documentation during
recipe builds. The resulting documentation is added to SDK tarballs
when the ``bitbake -c populate_sdk`` command is used. See the
"`Adding API Documentation to the Standard
SDK <&YOCTO_DOCS_SDK_URL;#adding-api-documentation-to-the-standard-sdk>`__"
section in the Yocto Project Application Development and the
Extensible Software Development Kit (eSDK) manual.
- *bluetooth:* Include bluetooth support (integrated BT only).
- *cramfs:* Include CramFS support.
- *directfb:* Include DirectFB support.
- *ext2:* Include tools for supporting devices with internal
  HDD/Microdrive for storing files (instead of Flash-only devices).
- *ipsec:* Include IPSec support.
- *ipv6:* Include IPv6 support.
- *keyboard:* Include keyboard support (e.g. keymaps will be loaded
during boot).
- *ldconfig:* Include support for ldconfig and ``ld.so.conf`` on the
target.
- *nfs:* Include NFS client support (for mounting NFS exports on
device).
- *opengl:* Include the Open Graphics Library, which is a
cross-language, multi-platform application programming interface used
for rendering two and three-dimensional graphics.
- *pci:* Include PCI bus support.
- *pcmcia:* Include PCMCIA/CompactFlash support.
- *ppp:* Include PPP dialup support.
- *ptest:* Enables building the package tests where supported by
individual recipes. For more information on package tests, see the
"`Testing Packages With
ptest <&YOCTO_DOCS_DEV_URL;#testing-packages-with-ptest>`__" section
in the Yocto Project Development Tasks Manual.
- *smbfs:* Include SMB networks client support (for mounting
Samba/Microsoft Windows shares on device).
- *systemd:* Include support for this ``init`` manager, which is a full
  replacement for ``init`` with parallel starting of services, reduced
  shell overhead, and other features. This ``init`` manager is used by
  many distributions.
- *usbgadget:* Include USB Gadget Device support (for USB
networking/serial/storage).
- *usbhost:* Include USB Host support (allows connecting external
  keyboards, mice, storage devices, networks, and so forth).
- *usrmerge:* Merges the ``/bin``, ``/sbin``, ``/lib``, and ``/lib64``
directories into their respective counterparts in the ``/usr``
directory to provide better package and application compatibility.
- *wayland:* Include the Wayland display server protocol and the
library that supports it.
- *wifi:* Include WiFi support (integrated only).
- *x11:* Include the X server and libraries.
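
As a minimal sketch (the chosen features are illustrative), a
distribution enables additional distro features by appending to the
variable in its configuration file:

::

   # Hypothetical fragment of a distro .conf file
   DISTRO_FEATURES_append = " systemd wayland"
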

.. _ref-features-image:

Image Features
==============

The contents of images generated by the OpenEmbedded build system can be
controlled by the ```IMAGE_FEATURES`` <#var-IMAGE_FEATURES>`__ and
```EXTRA_IMAGE_FEATURES`` <#var-EXTRA_IMAGE_FEATURES>`__ variables that
you typically configure in your image recipes. Through these variables,
you can add several different predefined packages such as development
utilities or packages with debug information needed to investigate
application problems or profile applications.

The following image features are available for all images:

- *allow-empty-password:* Allows Dropbear and OpenSSH to accept root
logins and logins from accounts having an empty password string.
- *dbg-pkgs:* Installs debug symbol packages for all packages installed
in a given image.
- *debug-tweaks:* Makes an image suitable for development (e.g. allows
root logins without passwords and enables post-installation logging).
See the 'allow-empty-password', 'empty-root-password', and
'post-install-logging' features in this list for additional
information.
- *dev-pkgs:* Installs development packages (headers and extra library
links) for all packages installed in a given image.
- *doc-pkgs:* Installs documentation packages for all packages
installed in a given image.
- *empty-root-password:* Sets the root password to an empty string,
which allows logins with a blank password.
- *package-management:* Installs package management tools and preserves
the package manager database.
- *post-install-logging:* Enables logging postinstall script runs to
the ``/var/log/postinstall.log`` file on first boot of the image on
the target system.
  .. note::

     To make the ``/var/log`` directory on the target persistent, use
     the ``VOLATILE_LOG_DIR`` variable by setting it to "no".
- *ptest-pkgs:* Installs ptest packages for all ptest-enabled recipes.
- *read-only-rootfs:* Creates an image whose root filesystem is
read-only. See the "`Creating a Read-Only Root
Filesystem <&YOCTO_DOCS_DEV_URL;#creating-a-read-only-root-filesystem>`__"
section in the Yocto Project Development Tasks Manual for more
information.
- *splash:* Enables showing a splash screen during boot. By default,
this screen is provided by ``psplash``, which does allow
customization. If you prefer to use an alternative splash screen
package, you can do so by setting the ``SPLASH`` variable to a
different package name (or names) within the image recipe or at the
distro configuration level.
- *staticdev-pkgs:* Installs static development packages, which are
static libraries (i.e. ``*.a`` files), for all packages installed in
a given image.
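
For example (a sketch; the selected features are illustrative), you can
pull several of the above features into every image you build by setting
``EXTRA_IMAGE_FEATURES`` in your ``local.conf`` file:

::

   # Hypothetical local.conf fragment: development conveniences only,
   # never for production images
   EXTRA_IMAGE_FEATURES = "debug-tweaks dbg-pkgs"
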
Some image features are available only when you inherit the
```core-image`` <#ref-classes-core-image>`__ class. The current list of
these valid features is as follows:
- *hwcodecs:* Installs hardware acceleration codecs.
- *nfs-server:* Installs an NFS server.
- *perf:* Installs profiling tools such as ``perf``, ``systemtap``, and
``LTTng``. For general information on user-space tools, see the
`Yocto Project Application Development and the Extensible Software
Development Kit (eSDK) <&YOCTO_DOCS_SDK_URL;>`__ manual.
- *ssh-server-dropbear:* Installs the Dropbear minimal SSH server.
- *ssh-server-openssh:* Installs the OpenSSH SSH server, which is more
full-featured than Dropbear. Note that if both the OpenSSH SSH server
and the Dropbear minimal SSH server are present in
``IMAGE_FEATURES``, then OpenSSH will take precedence and Dropbear
will not be installed.
- *tools-debug:* Installs debugging tools such as ``strace`` and
``gdb``. For information on GDB, see the "`Debugging With the GNU
Project Debugger (GDB)
Remotely <&YOCTO_DOCS_DEV_URL;#platdev-gdb-remotedebug>`__" section
in the Yocto Project Development Tasks Manual. For information on
tracing and profiling, see the `Yocto Project Profiling and Tracing
Manual <&YOCTO_DOCS_PROF_URL;>`__.
- *tools-sdk:* Installs a full SDK that runs on the device.
- *tools-testapps:* Installs device testing tools (e.g. touchscreen
debugging).
- *x11:* Installs the X server.
- *x11-base:* Installs the X server with a minimal environment.
- *x11-sato:* Installs the OpenedHand Sato environment.

.. _ref-features-backfill:

Feature Backfilling
===================

Sometimes it is necessary in the OpenEmbedded build system to extend
```MACHINE_FEATURES`` <#var-MACHINE_FEATURES>`__ or
```DISTRO_FEATURES`` <#var-DISTRO_FEATURES>`__ to control functionality
that was previously enabled and not able to be disabled. For these
cases, we need to add an additional feature item to appear in one of
these variables, but we do not want to force developers who have
existing values of the variables in their configuration to add the new
feature in order to retain the same overall level of functionality.

Thus, the OpenEmbedded build system has a mechanism to automatically
"backfill" these added features into existing distro or machine
configurations. You can see the list of features for which this is done
by finding the
```DISTRO_FEATURES_BACKFILL`` <#var-DISTRO_FEATURES_BACKFILL>`__ and
```MACHINE_FEATURES_BACKFILL`` <#var-MACHINE_FEATURES_BACKFILL>`__
variables in the ``meta/conf/bitbake.conf`` file.

Because such features are backfilled by default into all configurations
as described in the previous paragraph, developers who wish to disable
the new features need to be able to selectively prevent the backfilling
from occurring. They can do this by adding the undesired feature or
features to the
```DISTRO_FEATURES_BACKFILL_CONSIDERED`` <#var-DISTRO_FEATURES_BACKFILL_CONSIDERED>`__
or
```MACHINE_FEATURES_BACKFILL_CONSIDERED`` <#var-MACHINE_FEATURES_BACKFILL_CONSIDERED>`__
variables for distro features and machine features respectively.

Here are two examples to help illustrate feature backfilling:

- *The "pulseaudio" distro feature option*: Previously, PulseAudio
support was enabled within the Qt and GStreamer frameworks. Because
of this, the feature is backfilled and thus enabled for all distros
through the ``DISTRO_FEATURES_BACKFILL`` variable in the
``meta/conf/bitbake.conf`` file. However, your distro needs to
disable the feature. You can disable the feature without affecting
other existing distro configurations that need PulseAudio support by
adding "pulseaudio" to ``DISTRO_FEATURES_BACKFILL_CONSIDERED`` in
your distro's ``.conf`` file. Adding the feature to this variable
when it also exists in the ``DISTRO_FEATURES_BACKFILL`` variable
prevents the build system from adding the feature to your
configuration's ``DISTRO_FEATURES``, effectively disabling the
feature for that particular distro.
- *The "rtc" machine feature option*: Previously, real time clock (RTC)
support was enabled for all target devices. Because of this, the
feature is backfilled and thus enabled for all machines through the
``MACHINE_FEATURES_BACKFILL`` variable in the
``meta/conf/bitbake.conf`` file. However, your target device does not
have this capability. You can disable RTC support for your device
without affecting other machines that need RTC support by adding the
feature to your machine's ``MACHINE_FEATURES_BACKFILL_CONSIDERED``
list in the machine's ``.conf`` file. Adding the feature to this
variable when it also exists in the ``MACHINE_FEATURES_BACKFILL``
variable prevents the build system from adding the feature to your
configuration's ``MACHINE_FEATURES``, effectively disabling RTC
support for that particular machine.
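
Putting the two examples together, each opt-out is a one-line addition
to the relevant configuration file (a sketch; apply only the line that
matches your case):

::

   # In your distro's .conf file: prevent backfilling of "pulseaudio"
   DISTRO_FEATURES_BACKFILL_CONSIDERED = "pulseaudio"

   # In your machine's .conf file: prevent backfilling of "rtc"
   MACHINE_FEATURES_BACKFILL_CONSIDERED = "rtc"
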

******
Images
******

The OpenEmbedded build system provides several example images to satisfy
different needs. When you issue the ``bitbake`` command you provide a
“top-level” recipe that essentially begins the build for the type of
image you want.
.. note::

   Building an image without GNU General Public License Version 3
   (GPLv3), GNU Lesser General Public License Version 3 (LGPLv3), and
   the GNU Affero General Public License Version 3 (AGPL-3.0) components
   is only supported for minimal and base images. Furthermore, if you
   are going to build an image using non-GPLv3 and similarly licensed
   components, you must make the following changes in the ``local.conf``
   file before using the BitBake command to build the minimal or base
   image:

   ::

      1. Comment out the EXTRA_IMAGE_FEATURES line
      2. Set INCOMPATIBLE_LICENSE = "GPL-3.0 LGPL-3.0 AGPL-3.0"
From within the ``poky`` Git repository, you can use the following
command to display the list of directories within the `Source
Directory <#source-directory>`__ that contain image recipe files:

::

   $ ls meta*/recipes*/images/*.bb

Following is a list of supported recipes:

- ``build-appliance-image``: An example virtual machine that contains
all the pieces required to run builds using the build system as well
as the build system itself. You can boot and run the image using
either the `VMware
Player <http://www.vmware.com/products/player/overview.html>`__ or
`VMware
Workstation <http://www.vmware.com/products/workstation/overview.html>`__.
For more information on this image, see the `Build
Appliance <&YOCTO_HOME_URL;/software-item/build-appliance/>`__ page
on the Yocto Project website.
- ``core-image-base``: A console-only image that fully supports the
target device hardware.
- ``core-image-clutter``: An image with support for the Open GL-based
toolkit Clutter, which enables development of rich and animated
graphical user interfaces.
- ``core-image-full-cmdline``: A console-only image with more
full-featured Linux system functionality installed.
- ``core-image-lsb``: An image that conforms to the Linux Standard Base
(LSB) specification. This image requires a distribution configuration
that enables LSB compliance (e.g. ``poky-lsb``). If you build
``core-image-lsb`` without that configuration, the image will not be
LSB-compliant.
- ``core-image-lsb-dev``: A ``core-image-lsb`` image that is suitable
for development work using the host. The image includes headers and
libraries you can use in a host development environment. This image
requires a distribution configuration that enables LSB compliance
(e.g. ``poky-lsb``). If you build ``core-image-lsb-dev`` without that
configuration, the image will not be LSB-compliant.
- ``core-image-lsb-sdk``: A ``core-image-lsb`` that includes everything
in the cross-toolchain but also includes development headers and
libraries to form a complete standalone SDK. This image requires a
distribution configuration that enables LSB compliance (e.g.
``poky-lsb``). If you build ``core-image-lsb-sdk`` without that
configuration, the image will not be LSB-compliant. This image is
suitable for development using the target.
- ``core-image-minimal``: A small image just capable of allowing a
device to boot.
- ``core-image-minimal-dev``: A ``core-image-minimal`` image suitable
for development work using the host. The image includes headers and
libraries you can use in a host development environment.
- ``core-image-minimal-initramfs``: A ``core-image-minimal`` image that
has the Minimal RAM-based Initial Root Filesystem (initramfs) as part
of the kernel, which allows the system to find the first “init”
program more efficiently. See the
```PACKAGE_INSTALL`` <#var-PACKAGE_INSTALL>`__ variable for
additional information helpful when working with initramfs images.
- ``core-image-minimal-mtdutils``: A ``core-image-minimal`` image that
has support for the Minimal MTD Utilities, which let the user
interact with the MTD subsystem in the kernel to perform operations
on flash devices.
- ``core-image-rt``: A ``core-image-minimal`` image plus a real-time
test suite and tools appropriate for real-time use.
- ``core-image-rt-sdk``: A ``core-image-rt`` image that includes
everything in the cross-toolchain. The image also includes
development headers and libraries to form a complete stand-alone SDK
and is suitable for development using the target.
- ``core-image-sato``: An image with Sato support, a mobile environment
and visual style that works well with mobile devices. The image
supports X11 with a Sato theme and applications such as a terminal,
editor, file manager, media player, and so forth.
- ``core-image-sato-dev``: A ``core-image-sato`` image suitable for
development using the host. The image includes libraries needed to
build applications on the device itself, testing and profiling tools,
and debug symbols. This image was formerly ``core-image-sdk``.
- ``core-image-sato-sdk``: A ``core-image-sato`` image that includes
everything in the cross-toolchain. The image also includes
development headers and libraries to form a complete standalone SDK
and is suitable for development using the target.
- ``core-image-testmaster``: A "master" image designed to be used for
automated runtime testing. Provides a "known good" image that is
deployed to a separate partition so that you can boot into it and use
it to deploy a second image to be tested. You can find more
information about runtime testing in the "`Performing Automated
Runtime
Testing <&YOCTO_DOCS_DEV_URL;#performing-automated-runtime-testing>`__"
section in the Yocto Project Development Tasks Manual.
- ``core-image-testmaster-initramfs``: A RAM-based Initial Root
Filesystem (initramfs) image tailored for use with the
``core-image-testmaster`` image.
- ``core-image-weston``: A very basic Wayland image with a terminal.
This image provides the Wayland protocol libraries and the reference
Weston compositor. For more information, see the "`Using Wayland and
Weston <&YOCTO_DOCS_DEV_URL;#dev-using-wayland-and-weston>`__"
section in the Yocto Project Development Tasks Manual.
- ``core-image-x11``: A very basic X11 image with a terminal.
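Any of the images listed above can be built with BitBake once a build
environment has been initialized (for example with ``oe-init-build-env``).
A typical invocation, shown here as an illustrative sketch, looks like
the following; the resulting image files typically appear under
``tmp/deploy/images/`` in the build directory::

   $ bitbake core-image-minimal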

*******************************************
OpenEmbedded Kickstart (``.wks``) Reference
*******************************************
.. _openembedded-kickstart-wks-reference:
Introduction
============
The current Wic implementation supports only the basic kickstart
partitioning commands: ``partition`` (or ``part`` for short) and
``bootloader``.
.. note::
Future updates will implement more commands and options. If you use
anything that is not specifically supported, results can be
unpredictable.
This chapter provides a reference on the available kickstart commands.
The information lists the commands, their syntax, and meanings.
Kickstart commands are based on the Fedora kickstart versions but with
modifications to reflect Wic capabilities. You can see the original
documentation for those commands at the following link:
http://pykickstart.readthedocs.io/en/latest/kickstart-docs.html
Command: part or partition
==========================
Either of these commands creates a partition on the system and uses the
following syntax::

   part [mntpoint]
   partition [mntpoint]

If you do not provide ``mntpoint``, Wic creates a partition but does not
mount it.
The ``mntpoint`` is where the partition is mounted and must be in one of
the following forms:
- ``/path``: For example, "/", "/usr", or "/home"
- ``swap``: The created partition is used as swap space
Specifying a mntpoint causes the partition to automatically be mounted.
Wic achieves this by adding entries to the filesystem table (fstab)
during image generation. In order for Wic to generate a valid fstab, you
must also provide one of the ``--ondrive``, ``--ondisk``, or
``--use-uuid`` partition options as part of the command.
.. note::

   The mount program must understand the PARTUUID syntax you use with
   ``--use-uuid`` and non-root ``mountpoint``, including swap. The
   BusyBox versions of these applications are currently excluded.
Here is an example that uses "/" as the mountpoint. The command uses
``--ondisk`` to force the partition onto the ``sdb`` disk::

   part / --source rootfs --ondisk sdb --fstype=ext3 --label platform --align 1024
Here is a list that describes other supported options you can use with
the ``part`` and ``partition`` commands:
- *``--size``:* The minimum partition size in MBytes. Specify an
integer value such as 500. Do not append the number with "MB". You do
not need this option if you use ``--source``.
- *``--fixed-size``:* The exact partition size in MBytes. You cannot
use this option together with ``--size``. An error occurs when
assembling the disk image if the partition data is larger than
``--fixed-size``.
- *``--source``:* This option is a Wic-specific option that names the
source of the data that populates the partition. The most common
value for this option is "rootfs", but you can use any value that
maps to a valid source plugin. For information on the source plugins,
see the "`Using the Wic Plugins
Interface <&YOCTO_DOCS_DEV_URL;#wic-using-the-wic-plugin-interface>`__"
section in the Yocto Project Development Tasks Manual.
If you use ``--source rootfs``, Wic creates a partition as large as
needed and fills it with the contents of the root filesystem pointed
to by the ``-r`` command-line option or the equivalent rootfs derived
from the ``-e`` command-line option. The filesystem type used to
create the partition is driven by the value of the ``--fstype``
option specified for the partition. See the entry on ``--fstype``
that follows for more information.
If you use ``--source plugin-name``, Wic creates a partition as large
as needed and fills it with the contents of the partition that is
generated by the specified plugin name using the data pointed to by
the ``-r`` command-line option or the equivalent rootfs derived from
the ``-e`` command-line option. Exactly what those contents are and
filesystem type used are dependent on the given plugin
implementation.
If you do not use the ``--source`` option, the ``wic`` command
creates an empty partition. Consequently, you must use the ``--size``
option to specify the size of the empty partition.
- *``--ondisk`` or ``--ondrive``:* Forces the partition to be created
on a particular disk.
- *``--fstype``:* Sets the file system type for the partition. Valid
values are:
- ``ext4``
- ``ext3``
- ``ext2``
- ``btrfs``
- ``squashfs``
- ``swap``
- *``--fsoptions``:* Specifies a free-form string of options to be used
when mounting the filesystem. This string is copied into the
``/etc/fstab`` file of the installed system and should be enclosed in
quotes. If not specified, the default string is "defaults".
- *``--label label``:* Specifies the label to give to the filesystem to
be made on the partition. If the given label is already in use by
another filesystem, a new label is created for the partition.
- *``--active``:* Marks the partition as active.
- *``--align (in KBytes)``:* This option is a Wic-specific option that
tells Wic to start partitions on boundaries that are multiples of the
given number of KBytes.
- *``--no-table``:* This option is a Wic-specific option. Using the
option reserves space for the partition and causes it to become
populated. However, the partition is not added to the partition
table.
- *``--exclude-path``:* This option is a Wic-specific option that
excludes the given relative path from the resulting image. This
option is only effective with the rootfs source plugin.
- *``--extra-space``:* This option is a Wic-specific option that adds
extra space after the space filled by the content of the partition.
The final size can exceed the size specified by the ``--size``
option. The default value is 10 Mbytes.
- *``--overhead-factor``:* This option is a Wic-specific option that
multiplies the size of the partition by the option's value. You must
supply a value greater than or equal to "1". The default value is
"1.3".
- *``--part-name``:* This option is a Wic-specific option that
specifies a name for GPT partitions.
- *``--part-type``:* This option is a Wic-specific option that
specifies the partition type globally unique identifier (GUID) for
GPT partitions. You can find the list of partition type GUIDs at
http://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs.
- *``--use-uuid``:* This option is a Wic-specific option that causes
Wic to generate a random GUID for the partition. The generated
identifier is used in the bootloader configuration to specify the
root partition.
- *``--uuid``:* This option is a Wic-specific option that specifies the
partition UUID.
- *``--fsuuid``:* This option is a Wic-specific option that specifies
the filesystem UUID. You can generate or modify
```WKS_FILE`` <#var-WKS_FILE>`__ with this option if a preconfigured
filesystem UUID is added to the kernel command line in the bootloader
configuration before you run Wic.
- *``--system-id``:* This option is a Wic-specific option that
specifies the partition system ID, which is a one byte long,
hexadecimal parameter with or without the 0x prefix.
- *``--mkfs-extraopts``:* This option specifies additional options to
pass to the ``mkfs`` utility. Some default options for certain
filesystems do not take effect. See Wic's help on kickstart (i.e.
``wic help kickstart``).
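As an illustration of how several of these options combine, the
following sketch creates a root partition followed by a swap partition.
The disk name, labels, and sizes here are examples only, not
requirements::

   part / --source rootfs --ondisk sda --fstype=ext4 --label root --align 1024 --use-uuid
   part swap --ondisk sda --size 44 --label swap1 --fstype=swap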
Command: bootloader
===================
This command specifies how the bootloader should be configured and
supports the following options:
.. note::
Bootloader functionality and boot partitions are implemented by the
various
--source
plugins that implement bootloader functionality. The bootloader
command essentially provides a means of modifying bootloader
configuration.
- *``--timeout``:* Specifies the number of seconds before the
bootloader times out and boots the default option.
- *``--append``:* Specifies kernel parameters. These parameters will be
added to the syslinux ``APPEND`` or ``grub`` kernel command line.
- *``--configfile``:* Specifies a user-defined configuration file for
the bootloader. You can provide a full pathname for the file or a
file that exists in the ``canned-wks`` folder. This option overrides
all other bootloader options.
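Putting the ``part`` and ``bootloader`` commands together, a minimal
``.wks`` file might look like the following sketch. The disk name,
labels, and kernel parameters shown are illustrative assumptions::

   part /boot --source bootimg-pcbios --ondisk sda --label boot --active --align 1024
   part / --source rootfs --ondisk sda --fstype=ext4 --label platform --align 1024

   bootloader --timeout=0 --append="rootwait rootfstype=ext4"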

==============================
Yocto Project Reference Manual
==============================
.. toctree::
:caption: Table of Contents
:numbered:
ref-system-requirements
ref-terms
ref-release-process
migration
ref-structure
ref-classes
ref-tasks
ref-devtool-reference
ref-kickstart
ref-qa-checks
ref-images
ref-features
ref-variables
ref-varlocality
faq
resources

*****************************
QA Error and Warning Messages
*****************************
.. _qa-introduction:
Introduction
============
When building a recipe, the OpenEmbedded build system performs various
QA checks on the output to ensure that common issues are detected and
reported. Sometimes when you create a new recipe to build new software,
it will build with no problems. When this is not the case, or when you
have QA issues building any software, it could take a little time to
resolve them.
While it is tempting to ignore a QA message or even to disable QA
checks, it is best to try and resolve any reported QA issues. This
chapter provides a list of the QA messages and brief explanations of the
issues you could encounter so that you can properly resolve problems.
The next section provides a list of all QA error and warning messages
based on a default configuration. Each entry provides the message or
error form along with an explanation.
.. note::
- At the end of each message, the name of the associated QA test (as
listed in the "```insane.bbclass`` <#ref-classes-insane>`__"
section) appears within square brackets.
- As mentioned, this list of error and warning messages is for QA
checks only. The list does not cover all possible build errors or
warnings you could encounter.
- Because some QA checks are disabled by default, this list does not
include all possible QA check errors and warnings.
.. _qa-errors-and-warnings:
Errors and Warnings
===================
- ``<packagename>: <path> is using libexec please relocate to <libexecdir> [libexec]``
The specified package contains files in ``/usr/libexec`` when the
distro configuration uses a different path for ``<libexecdir>``. By
default, ``<libexecdir>`` is ``$prefix/libexec``. However, this
default can be changed (e.g. to ``${libdir}``).
 
- ``package <packagename> contains bad RPATH <rpath> in file <file> [rpaths]``
The specified binary produced by the recipe contains dynamic library
load paths (rpaths) that contain build system paths such as
```TMPDIR`` <#var-TMPDIR>`__, which are incorrect for the target and
could potentially be a security issue. Check for bad ``-rpath``
options being passed to the linker in your
```do_compile`` <#ref-tasks-compile>`__ log. Depending on the build
system used by the software being built, there might be a configure
option to disable rpath usage completely within the build of the
software.
 
- ``<packagename>: <file> contains probably-redundant RPATH <rpath> [useless-rpaths]``
The specified binary produced by the recipe contains dynamic library
load paths (rpaths) that on a standard system are searched by default
by the linker (e.g. ``/lib`` and ``/usr/lib``). While these paths
will not cause any breakage, they do waste space and are unnecessary.
Depending on the build system used by the software being built, there
might be a configure option to disable rpath usage completely within
the build of the software.
 
- ``<packagename> requires <files>, but no providers in its RDEPENDS [file-rdeps]``
A file-level dependency has been identified from the specified
package on the specified files, but there is no explicit
corresponding entry in ```RDEPENDS`` <#var-RDEPENDS>`__. If
particular files are required at runtime then ``RDEPENDS`` should be
declared in the recipe to ensure the packages providing them are
built.
 
- ``<packagename1> rdepends on <packagename2>, but it isn't a build dependency? [build-deps]``
A runtime dependency exists between the two specified packages, but
there is nothing explicit within the recipe to enable the
OpenEmbedded build system to ensure that dependency is satisfied.
This condition is usually triggered by an
```RDEPENDS`` <#var-RDEPENDS>`__ value being added at the packaging
stage rather than up front, which is usually automatic based on the
contents of the package. In most cases, you should change the recipe
to add an explicit ``RDEPENDS`` for the dependency.
 
- ``non -dev/-dbg/nativesdk- package contains symlink .so: <packagename> path '<path>' [dev-so]``
Symlink ``.so`` files are for development only, and should therefore
go into the ``-dev`` package. This situation might occur if you add
``*.so*`` rather than ``*.so.*`` to a non-dev package. Change
```FILES`` <#var-FILES>`__ (and possibly
```PACKAGES`` <#var-PACKAGES>`__) such that the specified ``.so``
file goes into an appropriate ``-dev`` package.
 
- ``non -staticdev package contains static .a library: <packagename> path '<path>' [staticdev]``
Static ``.a`` library files should go into a ``-staticdev`` package.
Change ```FILES`` <#var-FILES>`__ (and possibly
```PACKAGES`` <#var-PACKAGES>`__) such that the specified ``.a`` file
goes into an appropriate ``-staticdev`` package.
 
- ``<packagename>: found library in wrong location [libdir]``
The specified file may have been installed into an incorrect
(possibly hardcoded) installation path. For example, this test will
catch recipes that install ``/lib/bar.so`` when ``${base_libdir}`` is
"lib32". Another example is when recipes install
``/usr/lib64/foo.so`` when ``${libdir}`` is "/usr/lib". False
positives occasionally exist. For these cases add "libdir" to
```INSANE_SKIP`` <#var-INSANE_SKIP>`__ for the package.
 
- ``non debug package contains .debug directory: <packagename> path <path> [debug-files]``
The specified package contains a ``.debug`` directory, which should
not appear in anything but the ``-dbg`` package. This situation might
occur if you add a path which contains a ``.debug`` directory and do
not explicitly add the ``.debug`` directory to the ``-dbg`` package.
If this is the case, add the ``.debug`` directory explicitly to
``FILES_${PN}-dbg``. See ```FILES`` <#var-FILES>`__ for additional
information on ``FILES``.
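For example, if a package installs plugins under a path that includes a
``.debug`` directory, an assignment along the following lines (the path
is an illustrative placeholder) ships that directory in the ``-dbg``
package::

   FILES_${PN}-dbg += "${libdir}/myplugins/.debug"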
 
- ``Architecture did not match (<machine_arch> to <file_arch>) on <file> [arch]``
By default, the OpenEmbedded build system checks the Executable and
Linkable Format (ELF) type, bit size, and endianness of any binaries
to ensure they match the target architecture. This test fails if any
binaries do not match the type since there would be an
incompatibility. The test could indicate that the wrong compiler or
compiler options have been used. Sometimes software, like
bootloaders, might need to bypass this check. If the file you receive
the error for is firmware that is not intended to be executed within
the target operating system or is intended to run on a separate
processor within the device, you can add "arch" to
```INSANE_SKIP`` <#var-INSANE_SKIP>`__ for the package. Another
option is to check the ```do_compile`` <#ref-tasks-compile>`__ log
and verify that the compiler options being used are correct.
 
- ``Bit size did not match (<machine_bits> to <file_bits>) <recipe> on <file> [arch]``
By default, the OpenEmbedded build system checks the Executable and
Linkable Format (ELF) type, bit size, and endianness of any binaries
to ensure they match the target architecture. This test fails if any
binaries do not match the type since there would be an
incompatibility. The test could indicate that the wrong compiler or
compiler options have been used. Sometimes software, like
bootloaders, might need to bypass this check. If the file you receive
the error for is firmware that is not intended to be executed within
the target operating system or is intended to run on a separate
processor within the device, you can add "arch" to
```INSANE_SKIP`` <#var-INSANE_SKIP>`__ for the package. Another
option is to check the ```do_compile`` <#ref-tasks-compile>`__ log
and verify that the compiler options being used are correct.
 
- ``Endianness did not match (<machine_endianness> to <file_endianness>) on <file> [arch]``
By default, the OpenEmbedded build system checks the Executable and
Linkable Format (ELF) type, bit size, and endianness of any binaries
to ensure they match the target architecture. This test fails if any
binaries do not match the type since there would be an
incompatibility. The test could indicate that the wrong compiler or
compiler options have been used. Sometimes software, like
bootloaders, might need to bypass this check. If the file you receive
the error for is firmware that is not intended to be executed within
the target operating system or is intended to run on a separate
processor within the device, you can add "arch" to
```INSANE_SKIP`` <#var-INSANE_SKIP>`__ for the package. Another
option is to check the ```do_compile`` <#ref-tasks-compile>`__ log
and verify that the compiler options being used are correct.
 
- ``ELF binary '<file>' has relocations in .text [textrel]``
The specified ELF binary contains relocations in its ``.text``
sections. This situation can result in a performance impact at
runtime.
Typically, the way to solve this performance issue is to add "-fPIC"
or "-fpic" to the compiler command-line options. For example, given
software that reads ```CFLAGS`` <#var-CFLAGS>`__ when you build it,
you could add the following to your recipe::

   CFLAGS_append = " -fPIC "

For more information on text relocations at runtime, see
http://www.akkadia.org/drepper/textrelocs.html.
 
- ``No GNU_HASH in the elf binary: '<file>' [ldflags]``
This indicates that binaries produced when building the recipe have
not been linked with the ```LDFLAGS`` <#var-LDFLAGS>`__ options
provided by the build system. Check to be sure that the ``LDFLAGS``
variable is being passed to the linker command. A common workaround
for this situation is to pass in ``LDFLAGS`` using
```TARGET_CC_ARCH`` <#var-TARGET_CC_ARCH>`__ within the recipe as
follows::

   TARGET_CC_ARCH += "${LDFLAGS}"
 
- ``Package <packagename> contains Xorg driver (<driver>) but no xorg-abi- dependencies [xorg-driver-abi]``
The specified package contains an Xorg driver, but does not have a
corresponding ABI package dependency. The xserver-xorg recipe
provides driver ABI names. All drivers should depend on the ABI
versions that they have been built against. Driver recipes that
include ``xorg-driver-input.inc`` or ``xorg-driver-video.inc`` will
automatically get these versions. Consequently, you should only need
to explicitly add dependencies to binary driver recipes.
 
- ``The /usr/share/info/dir file is not meant to be shipped in a particular package. [infodir]``
The ``/usr/share/info/dir`` file should not be packaged. Add the
following line to your ```do_install`` <#ref-tasks-install>`__ task or
to your ``do_install_append`` within the recipe::

   rm ${D}${infodir}/dir
 
- ``Symlink <path> in <packagename> points to TMPDIR [symlink-to-sysroot]``
The specified symlink points into ```TMPDIR`` <#var-TMPDIR>`__ on the
host. Such symlinks will work on the host. However, they are clearly
invalid when running on the target. You should either correct the
symlink to use a relative path or remove the symlink.
 
- ``<file> failed sanity test (workdir) in path <path> [la]``
The specified ``.la`` file contains ```TMPDIR`` <#var-TMPDIR>`__
paths. Any ``.la`` file containing these paths is incorrect since
``libtool`` adds the correct sysroot prefix when using the files
automatically itself.
 
- ``<file> failed sanity test (tmpdir) in path <path> [pkgconfig]``
The specified ``.pc`` file contains
```TMPDIR`` <#var-TMPDIR>`__\ ``/``\ ```WORKDIR`` <#var-WORKDIR>`__
paths. Any ``.pc`` file containing these paths is incorrect since
``pkg-config`` itself adds the correct sysroot prefix when the files
are accessed.
 
- ``<packagename> rdepends on <debug_packagename> [debug-deps]``
A dependency exists between the specified non-dbg package (i.e. a
package whose name does not end in ``-dbg``) and a package that is a
``dbg`` package. The ``dbg`` packages contain debug symbols and are
brought in using several different methods:
- Using the ``dbg-pkgs``
```IMAGE_FEATURES`` <#var-IMAGE_FEATURES>`__ value.
- Using ```IMAGE_INSTALL`` <#var-IMAGE_INSTALL>`__.
- As a dependency of another ``dbg`` package that was brought in
using one of the above methods.
The dependency might have been automatically added because the
``dbg`` package erroneously contains files that it should not contain
(e.g. a non-symlink ``.so`` file) or it might have been added
manually (e.g. by adding to ```RDEPENDS`` <#var-RDEPENDS>`__).
 
- ``<packagename> rdepends on <dev_packagename> [dev-deps]``
A dependency exists between the specified non-dev package (a package
whose name does not end in ``-dev``) and a package that is a ``dev``
package. The ``dev`` packages contain development headers and are
usually brought in using several different methods:
- Using the ``dev-pkgs``
```IMAGE_FEATURES`` <#var-IMAGE_FEATURES>`__ value.
- Using ```IMAGE_INSTALL`` <#var-IMAGE_INSTALL>`__.
- As a dependency of another ``dev`` package that was brought in
using one of the above methods.
The dependency might have been automatically added because the
``dev`` package erroneously contains files that it should not contain
(e.g. a non-symlink ``.so`` file) or it might have been added
manually (e.g. by adding to ```RDEPENDS`` <#var-RDEPENDS>`__).
 
- ``<var>_<packagename> is invalid: <comparison> (<value>) only comparisons <, =, >, <=, and >= are allowed [dep-cmp]``
If you are adding a versioned dependency relationship to one of the
dependency variables (```RDEPENDS`` <#var-RDEPENDS>`__,
```RRECOMMENDS`` <#var-RRECOMMENDS>`__,
```RSUGGESTS`` <#var-RSUGGESTS>`__,
```RPROVIDES`` <#var-RPROVIDES>`__,
```RREPLACES`` <#var-RREPLACES>`__, or
```RCONFLICTS`` <#var-RCONFLICTS>`__), you must only use the named
comparison operators. Change the versioned dependency values you are
adding to match those listed in the message.
 
- ``<recipename>: The compile log indicates that host include and/or library paths were used. Please check the log '<logfile>' for more information. [compile-host-path]``
The log for the ```do_compile`` <#ref-tasks-compile>`__ task
indicates that paths on the host were searched for files, which is
not appropriate when cross-compiling. Look for "is unsafe for
cross-compilation" or "CROSS COMPILE Badness" in the specified log
file.
 
- ``<recipename>: The install log indicates that host include and/or library paths were used. Please check the log '<logfile>' for more information. [install-host-path]``
The log for the ```do_install`` <#ref-tasks-install>`__ task
indicates that paths on the host were searched for files, which is
not appropriate when cross-compiling. Look for "is unsafe for
cross-compilation" or "CROSS COMPILE Badness" in the specified log
file.
 
- ``This autoconf log indicates errors, it looked at host include and/or library paths while determining system capabilities. Rerun configure task after fixing this. The path was '<path>'``
The log for the ```do_configure`` <#ref-tasks-configure>`__ task
indicates that paths on the host were searched for files, which is
not appropriate when cross-compiling. Look for "is unsafe for
cross-compilation" or "CROSS COMPILE Badness" in the specified log
file.
 
- ``<packagename> doesn't match the [a-z0-9.+-]+ regex [pkgname]``
The convention within the OpenEmbedded build system (sometimes
enforced by the package manager itself) is to require that package
names are all lower case and to allow a restricted set of characters.
If your recipe name does not match this, or you add packages to
```PACKAGES`` <#var-PACKAGES>`__ that do not conform to the
convention, then you will receive this error. Rename your recipe. Or,
if you have added a non-conforming package name to ``PACKAGES``,
change the package name appropriately.
 
- ``<recipe>: configure was passed unrecognized options: <options> [unknown-configure-option]``
The configure script is reporting that the specified options are
unrecognized. This situation could be because the options were
previously valid but have been removed from the configure script. Or,
there was a mistake when the options were added and there is another
option that should be used instead. If you are unsure, consult the
upstream build documentation, the ``./configure --help`` output, and
the upstream change log or release notes. Once you have worked out
what the appropriate change is, you can update
```EXTRA_OECONF`` <#var-EXTRA_OECONF>`__,
```PACKAGECONFIG_CONFARGS`` <#var-PACKAGECONFIG_CONFARGS>`__, or the
individual ```PACKAGECONFIG`` <#var-PACKAGECONFIG>`__ option values
accordingly.
 
- ``Recipe <recipefile> has PN of "<recipename>" which is in OVERRIDES, this can result in unexpected behavior. [pn-overrides]``
The specified recipe has a name (```PN`` <#var-PN>`__) value that
appears in ```OVERRIDES`` <#var-OVERRIDES>`__. If a recipe is named
such that its ``PN`` value matches something already in ``OVERRIDES``
(e.g. ``PN`` happens to be the same as ```MACHINE`` <#var-MACHINE>`__
or ```DISTRO`` <#var-DISTRO>`__), it can have unexpected
consequences. For example, assignments such as
``FILES_${PN} = "xyz"`` effectively turn into ``FILES = "xyz"``.
Rename your recipe (or if ``PN`` is being set explicitly, change the
``PN`` value) so that the conflict does not occur. See
```FILES`` <#var-FILES>`__ for additional information.
 
- ``<recipefile>: Variable <variable> is set as not being package specific, please fix this. [pkgvarcheck]``
Certain variables (```RDEPENDS`` <#var-RDEPENDS>`__,
```RRECOMMENDS`` <#var-RRECOMMENDS>`__,
```RSUGGESTS`` <#var-RSUGGESTS>`__,
```RCONFLICTS`` <#var-RCONFLICTS>`__,
```RPROVIDES`` <#var-RPROVIDES>`__,
```RREPLACES`` <#var-RREPLACES>`__, ```FILES`` <#var-FILES>`__,
``pkg_preinst``, ``pkg_postinst``, ``pkg_prerm``, ``pkg_postrm``, and
```ALLOW_EMPTY`` <#var-ALLOW_EMPTY>`__) should always be set specific
to a package (i.e. they should be set with a package name override
such as ``RDEPENDS_${PN} = "value"`` rather than
``RDEPENDS = "value"``). If you receive this error, correct any
assignments to these variables within your recipe.
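As an illustration, the sketch below shows the problematic form and its
package-specific correction; ``bar`` is a placeholder package name, not
a real dependency::

   # Triggers pkgvarcheck (not package specific):
   RDEPENDS = "bar"

   # Correct, package-specific form:
   RDEPENDS_${PN} = "bar"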
 
- ``File '<file>' from <recipename> was already stripped, this will prevent future debugging! [already-stripped]``
Produced binaries have already been stripped prior to the build
system extracting debug symbols. It is common for upstream software
projects to default to stripping debug symbols for output binaries.
In order for debugging to work on the target using ``-dbg`` packages,
this stripping must be disabled.
Depending on the build system used by the software being built,
disabling this stripping could be as easy as specifying an additional
configure option. If not, disabling stripping might involve patching
the build scripts. In the latter case, look for references to "strip"
or "STRIP", or the "-s" or "-S" command-line options being specified
on the linker command line (possibly through the compiler command
line if preceded with "-Wl,").
.. note::
Disabling stripping here does not mean that the final packaged
binaries will be unstripped. Once the OpenEmbedded build system
splits out debug symbols to the
-dbg
package, it will then strip the symbols from the binaries.
 
- ``<packagename> is listed in PACKAGES multiple times, this leads to packaging errors. [packages-list]``
Package names must appear only once in the
```PACKAGES`` <#var-PACKAGES>`__ variable. You might receive this
error if you are attempting to add a package to ``PACKAGES`` that is
already in the variable's value.
 
- ``FILES variable for package <packagename> contains '//' which is invalid. Attempting to fix this but you should correct the metadata. [files-invalid]``
The string "//" is invalid in a Unix path. Correct all occurrences
where this string appears in a ```FILES`` <#var-FILES>`__ variable so
that there is only a single "/".
 
- ``<recipename>: Files/directories were installed but not shipped in any package [installed-vs-shipped]``
Files have been installed within the
```do_install`` <#ref-tasks-install>`__ task but have not been
included in any package by way of the ```FILES`` <#var-FILES>`__
variable. Files that do not appear in any package cannot be present
in an image later on in the build process. You need to do one of the
following:
- Add the files to ``FILES`` for the package you want them to appear
in (e.g. ``FILES_${``\ ```PN`` <#var-PN>`__\ ``}`` for the main
package).
- Delete the files at the end of the ``do_install`` task if the
files are not needed in any package.
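For instance, if ``do_install`` places a file under a custom directory,
a line such as the following (the path is an illustrative placeholder)
adds that file to the main package::

   FILES_${PN} += "${datadir}/myapp/config.dat"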
 
- ``<oldpackage>-<oldpkgversion> was registered as shlib provider for <library>, changing it to <newpackage>-<newpkgversion> because it was built later``
This message means that both ``<oldpackage>`` and ``<newpackage>``
provide the specified shared library. You can expect this message
when a recipe has been renamed. However, if that is not the case, the
message might indicate that a private version of a library is being
erroneously picked up as the provider for a common library. If that
is the case, you should add the library's ``.so`` file name to
```PRIVATE_LIBS`` <#var-PRIVATE_LIBS>`__ in the recipe that provides
the private version of the library.
- ``LICENSE_<packagename> includes licenses (<licenses>) that are not listed in LICENSE [unlisted-pkg-lics]``
The ```LICENSE`` <#var-LICENSE>`__ of the recipe should be a superset
of all the licenses of all packages produced by this recipe. In other
words, any license in ``LICENSE_*`` should also appear in
```LICENSE`` <#var-LICENSE>`__.
 
Configuring and Disabling QA Checks
===================================
You can configure the QA checks globally so that specific check failures
either raise a warning or an error message, using the
```WARN_QA`` <#var-WARN_QA>`__ and ```ERROR_QA`` <#var-ERROR_QA>`__
variables, respectively. You can also disable checks within a particular
recipe using ```INSANE_SKIP`` <#var-INSANE_SKIP>`__. For information on
how to work with the QA checks, see the
"```insane.bbclass`` <#ref-classes-insane>`__" section.
.. note::
Please keep in mind that the QA checks exist in order to detect real
or potential problems in the packaged output. So exercise caution
when disabling these checks.
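As an illustrative sketch, the following configuration fragment demotes
the ``ldflags`` check from an error to a warning globally and skips the
``arch`` check for a single recipe. The specific check names here are
examples only::

   # In a configuration file such as local.conf:
   WARN_QA_append = " ldflags"
   ERROR_QA_remove = "ldflags"

   # In the recipe that needs the exception:
   INSANE_SKIP_${PN} += "arch"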

*****************************************************
Yocto Project Releases and the Stable Release Process
*****************************************************
The Yocto Project release process is predictable and consists of both
major and minor (point) releases. This brief chapter provides
information on how releases are named, their life cycle, and their
stability.
Major and Minor Release Cadence
===============================
The Yocto Project delivers major releases (e.g. DISTRO) using a six
month cadence roughly timed each April and October of the year.
Following are examples of some major YP releases with their codenames
also shown. See the "`Major Release
Codenames <#major-release-codenames>`__" section for information on
codenames used with major releases.

-  2.2 (Morty)
-  2.1 (Krogoth)
-  2.0 (Jethro)

While the cadence is never perfect, this timescale facilitates regular
releases that have strong QA cycles while not overwhelming users with
too many new releases. The cadence is predictable and avoids many major
holidays in various geographies.
The Yocto Project delivers minor (point) releases on an unscheduled
basis, usually driven by the accumulation of enough significant fixes or
enhancements to the associated major release. Following are some example
past point releases:

-  2.1.1
-  2.1.2
-  2.2.1

A point release indicates a point in the major release branch where a
full QA cycle and release process validates the content of the new
branch.

.. note::

   Realize that there can be patches merged onto the stable release
   branches as and when they become available.
Major Release Codenames
=======================
Each major release receives a codename that identifies the release in
the `Yocto Project Source
Repositories <&YOCTO_DOCS_OM_URL;#yocto-project-repositories>`__. The
concept is that branches of `Metadata <#metadata>`__ with the same
codename are likely to be compatible and thus work together.

.. note::

   Codenames are associated with major releases because a Yocto Project
   release number (e.g. DISTRO) could conflict with a given layer or
   company versioning scheme. Codenames are unique, interesting, and
   easily identifiable.
Releases are given a nominal release version as well but the codename is
used in repositories for this reason. You can find information on Yocto
Project releases and codenames at
https://wiki.yoctoproject.org/wiki/Releases.

Stable Release Process
======================
Once released, the release enters the stable release process at which
time a person is assigned as the maintainer for that stable release.
This maintainer monitors activity for the release by investigating and
handling nominated patches and backport activity. Only fixes and
enhancements that have first been applied on the "master" branch (i.e.
the current, in-development branch) are considered for backporting to a
stable release.

.. note::

   The current Yocto Project policy regarding backporting is to consider
   bug fixes and security fixes only. Policy dictates that features are
   not backported to a stable release. This policy means generic recipe
   version upgrades are unlikely to be accepted for backporting. The
   exception to this policy occurs when a strong reason exists, such as
   when the fix happens to also be the preferred upstream approach.
Stable release branches have strong maintenance for about a year after
their initial release. Should significant issues be found for any
release regardless of its age, fixes could be backported to older
releases. For issues that are not backported to an older release,
Community LTS trees and branches exist where community members share
patches for older releases. However, these types of patches do not go
through the same release process as point releases do. You can find more
information about stable branch maintenance at
https://wiki.yoctoproject.org/wiki/Stable_branch_maintenance.

Testing and Quality Assurance
=============================
Part of the Yocto Project development and release process is quality
assurance through the execution of test strategies. Test strategies
provide the Yocto Project team a way to ensure a release is validated.
Additionally, because the test strategies are visible to you as a
developer, you can validate your projects. This section overviews the
available test infrastructure used in the Yocto Project. For information
on how to run available tests on your projects, see the "`Performing
Automated Runtime
Testing <&YOCTO_DOCS_DEV_URL;#performing-automated-runtime-testing>`__"
section in the Yocto Project Development Tasks Manual.
The QA/testing infrastructure is woven into the project to the point
where core developers take some of it for granted. The infrastructure
consists of the following pieces:
- ``bitbake-selftest``: A standalone command that runs unit tests on
key pieces of BitBake and its fetchers.
- ```sanity.bbclass`` <#ref-classes-sanity>`__: This automatically
included class checks the build environment for missing tools (e.g.
``gcc``) or common misconfigurations such as
```MACHINE`` <#var-MACHINE>`__ set incorrectly.
- ```insane.bbclass`` <#ref-classes-insane>`__: This class checks the
generated output from builds for sanity. For example, if building for
an ARM target, did the build produce ARM binaries? If, for example,
the build produced PPC binaries instead, then there is a problem.
- ```testimage.bbclass`` <#ref-classes-testimage*>`__: This class
performs runtime testing of images after they are built. The tests
are usually used with `QEMU <&YOCTO_DOCS_DEV_URL;#dev-manual-qemu>`__
to boot the images and check that the combined runtime result boots
and functions correctly. However, the tests can also use the IP
address of a machine to test.
- ```ptest`` <&YOCTO_DOCS_DEV_URL;#testing-packages-with-ptest>`__:
Runs tests against packages produced during the build for a given
piece of software. The test allows the packages to be run within a
target image.
- ``oe-selftest``: Tests combinations of BitBake invocations. These tests
operate outside the OpenEmbedded build system itself. The
``oe-selftest`` tool can run all tests by default or can run selected
tests or test suites.

.. note::

   Running ``oe-selftest`` requires host packages beyond the "Essential"
   grouping. See the "Required Packages for the Build Host" section for
   more information.
Originally, much of this testing was done manually. However, significant
effort has been made to automate the tests so that more people can use
them and the Yocto Project development team can run them faster and more
efficiently.
The Yocto Project's main Autobuilder (``autobuilder.yoctoproject.org``)
publicly tests each Yocto Project release's code in the
`OE-Core <#oe-core>`__, Poky, and BitBake repositories. The testing
occurs for both the current state of the "master" branch and also for
submitted patches. Testing for submitted patches usually occurs in the
"ross/mut" branch in the ``poky-contrib`` repository (i.e. the
master-under-test branch) or in the "master-next" branch in the ``poky``
repository.

.. note::

   You can find all these branches in the Yocto Project Source
   Repositories.
Testing within these public branches ensures in a publicly visible way
that all of the main supported architectures and recipes in OE-Core
successfully build and behave properly.
Various features such as ``multilib``, sub-architectures (e.g. ``x32``,
``poky-tiny``, ``musl``, ``no-x11`` and so forth),
``bitbake-selftest``, and ``oe-selftest`` are tested as part of the QA
process of a release. Complete testing and validation for a release
takes the Autobuilder workers several hours.

.. note::

   The Autobuilder workers are non-homogeneous, which means regular
   testing across a variety of Linux distributions occurs. The
   Autobuilder is limited to only testing QEMU-based setups and not
   real hardware.
Finally, in addition to the Autobuilder's tests, the Yocto Project QA
team also performs testing on a variety of platforms, which includes
actual hardware, to ensure expected results.

**************************
Source Directory Structure
**************************
The `Source Directory <#source-directory>`__ consists of numerous files,
directories and subdirectories; understanding their locations and
contents is key to using the Yocto Project effectively. This chapter
describes the Source Directory and gives information about those files
and directories.
For information on how to establish a local Source Directory on your
development system, see the "`Locating Yocto Project Source
Files <&YOCTO_DOCS_DEV_URL;#locating-yocto-project-source-files>`__"
section in the Yocto Project Development Tasks Manual.

.. note::

   The OpenEmbedded build system does not support file or directory
   names that contain spaces. Be sure that the Source Directory you use
   does not contain these types of names.
.. _structure-core:
Top-Level Core Components
=========================
This section describes the top-level components of the `Source
Directory <#source-directory>`__.
.. _structure-core-bitbake:
``bitbake/``
------------
This directory includes a copy of BitBake for ease of use. The copy
usually matches the current stable BitBake release from the BitBake
project. BitBake, a `Metadata <#metadata>`__ interpreter, reads the
Yocto Project Metadata and runs the tasks defined by that data. Failures
are usually caused by errors in your Metadata and not from BitBake
itself; consequently, most users do not need to worry about BitBake.
When you run the ``bitbake`` command, the main BitBake executable (which
resides in the ``bitbake/bin/`` directory) starts. Sourcing the
environment setup script (i.e. ```oe-init-build-env`` <#structure-core-script>`__) places
the ``scripts/`` and ``bitbake/bin/`` directories (in that order) into
the shell's ``PATH`` environment variable.
For more information on BitBake, see the `BitBake User
Manual <&YOCTO_DOCS_BB_URL;>`__.
.. _structure-core-build:
``build/``
----------
This directory contains user configuration files and the output
generated by the OpenEmbedded build system in its standard configuration
where the source tree is combined with the output. The `Build
Directory <#build-directory>`__ is created initially when you ``source``
the OpenEmbedded build environment setup script (i.e.
```oe-init-build-env`` <#structure-core-script>`__).
It is also possible to place output and configuration files in a
directory separate from the `Source Directory <#source-directory>`__ by
providing a directory name when you ``source`` the setup script. For
information on separating output from your local Source Directory files
(commonly described as an "out of tree" build), see the
"````` <#structure-core-script>`__" section.
.. _handbook:
``documentation/``
------------------
This directory holds the source for the Yocto Project documentation as
well as templates and tools that allow you to generate PDF and HTML
versions of the manuals. Each manual is contained in its own sub-folder;
for example, the files for this reference manual reside in the
``ref-manual/`` directory.
.. _structure-core-meta:
``meta/``
---------
This directory contains the minimal, underlying OpenEmbedded-Core
metadata. The directory holds recipes, common classes, and machine
configuration for strictly emulated targets (``qemux86``, ``qemuarm``,
and so forth.)
.. _structure-core-meta-poky:
``meta-poky/``
--------------
Designed above the ``meta/`` content, this directory adds just enough
metadata to define the Poky reference distribution.
.. _structure-core-meta-yocto-bsp:
``meta-yocto-bsp/``
-------------------
This directory contains the Yocto Project reference hardware Board
Support Packages (BSPs). For more information on BSPs, see the `Yocto
Project Board Support Package (BSP) Developer's
Guide <&YOCTO_DOCS_BSP_URL;>`__.
.. _structure-meta-selftest:
``meta-selftest/``
------------------
This directory adds additional recipes and append files used by the
OpenEmbedded selftests to verify the behavior of the build system. You
do not have to add this layer to your ``bblayers.conf`` file unless you
want to run the selftests.
.. _structure-meta-skeleton:
``meta-skeleton/``
------------------
This directory contains template recipes for BSP and kernel development.
.. _structure-core-scripts:
``scripts/``
------------
This directory contains various integration scripts that implement extra
functionality in the Yocto Project environment (e.g. QEMU scripts). The
```oe-init-build-env`` <#structure-core-script>`__ script prepends this directory to the
shell's ``PATH`` environment variable.
The ``scripts`` directory has useful scripts that assist in contributing
back to the Yocto Project, such as ``create-pull-request`` and
``send-pull-request``.
.. _structure-core-script:
``oe-init-build-env``
---------------------
This script sets up the OpenEmbedded build environment. Running this
script with the ``source`` command in a shell makes changes to ``PATH``
and sets other core BitBake variables based on the current working
directory. You need to run an environment setup script before running
BitBake commands. The script uses other scripts within the ``scripts``
directory to do the bulk of the work.
When you run this script, your Yocto Project environment is set up, a
`Build Directory <#build-directory>`__ is created, your working
directory becomes the Build Directory, and you are presented with some
simple suggestions as to what to do next, including a list of some
possible targets to build. Here is an example::

   $ source oe-init-build-env

   ### Shell environment set up for builds. ###

   You can now run 'bitbake <target>'

   Common targets are:
       core-image-minimal
       core-image-sato
       meta-toolchain
       meta-ide-support

   You can also run generated qemu images with a command like 'runqemu qemux86-64'

The default output of the ``oe-init-build-env`` script is from the
``conf-notes.txt`` file, which is found in the ``meta-poky`` directory
within the `Source Directory <#source-directory>`__. If you design a
custom distribution, you can include your own version of this
configuration file to mention the targets defined by your distribution.
See the "`Creating a Custom Template Configuration
Directory <&YOCTO_DOCS_DEV_URL;#creating-a-custom-template-configuration-directory>`__"
section in the Yocto Project Development Tasks Manual for more
information.
By default, running this script without a Build Directory argument
creates the ``build/`` directory in your current working directory. If
you provide a Build Directory argument when you ``source`` the script,
you direct the OpenEmbedded build system to create a Build Directory of
your choice. For example, the following command creates a Build
Directory named ``mybuilds/`` that is outside of the `Source
Directory <#source-directory>`__::

   $ source OE_INIT_FILE ~/mybuilds

The OpenEmbedded build system uses the template configuration files, which
are found by default in the ``meta-poky/conf/`` directory in the Source
Directory. See the "`Creating a Custom Template Configuration
Directory <&YOCTO_DOCS_DEV_URL;#creating-a-custom-template-configuration-directory>`__"
section in the Yocto Project Development Tasks Manual for more
information.

.. note::

   The OpenEmbedded build system does not support file or directory
   names that contain spaces. If you attempt to run the OE_INIT_FILE
   script from a Source Directory that contains spaces in either the
   filenames or directory names, the script returns an error indicating
   no such file or directory. Be sure to use a Source Directory free of
   names containing spaces.
.. _structure-basic-top-level:
``LICENSE, README, and README.hardware``
----------------------------------------
These files are standard top-level files.
.. _structure-build:
The Build Directory - ``build/``
================================
The OpenEmbedded build system creates the `Build
Directory <#build-directory>`__ when you run the build environment setup
script ```oe-init-build-env`` <#structure-core-script>`__. If you do not give the Build
Directory a specific name when you run the setup script, the name
defaults to ``build/``.
For subsequent parsing and processing, the name of the Build Directory
is available via the ```TOPDIR`` <#var-TOPDIR>`__ variable.
.. _structure-build-buildhistory:
``build/buildhistory/``
-----------------------
The OpenEmbedded build system creates this directory when you enable
build history via the ``buildhistory`` class file. The directory
organizes build information into image, packages, and SDK
subdirectories. For information on the build history feature, see the
"`Maintaining Build Output
Quality <&YOCTO_DOCS_DEV_URL;#maintaining-build-output-quality>`__"
section in the Yocto Project Development Tasks Manual.
.. _structure-build-conf-local.conf:
``build/conf/local.conf``
-------------------------
This configuration file contains all the local user configurations for
your build environment. The ``local.conf`` file contains documentation
on the various configuration options. Any variable set here overrides
any variable set elsewhere within the environment unless that variable
is hard-coded within a file (e.g. by using '=' instead of '?='). Some
variables are hard-coded for various reasons but such variables are
relatively rare.
At a minimum, you would normally edit this file to select the target
``MACHINE``, which package types you wish to use
(```PACKAGE_CLASSES`` <#var-PACKAGE_CLASSES>`__), and the location from
which you want to access downloaded files (``DL_DIR``).
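For example, a minimal set of such edits in ``conf/local.conf`` might
look like the following; the values shown are illustrative only::

   MACHINE ?= "qemux86-64"
   PACKAGE_CLASSES ?= "package_rpm"
   DL_DIR ?= "${TOPDIR}/downloads"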
If ``local.conf`` is not present when you start the build, the
OpenEmbedded build system creates it from ``local.conf.sample`` when you
``source`` the top-level build environment setup script
```oe-init-build-env`` <#structure-core-script>`__.
The source ``local.conf.sample`` file used depends on the
``$TEMPLATECONF`` script variable, which defaults to ``meta-poky/conf/``
when you are building from the Yocto Project development environment,
and to ``meta/conf/`` when you are building from the OpenEmbedded-Core
environment. Because the script variable points to the source of the
``local.conf.sample`` file, this implies that you can configure your
build environment from any layer by setting the variable in the
top-level build environment setup script as follows::

   TEMPLATECONF=your_layer/conf

Once the build process gets the sample file, it uses ``sed`` to
substitute final ``${``\ ```OEROOT`` <#var-OEROOT>`__\ ``}`` values for
all ``##OEROOT##`` values.

.. note::

   You can see how the ``TEMPLATECONF`` variable is used by looking at
   the ``scripts/oe-setup-builddir`` script in the `Source
   Directory <#source-directory>`__. You can find the Yocto Project
   version of the ``local.conf.sample`` file in the ``meta-poky/conf``
   directory.
.. _structure-build-conf-bblayers.conf:
``build/conf/bblayers.conf``
----------------------------
This configuration file defines
`layers <&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers>`__,
which are directory trees, traversed (or walked) by BitBake. The
``bblayers.conf`` file uses the ```BBLAYERS`` <#var-BBLAYERS>`__
variable to list the layers BitBake tries to find.
If ``bblayers.conf`` is not present when you start the build, the
OpenEmbedded build system creates it from ``bblayers.conf.sample`` when
you ``source`` the top-level build environment setup script (i.e.
```oe-init-build-env`` <#structure-core-script>`__).
As with the ``local.conf`` file, the source ``bblayers.conf.sample``
file used depends on the ``$TEMPLATECONF`` script variable, which
defaults to ``meta-poky/conf/`` when you are building from the Yocto
Project development environment, and to ``meta/conf/`` when you are
building from the OpenEmbedded-Core environment. Because the script
variable points to the source of the ``bblayers.conf.sample`` file, this
implies that you can base your build from any layer by setting the
variable in the top-level build environment setup script as follows::

   TEMPLATECONF=your_layer/conf

Once the build process gets the sample file, it uses ``sed`` to
substitute final ``${``\ ```OEROOT`` <#var-OEROOT>`__\ ``}`` values for
all ``##OEROOT##`` values.

.. note::

   You can see how the ``TEMPLATECONF`` variable is used by looking at
   the ``scripts/oe-setup-builddir`` script in the `Source
   Directory <#source-directory>`__. You can find the Yocto Project
   version of the ``bblayers.conf.sample`` file in the
   ``meta-poky/conf/`` directory.
.. _structure-build-conf-sanity_info:
``build/cache/sanity_info``
---------------------------
This file indicates the state of the sanity checks and is created during
the build.
.. _structure-build-downloads:
``build/downloads/``
--------------------
This directory contains downloaded upstream source tarballs. You can
reuse the directory for multiple builds or move the directory to another
location. You can control the location of this directory through the
``DL_DIR`` variable.
.. _structure-build-sstate-cache:
``build/sstate-cache/``
-----------------------
This directory contains the shared state cache. You can reuse the
directory for multiple builds or move the directory to another location.
You can control the location of this directory through the
``SSTATE_DIR`` variable.
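For example, pointing both cache directories at shared locations outside
any single Build Directory lets several builds reuse downloads and
shared state; the paths below are only placeholders::

   DL_DIR ?= "/srv/yocto/downloads"
   SSTATE_DIR ?= "/srv/yocto/sstate-cache"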
.. _structure-build-tmp:
``build/tmp/``
--------------
The OpenEmbedded build system creates and uses this directory for all
the build system's output. The ```TMPDIR`` <#var-TMPDIR>`__ variable
points to this directory.
BitBake creates this directory if it does not exist. As a last resort,
to clean up a build and start it from scratch (other than the
downloads), you can remove everything in the ``tmp`` directory or get
rid of the directory completely. If you do, you should also completely
remove the ``build/sstate-cache`` directory.
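Such a from-scratch cleanup can be sketched as follows, run from the top
of the Build Directory (this permanently deletes all build output)::

   $ rm -rf tmp sstate-cache
   $ bitbake <target>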
.. _structure-build-tmp-buildstats:
``build/tmp/buildstats/``
-------------------------
This directory stores the build statistics.
.. _structure-build-tmp-cache:
``build/tmp/cache/``
--------------------
When BitBake parses the metadata (recipes and configuration files), it
caches the results in ``build/tmp/cache/`` to speed up future builds.
The results are stored on a per-machine basis.
During subsequent builds, BitBake checks each recipe (together with, for
example, any files included or appended to it) to see if they have been
modified. Changes can be detected, for example, through file
modification time (mtime) changes and hashing of file contents. If no
changes to the file are detected, then the parsed result stored in the
cache is reused. If the file has changed, it is reparsed.
.. _structure-build-tmp-deploy:
``build/tmp/deploy/``
---------------------
This directory contains any "end result" output from the OpenEmbedded
build process. The ```DEPLOY_DIR`` <#var-DEPLOY_DIR>`__ variable points
to this directory. For more detail on the contents of the ``deploy``
directory, see the
"`Images <&YOCTO_DOCS_OM_URL;#images-dev-environment>`__" and
"`Application Development
SDK <&YOCTO_DOCS_OM_URL;#sdk-dev-environment>`__" sections in the Yocto
Project Overview and Concepts Manual.
.. _structure-build-tmp-deploy-deb:
``build/tmp/deploy/deb/``
-------------------------
This directory receives any ``.deb`` packages produced by the build
process. The packages are sorted into feeds for different architecture
types.
.. _structure-build-tmp-deploy-rpm:
``build/tmp/deploy/rpm/``
-------------------------
This directory receives any ``.rpm`` packages produced by the build
process. The packages are sorted into feeds for different architecture
types.
.. _structure-build-tmp-deploy-ipk:
``build/tmp/deploy/ipk/``
-------------------------
This directory receives ``.ipk`` packages produced by the build process.
.. _structure-build-tmp-deploy-licenses:
``build/tmp/deploy/licenses/``
------------------------------
This directory receives package licensing information. For example, the
directory contains sub-directories for ``bash``, ``busybox``, and
``glibc`` (among others) that in turn contain appropriate ``COPYING``
license files with other licensing information. For information on
licensing, see the "`Maintaining Open Source License Compliance During
Your Product's
Lifecycle <&YOCTO_DOCS_DEV_URL;#maintaining-open-source-license-compliance-during-your-products-lifecycle>`__"
section in the Yocto Project Development Tasks Manual.
.. _structure-build-tmp-deploy-images:
``build/tmp/deploy/images/``
----------------------------
This directory is populated with the basic output objects of the build
(think of them as the "generated artifacts" of the build process),
including things like the boot loader image, kernel, root filesystem and
more. If you want to flash the resulting image from a build onto a
device, look here for the necessary components.
Be careful when deleting files in this directory. You can safely delete
old images from this directory (e.g. ``core-image-*``). However, the
kernel (``*zImage*``, ``*uImage*``, etc.), bootloader and other
supplementary files might be deployed here prior to building an image.
Because these files are not directly produced from the image, if you
delete them they will not be automatically re-created when you build the
image again.
If you do accidentally delete files here, you will need to force them to
be re-created. In order to do that, you will need to know the target
that produced them. For example, these commands rebuild and re-create
the kernel files::

   $ bitbake -c clean virtual/kernel
   $ bitbake virtual/kernel
.. _structure-build-tmp-deploy-sdk:
``build/tmp/deploy/sdk/``
-------------------------
The OpenEmbedded build system creates this directory to hold toolchain
installer scripts which, when executed, install the sysroot that matches
your target hardware. You can find out more about these installers in
the "`Building an SDK
Installer <&YOCTO_DOCS_SDK_URL;#sdk-building-an-sdk-installer>`__"
section in the Yocto Project Application Development and the Extensible
Software Development Kit (eSDK) manual.
.. _structure-build-tmp-sstate-control:
``build/tmp/sstate-control/``
-----------------------------
The OpenEmbedded build system uses this directory for the shared state
manifest files. The shared state code uses these files to record the
files installed by each sstate task so that the files can be removed
when cleaning the recipe or when a newer version is about to be
installed. The build system also uses the manifests to detect and
produce a warning when files from one task are overwriting those from
another.
.. _structure-build-tmp-sysroots-components:
``build/tmp/sysroots-components/``
----------------------------------
This directory is the location of the sysroot contents that the task
```do_prepare_recipe_sysroot`` <#ref-tasks-prepare_recipe_sysroot>`__
links or copies into the recipe-specific sysroot for each recipe listed
in ```DEPENDS`` <#var-DEPENDS>`__. Population of this directory is
handled through shared state, while the path is specified by the
```COMPONENTS_DIR`` <#var-COMPONENTS_DIR>`__ variable. Apart from a few
unusual circumstances, handling of the ``sysroots-components`` directory
should be automatic, and recipes should not directly reference
``build/tmp/sysroots-components``.
.. _structure-build-tmp-sysroots:
``build/tmp/sysroots/``
-----------------------
Previous versions of the OpenEmbedded build system used to create a
global shared sysroot per machine along with a native sysroot. Beginning
with the DISTRO version of the Yocto Project, sysroots exist in
recipe-specific ```WORKDIR`` <#var-WORKDIR>`__ directories. Thus, the
``build/tmp/sysroots/`` directory is unused.

.. note::

   The ``build/tmp/sysroots/`` directory can still be populated using
   the ``bitbake build-sysroots`` command and can be used for
   compatibility in some cases. However, in general it is not
   recommended to populate this directory. Individual recipe-specific
   sysroots should be used.
.. _structure-build-tmp-stamps:
``build/tmp/stamps/``
---------------------
This directory holds information that BitBake uses for accounting
purposes to track what tasks have run and when they have run. The
directory is sub-divided by architecture, package name, and version.
Following is an example::

   stamps/all-poky-linux/distcc-config/1.0-r0.do_build-2fdd....2do

Although the files in the directory are empty of data, BitBake uses the
filenames and timestamps for tracking purposes.
For information on how BitBake uses stamp files to determine if a task
should be rerun, see the "`Stamp Files and the Rerunning of
Tasks <&YOCTO_DOCS_OM_URL;#stamp-files-and-the-rerunning-of-tasks>`__"
section in the Yocto Project Overview and Concepts Manual.
.. _structure-build-tmp-log:
``build/tmp/log/``
------------------
This directory contains general logs that are not otherwise placed using
the package's ``WORKDIR``. Examples of logs are the output from the
``do_check_pkg`` or ``do_distro_check`` tasks. Running a build does not
necessarily mean this directory is created.
.. _structure-build-tmp-work:
``build/tmp/work/``
-------------------
This directory contains architecture-specific work sub-directories for
packages built by BitBake. All tasks execute from the appropriate work
directory. For example, the source for a particular package is unpacked,
patched, configured and compiled all within its own work directory.
Within the work directory, organization is based on the package group
and version for which the source is being compiled as defined by the
```WORKDIR`` <#var-WORKDIR>`__.
It is worth considering the structure of a typical work directory. As an
example, consider ``linux-yocto-kernel-3.0`` on the machine ``qemux86``
built within the Yocto Project. For this package, a work directory of
``tmp/work/qemux86-poky-linux/linux-yocto/3.0+git1+<.....>``, referred
to as the ``WORKDIR``, is created. Within this directory, the source is
unpacked to ``linux-qemux86-standard-build`` and then patched by Quilt.
(See the "`Using Quilt in Your
Workflow <&YOCTO_DOCS_DEV_URL;#using-a-quilt-workflow>`__" section in
the Yocto Project Development Tasks Manual for more information.) Within
the ``linux-qemux86-standard-build`` directory, standard Quilt
directories ``linux-3.0/patches`` and ``linux-3.0/.pc`` are created, and
standard Quilt commands can be used.
There are other directories generated within ``WORKDIR``. The most
important directory is ``WORKDIR/temp/``, which has log files for each
task (``log.do_*.pid``) and contains the scripts BitBake runs for each
task (``run.do_*.pid``). The ``WORKDIR/image/`` directory is where "make
install" places its output that is then split into sub-packages within
``WORKDIR/packages-split/``.
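One way to find a recipe's work directory and inspect these files is to
query BitBake for its ``WORKDIR`` value; the recipe name here is only an
example::

   $ bitbake -e linux-yocto | grep '^WORKDIR='
   $ ls <workdir>/temp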
.. _structure-build-tmp-work-tunearch-recipename-version:
``build/tmp/work/tunearch/recipename/version/``
-----------------------------------------------
The recipe work directory - ``${WORKDIR}``.
As described earlier in the
"```build/tmp/sysroots/`` <#structure-build-tmp-sysroots>`__" section,
beginning with the DISTRO release of the Yocto Project, the OpenEmbedded
build system builds each recipe in its own work directory (i.e.
```WORKDIR`` <#var-WORKDIR>`__). The path to the work directory is
constructed using the architecture of the given build (e.g.
```TUNE_PKGARCH`` <#var-TUNE_PKGARCH>`__,
```MACHINE_ARCH`` <#var-MACHINE_ARCH>`__, or "allarch"), the recipe
name, and the version of the recipe (i.e.
```PE`` <#var-PE>`__\ ``:``\ ```PV`` <#var-PV>`__\ ``-``\ ```PR`` <#var-PR>`__).
A number of key subdirectories exist within each recipe work directory:
- ``${WORKDIR}/temp``: Contains the log files of each task executed for
this recipe, the "run" files for each executed task, which contain
the code run, and a ``log.task_order`` file, which lists the order in
which tasks were executed.
- ``${WORKDIR}/image``: Contains the output of the
```do_install`` <#ref-tasks-install>`__ task, which corresponds to
the ``${``\ ```D`` <#var-D>`__\ ``}`` variable in that task.
- ``${WORKDIR}/pseudo``: Contains the pseudo database and log for any
tasks executed under pseudo for the recipe.
- ``${WORKDIR}/sysroot-destdir``: Contains the output of the
```do_populate_sysroot`` <#ref-tasks-populate_sysroot>`__ task.
- ``${WORKDIR}/package``: Contains the output of the
```do_package`` <#ref-tasks-package>`__ task before the output is
split into individual packages.
- ``${WORKDIR}/packages-split``: Contains the output of the
``do_package`` task after the output has been split into individual
packages. Subdirectories exist for each individual package created by
the recipe.
- ``${WORKDIR}/recipe-sysroot``: A directory populated with the target
dependencies of the recipe. This directory looks like the target
filesystem and contains libraries that the recipe might need to link
against (e.g. the C library).
- ``${WORKDIR}/recipe-sysroot-native``: A directory populated with the
native dependencies of the recipe. This directory contains the tools
the recipe needs to build (e.g. the compiler, Autoconf, libtool, and
so forth).
- ``${WORKDIR}/build``: This subdirectory applies only to recipes that
support builds where the source is separate from the build artifacts.
The OpenEmbedded build system uses this directory as a separate build
directory (i.e. ``${``\ ```B`` <#var-B>`__\ ``}``).
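As a rough illustration of how these pieces combine, the following shell sketch assembles a plausible work directory path. All values here are hypothetical stand-ins for ``TUNE_PKGARCH``, ``PN``, ``PV`` and ``PR``; the real path is computed by BitBake from ``bitbake.conf``, and the ``PE`` component is omitted when that variable is unset:

```shell
# Hypothetical values; a real build derives these from the metadata.
tmpdir=/home/user/poky/build/tmp
arch=cortexa8hf-neon-poky-linux-gnueabi   # target system (from TUNE_PKGARCH et al.)
pn=example-app                            # recipe name
pv=1.0                                    # recipe version (PV)
pr=r0                                     # recipe revision (PR)

# Assemble the per-recipe work directory path
workdir=$tmpdir/work/$arch/$pn/$pv-$pr
echo "$workdir"
```

Running this prints the kind of path under which the ``temp``, ``image``, ``package`` and other subdirectories described above are created.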
.. _structure-build-work-shared:
``build/tmp/work-shared/``
--------------------------
For efficiency, the OpenEmbedded build system creates and uses this
directory to hold recipes that share a work directory with other
recipes. In practice, this is only used for ``gcc`` and its variants
(e.g. ``gcc-cross``, ``libgcc``, ``gcc-runtime``, and so forth).
.. _structure-meta:
The Metadata - ``meta/``
========================
As mentioned previously, `Metadata <#metadata>`__ is the core of the
Yocto Project. Metadata has several important subdivisions:
.. _structure-meta-classes:
``meta/classes/``
-----------------
This directory contains the ``*.bbclass`` files. Class files are used to
abstract common code so it can be reused by multiple packages. Every
package inherits the ``base.bbclass`` file. Examples of other important
classes are ``autotools.bbclass``, which in theory allows any
Autotools-enabled package to work with the Yocto Project with minimal
effort, and ``kernel.bbclass``, which contains common code and
functions for working with the Linux kernel. Functions like image
generation or packaging also have their specific class files such as
``image.bbclass``, ``rootfs_*.bbclass`` and ``package*.bbclass``.
For reference information on classes, see the
"`Classes <#ref-classes>`__" chapter.
.. _structure-meta-conf:
``meta/conf/``
--------------
This directory contains the core set of configuration files that start
from ``bitbake.conf`` and from which all other configuration files are
included. See the include statements at the end of the ``bitbake.conf``
file and you will note that even ``local.conf`` is loaded from there.
While ``bitbake.conf`` sets up the defaults, you can often override
these defaults in your ``local.conf`` file, in the machine
configuration file, or in the distribution configuration file.
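As a minimal sketch of that override order, a ``local.conf`` fragment might override defaults that ``bitbake.conf`` sets; the values shown here are illustrative, not recommendations:

```
# local.conf -- hypothetical overrides of defaults from bitbake.conf
DL_DIR ?= "${TOPDIR}/downloads"     # where fetched sources are stored
PACKAGE_CLASSES ?= "package_rpm"    # packaging backend to use
```

Because ``local.conf`` is included after the defaults are set, its assignments take effect for the whole build.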
.. _structure-meta-conf-machine:
``meta/conf/machine/``
----------------------
This directory contains all the machine configuration files. If you set
``MACHINE = "qemux86"``, the OpenEmbedded build system looks for a
``qemux86.conf`` file in this directory. The ``include`` directory
contains various data common to multiple machines. If you want to add
support for a new machine to the Yocto Project, look in this directory.
.. _structure-meta-conf-distro:
``meta/conf/distro/``
---------------------
The contents of this directory control any distribution-specific
configurations. For the Yocto Project, ``defaultsetup.conf`` is the
main file here. This directory includes the versions and the ``SRCDATE``
definitions for applications that are configured here. An example of an
alternative configuration is ``poky-bleeding.conf``, although this
file mainly inherits its configuration from Poky.
.. _structure-meta-conf-machine-sdk:
``meta/conf/machine-sdk/``
--------------------------
The OpenEmbedded build system searches this directory for configuration
files that correspond to the value of
```SDKMACHINE`` <#var-SDKMACHINE>`__. By default, the Yocto Project
ships 32-bit and 64-bit x86 configuration files that support some SDK
hosts. However, it is possible to extend that support to other SDK
hosts by adding additional configuration files in this subdirectory
within another layer.
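For example, a ``local.conf`` fragment selecting one of the shipped SDK host configurations might look like this sketch; the build system would then look for a matching file such as ``meta/conf/machine-sdk/x86_64.conf``:

```
# local.conf -- choose the SDK host architecture
SDKMACHINE ?= "x86_64"
```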
.. _structure-meta-files:
``meta/files/``
---------------
This directory contains common license files and several text files used
by the build system. The text files contain minimal device information
and lists of files and directories with known permissions.
.. _structure-meta-lib:
``meta/lib/``
-------------
This directory contains OpenEmbedded Python library code used during the
build process.
.. _structure-meta-recipes-bsp:
``meta/recipes-bsp/``
---------------------
This directory contains anything linking to specific hardware or
hardware configuration information such as "u-boot" and "grub".
.. _structure-meta-recipes-connectivity:
``meta/recipes-connectivity/``
------------------------------
This directory contains libraries and applications related to
communication with other devices.
.. _structure-meta-recipes-core:
``meta/recipes-core/``
----------------------
This directory contains what is needed to build a basic working Linux
image including commonly used dependencies.
.. _structure-meta-recipes-devtools:
``meta/recipes-devtools/``
--------------------------
This directory contains tools that are primarily used by the build
system. The tools, however, can also be used on targets.
.. _structure-meta-recipes-extended:
``meta/recipes-extended/``
--------------------------
This directory contains non-essential applications that add features
compared to the alternatives in core. You might need this directory for
full tool functionality or for Linux Standard Base (LSB) compliance.
.. _structure-meta-recipes-gnome:
``meta/recipes-gnome/``
-----------------------
This directory contains all things related to the GTK+ application
framework.
.. _structure-meta-recipes-graphics:
``meta/recipes-graphics/``
--------------------------
This directory contains X and other graphically related system
libraries.
.. _structure-meta-recipes-kernel:
``meta/recipes-kernel/``
------------------------
This directory contains the kernel and generic applications and
libraries that have strong kernel dependencies.
.. _structure-meta-recipes-lsb4:
``meta/recipes-lsb4/``
----------------------
This directory contains recipes specifically added to support the Linux
Standard Base (LSB) version 4.x.
.. _structure-meta-recipes-multimedia:
``meta/recipes-multimedia/``
----------------------------
This directory contains codecs and support utilities for audio, images
and video.
.. _structure-meta-recipes-rt:
``meta/recipes-rt/``
--------------------
This directory contains package and image recipes for using and testing
the ``PREEMPT_RT`` kernel.
.. _structure-meta-recipes-sato:
``meta/recipes-sato/``
----------------------
This directory contains the Sato demo/reference UI/UX and its associated
applications and configuration data.
.. _structure-meta-recipes-support:
``meta/recipes-support/``
-------------------------
This directory contains recipes used by other recipes, but that are not
directly included in images (i.e. dependencies of other recipes).
.. _structure-meta-site:
``meta/site/``
--------------
This directory contains a list of cached results for various
architectures. Because certain "autoconf" test results cannot be
determined when cross-compiling due to the tests not being able to run
on a live system, the information in this directory is passed to
"autoconf" for the various architectures.
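As an illustrative sketch, a site file entry pre-seeds an autoconf cache variable with a known result; the variable shown below is a common example, though the exact entries vary per architecture:

```
# Pre-seeded autoconf cache value for cross-compilation: the runtime
# test for mmap behavior cannot execute on the build host, so its
# result is supplied up front.
ac_cv_func_mmap_fixed_mapped=${ac_cv_func_mmap_fixed_mapped=yes}
```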
.. _structure-meta-recipes-txt:
``meta/recipes.txt``
--------------------
This file is a description of the contents of ``recipes-*``.

*******************
System Requirements
*******************
Welcome to the Yocto Project Reference Manual! This manual provides
reference information for the current release of the Yocto Project, and
is most effectively used after you have an understanding of the basics
of the Yocto Project. The manual is neither meant to be read as a
starting point to the Yocto Project, nor read from start to finish.
Rather, use this manual to find variable definitions, class
descriptions, and so forth as needed during the course of using the
Yocto Project.
For introductory information on the Yocto Project, see the `Yocto
Project Website <&YOCTO_HOME_URL;>`__ and the "`Yocto Project
Development
Environment <&YOCTO_DOCS_OM_URL;#overview-development-environment>`__"
chapter in the Yocto Project Overview and Concepts Manual.
If you want to use the Yocto Project to quickly build an image without
having to understand concepts, work through the `Yocto Project Quick
Build <&YOCTO_DOCS_BRIEF_URL;>`__ document. You can find "how-to"
information in the `Yocto Project Development Tasks
Manual <&YOCTO_DOCS_DEV_URL;>`__. You can find Yocto Project overview
and conceptual information in the `Yocto Project Overview and Concepts
Manual <&YOCTO_DOCS_OM_URL;>`__.
.. note::

   For more information about the Yocto Project Documentation set, see
   the "Links and Related Documentation" section.
.. _detailed-supported-distros:
Supported Linux Distributions
=============================
Currently, the Yocto Project is supported on the following
distributions:
.. note::
- Yocto Project releases are tested against the stable Linux
distributions in the following list. The Yocto Project should work
on other distributions but validation is not performed against
them.
- In particular, the Yocto Project does not support and currently
has no plans to support rolling-releases or development
distributions due to their constantly changing nature. We welcome
patches and bug reports, but keep in mind that our priority is on
the supported platforms listed below.
- You may use Windows Subsystem For Linux v2 to set up a build host
using Windows 10, but validation is not performed against build
hosts using WSLv2.
.. note::

   The Yocto Project is not compatible with WSLv1. It is compatible
   with, but neither officially supported nor validated with, WSLv2.
   If you still decide to use WSL, please upgrade to WSLv2.
- If you encounter problems, please go to `Yocto Project
Bugzilla <&YOCTO_BUGZILLA_URL;>`__ and submit a bug. We are
interested in hearing about your experience. For information on
how to submit a bug, see the Yocto Project `Bugzilla wiki
page <&YOCTO_WIKI_URL;/wiki/Bugzilla_Configuration_and_Bug_Tracking>`__
and the "`Submitting a Defect Against the Yocto
Project <&YOCTO_DOCS_DEV_URL;#submitting-a-defect-against-the-yocto-project>`__"
section in the Yocto Project Development Tasks Manual.
- Ubuntu 16.04 (LTS)
- Ubuntu 18.04 (LTS)
- Ubuntu 20.04
- Fedora 30
- Fedora 31
- Fedora 32
- CentOS 7.x
- CentOS 8.x
- Debian GNU/Linux 8.x (Jessie)
- Debian GNU/Linux 9.x (Stretch)
- Debian GNU/Linux 10.x (Buster)
- OpenSUSE Leap 15.1
.. note::
While the Yocto Project Team attempts to ensure all Yocto Project
releases are one hundred percent compatible with each officially
supported Linux distribution, instances might exist where you
encounter a problem while using the Yocto Project on a specific
distribution.
Required Packages for the Build Host
====================================
The list of packages you need on the host development system can be
large when covering all build scenarios using the Yocto Project. This
section describes required packages according to Linux distribution and
function.
.. _ubuntu-packages:
Ubuntu and Debian
-----------------
The following list shows the required packages by function given a
supported Ubuntu or Debian Linux distribution:
.. note::

   - If your build system has the ``oss4-dev`` package installed, you
     might experience QEMU build failures due to the package installing
     its own custom ``/usr/include/linux/soundcard.h`` on the Debian
     system. If you run into this situation, either of the following
     solutions exist:
     ::

        $ sudo apt-get build-dep qemu
        $ sudo apt-get remove oss4-dev

   - For Debian-8, ``python3-git`` and ``pylint3`` are no longer
     available via ``apt-get``:
     ::

        $ sudo pip3 install GitPython pylint==1.9.5
- *Essentials:* Packages needed to build an image on a headless
  system:
  ::

     $ sudo apt-get install UBUNTU_HOST_PACKAGES_ESSENTIAL

- *Documentation:* Packages needed if you are going to build out the
  Yocto Project documentation manuals:
  ::

     $ sudo apt-get install make xsltproc docbook-utils fop dblatex xmlto
Fedora Packages
---------------
The following list shows the required packages by function given a
supported Fedora Linux distribution:
- *Essentials:* Packages needed to build an image for a headless
  system:
  ::

     $ sudo dnf install FEDORA_HOST_PACKAGES_ESSENTIAL

- *Documentation:* Packages needed if you are going to build out the
  Yocto Project documentation manuals:
  ::

     $ sudo dnf install docbook-style-dsssl docbook-style-xsl \
       docbook-dtds docbook-utils fop libxslt dblatex xmlto
openSUSE Packages
-----------------
The following list shows the required packages by function given a
supported openSUSE Linux distribution:
- *Essentials:* Packages needed to build an image for a headless
  system:
  ::

     $ sudo zypper install OPENSUSE_HOST_PACKAGES_ESSENTIAL

- *Documentation:* Packages needed if you are going to build out the
  Yocto Project documentation manuals:
  ::

     $ sudo zypper install dblatex xmlto
CentOS-7 Packages
-----------------
The following list shows the required packages by function given a
supported CentOS-7 Linux distribution:
- *Essentials:* Packages needed to build an image for a headless
  system:
  ::

     $ sudo yum install CENTOS7_HOST_PACKAGES_ESSENTIAL
.. note::
- Extra Packages for Enterprise Linux (i.e. ``epel-release``) is
a collection of packages from Fedora built on RHEL/CentOS for
easy installation of packages not included in enterprise Linux
by default. You need to install these packages separately.
- The ``makecache`` command consumes additional Metadata from
``epel-release``.
- *Documentation:* Packages needed if you are going to build out the
  Yocto Project documentation manuals:
  ::

     $ sudo yum install docbook-style-dsssl docbook-style-xsl \
       docbook-dtds docbook-utils fop libxslt dblatex xmlto
CentOS-8 Packages
-----------------
The following list shows the required packages by function given a
supported CentOS-8 Linux distribution:
- *Essentials:* Packages needed to build an image for a headless
  system:
  ::

     $ sudo dnf install CENTOS8_HOST_PACKAGES_ESSENTIAL
.. note::
- Extra Packages for Enterprise Linux (i.e. ``epel-release``) is
a collection of packages from Fedora built on RHEL/CentOS for
easy installation of packages not included in enterprise Linux
by default. You need to install these packages separately.
- The ``PowerTools`` repo provides additional packages such as
``rpcgen`` and ``texinfo``.
- The ``makecache`` command consumes additional Metadata from
``epel-release``.
- *Documentation:* Packages needed if you are going to build out the
  Yocto Project documentation manuals:
  ::

     $ sudo dnf install docbook-style-dsssl docbook-style-xsl \
       docbook-dtds docbook-utils fop libxslt dblatex xmlto
Required Git, tar, Python and gcc Versions
==========================================
In order to use the build system, your host development system must meet
the following version requirements for Git, tar, and Python:
- Git 1.8.3.1 or greater
- tar 1.28 or greater
- Python 3.5.0 or greater
If your host development system does not meet all these requirements,
you can resolve this by installing a ``buildtools`` tarball that
contains these tools. You can get the tarball one of two ways: download
a pre-built tarball or use BitBake to build the tarball.
In addition, your host development system must meet the following
version requirement for gcc:
- gcc 5.0 or greater
If your host development system does not meet this requirement, you can
resolve this by installing a ``buildtools-extended`` tarball that
contains additional tools, the equivalent of ``buildtools-essential``.
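One way to check version strings against these minimums is a small shell helper built on ``sort -V``; this is a sketch, not part of the Yocto Project tooling, and the version values below are hypothetical examples:

```shell
# version_ge VER MIN: succeed (exit 0) when VER >= MIN, comparing with
# GNU sort's version (-V) ordering.
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Example checks against the minimums listed above (values hypothetical)
version_ge 2.25.1 1.8.3.1 && echo "Git OK"
version_ge 1.30   1.28    && echo "tar OK"
version_ge 3.8.2  3.5.0   && echo "Python OK"
```

On a real host you would substitute live queries for the literal values, e.g. ``version_ge "$(git --version | awk '{print $3}')" 1.8.3.1``.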
Installing a Pre-Built ``buildtools`` Tarball with ``install-buildtools`` script
--------------------------------------------------------------------------------
The ``install-buildtools`` script is the easiest of the three methods by
which you can get these tools. It downloads a pre-built buildtools
installer and automatically installs the tools for you:
1. Execute the ``install-buildtools`` script. Here is an example:
   ::

      $ cd poky
      $ scripts/install-buildtools --without-extended-buildtools \
        --base-url YOCTO_DL_URL/releases/yocto \
        --release yocto-DISTRO \
        --installer-version DISTRO

   During execution, the buildtools tarball will be downloaded, the
   checksum of the download will be verified, the installer will be run
   for you, and some basic checks will be run to make sure the
   installation is functional.

   To avoid the need for ``sudo`` privileges, the ``install-buildtools``
   script will by default tell the installer to install in:
   ::

      /path/to/poky/buildtools

   If your host development system needs the additional tools provided
   in the ``buildtools-extended`` tarball, you can instead execute the
   ``install-buildtools`` script with the default parameters:
   ::

      $ cd poky
      $ scripts/install-buildtools
2. Source the tools environment setup script by using a command like
   the following:
   ::

      $ source /path/to/poky/buildtools/environment-setup-x86_64-pokysdk-linux

   Of course, you need to supply your installation directory and be
   sure to use the right file (i.e. i586 or x86_64).

   After you have sourced the setup script, the tools are added to
   ``PATH`` and any other environment variables required to run the
   tools are initialized. The results are working versions of
   Git, tar, Python and ``chrpath``. And in the case of the
   ``buildtools-extended`` tarball, additional working versions of tools
   including ``gcc``, ``make`` and the other tools included in
   ``packagegroup-core-buildessential``.
Downloading a Pre-Built ``buildtools`` Tarball
----------------------------------------------
Downloading and running a pre-built buildtools installer yourself is
the next easiest method by which you can get these tools:
1. Locate and download the ``*.sh`` file at
   ` <&YOCTO_RELEASE_DL_URL;/buildtools/>`__.

2. Execute the installation script. Here is an example for the
   traditional installer:
   ::

      $ sh ~/Downloads/x86_64-buildtools-nativesdk-standalone-DISTRO.sh

   Here is an example for the extended installer:
   ::

      $ sh ~/Downloads/x86_64-buildtools-extended-nativesdk-standalone-DISTRO.sh

   During execution, a prompt appears that allows you to choose the
   installation directory. For example, you could choose the following:
   ::

      /home/your-username/buildtools

3. Source the tools environment setup script by using a command like
   the following:
   ::

      $ source /home/your_username/buildtools/environment-setup-i586-poky-linux

   Of course, you need to supply your installation directory and be
   sure to use the right file (i.e. i586 or x86_64).

   After you have sourced the setup script, the tools are added to
   ``PATH`` and any other environment variables required to run the
   tools are initialized. The results are working versions of
   Git, tar, Python and ``chrpath``. And in the case of the
   ``buildtools-extended`` tarball, additional working versions of tools
   including ``gcc``, ``make`` and the other tools included in
   ``packagegroup-core-buildessential``.
Building Your Own ``buildtools`` Tarball
----------------------------------------
Building and running your own buildtools installer applies only when you
have a build host that can already run BitBake. In this case, you use
that machine to build the ``.sh`` file and then take steps to transfer
and run it on a machine that does not meet the minimal Git, tar, and
Python (or gcc) requirements.
Here are the steps to take to build and run your own buildtools
installer:
1. On the machine that is able to run BitBake, be sure you have set up
   your build environment with the setup script
   (```oe-init-build-env`` <#structure-core-script>`__).
2. Run the BitBake command to build the tarball:
   ::

      $ bitbake buildtools-tarball

   or run the BitBake command to build the extended tarball:
   ::

      $ bitbake buildtools-extended-tarball
   .. note::

      The ``SDKMACHINE`` variable in your ``local.conf`` file
      determines whether you build tools for a 32-bit or 64-bit system.
Once the build completes, you can find the ``.sh`` file that installs
the tools in the ``tmp/deploy/sdk`` subdirectory of the `Build
Directory <#build-directory>`__. The installer file has the string
"buildtools" (or "buildtools-extended") in the name.
3. Transfer the ``.sh`` file from the build host to the machine that
does not meet the Git, tar, or Python (or gcc) requirements.
4. On the machine that does not meet the requirements, run the ``.sh``
   file to install the tools. Here is an example for the traditional
   installer:
   ::

      $ sh ~/Downloads/x86_64-buildtools-nativesdk-standalone-DISTRO.sh

   Here is an example for the extended installer:
   ::

      $ sh ~/Downloads/x86_64-buildtools-extended-nativesdk-standalone-DISTRO.sh

   During execution, a prompt appears that allows you to choose the
   installation directory. For example, you could choose the following:
   ::

      /home/your_username/buildtools
5. Source the tools environment setup script by using a command like
   the following:
   ::

      $ source /home/your_username/buildtools/environment-setup-x86_64-poky-linux

   Of course, you need to supply your installation directory and be
   sure to use the right file (i.e. i586 or x86_64).

   After you have sourced the setup script, the tools are added to
   ``PATH`` and any other environment variables required to run the
   tools are initialized. The results are working versions of
   Git, tar, Python and ``chrpath``. And in the case of the
   ``buildtools-extended`` tarball, additional working versions of tools
   including ``gcc``, ``make`` and the other tools included in
   ``packagegroup-core-buildessential``.

*****
Tasks
*****
Tasks are units of execution for BitBake. Recipes (``.bb`` files) use
tasks to complete configuring, compiling, and packaging software. This
chapter provides a reference of the tasks defined in the OpenEmbedded
build system.
Normal Recipe Build Tasks
=========================
The following sections describe normal tasks associated with building a
recipe. For more information on tasks and dependencies, see the
"`Tasks <&YOCTO_DOCS_BB_URL;#tasks>`__" and
"`Dependencies <&YOCTO_DOCS_BB_URL;#dependencies>`__" sections in the
BitBake User Manual.
.. _ref-tasks-build:
``do_build``
------------
The default task for all recipes. This task depends on all other normal
tasks required to build a recipe.
.. _ref-tasks-compile:
``do_compile``
--------------
Compiles the source code. This task runs with the current working
directory set to ``${``\ ```B`` <#var-B>`__\ ``}``.
The default behavior of this task is to run the ``oe_runmake`` function
if a makefile (``Makefile``, ``makefile``, or ``GNUmakefile``) is found.
If no such file is found, the ``do_compile`` task does nothing.
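In plain shell, that default behavior amounts to the following check. This is a sketch of the effect of the default ``do_compile``, not the actual ``base.bbclass`` implementation, and ``oe_runmake`` is only named in a message rather than run:

```shell
# Sketch: run make (via oe_runmake) only when a recognized makefile
# exists in the current (build) directory; otherwise do nothing.
do_compile_sketch() {
    if [ -e Makefile ] || [ -e makefile ] || [ -e GNUmakefile ]; then
        echo "would run oe_runmake"
    else
        echo "no makefile found, doing nothing"
    fi
}

cd "$(mktemp -d)"
do_compile_sketch    # no makefile yet in the fresh directory
touch Makefile
do_compile_sketch    # now a makefile exists
```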
.. _ref-tasks-compile_ptest_base:
``do_compile_ptest_base``
-------------------------
Compiles the runtime test suite included in the software being built.
.. _ref-tasks-configure:
``do_configure``
----------------
Configures the source by enabling and disabling any build-time and
configuration options for the software being built. The task runs with
the current working directory set to ``${``\ ```B`` <#var-B>`__\ ``}``.
The default behavior of this task is to run ``oe_runmake clean`` if a
makefile (``Makefile``, ``makefile``, or ``GNUmakefile``) is found and
```CLEANBROKEN`` <#var-CLEANBROKEN>`__ is not set to "1". If no such
file is found or the ``CLEANBROKEN`` variable is set to "1", the
``do_configure`` task does nothing.
.. _ref-tasks-configure_ptest_base:
``do_configure_ptest_base``
---------------------------
Configures the runtime test suite included in the software being built.
.. _ref-tasks-deploy:
``do_deploy``
-------------
Writes output files that are to be deployed to
``${``\ ```DEPLOY_DIR_IMAGE`` <#var-DEPLOY_DIR_IMAGE>`__\ ``}``. The
task runs with the current working directory set to
``${``\ ```B`` <#var-B>`__\ ``}``.
Recipes implementing this task should inherit the
```deploy`` <#ref-classes-deploy>`__ class and should write the output
to ``${``\ ```DEPLOYDIR`` <#var-DEPLOYDIR>`__\ ``}``, which is not to be
confused with ``${DEPLOY_DIR}``. The ``deploy`` class sets up
``do_deploy`` as a shared state (sstate) task that can be accelerated
through sstate use. The sstate mechanism takes care of copying the
output from ``${DEPLOYDIR}`` to ``${DEPLOY_DIR_IMAGE}``.
.. note::

   Do not write the output directly to ``${DEPLOY_DIR_IMAGE}``, as this
   causes the sstate mechanism to malfunction.
The ``do_deploy`` task is not added as a task by default and
consequently needs to be added manually. If you want the task to run
after ```do_compile`` <#ref-tasks-compile>`__, you can add it by doing
the following:
::

   addtask deploy after do_compile

Adding ``do_deploy`` after other tasks works the same way.
.. note::

   You do not need to add ``before do_build`` to the ``addtask``
   command (though it is harmless), because the ``base`` class contains
   the following:

   ::

      do_build[recrdeptask] += "do_deploy"

   See the "Dependencies" section in the BitBake User Manual for more
   information.
If the ``do_deploy`` task re-executes, any previous output is removed
(i.e. "cleaned").
.. _ref-tasks-fetch:
``do_fetch``
------------
Fetches the source code. This task uses the
```SRC_URI`` <#var-SRC_URI>`__ variable and the argument's prefix to
determine the correct `fetcher <&YOCTO_DOCS_BB_URL;#bb-fetchers>`__
module.
.. _ref-tasks-image:
``do_image``
------------
Starts the image generation process. The ``do_image`` task runs after
the OpenEmbedded build system has run the
```do_rootfs`` <#ref-tasks-rootfs>`__ task during which packages are
identified for installation into the image and the root filesystem is
created, complete with post-processing.
The ``do_image`` task performs pre-processing on the image through the
```IMAGE_PREPROCESS_COMMAND`` <#var-IMAGE_PREPROCESS_COMMAND>`__ and
dynamically generates supporting ``do_image_*`` tasks as needed.
For more information on image creation, see the "`Image
Generation <&YOCTO_DOCS_OM_URL;#image-generation-dev-environment>`__"
section in the Yocto Project Overview and Concepts Manual.
.. _ref-tasks-image-complete:
``do_image_complete``
---------------------
Completes the image generation process. The ``do_image_complete`` task
runs after the OpenEmbedded build system has run the
```do_image`` <#ref-tasks-image>`__ task during which image
pre-processing occurs and through dynamically generated ``do_image_*``
tasks the image is constructed.
The ``do_image_complete`` task performs post-processing on the image
through the
```IMAGE_POSTPROCESS_COMMAND`` <#var-IMAGE_POSTPROCESS_COMMAND>`__.
For more information on image creation, see the "`Image
Generation <&YOCTO_DOCS_OM_URL;#image-generation-dev-environment>`__"
section in the Yocto Project Overview and Concepts Manual.
.. _ref-tasks-install:
``do_install``
--------------
Copies files that are to be packaged into the holding area
``${``\ ```D`` <#var-D>`__\ ``}``. This task runs with the current
working directory set to ``${``\ ```B`` <#var-B>`__\ ``}``, which is the
compilation directory. The ``do_install`` task, as well as other tasks
that either directly or indirectly depend on the installed files (e.g.
```do_package`` <#ref-tasks-package>`__,
```do_package_write_*`` <#ref-tasks-package_write_deb>`__, and
```do_rootfs`` <#ref-tasks-rootfs>`__), run under
`fakeroot <&YOCTO_DOCS_OM_URL;#fakeroot-and-pseudo>`__.
.. note::
When installing files, be careful not to set the owner and group IDs
of the installed files to unintended values. Some methods of copying
files, notably when using the recursive ``cp`` command, can preserve
the UID and/or GID of the original file, which is usually not what
you want. The
```host-user-contaminated`` <#insane-host-user-contaminated>`__ QA
check checks for files that probably have the wrong ownership.
Safe methods for installing files include the following:
- The ``install`` utility. This utility is the preferred method.
- The ``cp`` command with the "--no-preserve=ownership" option.
- The ``tar`` command with the "--no-same-owner" option. See the
``bin_package.bbclass`` file in the ``meta/classes`` directory of
the `Source Directory <#source-directory>`__ for an example.
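A hedged example of a ``do_install`` that follows the first recommendation; the file name and destination here are hypothetical, and ``${D}``, ``${S}`` and ``${bindir}`` are set by the build system in a real recipe:

```shell
# Hypothetical do_install installing a single program with explicit
# permissions; install sets the mode itself and never preserves the
# build user's UID/GID, unlike recursive cp.
do_install() {
    install -d "${D}${bindir}"
    install -m 0755 "${S}/myprogram" "${D}${bindir}/myprogram"
}
```

``install -d`` creates the destination directory inside the holding area, and ``-m 0755`` sets the mode explicitly rather than copying it from the source file.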
.. _ref-tasks-install_ptest_base:
``do_install_ptest_base``
-------------------------
Copies the runtime test suite files from the compilation directory to a
holding area.
.. _ref-tasks-package:
``do_package``
--------------
Analyzes the content of the holding area
``${``\ ```D`` <#var-D>`__\ ``}`` and splits the content into subsets
based on available packages and files. This task makes use of the
```PACKAGES`` <#var-PACKAGES>`__ and ```FILES`` <#var-FILES>`__
variables.
The ``do_package`` task, in conjunction with the
```do_packagedata`` <#ref-tasks-packagedata>`__ task, also saves some
important package metadata. For additional information, see the
```PKGDESTWORK`` <#var-PKGDESTWORK>`__ variable and the "`Automatically
Added Runtime
Dependencies <&YOCTO_DOCS_OM_URL;#automatically-added-runtime-dependencies>`__"
section in the Yocto Project Overview and Concepts Manual.
.. _ref-tasks-package_qa:
``do_package_qa``
-----------------
Runs QA checks on packaged files. For more information on these checks,
see the ```insane`` <#ref-classes-insane>`__ class.
.. _ref-tasks-package_write_deb:
``do_package_write_deb``
------------------------
Creates Debian packages (i.e. ``*.deb`` files) and places them in the
``${``\ ```DEPLOY_DIR_DEB`` <#var-DEPLOY_DIR_DEB>`__\ ``}`` directory in
the package feeds area. For more information, see the "`Package
Feeds <&YOCTO_DOCS_OM_URL;#package-feeds-dev-environment>`__" section in
the Yocto Project Overview and Concepts Manual.
.. _ref-tasks-package_write_ipk:
``do_package_write_ipk``
------------------------
Creates IPK packages (i.e. ``*.ipk`` files) and places them in the
``${``\ ```DEPLOY_DIR_IPK`` <#var-DEPLOY_DIR_IPK>`__\ ``}`` directory in
the package feeds area. For more information, see the "`Package
Feeds <&YOCTO_DOCS_OM_URL;#package-feeds-dev-environment>`__" section in
the Yocto Project Overview and Concepts Manual.
.. _ref-tasks-package_write_rpm:
``do_package_write_rpm``
------------------------
Creates RPM packages (i.e. ``*.rpm`` files) and places them in the
``${``\ ```DEPLOY_DIR_RPM`` <#var-DEPLOY_DIR_RPM>`__\ ``}`` directory in
the package feeds area. For more information, see the "`Package
Feeds <&YOCTO_DOCS_OM_URL;#package-feeds-dev-environment>`__" section in
the Yocto Project Overview and Concepts Manual.
.. _ref-tasks-package_write_tar:
``do_package_write_tar``
------------------------
Creates tarballs and places them in the
``${``\ ```DEPLOY_DIR_TAR`` <#var-DEPLOY_DIR_TAR>`__\ ``}`` directory in
the package feeds area. For more information, see the "`Package
Feeds <&YOCTO_DOCS_OM_URL;#package-feeds-dev-environment>`__" section in
the Yocto Project Overview and Concepts Manual.
.. _ref-tasks-packagedata:
``do_packagedata``
------------------
Saves package metadata generated by the
```do_package`` <#ref-tasks-package>`__ task in
```PKGDATA_DIR`` <#var-PKGDATA_DIR>`__ to make it available globally.
.. _ref-tasks-patch:
``do_patch``
------------
Locates patch files and applies them to the source code.
After fetching and unpacking source files, the build system uses the
recipe's ```SRC_URI`` <&YOCTO_DOCS_REF_URL;#var-SRC_URI>`__ statements
to locate and apply patch files to the source code.
.. note::

   The build system uses the ``FILESPATH`` variable to determine the
   default set of directories when searching for patches.
Patch files, by default, are ``*.patch`` and ``*.diff`` files created
and kept in a subdirectory of the directory holding the recipe file. For
example, consider the
```bluez5`` <&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/tree/meta/recipes-connectivity/bluez5>`__
recipe from the OE-Core layer (i.e. ``poky/meta``):
::

   poky/meta/recipes-connectivity/bluez5

This recipe has two patch files located here:
::

   poky/meta/recipes-connectivity/bluez5/bluez5

In the ``bluez5`` recipe, the ``SRC_URI`` statements point to the source
and patch files needed to build the package.
.. note::

   In the case of the ``bluez5_5.48.bb`` recipe, the ``SRC_URI``
   statements are from an include file ``bluez5.inc``.
As mentioned earlier, the build system treats files whose file types are
``.patch`` and ``.diff`` as patch files. However, you can use the
"apply=yes" parameter with the ``SRC_URI`` statement to indicate any
file as a patch file: SRC_URI = " \\ git://path_to_repo/some_package \\
file://file;apply=yes \\ "
Conversely, if you have a directory full of patch files and you want to
exclude some so that the ``do_patch`` task does not apply them during
the patch phase, you can use the "apply=no" parameter with the
``SRC_URI`` statement::

   SRC_URI = " \
       git://path_to_repo/some_package \
       file://path_to_lots_of_patch_files \
       file://path_to_lots_of_patch_files/patch_file5;apply=no \
       "

In the previous example, assuming all the files in the directory holding
the patch files end with either ``.patch`` or ``.diff``, every file
would be applied as a patch by default except for the ``patch_file5``
patch.
You can find out more about the patching process in the
"`Patching <&YOCTO_DOCS_OM_URL;#patching-dev-environment>`__" section in
the Yocto Project Overview and Concepts Manual and the "`Patching
Code <&YOCTO_DOCS_DEV_URL;#new-recipe-patching-code>`__" section in the
Yocto Project Development Tasks Manual.
.. _ref-tasks-populate_lic:
``do_populate_lic``
-------------------
Writes license information for the recipe that is collected later when
the image is constructed.
.. _ref-tasks-populate_sdk:
``do_populate_sdk``
-------------------
Creates the file and directory structure for an installable SDK. See the
"`SDK
Generation <&YOCTO_DOCS_OM_URL;#sdk-generation-dev-environment>`__"
section in the Yocto Project Overview and Concepts Manual for more
information.
.. _ref-tasks-populate_sysroot:
``do_populate_sysroot``
-----------------------
Stages (copies) a subset of the files installed by the
```do_install`` <#ref-tasks-install>`__ task into the appropriate
sysroot. For information on how to access these files from other
recipes, see the ```STAGING_DIR*`` <#var-STAGING_DIR_HOST>`__ variables.
Directories that would typically not be needed by other recipes at build
time (e.g. ``/etc``) are not copied by default.
For information on what directories are copied by default, see the
```SYSROOT_DIRS*`` <#var-SYSROOT_DIRS>`__ variables. You can change
these variables inside your recipe if you need to make additional (or
fewer) directories available to other recipes at build time.
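For example, a recipe that needs to stage files installed under a
directory that is not part of the default set could extend the list (the
directory name here is purely illustrative)::

   SYSROOT_DIRS += "/opt"

This is a sketch, not a recommendation; only add directories that other
recipes genuinely need at build time.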
The ``do_populate_sysroot`` task is a shared state (sstate) task, which
means that the task can be accelerated through sstate use. Realize also
that if the task is re-executed, any previous output is removed (i.e.
"cleaned").
.. _ref-tasks-prepare_recipe_sysroot:
``do_prepare_recipe_sysroot``
-----------------------------
Installs the files into the individual recipe specific sysroots (i.e.
``recipe-sysroot`` and ``recipe-sysroot-native`` under
``${``\ ```WORKDIR`` <#var-WORKDIR>`__\ ``}`` based upon the
dependencies specified by ```DEPENDS`` <#var-DEPENDS>`__). See the
"```staging`` <#ref-classes-staging>`__" class for more information.
.. _ref-tasks-rm_work:
``do_rm_work``
--------------
Removes work files after the OpenEmbedded build system has finished with
them. You can learn more by looking at the
"```rm_work.bbclass`` <#ref-classes-rm-work>`__" section.
.. _ref-tasks-unpack:
``do_unpack``
-------------
Unpacks the source code into a working directory pointed to by
``${``\ ```WORKDIR`` <#var-WORKDIR>`__\ ``}``. The ```S`` <#var-S>`__
variable also plays a role in where unpacked source files ultimately
reside. For more information on how source files are unpacked, see the
"`Source
Fetching <&YOCTO_DOCS_OM_URL;#source-fetching-dev-environment>`__"
section in the Yocto Project Overview and Concepts Manual and also see
the ``WORKDIR`` and ``S`` variable descriptions.
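As a brief sketch of the relationship between ``WORKDIR`` and ``S``, a
recipe that fetches from Git commonly points ``S`` at the checkout
created under the work directory (the repository URL is a placeholder)::

   SRC_URI = "git://path_to_repo/some_package"
   S = "${WORKDIR}/git"

Where the unpacked source actually lands depends on the fetcher used in
``SRC_URI``.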
Manually Called Tasks
=====================
These tasks are typically manually triggered (e.g. by using the
``bitbake -c`` command-line option):
.. _ref-tasks-checkpkg:
``do_checkpkg``
---------------
Provides information about the recipe including its upstream version and
status. The upstream version and status reveal whether or not a version
of the recipe exists upstream, and whether its status is not updated,
updated, or unknown.
To check the upstream version and status of a recipe, use the following
devtool commands::

   $ devtool latest-version
   $ devtool check-upgrade-status

See the "```devtool`` Quick
Reference <#ref-devtool-reference>`__" chapter for more information on
``devtool``. See the "`Checking on the Upgrade Status of a
Recipe <&YOCTO_DOCS_REF_URL;#devtool-checking-on-the-upgrade-status-of-a-recipe>`__"
section for information on checking the upgrade status of a recipe.

To run the ``checkpkg`` task, use the ``bitbake`` command with the
"-c" option and task name::

   $ bitbake core-image-minimal -c checkpkg

By default, the results are stored in ```$LOG_DIR`` <#var-LOG_DIR>`__
(e.g. ``$BUILD_DIR/tmp/log``).
.. _ref-tasks-checkuri:
``do_checkuri``
---------------
Validates the ```SRC_URI`` <#var-SRC_URI>`__ value.
.. _ref-tasks-clean:
``do_clean``
------------
Removes all output files for a target from the
```do_unpack`` <#ref-tasks-unpack>`__ task forward (i.e. ``do_unpack``,
```do_configure`` <#ref-tasks-configure>`__,
```do_compile`` <#ref-tasks-compile>`__,
```do_install`` <#ref-tasks-install>`__, and
```do_package`` <#ref-tasks-package>`__).
You can run this task using BitBake as follows::

   $ bitbake -c clean recipe
Running this task does not remove the
`sstate <&YOCTO_DOCS_OM_URL;#shared-state-cache>`__ cache files.
Consequently, if no changes have been made and the recipe is rebuilt
after cleaning, output files are simply restored from the sstate cache.
If you want to remove the sstate cache files for the recipe, you need to
use the ```do_cleansstate`` <#ref-tasks-cleansstate>`__ task instead
(i.e. ``bitbake -c cleansstate recipe``).
.. _ref-tasks-cleanall:
``do_cleanall``
---------------
Removes all output files, shared state
(`sstate <&YOCTO_DOCS_OM_URL;#shared-state-cache>`__) cache, and
downloaded source files for a target (i.e. the contents of
```DL_DIR`` <#var-DL_DIR>`__). Essentially, the ``do_cleanall`` task is
identical to the ```do_cleansstate`` <#ref-tasks-cleansstate>`__ task
with the added removal of downloaded source files.
You can run this task using BitBake as follows::

   $ bitbake -c cleanall recipe
You would not normally use the ``cleanall`` task. Do so only if you want
to start fresh with the ```do_fetch`` <#ref-tasks-fetch>`__ task.
.. _ref-tasks-cleansstate:
``do_cleansstate``
------------------
Removes all output files and shared state
(`sstate <&YOCTO_DOCS_OM_URL;#shared-state-cache>`__) cache for a
target. Essentially, the ``do_cleansstate`` task is identical to the
```do_clean`` <#ref-tasks-clean>`__ task with the added removal of
shared state (`sstate <&YOCTO_DOCS_OM_URL;#shared-state-cache>`__)
cache.
You can run this task using BitBake as follows::

   $ bitbake -c cleansstate recipe
When you run the ``do_cleansstate`` task, the OpenEmbedded build system
no longer uses any sstate. Consequently, building the recipe from
scratch is guaranteed.
.. note::

   The ``do_cleansstate`` task cannot remove sstate from a remote
   sstate mirror. If you need to build a target from scratch using
   remote mirrors, use the "-f" option as follows::

      $ bitbake -f -c do_cleansstate target
.. _ref-tasks-devpyshell:
``do_devpyshell``
-----------------
Starts a shell in which an interactive Python interpreter allows you to
interact with the BitBake build environment. From within this shell, you
can directly examine and set bits from the data store and execute
functions as if within the BitBake environment. See the "`Using a
Development Python
Shell <&YOCTO_DOCS_DEV_URL;#platdev-appdev-devpyshell>`__" section in
the Yocto Project Development Tasks Manual for more information about
using ``devpyshell``.
.. _ref-tasks-devshell:
``do_devshell``
---------------
Starts a shell whose environment is set up for development, debugging,
or both. See the "`Using a Development
Shell <&YOCTO_DOCS_DEV_URL;#platdev-appdev-devshell>`__" section in the
Yocto Project Development Tasks Manual for more information about using
``devshell``.
.. _ref-tasks-listtasks:
``do_listtasks``
----------------
Lists all defined tasks for a target.
.. _ref-tasks-package_index:
``do_package_index``
--------------------
Creates or updates the index in the `Package
Feeds <&YOCTO_DOCS_OM_URL;#package-feeds-dev-environment>`__ area.
.. note::

   This task is not triggered with the ``bitbake -c`` command-line
   option as are the other tasks in this section. Because this task is
   specifically for the ``package-index`` recipe, you run it using
   ``bitbake package-index``.
Image-Related Tasks
===================
The following tasks are applicable to image recipes.
.. _ref-tasks-bootimg:
``do_bootimg``
--------------
Creates a bootable live image. See the
```IMAGE_FSTYPES`` <#var-IMAGE_FSTYPES>`__ variable for additional
information on live image types.
.. _ref-tasks-bundle_initramfs:
``do_bundle_initramfs``
-----------------------
Combines an initial RAM disk (initramfs) image and kernel together to
form a single image. The
```CONFIG_INITRAMFS_SOURCE`` <#var-CONFIG_INITRAMFS_SOURCE>`__ variable
has some more information about these types of images.
.. _ref-tasks-rootfs:
``do_rootfs``
-------------
Creates the root filesystem (file and directory structure) for an image.
See the "`Image
Generation <&YOCTO_DOCS_OM_URL;#image-generation-dev-environment>`__"
section in the Yocto Project Overview and Concepts Manual for more
information on how the root filesystem is created.
.. _ref-tasks-testimage:
``do_testimage``
----------------
Boots an image and performs runtime tests within the image. For
information on automatically testing images, see the "`Performing
Automated Runtime
Testing <&YOCTO_DOCS_DEV_URL;#performing-automated-runtime-testing>`__"
section in the Yocto Project Development Tasks Manual.
.. _ref-tasks-testimage_auto:
``do_testimage_auto``
---------------------
Boots an image and performs runtime tests within the image immediately
after it has been built. This task is enabled when you set
```TESTIMAGE_AUTO`` <#var-TESTIMAGE_AUTO>`__ equal to "1".
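As a minimal ``local.conf`` sketch, assuming image testing itself is
enabled through the ``testimage`` class::

   INHERIT += "testimage"
   TESTIMAGE_AUTO = "1"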
For information on automatically testing images, see the "`Performing
Automated Runtime
Testing <&YOCTO_DOCS_DEV_URL;#performing-automated-runtime-testing>`__"
section in the Yocto Project Development Tasks Manual.
Kernel-Related Tasks
====================
The following tasks are applicable to kernel recipes. Some of these
tasks (e.g. the ```do_menuconfig`` <#ref-tasks-menuconfig>`__ task) are
also applicable to recipes that use Linux kernel style configuration
such as the BusyBox recipe.
.. _ref-tasks-compile_kernelmodules:
``do_compile_kernelmodules``
----------------------------
Runs the step that builds the kernel modules (if needed). Building a
kernel consists of two steps: 1) the kernel (``vmlinux``) is built, and
2) the modules are built (i.e. ``make modules``).
.. _ref-tasks-diffconfig:
``do_diffconfig``
-----------------
When invoked by the user, this task creates a file containing the
differences between the original config as produced by the
```do_kernel_configme`` <#ref-tasks-kernel_configme>`__ task and the
changes made by the user with other methods (i.e. using
```do_kernel_menuconfig`` <#ref-tasks-kernel_menuconfig>`__). Once the
file of differences is created, it can be used to create a config
fragment that only contains the differences. You can invoke this task
from the command line as follows::

   $ bitbake linux-yocto -c diffconfig
For more information, see the "`Creating Configuration
Fragments <&YOCTO_DOCS_KERNEL_DEV_URL;#creating-config-fragments>`__"
section in the Yocto Project Linux Kernel Development Manual.
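The resulting fragment is simply a list of kernel configuration options;
a hypothetical example might look like the following (the option names
are made up)::

   CONFIG_EXAMPLE_FEATURE=y
   CONFIG_EXAMPLE_DRIVER=m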
.. _ref-tasks-kernel_checkout:
``do_kernel_checkout``
----------------------
Converts the newly unpacked kernel source into a form with which the
OpenEmbedded build system can work. Because the kernel source can be
fetched in several different ways, the ``do_kernel_checkout`` task makes
sure that subsequent tasks are given a clean working tree copy of the
kernel with the correct branches checked out.
.. _ref-tasks-kernel_configcheck:
``do_kernel_configcheck``
-------------------------
Validates the configuration produced by the
```do_kernel_menuconfig`` <#ref-tasks-kernel_menuconfig>`__ task. The
``do_kernel_configcheck`` task produces warnings when a requested
configuration does not appear in the final ``.config`` file or when you
override a policy configuration in a hardware configuration fragment.
You can run this task explicitly and view the output by using the
following command::

   $ bitbake linux-yocto -c kernel_configcheck -f

For
more information, see the "`Validating
Configuration <&YOCTO_DOCS_KERNEL_DEV_URL;#validating-configuration>`__"
section in the Yocto Project Linux Kernel Development Manual.
.. _ref-tasks-kernel_configme:
``do_kernel_configme``
----------------------
After the kernel is patched by the ```do_patch`` <#ref-tasks-patch>`__
task, the ``do_kernel_configme`` task assembles and merges all the
kernel config fragments into a merged configuration that can then be
passed to the kernel configuration phase proper. This is also the time
during which user-specified defconfigs are applied if present, and where
configuration modes such as ``--allnoconfig`` are applied.
.. _ref-tasks-kernel_menuconfig:
``do_kernel_menuconfig``
------------------------
Invoked by the user to manipulate the ``.config`` file used to build a
linux-yocto recipe. This task starts the Linux kernel configuration
tool, which you then use to modify the kernel configuration.
.. note::

   You can also invoke this tool from the command line as follows::

      $ bitbake linux-yocto -c menuconfig
See the "`Using
``menuconfig`` <&YOCTO_DOCS_KERNEL_DEV_URL;#using-menuconfig>`__"
section in the Yocto Project Linux Kernel Development Manual for more
information on this configuration tool.
.. _ref-tasks-kernel_metadata:
``do_kernel_metadata``
----------------------
Collects all the features required for a given kernel build, whether the
features come from ```SRC_URI`` <#var-SRC_URI>`__ or from Git
repositories. After collection, the ``do_kernel_metadata`` task
processes the features into a series of config fragments and patches,
which can then be applied by subsequent tasks such as
```do_patch`` <#ref-tasks-patch>`__ and
```do_kernel_configme`` <#ref-tasks-kernel_configme>`__.
.. _ref-tasks-menuconfig:
``do_menuconfig``
-----------------
Runs ``make menuconfig`` for the kernel. For information on
``menuconfig``, see the
"`Using  ``menuconfig`` <&YOCTO_DOCS_KERNEL_DEV_URL;#using-menuconfig>`__"
section in the Yocto Project Linux Kernel Development Manual.
.. _ref-tasks-savedefconfig:
``do_savedefconfig``
--------------------
When invoked by the user, creates a defconfig file that can be used
instead of the default defconfig. The saved defconfig contains the
differences between the default defconfig and the changes made by the
user using other methods (i.e. the
```do_kernel_menuconfig`` <#ref-tasks-kernel_menuconfig>`__ task). You
can invoke the task using the following command::

   $ bitbake linux-yocto -c savedefconfig
.. _ref-tasks-shared_workdir:
``do_shared_workdir``
---------------------
After the kernel has been compiled but before the kernel modules have
been compiled, this task copies files that are required for module
builds, and that are generated by the kernel build, into the shared work
directory. With these files in place, the
```do_compile_kernelmodules`` <#ref-tasks-compile_kernelmodules>`__ task
can successfully build the kernel modules in the next step of the build.
.. _ref-tasks-sizecheck:
``do_sizecheck``
----------------
After the kernel has been built, this task checks the size of the
stripped kernel image against
```KERNEL_IMAGE_MAXSIZE`` <#var-KERNEL_IMAGE_MAXSIZE>`__. If that
variable was set and the size of the stripped kernel exceeds that size,
the kernel build produces a warning to that effect.
.. _ref-tasks-strip:
``do_strip``
------------
If ``KERNEL_IMAGE_STRIP_EXTRA_SECTIONS`` is defined, this task strips
the sections named in that variable from ``vmlinux``. This stripping is
typically used to remove nonessential sections such as ``.comment``
sections from a size-sensitive configuration.
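A hedged example of requesting this stripping from a kernel recipe or
configuration file::

   KERNEL_IMAGE_STRIP_EXTRA_SECTIONS = ".comment"

The ``.comment`` section shown here matches the example mentioned above;
other section names can be listed as well.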
.. _ref-tasks-validate_branches:
``do_validate_branches``
------------------------
After the kernel is unpacked but before it is patched, this task makes
sure that the machine and metadata branches as specified by the
```SRCREV`` <#var-SRCREV>`__ variables actually exist. If the specified
branches do not exist and ```AUTOREV`` <#var-AUTOREV>`__ is not being
used, the ``do_validate_branches`` task fails during the build.
Miscellaneous Tasks
===================
The following sections describe miscellaneous tasks.
.. _ref-tasks-spdx:
``do_spdx``
-----------
A build stage that takes the source code and scans it on a remote
FOSSOLOGY server in order to produce an SPDX document. This task applies
only to the ```spdx`` <#ref-classes-spdx>`__ class.

*******************
Yocto Project Terms
*******************
Following is a list of terms and definitions users new to the Yocto
Project development environment might find helpful. While some of these
terms are universal, the list includes them just in case:
- *Append Files:* Files that append build information to a recipe file.
Append files are known as BitBake append files and ``.bbappend``
files. The OpenEmbedded build system expects every append file to
have a corresponding recipe (``.bb``) file. Furthermore, the append
file and corresponding recipe file must use the same root filename.
The filenames can differ only in the file type suffix used (e.g.
``formfactor_0.0.bb`` and ``formfactor_0.0.bbappend``).
Information in append files extends or overrides the information in
the similarly-named recipe file. For an example of an append file in
use, see the "`Using .bbappend Files in Your
Layer <&YOCTO_DOCS_DEV_URL;#using-bbappend-files>`__" section in the
Yocto Project Development Tasks Manual.
When you name an append file, you can use the "``%``" wildcard
character to allow for matching recipe names. For example, suppose
you have an append file named as follows::

   busybox_1.21.%.bbappend

That append file would match any ``busybox_1.21.``\ x\ ``.bb`` version
of the recipe. So, the append file would match any of the following
recipe names::

   busybox_1.21.1.bb
   busybox_1.21.2.bb
   busybox_1.21.3.bb
   busybox_1.21.10.bb
   busybox_1.21.25.bb
.. note::

   The use of the "%" character is limited in that it only works
   directly in front of the ``.bbappend`` portion of the append file's
   name. You cannot use the wildcard character in any other location of
   the name.
- *BitBake:* The task executor and scheduler used by the OpenEmbedded
build system to build images. For more information on BitBake, see
the `BitBake User Manual <&YOCTO_DOCS_BB_URL;>`__.
- *Board Support Package (BSP):* A group of drivers, definitions, and
other components that provide support for a specific hardware
configuration. For more information on BSPs, see the `Yocto Project
Board Support Package (BSP) Developer's
Guide <&YOCTO_DOCS_BSP_URL;>`__.
- *Build Directory:* This term refers to the area used by the
OpenEmbedded build system for builds. The area is created when you
``source`` the setup environment script that is found in the Source
Directory (i.e. ````` <#structure-core-script>`__). The
```TOPDIR`` <#var-TOPDIR>`__ variable points to the Build Directory.
You have a lot of flexibility when creating the Build Directory.
Following are some examples that show how to create the directory.
The examples assume your `Source Directory <#source-directory>`__ is
named ``poky``:
- Create the Build Directory inside your Source Directory and let
  the name of the Build Directory default to ``build``::

     $ cd $HOME/poky
     $ source OE_INIT_FILE

- Create the Build Directory inside your home directory and
  specifically name it ``test-builds``::

     $ cd $HOME
     $ source poky/OE_INIT_FILE test-builds

- Provide a directory path and specifically name the Build
  Directory. Any intermediate folders in the pathname must exist.
  This next example creates a Build Directory named
  ``YP-POKYVERSION`` in your home directory within the existing
  directory ``mybuilds``::

     $ cd $HOME
     $ source $HOME/poky/OE_INIT_FILE $HOME/mybuilds/YP-POKYVERSION
.. note::

   By default, the Build Directory contains ``TMPDIR``, which is a
   temporary directory the build system uses for its work. ``TMPDIR``
   cannot be under NFS. Thus, by default, the Build Directory cannot be
   under NFS. However, if you need the Build Directory to be under NFS,
   you can set this up by setting ``TMPDIR`` in your ``local.conf`` file
   to use a local drive. Doing so effectively separates ``TMPDIR`` from
   ``TOPDIR``, which is the Build Directory.
- *Build Host:* The system used to build images in a Yocto Project
Development environment. The build system is sometimes referred to as
the development host.
- *Classes:* Files that provide for logic encapsulation and inheritance
so that commonly used patterns can be defined once and then easily
used in multiple recipes. For reference information on the Yocto
Project classes, see the "`Classes <#ref-classes>`__" chapter. Class
files end with the ``.bbclass`` filename extension.
- *Configuration File:* Files that hold global definitions of
variables, user-defined variables, and hardware configuration
information. These files tell the OpenEmbedded build system what to
build and what to put into the image to support a particular
platform.
Configuration files end with a ``.conf`` filename extension. The
``conf/local.conf`` configuration file in the `Build
Directory <#build-directory>`__ contains user-defined variables that
affect every build. The ``meta-poky/conf/distro/poky.conf``
configuration file defines Yocto "distro" configuration variables
used only when building with this policy. Machine configuration
files, which are located throughout the `Source
Directory <#source-directory>`__, define variables for specific
hardware and are only used when building for that target (e.g. the
``machine/beaglebone.conf`` configuration file defines variables for
the Texas Instruments ARM Cortex-A8 development board).
- *Container Layer:* Layers that hold other layers. An example of a
container layer is OpenEmbedded's
```meta-openembedded`` <https://github.com/openembedded/meta-openembedded>`__
layer. The ``meta-openembedded`` layer contains many ``meta-*``
layers.
- *Cross-Development Toolchain:* In general, a cross-development
toolchain is a collection of software development tools and utilities
that run on one architecture and allow you to develop software for a
different, or targeted, architecture. These toolchains contain
cross-compilers, linkers, and debuggers that are specific to the
target architecture.
The Yocto Project supports two different cross-development
toolchains:
- A toolchain only used by and within BitBake when building an image
for a target architecture.
- A relocatable toolchain used outside of BitBake by developers when
developing applications that will run on a targeted device.
Creation of these toolchains is simple and automated. For information
on toolchain concepts as they apply to the Yocto Project, see the
"`Cross-Development Toolchain
Generation <&YOCTO_DOCS_OM_URL;#cross-development-toolchain-generation>`__"
section in the Yocto Project Overview and Concepts Manual. You can
also find more information on using the relocatable toolchain in the
`Yocto Project Application Development and the Extensible Software
Development Kit (eSDK) <&YOCTO_DOCS_SDK_URL;>`__ manual.
- *Extensible Software Development Kit (eSDK):* A custom SDK for
application developers. This eSDK allows developers to incorporate
their library and programming changes back into the image to make
their code available to other application developers.
For information on the eSDK, see the `Yocto Project Application
Development and the Extensible Software Development Kit
(eSDK) <&YOCTO_DOCS_SDK_URL;>`__ manual.
- *Image:* An image is an artifact of the BitBake build process given a
collection of recipes and related Metadata. Images are the binary
output that run on specific hardware or QEMU and are used for
specific use-cases. For a list of the supported image types that the
Yocto Project provides, see the "`Images <#ref-images>`__" chapter.
- *Layer:* A collection of related recipes. Layers allow you to
consolidate related metadata to customize your build. Layers also
isolate information used when building for multiple architectures.
Layers are hierarchical in their ability to override previous
specifications. You can include any number of available layers from
the Yocto Project and customize the build by adding your layers after
them. You can search the Layer Index for layers used within Yocto
Project.
For introductory information on layers, see the "`The Yocto Project
Layer Model <&YOCTO_DOCS_OM_URL;#the-yocto-project-layer-model>`__"
section in the Yocto Project Overview and Concepts Manual. For more
detailed information on layers, see the "`Understanding and Creating
Layers <&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers>`__"
section in the Yocto Project Development Tasks Manual. For a
discussion specifically on BSP Layers, see the "`BSP
Layers <&YOCTO_DOCS_BSP_URL;#bsp-layers>`__" section in the Yocto
Project Board Support Packages (BSP) Developer's Guide.
- *Metadata:* A key element of the Yocto Project is the Metadata that
is used to construct a Linux distribution and is contained in the
files that the `OpenEmbedded build system <#build-system-term>`__
parses when building an image. In general, Metadata includes recipes,
configuration files, and other information that refers to the build
instructions themselves, as well as the data used to control what
things get built and the effects of the build. Metadata also includes
commands and data used to indicate what versions of software are
used, from where they are obtained, and changes or additions to the
software itself (patches or auxiliary files) that are used to fix
bugs or customize the software for use in a particular situation.
OpenEmbedded-Core is an important set of validated metadata.
In the context of the kernel ("kernel Metadata"), the term refers to
the kernel config fragments and features contained in the
```yocto-kernel-cache`` <&YOCTO_GIT_URL;/cgit/cgit.cgi/yocto-kernel-cache>`__
Git repository.
- *OpenEmbedded-Core (OE-Core):* OE-Core is metadata comprised of
foundational recipes, classes, and associated files that are meant to
be common among many different OpenEmbedded-derived systems,
including the Yocto Project. OE-Core is a curated subset of an
original repository developed by the OpenEmbedded community that has
been pared down into a smaller, core set of continuously validated
recipes. The result is a tightly controlled and quality-assured core
set of recipes.
You can see the Metadata in the ``meta`` directory of the Yocto
Project `Source
Repositories <http://git.yoctoproject.org/cgit/cgit.cgi>`__.
- *OpenEmbedded Build System:* The build system specific to the Yocto
Project. The OpenEmbedded build system is based on another project
known as "Poky", which uses `BitBake <#bitbake-term>`__ as the task
executor. Throughout the Yocto Project documentation set, the
OpenEmbedded build system is sometimes referred to simply as "the
build system". If other build systems, such as a host or target build
system are referenced, the documentation clearly states the
difference.
.. note::

   For some historical information about Poky, see the *Poky* term.
- *Package:* In the context of the Yocto Project, this term refers to a
recipe's packaged output produced by BitBake (i.e. a "baked recipe").
A package is generally the compiled binaries produced from the
recipe's sources. You "bake" something by running it through BitBake.
It is worth noting that the term "package" can, in general, have
subtle meanings. For example, the packages referred to in the
"`Required Packages for the Build
Host <#required-packages-for-the-build-host>`__" section are compiled
binaries that, when installed, add functionality to your Linux
distribution.
Another point worth noting is that historically within the Yocto
Project, recipes were referred to as packages - thus, the existence
of several BitBake variables that are seemingly mis-named, (e.g.
```PR`` <#var-PR>`__, ```PV`` <#var-PV>`__, and
```PE`` <#var-PE>`__).
- *Package Groups:* Arbitrary groups of software Recipes. You use
package groups to hold recipes that, when built, usually accomplish a
single task. For example, a package group could contain the recipes
for a company's proprietary or value-add software. Or, the package
group could contain the recipes that enable graphics. A package group
is really just another recipe. Because package group files are
recipes, they end with the ``.bb`` filename extension.
- *Poky:* Poky, which is pronounced *Pock*-ee, is a reference embedded
distribution and a reference test configuration. Poky provides the
following:
- A base-level functional distro used to illustrate how to customize
a distribution.
- A means by which to test the Yocto Project components (i.e. Poky
is used to validate the Yocto Project).
- A vehicle through which you can download the Yocto Project.
Poky is not a product level distro. Rather, it is a good starting
point for customization.
.. note::
Poky began as an open-source project initially developed by
OpenedHand. OpenedHand developed Poky from the existing
OpenEmbedded build system to create a commercially supportable
build system for embedded Linux. After Intel Corporation acquired
OpenedHand, the poky project became the basis for the Yocto
Project's build system.
- *Recipe:* A set of instructions for building packages. A recipe
describes where you get source code, which patches to apply, how to
configure the source, how to compile it and so on. Recipes also
describe dependencies for libraries or for other recipes. Recipes
represent the logical unit of execution, the software to build, the
images to build, and use the ``.bb`` file extension.
- *Reference Kit:* A working example of a system, which includes a
`BSP <#board-support-package-bsp-term>`__ as well as a `build
host <#hardware-build-system-term>`__ and other components, that can
work on specific hardware.
- *Source Directory:* This term refers to the directory structure
created as a result of creating a local copy of the ``poky`` Git
repository ``git://git.yoctoproject.org/poky`` or expanding a
released ``poky`` tarball.
.. note::

   Creating a local copy of the ``poky`` Git repository is the
   recommended method for setting up your Source Directory.
Sometimes you might hear the term "poky directory" used to refer to
this directory structure.
.. note::
The OpenEmbedded build system does not support file or directory
names that contain spaces. Be sure that the Source Directory you
use does not contain these types of names.
The Source Directory contains BitBake, Documentation, Metadata and
other files that all support the Yocto Project. Consequently, you
must have the Source Directory in place on your development system in
order to do any development using the Yocto Project.
When you create a local copy of the Git repository, you can name the
repository anything you like. Throughout much of the documentation,
"poky" is used as the name of the top-level folder of the local copy
of the poky Git repository. So, for example, cloning the ``poky`` Git
repository results in a local Git repository whose top-level folder
is also named "poky".
While it is not recommended that you use tarball expansion to set up
the Source Directory, if you do, the top-level directory name of the
Source Directory is derived from the Yocto Project release tarball.
For example, downloading and unpacking ```` results in a Source
Directory whose root folder is named ````.
It is important to understand the differences between the Source
Directory created by unpacking a released tarball as compared to
cloning ``git://git.yoctoproject.org/poky``. When you unpack a
tarball, you have an exact copy of the files based on the time of
release - a fixed release point. Any changes you make to your local
files in the Source Directory are on top of the release and will
remain local only. On the other hand, when you clone the ``poky`` Git
repository, you have an active development repository with access to
the upstream repository's branches and tags. In this case, any local
changes you make to the local Source Directory can be later applied
to active development branches of the upstream ``poky`` Git
repository.
For more information on concepts related to Git repositories,
branches, and tags, see the "`Repositories, Tags, and
Branches <&YOCTO_DOCS_OM_URL;#repositories-tags-and-branches>`__"
section in the Yocto Project Overview and Concepts Manual.
- *Task:* A unit of execution for BitBake (e.g.
```do_compile`` <#ref-tasks-compile>`__,
```do_fetch`` <#ref-tasks-fetch>`__,
```do_patch`` <#ref-tasks-patch>`__, and so forth).
- *Toaster:* A web interface to the Yocto Project's `OpenEmbedded Build
System <#build-system-term>`__. The interface enables you to
configure and run your builds. Information about builds is collected
and stored in a database. For information on Toaster, see the
`Toaster User Manual <&YOCTO_DOCS_TOAST_URL;>`__.
- *Upstream:* A reference to source code or repositories that are not
local to the development system but located in a master area that is
controlled by the maintainer of the source code. For example, in
order for a developer to work on a particular piece of code, they
need to first get a copy of it from an "upstream" source.

****************
Variable Context
****************
While you can use most variables in almost any context such as
``.conf``, ``.bbclass``, ``.inc``, and ``.bb`` files, some variables are
often associated with a particular locality or context. This chapter
describes some common associations.
.. _ref-varlocality-configuration:
Configuration
=============
The following subsections provide lists of variables whose context is
configuration: distribution, machine, and local.
.. _ref-varlocality-config-distro:
Distribution (Distro)
---------------------
This section lists variables whose configuration context is the
distribution, or distro.
- ``DISTRO``
- ``DISTRO_NAME``
- ``DISTRO_VERSION``
- ``MAINTAINER``
- ``PACKAGE_CLASSES``
- ``TARGET_OS``
- ``TARGET_FPU``
- ``TCMODE``
- ``TCLIBC``
.. _ref-varlocality-config-machine:
Machine
-------
This section lists variables whose configuration context is the machine.
- ``TARGET_ARCH``
- ``SERIAL_CONSOLES``
- ``PACKAGE_EXTRA_ARCHS``
- ``IMAGE_FSTYPES``
- ``MACHINE_FEATURES``
- ``MACHINE_EXTRA_RDEPENDS``
- ``MACHINE_EXTRA_RRECOMMENDS``
- ``MACHINE_ESSENTIAL_EXTRA_RDEPENDS``
- ``MACHINE_ESSENTIAL_EXTRA_RRECOMMENDS``
.. _ref-varlocality-config-local:
Local
-----
This section lists variables whose configuration context is the local
configuration through the ``local.conf`` file.
- ``DISTRO``
- ``MACHINE``
- ``DL_DIR``
- ``BBFILES``
- ``EXTRA_IMAGE_FEATURES``
- ``PACKAGE_CLASSES``
- ``BB_NUMBER_THREADS``
- ``BBINCLUDELOGS``
- ``ENABLE_BINARY_LOCALE_GENERATION``
.. _ref-varlocality-recipes:
Recipes
=======
The following subsections provide lists of variables whose context is
recipes: required, dependencies, path, and extra build information.
.. _ref-varlocality-recipe-required:
Required
--------
This section lists variables that are required for recipes.
- ``LICENSE``
- ``LIC_FILES_CHKSUM``
- ``SRC_URI`` - used in recipes that fetch local or remote files.
.. _ref-varlocality-recipe-dependencies:
Dependencies
------------
This section lists variables that define recipe dependencies.
- ``DEPENDS``
- ``RDEPENDS``
- ``RRECOMMENDS``
- ``RCONFLICTS``
- ``RREPLACES``
.. _ref-varlocality-recipe-paths:
Paths
-----
This section lists variables that define recipe paths.
- ``WORKDIR``
- ``S``
- ``FILES``
.. _ref-varlocality-recipe-build:
Extra Build Information
-----------------------
This section lists variables that define extra build information for
recipes.
- ``DEFAULT_PREFERENCE``
- ``EXTRA_OECMAKE``
- ``EXTRA_OECONF``
- ``EXTRA_OEMAKE``
- ``PACKAGECONFIG_CONFARGS``
- ``PACKAGES``

****************************************
Contributions and Additional Information
****************************************
.. _resources-intro:
Introduction
============
The Yocto Project team is happy for people to experiment with the Yocto
Project. A number of places exist to find help if you run into
difficulties or find bugs. This chapter presents information about contributing
and participating in the Yocto Project.
.. _resources-contributions:
Contributions
=============
The Yocto Project gladly accepts contributions. You can submit changes
to the project either by creating and sending pull requests, or by
submitting patches through email. For information on how to do both as
well as information on how to identify the maintainer for each area of
code, see the "`Submitting a Change to the Yocto
Project <&YOCTO_DOCS_DEV_URL;#how-to-submit-a-change>`__" section in the
Yocto Project Development Tasks Manual.
.. _resources-bugtracker:
Yocto Project Bugzilla
======================
The Yocto Project uses its own implementation of
`Bugzilla <&YOCTO_BUGZILLA_URL;>`__ to track defects (bugs).
Implementations of Bugzilla work well for group development because they
track bugs and code changes, can be used to communicate changes and
problems with developers, can be used to submit and review patches, and
can be used to manage quality assurance.
Sometimes it is helpful to submit, investigate, or track a bug against
the Yocto Project itself (e.g. when discovering an issue with some
component of the build system that acts contrary to the documentation or
your expectations).
A general procedure and guidelines exist for when you use Bugzilla to
submit a bug. For information on how to use Bugzilla to submit a bug
against the Yocto Project, see the following:
- The "`Submitting a Defect Against the Yocto
Project <&YOCTO_DOCS_DEV_URL;#submitting-a-defect-against-the-yocto-project>`__"
section in the Yocto Project Development Tasks Manual.
- The Yocto Project `Bugzilla wiki
page <&YOCTO_WIKI_URL;/wiki/Bugzilla_Configuration_and_Bug_Tracking>`__
For information on Bugzilla in general, see
http://www.bugzilla.org/about/.
.. _resources-mailinglist:
Mailing lists
=============
A number of mailing lists maintained by the Yocto Project exist as well
as related OpenEmbedded mailing lists for discussion, patch submission
and announcements. To subscribe to one of the following mailing lists,
click on the appropriate URL in the following list and follow the
instructions:
- `yocto <&YOCTO_LISTS_URL;/listinfo/yocto>`__ - General Yocto Project
  discussion mailing list.
- `openembedded-core <&OE_LISTS_URL;/listinfo/openembedded-core>`__ -
  Discussion mailing list about OpenEmbedded-Core (the core metadata).
- `openembedded-devel <&OE_LISTS_URL;/listinfo/openembedded-devel>`__ -
  Discussion mailing list about OpenEmbedded.
- `bitbake-devel <&OE_LISTS_URL;/listinfo/bitbake-devel>`__ -
  Discussion mailing list about the `BitBake <#bitbake-term>`__ build
  tool.
- `poky <&YOCTO_LISTS_URL;/listinfo/poky>`__ - Discussion mailing list
  about `Poky <#poky>`__.
- `yocto-announce <&YOCTO_LISTS_URL;/listinfo/yocto-announce>`__ -
  Mailing list to receive official Yocto Project release and milestone
  announcements.
For more Yocto Project-related mailing lists, see the `Yocto Project
website <&YOCTO_HOME_URL;>`__.
.. _resources-irc:
Internet Relay Chat (IRC)
=========================
Two IRC channels on freenode are available for the Yocto Project and
Poky discussions:
- ``#yocto``
- ``#poky``
.. _resources-links-and-related-documentation:
Links and Related Documentation
===============================
Here is a list of resources you might find helpful:
- `The Yocto Project website <&YOCTO_HOME_URL;>`__\ *:* The home site
for the Yocto Project.
- `The Yocto Project Main Wiki
Page <&YOCTO_WIKI_URL;/wiki/Main_Page>`__\ *:* The main wiki page for
the Yocto Project. This page contains information about project
planning, release engineering, QA & automation, a reference site map,
and other resources related to the Yocto Project.
- `OpenEmbedded <&OE_HOME_URL;>`__\ *:* The build system used by the
Yocto Project. This project is the upstream, generic, embedded
distribution from which the Yocto Project derives its build system
(Poky) and to which it contributes.
- `BitBake <http://www.openembedded.org/wiki/BitBake>`__\ *:* The tool
used to process metadata.
- `BitBake User Manual <&YOCTO_DOCS_BB_URL;>`__\ *:* A comprehensive
guide to the BitBake tool. If you want information on BitBake, see
this manual.
- `Yocto Project Quick Build <&YOCTO_DOCS_BRIEF_URL;>`__\ *:* This
short document lets you experience building an image using the Yocto
Project without having to understand any concepts or details.
- `Yocto Project Overview and Concepts
Manual <&YOCTO_DOCS_OM_URL;>`__\ *:* This manual provides overview
and conceptual information about the Yocto Project.
- `Yocto Project Development Tasks
Manual <&YOCTO_DOCS_DEV_URL;>`__\ *:* This manual is a "how-to" guide
that presents procedures useful to both application and system
developers who use the Yocto Project.
- `Yocto Project Application Development and the Extensible Software
Development Kit (eSDK) <&YOCTO_DOCS_SDK_URL;>`__\ *manual:* This
guide provides information that lets you get going with the standard
or extensible SDK. An SDK, with its cross-development toolchains,
allows you to develop projects inside or outside of the Yocto Project
environment.
- `Yocto Project Board Support Package (BSP) Developer's
Guide <&YOCTO_DOCS_BSP_URL;>`__\ *:* This guide defines the structure
for BSP components. Having a commonly understood structure encourages
standardization.
- `Yocto Project Linux Kernel Development
Manual <&YOCTO_DOCS_KERNEL_DEV_URL;>`__\ *:* This manual describes
how to work with Linux Yocto kernels as well as provides a bit of
conceptual information on the construction of the Yocto Linux kernel
tree.
- `Yocto Project Reference Manual <&YOCTO_DOCS_REF_URL;>`__\ *:* This
manual provides reference material such as variable, task, and class
descriptions.
- `Yocto Project Mega-Manual <&YOCTO_DOCS_MM_URL;>`__\ *:* This manual
is simply a single HTML file comprised of the bulk of the Yocto
Project manuals. The Mega-Manual primarily exists as a vehicle by
which you can easily search for phrases and terms used in the Yocto
Project documentation set.
- `Yocto Project Profiling and Tracing
Manual <&YOCTO_DOCS_PROF_URL;>`__\ *:* This manual presents a set of
common and generally useful tracing and profiling schemes along with
their applications (as appropriate) to each tool.
- `Toaster User Manual <&YOCTO_DOCS_TOAST_URL;>`__\ *:* This manual
introduces and describes how to set up and use Toaster. Toaster is an
Application Programming Interface (API) and web-based interface to
the `OpenEmbedded Build System <#build-system-term>`__, which uses
BitBake, that reports build information.
- `FAQ <&YOCTO_WIKI_URL;/wiki/FAQ>`__\ *:* A list of commonly asked
questions and their answers.
- *Release Notes:* Features, updates and known issues for the current
release of the Yocto Project. To access the Release Notes, go to the
`Downloads <&YOCTO_HOME_URL;/software-overview/downloads/>`__ page on
the Yocto Project website and click on the "RELEASE INFORMATION" link
for the appropriate release.
- `Bugzilla <&YOCTO_BUGZILLA_URL;>`__\ *:* The bug tracking application
the Yocto Project uses. If you find problems with the Yocto Project,
you should report them using this application.
- `Bugzilla Configuration and Bug Tracking Wiki
Page <&YOCTO_WIKI_URL;/wiki/Bugzilla_Configuration_and_Bug_Tracking>`__\ *:*
Information on how to get set up and use the Yocto Project
implementation of Bugzilla for logging and tracking Yocto Project
defects.
- *Internet Relay Chat (IRC):* Two IRC channels on freenode are
available for Yocto Project and Poky discussions: ``#yocto`` and
``#poky``, respectively.
- `Quick EMUlator (QEMU) <http://wiki.qemu.org/Index.html>`__\ *:* An
open-source machine emulator and virtualizer.

****************************
Customizing the Standard SDK
****************************
This appendix presents customizations you can apply to the standard SDK.
Adding Individual Packages to the Standard SDK
==============================================
When you build a standard SDK using ``bitbake -c populate_sdk``, a
default set of packages is included in the resulting SDK. The
```TOOLCHAIN_HOST_TASK`` <&YOCTO_DOCS_REF_URL;#var-TOOLCHAIN_HOST_TASK>`__
and
```TOOLCHAIN_TARGET_TASK`` <&YOCTO_DOCS_REF_URL;#var-TOOLCHAIN_TARGET_TASK>`__
variables control the set of packages added to the SDK.
If you want to add individual packages to the toolchain that runs on the
host, simply add those packages to the ``TOOLCHAIN_HOST_TASK`` variable.
Similarly, if you want to add packages to the default set that is part
of the toolchain that runs on the target, add the packages to the
``TOOLCHAIN_TARGET_TASK`` variable.
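As a minimal sketch, the following lines could go in your
``local.conf`` (the package names here are illustrative examples, not
defaults)::

   # Hypothetical additions -- substitute the packages you actually need.
   TOOLCHAIN_HOST_TASK_append = " nativesdk-python3"
   TOOLCHAIN_TARGET_TASK_append = " zlib-dev"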
Adding API Documentation to the Standard SDK
============================================
You can include API documentation as well as any other documentation
provided by recipes with the standard SDK by adding "api-documentation"
to the
```DISTRO_FEATURES`` <&YOCTO_DOCS_REF_URL;#var-DISTRO_FEATURES>`__
variable::

   DISTRO_FEATURES_append = " api-documentation"

Setting this variable as shown here causes the OpenEmbedded build
system to build the documentation and then include it in the standard
SDK.

******************************
Customizing the Extensible SDK
******************************
This appendix describes customizations you can apply to the extensible
SDK.
Configuring the Extensible SDK
==============================
The extensible SDK primarily consists of a pre-configured copy of the
OpenEmbedded build system from which it was produced. Thus, the SDK's
configuration is derived using that build system and the filters shown
in the following list. When these filters are present, the OpenEmbedded
build system applies them against ``local.conf`` and ``auto.conf``:
- Variables whose values start with "/" are excluded since the
assumption is that those values are paths that are likely to be
specific to the `build
host <&YOCTO_DOCS_REF_URL;#hardware-build-system-term>`__.
- Variables listed in
```SDK_LOCAL_CONF_BLACKLIST`` <&YOCTO_DOCS_REF_URL;#var-SDK_LOCAL_CONF_BLACKLIST>`__
are excluded. These variables are not allowed through from the
OpenEmbedded build system configuration into the extensible SDK
configuration. Typically, these variables are specific to the machine
on which the build system is running and could be problematic as part
of the extensible SDK configuration.
For a list of the variables excluded by default, see the
```SDK_LOCAL_CONF_BLACKLIST`` <&YOCTO_DOCS_REF_URL;#var-SDK_LOCAL_CONF_BLACKLIST>`__
variable in the glossary of the Yocto Project Reference Manual.
- Variables listed in
```SDK_LOCAL_CONF_WHITELIST`` <&YOCTO_DOCS_REF_URL;#var-SDK_LOCAL_CONF_WHITELIST>`__
are included. Including a variable in the value of
``SDK_LOCAL_CONF_WHITELIST`` overrides either of the previous two
filters. The default value is blank.
- Classes inherited globally with
```INHERIT`` <&YOCTO_DOCS_REF_URL;#var-INHERIT>`__ that are listed in
```SDK_INHERIT_BLACKLIST`` <&YOCTO_DOCS_REF_URL;#var-SDK_INHERIT_BLACKLIST>`__
are disabled. Using ``SDK_INHERIT_BLACKLIST`` to disable these
classes is the typical method to disable classes that are problematic
or unnecessary in the SDK context. The default value blacklists the
```buildhistory`` <&YOCTO_DOCS_REF_URL;#ref-classes-buildhistory>`__
and ```icecc`` <&YOCTO_DOCS_REF_URL;#ref-classes-icecc>`__ classes.
Additionally, the contents of ``conf/sdk-extra.conf``, when present, are
appended to the end of ``conf/local.conf`` within the produced SDK,
without any filtering. The ``sdk-extra.conf`` file is particularly
useful if you want to set a variable value just for the SDK and not the
OpenEmbedded build system used to create the SDK.
Adjusting the Extensible SDK to Suit Your Build Host's Setup
============================================================
In most cases, the extensible SDK defaults should work with your `build
host's <&YOCTO_DOCS_REF_URL;#hardware-build-system-term>`__ setup.
However, some cases exist for which you might consider making
adjustments:
- If your SDK configuration inherits additional classes using the
```INHERIT`` <&YOCTO_DOCS_REF_URL;#var-INHERIT>`__ variable and you
do not need or want those classes enabled in the SDK, you can
blacklist them by adding them to the
```SDK_INHERIT_BLACKLIST`` <&YOCTO_DOCS_REF_URL;#var-SDK_INHERIT_BLACKLIST>`__
variable as described in the fourth bullet of the previous section.
   .. note::

      The default value of ``SDK_INHERIT_BLACKLIST`` is set using the
      "?=" operator. Consequently, you will need to either define the
      entire list by using the "=" operator, or you will need to append
      a value using either "_append" or the "+=" operator. You can
      learn more about these operators in the "Basic Syntax" section of
      the BitBake User Manual.
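   For instance, to disable one additional class on top of the default
   blacklist (the class named here is purely illustrative of the
   technique, not a recommendation)::

      SDK_INHERIT_BLACKLIST_append = " rm_work"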
- If you have classes or recipes that add additional tasks to the
standard build flow (i.e. the tasks execute as the recipe builds as
opposed to being called explicitly), then you need to do one of the
following:
- After ensuring the tasks are `shared
state <&YOCTO_DOCS_OM_URL;#shared-state-cache>`__ tasks (i.e. the
output of the task is saved to and can be restored from the shared
state cache) or ensuring the tasks are able to be produced quickly
from a task that is a shared state task, add the task name to the
value of
```SDK_RECRDEP_TASKS`` <&YOCTO_DOCS_REF_URL;#var-SDK_RECRDEP_TASKS>`__.
- Disable the tasks if they are added by a class and you do not need
the functionality the class provides in the extensible SDK. To
disable the tasks, add the class to the ``SDK_INHERIT_BLACKLIST``
variable as described in the previous section.
- Generally, you want to have a shared state mirror set up so users of
the SDK can add additional items to the SDK after installation
without needing to build the items from source. See the "`Providing
Additional Installable Extensible SDK
Content <#sdk-providing-additional-installable-extensible-sdk-content>`__"
section for information.
- If you want users of the SDK to be able to easily update the SDK, you
need to set the
```SDK_UPDATE_URL`` <&YOCTO_DOCS_REF_URL;#var-SDK_UPDATE_URL>`__
variable. For more information, see the "`Providing Updates to the
Extensible SDK After
Installation <#sdk-providing-updates-to-the-extensible-sdk-after-installation>`__"
section.
- If you have adjusted the list of files and directories that appear in
```COREBASE`` <&YOCTO_DOCS_REF_URL;#var-COREBASE>`__ (other than
layers that are enabled through ``bblayers.conf``), then you must
list these files in
```COREBASE_FILES`` <&YOCTO_DOCS_REF_URL;#var-COREBASE_FILES>`__ so
that the files are copied into the SDK.
- If your OpenEmbedded build system setup uses an environment setup
  script other than
  ```OE_INIT_FILE`` <&YOCTO_DOCS_REF_URL;#structure-core-script>`__,
  then you must set
  ```OE_INIT_ENV_SCRIPT`` <&YOCTO_DOCS_REF_URL;#var-OE_INIT_ENV_SCRIPT>`__
  to point to the environment setup script you use.
  .. note::

     You must also reflect this change in the value used for the
     ``COREBASE_FILES`` variable as previously described.
Changing the Extensible SDK Installer Title
===========================================
You can change the displayed title for the SDK installer by setting the
```SDK_TITLE`` <&YOCTO_DOCS_REF_URL;#var-SDK_TITLE>`__ variable and then
rebuilding the SDK installer. For information on how to build an SDK
installer, see the "`Building an SDK
Installer <#sdk-building-an-sdk-installer>`__" section.
By default, this title is derived from
```DISTRO_NAME`` <&YOCTO_DOCS_REF_URL;#var-DISTRO_NAME>`__ when it is
set. If the ``DISTRO_NAME`` variable is not set, the title is derived
from the ```DISTRO`` <&YOCTO_DOCS_REF_URL;#var-DISTRO>`__ variable.
The
```populate_sdk_base`` <&YOCTO_DOCS_REF_URL;#ref-classes-populate-sdk-*>`__
class defines the default value of the ``SDK_TITLE`` variable as
follows::

   SDK_TITLE ??= "${@d.getVar('DISTRO_NAME') or d.getVar('DISTRO')} SDK"
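The ``${@...}`` portion of that default is inline Python that BitBake
evaluates when expanding the variable. As a standalone sketch of the
fallback logic (the function name here is ours, not part of BitBake)::

   def resolve_sdk_title(distro_name, distro):
       # Mirrors "${@d.getVar('DISTRO_NAME') or d.getVar('DISTRO')} SDK":
       # getVar() returns None for an unset variable, so "or" falls back.
       return "%s SDK" % (distro_name or distro)

   print(resolve_sdk_title(None, "poky"))         # -> poky SDK
   print(resolve_sdk_title("My Distro", "poky"))  # -> My Distro SDK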
While several ways exist to change this variable, an efficient method is
to set the variable in your distribution's configuration file. Doing so
creates an SDK installer title that applies across your distribution. As
an example, assume you have your own layer for your distribution named
"meta-mydistro" and you are using the same type of file hierarchy as
does the default "poky" distribution. If so, you could update the
``SDK_TITLE`` variable in the
``~/meta-mydistro/conf/distro/mydistro.conf`` file using the following
form::

   SDK_TITLE = "your_title"
Providing Updates to the Extensible SDK After Installation
==========================================================
When you make changes to your configuration or to the metadata and if
you want those changes to be reflected in installed SDKs, you need to
perform additional steps. These steps make it possible for anyone using
the installed SDKs to update the installed SDKs by using the
``devtool sdk-update`` command:
1. Create a directory that can be shared over HTTP or HTTPS. You can do
this by setting up a web server such as an `Apache HTTP
Server <https://en.wikipedia.org/wiki/Apache_HTTP_Server>`__ or
`Nginx <https://en.wikipedia.org/wiki/Nginx>`__ server in the cloud
to host the directory. This directory must contain the published SDK.
2. Set the
```SDK_UPDATE_URL`` <&YOCTO_DOCS_REF_URL;#var-SDK_UPDATE_URL>`__
variable to point to the corresponding HTTP or HTTPS URL. Setting
this variable causes any SDK built to default to that URL and thus,
the user does not have to pass the URL to the ``devtool sdk-update``
command as described in the "`Applying Updates to an Installed
Extensible
SDK <#sdk-applying-updates-to-an-installed-extensible-sdk>`__"
section.
3. Build the extensible SDK normally (i.e., use the
   ``bitbake -c populate_sdk_ext imagename`` command).

4. Publish the SDK using the following command::

      $ oe-publish-sdk some_path/sdk-installer.sh path_to_shared_http_directory

   You must repeat this step each time you rebuild the SDK with changes
   that you want to make available through the update mechanism.
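As a concrete sketch of step 2, you would add a line such as the
following to your distro or local configuration (the host name and
path are placeholders)::

   SDK_UPDATE_URL = "http://example.com/sdk-updates/"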
Completing the above steps allows users of the existing installed SDKs
to simply run ``devtool sdk-update`` to retrieve and apply the latest
updates. See the "`Applying Updates to an Installed Extensible
SDK <#sdk-applying-updates-to-an-installed-extensible-sdk>`__" section
for further information.
Changing the Default SDK Installation Directory
===============================================
When you build the installer for the Extensible SDK, the default
installation directory for the SDK is based on the
```DISTRO`` <&YOCTO_DOCS_REF_URL;#var-DISTRO>`__ and
```SDKEXTPATH`` <&YOCTO_DOCS_REF_URL;#var-SDKEXTPATH>`__ variables from
within the
```populate_sdk_base`` <&YOCTO_DOCS_REF_URL;#ref-classes-populate-sdk-*>`__
class as follows::

   SDKEXTPATH ??= "~/${@d.getVar('DISTRO')}_sdk"

You can change this default installation directory by specifically
setting the ``SDKEXTPATH`` variable.
While a number of ways exist through which you can set this variable,
the method that makes the most sense is to set the variable in your
distribution's configuration file. Doing so creates an SDK installer
default directory that applies across your distribution. As an example,
assume you have your own layer for your distribution named
"meta-mydistro" and you are using the same type of file hierarchy as
does the default "poky" distribution. If so, you could update the
``SDKEXTPATH`` variable in the
``~/meta-mydistro/conf/distro/mydistro.conf`` file using the following
form::

   SDKEXTPATH = "some_path_for_your_installed_sdk"
After building your installer, running it prompts the user for
acceptance of the some_path_for_your_installed_sdk directory as the
default location to install the Extensible SDK.
Providing Additional Installable Extensible SDK Content
=======================================================
If you want the users of an extensible SDK you build to be able to add
items to the SDK without requiring the users to build the items from
source, you need to do a number of things:
1. Ensure the additional items you want the user to be able to install
are already built:
- Build the items explicitly. You could use one or more "meta"
recipes that depend on lists of other recipes.
- Build the "world" target and set
``EXCLUDE_FROM_WORLD_pn-``\ recipename for the recipes you do not
want built. See the
```EXCLUDE_FROM_WORLD`` <&YOCTO_DOCS_REF_URL;#var-EXCLUDE_FROM_WORLD>`__
variable for additional information.
2. Expose the ``sstate-cache`` directory produced by the build.
Typically, you expose this directory by making it available through
an `Apache HTTP
Server <https://en.wikipedia.org/wiki/Apache_HTTP_Server>`__ or
`Nginx <https://en.wikipedia.org/wiki/Nginx>`__ server.
3. Set the appropriate configuration so that the produced SDK knows how
to find the configuration. The variable you need to set is
   ```SSTATE_MIRRORS`` <&YOCTO_DOCS_REF_URL;#var-SSTATE_MIRRORS>`__::

      SSTATE_MIRRORS = "file://.* http://example.com/some_path/sstate-cache/PATH"

   You can set the ``SSTATE_MIRRORS`` variable in two different places:
- If the mirror value you are setting is appropriate to be set for
both the OpenEmbedded build system that is actually building the
SDK and the SDK itself (i.e. the mirror is accessible in both
places or it will fail quickly on the OpenEmbedded build system
side, and its contents will not interfere with the build), then
you can set the variable in your ``local.conf`` or custom distro
configuration file. You can then "whitelist" the variable through
to the SDK by adding the following::

   SDK_LOCAL_CONF_WHITELIST = "SSTATE_MIRRORS"
- Alternatively, if you just want to set the ``SSTATE_MIRRORS``
variable's value for the SDK alone, create a
``conf/sdk-extra.conf`` file either in your `Build
Directory <&YOCTO_DOCS_REF_URL;#build-directory>`__ or within any
layer and put your ``SSTATE_MIRRORS`` setting within that file.
  .. note::

     This second option is the safest option should you have any
     doubts as to which method to use when setting ``SSTATE_MIRRORS``.
Minimizing the Size of the Extensible SDK Installer Download
============================================================
By default, the extensible SDK bundles the shared state artifacts for
everything needed to reconstruct the image for which the SDK was built.
This bundling can lead to an SDK installer file that is a Gigabyte or
more in size. If the size of this file causes a problem, you can build
an SDK that has just enough in it to install and provide access to the
``devtool`` command by setting the following in your configuration::

   SDK_EXT_TYPE = "minimal"

Setting
```SDK_EXT_TYPE`` <&YOCTO_DOCS_REF_URL;#var-SDK_EXT_TYPE>`__ to
"minimal" produces an SDK installer that is around 35 Mbytes in size,
which downloads and installs quickly. You need to realize, though, that
the minimal installer does not install any libraries or tools out of the
box. These libraries and tools must be installed either "on the fly" or
through actions you perform using ``devtool`` or explicitly with the
``devtool sdk-install`` command.
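For example, a user of a minimal SDK could later pull a prebuilt item
in from the shared state mirror (the package name here is
illustrative)::

   $ devtool sdk-install -s libGL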
In most cases, when building a minimal SDK you need to also enable
bringing in the information on a wider range of packages produced by the
system. Requiring this wider range of information is particularly true
so that ``devtool add`` is able to effectively map dependencies it
discovers in a source tree to the appropriate recipes. Additionally, the
information enables the ``devtool search`` command to return useful
results.
To facilitate this wider range of information, you would need to set the
following::

   SDK_INCLUDE_PKGDATA = "1"

See the
```SDK_INCLUDE_PKGDATA`` <&YOCTO_DOCS_REF_URL;#var-SDK_INCLUDE_PKGDATA>`__
variable for additional information.
Setting the ``SDK_INCLUDE_PKGDATA`` variable as shown causes the "world"
target to be built so that information for all of the recipes included
within it is available. Having these recipes available increases build
time significantly and increases the size of the SDK installer by 30-80
Mbytes depending on how many recipes are included in your configuration.
You can use ``EXCLUDE_FROM_WORLD_pn-``\ recipename for recipes you want
to exclude. However, it is assumed that you would need to be building
the "world" target if you want to provide additional items to the SDK.
Consequently, building for "world" should not represent undue overhead
in most cases.
.. note::

   If you set ``SDK_EXT_TYPE`` to "minimal", then providing a shared
   state mirror is mandatory so that items can be installed as needed.
   See the "`Providing Additional Installable Extensible SDK
   Content <#sdk-providing-additional-installable-extensible-sdk-content>`__"
   section for more information.
You can explicitly control whether or not to include the toolchain when
you build an SDK by setting the
```SDK_INCLUDE_TOOLCHAIN`` <&YOCTO_DOCS_REF_URL;#var-SDK_INCLUDE_TOOLCHAIN>`__
variable to "1". In particular, it is useful to include the toolchain
when you have set ``SDK_EXT_TYPE`` to "minimal", which by default,
excludes the toolchain. Also, it is helpful if you are building a small
SDK for use with an IDE or some other tool where you do not want to take
extra steps to install a toolchain.

*****************
Obtaining the SDK
*****************
.. _sdk-locating-pre-built-sdk-installers:
Locating Pre-Built SDK Installers
=================================
You can use existing, pre-built toolchains by locating and running an
SDK installer script that ships with the Yocto Project. Using this
method, you select and download an architecture-specific SDK installer
and then run the script to hand-install the toolchain.
Follow these steps to locate and hand-install the toolchain:
1. *Go to the Installers Directory:* Go to
   `&YOCTO_TOOLCHAIN_DL_URL; <&YOCTO_TOOLCHAIN_DL_URL;>`__.
2. *Open the Folder for Your Build Host:* Open the folder that matches
your `build host <&YOCTO_DOCS_REF_URL;#build-system-term>`__ (i.e.
``i686`` for 32-bit machines or ``x86_64`` for 64-bit machines).
3. *Locate and Download the SDK Installer:* You need to find and
download the installer appropriate for your build host, target
hardware, and image type.
   The installer files (``*.sh``) follow this naming convention::

      poky-glibc-host_system-core-image-type-arch-toolchain[-ext]-release.sh

      Where:
          host_system is a string representing your development system:
              "i686" or "x86_64"
          type is a string representing the image: "sato" or "minimal"
          arch is a string representing the target architecture:
              "aarch64", "armv5e", "core2-64", "cortexa8hf-neon",
              "i586", "mips32r2", "mips64", or "ppc7400"
          release is the version of Yocto Project.

   .. note::

      The standard SDK installer does not have the "-ext" string as
      part of the filename.

   The toolchains provided by the Yocto Project are based off of the
   ``core-image-sato`` and ``core-image-minimal`` images and contain
   libraries appropriate for developing against those images.
   For example, if your build host is a 64-bit x86 system and you need
   an extended SDK for a 64-bit core2 target, go into the ``x86_64``
   folder and download the following installer::

      poky-glibc-x86_64-core-image-sato-core2-64-toolchain-ext-DISTRO.sh
4. *Run the Installer:* Be sure you have execution privileges and run
   the installer. Following is an example from the ``Downloads``
   directory::

      $ ~/Downloads/poky-glibc-x86_64-core-image-sato-core2-64-toolchain-ext-DISTRO.sh
During execution of the script, you choose the root location for the
toolchain. See the "`Installed Standard SDK Directory
Structure <#sdk-installed-standard-sdk-directory-structure>`__"
section and the "`Installed Extensible SDK Directory
Structure <#sdk-installed-extensible-sdk-directory-structure>`__"
section for more information.
Building an SDK Installer
=========================
As an alternative to locating and downloading an SDK installer, you can
build the SDK installer. Follow these steps:
1. *Set Up the Build Environment:* Be sure you are set up to use BitBake
in a shell. See the "`Preparing the Build
Host <&YOCTO_DOCS_DEV_URL;#dev-preparing-the-build-host>`__" section
in the Yocto Project Development Tasks Manual for information on how
to get a build host ready that is either a native Linux machine or a
machine that uses CROPS.
2. *Clone the ``poky`` Repository:* You need to have a local copy of the
Yocto Project `Source
Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__ (i.e. a local
``poky`` repository). See the "`Cloning the ``poky``
Repository <&YOCTO_DOCS_DEV_URL;#cloning-the-poky-repository>`__" and
possibly the "`Checking Out by Branch in
Poky <&YOCTO_DOCS_DEV_URL;#checking-out-by-branch-in-poky>`__" and
"`Checking Out by Tag in
Poky <&YOCTO_DOCS_DEV_URL;#checkout-out-by-tag-in-poky>`__" sections
all in the Yocto Project Development Tasks Manual for information on
how to clone the ``poky`` repository and check out the appropriate
branch for your work.
3. *Initialize the Build Environment:* While in the root directory of
   the Source Directory (i.e. ``poky``), run the
   `OE_INIT_FILE <&YOCTO_DOCS_REF_URL;#structure-core-script>`__
   environment setup script to define the OpenEmbedded build
   environment on your build host:

   ::

      $ source OE_INIT_FILE

   Among other things, the script creates the `Build
   Directory <&YOCTO_DOCS_REF_URL;#build-directory>`__, which is
   ``build`` in this case and is located in the Source Directory. After
   the script runs, your current working directory is set to the
   ``build`` directory.
4. *Make Sure You Are Building an Installer for the Correct Machine:*
Check to be sure that your
```MACHINE`` <&YOCTO_DOCS_REF_URL;#var-MACHINE>`__ variable in the
``local.conf`` file in your Build Directory matches the architecture
for which you are building.
5. *Make Sure Your SDK Machine is Correctly Set:* If you are building a
toolchain designed to run on an architecture that differs from your
current development host machine (i.e. the build host), be sure that
the ```SDKMACHINE`` <&YOCTO_DOCS_REF_URL;#var-SDKMACHINE>`__ variable
in the ``local.conf`` file in your Build Directory is correctly set.
   .. note::

      If you are building an SDK installer for the Extensible SDK, the
      ``SDKMACHINE`` value must be set for the architecture of the
      machine you are using to build the installer. If ``SDKMACHINE``
      is not set appropriately, the build fails and provides an error
      message similar to the following:

      ::

         The extensible SDK can currently only be built for the same architecture as the machine being built on - SDK_ARCH is
         set to i686 (likely via setting SDKMACHINE) which is different from the architecture of the build machine (x86_64).
         Unable to continue.
6. *Build the SDK Installer:* To build the SDK installer for a standard
   SDK and populate the SDK image, use the following command form. Be
   sure to replace *image* with an image (e.g. "core-image-sato"):

   ::

      $ bitbake image -c populate_sdk

   You can do the same for the extensible SDK using this command form:

   ::

      $ bitbake image -c populate_sdk_ext

   These commands produce an SDK installer that contains the sysroot
   that matches your target root filesystem.

   When the ``bitbake`` command completes, the SDK installer will be in
   ``tmp/deploy/sdk`` in the Build Directory.
   .. note::

      By default, the previous BitBake command does not build static
      binaries. If you want to use the toolchain to build these types
      of libraries, you need to be sure your SDK has the appropriate
      static development libraries. Use the
      ```TOOLCHAIN_TARGET_TASK`` <&YOCTO_DOCS_REF_URL;#var-TOOLCHAIN_TARGET_TASK>`__
      variable inside your ``local.conf`` file before building the SDK
      installer. Doing so ensures that the eventual SDK installation
      process installs the appropriate library packages as part of the
      SDK. Following is an example using ``libc`` static development
      libraries:

      ::

         TOOLCHAIN_TARGET_TASK_append = " libc-staticdev"
7. *Run the Installer:* You can now run the SDK installer from
   ``tmp/deploy/sdk`` in the Build Directory. Following is an example:

   ::

      $ cd ~/poky/build/tmp/deploy/sdk
      $ ./poky-glibc-x86_64-core-image-sato-core2-64-toolchain-ext-DISTRO.sh

   During execution of the script, you choose the root location for the
   toolchain. See the "`Installed Standard SDK Directory
   Structure <#sdk-installed-standard-sdk-directory-structure>`__"
   section and the "`Installed Extensible SDK Directory
   Structure <#sdk-installed-extensible-sdk-directory-structure>`__"
   section for more information.
Extracting the Root Filesystem
==============================
After installing the toolchain, for some use cases you might need to
separately extract a root filesystem:
- You want to boot the image using NFS.
- You want to use the root filesystem as the target sysroot.
- You want to develop your target application using the root filesystem
as the target sysroot.
Follow these steps to extract the root filesystem:
1. *Locate and Download the Tarball for the Pre-Built Root Filesystem
Image File:* You need to find and download the root filesystem image
file that is appropriate for your target system. These files are kept
in machine-specific folders in the `Index of
Releases <&YOCTO_DL_URL;/releases/yocto/yocto-&DISTRO;/machines/>`__
in the "machines" directory.
The machine-specific folders of the "machines" directory contain
tarballs (``*.tar.bz2``) for supported machines. These directories
also contain flattened root filesystem image files (``*.ext4``),
which you can use with QEMU directly.
   The pre-built root filesystem image files follow these naming
   conventions:

   ::

      core-image-profile-arch.tar.bz2

   Where:

   -  *profile* is the filesystem image's profile: lsb, lsb-dev,
      lsb-sdk, minimal, minimal-dev, minimal-initramfs, sato, sato-dev,
      sato-sdk, or sato-sdk-ptest. For information on these types of
      image profiles, see the
      "`Images <&YOCTO_DOCS_REF_URL;#ref-images>`__" chapter in the
      Yocto Project Reference Manual.

   -  *arch* is a string representing the target architecture:
      beaglebone-yocto, beaglebone-yocto-lsb, edgerouter,
      edgerouter-lsb, genericx86, genericx86-64, genericx86-64-lsb,
      genericx86-lsb, or qemu\*.

   The root filesystems provided by the Yocto Project are based off of
   the ``core-image-sato`` and ``core-image-minimal`` images.

   For example, if you plan on using a BeagleBone device as your target
   hardware and your image is a ``core-image-sato-sdk`` image, you can
   download the following file:

   ::

      core-image-sato-sdk-beaglebone-yocto.tar.bz2
2. *Initialize the Cross-Development Environment:* You must ``source``
the cross-development environment setup script to establish necessary
environment variables.
This script is located in the top-level directory in which you
installed the toolchain (e.g. ``poky_sdk``).
   Following is an example based on the toolchain installed in the
   "`Locating Pre-Built SDK
   Installers <#sdk-locating-pre-built-sdk-installers>`__" section:

   ::

      $ source ~/poky_sdk/environment-setup-core2-64-poky-linux
3. *Extract the Root Filesystem:* Use the ``runqemu-extract-sdk``
command and provide the root filesystem image.
   Following is an example command that extracts the root filesystem
   from a previously built root filesystem image that was downloaded
   from the `Index of Releases <&YOCTO_DOCS_OM_URL;#index-downloads>`__.
   This command extracts the root filesystem into the
   ``beaglebone-sato`` directory:

   ::

      $ runqemu-extract-sdk \
         ~/Downloads/core-image-sato-sdk-beaglebone-yocto.tar.bz2 \
         ~/beaglebone-sato

   You could now point to the target sysroot at ``beaglebone-sato``.
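The selection-and-extraction flow in the steps above can be sketched
with plain shell. Note that ``runqemu-extract-sdk`` does more than a
bare ``tar`` invocation (it handles ownership and permissions), so this
is only a conceptual sketch that builds a throwaway tarball so it can
run anywhere:

```shell
machine=beaglebone-yocto
work=$(mktemp -d)

# Stand-in for a downloaded rootfs tarball (contents are illustrative).
mkdir -p "$work/rootfs/etc"
echo demo > "$work/rootfs/etc/hostname"
tar -cjf "$work/core-image-sato-sdk-$machine.tar.bz2" -C "$work/rootfs" .

# Select the tarball whose name ends with the machine suffix, then
# unpack it into the directory that will serve as the target sysroot.
for f in "$work"/*.tar.bz2; do
    case $f in
        *-"$machine".tar.bz2)
            mkdir -p "$work/$machine-sato"
            tar -xjf "$f" -C "$work/$machine-sato"
            ;;
    esac
done

cat "$work/$machine-sato/etc/hostname"
```

Matching on the machine-name suffix is deliberate: both the image
profile and the machine name contain dashes, so splitting the filename
on dashes is not reliable.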
Installed Standard SDK Directory Structure
==========================================
The following figure shows the resulting directory structure after you
install the Standard SDK by running the ``*.sh`` SDK installation
script:
The installed SDK consists of an environment setup script for the SDK, a
configuration file for the target, a version file for the target, and
the root filesystem (``sysroots``) needed to develop objects for the
target system.
Within the figure, italicized text is used to indicate replaceable
portions of the file or directory name. For example, install_dir/version
is the directory where the SDK is installed. By default, this directory
is ``/opt/poky/``. And, version represents the specific snapshot of the
SDK (e.g. DISTRO). Furthermore, target represents the target architecture
(e.g. ``i586``) and host represents the development system's
architecture (e.g. ``x86_64``). Thus, the complete names of the two
directories within the ``sysroots`` could be ``i586-poky-linux`` and
``x86_64-pokysdk-linux`` for the target and host, respectively.
Installed Extensible SDK Directory Structure
============================================
The following figure shows the resulting directory structure after you
install the Extensible SDK by running the ``*.sh`` SDK installation
script:
The installed directory structure for the extensible SDK is quite
different than the installed structure for the standard SDK. The
extensible SDK does not separate host and target parts in the same
manner as does the standard SDK. The extensible SDK uses an embedded
copy of the OpenEmbedded build system, which has its own sysroots.
Of note in the directory structure are an environment setup script for
the SDK, a configuration file for the target, a version file for the
target, and log files for the OpenEmbedded build system preparation
script run by the installer and BitBake.
Within the figure, italicized text is used to indicate replaceable
portions of the file or directory name. For example, install_dir is the
directory where the SDK is installed, which is ``poky_sdk`` by default,
and target represents the target architecture (e.g. ``i586``).

************
Introduction
************
.. _sdk-manual-intro:
Introduction
============
Welcome to the Yocto Project Application Development and the Extensible
Software Development Kit (eSDK) manual. This manual provides information
that explains how to use both the Yocto Project extensible and standard
SDKs to develop applications and images.
.. note::

   Prior to the 2.0 Release of the Yocto Project, application
   development was primarily accomplished through the use of the
   Application Development Toolkit (ADT) and the availability of
   stand-alone cross-development toolchains and other tools. With the
   2.1 Release of the Yocto Project, application development has
   transitioned to within a tool-rich extensible SDK and the more
   traditional standard SDK.
All SDKs consist of the following:
- *Cross-Development Toolchain*: This toolchain contains a compiler,
debugger, and various miscellaneous tools.
- *Libraries, Headers, and Symbols*: The libraries, headers, and
symbols are specific to the image (i.e. they match the image).
- *Environment Setup Script*: This ``*.sh`` file, once run, sets up the
cross-development environment by defining variables and preparing for
SDK use.
Additionally, an extensible SDK has tools that allow you to easily add
new applications and libraries to an image, modify the source of an
existing component, test changes on the target hardware, and easily
integrate an application into the `OpenEmbedded build
system <&YOCTO_DOCS_REF_URL;#build-system-term>`__.
You can use an SDK to independently develop and test code that is
destined to run on some target machine. SDKs are completely
self-contained. The binaries are linked against their own copy of
``libc``, which results in no dependencies on the target system. To
achieve this, the pointer to the dynamic loader is configured at install
time since that path cannot be dynamically altered. This is the reason
for a wrapper around the ``populate_sdk`` and ``populate_sdk_ext``
archives.
Another feature of the SDKs is that only one set of cross-compiler
toolchain binaries is produced for any given architecture. This feature
takes advantage of the fact that the target hardware can be passed to
``gcc`` as a set of compiler options. Those options are set up by the
environment script and contained in variables such as
```CC`` <&YOCTO_DOCS_REF_URL;#var-CC>`__ and
```LD`` <&YOCTO_DOCS_REF_URL;#var-LD>`__. This reduces the space needed
for the tools. Understand, however, that every target still needs a
sysroot because those binaries are target-specific.
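As an illustration of that point, an environment setup script exports
the cross-tools together with the tuning options, roughly along the
following lines. The values below are assumptions for a core2-64
target, not copied from a real ``environment-setup-*`` script:

```shell
# Illustrative values only -- a real environment-setup-* script exports
# the cross-compiler plus the tuning flags for the selected target.
SDKTARGETSYSROOT=/opt/poky/DISTRO/sysroots/core2-64-poky-linux
CC="x86_64-poky-linux-gcc -m64 -mtune=core2 --sysroot=$SDKTARGETSYSROOT"
LD="x86_64-poky-linux-ld --sysroot=$SDKTARGETSYSROOT"
export SDKTARGETSYSROOT CC LD

# One gcc binary serves every tuning of the architecture; only the
# option set changes from target to target.
echo "$CC"
```

A build system that respects ``$CC`` and ``$LD`` therefore picks up the
cross-tools without any further configuration.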
The SDK development environment consists of the following:
- The self-contained SDK, which is an architecture-specific
cross-toolchain and matching sysroots (target and native) all built
by the OpenEmbedded build system (e.g. the SDK). The toolchain and
sysroots are based on a `Metadata <&YOCTO_DOCS_REF_URL;#metadata>`__
configuration and extensions, which allows you to cross-develop on
the host machine for the target hardware. Additionally, the
extensible SDK contains the ``devtool`` functionality.
- The Quick EMUlator (QEMU), which lets you simulate target hardware.
QEMU is not literally part of the SDK. You must build and include
this emulator separately. However, QEMU plays an important role in
the development process that revolves around use of the SDK.
In summary, the extensible and standard SDK share many features.
However, the extensible SDK has powerful development tools to help you
more quickly develop applications. Following is a table that summarizes
the primary differences between the standard and extensible SDK types
when considering which to build:
+-----------------------+-----------------------+-----------------------+
| *Feature* | *Standard SDK* | *Extensible SDK* |
+=======================+=======================+=======================+
| Toolchain | Yes | Yes\* |
+-----------------------+-----------------------+-----------------------+
| Debugger | Yes | Yes\* |
+-----------------------+-----------------------+-----------------------+
| Size | 100+ MBytes | 1+ GBytes (or 300+ |
| | | MBytes for minimal |
| | | w/toolchain) |
+-----------------------+-----------------------+-----------------------+
| ``devtool`` | No | Yes |
+-----------------------+-----------------------+-----------------------+
| Build Images | No | Yes |
+-----------------------+-----------------------+-----------------------+
| Updateable | No | Yes |
+-----------------------+-----------------------+-----------------------+
| Managed Sysroot*\* | No | Yes |
+-----------------------+-----------------------+-----------------------+
| Installed Packages | No**\* | Yes***\* |
+-----------------------+-----------------------+-----------------------+
| Construction | Packages | Shared State |
+-----------------------+-----------------------+-----------------------+

\* Extensible SDK contains the toolchain and debugger if
```SDK_EXT_TYPE`` <&YOCTO_DOCS_REF_URL;#var-SDK_EXT_TYPE>`__ is "full"
or
```SDK_INCLUDE_TOOLCHAIN`` <&YOCTO_DOCS_REF_URL;#var-SDK_INCLUDE_TOOLCHAIN>`__
is "1", which is the default.

\*\* Sysroot is managed through the use of ``devtool``. Thus, it is
less likely that you will corrupt your SDK sysroot when you try to add
additional libraries.

\*\*\* You can add runtime package management to the standard SDK but
it is not supported by default.

\*\*\*\* You must build and make the shared state available to
extensible SDK users for "packages" you want to enable users to
install.
The Cross-Development Toolchain
-------------------------------
The `Cross-Development
Toolchain <&YOCTO_DOCS_REF_URL;#cross-development-toolchain>`__ consists
of a cross-compiler, cross-linker, and cross-debugger that are used to
develop user-space applications for targeted hardware. Additionally, for
an extensible SDK, the toolchain also has built-in ``devtool``
functionality. This toolchain is created by running an SDK installer
script or through a `Build
Directory <&YOCTO_DOCS_REF_URL;#build-directory>`__ that is based on
your metadata configuration or extension for your targeted device. The
cross-toolchain works with a matching target sysroot.
.. _sysroot:
Sysroots
--------
The native and target sysroots contain needed headers and libraries for
generating binaries that run on the target architecture. The target
sysroot is based on the target root filesystem image that is built by
the OpenEmbedded build system and uses the same metadata configuration
used to build the cross-toolchain.
The QEMU Emulator
-----------------
The QEMU emulator allows you to simulate your hardware while running
your application or image. QEMU is not part of the SDK but is made
available in a number of different ways:
- If you have cloned the ``poky`` Git repository to create a `Source
Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__ and you have
sourced the environment setup script, QEMU is installed and
automatically available.
- If you have downloaded a Yocto Project release and unpacked it to
create a Source Directory and you have sourced the environment setup
script, QEMU is installed and automatically available.
- If you have installed the cross-toolchain tarball and you have
sourced the toolchain's setup environment script, QEMU is also
installed and automatically available.
SDK Development Model
=====================
Fundamentally, the SDK fits into the development process as follows: The
SDK is installed on any machine and can be used to develop applications,
images, and kernels. An SDK can even be used by a QA Engineer or Release
Engineer. The fundamental concept is that the machine that has the SDK
installed does not have to be associated with the machine that has the
Yocto Project installed. A developer can independently compile and test
an object on their machine and then, when the object is ready for
integration into an image, they can simply make it available to the
machine that has the Yocto Project. Once the object is available, the
image can be rebuilt using the Yocto Project to produce the modified
image.
You just need to follow these general steps:
1. *Install the SDK for your target hardware:* For information on how to
install the SDK, see the "`Installing the
SDK <#sdk-installing-the-sdk>`__" section.
2. *Download or Build the Target Image:* The Yocto Project supports
several target architectures and has many pre-built kernel images and
root filesystem images.
If you are going to develop your application on hardware, go to the
```machines`` <&YOCTO_MACHINES_DL_URL;>`__ download area and choose a
target machine area from which to download the kernel image and root
filesystem. This download area could have several files in it that
support development using actual hardware. For example, the area
might contain ``.hddimg`` files that combine the kernel image with
the filesystem, boot loaders, and so forth. Be sure to get the files
you need for your particular development process.
If you are going to develop your application and then run and test it
using the QEMU emulator, go to the
```machines/qemu`` <&YOCTO_QEMU_DL_URL;>`__ download area. From this
area, go down into the directory for your target architecture (e.g.
``qemux86_64`` for an Intel-based 64-bit architecture). Download the
kernel, root filesystem, and any other files you need for your
process.
   .. note::

      To use the root filesystem in QEMU, you need to extract it. See
      the "Extracting the Root Filesystem" section for information on
      how to extract the root filesystem.
3. *Develop and Test your Application:* At this point, you have the
tools to develop your application. If you need to separately install
and use the QEMU emulator, you can go to `QEMU Home
Page <http://wiki.qemu.org/Main_Page>`__ to download and learn about
the emulator. See the "`Using the Quick EMUlator
(QEMU) <&YOCTO_DOCS_DEV_URL;#dev-manual-qemu>`__" chapter in the
Yocto Project Development Tasks Manual for information on using QEMU
within the Yocto Project.
The remainder of this manual describes how to use the extensible and
standard SDKs. Information also exists in appendix form that describes
how you can build, install, and modify an SDK.

========================================================================================
Yocto Project Application Development and the Extensible Software Development Kit (eSDK)
========================================================================================
.. toctree::
   :caption: Table of Contents
   :numbered:

   sdk-intro
   sdk-extensible
   sdk-using
   sdk-working-projects
   sdk-appendix-obtain
   sdk-appendix-customizing
   sdk-appendix-customizing-standard

**********************
Using the Standard SDK
**********************
This chapter describes the standard SDK and how to install it.
Information includes unique installation and setup aspects for the
standard SDK.
.. note::

   For a side-by-side comparison of main features supported for a
   standard SDK as compared to an extensible SDK, see the
   "Introduction" section.
You can use a standard SDK to work on Makefile and Autotools-based
projects. See the "`Using the SDK Toolchain
Directly <#sdk-working-projects>`__" chapter for more information.
.. _sdk-standard-sdk-intro:
Why use the Standard SDK and What is in It?
===========================================
The Standard SDK provides a cross-development toolchain and libraries
tailored to the contents of a specific image. You would use the Standard
SDK if you want a more traditional toolchain experience as compared to
the extensible SDK, which provides an internal build system and the
``devtool`` functionality.
The installed Standard SDK consists of several files and directories.
Basically, it contains an SDK environment setup script, some
configuration files, and host and target root filesystems to support
usage. You can see the directory structure in the "`Installed Standard
SDK Directory
Structure <#sdk-installed-standard-sdk-directory-structure>`__" section.
.. _sdk-installing-the-sdk:
Installing the SDK
==================
The first thing you need to do is install the SDK on your `Build
Host <&YOCTO_DOCS_REF_URL;#hardware-build-system-term>`__ by running the
``*.sh`` installation script.
You can download a tarball installer, which includes the pre-built
toolchain, the ``runqemu`` script, and support files from the
appropriate `toolchain <&YOCTO_TOOLCHAIN_DL_URL;>`__ directory within
the Index of Releases. Toolchains are available for several 32-bit and
64-bit architectures within the ``i686`` and ``x86_64`` directories,
respectively. The toolchains the Yocto Project provides are based off
the ``core-image-sato`` and ``core-image-minimal`` images and contain
libraries appropriate for developing against those images.
The names of the tarball installer scripts are such that a string
representing the host system appears first in the filename and then is
immediately followed by a string representing the target architecture.
::

   poky-glibc-host_system-image_type-arch-toolchain-release_version.sh

Where:

-  *host_system* is a string representing your development system:
   i686 or x86_64.

-  *image_type* is the image for which the SDK was built:
   core-image-minimal or core-image-sato.

-  *arch* is a string representing the tuned target architecture:
   aarch64, armv5e, core2-64, i586, mips32r2, mips64, ppc7400, or
   cortexa8hf-neon.

-  *release_version* is a string representing the release number of the
   Yocto Project: DISTRO, DISTRO+snapshot

For example, the following SDK installer is for a 64-bit development
host system and an i586-tuned target architecture based off the SDK for
``core-image-sato`` and using the current DISTRO snapshot:

::

   poky-glibc-x86_64-core-image-sato-i586-toolchain-DISTRO.sh
.. note::

   As an alternative to downloading an SDK, you can build the SDK
   installer. For information on building the installer, see the
   "Building an SDK Installer" section.
The SDK and toolchains are self-contained and by default are installed
into the ``poky_sdk`` folder in your home directory. You can choose to
install the extensible SDK in any location when you run the installer.
However, because files need to be written under that directory during
the normal course of operation, the location you choose for installation
must be writable for whichever users need to use the SDK.
The following command shows how to run the installer given a toolchain
tarball for a 64-bit x86 development host system and a 32-bit x86
target architecture. The example assumes the SDK installer is located
in ``~/Downloads/`` and has execution rights.
.. note::

   If you do not have write permissions for the directory into which
   you are installing the SDK, the installer notifies you and exits.
   For that case, set up the proper permissions in the directory and
   run the installer again.
::

   $ ./Downloads/poky-glibc-x86_64-core-image-sato-i586-toolchain-DISTRO.sh
   Poky (Yocto Project Reference Distro) SDK installer version DISTRO
   ===============================================================
   Enter target directory for SDK (default: /opt/poky/DISTRO):
   You are about to install the SDK to "/opt/poky/DISTRO". Proceed [Y/n]? Y
   Extracting SDK......................................................done
   Setting it up...done
   SDK has been successfully set up and is ready to be used.
   Each time you wish to use the SDK in a new shell session, you need to
   source the environment setup script e.g.
   $ . /opt/poky/DISTRO/environment-setup-i586-poky-linux
Again, reference the "`Installed Standard SDK Directory
Structure <#sdk-installed-standard-sdk-directory-structure>`__" section
for more details on the resulting directory structure of the installed
SDK.
.. _sdk-running-the-sdk-environment-setup-script:
Running the SDK Environment Setup Script
========================================
Once you have the SDK installed, you must run the SDK environment setup
script before you can actually use the SDK. This setup script resides in
the directory you chose when you installed the SDK, which is either the
default ``/opt/poky/DISTRO`` directory or the directory you chose during
installation.
Before running the script, be sure it is the one that matches the
architecture for which you are developing. Environment setup scripts
begin with the string "``environment-setup``" and include as part of
their name the tuned target architecture. As an example, the following
command sources the environment setup script from the default
installation directory. In this example, the setup script is for an
IA-based target machine using i586 tuning:

::

   $ source /opt/poky/DISTRO/environment-setup-i586-poky-linux

When you run the setup script, the same environment variables are
defined as those defined when you run the setup script for an
extensible SDK. See the "`Running the Extensible SDK Environment Setup
Script <#sdk-running-the-extensible-sdk-environment-setup-script>`__"
section for more information.

********************************
Using the SDK Toolchain Directly
********************************
You can use the SDK toolchain directly with Makefile and Autotools-based
projects.
Autotools-Based Projects
========================
Once you have a suitable `cross-development
toolchain <&YOCTO_DOCS_REF_URL;#cross-development-toolchain>`__
installed, it is very easy to develop a project using the `GNU
Autotools-based <https://en.wikipedia.org/wiki/GNU_Build_System>`__
workflow, which is outside of the `OpenEmbedded build
system <&YOCTO_DOCS_REF_URL;#build-system-term>`__.
The following figure presents a simple Autotools workflow.
Follow these steps to create a simple Autotools-based "Hello World"
project:
.. note::

   For more information on the GNU Autotools workflow, see the same
   example on the GNOME Developer site.
1. *Create a Working Directory and Populate It:* Create a clean
   directory for your project and then make that directory your working
   location.

   ::

      $ mkdir $HOME/helloworld
      $ cd $HOME/helloworld

   After setting up the directory, populate it with files needed for
   the flow. You need a project source file, a file to help with
   configuration, a file to help create the Makefile, and a README
   file: ``hello.c``, ``configure.ac``, ``Makefile.am``, and
   ``README``, respectively.

   Use the following command to create an empty README file, which is
   required by GNU Coding Standards:

   ::

      $ touch README

   Create the remaining three files as follows:

   -  ``hello.c``:

      ::

         #include <stdio.h>

         int main(void)
         {
            printf("Hello World!\n");

            return 0;
         }

   -  ``configure.ac``:

      ::

         AC_INIT(hello,0.1)
         AM_INIT_AUTOMAKE([foreign])
         AC_PROG_CC
         AC_CONFIG_FILES(Makefile)
         AC_OUTPUT

   -  ``Makefile.am``:

      ::

         bin_PROGRAMS = hello
         hello_SOURCES = hello.c
2. *Source the Cross-Toolchain Environment Setup File:* As described
earlier in the manual, installing the cross-toolchain creates a
cross-toolchain environment setup script in the directory that the
SDK was installed. Before you can use the tools to develop your
project, you must source this setup script. The script begins with
the string "environment-setup" and contains the machine architecture,
which is followed by the string "poky-linux". For this example, the
   command sources a script from the default SDK installation directory
   that uses the 32-bit Intel x86 Architecture and the DISTRO_NAME
   Yocto Project release:

   ::

      $ source /opt/poky/DISTRO/environment-setup-i586-poky-linux
3. *Create the ``configure`` Script:* Use the ``autoreconf`` command to
   generate the ``configure`` script.

   ::

      $ autoreconf

   The ``autoreconf`` tool takes care of running the other Autotools
   such as ``aclocal``, ``autoconf``, and ``automake``.
   .. note::

      If you get errors from ``configure.ac``, which ``autoreconf``
      runs, that indicate missing files, you can use the "-i" option,
      which ensures missing auxiliary files are copied to the build
      host.
4. *Cross-Compile the Project:* This command compiles the project using
   the cross-compiler. The
   ```CONFIGURE_FLAGS`` <&YOCTO_DOCS_REF_URL;#var-CONFIGURE_FLAGS>`__
   environment variable provides the minimal arguments for GNU
   configure:

   ::

      $ ./configure ${CONFIGURE_FLAGS}

   For an Autotools-based project, you can use the cross-toolchain by
   just passing the appropriate host option to ``configure.sh``. The
   host option you use is derived from the name of the environment
   setup script found in the directory in which you installed the
   cross-toolchain. For example, the host option for an ARM-based
   target that uses the GNU EABI is ``armv5te-poky-linux-gnueabi``.
   You will notice that the name of the script is
   ``environment-setup-armv5te-poky-linux-gnueabi``. Thus, the
   following command works to update your project and rebuild it using
   the appropriate cross-toolchain tools:

   ::

      $ ./configure --host=armv5te-poky-linux-gnueabi --with-libtool-sysroot=sysroot_dir

5. *Make and Install the Project:* These two commands generate and
   install the project into the destination directory:

   ::

      $ make
      $ make install DESTDIR=./tmp
   .. note::

      To learn about environment variables established when you run
      the cross-toolchain environment setup script and how they are
      used or overridden when using the Makefile, see the
      "Makefile-Based Projects" section.

   This next command is a simple way to verify the installation of your
   project. Running the command prints the architecture on which the
   binary file can run. This architecture should be the same
   architecture that the installed cross-toolchain supports.

   ::

      $ file ./tmp/usr/local/bin/hello
6. *Execute Your Project:* To execute the project, you would need to
   run it on your target hardware. If your target hardware happens to
   be your build host, you could run the project as follows:

   ::

      $ ./tmp/usr/local/bin/hello

   As expected, the project displays the "Hello World!" message.
Makefile-Based Projects
=======================
Simple Makefile-based projects use and interact with the cross-toolchain
environment variables established when you run the cross-toolchain
environment setup script. The environment variables are subject to
general ``make`` rules.
This section presents a simple Makefile development flow and provides an
example that lets you see how you can use cross-toolchain environment
variables and Makefile variables during development.
The main point of this section is to explain the following three cases
regarding variable behavior:
- *Case 1 - No Variables Set in the ``Makefile`` Map to Equivalent
Environment Variables Set in the SDK Setup Script:* Because matching
variables are not specifically set in the ``Makefile``, the variables
retain their values based on the environment setup script.
- *Case 2 - Variables Are Set in the Makefile that Map to Equivalent
Environment Variables from the SDK Setup Script:* Specifically
setting matching variables in the ``Makefile`` during the build
results in the environment settings of the variables being
overwritten. In this case, the variables you set in the ``Makefile``
are used.
- *Case 3 - Variables Are Set Using the Command Line that Map to
Equivalent Environment Variables from the SDK Setup Script:*
Executing the ``Makefile`` from the command line results in the
environment variables being overwritten. In this case, the
command-line content is used.
.. note::

   Regardless of how you set your variables, if you use the "-e" option
   with ``make``, the variables from the SDK setup script take
   precedence::

      $ make -e target
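Taken together, the three cases and the "-e" option define a simple
precedence order. The sketch below is a rough Python model of that
order, for illustration only, not a description of how GNU make is
actually implemented:

```python
def resolve_var(name, env=None, makefile=None, cmdline=None, e_flag=False):
    """Model of make variable precedence for the three cases above.

    Precedence (highest first): command line, then Makefile, then the
    environment -- unless ``-e`` is given, which lifts the environment
    above the Makefile.
    """
    env = env or {}
    makefile = makefile or {}
    cmdline = cmdline or {}
    if name in cmdline:            # Case 3: a command-line setting always wins
        return cmdline[name]
    if e_flag and name in env:     # 'make -e': environment beats the Makefile
        return env[name]
    if name in makefile:           # Case 2: Makefile overrides the environment
        return makefile[name]
    return env.get(name)           # Case 1: fall back to the SDK setup script

# Case 1: only the SDK setup script set CC
print(resolve_var("CC", env={"CC": "i586-poky-linux-gcc"}))

# Case 2: the Makefile sets CC to "gcc", which wins
print(resolve_var("CC", env={"CC": "i586-poky-linux-gcc"},
                  makefile={"CC": "gcc"}))

# 'make -e': the SDK environment wins again
print(resolve_var("CC", env={"CC": "i586-poky-linux-gcc"},
                  makefile={"CC": "gcc"}, e_flag=True))
```

The function names and structure here are hypothetical; only the
precedence order itself matches the behavior demonstrated in the
walk-through that follows.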
The remainder of this section presents a simple Makefile example that
demonstrates these variable behaviors.
In a new shell environment, variables are not established for the SDK
until you run the setup script. For example, the following commands show
a null value for the compiler variable (i.e. ``CC``)::

   $ echo ${CC}
   $

Running the SDK setup script for a 64-bit build host and an i586-tuned
target architecture for a ``core-image-sato`` image using the current
DISTRO Yocto Project release and then echoing that variable shows the
value established through the script::

   $ source /opt/poky/DISTRO/environment-setup-i586-poky-linux
   $ echo ${CC}
   i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/2.5/sysroots/i586-poky-linux
To illustrate variable use, work through this simple "Hello World!"
example:
1. *Create a Working Directory and Populate It:* Create a clean
   directory for your project and then make that directory your working
   location::

      $ mkdir $HOME/helloworld
      $ cd $HOME/helloworld

   After setting up the directory, populate it with files needed for
   the flow. You need a ``main.c`` file from which you call your
   function, a ``module.h`` file to contain headers, and a ``module.c``
   that defines your function.

   Create the three files as follows:

   - ``main.c``::

        #include "module.h"

        void sample_func();

        int main()
        {
            sample_func();
            return 0;
        }

   - ``module.h``::

        #include <stdio.h>

        void sample_func();

   - ``module.c``::

        #include "module.h"

        void sample_func()
        {
            printf("Hello World!");
            printf("\n");
        }
2. *Source the Cross-Toolchain Environment Setup File:* As described
   earlier in the manual, installing the cross-toolchain creates a
   cross-toolchain environment setup script in the directory in which
   the SDK was installed. Before you can use the tools to develop your
   project, you must source this setup script. The script begins with
   the string "environment-setup" and contains the machine architecture,
   which is followed by the string "poky-linux". For this example, the
   command sources a script from the default SDK installation directory
   that uses the 32-bit Intel x86 Architecture and the DISTRO_NAME Yocto
   Project release::

      $ source /opt/poky/DISTRO/environment-setup-i586-poky-linux
3. *Create the ``Makefile``:* For this example, the Makefile contains
   two lines that can be used to set the ``CC`` variable. One line is
   identical to the value that is set when you run the SDK environment
   setup script, and the other line sets ``CC`` to "gcc", the default
   GNU compiler on the build host::

      # CC=i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/2.5/sysroots/i586-poky-linux
      # CC="gcc"

      all: main.o module.o
      	${CC} main.o module.o -o target_bin
      main.o: main.c module.h
      	${CC} -I . -c main.c
      module.o: module.c module.h
      	${CC} -I . -c module.c
      clean:
      	rm -rf *.o
      	rm target_bin
4. *Make the Project:* Use the ``make`` command to create the binary
   output file. Because variables are commented out in the Makefile, the
   value used for ``CC`` is the value set when the SDK environment setup
   file was run::

      $ make
      i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/2.5/sysroots/i586-poky-linux -I . -c main.c
      i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/2.5/sysroots/i586-poky-linux -I . -c module.c
      i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/2.5/sysroots/i586-poky-linux main.o module.o -o target_bin

   From the results of the previous command, you can see that
   the compiler used was the compiler established through the ``CC``
   variable defined in the setup script.

   You can override the ``CC`` environment variable with the same
   variable as set from the Makefile by uncommenting the line in the
   Makefile and running ``make`` again::

      $ make clean
      rm -rf *.o
      rm target_bin
      #
      # Edit the Makefile by uncommenting the line that sets CC to "gcc"
      #
      $ make
      gcc -I . -c main.c
      gcc -I . -c module.c
      gcc main.o module.o -o target_bin

   As shown in the previous example, the cross-toolchain compiler is not
   used. Rather, the default compiler is used.

   This next case shows how to override a variable by providing the
   variable as part of the command line. Go into the Makefile and
   re-insert the comment character so that running ``make`` uses the
   established SDK compiler. However, when you run ``make``, use a
   command-line argument to set ``CC`` to "gcc"::

      $ make clean
      rm -rf *.o
      rm target_bin
      #
      # Edit the Makefile to comment out the line setting CC to "gcc"
      #
      $ make
      i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/2.5/sysroots/i586-poky-linux -I . -c main.c
      i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/2.5/sysroots/i586-poky-linux -I . -c module.c
      i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/2.5/sysroots/i586-poky-linux main.o module.o -o target_bin
      $ make clean
      rm -rf *.o
      rm target_bin
      $ make CC="gcc"
      gcc -I . -c main.c
      gcc -I . -c module.c
      gcc main.o module.o -o target_bin

   In the previous case, the command-line argument overrides the SDK
   environment variable.

   In this last case, edit the Makefile again to use the "gcc" compiler
   but then use the "-e" option on the ``make`` command line::

      $ make clean
      rm -rf *.o
      rm target_bin
      #
      # Edit the Makefile to use "gcc"
      #
      $ make
      gcc -I . -c main.c
      gcc -I . -c module.c
      gcc main.o module.o -o target_bin
      $ make clean
      rm -rf *.o
      rm target_bin
      $ make -e
      i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/2.5/sysroots/i586-poky-linux -I . -c main.c
      i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/2.5/sysroots/i586-poky-linux -I . -c module.c
      i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/2.5/sysroots/i586-poky-linux main.o module.o -o target_bin

   In the previous case, the "-e" option forces ``make`` to use the SDK
   environment variables regardless of the values in the Makefile.
5. *Execute Your Project:* To execute the project (i.e. ``target_bin``),
   use the following command::

      $ ./target_bin
      Hello World!

   As expected, the project displays the "Hello World!" message.

   .. note::

      If you used the cross-toolchain compiler to build ``target_bin``
      and your build host differs in architecture from that of the
      target machine, you need to run your project on the target device.

*****************************************
The Yocto Project Test Environment Manual
*****************************************
.. _test-welcome:
Welcome
=======
Welcome to the Yocto Project Test Environment Manual! This manual is a
work in progress. The manual contains information about the testing
environment used by the Yocto Project to make sure each major and minor
release works as intended. All the project's testing infrastructure and
processes are publicly visible and available so that the community can
see what testing is being performed, how it's being done, and the
current status of the tests and the project at any given time. Other
organizations can leverage the process and testing environment used by
the Yocto Project to create their own automated, production test
environment, building upon the foundations from the project core.

Currently, the Yocto Project Test Environment Manual has no projected
release date. This manual is a work-in-progress and is being initially
loaded with information from the README files and notes from key
engineers:
- *``yocto-autobuilder2``:* This
  `README.md <http://git.yoctoproject.org/clean/cgit.cgi/yocto-autobuilder2/tree/README.md>`__
  is the main README which details how to set up the Yocto Project
  Autobuilder. The ``yocto-autobuilder2`` repository represents the
  Yocto Project's console UI plugin to Buildbot and the configuration
  necessary to configure Buildbot to perform the testing the project
  requires.
- *``yocto-autobuilder-helper``:* This
  `README <http://git.yoctoproject.org/clean/cgit.cgi/yocto-autobuilder-helper/tree/README>`__
  and repository contain Yocto Project Autobuilder Helper scripts and
  configuration. The ``yocto-autobuilder-helper`` repository contains
  the "glue" logic that defines which tests to run and how to run them.
  As a result, it can be used by any Continuous Integration (CI) system
  to run builds, support getting the correct code revisions, configure
  builds and layers, run builds, and collect results. The code is
  independent of any CI system, which means the code can work with
  Buildbot, Jenkins, or others. This repository has a branch per release
  of the project, defining the tests to run on a per-release basis.
.. _test-yocto-project-autobuilder-overview:
Yocto Project Autobuilder Overview
==================================
The Yocto Project Autobuilder collectively refers to the software,
tools, scripts, and procedures used by the Yocto Project to test
released software across supported hardware in an automated and regular
fashion. Basically, during the development of a Yocto Project release,
the Autobuilder tests if things work. The Autobuilder builds all test
targets and runs all the tests.
The Yocto Project now uses standard upstream
`Buildbot <https://docs.buildbot.net/0.9.15.post1/>`__ (version 9) to
drive its integration and testing. Buildbot Nine has a plug-in interface
that the Yocto Project customizes using code from the
``yocto-autobuilder2`` repository, adding its own console UI plugin. The
resulting UI plug-in allows you to visualize builds in a way suited to
the project's needs.
A ``helper`` layer provides configuration and job management through
scripts found in the ``yocto-autobuilder-helper`` repository. The
``helper`` layer contains the bulk of the build configuration
information and is release-specific, which makes it highly customizable
on a per-project basis. The layer is CI system-agnostic and contains a
number of Helper scripts that can generate build configurations from
simple JSON files.
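As an illustration of that JSON-driven approach, the sketch below merges
a target entry onto its named template. The real Helper scripts do
considerably more than this; the merge logic and the sample data here
are hypothetical, only loosely modeled on the shape of ``config.json``:

```python
import json

def expand_target(config, name):
    """Shallow-merge a target's overrides onto its named template.

    A simplification of what the real helper scripts do: the target's
    own keys win over anything inherited from the template.
    """
    target = dict(config["overrides"][name])
    template = config["templates"].get(target.pop("TEMPLATE", None), {})
    merged = dict(template)
    merged.update(target)
    return merged

# Hypothetical sample data shaped like a config.json fragment
config = json.loads("""
{
  "templates": {
    "arch-qemu": {"BUILDHISTORY": true, "DISTRO": "poky"}
  },
  "overrides": {
    "qemux86-64": {"TEMPLATE": "arch-qemu", "MACHINE": "qemux86-64"}
  }
}
""")

print(expand_target(config, "qemux86-64"))
```

The point of the design is that a new build target usually needs only a
few lines of JSON, with everything else inherited from a template.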
.. note::

   The project uses Buildbot for historical reasons but also because
   many of the project developers have knowledge of Python. It is
   possible to use the outer layers from another Continuous Integration
   (CI) system such as
   `Jenkins <https://en.wikipedia.org/wiki/Jenkins_(software)>`__
   instead of Buildbot.
The following figure shows the Yocto Project Autobuilder stack with a
topology that includes a controller and a cluster of workers:
.. _test-project-tests:
Yocto Project Tests - Types of Testing Overview
===============================================
The Autobuilder tests different elements of the project by using
the following types of tests:
- *Build Testing:* Tests whether specific configurations build by
  varying ``MACHINE``, ``DISTRO``, other configuration
  options, and the specific target images being built (or world). Used
  to trigger builds of all the different test configurations on the
  Autobuilder. Builds usually cover many different targets for
  different architectures, machines, and distributions, as well as
  different configurations, such as different init systems. The
  Autobuilder tests literally hundreds of configurations and targets.
- *Sanity Checks During the Build Process:* Tests initiated through
  the ``insane`` class. These checks ensure the output of the builds
  is correct. For example, does the ELF architecture in the generated
  binaries match the target system? ARM binaries would not work in a
  MIPS system!
- *Build Performance Testing:* Tests whether or not commonly used steps
during builds work efficiently and avoid regressions. Tests to time
commonly used usage scenarios are run through ``oe-build-perf-test``.
These tests are run on isolated machines so that the time
measurements of the tests are accurate and no other processes
interfere with the timing results. The project currently tests
performance on two different distributions, Fedora and Ubuntu, to
ensure we have no single point of failure and can ensure the
different distros work effectively.
- *eSDK Testing:* Image tests initiated through the following
  command::

     $ bitbake image -c testsdkext

  The tests utilize the ``testsdkext`` class and the ``do_testsdkext``
  task.
- *Feature Testing:* Various scenario-based tests are run through the
  OpenEmbedded Self-Test (oe-selftest). We test oe-selftest on each of
  the main distributions we support.
- *Image Testing:* Image tests initiated through the following
  command::

     $ bitbake image -c testimage

  The tests utilize the ``testimage*`` classes and the
  ``do_testimage`` task.
- *Layer Testing:* The Autobuilder has the possibility to test whether
  specific layers work with the rest of the system. The layers tested
  may be selected by members of the project. Some key community layers
  are also tested periodically.
- *Package Testing:* A Package Test (ptest) runs tests against packages
  built by the OpenEmbedded build system on the target machine. See the
  "Testing Packages With ptest" section in the Yocto Project
  Development Tasks Manual and the "Ptest" Wiki page for more
  information on Ptest.
- *SDK Testing:* Image tests initiated through the following
  command::

     $ bitbake image -c testsdk

  The tests utilize the ``testsdk`` class and the ``do_testsdk`` task.
- *Unit Testing:* Unit tests on various components of the system run
  through ``oe-selftest`` and ``bitbake-selftest``.
- *Automatic Upgrade Helper:* This target tests whether new versions of
software are available and whether we can automatically upgrade to
those new versions. If so, this target emails the maintainers with a
patch to let them know this is possible.
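The ELF-architecture sanity check described above essentially amounts to
reading the ``e_machine`` field of a binary's ELF header and comparing
it against the target. The following is an illustrative sketch only, not
the ``insane`` class implementation:

```python
import struct

# A few e_machine values from the ELF specification
ELF_MACHINES = {0x03: "x86", 0x08: "mips", 0x28: "arm",
                0x3e: "x86-64", 0xb7: "aarch64"}

def elf_machine(data):
    """Return the machine name encoded in the first bytes of an ELF file."""
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # Byte 5 (EI_DATA) gives endianness; e_machine is a u16 at offset 18
    endian = "<" if data[5] == 1 else ">"
    (machine,) = struct.unpack_from(endian + "H", data, 18)
    return ELF_MACHINES.get(machine, hex(machine))

# Fabricated header bytes for a 32-bit little-endian ARM ELF binary
header = b"\x7fELF" + bytes([1, 1, 1, 0]) + b"\x00" * 8 + struct.pack("<HH", 2, 0x28)
print(elf_machine(header))  # arm
```

A check of this kind lets the build catch, for example, an ARM binary
destined for a MIPS root filesystem long before the image ever boots.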
.. _test-test-mapping:
How Tests Map to Areas of Code
==============================
Tests map into the codebase as follows:
- *bitbake-selftest*:

  These tests are self-contained and test BitBake as well as its APIs,
  which include the fetchers. The tests are located in
  ``bitbake/lib/*/tests``.

  From within the BitBake repository, run the following::

     $ bitbake-selftest

  To skip tests that access the Internet, use the ``BB_SKIP_NETTEST``
  variable when running "bitbake-selftest" as follows::

     $ BB_SKIP_NETTEST=yes bitbake-selftest

  The network-accessing tests are mostly those needed to test the
  fetcher modules.

  The default output is quiet and just prints a summary of what was
  run. To see more information, there is a verbose option::

     $ bitbake-selftest -v

  To specify individual test modules to run, append the test module
  name to the "bitbake-selftest" command. For example, to specify the
  tests for ``bb.tests.data``, run::

     $ bitbake-selftest bb.tests.data

  You can also specify individual tests by defining the full name and
  module plus the class path of the test, for example::

     $ bitbake-selftest bb.tests.data.TestOverrides.test_one_override

  The tests are based on `Python
  unittest <https://docs.python.org/3/library/unittest.html>`__.
- *oe-selftest*:

  - These tests use OE to test the workflows, which include testing
    specific features, behaviors of tasks, and API unit tests.

  - The tests can take advantage of parallelism through the "-j"
    option, which can specify a number of threads to spread the tests
    across. Note that all tests from a given class of tests will run
    in the same thread. To parallelize large numbers of tests you can
    split the class into multiple units.

  - The tests are based on Python unittest.

  - The code for the tests resides in
    ``meta/lib/oeqa/selftest/cases/``.

  - To run all the tests, enter the following command::

       $ oe-selftest -a

  - To run a specific test, use the following command form where
    testname is the name of the specific test::

       $ oe-selftest -r testname

    For example, the following command would run the tinfoil
    getVar API test::

       $ oe-selftest -r tinfoil.TinfoilTests.test_getvar

    It is also possible to run a set of tests. For example the
    following command will run all of the tinfoil tests::

       $ oe-selftest -r tinfoil
- *testimage:*

  - These tests build an image, boot it, and run tests against the
    image's content.

  - The code for these tests resides in
    ``meta/lib/oeqa/runtime/cases/``.

  - You need to set the ``IMAGE_CLASSES`` variable as follows::

       IMAGE_CLASSES += "testimage"

  - Run the tests using the following command form::

       $ bitbake image -c testimage
- *testsdk:*

  - These tests build an SDK, install it, and then run tests against
    that SDK.

  - The code for these tests resides in ``meta/lib/oeqa/sdk/cases/``.

  - Run the test using the following command form::

       $ bitbake image -c testsdk
- *testsdk_ext:*

  - These tests build an extended SDK (eSDK), install that eSDK, and
    run tests against the eSDK.

  - The code for these tests resides in ``meta/lib/oeqa/esdk``.

  - To run the tests, use the following command form::

       $ bitbake image -c testsdkext
- *oe-build-perf-test:*

  - These tests run through commonly used usage scenarios and measure
    the performance times.

  - The code for these tests resides in ``meta/lib/oeqa/buildperf``.

  - To run the tests, use the following command form::

       $ oe-build-perf-test options

    The command takes a number of options, such as where to place the
    test results. The Autobuilder Helper Scripts include the
    ``build-perf-test-wrapper`` script with examples of how to use
    oe-build-perf-test from the command line.

    Use the ``oe-git-archive`` command to store test results into a
    Git repository.

    Use the ``oe-build-perf-report`` command to generate text reports
    and HTML reports with graphs of the performance data. For
    examples, see
    http://downloads.yoctoproject.org/releases/yocto/yocto-2.7/testresults/buildperf-centos7/perf-centos7.yoctoproject.org_warrior_20190414204758_0e39202.html
    and
    http://downloads.yoctoproject.org/releases/yocto/yocto-2.7/testresults/buildperf-centos7/perf-centos7.yoctoproject.org_warrior_20190414204758_0e39202.txt.

  - The tests are contained in ``lib/oeqa/buildperf/test_basic.py``.
Test Examples
=============
This section provides example tests for each of the tests listed in the
`How Tests Map to Areas of Code <#test-test-mapping>`__ section.
For oeqa tests, testcases for each area reside in the main test
directory at ``meta/lib/oeqa/selftest/cases``. For bitbake, testcases
reside in the ``lib/bb/tests/`` directory.
.. _bitbake-selftest-example:
``bitbake-selftest``
--------------------
A simple test example from ``lib/bb/tests/data.py`` is::

   class DataExpansions(unittest.TestCase):
       def setUp(self):
           self.d = bb.data.init()
           self.d["foo"] = "value_of_foo"
           self.d["bar"] = "value_of_bar"
           self.d["value_of_foo"] = "value_of_'value_of_foo'"

       def test_one_var(self):
           val = self.d.expand("${foo}")
           self.assertEqual(str(val), "value_of_foo")

In this example, a ``DataExpansions`` class of tests is created,
derived from standard Python unittest. The class has a common ``setUp``
function which is shared by all the tests in the class. A simple test is
then added to test that when a variable is expanded, the correct value
is found.

BitBake selftests are straightforward Python unittest. Refer to the
Python unittest documentation for additional information on writing
these tests at: https://docs.python.org/3/library/unittest.html.
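To experiment with the same test shape without a BitBake checkout, a
plain dictionary plus a tiny ``${var}`` expander can stand in for
``bb.data``. This is a self-contained sketch, not BitBake code, and the
``expand`` helper here is deliberately much simpler than the real
datastore:

```python
import re
import unittest

def expand(s, d):
    """Repeatedly substitute ${name} references using dict d.

    Unknown names are left untouched, which also terminates the loop.
    """
    pattern = re.compile(r"\$\{(\w+)\}")
    while True:
        new = pattern.sub(lambda m: d.get(m.group(1), m.group(0)), s)
        if new == s:
            return new
        s = new

class DataExpansions(unittest.TestCase):
    def setUp(self):
        # Shared by every test in the class, like the BitBake original
        self.d = {"foo": "value_of_foo", "bar": "value_of_bar"}

    def test_one_var(self):
        self.assertEqual(expand("${foo}", self.d), "value_of_foo")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DataExpansions))
print(result.wasSuccessful())
```

The structure mirrors the real test: a ``setUp`` shared across the
class, then small focused assertions about expansion behavior.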
.. _oe-selftest-example:
``oe-selftest``
---------------
These tests are more complex due to the setup required behind the scenes
for full builds. Rather than directly using Python's unittest, the code
wraps most of the standard objects. The tests can be simple, such as
testing a command from within the OE build environment using the
following example::

   class BitbakeLayers(OESelftestTestCase):
       def test_bitbakelayers_showcrossdepends(self):
           result = runCmd('bitbake-layers show-cross-depends')
           self.assertTrue('aspell' in result.output,
                           msg="No dependencies were shown. bitbake-layers show-cross-depends output: %s" % result.output)

This example, taken from ``meta/lib/oeqa/selftest/cases/bblayers.py``,
creates a testcase from the ``OESelftestTestCase`` class, derived
from ``unittest.TestCase``, which runs the ``bitbake-layers`` command
and checks the output to ensure it contains something we know should be
there.
The ``oeqa.utils.commands`` module contains Helpers which can assist
with common tasks, including:
- *Obtaining the value of a bitbake variable:* Use
``oeqa.utils.commands.get_bb_var()`` or use
``oeqa.utils.commands.get_bb_vars()`` for more than one variable
- *Running a bitbake invocation for a build:* Use
``oeqa.utils.commands.bitbake()``
- *Running a command:* Use ``oeqa.utils.commands.runCmd()``
There is also a ``oeqa.utils.commands.runqemu()`` function for launching
the ``runqemu`` command for testing things within a running, virtualized
image.
You can run these tests in parallel. Parallelism works per test class,
so tests within a given test class should always run in the same build,
while tests in different classes or modules may be split into different
builds. There is no data store available for these tests since the tests
launch the ``bitbake`` command and exist outside of its context. As a
result, common bitbake library functions (bb.*) are also unavailable.
.. _testimage-example:
``testimage``
-------------
These tests are run once an image is up and running, either on target
hardware or under QEMU. As a result, they are assumed to be running in a
target image environment, as opposed to a host build environment. A
simple example from ``meta/lib/oeqa/runtime/cases/python.py`` contains
the following::

   class PythonTest(OERuntimeTestCase):
       @OETestDepends(['ssh.SSHTest.test_ssh'])
       @OEHasPackage(['python3-core'])
       def test_python3(self):
           cmd = "python3 -c \"import codecs; print(codecs.encode('Uryyb, jbeyq', 'rot13'))\""
           status, output = self.target.run(cmd)
           msg = 'Exit status was not 0. Output: %s' % output
           self.assertEqual(status, 0, msg=msg)
In this example, the ``OERuntimeTestCase`` class wraps
``unittest.TestCase``. Within the test, ``self.target`` represents the
target system, and commands can be run on it using the ``run()``
method.
To ensure certain test or package dependencies are met, you can use the
``OETestDepends`` and ``OEHasPackage`` decorators. For example, the test
in this example would only make sense if ``python3-core`` is installed
in the image.
.. _testsdk_ext-example:
``testsdk_ext``
---------------
These tests are run against built extensible SDKs (eSDKs). The tests can
assume that the eSDK environment has already been set up. An example from
``meta/lib/oeqa/sdk/cases/devtool.py`` contains the following::

   class DevtoolTest(OESDKExtTestCase):
       @classmethod
       def setUpClass(cls):
           myapp_src = os.path.join(cls.tc.esdk_files_dir, "myapp")
           cls.myapp_dst = os.path.join(cls.tc.sdk_dir, "myapp")
           shutil.copytree(myapp_src, cls.myapp_dst)
           subprocess.check_output(['git', 'init', '.'], cwd=cls.myapp_dst)
           subprocess.check_output(['git', 'add', '.'], cwd=cls.myapp_dst)
           subprocess.check_output(['git', 'commit', '-m', "'test commit'"], cwd=cls.myapp_dst)

       @classmethod
       def tearDownClass(cls):
           shutil.rmtree(cls.myapp_dst)

       def _test_devtool_build(self, directory):
           self._run('devtool add myapp %s' % directory)
           try:
               self._run('devtool build myapp')
           finally:
               self._run('devtool reset myapp')

       def test_devtool_build_make(self):
           self._test_devtool_build(self.myapp_dst)

In this example, the ``devtool`` command is tested to see whether a
sample application can be built with the ``devtool build`` command
within the eSDK.
.. _testsdk-example:
``testsdk``
-----------
These tests are run against built SDKs. The tests can assume that an SDK
has already been extracted and its environment file has been sourced. A
simple example from ``meta/lib/oeqa/sdk/cases/python2.py`` contains the
following::

   class Python3Test(OESDKTestCase):
       def setUp(self):
           if not (self.tc.hasHostPackage("nativesdk-python3-core") or
                   self.tc.hasHostPackage("python3-core-native")):
               raise unittest.SkipTest("No python3 package in the SDK")

       def test_python3(self):
           cmd = "python3 -c \"import codecs; print(codecs.encode('Uryyb, jbeyq', 'rot13'))\""
           output = self._run(cmd)
           self.assertEqual(output, "Hello, world\n")

In this example, if nativesdk-python3-core has been installed into the
SDK, the code runs the python3 interpreter with a basic command to check
it is working correctly. The test would only run if python3 is installed
in the SDK.
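The odd-looking string in these test commands is deliberate: the
expected output is stored rot13-encoded ('Uryyb, jbeyq') so that the
literal text "Hello, world" never appears in the command whose output is
being checked. The round-trip uses only the standard library:

```python
import codecs

encoded = codecs.encode("Hello, world", "rot13")
print(encoded)                          # Uryyb, jbeyq

# rot13 is its own inverse, so encoding again restores the original
print(codecs.encode(encoded, "rot13"))  # Hello, world
```

This avoids false positives where the test string might otherwise be
matched in, say, an echoed command line rather than in real interpreter
output.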
.. _oe-build-perf-test-example:
``oe-build-perf-test``
----------------------
The performance tests usually measure how long operations take and the
resource utilisation as that happens. An example from
``meta/lib/oeqa/buildperf/test_basic.py`` contains the following::

   class Test3(BuildPerfTestCase):

       def test3(self):
           """Bitbake parsing (bitbake -p)"""
           # Drop all caches and parse
           self.rm_cache()
           oe.path.remove(os.path.join(self.bb_vars['TMPDIR'], 'cache'), True)
           self.measure_cmd_resources(['bitbake', '-p'], 'parse_1',
                                      'bitbake -p (no caches)')
           # Drop tmp/cache
           oe.path.remove(os.path.join(self.bb_vars['TMPDIR'], 'cache'), True)
           self.measure_cmd_resources(['bitbake', '-p'], 'parse_2',
                                      'bitbake -p (no tmp/cache)')
           # Parse with fully cached data
           self.measure_cmd_resources(['bitbake', '-p'], 'parse_3',
                                      'bitbake -p (cached)')

This example shows how three specific parsing timings are measured, with
and without various caches, to show how BitBake's parsing performance
trends over time.
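The core idea behind ``measure_cmd_resources``, timing a child command
and capturing its resource usage, can be sketched with the standard
library alone. This is a simplification of the buildperf framework, not
its implementation, and it is Unix-only because of the ``resource``
module:

```python
import resource
import subprocess
import sys
import time

def measure_cmd(cmd):
    """Run cmd, returning (wall-clock seconds, child CPU seconds)."""
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    elapsed = time.perf_counter() - start
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    # User + system CPU time consumed by children during the run
    cpu = (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)
    return elapsed, cpu

elapsed, cpu = measure_cmd([sys.executable, "-c", "pass"])
print("wall: %.3fs, cpu: %.3fs" % (elapsed, cpu))
```

The real framework additionally records the numbers into a results
structure keyed by a test ID (``parse_1`` and so on) so that runs can be
compared over time.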
.. _test-writing-considerations:
Considerations When Writing Tests
=================================
When writing good tests, there are several things to keep in mind. Since
things running on the Autobuilder are accessed concurrently by multiple
workers, consider the following:
**Running "cleanall" is not permitted.**
This can delete files from DL_DIR which would potentially break other
builds running in parallel. If this is required, DL_DIR must be set to
an isolated directory.
**Running "cleansstate" is not permitted.**
This can delete files from SSTATE_DIR which would potentially break
other builds running in parallel. If this is required, SSTATE_DIR must
be set to an isolated directory. Alternatively, you can use the "-f"
option with the ``bitbake`` command to "taint" tasks by changing the
sstate checksums to ensure sstate cache items will not be reused.
**Tests should not change the metadata.**
This is particularly true for oe-selftests since these can run in
parallel and changing metadata leads to changing checksums, which
confuses BitBake while running in parallel. If this is necessary, copy
layers to a temporary location and modify them. Some tests need to
change metadata, such as the devtool tests. To prevent the metadata
from changing, set up temporary copies of that data first.
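The underlying reason metadata changes are disruptive is that task
signatures are computed from hashes over the task's inputs, so any
metadata edit produces a new signature and therefore an sstate cache
miss. A toy illustration with ``hashlib`` follows; real signatures come
from BitBake's signature generator, not from anything this simple:

```python
import hashlib

def task_signature(recipe_text, task_name):
    """Toy task signature: a hash over the metadata plus the task name."""
    h = hashlib.sha256()
    h.update(task_name.encode())
    h.update(recipe_text.encode())
    return h.hexdigest()

orig = task_signature('SRC_URI = "file://hello.c"', "do_compile")
edited = task_signature('SRC_URI = "file://hello.c file://fix.patch"', "do_compile")

# Any metadata edit changes the signature, invalidating cached results
print(orig == edited)  # False
```

This is also why the "-f" taint trick works: it perturbs the checksum on
purpose so that a cached sstate object cannot be reused.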

***********************************
Project Testing and Release Process
***********************************
.. _test-daily-devel:
Day to Day Development
======================
This section details how the project tests changes, through automation
on the Autobuilder or with the assistance of QA teams, through to making
releases.
The project aims to test changes against our test matrix before those
changes are merged into the master branch. As such, changes are queued
up in batches either in the ``master-next`` branch in the main trees, or
in user trees such as ``ross/mut`` in ``poky-contrib`` (Ross Burton
helps review and test patches and this is his testing tree).
We have two broad categories of test builds, "full" and "quick". On the
Autobuilder, these can be seen as "a-quick" and "a-full", simply for
ease of sorting in the UI. Use our Autobuilder console view to see where
we manage most test-related items, available at:
https://autobuilder.yoctoproject.org/typhoon/#/console.
Builds are triggered manually when the test branches are ready. The
builds are monitored by the SWAT team. For additional information, see
https://wiki.yoctoproject.org/wiki/Yocto_Build_Failure_Swat_Team.
If successful, the changes would usually be merged to the ``master``
branch. If not successful, someone would respond to the changes on the
mailing list explaining that there was a failure in testing. The choice
of quick or full would depend on the type of changes and the speed with
which the result was required.
The Autobuilder does build the ``master`` branch once daily for several
reasons, in particular, to ensure the current ``master`` branch does
build, but also to keep ``yocto-testresults``
(http://git.yoctoproject.org/cgit.cgi/yocto-testresults/),
buildhistory
(http://git.yoctoproject.org/cgit.cgi/poky-buildhistory/), and
our sstate up to date. On the weekend, there is a master-next build
instead to ensure the test results are updated for the less frequently
run targets.
Performance builds (buildperf-\* targets in the console) are triggered
separately every six hours and automatically push their results to the
buildstats repository at:
http://git.yoctoproject.org/cgit.cgi/yocto-buildstats/.
The 'quick' targets have been selected to be the ones which catch the
most failures or give the most valuable data. We run 'fast' ptests in
this case for example but not the ones which take a long time. The quick
target doesn't include \*-lsb builds for all architectures, some world
builds and doesn't trigger performance tests or ltp testing. The full
build includes all these things and is slower but more comprehensive.
.. _test-release-builds:
Release Builds
==============
The project typically has two major releases a year with a six-month
cadence, in April and October. Between these there are a number of
milestone releases (usually four), with the final one being
stabilization only, along with point releases of our stable branches.
The build and release process for these project releases is similar to
that in `Day to Day Development <#test-daily-devel>`__, in that the
a-full target of the Autobuilder is used, but in addition the form is
configured to generate and publish artefacts, and the milestone number,
version, release candidate number, and other information is entered. The
box to "generate an email to QA" is also checked.
When the build completes, an email is sent out using the send-qa-email
script in the ``yocto-autobuilder-helper`` repository to the list of
people configured for that release. Release builds are placed into a
directory on the Autobuilder
(https://autobuilder.yocto.io/pub/releases), which is included in the
email. The process from here is more manual and control is effectively
passed to release engineering.
The next steps include:
- QA teams respond to the email saying which tests they plan to run and
when the results will be available.
- QA teams run their tests and share their results in the
  yocto-testresults-contrib repository, along with a summary of their
  findings.
- Release engineering prepare the release as per their process.
- Test results from the QA teams are included into the release in
separate directories and also uploaded to the yocto-testresults
repository alongside the other test results for the given revision.
- The QA report in the final release is regenerated using resulttool to
include the new test results and the test summaries from the teams
(as headers to the generated report).
- The release is checked against the release checklist and release
readiness criteria.
- A final decision on whether to release is made by the YP TSC who have
final oversight on release readiness.

*******************************************
Understanding the Yocto Project Autobuilder
*******************************************
Execution Flow within the Autobuilder
=====================================
The “a-full” and “a-quick” targets are the usual entry points into the
Autobuilder and it makes sense to follow the process through the system
starting there. This is best visualised from the Autobuilder Console
view (https://autobuilder.yoctoproject.org/typhoon/#/console).
Each item along the top of that view represents some “target build” and
these targets are all run in parallel. The full build will trigger the
majority of them, the “quick” build will trigger some subset of them.
The Autobuilder effectively runs whichever configuration is defined for
each of those targets on a separate buildbot worker. To understand the
configuration, you need to look at the entry in the ``config.json`` file
within the ``yocto-autobuilder-helper`` repository. The targets are
defined in the overrides section. A quick example could be qemux86-64,
which looks like::

   "qemux86-64" : {
       "MACHINE" : "qemux86-64",
       "TEMPLATE" : "arch-qemu",
       "step1" : {
           "extravars" : [
               "IMAGE_FSTYPES_append = ' wic wic.bmap'"
           ]
       }
   },

And to expand that, you need the "arch-qemu" entry from the "templates"
section, which looks like::

   "arch-qemu" : {
       "BUILDINFO" : true,
       "BUILDHISTORY" : true,
       "step1" : {
           "BBTARGETS" : "core-image-sato core-image-sato-dev core-image-sato-sdk core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk",
           "SANITYTARGETS" : "core-image-minimal:do_testimage core-image-sato:do_testimage core-image-sato-sdk:do_testimage core-image-sato:do_testsdk"
       },
       "step2" : {
           "SDKMACHINE" : "x86_64",
           "BBTARGETS" : "core-image-sato:do_populate_sdk core-image-minimal:do_populate_sdk_ext core-image-sato:do_populate_sdk_ext",
           "SANITYTARGETS" : "core-image-sato:do_testsdk core-image-minimal:do_testsdkext core-image-sato:do_testsdkext"
       },
       "step3" : {
           "BUILDHISTORY" : false,
           "EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; DISPLAY=:1 oe-selftest ${HELPERSTMACHTARGS} -j 15"],
           "ADDLAYER" : ["${BUILDDIR}/../meta-selftest"]
       }
   },

Combining these two entries you can see that "qemux86-64" is a
three-step build where ``bitbake BBTARGETS`` would be run, then
``bitbake SANITYTARGETS`` for each step; all for
``MACHINE="qemux86-64"`` but with differing ``SDKMACHINE`` settings. In
step 1 an extra variable is added to the ``auto.conf`` file to enable
wic image generation.
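The way a target entry overlays its template can be sketched as a
recursive dictionary merge. The following is a simplified illustration
only; the ``merge`` helper is invented for this sketch and is not the
Helper scripts' actual code:

```python
# Simplified illustration of overlaying a target entry onto its
# template; 'merge' is invented for this sketch.

def merge(template, override):
    """Recursively overlay 'override' onto 'template'."""
    result = dict(template)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

templates = {
    "arch-qemu": {
        "BUILDINFO": True,
        "step1": {"BBTARGETS": "core-image-sato core-image-minimal"},
    }
}

overrides = {
    "qemux86-64": {
        "MACHINE": "qemux86-64",
        "TEMPLATE": "arch-qemu",
        "step1": {"extravars": ["IMAGE_FSTYPES_append = ' wic wic.bmap'"]},
    }
}

target = overrides["qemux86-64"]
config = merge(templates[target["TEMPLATE"]], target)
# config now carries both the template defaults and the
# target-specific settings.
```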
While not every detail of this is covered here, you can see how the
templating mechanism allows quite complex configurations to be built up
while keeping duplication and repetition to a minimum.
The different build targets are designed to allow for parallelisation,
so different machines are usually built in parallel, while operations
using the same machine and metadata are built sequentially, with the aim
of optimising build efficiency as much as possible.
The ``config.json`` file is processed by the scripts in the Helper
repository in the ``scripts`` directory. The following section details
how this works.
.. _test-autobuilder-target-exec-overview:
Autobuilder Target Execution Overview
=====================================
For each given target in a build, the Autobuilder executes several
steps. These are configured in ``yocto-autobuilder2/builders.py`` and
roughly consist of:
1. *Run ``clobberdir``*
This cleans out any previous build. Old builds are left around to
allow easier debugging of failed builds. For additional information,
see ```clobberdir`` <#test-clobberdir>`__.
2. *Obtain yocto-autobuilder-helper*
This step clones the ``yocto-autobuilder-helper`` git repository.
This is necessary to prevent the requirement to maintain all the
release or project-specific code within Buildbot. The branch chosen
matches the release being built so we can support older releases and
still make changes in newer ones.
3. *Write layerinfo.json*
This transfers data in the Buildbot UI when the build was configured
to the Helper.
4. *Call scripts/shared-repo-unpack*
This is a call into the Helper scripts to set up a checkout of all
the pieces this build might need. It might clone the BitBake
repository and the OpenEmbedded-Core repository. It may clone the
Poky repository, as well as additional layers. It will use the data
from the ``layerinfo.json`` file to help understand the
configuration. It will also use a local cache of repositories to
speed up the clone checkouts. For additional information, see
`Autobuilder Clone Cache <#test-autobuilder-clone-cache>`__.
This step has two possible modes of operation. If the build is part
of a parent build, it's possible that all the repositories needed may
already be available, ready in a pre-prepared directory. An "a-quick"
or "a-full" build would prepare this before starting the other
sub-target builds. This is done for two reasons:
- the upstream may change during a build, for example from a forced
  push, and this ensures we have matching content for the whole build

- if 15 Workers all tried to pull the same data from the same repos, we
  can hit resource limits on upstream servers, as they can think they
  are under some kind of network attack
This pre-prepared directory is shared among the Workers over NFS. If
the build is an individual build and there is no "shared" directory
available, it would clone from the cache and the upstreams as
necessary. This is considered the fallback mode.
5. *Call scripts/run-config*
This is another call into the Helper scripts where it's expected that
the main functionality of this target will be executed.
.. _test-autobuilder-tech:
Autobuilder Technology
======================
The Autobuilder has Yocto Project-specific functionality to allow builds
to operate with increased efficiency and speed.
.. _test-clobberdir:
clobberdir
----------
When deleting files, the Autobuilder uses ``clobberdir``, which is a
special script that moves files to a special location, rather than
deleting them. Files in this location are deleted by an ``rm`` command,
which is run under ``ionice -c 3``. As a result, the deletion only
happens when there is idle IO capacity on the Worker. The Autobuilder
Worker Janitor runs this deletion. See `Autobuilder Worker
Janitor <#test-autobuilder-worker-janitor>`__.
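The underlying pattern can be sketched in a few lines: renaming the
doomed directory into a trash area is nearly instant, and a later
janitor pass deletes the trash contents at idle IO priority. This is an
illustrative sketch only; the real ``clobberdir`` script, its trash
location, and these function names differ:

```python
import os
import shutil
import uuid

def clobberdir(path, trashdir):
    """Move 'path' into a trash area instead of deleting it in place.
    The rename is quick, so the build is not blocked by a slow delete."""
    os.makedirs(trashdir, exist_ok=True)
    shutil.move(path, os.path.join(trashdir, uuid.uuid4().hex))

def janitor_empty_trash(trashdir):
    """Run later by the janitor, ideally at idle IO priority
    (the real deletion runs 'rm' under 'ionice -c 3')."""
    for entry in os.listdir(trashdir):
        shutil.rmtree(os.path.join(trashdir, entry), ignore_errors=True)
```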
.. _test-autobuilder-clone-cache:
Autobuilder Clone Cache
-----------------------
Cloning repositories from scratch each time they are required was slow
on the Autobuilder. We therefore have a stash of commonly used
repositories pre-cloned on the Workers. Data is fetched from these
during clones first, then "topped up" with later revisions from any
upstream when necessary. The cache is maintained by the Autobuilder
Worker Janitor. See `Autobuilder Worker
Janitor <#test-autobuilder-worker-janitor>`__.
.. _test-autobuilder-worker-janitor:
Autobuilder Worker Janitor
--------------------------
This is a process running on each Worker that performs two basic
operations: background file deletion at IO idle (see `Target
Execution: clobberdir <#test-list-tgt-exec-clobberdir>`__) and
maintenance of a cache of cloned repositories to improve the speed at
which the system can check out repositories.
.. _test-shared-dl-dir:
Shared DL_DIR
-------------
The Workers are all connected over NFS, which allows ``DL_DIR`` to be
shared between them. This reduces network accesses from the system and
speeds up the build. Usage of the directory within the build system is
designed so that it can safely be shared over NFS.
.. _test-shared-sstate-cache:
Shared SSTATE_DIR
-----------------
The Workers are all connected over NFS, which allows the ``sstate``
directory to be shared between them. This means once a Worker has built
an artefact, all the others can benefit from it. Usage of the directory
within the build system is designed for sharing over NFS.
.. _test-resulttool:
Resulttool
----------
All of the different tests run as part of the build generate output into
``testresults.json`` files. This allows us to determine which tests ran
in a given build and their status. Additional information, such as
failure logs or the time taken to run the tests, may also be included.
Resulttool is part of OpenEmbedded-Core and is used to manipulate these
JSON results files. It has the ability to merge files together, display
reports of the test results and compare different result files.
For details, see https://wiki.yoctoproject.org/wiki/Resulttool.
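Conceptually, merging is a matter of combining the per-test entries
from several ``testresults.json`` files into one set, with later files
taking precedence. The following sketch illustrates the idea only and
is not resulttool's actual implementation:

```python
import json

def merge_results(paths):
    """Combine several testresults.json files into one dictionary.
    Later files win when the same result id appears in more than one."""
    merged = {}
    for path in paths:
        with open(path) as f:
            merged.update(json.load(f))
    return merged
```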
.. _test-run-config-tgt-execution:
run-config Target Execution
===========================
The ``scripts/run-config`` execution is where most of the work within
the Autobuilder happens. It runs through a number of steps; the first
are general setup steps that are run once and include:
1. Set up any ``buildtools-tarball`` if configured.
2. Call "buildhistory-init" if buildhistory is configured.
For each step that is configured in ``config.json``, it will perform the
following:
1. Add any layers that are specified using the
``bitbake-layers add-layer`` command (logging as stepXa)
2. Call the ``scripts/setup-config`` script to generate the necessary
``auto.conf`` configuration file for the build
3. Run the ``bitbake BBTARGETS`` command (logging as stepXb)
4. Run the ``bitbake SANITYTARGETS`` command (logging as stepXc)
5. Run the ``EXTRACMDS`` commands, which are run within the BitBake
   build environment (logging as stepXd)
6. Run the ``EXTRAPLAINCMDS`` command(s), which are run outside the
   BitBake build environment (logging as stepXd)
7. Remove any layers added in `step
1 <#test-run-config-add-layers-step>`__ using the
``bitbake-layers remove-layer`` command (logging as stepXa)
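The per-step sequence above (omitting the ``setup-config`` call) can be
sketched as follows, which also shows how the stepXa to stepXd log
names map to the sub-operations of step X. The ``step_commands`` helper
is invented for this illustration and is not the actual ``run-config``
code:

```python
def step_commands(stepnum, step):
    """Return (logname, command) pairs for one configured step, in the
    order described above. The a-d log suffixes correspond to layer
    handling, BBTARGETS, SANITYTARGETS and extra commands."""
    cmds = []
    for layer in step.get("ADDLAYER", []):
        cmds.append(("step%da" % stepnum, "bitbake-layers add-layer %s" % layer))
    if step.get("BBTARGETS"):
        cmds.append(("step%db" % stepnum, "bitbake %s" % step["BBTARGETS"]))
    if step.get("SANITYTARGETS"):
        cmds.append(("step%dc" % stepnum, "bitbake %s" % step["SANITYTARGETS"]))
    for cmd in step.get("EXTRACMDS", []):
        cmds.append(("step%dd" % stepnum, cmd))
    for layer in step.get("ADDLAYER", []):
        cmds.append(("step%da" % stepnum, "bitbake-layers remove-layer %s" % layer))
    return cmds
```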
Once the execution steps above complete, ``run-config`` executes a set
of post-build steps, including:
1. Call ``scripts/publish-artifacts`` to collect any output which is to
be saved from the build.
2. Call ``scripts/collect-results`` to collect any test results to be
saved from the build.
3. Call ``scripts/upload-error-reports`` to send any error reports
generated to the remote server.
4. Clean up the build directory using
   ```clobberdir`` <#test-clobberdir>`__ if the build was successful,
   else rename it to "build-renamed" for potential future debugging.
.. _test-deploying-yp-autobuilder:
Deploying Yocto Autobuilder
===========================
The most up-to-date information about how to set up and deploy your own
Autobuilder can be found in the README.md file in the
``yocto-autobuilder2`` repository.
We hope that people can use the ``yocto-autobuilder2`` code directly but
it is inevitable that users will end up needing to heavily customise the
``yocto-autobuilder-helper`` repository, particularly the
``config.json`` file as they will want to define their own test matrix.
The Autobuilder supports two customization options:
- variable substitution
- overlaying configuration files
The standard ``config.json`` minimally attempts to allow substitution of
the paths. The Helper script repository includes a
``local-example.json`` file to show how you could override these from a
separate configuration file. Pass the following into the environment of
the Autobuilder::

   $ ABHELPER_JSON="config.json local-example.json"

As another example, you could also pass the following into the
environment::

   $ ABHELPER_JSON="config.json /some/location/local.json"

One issue users often run into is validation of the ``config.json``
files. A tip for minimizing issues from invalid JSON files is to use a
Git ``pre-commit-hook.sh`` script to verify the JSON file before
committing it. Create a symbolic link as follows::

   $ ln -s ../../scripts/pre-commit-hook.sh .git/hooks/pre-commit
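The check such a hook needs to make is simply whether the file parses
as JSON. A minimal sketch of a validator (illustrative only; the actual
``pre-commit-hook.sh`` in the scripts directory may work differently,
and ``check_json`` is an invented name):

```python
import json

def check_json(path):
    """Return None when 'path' parses as JSON, else an error message.
    A pre-commit hook would run this over each staged .json file and
    refuse the commit when any message is returned."""
    try:
        with open(path) as f:
            json.load(f)
    except ValueError as e:
        return "%s: %s" % (path, e)
    return None
```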

=====================================
Yocto Project Test Environment Manual
=====================================
.. toctree::
:caption: Table of Contents
:numbered:
test-manual-intro
test-manual-test-process
test-manual-understand-autobuilder

************
Introduction
************
Toaster is a web interface to the Yocto Project's `OpenEmbedded build
system <&YOCTO_DOCS_REF_URL;#build-system-term>`__. The interface
enables you to configure and run your builds. Information about builds
is collected and stored in a database. You can use Toaster to configure
and start builds on multiple remote build servers.
.. _intro-features:
Toaster Features
================
Toaster allows you to configure and run builds, and it provides
extensive information about the build process.
- *Configure and Run Builds:* You can use the Toaster web interface to
configure and start your builds. Builds started using the Toaster web
interface are organized into projects. When you create a project, you
are asked to select a release, or version of the build system you
want to use for the project builds. As shipped, Toaster supports
Yocto Project releases 1.8 and beyond. With the Toaster web
interface, you can:
- Browse layers listed in the various `layer
  sources <#layer-source>`__ that are available in your project
  (e.g. the OpenEmbedded Layer Index at
  http://layers.openembedded.org/layerindex/).
- Browse images, recipes, and machines provided by those layers.
- Import your own layers for building.
- Add and remove layers from your configuration.
- Set configuration variables.
- Select a target or multiple targets to build.
- Start your builds.
Toaster also allows you to configure and run your builds from the
command line, and switch between the command line and the web
interface at any time. Builds started from the command line appear
within a special Toaster project called "Command line builds".
- *Information About the Build Process:* Toaster also records extensive
information about your builds. Toaster collects data for builds you
start from the web interface and from the command line as long as
Toaster is running.
.. note::
You must start Toaster before the build or it will not collect
build data.
With Toaster you can:
- See what was built (recipes and packages) and what packages were
installed into your final image.
- Browse the directory structure of your image.
- See the value of all variables in your build configuration, and
which files set each value.
- Examine error, warning, and trace messages to aid in debugging.
- See information about the BitBake tasks executed and reused during
your build, including those that used shared state.
- See dependency relationships between recipes, packages, and tasks.
- See performance information such as build time, task time, CPU
usage, and disk I/O.
For an overview of Toaster shipped with the Yocto Project DISTRO
Release, see the "`Toaster - Yocto Project
2.2 <https://youtu.be/BlXdOYLgPxA>`__" video.
.. _toaster-installation-options:
Installation Options
====================
You can set Toaster up to run as a local instance or as a shared hosted
service.
When Toaster is set up as a local instance, all the components reside on
a single build host. Fundamentally, a local instance of Toaster is
suited for a single user developing on a single build host.
Toaster as a hosted service is suited for multiple users developing
across several build hosts. When Toaster is set up as a hosted service,
its components can be spread across several machines:

**********************
Concepts and Reference
**********************
In order to configure and use Toaster, you should understand some
concepts and have some basic command reference material available. This
final chapter provides conceptual information on layer sources,
releases, and JSON configuration files. Also provided is a quick look at
some useful ``manage.py`` commands that are Toaster-specific.
Information on ``manage.py`` commands does exist across the Web and the
information in this manual by no means attempts to provide a
comprehensive command reference.
Layer Source
============
In general, a "layer source" is a source of information about existing
layers. In particular, we are concerned with layers that you can use
with the Yocto Project and Toaster. This chapter describes a particular
type of layer source called a "layer index."
A layer index is a web application that contains information about a set
of custom layers. A good example of an existing layer index is the
OpenEmbedded Layer Index. A public instance of this layer index exists
at ` <http://layers.openembedded.org>`__. You can find the code for this
layer index's web application at
` <http://git.yoctoproject.org/cgit/cgit.cgi/layerindex-web/>`__.
When you tie a layer source into Toaster, it can query the layer source
through a
`REST <http://en.wikipedia.org/wiki/Representational_state_transfer>`__
API, store the information about the layers in the Toaster database, and
then show the information to users. Users are then able to view that
information and build layers from Toaster itself without worrying about
cloning or editing the BitBake layers configuration file
``bblayers.conf``.
Tying a layer source into Toaster is convenient when you have many
custom layers that need to be built on a regular basis by a community of
developers. In fact, Toaster comes pre-configured with the OpenEmbedded
Metadata Index.
.. note::
You do not have to use a layer source to use Toaster. Tying into a
layer source is optional.
.. _layer-source-using-with-toaster:
Setting Up and Using a Layer Source
-----------------------------------
To use your own layer source, you need to set up the layer source and
then tie it into Toaster. This section describes how to tie into a layer
index in a manner similar to the way Toaster ties into the OpenEmbedded
Metadata Index.
Understanding Your Layers
~~~~~~~~~~~~~~~~~~~~~~~~~
The obvious first step for using a layer index is to have several custom
layers that developers build and access using the Yocto Project on a
regular basis. This set of layers needs to exist and you need to be
familiar with where they reside. You will need that information when you
set up the code for the web application that "hooks" into your set of
layers.
For general information on layers, see the "`The Yocto Project Layer
Model <&YOCTO_DOCS_OM_URL;#the-yocto-project-layer-model>`__" section in
the Yocto Project Overview and Concepts Manual. For information on how
to create layers, see the "`Understanding and Creating
Layers <&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers>`__"
section in the Yocto Project Development Tasks Manual.
.. _configuring-toaster-to-hook-into-your-layer-source:
Configuring Toaster to Hook Into Your Layer Index
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you want Toaster to use your layer index, you must host the web
application in a server to which Toaster can connect. You also need to
give Toaster the information about your layer index. In other words, you
have to configure Toaster to use your layer index. This section
describes two methods by which you can configure and use your layer
index.
In the previous section, the code for the OpenEmbedded Metadata Index
(i.e. http://layers.openembedded.org) was referenced. You can use this
code, which is at
http://git.yoctoproject.org/cgit/cgit.cgi/layerindex-web/, as a base
to create your own layer index.
Use the Administration Interface
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Access the administration interface through a browser by entering the
URL of your Toaster instance and adding "``/admin``" to the end of the
URL. As an example, if you are running Toaster locally, use the
following URL: http://127.0.0.1:8000/admin
The administration interface has a "Layer sources" section that includes
an "Add layer source" button. Click that button and provide the required
information. Make sure you select "layerindex" as the layer source type.
Use the Fixture Feature
^^^^^^^^^^^^^^^^^^^^^^^
The Django fixture feature overrides the default layer server when you
use it to specify a custom URL. To use the fixture feature, create (or
edit) the file ``bitbake/lib/toaster/orm/fixtures/custom.xml``, and then
set the following Toaster setting to your custom URL::

   <?xml version="1.0" ?>
   <django-objects version="1.0">
     <object model="orm.toastersetting" pk="100">
       <field name="name" type="CharField">CUSTOM_LAYERINDEX_SERVER</field>
       <field name="value" type="CharField">https://layers.my_organization.org/layerindex/branch/master/layers/</field>
     </object>
   </django-objects>

When you start Toaster for the first time, or if you delete the file
``toaster.sqlite`` and restart, the database will populate cleanly from
this layer index server.
Once the information has been updated, verify the new layer information
is available by using the Toaster web interface. To do that, visit the
"All compatible layers" page inside a Toaster project. The layers from
your layer source should be listed there.
If you change the information in your layer index server, refresh the
Toaster database by running the following command::

   $ bitbake/lib/toaster/manage.py lsupdates

If Toaster can reach the API URL, you should see a message telling you
that Toaster is updating the layer source information.
.. _toaster-releases:
Releases
========
When you create a Toaster project using the web interface, you are asked
to choose a "Release." In the context of Toaster, the term "Release"
refers to a set of layers and a BitBake version the OpenEmbedded build
system uses to build something. As shipped, Toaster is pre-configured
with releases that correspond to Yocto Project release branches.
However, you can modify, delete, and create new releases according to
your needs. This section provides some background information on
releases.
.. _toaster-releases-supported:
Pre-Configured Releases
-----------------------
As shipped, Toaster is configured to use a specific set of releases. Of
course, you can always configure Toaster to use any release. For
example, you might want your project to build against a specific commit
of any of the "out-of-the-box" releases. Or, you might want your project
to build against different revisions of OpenEmbedded and BitBake.
As shipped, Toaster is configured to work with the following releases:
- *Yocto Project DISTRO "DISTRO_NAME" or OpenEmbedded "DISTRO_NAME":*
  This release causes your Toaster projects to build against the head
  of the DISTRO_NAME_NO_CAP branch at
  &YOCTO_GIT_URL;/cgit/cgit.cgi/poky/log/?h=rocko or
  http://git.openembedded.org/openembedded-core/commit/?h=rocko.

- *Yocto Project "Master" or OpenEmbedded "Master":* This release
  causes your Toaster Projects to build against the head of the master
  branch, which is where active development takes place, at
  &YOCTO_GIT_URL;/cgit/cgit.cgi/poky/log/ or
  http://git.openembedded.org/openembedded-core/log/.
- *Local Yocto Project or Local OpenEmbedded:* This release causes your
Toaster Projects to build against the head of the ``poky`` or
``openembedded-core`` clone you have local to the machine running
Toaster.
Configuring Toaster
===================
In order to use Toaster, you must configure the database with the
default content. The following subsections describe various aspects of
Toaster configuration.
Configuring the Workflow
------------------------
The ``bldcontrol/management/commands/checksettings.py`` file controls
workflow configuration. The following steps outline the process to
initially populate this database.
1. The default project settings are set from
``orm/fixtures/settings.xml``.
2. The default project distro and layers are added from
``orm/fixtures/poky.xml`` if poky is installed. If poky is not
installed, they are added from ``orm/fixtures/oe-core.xml``.
3. If the ``orm/fixtures/custom.xml`` file exists, then its values are
added.
4. The layer index is then scanned and added to the database.
Once these steps complete, Toaster is set up and ready to use.
Customizing Pre-Set Data
------------------------
The pre-set data for Toaster is easily customizable. You can create the
``orm/fixtures/custom.xml`` file to customize the values that go into
the database. Customization is additive, and can either extend or
completely replace the existing values.
You use the ``orm/fixtures/custom.xml`` file to change the default
project settings for the machine, distro, file images, and layers. When
creating a new project, you can use the file to define the offered
alternate project release selections. For example, you can add one or
more additional selections that present custom layer sets or distros,
and any other local or proprietary content.
Additionally, you can completely disable the content from the
``oe-core.xml`` and ``poky.xml`` files by defining the section shown
below in the ``settings.xml`` file. For example, this option is
particularly useful if your custom configuration defines fewer releases
or layers than the default fixture files.
The following example sets "name" to "CUSTOM_XML_ONLY" and its value to
"True"::

   <object model="orm.toastersetting" pk="99">
     <field type="CharField" name="name">CUSTOM_XML_ONLY</field>
     <field type="CharField" name="value">True</field>
   </object>
Understanding Fixture File Format
---------------------------------
The following is an overview of the file format used by the
``oe-core.xml``, ``poky.xml``, and ``custom.xml`` files.
The following subsections describe each of the sections in the fixture
files, and outline an example section of the XML code you can use to
help understand this information and create a local ``custom.xml`` file.
Defining the Default Distro and Other Values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section defines the default distro value for new projects. By
default, it reserves the first Toaster Setting record "1". The following
demonstrates how to set the project default value for
`DISTRO <&YOCTO_DOCS_REF_URL;#var-DISTRO>`__::

   <!-- Set the project default value for DISTRO -->
   <object model="orm.toastersetting" pk="1">
     <field type="CharField" name="name">DEFCONF_DISTRO</field>
     <field type="CharField" name="value">poky</field>
   </object>

You can override other default project values by adding additional
Toaster Setting sections such as any of the settings coming from the
``settings.xml`` file. Also, you can add custom values that are included
in the BitBake environment. The "pk" values must be unique. By
convention, values that set default project values have a "DEFCONF"
prefix.
Defining BitBake Version
~~~~~~~~~~~~~~~~~~~~~~~~
The following defines which version of BitBake is used for the following
release selection::

   <!-- BitBake versions which correspond to the metadata release -->
   <object model="orm.bitbakeversion" pk="1">
     <field type="CharField" name="name">rocko</field>
     <field type="CharField" name="giturl">git://git.yoctoproject.org/poky</field>
     <field type="CharField" name="branch">rocko</field>
     <field type="CharField" name="dirpath">bitbake</field>
   </object>
.. _defining-releases:
Defining Release
~~~~~~~~~~~~~~~~
The following defines the releases when you create a new project::

   <!-- Releases available -->
   <object model="orm.release" pk="1">
     <field type="CharField" name="name">rocko</field>
     <field type="CharField" name="description">Yocto Project 2.4 "Rocko"</field>
     <field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">1</field>
     <field type="CharField" name="branch_name">rocko</field>
     <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=rocko">Yocto Project Rocko branch</a>.</field>
   </object>

The "pk" value must match the above respective BitBake version record.
Defining the Release Default Layer Names
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following defines the default layers for each release::

   <!-- Default project layers for each release -->
   <object model="orm.releasedefaultlayer" pk="1">
     <field rel="ManyToOneRel" to="orm.release" name="release">1</field>
     <field type="CharField" name="layer_name">openembedded-core</field>
   </object>

The "pk" values in the example above should start at "1" and increment
uniquely. You can use the same layer name in multiple releases.
Defining Layer Definitions
~~~~~~~~~~~~~~~~~~~~~~~~~~
Layer definitions are the most complex. The following defines each of
the layers, and then defines the exact layer version of the layer used
for each respective release. You must have one ``orm.layer`` entry for
each layer. Then, with each entry you need a set of
``orm.layer_version`` entries that connects the layer with each release
that includes the layer. In general, all releases include the layer. ::

   <object model="orm.layer" pk="1">
     <field type="CharField" name="name">openembedded-core</field>
     <field type="CharField" name="layer_index_url"></field>
     <field type="CharField" name="vcs_url">git://git.yoctoproject.org/poky</field>
     <field type="CharField" name="vcs_web_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky</field>
     <field type="CharField" name="vcs_web_tree_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
     <field type="CharField" name="vcs_web_file_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
   </object>
   <object model="orm.layer_version" pk="1">
     <field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
     <field type="IntegerField" name="layer_source">0</field>
     <field rel="ManyToOneRel" to="orm.release" name="release">1</field>
     <field type="CharField" name="branch">rocko</field>
     <field type="CharField" name="dirpath">meta</field>
   </object>
   <object model="orm.layer_version" pk="2">
     <field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
     <field type="IntegerField" name="layer_source">0</field>
     <field rel="ManyToOneRel" to="orm.release" name="release">2</field>
     <field type="CharField" name="branch">HEAD</field>
     <field type="CharField" name="commit">HEAD</field>
     <field type="CharField" name="dirpath">meta</field>
   </object>
   <object model="orm.layer_version" pk="3">
     <field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
     <field type="IntegerField" name="layer_source">0</field>
     <field rel="ManyToOneRel" to="orm.release" name="release">3</field>
     <field type="CharField" name="branch">master</field>
     <field type="CharField" name="dirpath">meta</field>
   </object>

The layer "pk" values above must be unique, and typically start at "1".
The layer version "pk" values must also be unique across all layers, and
typically start at "1".
Remote Toaster Monitoring
=========================
Toaster has an API that allows remote management applications to
directly query the state of the Toaster server and its builds in a
machine-to-machine manner. This API uses the
`REST <http://en.wikipedia.org/wiki/Representational_state_transfer>`__
interface and the transfer of JSON files. For example, you might monitor
a build inside a container through well-known HTTP ports in order to
easily access a Toaster server inside the container. In this example,
using the direct JSON API avoids having to parse the web pages meant
for human display.
Checking Health
---------------
Before you use remote Toaster monitoring, you should do a health check.
To do this, ping the Toaster server using the following call to see if
it is still alive::

   http://host:port/health

Be sure to provide values for host and port. If the server is alive, you
will get the response HTML::

   <!DOCTYPE html>
   <html lang="en">
   <head><title>Toaster Health</title></head>
   <body>Ok</body>
   </html>
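From a monitoring script, the health check reduces to fetching that URL
and looking for "Ok" in the response body. A minimal sketch, where the
helper names and the use of ``urllib`` are illustrative assumptions
rather than part of Toaster:

```python
from urllib.request import urlopen

def body_says_ok(html):
    """True when the /health response body reports Ok."""
    return "<body>Ok</body>" in html

def toaster_is_healthy(host, port=8000, timeout=5):
    """Fetch http://host:port/health and check the body.
    Returns False when the server cannot be reached."""
    url = "http://%s:%d/health" % (host, port)
    try:
        with urlopen(url, timeout=timeout) as response:
            return body_says_ok(response.read().decode("utf-8", "replace"))
    except OSError:
        return False
```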
Determining Status of Builds in Progress
----------------------------------------
Sometimes it is useful to determine the status of a build in progress.
To get the status of pending builds, use the following call::

   http://host:port/toastergui/api/building

Be sure to provide values for host and port. The output is a JSON file
that itemizes all builds in progress. This file includes the time in
seconds since each respective build started as well as the progress of
the cloning, parsing, and task execution. The following is sample
output for a build in progress::

   {"count": 1,
    "building": [
      {"machine": "beaglebone",
       "seconds": "463.869",
       "task": "927:2384",
       "distro": "poky",
       "clone": "1:1",
       "id": 2,
       "start": "2017-09-22T09:31:44.887Z",
       "name": "20170922093200",
       "parse": "818:818",
       "project": "my_rocko",
       "target": "core-image-minimal"
      }]
   }

The JSON data for this query is returned in a single line. In the
previous example the line has been artificially split for readability.
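The "clone", "parse", and "task" fields are "done:total" counters. As
an illustration, a small Python sketch (the helper names are my own)
can turn the sample payload above into fractional progress values:

```python
import json

def fraction(counter: str) -> float:
    """Convert a "done:total" counter such as "927:2384" to a fraction."""
    done, total = (int(part) for part in counter.split(":"))
    return done / total if total else 0.0

def building_progress(payload: str) -> list:
    """Summarize clone/parse/task progress for each in-progress build."""
    builds = json.loads(payload)["building"]
    return [
        {
            "target": b["target"],
            "clone": fraction(b["clone"]),
            "parse": fraction(b["parse"]),
            "tasks": fraction(b["task"]),
        }
        for b in builds
    ]
```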
Checking Status of Builds Completed
-----------------------------------
Once a build is completed, you get the status when you use the
following call::

   http://host:port/toastergui/api/builds

Be sure to provide values for host and port. The output is a JSON file
that itemizes all complete builds, and includes build summary
information. The following is sample output for a completed build::

   {"count": 1,
    "builds": [
      {"distro": "poky",
       "errors": 0,
       "machine": "beaglebone",
       "project": "my_rocko",
       "stop": "2017-09-22T09:26:36.017Z",
       "target": "quilt-native",
       "seconds": "78.193",
       "outcome": "Succeeded",
       "id": 1,
       "start": "2017-09-22T09:25:17.824Z",
       "warnings": 1,
       "name": "20170922092618"
      }]
   }

The JSON data for this query is returned in a single line. In the
previous example the line has been artificially split for readability.
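For monitoring scripts, the summary fields shown above are enough to
flag problem builds. Here is a minimal Python sketch (the helper name
is my own) that picks out completed builds whose outcome is not
"Succeeded":

```python
import json

def failed_builds(payload: str) -> list:
    """Return (id, name) pairs for completed builds that did not succeed."""
    builds = json.loads(payload)["builds"]
    return [(b["id"], b["name"]) for b in builds if b["outcome"] != "Succeeded"]
```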
Determining Status of a Specific Build
--------------------------------------
Sometimes it is useful to determine the status of a specific build. To
get the status of a specific build, use the following call::

   http://host:port/toastergui/api/build/ID

Be sure to provide values for host, port, and ID. You can find the
value for ID from the Builds Completed query. See the "`Checking Status
of Builds Completed <#checking-status-of-builds-completed>`__" section
for more information.

The output is a JSON file that itemizes the specific build and includes
build summary information. The following is sample output for a
specific build::

   {"build":
     {"distro": "poky",
      "errors": 0,
      "machine": "beaglebone",
      "project": "my_rocko",
      "stop": "2017-09-22T09:26:36.017Z",
      "target": "quilt-native",
      "seconds": "78.193",
      "outcome": "Succeeded",
      "id": 1,
      "start": "2017-09-22T09:25:17.824Z",
      "warnings": 1,
      "name": "20170922092618",
      "cooker_log": "/opt/user/poky/build-toaster-2/tmp/log/cooker/beaglebone/build_20170922_022607.991.log"
     }
   }

The JSON data for this query is returned in a single line. In the
previous example the line has been artificially split for readability.
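A build's summary fields can be condensed into a single status line for
logs or dashboards. This Python sketch (the function name is my own)
works directly from the JSON shape shown above:

```python
import json

def build_summary(payload: str) -> str:
    """Render a one-line summary from the /toastergui/api/build/ID JSON."""
    build = json.loads(payload)["build"]
    return (f'{build["target"]} on {build["machine"]}: {build["outcome"]} '
            f'in {float(build["seconds"]):.0f}s '
            f'({build["errors"]} errors, {build["warnings"]} warnings)')
```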
.. _toaster-useful-commands:

Useful Commands
===============

In addition to the web user interface and the scripts that start and
stop Toaster, command-line commands exist through the ``manage.py``
management script. You can find general documentation on ``manage.py``
at the
`Django <https://docs.djangoproject.com/en/1.7/topics/settings/>`__
site. However, several ``manage.py`` commands have been created that
are specific to Toaster and are used to control configuration and
back-end tasks. You can locate these commands in the `Source
Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__ (e.g. ``poky``)
at ``bitbake/lib/toaster/manage.py``. This section documents those
commands.
.. note::

   -  When using ``manage.py`` commands given a default configuration,
      you must be sure that your working directory is set to the
      `Build Directory <&YOCTO_DOCS_REF_URL;#build-directory>`__. Using
      ``manage.py`` commands from the Build Directory allows Toaster to
      find the ``toaster.sqlite`` file, which is located in the Build
      Directory.

   -  For non-default database configurations, it is possible that you
      can use ``manage.py`` commands from a directory other than the
      Build Directory. To do so, the ``toastermain/settings.py`` file
      must be configured to point to the correct database backend.
.. _toaster-command-buildslist:

``buildslist``
--------------

The ``buildslist`` command lists all builds that Toaster has recorded.
Access the command as follows::

   $ bitbake/lib/toaster/manage.py buildslist

The command returns a list, which includes numeric identifications, of
the builds that Toaster has recorded in the current database.

You need to run the ``buildslist`` command first to identify existing
builds in the database before using the
```builddelete`` <#toaster-command-builddelete>`__ command. Here is an
example that assumes default repository and build directory names::

   $ cd ~/poky/build
   $ python ../bitbake/lib/toaster/manage.py buildslist

If your Toaster database had only one build, the above ``buildslist``
command would return something like the following::

   1: qemux86 poky core-image-minimal
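When scripting ``builddelete`` against many builds, it helps to pull
the numeric ids out of this listing first. A small Python sketch (the
helper name is my own) based on the output format shown above:

```python
import re

def build_ids(listing: str) -> list:
    """Extract numeric build ids from ``buildslist`` output lines,
    e.g. "1: qemux86 poky core-image-minimal" yields 1."""
    ids = []
    for line in listing.splitlines():
        match = re.match(r"\s*(\d+):", line)
        if match:
            ids.append(int(match.group(1)))
    return ids
```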
.. _toaster-command-builddelete:

``builddelete``
---------------

The ``builddelete`` command deletes data associated with a build.
Access the command as follows::

   $ bitbake/lib/toaster/manage.py builddelete build_id

The command deletes all the build data for the specified build_id. This
command is useful for removing old and unused data from the database.

Prior to running the ``builddelete`` command, you need to get the ID
associated with builds by using the
```buildslist`` <#toaster-command-buildslist>`__ command.
.. _toaster-command-perf:

``perf``
--------

The ``perf`` command measures Toaster performance. Access the command
as follows::

   $ bitbake/lib/toaster/manage.py perf

The command is a sanity check that returns page loading times in order
to identify performance problems.
.. _toaster-command-checksettings:

``checksettings``
-----------------

The ``checksettings`` command verifies existing Toaster settings.
Access the command as follows::

   $ bitbake/lib/toaster/manage.py checksettings

Toaster uses settings that are based on the database to configure the
building tasks. The ``checksettings`` command verifies that the
database settings are valid in the sense that they have the minimal
information needed to start a build.

In order for the ``checksettings`` command to work, the database must
be correctly set up and not have existing data. To be sure the database
is ready, you can run the following::

   $ bitbake/lib/toaster/manage.py syncdb
   $ bitbake/lib/toaster/manage.py migrate orm
   $ bitbake/lib/toaster/manage.py migrate bldcontrol

After running these commands, you can run the ``checksettings``
command.
.. _toaster-command-runbuilds:

``runbuilds``
-------------

The ``runbuilds`` command launches scheduled builds. Access the command
as follows::

   $ bitbake/lib/toaster/manage.py runbuilds

The ``runbuilds`` command checks if scheduled builds exist in the
database and then launches them per schedule. The command returns after
the builds start but before they complete. The Toaster Logging
Interface records and updates the database when the builds complete.

****************************
Setting Up and Using Toaster
****************************

Starting Toaster for Local Development
======================================

Once you have set up the Yocto Project and installed the Toaster system
dependencies as described in the "`Preparing to Use
Toaster <#toaster-manual-start>`__" chapter, you are ready to start
Toaster.

Navigate to the root of your `Source
Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__ (e.g. ``poky``)::

   $ cd poky

Once in that directory, source the build environment script::

   $ source oe-init-build-env

Next, from the build directory (e.g. ``poky/build``), start Toaster
using this command::

   $ source toaster start

You can now run your builds from the command line, or with Toaster as
explained in section "`Using the Toaster Web
Interface <#using-the-toaster-web-interface>`__".

To access the Toaster web interface, open your favorite browser and
enter the following::

   http://127.0.0.1:8000

Setting a Different Port
========================
By default, Toaster starts on port 8000. You can use the ``WEBPORT``
parameter to set a different port. For example, the following command
sets the port to "8400"::

   $ source toaster start webport=8400

Setting Up Toaster Without a Web Server
=======================================
You can start a Toaster environment without starting its web server.
This is useful for the following:

-  Capturing a command-line build's statistics into the Toaster
   database for examination later.

-  Capturing a command-line build's statistics when the Toaster server
   is already running.

-  Having one instance of the Toaster web server track and capture
   multiple command-line builds, where each build is started in its own
   "noweb" Toaster environment.

The following commands show how to start a Toaster environment without
starting its web server, perform BitBake operations, and then shut down
the Toaster environment. Once the build is complete, you can close the
Toaster environment. Before closing the environment, however, you
should allow a few minutes to ensure the complete transfer of its
BitBake build statistics to the Toaster database. If you have a
separate Toaster web server instance running, you can watch this
command-line build's progress and examine the results as soon as they
are posted::

   $ source toaster start noweb
   $ bitbake target
   $ source toaster stop

Setting Up Toaster Without a Build Server
=========================================
You can start a Toaster environment with the "New Projects" feature
disabled. Doing so is useful for the following:

-  Sharing your build results over the web server while blocking others
   from starting builds on your host.

-  Allowing only local command-line builds to be captured into the
   Toaster database.

Use the following command to set up Toaster without a build server::

   $ source toaster start nobuild webport=port

Setting Up External Access
==========================

By default, Toaster binds to the loopback address (i.e. localhost),
which does not allow access from external hosts. To allow external
access, use the ``WEBPORT`` parameter to open an address that connects
to the network, specifically the IP address that your NIC uses to
connect to the network. You can also bind to all IP addresses the
computer supports by using the shortcut "0.0.0.0:port".

The following example binds to all IP addresses on the host::

   $ source toaster start webport=0.0.0.0:8400

This example binds to a specific IP address on the host's NIC::

   $ source toaster start webport=192.168.1.1:8400

The Directory for Cloning Layers
================================

Toaster creates a ``_toaster_clones`` directory inside your Source
Directory (i.e. ``poky``) to clone any layers needed for your builds.

Alternatively, if you would like all of your Toaster related files and
directories to be in a particular location other than the default, you
can set the ``TOASTER_DIR`` environment variable, which takes
precedence over your current working directory. Setting this
environment variable causes Toaster to create and use
``$TOASTER_DIR/_toaster_clones``.

.. _toaster-the-build-directory:

The Build Directory
===================

Toaster creates a build directory within your Source Directory (e.g.
``poky``) to execute the builds.

Alternatively, if you would like all of your Toaster related files and
directories to be in a particular location, you can set the
``TOASTER_DIR`` environment variable, which takes precedence over your
current working directory. Setting this environment variable causes
Toaster to use ``$TOASTER_DIR/build`` as the build directory.

.. _toaster-creating-a-django-super-user:

Creating a Django Superuser
===========================

Toaster is built on the `Django
framework <https://www.djangoproject.com/>`__. Django provides an
administration interface you can use to edit Toaster configuration
parameters.

To access the Django administration interface, you must create a
superuser by following these steps:

1. If you used ``pip3``, which is recommended, to set up the Toaster
   system dependencies, you need to be sure the local user path is in
   your ``PATH`` list. To append the pip3 local user path, use the
   following command::

      $ export PATH=$PATH:$HOME/.local/bin

2. From the directory containing the Toaster database, which by default
   is the `Build Directory <&YOCTO_DOCS_REF_URL;#build-directory>`__,
   invoke the ``createsuperuser`` command from ``manage.py``::

      $ cd ~/poky/build
      $ ../bitbake/lib/toaster/manage.py createsuperuser

3. Django prompts you for the username, which you need to provide.

4. Django prompts you for an email address, which is optional.

5. Django prompts you for a password, which you must provide.

6. Django prompts you to re-enter your password for verification.

After completing these steps, the following confirmation message
appears::

   Superuser created successfully.

Creating a superuser allows you to access the Django administration
interface through a browser. The URL for this interface is the same as
the URL used for the Toaster instance with "/admin" on the end. For
example, if you are running Toaster locally, use the following URL::

   http://127.0.0.1:8000/admin

You can use the Django administration interface to set Toaster
configuration parameters such as the build directory, layer sources,
default variable values, and BitBake versions.

.. _toaster-setting-up-a-production-instance-of-toaster:

Setting Up a Production Instance of Toaster
===========================================

You can use a production instance of Toaster to share the Toaster
instance with remote users, multiple users, or both. The production
instance is also the setup that can handle heavier loads on the web
service. Use the instructions in the following sections to set up
Toaster to run builds through the Toaster web interface.
.. _toaster-production-instance-requirements:

Requirements
------------

Be sure you meet the following requirements:

.. note::

   You must comply with all Apache, ``mod-wsgi``, and Mysql
   requirements.

-  Have all the build requirements as described in the "`Preparing to
   Use Toaster <#toaster-manual-start>`__" chapter.

-  Have an Apache webserver.

-  Have ``mod-wsgi`` for the Apache webserver.

-  Use the Mysql database server.

-  If you are using Ubuntu 16.04, run the following::

      $ sudo apt-get install apache2 libapache2-mod-wsgi-py3 mysql-server python3-pip libmysqlclient-dev

-  If you are using Fedora 24 or a RedHat distribution, run the
   following::

      $ sudo dnf install httpd python3-mod_wsgi python3-pip mariadb-server mariadb-devel python3-devel

-  If you are using openSUSE Leap 42.1, run the following::

      $ sudo zypper install apache2 apache2-mod_wsgi-python3 python3-pip mariadb mariadb-client python3-devel

.. _toaster-installation-steps:

Installation
------------

Perform the following steps to install Toaster:

1. Create toaster user and set its home directory to
   ``/var/www/toaster``::

      $ sudo /usr/sbin/useradd toaster -md /var/www/toaster -s /bin/false
      $ sudo su - toaster -s /bin/bash

2. Check out a copy of ``poky`` into the web server directory. You will
   be using ``/var/www/toaster``::

      $ git clone git://git.yoctoproject.org/poky
      $ git checkout DISTRO_NAME_NO_CAP

3. Install Toaster dependencies using the ``--user`` flag which keeps
   the Python packages isolated from your system-provided packages::

      $ cd /var/www/toaster/
      $ pip3 install --user -r ./poky/bitbake/toaster-requirements.txt
      $ pip3 install --user mysqlclient
   .. note::

      Isolating these packages is not required but is recommended.
      Alternatively, you can use your operating system's package
      manager to install the packages.

4. Configure Toaster by editing
   ``/var/www/toaster/poky/bitbake/lib/toaster/toastermain/settings.py``
   as follows:

   -  Edit the
      `DATABASES <https://docs.djangoproject.com/en/1.11/ref/settings/#databases>`__
      settings::

         DATABASES = {
             'default': {
                 'ENGINE': 'django.db.backends.mysql',
                 'NAME': 'toaster_data',
                 'USER': 'toaster',
                 'PASSWORD': 'yourpasswordhere',
                 'HOST': 'localhost',
                 'PORT': '3306',
             }
         }

   -  Edit the
      `SECRET_KEY <https://docs.djangoproject.com/en/1.11/ref/settings/#std:setting-SECRET_KEY>`__::

         SECRET_KEY = 'your_secret_key'

   -  Edit the
      `STATIC_ROOT <https://docs.djangoproject.com/en/1.11/ref/settings/#std:setting-STATIC_ROOT>`__::

         STATIC_ROOT = '/var/www/toaster/static_files/'

5. Add the database and user to the ``mysql`` server defined earlier::

      $ mysql -u root -p
      mysql> CREATE DATABASE toaster_data;
      mysql> CREATE USER 'toaster'@'localhost' identified by 'yourpasswordhere';
      mysql> GRANT all on toaster_data.* to 'toaster'@'localhost';
      mysql> quit

6. Get Toaster to create the database schema, default data, and gather
   the statically-served files::

      $ cd /var/www/toaster/poky/
      $ ./bitbake/lib/toaster/manage.py migrate
      $ TOASTER_DIR=`pwd` TEMPLATECONF='poky' \
        ./bitbake/lib/toaster/manage.py checksettings
      $ ./bitbake/lib/toaster/manage.py collectstatic

   In the previous example, from the ``poky`` directory, the
   ``migrate`` command ensures the database schema changes have
   propagated correctly (i.e. migrations). The next line sets the
   Toaster root directory ``TOASTER_DIR`` and the location of the
   Toaster configuration file ``TOASTER_CONF``, which is relative to
   ``TOASTER_DIR``. The ``TEMPLATECONF`` value reflects the contents of
   ``poky/.templateconf``, and by default, should include the string
   "poky". For more information on the Toaster configuration file, see
   the "`Configuring Toaster <#configuring-toaster>`__" section.

   This line also runs the ``checksettings`` command, which configures
   the location of the Toaster `Build
   Directory <&YOCTO_DOCS_REF_URL;#build-directory>`__. The Toaster
   root directory ``TOASTER_DIR`` determines where the Toaster build
   directory is created on the file system. In the example above,
   ``TOASTER_DIR`` is set as follows::

      /var/www/toaster/poky

   This setting causes the Toaster build directory to be::

      /var/www/toaster/poky/build

   Finally, the ``collectstatic`` command is a Django framework command
   that collects all the statically served files into a designated
   directory to be served up by the Apache web server as defined by
   ``STATIC_ROOT``.

7. Test and/or use the Mysql integration with Toaster's Django web
   server. At this point, you can start up the normal Toaster Django
   web server with the Toaster database in Mysql. You can use this web
   server to confirm that the database migration and data population
   from the Layer Index is complete.

   To start the default Toaster Django web server with the Toaster
   database now in Mysql, use the standard start commands::

      $ source oe-init-build-env
      $ source toaster start

   Additionally, if Django is sufficient for your requirements, you can
   use it for your release system and migrate later to Apache as your
   requirements change.

8. Add an Apache configuration file for Toaster to your Apache web
   server's configuration directory. If you are using Ubuntu or Debian,
   put the file here::

      /etc/apache2/conf-available/toaster.conf

   If you are using Fedora or RedHat, put it here::

      /etc/httpd/conf.d/toaster.conf

   If you are using OpenSUSE, put it here::

      /etc/apache2/conf.d/toaster.conf

   Following is a sample Apache configuration for Toaster you can
   follow::

      Alias /static /var/www/toaster/static_files

      <Directory /var/www/toaster/static_files>
          <IfModule mod_access_compat.c>
              Order allow,deny
              Allow from all
          </IfModule>
          <IfModule !mod_access_compat.c>
              Require all granted
          </IfModule>
      </Directory>

      <Directory /var/www/toaster/poky/bitbake/lib/toaster/toastermain>
          <Files "wsgi.py">
              Require all granted
          </Files>
      </Directory>

      WSGIDaemonProcess toaster_wsgi python-path=/var/www/toaster/poky/bitbake/lib/toaster:/var/www/toaster/.local/lib/python3.4/site-packages
      WSGIScriptAlias / "/var/www/toaster/poky/bitbake/lib/toaster/toastermain/wsgi.py"

      <Location />
          WSGIProcessGroup toaster_wsgi
      </Location>

   If you are using Ubuntu or Debian, you will need to enable the
   config and module for Apache::

      $ sudo a2enmod wsgi
      $ sudo a2enconf toaster
      $ chmod +x bitbake/lib/toaster/toastermain/wsgi.py

   Finally, restart Apache to make sure all new configuration is
   loaded. For Ubuntu, Debian, and OpenSUSE use::

      $ sudo service apache2 restart

   For Fedora and RedHat use::

      $ sudo service httpd restart

9. Prepare the systemd service to run Toaster builds. Here is a sample
   configuration file for the service::

      [Unit]
      Description=Toaster runbuilds

      [Service]
      Type=forking
      User=toaster
      ExecStart=/usr/bin/screen -d -m -S runbuilds /var/www/toaster/poky/bitbake/lib/toaster/runbuilds-service.sh start
      ExecStop=/usr/bin/screen -S runbuilds -X quit
      WorkingDirectory=/var/www/toaster/poky

      [Install]
      WantedBy=multi-user.target

   Prepare the ``runbuilds-service.sh`` script that you need to place
   in the ``/var/www/toaster/poky/bitbake/lib/toaster/`` directory by
   setting up executable permissions::

      #!/bin/bash

      #export http_proxy=http://proxy.host.com:8080
      #export https_proxy=http://proxy.host.com:8080
      #export GIT_PROXY_COMMAND=$HOME/bin/gitproxy

      cd ~/poky/
      source ./oe-init-build-env build
      source ../bitbake/bin/toaster $1 noweb
      [ "$1" == 'start' ] && /bin/bash

10. Run the service::

       # service runbuilds start

    Since the service is running in a detached screen session, you can
    attach to it using this command::

       $ sudo su - toaster
       $ screen -rS runbuilds

    You can detach from the service again using "Ctrl-a" followed by
    "d" key combination.

You can now open up a browser and start using Toaster.
Using the Toaster Web Interface
===============================

The Toaster web interface allows you to do the following:

-  Browse published layers in the `OpenEmbedded Layer
   Index <http://layers.openembedded.org>`__ that are available for
   your selected version of the build system.

-  Import your own layers for building.

-  Add and remove layers from your configuration.

-  Set configuration variables.

-  Select a target or multiple targets to build.

-  Start your builds.

-  See what was built (recipes and packages) and what packages were
   installed into your final image.

-  Browse the directory structure of your image.

-  See the value of all variables in your build configuration, and
   which files set each value.

-  Examine error, warning and trace messages to aid in debugging.

-  See information about the BitBake tasks executed and reused during
   your build, including those that used shared state.

-  See dependency relationships between recipes, packages and tasks.

-  See performance information such as build time, task time, CPU
   usage, and disk I/O.

.. _web-interface-videos:

Toaster Web Interface Videos
----------------------------

Following are several videos that show how to use the Toaster GUI:
- *Build Configuration:* This
`video <https://www.youtube.com/watch?v=qYgDZ8YzV6w>`__ overviews and
demonstrates build configuration for Toaster.
- *Build Custom Layers:* This
`video <https://www.youtube.com/watch?v=QJzaE_XjX5c>`__ shows you how
to build custom layers that are used with Toaster.
- *Toaster Homepage and Table Controls:* This
`video <https://www.youtube.com/watch?v=QEARDnrR1Xw>`__ goes over the
Toaster entry page, and provides an overview of the data manipulation
capabilities of Toaster, which include search, sorting and filtering
by different criteria.
- *Build Dashboard:* This
`video <https://www.youtube.com/watch?v=KKqHYcnp2gE>`__ shows you the
build dashboard, a page providing an overview of the information
available for a selected build.
- *Image Information:* This
`video <https://www.youtube.com/watch?v=XqYGFsmA0Rw>`__ walks through
the information Toaster provides about images: packages installed and
root file system.
- *Configuration:* This
`video <https://www.youtube.com/watch?v=UW-j-T2TzIg>`__ provides
Toaster build configuration information.
- *Tasks:* This `video <https://www.youtube.com/watch?v=D4-9vGSxQtw>`__
shows the information Toaster provides about the tasks run by the
build system.
- *Recipes and Packages Built:* This
`video <https://www.youtube.com/watch?v=x-6dx4huNnw>`__ shows the
information Toaster provides about recipes and packages built.
- *Performance Data:* This
`video <https://www.youtube.com/watch?v=qWGMrJoqusQ>`__ shows the
build performance data provided by Toaster.
.. _a-note-on-the-local-yocto-project-release:

Additional Information About the Local Yocto Project Release
------------------------------------------------------------

This section only applies if you have set up Toaster for local
development, as explained in the "`Starting Toaster for Local
Development <#starting-toaster-for-local-development>`__" section.
When you create a project in Toaster, you will be asked to provide a
name and to select a Yocto Project release. One of the release options
you will find is called "Local Yocto Project".
When you select the "Local Yocto Project" release, Toaster will run
your builds using the local Yocto Project clone you have on your
computer: the same clone you are using to run Toaster. Unless you
manually update this clone, your builds will always use the same Git
revision.
If you select any of the other release options, Toaster will fetch the
tip of your selected release from the upstream `Yocto Project
repository <https://git.yoctoproject.org>`__ every time you run a build.
Fetching this tip effectively means that if your selected release is
updated upstream, the Git revision you are using for your builds will
change. If you are doing development locally, you might not want this
change to happen. In that case, the "Local Yocto Project" release might
be the right choice.
However, the "Local Yocto Project" release will not provide you with any
compatible layers, other than the three core layers that come with the
Yocto Project:
- `openembedded-core <http://layers.openembedded.org/layerindex/branch/master/layer/openembedded-core/>`__
- `meta-poky <http://layers.openembedded.org/layerindex/branch/master/layer/meta-poky/>`__
- `meta-yocto-bsp <http://layers.openembedded.org/layerindex/branch/master/layer/meta-yocto-bsp/>`__
If you want to build any other layers, you will need to manually import
them into your Toaster project, using the "Import layer" page.
.. _toaster-web-interface-preferred-version:

Building a Specific Recipe Given Multiple Versions
--------------------------------------------------

Occasionally, a layer might provide more than one version of the same
recipe. For example, the ``openembedded-core`` layer provides two
versions of the ``bash`` recipe (i.e. 3.2.48 and 4.3.30-r0) and two
versions of the ``which`` recipe (i.e. 2.21 and 2.18). The following
figure shows this exact scenario:
By default, the OpenEmbedded build system builds one of the two recipes.
For the ``bash`` case, version 4.3.30-r0 is built by default.
Unfortunately, Toaster as it exists is not able to override the default
recipe version. If you would like to build bash 3.2.48, you need to set
the
```PREFERRED_VERSION`` <&YOCTO_DOCS_REF_URL;#var-PREFERRED_VERSION>`__
variable. You can do so from Toaster, using the "Add variable" form,
which is available in the "BitBake variables" page of the project
configuration section as shown in the following screen:
To specify ``bash`` 3.2.48 as the version to build, enter
"PREFERRED_VERSION_bash" in the "Variable" field, and "3.2.48" in the
"Value" field. Next, click the "Add variable" button:
After clicking the "Add variable" button, the settings for
``PREFERRED_VERSION`` are added to the bottom of the BitBake variables
list. With these settings, the OpenEmbedded build system builds the
desired version of the recipe rather than the default version:
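For reference, entering the variable through the form is equivalent to
the following BitBake assignment, which you could also place in a
configuration file such as ``conf/local.conf`` when building outside of
Toaster:

```
PREFERRED_VERSION_bash = "3.2.48"
```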

************************
Preparing to Use Toaster
************************

This chapter describes how you need to prepare your system in order to
use Toaster.

.. _toaster-setting-up-the-basic-system-requirements:

Setting Up the Basic System Requirements
========================================

Before you can use Toaster, you need to first set up your build system
to run the Yocto Project. To do this, follow the instructions in the
"`Preparing the Build
Host <&YOCTO_DOCS_DEV_URL;#dev-preparing-the-build-host>`__" section of
the Yocto Project Development Tasks Manual. For Ubuntu/Debian, you
might also need to do an additional install of pip3::

   $ sudo apt-get install python3-pip

.. _toaster-establishing-toaster-system-dependencies:

Establishing Toaster System Dependencies
========================================

Toaster requires extra Python dependencies in order to run. A Toaster
requirements file named ``toaster-requirements.txt`` defines the Python
dependencies. The requirements file is located in the ``bitbake``
directory, which is located in the root directory of the `Source
Directory <&YOCTO_DOCS_REF_URL;#source-directory>`__ (e.g.
``poky/bitbake/toaster-requirements.txt``). The dependencies appear in
a ``pip`` install-compatible format.

.. _toaster-load-packages:

Install Toaster Packages
------------------------

You need to install the packages that Toaster requires. Use this
command::

   $ pip3 install --user -r bitbake/toaster-requirements.txt

The previous command installs the necessary Toaster modules into a
local Python 3 cache in your ``$HOME`` directory. The cache is actually
located in ``$HOME/.local``. To see what packages have been installed
into your ``$HOME`` directory, do the following::

   $ pip3 list installed --local

If you need to remove something, the following works::

   $ pip3 uninstall PackageNameToUninstall

===================
Toaster User Manual
===================

.. toctree::
   :caption: Table of Contents
   :numbered:

   toaster-manual-intro
   toaster-manual-start
   toaster-manual-setup-and-use
   toaster-manual-reference