+ The OpenEmbedded build system automatically adds common types of
+ runtime dependencies between packages, which means that you do not
+ need to explicitly declare the packages using
+ RDEPENDS.
+ Three automatic mechanisms exist (shlibdeps,
+ pcdeps, and depchains)
+ that handle shared libraries, package configuration (pkg-config)
+ modules, and -dev and
+ -dbg packages, respectively.
+ For other types of runtime dependencies, you must manually declare
+ the dependencies.
+
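+ For example, if a script installed by a recipe invokes a
+ command-line utility at runtime, none of the automatic mechanisms
+ can detect that relationship, so the recipe has to declare it
+ itself. A minimal sketch (the dependency shown is only an
+ illustration) looks like the following:
+
+ RDEPENDS_${PN} += "bash"
+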
+ shlibdeps:
+ During the
+ do_package
+ task of each recipe, all shared libraries installed by the
+ recipe are located.
+ For each shared library, the package that contains the
+ shared library is registered as providing the shared
+ library.
+ More specifically, the package is registered as providing
+ the
+ soname
+ of the library.
+ The resulting shared-library-to-package mapping
+ is saved globally in
+ PKGDATA_DIR
+ by the
+ do_packagedata
+ task.
Simultaneously, all executables and shared libraries
+ installed by the recipe are inspected to see what shared
+ libraries they link against.
+ For each shared library dependency that is found,
+ PKGDATA_DIR is queried to
+ see if some package (likely from a different recipe)
+ contains the shared library.
+ If such a package is found, a runtime dependency is added
+ from the package that depends on the shared library to the
+ package that contains the library.
The automatically added runtime dependency also
+ includes a version restriction.
+ This version restriction specifies that at least the
+ current version of the package that provides the shared
+ library must be used, as if
+ "package (>= version)"
+ had been added to
+ RDEPENDS.
+ This forces an upgrade of the package containing the shared
+ library when installing the package that depends on the
+ library, if needed.
If you want to avoid a package being registered as
+ providing a particular shared library (e.g. because the library
+ is for internal use only), then add the library to
+ PRIVATE_LIBS
+ inside the package's recipe.
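+ For example, assuming a recipe installs a helper library
+ libfoo-internal.so.1 (a hypothetical name) that is only loaded by
+ the recipe's own binaries, the registration can be suppressed with
+ a line such as:
+
+ PRIVATE_LIBS = "libfoo-internal.so.1"
+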
+
+ pcdeps:
+ During the
+ do_package
+ task of each recipe, all pkg-config modules
+ (*.pc files) installed by the recipe
+ are located.
+ For each module, the package that contains the module is
+ registered as providing the module.
+ The resulting module-to-package mapping is saved globally in
+ PKGDATA_DIR
+ by the
+ do_packagedata
+ task.
Simultaneously, all pkg-config modules installed by
+ the recipe are inspected to see what other pkg-config
+ modules they depend on.
+ A module is seen as depending on another module if it
+ contains a "Requires:" line that specifies the other module.
+ For each module dependency,
+ PKGDATA_DIR is queried to see if some
+ package contains the module.
+ If such a package is found, a runtime dependency is added
+ from the package that depends on the module to the package
+ that contains the module.
+
+ The pcdeps mechanism most often
+ infers dependencies between -dev
+ packages.
+ +
+
+ depchains:
+ If a package foo depends on a package
+ bar, then foo-dev
+ and foo-dbg are also made to depend on
+ bar-dev and
+ bar-dbg, respectively.
+ Taking the -dev packages as an
+ example, the bar-dev package might
+ provide headers and shared library symlinks needed by
+ foo-dev, which shows the need
+ for a dependency between the packages.
The dependencies added by
+ depchains are in the form of
+ RRECOMMENDS.
+
foo-dev also has an
+ RDEPENDS-style dependency on
+ foo, because the default value of
+ RDEPENDS_${PN}-dev (set in
+ bitbake.conf) includes
+ "${PN}".
+ To ensure that the dependency chain is never broken,
+ -dev and -dbg
+ packages are always generated by default, even if the
+ packages turn out to be empty.
+ See the
+ ALLOW_EMPTY
+ variable for more information.
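+ The bitbake.conf default mentioned above takes a form
+ similar to the following (shown as a sketch rather than a verbatim
+ copy of the file):
+
+ RDEPENDS_${PN}-dev = "${PN} (= ${EXTENDPKGV})"
+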
+
+
+
+ The do_package task depends on the
+ do_packagedata
+ task of each recipe in
+ DEPENDS
+ through use of a
+ [deptask]
+ declaration, which guarantees that the required
+ shared-library/module-to-package mapping information will be available
+ when needed as long as DEPENDS has been
+ correctly set.
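+ In meta/classes/package.bbclass, that declaration takes a form
+ similar to the following (a sketch, not necessarily a verbatim copy
+ of the class):
+
+ do_package[deptask] = "do_packagedata"
+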
+
+ Git has an extensive set of commands that lets you manage changes + and perform collaboration over the life of a project. + Conveniently though, you can manage with a small set of basic + operations and workflows once you understand the basic + philosophy behind Git. + You do not have to be an expert in Git to be functional. + A good place to look for instruction on a minimal set of Git + commands is + here. +
++ If you do not know much about Git, you should educate + yourself by visiting the links previously mentioned. +
++ The following list of Git commands briefly describes some basic + Git operations as a way to get started. + As with any set of commands, this list (in most cases) simply shows + the base command and omits the many arguments they support. + See the Git documentation for complete descriptions and strategies + on how to use these commands: +
+
+ git init:
+ Initializes an empty Git repository.
+ You cannot use Git commands unless you have a
+ .git repository.
+
+ git clone:
+ Creates a local clone of a Git repository that is on
+ equal footing with a fellow developer’s Git repository
+ or an upstream repository.
+
+ git add:
+ Locally stages updated file contents to the index that
+ Git uses to track changes.
+ You must stage all files that have changed before you
+ can commit them.
+
+ git commit:
+ Creates a local "commit" that documents the changes you
+ made.
+ Only changes that have been staged can be committed.
+ Commits are used for historical purposes, for determining
+ if a maintainer of a project will allow the change,
+ and for ultimately pushing the change from your local
+ Git repository into the project’s upstream repository.
+
+ git status:
+ Reports any modified files that possibly need to be
+ staged and gives you a status of where you stand regarding
+ local commits as compared to the upstream repository.
+
+ git checkout branch-name:
+ Changes your working branch.
+ This command is analogous to "cd".
+
+ git checkout -b working-branch:
+ Creates and checks out a working branch on your local
+ machine that you can use to isolate your work.
+ It is a good idea to use local branches when adding
+ specific features or changes.
+ Using isolated branches facilitates easy removal of
+ changes if they do not work out.
+
git branch:
+ Displays the existing local branches associated with your
+ local repository.
+ The branch that you have currently checked out is noted
+ with an asterisk character.
+
+ git branch -D branch-name:
+ Deletes an existing local branch.
+ You need to be in a local branch other than the one you
+ are deleting in order to delete
+ branch-name.
+
+ git pull:
+ Retrieves information from an upstream Git repository
+ and places it in your local Git repository.
+ You use this command to make sure you are synchronized with
+ the repository from which you are basing changes
+ (e.g. the "master" branch).
+
+ git push:
+ Sends all your committed local changes to the upstream Git
+ repository that your local repository is tracking
+ (e.g. a contribution repository).
+ The maintainer of the project draws from these repositories
+ to merge changes (commits) into the appropriate branch
+ of the project's upstream repository.
+
+ git merge:
+ Combines or adds changes from one
+ local branch of your repository with another branch.
+ When you create a local Git repository, the default branch
+ is named "master".
+ A typical workflow is to create a temporary branch that is
+ based off "master" that you would use for isolated work.
+ You would make your changes in that isolated branch,
+ stage and commit them locally, switch to the "master"
+ branch, and then use the git merge
+ command to apply the changes from your isolated branch
+ into the currently checked out branch (e.g. "master").
+ After the merge is complete and if you are done with
+ working in that isolated branch, you can safely delete
+ the isolated branch.
+
+ git cherry-pick:
+ Choose and apply specific commits from one branch
+ into another branch.
+ There are times when you might not be able to merge
+ all the changes in one branch with
+ another but need to pick out certain ones.
+
+ gitk:
+ Provides a GUI view of the branches and changes in your
+ local Git repository.
+ This command is a good way to graphically see where things
+ have diverged in your local repository.
+
+ You must install the gitk
+ package on your development system to use this
+ command.
+ +
+
+ git log:
+ Reports a history of your commits to the repository.
+ This report lists all commits regardless of whether you
+ have pushed them upstream or not.
+
+ git diff:
+ Displays line-by-line differences between a local
+ working file and the same file as understood by Git.
+ This command is useful to see what you have changed
+ in any given file.
+
+
+ The OpenEmbedded build system uses BitBake to produce images. You can see from the general Yocto Project Development Environment figure that the BitBake area consists of several functional areas. This section takes a closer look at each of those areas.
++ Separate documentation exists for the BitBake tool. + See the + BitBake User Manual + for reference material on BitBake. +
++ The BSP Layer provides machine configurations. + Everything in this layer is specific to the machine for which + you are building the image or the SDK. + A common structure or form is defined for BSP layers. + You can learn more about this structure in the + Yocto Project Board Support Package (BSP) Developer's Guide. +
++
+
+ The BSP Layer's configuration directory contains
+ configuration files for the machine
+ (conf/machine/machine.conf) and,
+ of course, the layer (conf/layer.conf).
+
+ The remainder of the layer is dedicated to specific recipes
+ by function: recipes-bsp,
+ recipes-core,
+ recipes-graphics, and
+ recipes-kernel.
+ Metadata can exist for multiple form factors, graphics
+ support systems, and so forth.
+
+ Although several recipes-*
+ directories are listed here, not all these directories appear in all
+ BSP layers.
+ +
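+ As an illustration only (not copied from any particular BSP), the
+ conf/layer.conf file of a hypothetical
+ "meta-myboard" layer might look like the following:
+
+ # conf/layer.conf -- minimal sketch for a hypothetical BSP layer
+ BBPATH .= ":${LAYERDIR}"
+ BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"
+ BBFILE_COLLECTIONS += "myboard"
+ BBFILE_PATTERN_myboard = "^${LAYERDIR}/"
+ BBFILE_PRIORITY_myboard = "6"
+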
++ After source code is patched, BitBake executes tasks that + configure and compile the source code: +
+
++ This step in the build process consists of three tasks: +
+
+ do_prepare_recipe_sysroot:
+ This task sets up the two sysroots in
+ ${WORKDIR}
+ (i.e. recipe-sysroot and
+ recipe-sysroot-native) so that
+ the sysroots contain the contents of the
+ do_populate_sysroot
+ tasks of the recipes on which the recipe
+ containing the tasks depends.
+ A sysroot exists for both the target and for the native
+ binaries, which run on the host system.
+
do_configure:
+ This task configures the source by enabling and
+ disabling any build-time and configuration options for
+ the software being built.
+ Configurations can come from the recipe itself as well
+ as from an inherited class.
+ Additionally, the software itself might configure itself
+ depending on the target for which it is being built.
+
+ The configurations handled by the
+ do_configure
+ task are specific to
+ configuration of the source code
+ being built by the recipe.
If you are using the
+ autotools
+ class,
+ you can add additional configuration options by using
+ the
+ EXTRA_OECONF
+ or
+ PACKAGECONFIG_CONFARGS
+ variables.
+ For information on how these variables work within
+ that class, see the
+ meta/classes/autotools.bbclass file.
+
do_compile:
+ Once a configuration task has been satisfied, BitBake
+ compiles the source using the
+ do_compile
+ task.
+ Compilation occurs in the directory pointed to by the
+ B
+ variable.
+ Realize that the B directory is, by
+ default, the same as the
+ S
+ directory.
do_install:
+ Once compilation is done, BitBake executes the
+ do_install
+ task.
+ This task copies files from the B
+ directory and places them in a holding area pointed to
+ by the
+ D
+ variable.
+
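+ As a sketch of how a recipe typically interacts with these tasks,
+ the following hypothetical recipe fragment passes an extra option
+ to the configure step through EXTRA_OECONF and installs a file into
+ the holding area pointed to by D (the file name is illustrative):
+
+ EXTRA_OECONF += "--disable-static"
+
+ do_install_append() {
+     install -d ${D}${sysconfdir}
+     install -m 0644 ${WORKDIR}/myapp.conf ${D}${sysconfdir}/myapp.conf
+ }
+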
++ The Yocto Project does most of the work for you when it comes to + creating + cross-development toolchains. + This section provides some technical background on how + cross-development toolchains are created and used. + For more information on toolchains, you can also see the + Yocto Project Application Development and the Extensible Software Development Kit (eSDK) + manual. +
++ In the Yocto Project development environment, cross-development + toolchains are used to build the image and applications that run + on the target hardware. + With just a few commands, the OpenEmbedded build system creates + these necessary toolchains for you. +
++ The following figure shows a high-level build environment regarding + toolchain construction and use. +
++
+
+
+ Most of the work occurs on the Build Host.
+ This is the machine used to build images and generally work within the
+ Yocto Project environment.
+ When you run BitBake to create an image, the OpenEmbedded build system
+ uses the host gcc compiler to bootstrap a
+ cross-compiler named gcc-cross.
+ The gcc-cross compiler is what BitBake uses to
+ compile source files when creating the target image.
+ You can think of gcc-cross simply as an
+ automatically generated cross-compiler that is used internally within
+ BitBake only.
+
+ The extensible SDK does not use
+ gcc-cross-canadian since this SDK
+ ships a copy of the OpenEmbedded build system and the sysroot
+ within it contains gcc-cross.
+ +
+
+ The chain of events that occurs when gcc-cross is
+ bootstrapped is as follows:
+
+ gcc -> binutils-cross -> gcc-cross-initial -> linux-libc-headers -> glibc-initial -> glibc -> gcc-cross -> gcc-runtime ++
+
+
+ gcc:
+ The build host's GNU Compiler Collection (GCC).
+
+ binutils-cross:
+ The bare minimum binary utilities needed in order to run
+ the gcc-cross-initial phase of the
+ bootstrap operation.
+
+ gcc-cross-initial:
+ An early stage of the bootstrap process for creating
+ the cross-compiler.
+ This stage builds enough of the gcc-cross,
+ the C library, and other pieces needed to finish building the
+ final cross-compiler in later stages.
+ This tool is a "native" package (i.e. it is designed to run on
+ the build host).
+
+ linux-libc-headers:
+ Headers needed for the cross-compiler.
+
+ glibc-initial:
+ An initial version of the Embedded GLIBC needed to bootstrap
+ glibc.
+
+ gcc-cross:
+ The final stage of the bootstrap process for the
+ cross-compiler.
+ This stage results in the actual cross-compiler that
+ BitBake uses when it builds an image for a targeted
+ device.
+
+ + This tool is also a "native" package (i.e. it is + designed to run on the build host). +
+
+ gcc-runtime:
+ Runtime libraries resulting from the toolchain bootstrapping
+ process.
+ This tool produces a binary that consists of the
+ runtime libraries needed for the targeted device.
+
+
+
+ You can use the OpenEmbedded build system to build an installer for
+ the relocatable SDK used to develop applications.
+ When you run the installer, it installs the toolchain, which contains
+ the development tools (e.g.
+ gcc-cross-canadian and
+ binutils-cross-canadian) and other
+ nativesdk-* tools,
+ which are tools native to the SDK (i.e. native to
+ SDK_ARCH),
+ that you need to cross-compile and test your software.
+ The figure shows the commands you use to easily build out this
+ toolchain.
+ This cross-development toolchain is built to execute on the
+ SDKMACHINE,
+ which might or might not be the same
+ machine as the Build Host.
+
+
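+ For example, if the SDK is intended to run on 64-bit x86
+ development machines, the following local.conf setting (shown as a
+ sketch) selects that SDK machine:
+
+ SDKMACHINE = "x86_64"
+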
++ Here is the bootstrap process for the relocatable toolchain: +
++ gcc -> binutils-crosssdk -> gcc-crosssdk-initial -> linux-libc-headers -> + glibc-initial -> nativesdk-glibc -> gcc-crosssdk -> gcc-cross-canadian ++
+
+
+ gcc:
+ The build host's GNU Compiler Collection (GCC).
+
+ binutils-crosssdk:
+ The bare minimum binary utilities needed in order to run
+ the gcc-crosssdk-initial phase of the
+ bootstrap operation.
+
+ gcc-crosssdk-initial:
+ An early stage of the bootstrap process for creating
+ the cross-compiler.
+ This stage builds enough of the
+ gcc-crosssdk and supporting pieces so that
+ the final stage of the bootstrap process can produce the
+ finished cross-compiler.
+ This tool is a "native" binary that runs on the build host.
+
+ linux-libc-headers:
+ Headers needed for the cross-compiler.
+
+ glibc-initial:
+ An initial version of the Embedded GLIBC needed to bootstrap
+ nativesdk-glibc.
+
+ nativesdk-glibc:
+ The Embedded GLIBC needed to bootstrap the
+ gcc-crosssdk.
+
+ gcc-crosssdk:
+ The final stage of the bootstrap process for the
+ relocatable cross-compiler.
+ The gcc-crosssdk is a transitory compiler
+ and never leaves the build host.
+ Its purpose is to help in the bootstrap process to create the
+ eventual gcc-cross-canadian
+ compiler, which is relocatable.
+ This tool is also a "native" package (i.e. it is
+ designed to run on the build host).
+
+ gcc-cross-canadian:
+ The final relocatable cross-compiler.
+ When run on the
+ SDKMACHINE,
+ this tool
+ produces executable code that runs on the target device.
+ Only one cross-canadian compiler is produced per architecture
+ since they can be targeted at different processor optimizations
+ using configurations passed to the compiler through the
+ compile commands.
+ This circumvents the need for multiple compilers and thus
+ reduces the size of the toolchains.
+
+
+ This section takes a more detailed look inside the development process. The following diagram represents development at a high level. The remainder of this chapter expands on the fundamental input, output, process, and Metadata blocks that make up development in the Yocto Project environment.
+
++ In general, development consists of several functional areas: +
+User Configuration: + Metadata you can use to control the build process. +
Metadata Layers: + Various layers that provide software, machine, and + distro Metadata.
Source Files: + Upstream releases, local projects, and SCMs.
Build System: + Processes under the control of + BitBake. + This block expands on how BitBake fetches source, applies + patches, completes compilation, analyzes output for package + generation, creates and tests packages, generates images, and + generates cross-development tools.
Package Feeds: + Directories containing output packages (RPM, DEB or IPK), + which are subsequently used in the construction of an image or + SDK, produced by the build system. + These feeds can also be copied and shared using a web server or + other means to facilitate extending or updating existing + images on devices at runtime if runtime package management is + enabled.
Images: + Images produced by the development process. +
Application Development SDK: + Cross-development tools that are produced along with an image + or separately with BitBake.
+
+
+ The distribution layer provides policy configurations for your
+ distribution.
+ Best practices dictate that you isolate these types of
+ configurations into their own layer.
+ Settings you provide in
+ conf/distro/distro.conf override
+ similar
+ settings that BitBake finds in your
+ conf/local.conf file in the Build
+ Directory.
+
+ The following list provides some explanation and references + for what you typically find in the distribution layer: +
+classes:
+ Class files (.bbclass) hold
+ common functionality that can be shared among
+ recipes in the distribution.
+ When your recipes inherit a class, they take on the
+ settings and functions for that class.
+ You can read more about class files in the
+ "Classes"
+ section of the Yocto Reference Manual.
+
conf:
+ This area holds configuration files for the
+ layer (conf/layer.conf),
+ the distribution
+ (conf/distro/distro.conf),
+ and any distribution-wide include files.
recipes-*: + Recipes and append files that affect common + functionality across the distribution. + This area could include recipes and append files + to add distribution-specific configuration, + initialization scripts, custom image recipes, + and so forth.
+
+
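+ As a sketch, a minimal configuration file for a hypothetical
+ distribution named "mydistro" (selected by setting
+ DISTRO = "mydistro" in local.conf) might contain lines such as:
+
+ # conf/distro/mydistro.conf -- illustrative distribution policy
+ DISTRO = "mydistro"
+ DISTRO_NAME = "My Example Distro"
+ DISTRO_VERSION = "1.0"
+ DISTRO_FEATURES_append = " systemd"
+ PACKAGE_CLASSES ?= "package_ipk"
+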
+ To cause Mesa to build the wayland-egl
+ platform and Weston to build Wayland with Kernel Mode
+ Setting
+ (KMS)
+ support, include the "wayland" flag in the
+ DISTRO_FEATURES
+ statement in your local.conf file:
+
+ DISTRO_FEATURES_append = " wayland" ++
+
++
+
+ To install the Wayland feature into an image, you must
+ include the following
+ CORE_IMAGE_EXTRA_INSTALL
+ statement in your local.conf file:
+
+ CORE_IMAGE_EXTRA_INSTALL += "wayland weston" ++
+
+
+ By default, the OpenEmbedded build system disables
+ components that have commercial or other special licensing
+ requirements.
+ Such requirements are defined on a
+ recipe-by-recipe basis through the
+ LICENSE_FLAGS
+ variable definition in the affected recipe.
+ For instance, the
+ poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly
+ recipe contains the following statement:
+
+ LICENSE_FLAGS = "commercial" ++
+ Here is a slightly more complicated example that contains both + an explicit recipe name and version (after variable expansion): +
+
+ LICENSE_FLAGS = "license_${PN}_${PV}"
+
+
+ In order for a component restricted by a
+ LICENSE_FLAGS definition to be enabled and
+ included in an image, it needs to have a matching entry in the
+ global
+ LICENSE_FLAGS_WHITELIST
+ variable, which is a variable typically defined in your
+ local.conf file.
+ For example, to enable the
+ poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly
+ package, you could add either the string
+ "commercial_gst-plugins-ugly" or the more general string
+ "commercial" to LICENSE_FLAGS_WHITELIST.
+ See the
+ "License Flag Matching"
+ section for a full
+ explanation of how LICENSE_FLAGS matching
+ works.
+ Here is the example:
+
+ LICENSE_FLAGS_WHITELIST = "commercial_gst-plugins-ugly" ++
+ Likewise, to additionally enable the package built from the
+ recipe containing
+ LICENSE_FLAGS = "license_${PN}_${PV}",
+ and assuming that the actual recipe name was
+ emgd_1.10.bb, the following string would
+ enable that package as well as the original
+ gst-plugins-ugly package:
+
+ LICENSE_FLAGS_WHITELIST = "commercial_gst-plugins-ugly license_emgd_1.10" ++
+ As a convenience, you do not need to specify the complete + license string in the whitelist for every package. + You can use an abbreviated form, which consists + of just the first portion or portions of the license + string before the initial underscore character or characters. + A partial string will match any license that contains the + given string as the first portion of its license. + For example, the following whitelist string will also match + both of the packages previously mentioned as well as any other + packages that have licenses starting with "commercial" or + "license". +
++ LICENSE_FLAGS_WHITELIST = "commercial license" ++
+
++ To enable Wayland, you need to enable it to be built and enable + it to be included in the image. +
+
+ Some tasks are easier to implement when allowed to perform certain
+ operations that are normally reserved for the root user (e.g.
+ do_install,
+ do_package_write*,
+ do_rootfs,
+ and
+ do_image*).
+ For example, the do_install task benefits
+ from being able to set the UID and GID of installed files to
+ arbitrary values.
+
+ One approach to allowing tasks to perform root-only operations + would be to require BitBake to run as root. + However, this method is cumbersome and has security issues. + The approach that is actually used is to run tasks that benefit + from root privileges in a "fake" root environment. + Within this environment, the task and its child processes believe + that they are running as the root user, and see an internally + consistent view of the filesystem. + As long as generating the final output (e.g. a package or an image) + does not require root privileges, the fact that some earlier + steps ran in a fake root environment does not cause problems. +
++ The capability to run tasks in a fake root environment is known as + "fakeroot", + which is derived from the BitBake keyword/variable + flag that requests a fake root environment for a task. +
+
+ In the OpenEmbedded build system, the program that implements
+ fakeroot is known as Pseudo.
+ Pseudo overrides system calls by using the environment variable
+ LD_PRELOAD, which results in the illusion
+ of running as root.
+ To keep track of "fake" file ownership and permissions resulting
+ from operations that require root permissions, Pseudo uses
+ an SQLite 3 database.
+ This database is stored in
+ ${WORKDIR}/pseudo/files.db
+ for individual recipes.
+ Storing the database in a file as opposed to in memory
+ gives persistence between tasks and builds, which is not
+ accomplished using fakeroot.
+
+ A recipe that adds its own task needing a fake root environment
+ must mark that task with the fakeroot keyword and add a dependency
+ on virtual/fakeroot-native:do_populate_sysroot,
+ giving the following:
+
+ fakeroot do_mytask () {
+ ...
+ }
+ do_mytask[depends] += "virtual/fakeroot-native:do_populate_sysroot"
+
+
+ For more information, see the
+ FAKEROOT*
+ variables in the BitBake User Manual.
+ You can also reference the
+ "Pseudo"
+ and
+ "Why Not Fakeroot?"
+ articles for background information on Pseudo.
+
+ The Yocto Project makes extensive use of Git, which is a + free, open source distributed version control system. + Git supports distributed development, non-linear development, + and can handle large projects. + It is best that you have some fundamental understanding + of how Git tracks projects and how to work with Git if + you are going to use the Yocto Project for development. + This section provides a quick overview of how Git works and + provides you with a summary of some essential Git commands. +
++ For more information on Git, see + http://git-scm.com/documentation. +
+ If you need to download Git, it is recommended that you add + Git to your system through your distribution's "software + store" (e.g. for Ubuntu, use the Ubuntu Software feature). + For the Git download page, see + http://git-scm.com/download. +
+ For examples beyond the limited few in this section on how + to use Git with the Yocto Project, see the + "Working With Yocto Project Source Files" + section in the Yocto Project Development Tasks Manual. +
+
++ Once packages are split and stored in the Package Feeds area, + the OpenEmbedded build system uses BitBake to generate the + root filesystem image: +
+
+
+ The image generation process consists of several stages and
+ depends on several tasks and variables.
+ The
+ do_rootfs
+ task creates the root filesystem (file and directory structure)
+ for an image.
+ This task uses several key variables to help create the list
+ of packages to actually install:
+
IMAGE_INSTALL:
+ Lists out the base set of packages to install from
+ the Package Feeds area.
PACKAGE_EXCLUDE:
+ Specifies packages that should not be installed.
+
IMAGE_FEATURES:
+ Specifies features to include in the image.
+ Most of these features map to additional packages for
+ installation.
PACKAGE_CLASSES:
+ Specifies the package backend to use and consequently
+ helps determine where to locate packages within the
+ Package Feeds area.
IMAGE_LINGUAS:
+ Determines the language(s) for which additional
+ language support packages are installed.
+
PACKAGE_INSTALL:
+ The final list of packages passed to the package manager
+ for installation into the image.
+
+
+
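+ As an illustration, the following local.conf sketch (the values are
+ only examples) shows how some of these variables are typically set
+ to influence what goes into the image:
+
+ IMAGE_INSTALL_append = " dropbear"
+ IMAGE_FEATURES += "splash"
+ IMAGE_LINGUAS = "en-us"
+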
+ With
+ IMAGE_ROOTFS
+ pointing to the location of the filesystem under construction and
+ the PACKAGE_INSTALL variable providing the
+ final list of packages to install, the root file system is
+ created.
+
+ Package installation is under control of the package manager + (e.g. dnf/rpm, opkg, or apt/dpkg) regardless of whether or + not package management is enabled for the target. + At the end of the process, if package management is not + enabled for the target, the package manager's data files + are deleted from the root filesystem. + As part of the final stage of package installation, postinstall + scripts that are part of the packages are run. + Any scripts that fail to run + on the build host are run on the target when the target system + is first booted. + If you are using a + read-only root filesystem, + all the post installation scripts must succeed during the + package installation phase since the root filesystem is + read-only. +
+
+ The final stages of the do_rootfs task
+ handle post processing.
+ Post processing includes creation of a manifest file and
+ optimizations.
+
+ The manifest file (.manifest) resides
+ in the same directory as the root filesystem image.
+ This file lists out, line-by-line, the installed packages.
+ The manifest file is useful for the
+ testimage
+ class, for example, to determine whether or not to run
+ specific tests.
+ See the
+ IMAGE_MANIFEST
+ variable for additional information.
+
+ Optimizing processes run across the image include
+ mklibs, prelink,
+ and any other post-processing commands as defined by the
+ ROOTFS_POSTPROCESS_COMMAND
+ variable.
+ The mklibs process optimizes the size
+ of the libraries, while the
+ prelink process optimizes the dynamic
+ linking of shared libraries to reduce start up time of
+ executables.
+
+ After the root filesystem is built, processing begins on
+ the image through the
+ do_image
+ task.
+ The build system runs any pre-processing commands as defined
+ by the
+ IMAGE_PREPROCESS_COMMAND
+ variable.
+ This variable specifies a list of functions to call before
+ the OpenEmbedded build system creates the final image output
+ files.
+
+ The OpenEmbedded build system dynamically creates
+ do_image_* tasks as needed, based
+ on the image types specified in the
+ IMAGE_FSTYPES
+ variable.
+ The process turns everything into an image file or a set of
+ image files and compresses the root filesystem image to reduce
+ the overall size of the image.
+ The formats used for the root filesystem depend on the
+ IMAGE_FSTYPES variable.
+
+ As an example, a dynamically created task when creating a
+ particular image type would take the
+ following form:
+
+ do_image_type[depends]
+
+
+ So, if the type as specified by the
+ IMAGE_FSTYPES were
+ ext4, the dynamically generated task
+ would be as follows:
+
+ do_image_ext4[depends] ++
+
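+ For instance, setting the following in local.conf (a sketch)
+ produces both an ext4 image and a compressed tarball of the root
+ filesystem:
+
+ IMAGE_FSTYPES = "ext4 tar.bz2"
+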
+
+ The final task involved in image creation is the
+ do_image_complete
+ task.
+ This task completes the image by applying any image
+ post processing as defined through the
+ IMAGE_POSTPROCESS_COMMAND
+ variable.
+ The variable specifies a list of functions to call once the
+ OpenEmbedded build system has created the final image output
+ files.
+
+ The images produced by the OpenEmbedded build system + are compressed forms of the + root filesystem that are ready to boot on a target device. + You can see from the + general Yocto Project Development Environment figure + that BitBake output, in part, consists of images. + This section is going to look more closely at this output: +
+
++ For a list of example images that the Yocto Project provides, + see the + "Images" + chapter in the Yocto Project Reference Manual. +
+
+ Images are written out to the
+ Build Directory
+ inside the
+ tmp/deploy/images/machine/
+ folder as shown in the figure.
+ This folder contains any files expected to be loaded on the
+ target device.
+ The
+ DEPLOY_DIR
+ variable points to the deploy directory,
+ while the
+ DEPLOY_DIR_IMAGE
+ variable points to the appropriate directory containing images for
+ the current configuration.
+
+ kernel-image:
+ A kernel binary file.
+ The
+ KERNEL_IMAGETYPE
+ variable setting determines the naming scheme for the
+ kernel image file.
+ Depending on that variable, the file could begin with
+ a variety of naming strings.
+ The deploy/images/machine
+ directory can contain multiple image files for the
+ machine.
+ root-filesystem-image:
+ Root filesystems for the target device (e.g.
+ *.ext3 or *.bz2
+ files).
+ The
+ IMAGE_FSTYPES
+ variable setting determines the root filesystem image
+ type.
+ The deploy/images/machine
+ directory can contain multiple root filesystems for the
+ machine.
+ kernel-modules:
+ Tarballs that contain all the modules built for the kernel.
+ Kernel module tarballs exist for legacy purposes and
+ can be suppressed by setting the
+ MODULE_TARBALL_DEPLOY
+ variable to "0".
+ The deploy/images/machine
+ directory can contain multiple kernel module tarballs
+ for the machine.
+ bootloaders:
+ Bootloaders supporting the image, if applicable to the
+ target machine.
+ The deploy/images/machine
+ directory can contain multiple bootloaders for the
+ machine.
+ symlinks:
+ The deploy/images/machine
+ folder contains
+ a symbolic link that points to the most recently built file
+ for each machine.
+ These links might be useful for external scripts that
+ need to obtain the latest version of each file.
+
++ The OpenEmbedded build system uses checksums and shared + state cache to avoid unnecessarily rebuilding tasks. + Collectively, this scheme is known as "shared state code." +
+
+ As with all schemes, this one has some drawbacks.
+ It is possible that you could make implicit changes to your
+ code that the checksum calculations do not take into
+ account.
+ These implicit changes affect a task's output but do not
+ trigger the shared state code into rebuilding a recipe.
+ Consider an example during which a tool changes its output.
+ Assume that the output of rpmdeps
+ changes.
+ The result of the change should be that all the
+ package and
+ package_write_rpm shared state cache
+ items become invalid.
+ However, because the change to the output is
+ external to the code and therefore implicit,
+ the associated shared state cache items do not become
+ invalidated.
+ In this case, the build process uses the cached items
+ rather than running the task again.
+ Obviously, these types of implicit changes can cause
+ problems.
+
+ To avoid these problems during the build, you need to + understand the effects of any changes you make. + Realize that changes you make directly to a function + are automatically factored into the checksum calculation. + Thus, these explicit changes invalidate the associated + area of shared state cache. + However, you need to be aware of any implicit changes that + are not obvious changes to the code and could affect + the output of a given task. +
+
+ When you identify an implicit change, you can easily
+ take steps to invalidate the cache and force the tasks
+ to run.
+ The steps you can take are as simple as changing a
+ function's comments in the source code.
+ For example, to invalidate package shared state files,
+ change the comment statements of
+ do_package
+ or the comments of one of the functions it calls.
+ Even though the change is purely cosmetic, it causes the
+ checksum to be recalculated and forces the OpenEmbedded
+ build system to run the task again.
+
+
+
+ License flag matching allows you to control what recipes
+ the OpenEmbedded build system includes in the build.
+ Fundamentally, the build system attempts to match
+ LICENSE_FLAGS
+ strings found in recipes against
+ LICENSE_FLAGS_WHITELIST
+ strings found in the whitelist.
+ A match causes the build system to include a recipe in the
+ build, while failure to find a match causes the build
+ system to exclude a recipe.
+
+ In general, license flag matching is simple. + However, understanding some concepts will help you + correctly and effectively use matching. +
+
+ Before a flag
+ defined by a particular recipe is tested against the
+ contents of the whitelist, the expanded string
+ _${PN} is appended to the flag.
+ This expansion makes each
+ LICENSE_FLAGS value recipe-specific.
+ After expansion, the string is then matched against the
+ whitelist.
+ Thus, specifying
+ LICENSE_FLAGS = "commercial"
+ in recipe "foo", for example, results in the string
+ "commercial_foo".
+ And, to create a match, that string must appear in the
+ whitelist.
+
+ Judicious use of the LICENSE_FLAGS
+ strings and the contents of the
+ LICENSE_FLAGS_WHITELIST variable
+ allows you a lot of flexibility for including or excluding
+ recipes based on licensing.
+ For example, you can broaden the matching capabilities by
+ using license flags string subsets in the whitelist.
+
+ (For example, a whitelist string of "usethispart" would match flags
+ such as usethispart_1.3,
+ usethispart_1.4, and so forth.)
+
+ For example, simply specifying the string "commercial" in
+ the whitelist matches any expanded
+ LICENSE_FLAGS definition that starts
+ with the string "commercial" such as "commercial_foo" and
+ "commercial_bar", which are the strings the build system
+ automatically generates for hypothetical recipes named
+ "foo" and "bar" assuming those recipes simply specify the
+ following:
+
+ LICENSE_FLAGS = "commercial" ++
+ Thus, you can choose to exhaustively + enumerate each license flag in the whitelist and + allow only specific recipes into the image, or + you can use a string subset that causes a broader range of + matches to allow a range of recipes into the image. +
+
+ This scheme works even if the
+ LICENSE_FLAGS string already
+ has _${PN} appended.
+ For example, the build system turns the license flag
+ "commercial_1.2_foo" into "commercial_1.2_foo_foo" and
+ would match both the general "commercial" and the specific
+ "commercial_1.2_foo" strings found in the whitelist, as
+ expected.
+
+ Here are some other scenarios: +
++ You can specify a versioned string in the recipe + such as "commercial_foo_1.2" in a "foo" recipe. + The build system expands this string to + "commercial_foo_1.2_foo". + Combine this license flag with a whitelist that has + the string "commercial" and you match the flag + along with any other flag that starts with the + string "commercial". +
+ Under the same circumstances, you can use + "commercial_foo" in the whitelist and the build + system not only matches "commercial_foo_1.2" but + also matches any license flag with the string + "commercial_foo", regardless of the version. +
+ You can be very specific and use both the + package and version parts in the whitelist (e.g. + "commercial_foo_1.2") to specifically match a + versioned recipe. +
+
++ Because open source projects are open to the public, they have + different licensing structures in place. + License evolution for both Open Source and Free Software has an + interesting history. + If you are interested in this history, you can find basic information + here: +
++
++ In general, the Yocto Project is broadly licensed under the + Massachusetts Institute of Technology (MIT) License. + MIT licensing permits the reuse of software within proprietary + software as long as the license is distributed with that software. + MIT is also compatible with the GNU General Public License (GPL). + Patches to the Yocto Project follow the upstream licensing scheme. + You can find information on the MIT license + here. + You can find information on the GNU GPL + here. +
+
+ When you build an image using the Yocto Project, the build process
+ uses a known list of licenses to ensure compliance.
+ You can find this list in the
+ Source Directory
+ at meta/files/common-licenses.
+ Once the build completes, the list of all licenses found and used
+ during that build are kept in the
+ Build Directory
+ at tmp/deploy/licenses.
+
+ If a module requires a license that is not in the base list, the + build process generates a warning during the build. + These tools make it easier for a developer to be certain of the + licenses with which their shipped products must comply. + However, even with these tools it is still up to the developer to + resolve potential licensing issues. +
++ The base list of licenses used by the build process is a combination + of the Software Package Data Exchange (SPDX) list and the Open + Source Initiative (OSI) projects. + SPDX Group is a working group of + the Linux Foundation that maintains a specification for a standard + format for communicating the components, licenses, and copyrights + associated with a software package. + OSI is a corporation + dedicated to the Open Source Definition and the effort for reviewing + and approving licenses that conform to the Open Source Definition + (OSD). +
+
+ You can find a list of the combined SPDX and OSI licenses that the
+ Yocto Project uses in the
+ meta/files/common-licenses directory in your
+ Source Directory.
+
+ For information that can help you maintain compliance with various + open source licensing during the lifecycle of a product created using + the Yocto Project, see the + "Maintaining Open Source License Compliance During Your Product's Lifecycle" + section in the Yocto Project Development Tasks Manual. +
++ Local projects are custom bits of software the user provides. + These bits reside somewhere local to a project - perhaps + a directory into which the user checks in items (e.g. + a local directory containing a development source tree + used by the group). +
+
+ The canonical method through which to include a local project
+ is to use the
+ externalsrc
+ class to include that local project.
+ You use either the local.conf or a
+ recipe's append file to override or set the
+ recipe to point to the local directory on your disk to pull
+ in the whole source tree.
+
+ For information on how to use the
+ externalsrc class, see the
+ "externalsrc.bbclass"
+ section.
+
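+ As a sketch (the recipe name and path are hypothetical), the
+ following local.conf lines point a recipe at a local source tree:
+
+ INHERIT += "externalsrc"
+ EXTERNALSRC_pn-myapp = "/home/user/src/myapp"
+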
+ The previous section described the user configurations that + define BitBake's global behavior. + This section takes a closer look at the layers the build system + uses to further control the build. + These layers provide Metadata for the software, machine, and + policy. +
++ In general, three types of layer input exist: +
+Policy Configuration: + Distribution Layers provide top-level or general + policies for the image or SDK being built. + For example, this layer would dictate whether BitBake + produces RPM or IPK packages.
Machine Configuration: + Board Support Package (BSP) layers provide machine + configurations. + This type of information is specific to a particular + target architecture.
Metadata: + Software layers contain user-supplied recipe files, + patches, and append files. +
+
++ The following figure shows an expanded representation of the + Metadata, Machine Configuration, and Policy Configuration input + (layers) boxes of the + general Yocto Project Development Environment figure: +
++
+
+
+ In general, all layers have a similar structure.
+ They all contain a licensing file
+ (e.g. COPYING) if the layer is to be
+ distributed, a README file as good practice
+ and especially if the layer is to be distributed, a
+ configuration directory, and recipe directories.
+
+ The Yocto Project has many layers that can be used. + You can see a web-interface listing of them on the + Source Repositories + page. + The layers are shown at the bottom categorized under + "Yocto Metadata Layers." + These layers are fundamentally a subset of the + OpenEmbedded Metadata Index, + which lists all layers provided by the OpenEmbedded community. +
++
+
+ BitBake uses the conf/bblayers.conf file,
+ which is part of the user configuration, to find what layers it
+ should be using as part of the build.
+
+ For more information on layers, see the + "Understanding and Creating Layers" + section in the Yocto Project Development Tasks Manual. +
+
+ Prior to the build, if you know that several different recipes
+ provide the same functionality, you can use a virtual provider
+ (i.e. virtual/*) as a placeholder for the
+ actual provider.
+ The actual provider would be determined at build time.
+ In this case, you should add virtual/*
+ to
+ DEPENDS,
+ rather than listing the specified provider.
+ You would select the actual provider by setting the
+ PREFERRED_PROVIDER
+ variable (i.e.
+ PREFERRED_PROVIDER_virtual/*)
+ in the build's configuration file (e.g.
+ poky/build/conf/local.conf).
+
virtual/*
+ item that is ultimately not selected through
+ PREFERRED_PROVIDER does not get built.
+ Preventing these recipes from building is usually the
+ desired behavior since this mechanism's purpose is to
+ select between mutually exclusive alternative providers.
+ +
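+ As a sketch, a recipe that needs an EGL implementation and a build
+ configuration that selects one of the available providers might
+ contain lines such as the following (the provider chosen here is
+ only an example):
+
+ # In the recipe:
+ DEPENDS += "virtual/egl"
+
+ # In the build configuration (e.g. local.conf):
+ PREFERRED_PROVIDER_virtual/egl = "mesa"
+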
++ The following lists specific examples of virtual providers: +
+
+ virtual/mesa:
+ Provides gbm.pc.
+
+ virtual/egl:
+ Provides egl.pc and possibly
+ wayland-egl.pc.
+
+ virtual/libgl:
+ Provides gl.pc (i.e. libGL).
+
+ virtual/libgles1:
+ Provides glesv1_cm.pc
+ (i.e. libGLESv1_CM).
+
+ virtual/libgles2:
+ Provides glesv2.pc
+ (i.e. libGLESv2).
+
+
++ Open source philosophy is characterized by software development + directed by peer production and collaboration through an active + community of developers. + Contrast this to the more standard centralized development models + used by commercial software companies where a finite set of developers + produces a product for sale using a defined set of procedures that + ultimately result in an end product whose architecture and source + material are closed to the public. +
++ Open source projects conceptually have differing concurrent agendas, + approaches, and production. + These facets of the development process can come from anyone in the + public (community) that has a stake in the software project. + The open source environment contains new copyright, licensing, domain, + and consumer issues that differ from the more traditional development + environment. + In an open source environment, the end product, source material, + and documentation are all available to the public at no cost. +
++ A benchmark example of an open source project is the Linux kernel, + which was initially conceived and created by Finnish computer science + student Linus Torvalds in 1991. + Conversely, a good example of a non-open source project is the + Windows family of operating + systems developed by + Microsoft Corporation. +
++ Wikipedia has a good historical description of the Open Source + Philosophy + here. + You can also find helpful information on how to participate in the + Linux Community + here. +
+
+ Other helpful variables related to commercial
+ license handling exist and are defined in the
+ poky/meta/conf/distro/include/default-distrovars.inc file:
+
+ COMMERCIAL_AUDIO_PLUGINS ?= "" + COMMERCIAL_VIDEO_PLUGINS ?= "" ++
+ If you want to enable these components, you can do so by
+ making sure you have statements similar to the following
+ in your local.conf configuration file:
+
+ COMMERCIAL_AUDIO_PLUGINS = "gst-plugins-ugly-mad \ + gst-plugins-ugly-mpegaudioparse" + COMMERCIAL_VIDEO_PLUGINS = "gst-plugins-ugly-mpeg2dec \ + gst-plugins-ugly-mpegstream gst-plugins-bad-mpegvideoparse" + LICENSE_FLAGS_WHITELIST = "commercial_gst-plugins-ugly commercial_gst-plugins-bad commercial_qmmp" ++
+ Of course, you could also create a matching whitelist
+ for those components using the more general "commercial"
+ in the whitelist, but that would also enable all the
+ other packages with
+ LICENSE_FLAGS
+ containing "commercial", which you may or may not want:
+
+ LICENSE_FLAGS_WHITELIST = "commercial" ++
+
+
+ Specifying audio and video plug-ins as part of the
+ COMMERCIAL_AUDIO_PLUGINS and
+ COMMERCIAL_VIDEO_PLUGINS statements
+ (along with the enabling
+ LICENSE_FLAGS_WHITELIST) includes the
+ plug-ins or components into built images, thus adding
+ support for media formats or components.
+
+ When determining what parts of the system need to be built,
+ BitBake works on a per-task basis rather than a per-recipe
+ basis.
+ You might wonder why using a per-task basis is preferred over
+ a per-recipe basis.
+ To help explain, consider having the IPK packaging backend
+ enabled and then switching to DEB.
+ In this case, the
+ do_install
+ and
+ do_package
+ task outputs are still valid.
+ However, with a per-recipe approach, the build would not
+ include the .deb files.
+ Consequently, you would have to invalidate the whole build and
+ rerun it.
+ Rerunning everything is not the best solution.
+ Also, in this case, the core must be "taught" much about
+ specific tasks.
+ This methodology does not scale well and does not allow users
+ to easily add new tasks in layers or as external recipes
+ without touching the packaged-staging core.
+
+ The shared state code uses a checksum, which is a unique + signature of a task's inputs, to determine if a task needs to + be run again. + Because it is a change in a task's inputs that triggers a + rerun, the process needs to detect all the inputs to a given + task. + For shell tasks, this turns out to be fairly easy because + the build process generates a "run" shell script for each task + and it is possible to create a checksum that gives you a good + idea of when the task's data changes. +
+
+ To complicate the problem, there are things that should not be
+ included in the checksum.
+ First, there is the actual specific build path of a given
+ task - the
+ WORKDIR.
+ It does not matter if the work directory changes because it
+ should not affect the output for target packages.
+ Also, the build process has the objective of making native
+ or cross packages relocatable.
+
+ The checksum therefore needs to exclude
+ WORKDIR.
+ The simplistic approach for excluding the work directory is to
+ set WORKDIR to some fixed value and
+ create the checksum for the "run" script.
+
+ Another problem results from the "run" scripts containing + functions that might or might not get called. + The incremental build solution contains code that figures out + dependencies between shell functions. + This code is used to prune the "run" scripts down to the + minimum set, thereby alleviating this problem and making the + "run" scripts much more readable as a bonus. +
++ So far we have solutions for shell scripts. + What about Python tasks? + The same approach applies even though these tasks are more + difficult. + The process needs to figure out what variables a Python + function accesses and what functions it calls. + Again, the incremental build solution contains code that first + figures out the variable and function dependencies, and then + creates a checksum for the data used as the input to the task. +
+
+ Like the WORKDIR case, situations exist
+ where dependencies should be ignored.
+ For these cases, you can instruct the build process to
+ ignore a dependency by using a line like the following:
+
+ PACKAGE_ARCHS[vardepsexclude] = "MACHINE" ++
+ This example ensures that the
+ PACKAGE_ARCHS
+ variable does not depend on the value of
+ MACHINE,
+ even if it does reference it.
+
+ Equally, there are cases where we need to add dependencies + BitBake is not able to find. + You can accomplish this by using a line like the following: +
++ PACKAGE_ARCHS[vardeps] = "MACHINE" ++
+ This example explicitly adds the MACHINE
+ variable as a dependency for
+ PACKAGE_ARCHS.
+
+ Consider a case with in-line Python, for example, where
+ BitBake is not able to figure out dependencies.
+ When running in debug mode (i.e. using
+ -DDD), BitBake produces output when it
+ discovers something for which it cannot figure out dependencies.
+ The Yocto Project team has currently not managed to cover
+ those dependencies in detail and is aware of the need to fix
+ this situation.
+
+ Thus far, this section has limited discussion to the direct + inputs into a task. + Information based on direct inputs is referred to as the + "basehash" in the code. + However, there is still the question of a task's indirect + inputs - the things that were already built and present in the + Build Directory. + The checksum (or signature) for a particular task needs to add + the hashes of all the tasks on which the particular task + depends. + Choosing which dependencies to add is a policy decision. + However, the effect is to generate a master checksum that + combines the basehash and the hashes of the task's + dependencies. +
++ At the code level, there are a variety of ways both the + basehash and the dependent task hashes can be influenced. + Within the BitBake configuration file, we can give BitBake + some extra information to help it construct the basehash. + The following statement effectively results in a list of + global variable dependency excludes - variables never + included in any checksum: +
++ BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \ + SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL TERM \ + USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST \ + PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN PARALLEL_MAKE \ + CCACHE_DIR EXTERNAL_TOOLCHAIN CCACHE CCACHE_DISABLE LICENSE_PATH SDKPKGSUFFIX" ++
+ The previous example excludes
+ WORKDIR
+ since that variable is actually constructed as a path within
+ TMPDIR,
+ which is on the whitelist.
+
+ The rules for deciding which hashes of dependent tasks to
+ include through dependency chains are more complex and are
+ generally accomplished with a Python function.
+ The code in meta/lib/oe/sstatesig.py shows
+ two examples of this and also illustrates how you can insert
+ your own policy into the system if so desired.
+ This file defines the two basic signature generators
+ OE-Core
+ uses: "OEBasic" and "OEBasicHash".
+ By default, there is a dummy "noop" signature handler enabled
+ in BitBake.
+ This means that behavior is unchanged from previous versions.
+ OE-Core uses the "OEBasicHash" signature handler by default
+ through this setting in the bitbake.conf
+ file:
+
+ BB_SIGNATURE_HANDLER ?= "OEBasicHash" ++
+ The "OEBasicHash" BB_SIGNATURE_HANDLER
+ is the same as the "OEBasic" version but adds the task hash to
+ the stamp files.
+ This results in any
+ Metadata
+ change that changes the task hash, automatically
+ causing the task to be run again.
+ This removes the need to bump
+ PR
+ values, and changes to Metadata automatically ripple across
+ the build.
+
+ It is also worth noting that the end result of these + signature generators is to make some dependency and hash + information available to the build. + This information includes: +
+
+ BB_BASEHASH_task-taskname:
+ The base hashes for each task in the recipe.
+
+ BB_BASEHASH_filename:taskname:
+ The base hashes for each dependent task.
+
+ BBHASHDEPS_filename:taskname:
+ The task dependencies for each task.
+
+ BB_TASKHASH:
+ The hash of the currently running task.
+
+
+ ++ This chapter describes concepts for various areas of the Yocto Project. + Currently, topics include Yocto Project components, cross-development + generation, shared state (sstate) cache, runtime dependencies, + Pseudo and Fakeroot, x32 psABI, Wayland support, and Licenses. +
+
+ Seeing what metadata went into creating the input signature
+ of a shared state (sstate) task can be a useful debugging
+ aid.
+ This information is available in signature information
+ (siginfo) files in
+ SSTATE_DIR.
+ For information on how to view and interpret information in
+ siginfo files, see the
+ "Viewing Task Variable Dependencies"
+ section in the Yocto Project Development Tasks Manual.
+
+ ++ This chapter takes a look at the Yocto Project development + environment and also provides a detailed look at what goes on during + development in that environment. + The chapter provides Yocto Project Development environment concepts that + help you understand how work is accomplished in an open source environment, + which is very different as compared to work accomplished in a closed, + proprietary environment. +
++ Specifically, this chapter addresses open source philosophy, workflows, + Git, source repositories, licensing, recipe syntax, and development + syntax. +
++ This section describes the mechanism by which the OpenEmbedded + build system tracks changes to licensing text. + The section also describes how to enable commercially licensed + recipes, which by default are disabled. +
+ For information that can help you maintain compliance with various open source licensing during the lifecycle of the product, see the "Maintaining Open Source License Compliance During Your Product's Lifecycle" section in the Yocto Project Development Tasks Manual.
+ ++ Because this manual presents information for many different + topics, supplemental information is recommended for full + comprehension. + For additional introductory information on the Yocto Project, see + the Yocto Project Website. + You can find an introduction to using the Yocto Project by working + through the + Yocto Project Quick Start. +
++ For a comprehensive list of links and other documentation, see the + "Links and Related Documentation" + section in the Yocto Project Reference Manual. +
++ Welcome to the Yocto Project Overview Manual! + This manual introduces the Yocto Project by providing concepts, + software overviews, best-known-methods (BKMs), and any other + high-level introductory information suitable for a new Yocto + Project user. +
++ The following list describes what you can get from this manual: +
++ Major Topic: + Provide a high-level description of this major topic. +
+ Major Topic: + Provide a high-level description of this major topic. +
+ Major Topic: + Provide a high-level description of this major topic. +
+ Major Topic: + Provide a high-level description of this major topic. +
+
++ This manual does not give you the following: +
++ Step-by-step Instructions for Development Tasks: + Instructional procedures reside in other manuals within + the Yocto Project documentation set. + For example, the + Yocto Project Development Tasks Manual + provides examples on how to perform various development + tasks. + As another example, the + Yocto Project Application Development and the Extensible Software Development Kit (eSDK) + manual contains detailed instructions on how to install an + SDK, which is used to develop applications for target + hardware. +
+ Reference Material: + This type of material resides in an appropriate reference + manual. + For example, system variables are documented in the + Yocto Project Reference Manual. + As another example, the + Yocto Project Board Support Package (BSP) Developer's Guide + contains reference information on BSPs. +
+ Detailed Public Information Not Specific to the + Yocto Project: + For example, exhaustive information on how to use the + Source Control Manager Git is better covered with Internet + searches and official Git Documentation than through the + Yocto Project documentation. +
+
++ When the OpenEmbedded build system generates an image or an SDK, + it gets the packages from a package feed area located in the + Build Directory. + The + general Yocto Project Development Environment figure + shows this package feeds area in the upper-right corner. +
++ This section looks a little closer into the package feeds area used + by the build system. + Here is a more detailed look at the area: +
+
+
+ Package feeds are an intermediary step in the build process.
+ The OpenEmbedded build system provides classes to generate
+ different package types, and you specify which classes to enable
+ through the
+ PACKAGE_CLASSES
+ variable.
+ Before placing the packages into package feeds,
+ the build process validates them with generated output quality
+ assurance checks through the
+ insane
+ class.
+
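+ For example, a conf/local.conf entry such as the
+ following (a sketch, not a default) enables both RPM and IPK
+ packaging; the first class listed is the one the build system uses
+ when generating images and SDKs:
+
+ PACKAGE_CLASSES ?= "package_rpm package_ipk"
+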
+ The package feed area resides in the Build Directory. + The directory the build system uses to temporarily store packages + is determined by a combination of variables and the particular + package manager in use. + See the "Package Feeds" box in the illustration and note the + information to the right of that area. + In particular, the following defines where package files are + kept: +
+DEPLOY_DIR:
+ Defined as tmp/deploy in the Build
+ Directory.
+
DEPLOY_DIR_*:
+ Depending on the package manager used, the package type
+ sub-folder.
+ Given RPM, IPK, or DEB packaging and tarball creation, the
+ DEPLOY_DIR_RPM,
+ DEPLOY_DIR_IPK,
+ DEPLOY_DIR_DEB,
+ or
+ DEPLOY_DIR_TAR,
+ variables are used, respectively.
+
PACKAGE_ARCH:
+ Defines architecture-specific sub-folders.
+ For example, packages could exist for the i586 or qemux86
+ architectures.
+
+
+
+ BitBake uses the do_package_write_* tasks to
+ generate packages and place them into the package holding area (e.g.
+ do_package_write_ipk for IPK packages).
+ See the
+ "do_package_write_deb",
+ "do_package_write_ipk",
+ "do_package_write_rpm",
+ and
+ "do_package_write_tar"
+ sections for additional information.
+ As an example, consider a scenario where an IPK packaging manager
+ is being used and package architecture support for both i586
+ and qemux86 exist.
+ Packages for the i586 architecture are placed in
+ build/tmp/deploy/ipk/i586, while packages for
+ the qemux86 architecture are placed in
+ build/tmp/deploy/ipk/qemux86.
+
+ After source code is configured and compiled, the + OpenEmbedded build system analyzes + the results and splits the output into packages: +
+
+
+ The
+ do_package
+ and
+ do_packagedata
+ tasks combine to analyze
+ the files found in the
+ D directory
+ and split them into subsets based on available packages and
+ files.
+ The analyzing process involves the following as well as other
+ items: splitting out debugging symbols,
+ looking at shared library dependencies between packages,
+ and looking at package relationships.
+ The do_packagedata task creates package
+ metadata based on the analysis such that the
+ OpenEmbedded build system can generate the final packages.
+ Working, staged, and intermediate results of the analysis
+ and package splitting process use these areas:
+
PKGD -
+ The destination directory for packages before they are
+ split.
+
PKGDATA_DIR -
+ A shared, global-state directory that holds data
+ generated during the packaging process.
+
PKGDESTWORK -
+ A temporary work area used by the
+ do_package task.
+
PKGDEST -
+ The parent directory for packages after they have
+ been split.
+
+ The FILES
+ variable defines the files that go into each package in
+ PACKAGES.
+ If you want details on how this is accomplished, you can
+ look at the
+ package
+ class.
+
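+ As a minimal sketch of how a recipe can influence this splitting, the
+ following hypothetical lines add an extra "examples" package ahead of
+ the default packages and assign files to it; the package name and the
+ path are illustrative assumptions only:
+
+ PACKAGES =+ "${PN}-examples"
+ FILES_${PN}-examples = "${datadir}/${PN}/examples"
+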
+ Depending on the type of packages being created (RPM, DEB, or
+ IPK), the do_package_write_* task
+ creates the actual packages and places them in the
+ Package Feed area, which is
+ ${TMPDIR}/deploy.
+ You can see the
+ "Package Feeds"
+ section for more detail on that part of the build process.
+
Support for creating feeds usable directly on the target from the
+ deploy/* directories does not exist.
+ Creating such feeds usually requires some kind of feed
+ maintenance mechanism that would upload the new packages
+ into an official package feed (e.g. the
+ Ångström distribution).
+ This functionality is highly distribution-specific
+ and thus is not provided out of the box.
+ +
++ Once source code is fetched and unpacked, BitBake locates + patch files and applies them to the source files: +
+
+
+ The
+ do_patch
+ task processes recipes by
+ using the
+ SRC_URI
+ variable to locate applicable patch files, which by default
+ are *.patch or
+ *.diff files, or any file if
+ "apply=yes" is specified for the file in
+ SRC_URI.
+
+ BitBake finds and applies multiple patches for a single recipe
+ in the order in which it finds the patches.
+ Patches are applied to the recipe's source files located in the
+ S
+ directory.
+
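+ A sketch of a SRC_URI that exercises these rules
+ follows; the URL and file names are placeholders.
+ The .patch file is applied automatically, while the
+ plain text file is applied only because "apply=yes" is given:
+
+ SRC_URI = "http://example.com/releases/foo-${PV}.tar.gz \
+            file://0001-fix-cross-compile.patch \
+            file://extra-change.txt;apply=yes"
+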
+ For more information on how the source directories are + created, see the + "Source Fetching" + section. +
++ Understanding recipe file syntax is important for + writing recipes. + The following list overviews the basic items that make up a + BitBake recipe file. + For more complete BitBake syntax descriptions, see the + "Syntax and Operators" + chapter of the BitBake User Manual. +
+Variable Assignments and Manipulations: + Variable assignments allow a value to be assigned to a + variable. + The assignment can be static text or might include + the contents of other variables. + In addition to the assignment, appending and prepending + operations are also supported.
+The following example shows some of the ways + you can use variables in recipes: +
+
+ S = "${WORKDIR}/postfix-${PV}"
+ CFLAGS += "-DNO_ASM"
+ SRC_URI_append = " file://fixup.patch"
+
++
+Functions:
+ Functions provide a series of actions to be performed.
+ You usually use functions to override the default
+ implementation of a task function or to complement
+ a default function (i.e. append or prepend to an
+ existing function).
+ Standard functions use sh shell
+ syntax, although access to OpenEmbedded variables and
+ internal methods are also available.
The following is an example function from the
+ sed recipe:
+
+ do_install () {
+ autotools_do_install
+ install -d ${D}${base_bindir}
+ mv ${D}${bindir}/sed ${D}${base_bindir}/sed
+ rmdir ${D}${bindir}/
+ }
+
++ It is also possible to implement new functions that + are called between existing tasks as long as the + new functions are not replacing or complementing the + default functions. + You can implement functions in Python + instead of shell. + Neither of these options is common in the majority of + recipes.
+Keywords:
+ BitBake recipes use only a few keywords.
+ You use keywords to include common
+ functions (inherit), load parts
+ of a recipe from other files
+ (include and
+ require) and export variables
+ to the environment (export).
The following example shows the use of some of + these keywords: +
+
+ export POSTCONF = "${STAGING_BINDIR}/postconf"
+ inherit autoconf
+ require otherfile.inc
+
++
+Comments:
+ Any lines that begin with the hash character
+ (#) are treated as comment lines
+ and are ignored:
+
+ # This is a comment ++
+
++
++ This next list summarizes the most important and most commonly + used parts of the recipe syntax. + For more information on these parts of the syntax, you can + reference the + Syntax and Operators + chapter in the BitBake User Manual. +
+Line Continuation: \ -
+ Use the backslash (\)
+ character to split a statement over multiple lines.
+ Place the slash character at the end of the line that
+ is to be continued on the next line:
+
+ VAR = "A really long \ + line" ++
+
++
+
+ Using Variables: ${...} -
+ Use the ${VARNAME} syntax to
+ access the contents of a variable:
+ SRC_URI = "${SOURCEFORGE_MIRROR}/libpng/zlib-${PV}.tar.gz"
+
++
+On the rare occasion that you need a variable expression to be
+ expanded immediately, you can use the
+ := operator instead of
+ = when you make the
+ assignment, but this is not generally needed.
+ +
+Quote All Assignments: "value" -
+ Use double quotes around the value in all variable
+ assignments.
+ value"
+ VAR1 = "${OTHERVAR}"
+ VAR2 = "The version is ${PV}"
+
++
+Conditional Assignment: ?= -
+ Conditional assignment is used to assign a value to
+ a variable, but only when the variable is currently
+ unset.
+ Use the question mark followed by the equal sign
+ (?=) to make a "soft" assignment
+ used for conditional assignment.
+ Typically, "soft" assignments are used in the
+ local.conf file for variables
+ that are allowed to come through from the external
+ environment.
+
Here is an example where
+ VAR1 is set to "New value" if
+ it is currently unset.
+ However, if VAR1 has already been
+ set, it remains unchanged:
+
+ VAR1 ?= "New value" ++
+ In this next example, VAR1
+ is left with the value "Original value":
+
+ VAR1 = "Original value" + VAR1 ?= "New value" ++
+
+Appending: += -
+ Use the plus character followed by the equals sign
+ (+=) to append values to existing
+ variables.
+
Here is an example: +
++ SRC_URI += "file://fix-makefile.patch" ++
+
+Prepending: =+ -
+ Use the equals sign followed by the plus character
+ (=+) to prepend values to existing
+ variables.
+
Here is an example: +
++ VAR =+ "Starts" ++
+
+Appending: _append -
+ Use the _append operator to
+ append values to existing variables.
+ This operator does not add any additional space.
+ Also, the operator is applied after all the
+ += and
+ =+ operators have been applied and
+ after all = assignments have
+ occurred.
+
The following example shows the space being + explicitly added to the start to ensure the appended + value is not merged with the existing value: +
++ SRC_URI_append = " file://fix-makefile.patch" ++
+ You can also use the _append
+ operator with overrides, which results in the actions
+ only being performed for the specified target or
+ machine:
+
+ SRC_URI_append_sh4 = " file://fix-makefile.patch" ++
+
+Prepending: _prepend -
+ Use the _prepend operator to
+ prepend values to existing variables.
+ This operator does not add any additional space.
+ Also, the operator is applied after all the
+ += and
+ =+ operators have been applied and
+ after all = assignments have
+ occurred.
+
The following example shows the space being + explicitly added to the end to ensure the prepended + value is not merged with the existing value: +
+
+ CFLAGS_prepend = "-I${S}/myincludes "
+
+
+ You can also use the _prepend
+ operator with overrides, which results in the actions
+ only being performed for the specified target or
+ machine:
+
+ CFLAGS_prepend_sh4 = "-I${S}/myincludes "
+
++
+Overrides: -
+ You can use overrides to set a value conditionally,
+ typically based on how the recipe is being built.
+ For example, to set the
+ KBRANCH
+ variable's value to "standard/base" for any target
+ MACHINE,
+ except for qemuarm where it should be set to
+ "standard/arm-versatile-926ejs", you would do the
+ following:
+
+ KBRANCH = "standard/base" + KBRANCH_qemuarm = "standard/arm-versatile-926ejs" ++
+ Overrides are also used to separate alternate values
+ of a variable in other situations.
+ For example, when setting variables such as
+ FILES
+ and
+ RDEPENDS
+ that are specific to individual packages produced by
+ a recipe, you should always use an override that
+ specifies the name of the package.
+
Indentation: + Use spaces for indentation rather than tabs. + For shell functions, both currently work. + However, it is a policy decision of the Yocto Project + to use tabs in shell functions. + Realize that some layers have a policy to use spaces + for all indentation. +
Using Python for Complex Operations: ${@python_code} -
+ For more advanced processing, it is possible to use
+ Python code during variable assignments (e.g.
+ search and replacement on a variable).
You indicate Python code using the
+ ${@python_code}
+ syntax for the variable assignment:
+ SRC_URI = "ftp://ftp.info-zip.org/pub/infozip/src/zip${@d.getVar('PV',1).replace('.', '')}.tgz"
+
++
+Shell Function Syntax:
+ Write shell functions as if you were writing a shell
+ script when you describe a list of actions to take.
+ You should ensure that your script works with a generic
+ sh and that it does not require
+ any bash or other shell-specific
+ functionality.
+ The same considerations apply to various system
+ utilities (e.g. sed,
+ grep, awk,
+ and so forth) that you might wish to use.
+ If in doubt, you should check with multiple
+ implementations - including those from BusyBox.
+
+
++ As mentioned briefly in the previous section and also in the + "Workflows" section, + the Yocto Project maintains source repositories at + http://git.yoctoproject.org/cgit.cgi. + If you look at this web-interface of the repositories, each item + is a separate Git repository. +
++ Git repositories use branching techniques that track content + change (not files) within a project (e.g. a new feature or updated + documentation). + Creating a tree-like structure based on project divergence allows + for excellent historical information over the life of a project. + This methodology also allows for an environment from which you can + do lots of local experimentation on projects as you develop + changes or new features. +
+
+ A Git repository represents all development efforts for a given
+ project.
+ For example, the Git repository poky contains
+ all changes and developments for Poky over the course of its
+ entire life.
+ That means that all changes that make up all releases are captured.
+ The repository maintains a complete history of changes.
+
+ You can create a local copy of any repository by "cloning" it
+ with the git clone command.
+ When you clone a Git repository, you end up with an identical
+ copy of the repository on your development system.
+ Once you have a local copy of a repository, you can take steps to
+ develop locally.
+ For examples on how to clone Git repositories, see the
+ "Working With Yocto Project Source Files"
+ section in the Yocto Project Development Tasks Manual.
+
+ It is important to understand that Git tracks content change and
+ not files.
+ Git uses "branches" to organize different development efforts.
+ For example, the poky repository has
+ several branches that include the current "sumo"
+ branch, the "master" branch, and many branches for past
+ Yocto Project releases.
+ You can see all the branches by going to
+ http://git.yoctoproject.org/cgit.cgi/poky/ and
+ clicking on the
+ [...]
+ link beneath the "Branch" heading.
+
+ Each of these branches represents a specific area of development. + The "master" branch represents the current or most recent + development. + All other branches represent offshoots of the "master" branch. +
++ When you create a local copy of a Git repository, the copy has + the same set of branches as the original. + This means you can use Git to create a local working area + (also called a branch) that tracks a specific development branch + from the upstream source Git repository. + In other words, you can define your local Git environment to + work on any development branch in the repository. + To help illustrate, consider the following example Git commands: +
++ $ cd ~ + $ git clone git://git.yoctoproject.org/poky + $ cd poky + $ git checkout -b sumo origin/sumo ++
+ In the previous example after moving to the home directory, the
+ git clone command creates a
+ local copy of the upstream poky Git repository.
+ By default, Git checks out the "master" branch for your work.
+ After changing the working directory to the new local repository
+ (i.e. poky), the
+ git checkout command creates
+ and checks out a local branch named "sumo", which
+ tracks the upstream "origin/sumo" branch.
+ Changes you make while in this branch would ultimately affect
+ the upstream "sumo" branch of the
+ poky repository.
+
++ It is important to understand that when you create and checkout a + local working branch based on a branch name, + your local environment matches the "tip" of that particular + development branch at the time you created your local branch, + which could be different from the files in the "master" branch + of the upstream repository. + In other words, creating and checking out a local branch based on + the "sumo" branch name is not the same as + cloning and checking out the "master" branch of the repository. + Keep reading to see how you create a local snapshot of a Yocto + Project Release. +
+
+ Git uses "tags" to mark specific changes in a repository.
+ Typically, a tag is used to mark a special point such as the final
+ change before a project is released.
+ You can see the tags used with the poky Git
+ repository by going to
+ http://git.yoctoproject.org/cgit.cgi/poky/ and
+ clicking on the
+ [...]
+ link beneath the "Tag" heading.
+
+ Some key tags for the poky repository are
+ jethro-14.0.3,
+ morty-16.0.1,
+ pyro-17.0.0, and
+ sumo-20.0.0.
+ These tags represent Yocto Project releases.
+
+ When you create a local copy of the Git repository, you also + have access to all the tags in the upstream repository. + Similar to branches, you can create and checkout a local working + Git branch based on a tag name. + When you do this, you get a snapshot of the Git repository that + reflects the state of the files when the change was made associated + with that tag. + The most common use is to checkout a working branch that matches + a specific Yocto Project release. + Here is an example: +
++ $ cd ~ + $ git clone git://git.yoctoproject.org/poky + $ cd poky + $ git fetch --all --tags --prune + $ git checkout tags/pyro-17.0.0 -b my-pyro-17.0.0 ++
+ In this example, the name of the top-level directory of your
+ local Yocto Project repository is poky.
+ After moving to the poky directory, the
+ git fetch command makes all the upstream
+ tags available locally in your repository.
+ Finally, the git checkout command
+ creates and checks out a branch named "my-pyro-17.0.0" that is
+ based on the specific change upstream in the repository
+ associated with the "pyro-17.0.0" tag.
+ The files in your repository now exactly match that particular
+ Yocto Project release as it is tagged in the upstream Git
+ repository.
+ It is important to understand that when you create and
+ checkout a local working branch based on a tag, your environment
+ matches a specific point in time and not the entire development
+ branch (i.e. the "tip" of the branch).
+
+ To run Weston inside X11, enabling it as described earlier and + building a Sato image is sufficient. + If you are running your image under Sato, a Weston Launcher + appears in the "Utility" category. +
++ Alternatively, you can run Weston through the command-line + interpreter (CLI), which is better suited for development work. + To run Weston under the CLI, you need to do the following after + your image is built: +
+
+ Run these commands to export
+ XDG_RUNTIME_DIR:
+
+ mkdir -p /tmp/$USER-weston + chmod 0700 /tmp/$USER-weston + export XDG_RUNTIME_DIR=/tmp/$USER-weston ++
+
++ Launch Weston in the shell: +
++ weston ++
+
+
+ Another place the build system can get source files from is
+ through an SCM such as Git or Subversion.
+ In this case, a repository is cloned or checked out.
+ The
+ do_fetch
+ task inside BitBake uses
+ the SRC_URI
+ variable and the argument's prefix to determine the correct
+ fetcher module.
+
DL_DIR
+ directory, see the
+ BB_GENERATE_MIRROR_TARBALLS
+ variable.
+
+ When fetching a repository, BitBake uses the
+ SRCREV
+ variable to determine the specific revision from which to
+ build.
+
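+ As an illustration (the repository URL and revision are placeholders),
+ a recipe that fetches from Git typically combines
+ SRC_URI and SRCREV as follows;
+ setting S to the git subdirectory of
+ WORKDIR is the usual companion setting:
+
+ SRC_URI = "git://git.example.com/myproject.git;protocol=https;branch=master"
+ SRCREV = "f2a1b3c4d5e6f708192a3b4c5d6e7f8091a2b3c4"
+ S = "${WORKDIR}/git"
+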
+ In the
+ general Yocto Project Development Environment figure,
+ the output labeled "Application Development SDK" represents an
+ SDK.
+ The SDK generation process differs depending on whether you build
+ a standard SDK
+ (e.g. bitbake -c populate_sdk imagename)
+ or an extensible SDK
+ (e.g. bitbake -c populate_sdk_ext imagename).
+ This section takes a closer look at this output:
+
+
+
+ The specific form of this output is a self-extracting
+ SDK installer (*.sh) that, when run,
+ installs the SDK, which consists of a cross-development
+ toolchain, a set of libraries and headers, and an SDK
+ environment setup script.
+ Running this installer essentially sets up your
+ cross-development environment.
+ You can think of the cross-toolchain as the "host"
+ part because it runs on the SDK machine.
+ You can think of the libraries and headers as the "target"
+ part because they are built for the target hardware.
+ The environment setup script is added so that you can initialize
+ the environment before using the tools.
+
+ The Yocto Project supports several methods by which you can + set up this cross-development environment. + These methods include downloading pre-built SDK installers + or building and installing your own SDK installer. +
+ For background information on cross-development toolchains + in the Yocto Project development environment, see the + "Cross-Development Toolchain Generation" + section. +
+ For information on setting up a cross-development + environment, see the + Yocto Project Application Development and the Extensible Software Development Kit (eSDK) + manual. +
+ Once built, the SDK installers are written out to the
+ deploy/sdk folder inside the
+ Build Directory
+ as shown in the figure at the beginning of this section.
+ Depending on the type of SDK, several variables exist that help
+ configure these files.
+ The following list shows the variables associated with a standard
+ SDK:
+
DEPLOY_DIR:
+ Points to the deploy
+ directory.
SDKMACHINE:
+ Specifies the architecture of the machine
+ on which the cross-development tools are run to
+ create packages for the target hardware.
+
SDKIMAGE_FEATURES:
+ Lists the features to include in the "target" part
+ of the SDK.
+
TOOLCHAIN_HOST_TASK:
+ Lists packages that make up the host
+ part of the SDK (i.e. the part that runs on
+ the SDKMACHINE).
+ When you use
+ bitbake -c populate_sdk imagename
+ to create the SDK, a set of default packages
+ apply.
+ This variable allows you to add more packages.
TOOLCHAIN_TARGET_TASK:
+ Lists packages that make up the target part
+ of the SDK (i.e. the part built for the
+ target hardware).
+
SDKPATH:
+ Defines the default SDK installation path offered by the
+ installation script.
+
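+ The following local.conf sketch shows how a few of
+ these standard SDK variables might be set; the values are examples
+ rather than defaults, and the nativesdk-cmake package
+ name is only an assumption about what you might want in the host part:
+
+ SDKMACHINE = "x86_64"
+ SDKIMAGE_FEATURES += "staticdev-pkgs"
+ TOOLCHAIN_HOST_TASK_append = " nativesdk-cmake"
+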
+ This next list shows the variables associated with an extensible + SDK: +
+DEPLOY_DIR:
+ Points to the deploy directory.
+
SDK_EXT_TYPE:
+ Controls whether or not shared state artifacts are copied
+ into the extensible SDK.
+ By default, all required shared state artifacts are copied
+ into the SDK.
+
SDK_INCLUDE_PKGDATA:
+ Specifies whether or not packagedata will be included in
+ the extensible SDK for all recipes in the "world" target.
+
SDK_INCLUDE_TOOLCHAIN:
+ Specifies whether or not the toolchain will be included
+ when building the extensible SDK.
+
SDK_LOCAL_CONF_WHITELIST:
+ A list of variables allowed through from the build system
+ configuration into the extensible SDK configuration.
+
SDK_LOCAL_CONF_BLACKLIST:
+ A list of variables not allowed through from the build
+ system configuration into the extensible SDK configuration.
+
SDK_INHERIT_BLACKLIST:
+ A list of classes to remove from the
+ INHERIT
+ value globally within the extensible SDK configuration.
+
+
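+ Similarly, a hedged sketch of extensible SDK settings in
+ local.conf might look like the following; the values
+ shown are examples only:
+
+ SDK_EXT_TYPE = "minimal"
+ SDK_INCLUDE_TOOLCHAIN = "1"
+ SDK_LOCAL_CONF_WHITELIST = "MACHINE DL_DIR"
+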
+
+ The OpenEmbedded build system uses BitBake to generate the
+ Software Development Kit (SDK) installer script for both the
+ standard and extensible SDKs:
+
+
For more information on the
+ do_populate_sdk
+ task, see the
+ "Building an SDK Installer"
+ section in the Yocto Project Application Development and the
+ Extensible Software Development Kit (eSDK) manual.
+
+ Like image generation, the SDK script process consists of
+ several stages and depends on many variables.
+ The do_populate_sdk and
+ do_populate_sdk_ext tasks use these
+ key variables to help create the list of packages to actually
+ install.
+ For information on the variables listed in the figure, see the
+ "Application Development SDK"
+ section.
+
+ The do_populate_sdk task helps create
+ the standard SDK and handles two parts: a target part and a
+ host part.
+ The target part is the part built for the target hardware and
+ includes libraries and headers.
+ The host part is the part of the SDK that runs on the
+ SDKMACHINE.
+
+ The do_populate_sdk_ext task helps create
+ the extensible SDK and handles host and target parts
+ differently than its counterpart does for the standard SDK.
+ For the extensible SDK, the task encapsulates the build system,
+ which includes everything needed (host and target) for the SDK.
+
+ Regardless of the type of SDK being constructed, the
+ tasks perform some cleanup after which a cross-development
+ environment setup script and any needed configuration files
+ are created.
+ The final output is the Cross-development
+ toolchain installation script (.sh file),
+ which includes the environment setup script.
+
+ The description of tasks so far assumes that BitBake needs to + build everything and there are no prebuilt objects available. + BitBake does support skipping tasks if prebuilt objects are + available. + These objects are usually made available in the form of a + shared state (sstate) cache. +
+SSTATE_DIR
+ and
+ SSTATE_MIRRORS
+ variables.
+ +
+
+ The idea of a setscene task (i.e.
+ do_taskname_setscene)
+ is a version of the task where
+ instead of building something, BitBake can skip to the end
+ result and simply place a set of files into specific locations
+ as needed.
+ In some cases, it makes sense to have a setscene task variant
+ (e.g. generating package files in the
+ do_package_write_* task).
+ In other cases, it does not make sense, (e.g. a
+ do_patch
+ task or
+ do_unpack
+ task) since the work involved would be equal to or greater than
+ the underlying task.
+
+ In the OpenEmbedded build system, the common tasks that have
+ setscene variants are
+ do_package,
+ do_package_write_*,
+ do_deploy,
+ do_packagedata,
+ and
+ do_populate_sysroot.
+ Notice that these are most of the tasks whose output is an
+ end result.
+
+ The OpenEmbedded build system has knowledge of the relationship
+ between these tasks and other tasks that precede them.
+ For example, if BitBake runs
+ do_populate_sysroot_setscene for
+ something, there is little point in running any of the
+ do_fetch, do_unpack,
+ do_patch,
+ do_configure,
+ do_compile, and
+ do_install tasks.
+ However, if do_package needs to be run,
+ BitBake would need to run those other tasks.
+
+ It becomes more complicated if everything can come from an
+ sstate cache because some objects are simply not required at
+ all.
+ For example, you do not need a compiler or native tools, such
+ as quilt, if there is nothing to compile or patch.
+ If the do_package_write_* packages are
+ available from sstate, BitBake does not need the
+ do_package task data.
+
++ To handle all these complexities, BitBake runs in two phases. + The first is the "setscene" stage. + During this stage, BitBake first checks the sstate cache for + any targets it is planning to build. + BitBake does a fast check to see if the object exists rather + than a complete download. + If nothing exists, the setscene stage completes and the second + phase, the main build, proceeds. +
++ If objects are found in the sstate cache, the OpenEmbedded + build system works backwards from the end targets specified + by the user. + For example, if an image is being built, the OpenEmbedded build + system first looks for the packages needed for that image and + the tools needed to construct an image. + If those are available, the compiler is not needed. + Thus, the compiler is not even downloaded. + If something was found to be unavailable, or the download or + setscene task fails, the OpenEmbedded build system then tries + to install dependencies, such as the compiler, from the cache. +
+
+ The availability of objects in the sstate cache is handled by
+ the function specified by the
+ BB_HASHCHECK_FUNCTION
+ variable and returns a list of the objects that are available.
+ The function specified by the
+ BB_SETSCENE_DEPVALID
+ variable is the function that determines whether a given
+ dependency needs to be followed, and whether for any given
+ relationship the function needs to be passed.
+ The function returns a True or False value.
+
+ By design, the OpenEmbedded build system builds everything from + scratch unless BitBake can determine that parts do not need to be + rebuilt. + Fundamentally, building from scratch is attractive as it means all + parts are built fresh and there is no possibility of stale data + causing problems. + When developers hit problems, they typically default back to + building from scratch so they know the state of things from the + start. +
++ Building an image from scratch is both an advantage and a + disadvantage to the process. + As mentioned in the previous paragraph, building from scratch + ensures that everything is current and starts from a known state. + However, building from scratch also takes much longer as it + generally means rebuilding things that do not necessarily need + to be rebuilt. +
++ The Yocto Project implements shared state code that supports + incremental builds. + The implementation of the shared state code answers the following + questions that were fundamental roadblocks within the OpenEmbedded + incremental build support system: +
++ What pieces of the system have changed and what pieces have + not changed? +
+ How are changed pieces of software removed and replaced? +
+ How are pre-built components that do not need to be rebuilt + from scratch used when they are available? +
+
++ For the first question, the build system detects changes in the + "inputs" to a given task by creating a checksum (or signature) of + the task's inputs. + If the checksum changes, the system assumes the inputs have changed + and the task needs to be rerun. + For the second question, the shared state (sstate) code tracks + which tasks add which output to the build process. + This means the output from a given task can be removed, upgraded + or otherwise manipulated. + The third question is partly addressed by the solution for the + second question assuming the build system can fetch the sstate + objects from remote locations and install them if they are deemed + to be valid. +
+PR
+ information as part of the shared state packages.
+ Consequently, considerations exist that affect maintaining
+ shared state feeds.
+ For information on how the OpenEmbedded build system
+ works with packages and can track incrementing
+ PR information, see the
+ "Automatically Incrementing a Binary Package Revision Number"
+ section in the Yocto Project Development Tasks Manual.
+ +
++ The rest of this section goes into detail about the overall + incremental build architecture, the checksums (signatures), shared + state, and some tips and tricks. +
++ Checksums and dependencies, as discussed in the previous + section, solve half the problem of supporting a shared state. + The other part of the problem is being able to use checksum + information during the build and being able to reuse or rebuild + specific components. +
+
+ The
+ sstate
+ class is a relatively generic implementation of how to
+ "capture" a snapshot of a given task.
+ The idea is that the build process does not care about the
+ source of a task's output.
+ Output could be freshly built or it could be downloaded and
+ unpacked from somewhere - the build process does not need to
+ worry about its origin.
+
+ There are two types of output, one is just about creating a
+ directory in
+ WORKDIR.
+ A good example is the output of either
+ do_install
+ or
+ do_package.
+ The other type of output occurs when a set of data is merged
+ into a shared directory tree such as the sysroot.
+
+ The Yocto Project team has tried to keep the details of the
+ implementation hidden in sstate class.
+ From a user's perspective, adding shared state wrapping to a task
+ is as simple as this
+ do_deploy
+ example taken from the
+ deploy
+ class:
+
+ DEPLOYDIR = "${WORKDIR}/deploy-${PN}"
+ SSTATETASKS += "do_deploy"
+ do_deploy[sstate-inputdirs] = "${DEPLOYDIR}"
+ do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
+
+ python do_deploy_setscene () {
+ sstate_setscene(d)
+ }
+ addtask do_deploy_setscene
+ do_deploy[dirs] = "${DEPLOYDIR} ${B}"
+
++ The following list explains the previous example: +
+
+ Adding "do_deploy" to SSTATETASKS
+ adds some required sstate-related processing, which is
+ implemented in the
+ sstate
+ class, to before and after the
+ do_deploy
+ task.
+
+ The
+ do_deploy[sstate-inputdirs] = "${DEPLOYDIR}"
+ declares that do_deploy places its
+ output in ${DEPLOYDIR} when run
+ normally (i.e. when not using the sstate cache).
+ This output becomes the input to the shared state cache.
+
+ The
+ do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
+ line causes the contents of the shared state cache to be
+ copied to ${DEPLOY_DIR_IMAGE}.
+
If do_deploy is not already in
+ the shared state cache or if its input checksum
+ (signature) has changed from when the output was
+ cached, the task will be run to populate the shared
+ state cache, after which the contents of the shared
+ state cache is copied to
+ ${DEPLOY_DIR_IMAGE}.
+ If do_deploy is in the shared
+ state cache and its signature indicates that the
+ cached output is still valid (i.e. if no
+ relevant task inputs have changed), then the
+ contents of the shared state cache will be copied
+ directly to
+ ${DEPLOY_DIR_IMAGE} by the
+ do_deploy_setscene task
+ instead, skipping the
+ do_deploy task.
+ +
++ The following task definition is glue logic needed to + make the previous settings effective: +
+
+ python do_deploy_setscene () {
+ sstate_setscene(d)
+ }
+ addtask do_deploy_setscene
+
+
+ sstate_setscene() takes the flags
+ above as input and accelerates the
+ do_deploy task through the
+ shared state cache if possible.
+ If the task was accelerated,
+ sstate_setscene() returns True.
+ Otherwise, it returns False, and the normal
+ do_deploy task runs.
+ For more information, see the
+ "setscene"
+ section in the BitBake User Manual.
+
+ The do_deploy[dirs] = "${DEPLOYDIR} ${B}"
+ line creates ${DEPLOYDIR} and
+ ${B} before the
+ do_deploy task runs, and also sets
+ the current working directory of
+ do_deploy to
+ ${B}.
+ For more information, see the
+ "Variable Flags"
+ section in the BitBake User Manual.
+
If sstate-inputdirs and
+ sstate-outputdirs would be the
+ same, you can use
+ sstate-plaindirs.
+ For example, to preserve the
+ ${PKGD} and
+ ${PKGDEST} output from the
+ do_package
+ task, use the following:
+
+ do_package[sstate-plaindirs] = "${PKGD} ${PKGDEST}"
+
++
+
+ sstate-inputdirs and
+ sstate-outputdirs can also be used
+ with multiple directories.
+ For example, the following declares
+ PKGDESTWORK and
+ SHLIBSWORKDIR as shared state
+ input directories, which populates the shared state
+ cache, and PKGDATA_DIR and
+ SHLIBSDIR as the corresponding
+ shared state output directories:
+
+ do_package[sstate-inputdirs] = "${PKGDESTWORK} ${SHLIBSWORKDIR}"
+ do_package[sstate-outputdirs] = "${PKGDATA_DIR} ${SHLIBSDIR}"
+
++
++ These methods also include the ability to take a + lockfile when manipulating shared state directory + structures, for cases where file additions or removals + are sensitive: +
+
+ do_package[sstate-lockfile] = "${PACKAGELOCK}"
+
++
++
+
+ Behind the scenes, the shared state code works by looking in
+ SSTATE_DIR
+ and
+ SSTATE_MIRRORS
+ for shared state files.
+ Here is an example:
+
+ SSTATE_MIRRORS ?= "\ + file://.* http://someserver.tld/share/sstate/PATH;downloadfilename=PATH \n \ + file://.* file:///some/local/dir/sstate/PATH" ++
+
+The shared state directory (SSTATE_DIR) is organized into
+ two-character subdirectories, where the subdirectory
+ names are based on the first two characters of the hash.
+ If the shared state directory structure for a mirror has the
+ same structure as SSTATE_DIR, you must
+ specify "PATH" as part of the URI to enable the build system
+ to map to the appropriate subdirectory.
+ +
++ The shared state package validity can be detected just by + looking at the filename since the filename contains the task + checksum (or signature) as described earlier in this section. + If a valid shared state package is found, the build process + downloads it and uses it to accelerate the task. +
+
+ The build processes use the *_setscene
+ tasks for the task acceleration phase.
+ BitBake goes through this phase before the main execution
+ code and tries to accelerate any tasks for which it can find
+ shared state packages.
+ If a shared state package for a task is available, the
+ shared state package is used.
+ This means the task and any tasks on which it is dependent
+ are not executed.
+
+ As a real-world example, when building an IPK-based
+ image, the aim is that only the
+ do_package_write_ipk
+ tasks would have their shared state packages fetched and
+ extracted.
+ Since the sysroot is not used, it would never get extracted.
+ This is another reason why a task-based approach is preferred
+ over a recipe-based approach, which would have to install the
+ output from every task.
+
+ The software layer provides the Metadata for additional + software packages used during the build. + This layer does not include Metadata that is specific to the + distribution or the machine, which are found in their + respective layers. +
++ This layer contains any new recipes that your project needs + in the form of recipe files. +
++ The first stages of building a recipe are to fetch and unpack + the source code: +
+
+
+ The
+ do_fetch
+ and
+ do_unpack
+ tasks fetch the source files and unpack them into the work
+ directory.
+
For every local file (e.g. file://)
+ that is part of a recipe's
+ SRC_URI
+ statement, the OpenEmbedded build system takes a checksum
+ of the file for the recipe and inserts the checksum into
+ the signature for the do_fetch task.
+ If any local file has been modified, the
+ do_fetch task and all tasks that
+ depend on it are re-executed.
+
+ By default, everything is accomplished in the
+ Build Directory,
+ which has a defined structure.
+ For additional general information on the Build Directory,
+ see the
+ "build/"
+ section in the Yocto Project Reference Manual.
+
+ Unpacked source files are pointed to by the
+ S
+ variable.
+ Each recipe has an area in the Build Directory where the
+ unpacked source code resides.
+ The name of that directory for any given recipe is defined from
+ several different variables.
+ You can see the variables that define these directories
+ by looking at the figure:
+
TMPDIR -
+ The base directory where the OpenEmbedded build system
+ performs all its work during the build.
+
PACKAGE_ARCH -
+ The architecture of the built package or packages.
+
TARGET_OS -
+ The operating system of the target device.
+
PN -
+ The name of the built package.
+
PV -
+ The version of the recipe used to build the package.
+
PR -
+ The revision of the recipe used to build the package.
+
WORKDIR -
+ The location within TMPDIR where
+ a specific package is built.
+
S -
+ Contains the unpacked source files for a given recipe.
+
+
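+ To make the combination concrete, the default work directory for a
+ target recipe usually takes a form similar to the first line below,
+ where the "-poky-" segment comes from the distribution's vendor
+ string; the second line is an illustrative example rather than a
+ guaranteed path:
+
+ ${TMPDIR}/work/${PACKAGE_ARCH}-poky-${TARGET_OS}/${PN}/${PV}-${PR}
+ tmp/work/i586-poky-linux/busybox/1.27.2-r0
+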
+
+ Two kinds of mirrors exist: pre-mirrors and regular mirrors.
+ The
+ PREMIRRORS
+ and
+ MIRRORS
+ variables point to these, respectively.
+ BitBake checks pre-mirrors before looking upstream for any
+ source files.
+ Pre-mirrors are appropriate when you have a shared directory
+ that is not a directory defined by the
+ DL_DIR
+ variable.
+ A Pre-mirror typically points to a shared directory that is
+ local to your organization.
+
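+ A local.conf sketch of a pre-mirror that points at an
+ organization-internal server might look like the following; the
+ server URL is a placeholder:
+
+ PREMIRRORS_prepend = "\
+     git://.*/.*   http://downloads.example.com/mirror/sources/ \n \
+     ftp://.*/.*   http://downloads.example.com/mirror/sources/ \n \
+     http://.*/.*  http://downloads.example.com/mirror/sources/ \n \
+     https://.*/.* http://downloads.example.com/mirror/sources/ \n"
+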
+ Regular mirrors can be any site across the Internet that is + used as an alternative location for source code should the + primary site not be functioning for some reason or another. +
++ In order for the OpenEmbedded build system to create an image or + any target, it must be able to access source files. + The + general Yocto Project Development Environment figure + represents source files using the "Upstream Project Releases", + "Local Projects", and "SCMs (optional)" boxes. + The figure represents mirrors, which also play a role in locating + source files, with the "Source Mirror(s)" box. +
++ The method by which source files are ultimately organized is + a function of the project. + For example, for released software, projects tend to use tarballs + or other archived files that can capture the state of a release + guaranteeing that it is statically represented. + On the other hand, for a project that is more dynamic or + experimental in nature, a project might keep source files in a + repository controlled by a Source Control Manager (SCM) such as + Git. + Pulling source from a repository allows you to control + the point in the repository (the revision) from which you want to + build software. + Finally, a combination of the two might exist, which would give the + consumer a choice when deciding where to get source files. +
+
+ BitBake uses the
+ SRC_URI
+ variable to point to source files regardless of their location.
+ Each recipe must have a SRC_URI variable
+ that points to the source.
+
+ Another area that plays a significant role in where source files
+ come from is pointed to by the
+ DL_DIR
+ variable.
+ This area is a cache that can hold previously downloaded source.
+ You can also instruct the OpenEmbedded build system to create
+ tarballs from Git repositories, which is not the default behavior,
+ and store them in the DL_DIR by using the
+ BB_GENERATE_MIRROR_TARBALLS
+ variable.
+
+ Judicious use of a DL_DIR directory can
+ save the build system a trip across the Internet when looking
+ for files.
+ A good method for using a download directory is to have
+ DL_DIR point to an area outside of your
+ Build Directory.
+ Doing so allows you to safely delete the Build Directory
+ if needed without fear of removing any downloaded source file.
+
+ The remainder of this section provides a deeper look into the + source files and the mirrors. + Here is a more detailed look at the source file area of the + base figure: +
+
+
+ For each task that completes successfully, BitBake writes a
+ stamp file into the
+ STAMPS_DIR
+ directory.
+ The beginning of the stamp file's filename is determined by the
+ STAMP
+ variable, and the end of the name consists of the task's name
+ and current
+ input checksum.
+
This naming scheme assumes that
+ BB_SIGNATURE_HANDLER
+ is "OEBasicHash", which is almost always the case in
+ current OpenEmbedded.
+ + To determine if a task needs to be rerun, BitBake checks if a + stamp file with a matching input checksum exists for the task. + If such a stamp file exists, the task's output is assumed to + exist and still be valid. + If the file does not exist, the task is rerun. +
+The stamp mechanism is more general than the shared + state (sstate) cache mechanism described in the + "Setscene Tasks and Shared State" + section. + BitBake avoids rerunning any task that has a valid + stamp file, not just tasks that can be accelerated through + the sstate cache.
+However, you should realize that stamp files only
+ serve as a marker that some work has been done and that
+ these files do not record task output.
+ The actual task output would usually be somewhere in
+ TMPDIR
+ (e.g. in some recipe's
+ WORKDIR).
+ What the sstate cache mechanism adds is a way to cache task
+ output that can then be shared between build machines.
+
+ Since STAMPS_DIR is usually a subdirectory
+ of TMPDIR, removing
+ TMPDIR will also remove
+ STAMPS_DIR, which means tasks will
+ properly be rerun to repopulate TMPDIR.
+
+ If you want some task to always be considered "out of date",
+ you can mark it with the
+ nostamp
+ varflag.
+ If some other task depends on such a task, then that task will
+ also always be considered out of date, which might not be what
+ you want.
+
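+ For example, a recipe could mark a hypothetical custom task as always
+ out of date like this:
+
+ do_mytask[nostamp] = "1"
+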
+ For details on how to view information about a task's + signature, see the + "Viewing Task Variable Dependencies" + section in the Yocto Project Development Tasks Manual. +
++ The code in the build system that supports incremental builds + is not simple code. + This section presents some tips and tricks that help you work + around issues related to shared state code. +
++ Upstream project releases exist anywhere in the form of an + archived file (e.g. tarball or zip file). + These files correspond to individual recipes. + For example, the figure uses specific releases each for + BusyBox, Qt, and Dbus. + An archive file can be for any released product that can be + built using a recipe. +
++ User configuration helps define the build. + Through user configuration, you can tell BitBake the + target architecture for which you are building the image, + where to store downloaded source, and other build properties. +
++ The following figure shows an expanded representation of the + "User Configuration" box of the + general Yocto Project Development Environment figure: +
++
+
+
+ BitBake needs some basic configuration files in order to complete
+ a build.
+ These files are *.conf files.
+ The minimally necessary ones reside as example files in the
+ Source Directory.
+ For simplicity, this section refers to the Source Directory as
+ the "Poky Directory."
+
+ When you clone the poky Git repository or you
+ download and unpack a Yocto Project release, you can set up the
+ Source Directory to be named anything you want.
+ For this discussion, the cloned repository uses the default
+ name poky.
+
+
+
+ The meta-poky layer inside Poky contains
+ a conf directory that has example
+ configuration files.
+ These example files are used as a basis for creating actual
+ configuration files when you source the build environment
+ script
+ (i.e.
+ oe-init-build-env).
+
+ Sourcing the build environment script creates a
+ Build Directory
+ if one does not already exist.
+ BitBake uses the Build Directory for all its work during builds.
+ The Build Directory has a conf directory that
+ contains default versions of your local.conf
+ and bblayers.conf configuration files.
+ These default configuration files are created only if versions
+ do not already exist in the Build Directory at the time you
+ source the build environment setup script.
+
+ Because the Poky repository is fundamentally an aggregation of
+ existing repositories, some users might be familiar with running
+ the oe-init-build-env script in the context
+ of separate OpenEmbedded-Core and BitBake repositories rather than a
+ single Poky repository.
+ This discussion assumes the script is executed from within a cloned
+ or unpacked version of Poky.
+
+ Depending on where the script is sourced, different sub-scripts
+ are called to set up the Build Directory (Yocto or OpenEmbedded).
+ Specifically, the script
+ scripts/oe-setup-builddir inside the
+ poky directory sets up the Build Directory and seeds the directory
+ (if necessary) with configuration files appropriate for the
+ Yocto Project development environment.
+
The scripts/oe-setup-builddir script
+ uses the $TEMPLATECONF variable to
+ determine which sample configuration files to locate.
+ +
+
+ The local.conf file provides many
+ basic variables that define a build environment.
+ Here is a list of a few.
+ To see the default configurations in a local.conf
+ file created by the build environment script, see the
+ local.conf.sample in the
+ meta-poky layer:
+
Parallelism Options:
+ Controlled by the
+ BB_NUMBER_THREADS,
+ PARALLEL_MAKE,
+ and
+ BB_NUMBER_PARSE_THREADS
+ variables.
Target Machine Selection:
+ Controlled by the
+ MACHINE
+ variable.
Download Directory:
+ Controlled by the
+ DL_DIR
+ variable.
Shared State Directory:
+ Controlled by the
+ SSTATE_DIR
+ variable.
Build Output:
+ Controlled by the
+ TMPDIR
+ variable.
+
+The variables set in the conf/local.conf
+ file can also be set in the
+ conf/site.conf and
+ conf/auto.conf configuration files.
+ +
+
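+ Pulling these together, a minimal local.conf might
+ contain entries such as the following; the values are examples only,
+ and TOPDIR simply refers to the Build Directory:
+
+ BB_NUMBER_THREADS = "8"
+ PARALLEL_MAKE = "-j 8"
+ MACHINE ?= "qemux86"
+ DL_DIR ?= "${TOPDIR}/downloads"
+ SSTATE_DIR ?= "${TOPDIR}/sstate-cache"
+ TMPDIR = "${TOPDIR}/tmp"
+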
+ The bblayers.conf file tells BitBake what
+ layers you want considered during the build.
+ By default, the layers listed in this file include layers
+ minimally needed by the build system.
+ However, you must manually add any custom layers you have created.
+ You can find more information on working with the
+ bblayers.conf file in the
+ "Enabling Your Layer"
+ section in the Yocto Project Development Tasks Manual.
+
+ The files site.conf and
+ auto.conf are not created by the environment
+ initialization script.
+ If you want the site.conf file, you need to
+ create that yourself.
+ The auto.conf file is typically created by
+ an autobuilder:
+
site.conf:
+ You can use the conf/site.conf
+ configuration file to configure multiple build directories.
+ For example, suppose you had several build environments and
+ they shared some common features.
+ You can set these default build properties here.
+ A good example is perhaps the packaging format to use
+ through the
+ PACKAGE_CLASSES
+ variable.
One useful scenario for using the
+ conf/site.conf file is to extend your
+ BBPATH
+ variable to include the path to a
+ conf/site.conf.
+ Then, when BitBake looks for Metadata using
+ BBPATH, it finds the
+ conf/site.conf file and applies your
+ common configurations found in the file.
+ To override configurations in a particular build directory,
+ alter the similar configurations within that build
+ directory's conf/local.conf file.
+
auto.conf:
+ The file is usually created and written to by
+ an autobuilder.
+ The settings put into the file are typically the same as
+ you would find in the conf/local.conf
+ or the conf/site.conf files.
+
+
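+ As a sketch of the site.conf scenario described in the
+ list above, a shared conf/site.conf found through
+ BBPATH could centralize settings such as these; the
+ paths are placeholders:
+
+ PACKAGE_CLASSES ?= "package_ipk"
+ DL_DIR ?= "/srv/shared/downloads"
+ SSTATE_DIR ?= "/srv/shared/sstate-cache"
+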
++ You can edit all configuration files to further define + any particular build environment. + This process is represented by the "User Configuration Edits" + box in the figure. +
+
+ When you launch your build with the
+ bitbake
+ command, BitBake sorts out the configurations to ultimately
+ define your build environment.
+ It is important to understand that the OpenEmbedded build system
+ reads the configuration files in a specific order:
+ site.conf, auto.conf,
+ and local.conf.
+ And, the build system applies the normal assignment statement
+ rules.
+ Because the files are parsed in a specific order, variable
+ assignments for the same variable could be affected.
+ For example, if the auto.conf file and
+ the local.conf set
+ variable1 to different values, because
+ the build system parses local.conf after
+ auto.conf,
+ variable1 is assigned the value from
+ the local.conf file.
+
+ As mentioned in the previous section, the
+ LIC_FILES_CHKSUM variable lists all
+ the important files that contain the license text for the
+ source code.
+ It is possible to specify a checksum for an entire file,
+ or a specific section of a file (specified by beginning and
+ ending line numbers with the "beginline" and "endline"
+ parameters, respectively).
+ The latter is useful for source files with a license
+ notice header, README documents, and so forth.
+ If you do not use the "beginline" parameter, then it is
+ assumed that the text begins on the first line of the file.
+ Similarly, if you do not use the "endline" parameter,
+ it is assumed that the license text ends with the last
+ line of the file.
+
+ The "md5" parameter stores the md5 checksum of the license + text. + If the license text changes in any way as compared to + this parameter then a mismatch occurs. + This mismatch triggers a build failure and notifies + the developer. + Notification allows the developer to review and address + the license text changes. + Also note that if a mismatch occurs during the build, + the correct md5 checksum is placed in the build log and + can be easily copied to the recipe. +
+
+ There is no limit to how many files you can specify using
+ the LIC_FILES_CHKSUM variable.
+ Generally, however, every project requires a few
+ specifications for license tracking.
+ Many projects have a "COPYING" file that stores the
+ license information for all the source code files.
+ This practice allows you to just track the "COPYING"
+ file as long as it is kept up to date.
+
+ If you specify an empty or invalid "md5" + parameter, BitBake returns an md5 mis-match + error and displays the correct "md5" parameter + value during the build. + The correct parameter is also captured in + the build log. +
+ If the whole file contains only license text, + you do not need to use the "beginline" and + "endline" parameters. +
+
++ BitBake is the tool at the heart of the OpenEmbedded build + system and is responsible for parsing the + Metadata, + generating a list of tasks from it, and then executing those + tasks. +
++ This section briefly introduces BitBake. + If you want more information on BitBake, see the + BitBake User Manual. +
++ To see a list of the options BitBake supports, use either of + the following commands: +
++ $ bitbake -h + $ bitbake --help ++
+
+
+ The most common usage for BitBake is
+ bitbake packagename,
+ where packagename is the name of the
+ package you want to build (referred to as the "target" in this
+ manual).
+ The target often equates to the first part of a recipe's
+ filename (e.g. "foo" for a recipe named
+ foo_1.3.0-r0.bb).
+ So, to process the
+ matchbox-desktop_1.2.3.bb recipe file, you
+ might type the following:
+
+ $ bitbake matchbox-desktop ++
+ Several different versions of
+ matchbox-desktop might exist.
+ BitBake chooses the one selected by the distribution
+ configuration.
+ You can get more details about how BitBake chooses between
+ different target versions and providers in the
+ "Preferences"
+ section of the BitBake User Manual.
+
+ BitBake also tries to execute any dependent tasks first.
+ So for example, before building
+ matchbox-desktop, BitBake would build a
+ cross compiler and glibc if they had not
+ already been built.
+
+ A useful BitBake option to consider is the
+ -k or --continue
+ option.
+ This option instructs BitBake to try and continue processing
+ the job as long as possible even after encountering an error.
+ When an error occurs, the target that failed and those that
+ depend on it cannot be remade.
+ However, when you use this option other dependencies can
+ still be processed.
+
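+ For example, to build an image while continuing past failures in
+ unrelated targets as far as possible:
+
+ $ bitbake -k core-image-minimal
+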
+ Class files (.bbclass) contain information
+ that is useful to share between
+ Metadata
+ files.
+ An example is the
+ autotools
+ class, which contains common settings for any application that
+ Autotools uses.
+ The
+ "Classes"
+ chapter in the Yocto Project Reference Manual provides
+ details about classes and how to use them.
+
+ The configuration files (.conf) define
+ various configuration variables that govern the OpenEmbedded
+ build process.
+ These files fall into several areas that define machine
+ configuration options, distribution configuration options,
+ compiler tuning options, general common configuration options,
+ and user configuration options in
+ local.conf, which is found in the
+ Build Directory.
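+
+ For illustration, a couple of commonly adjusted settings in
+ local.conf might look like the following (the values shown are
+ only examples; the "?=" soft assignment lets the external
+ environment override them):
+
+     MACHINE ?= "qemux86"
+     DL_DIR ?= "${TOPDIR}/downloads"
+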
+
+ Files that have the .bb suffix are
+ "recipes" files.
+ In general, a recipe contains information about a single piece
+ of software.
+ This information includes the location from which to download
+ the unaltered source, any source patches to be applied to that
+ source (if needed), which special configuration options to
+ apply, how to compile the source files, and how to package the
+ compiled output.
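+
+ A heavily simplified sketch of a recipe illustrates these pieces
+ (the project name, URL, and checksums are placeholders, not real
+ values):
+
+     # Placeholder values; a real recipe supplies its own checksums
+     SUMMARY = "Example application"
+     LICENSE = "MIT"
+     LIC_FILES_CHKSUM = "file://COPYING;md5=xxxx"
+
+     SRC_URI = "http://example.com/releases/example-${PV}.tar.gz \
+                file://fix-makefile.patch"
+
+     inherit autotools
+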
+
+ The term "package" is sometimes used to refer to recipes.
+ However, since the word "package" is used for the packaged
+ output from the OpenEmbedded build system (i.e.
+ .ipk or .deb files),
+ this document avoids using the term "package" when referring
+ to recipes.
+
+ The license of an upstream project might change in the future.
+ In order to prevent these changes from going unnoticed, the
+ LIC_FILES_CHKSUM
+ variable tracks changes to the license text. The checksums are
+ validated at the end of the configure step, and if the
+ checksums do not match, the build will fail.
+
+ The LIC_FILES_CHKSUM
+ variable contains checksums of the license text in the
+ source code for the recipe.
+ Following is an example of how to specify
+ LIC_FILES_CHKSUM:
+
+ LIC_FILES_CHKSUM = "file://COPYING;md5=xxxx \ + file://licfile1.txt;beginline=5;endline=29;md5=yyyy \ + file://licfile2.txt;endline=50;md5=zzzz \ + ..." ++
+
+
+ When using "beginline" and "endline", realize
+ that line numbering begins with one and not
+ zero.
+ Also, the specified line range is inclusive (i.e.
+ lines five through and including 29 in the
+ previous example for
+ licfile1.txt).
+
+ When a license check fails, the selected license + text is included as part of the QA message. + Using this output, you can determine the exact + start and finish for the needed license text. +
+
+
+ The build system uses the
+ S
+ variable as the default directory when searching files
+ listed in LIC_FILES_CHKSUM.
+ The previous example employs the default directory.
+
+ Consider this next example: +
+
+ LIC_FILES_CHKSUM = "file://src/ls.c;beginline=5;endline=16;\
+ md5=bb14ed3c4cda583abc85401304b5cd4e"
+ LIC_FILES_CHKSUM = "file://${WORKDIR}/license.html;md5=5c94767cedb5d6987c902ac850ded2c6"
+
++
+
+ The first line locates a file in
+ ${S}/src/ls.c and isolates lines five
+ through 16 as license text.
+ The second line refers to a file in
+ WORKDIR.
+
+ Note that the LIC_FILES_CHKSUM variable is
+ mandatory for all recipes, unless the
+ LICENSE variable is set to "CLOSED".
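+
+ In that case, a recipe for code with no license files to track can
+ simply declare:
+
+     LICENSE = "CLOSED"
+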
+
+ The Wayland protocol libraries and the reference Weston
+ compositor ship as integrated packages in the
+ meta layer of the
+ Source Directory.
+ Specifically, you can find the recipes that build both Wayland
+ and Weston at
+ meta/recipes-graphics/wayland.
+
+ You can build both the Wayland and Weston packages for use only + with targets that accept the + Mesa 3D and Direct Rendering Infrastructure, + which is also known as Mesa DRI. + This implies that you cannot build and use the packages if your + target uses, for example, the + Intel Embedded Media + and Graphics Driver + (Intel EMGD) that + overrides Mesa DRI. +
++
++ Wayland + is a computer display server protocol that + provides a method for compositing window managers to communicate + directly with applications and video hardware and expects them to + communicate with input hardware using other libraries. + Using Wayland with supporting targets can result in better control + over graphics frame rendering than an application might otherwise + achieve. +
++ The Yocto Project provides the Wayland protocol libraries and the + reference + Weston + compositor as part of its release. + This section describes what you need to do to implement Wayland and + use the compositor when building an image for a supporting target. +
+
++ x32 processor-specific Application Binary Interface + (x32 psABI) + is a native 32-bit processor-specific ABI for + Intel 64 (x86-64) + architectures. + An ABI defines the calling conventions between functions in a + processing environment. + The interface determines what registers are used and what the sizes are + for various C data types. +
+ Some processing environments prefer using 32-bit applications even
+ when running on Intel 64-bit platforms.
+ Consider the i386 psABI, which is a very old 32-bit ABI for Intel
+ 64-bit platforms.
+ The i386 psABI does not provide efficient use and access of the
+ Intel 64-bit processor resources, leaving the system underutilized.
+ Now consider the x86_64 psABI.
+ This ABI is newer and uses 64 bits for data sizes and program
+ pointers.
+ The extra bits increase the footprint size of the programs and
+ libraries, and also increase the memory and file system size
+ requirements.
+ Executing under the x32 psABI enables user programs to utilize CPU
+ and system resources more efficiently while keeping the memory
+ footprint of the applications low.
+ Extra bits are used for registers but not for addressing mechanisms.
+
++ The Yocto Project supports the final specifications of x32 psABI + as follows: +
++ You can create packages and images in x32 psABI format on + x86_64 architecture targets. +
+ You can successfully build recipes with the x32 toolchain. +
+ You can create and boot
+ core-image-minimal and
+ core-image-sato images.
+
+ RPM Package Manager (RPM) support exists for x32 binaries. +
+ Support for large images exists. +
+
++ For steps on how to use x32 psABI, see the + "Using x32 psABI" + section in the Yocto Project Development Tasks Manual. +
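+
+ As a rough sketch only (the authoritative steps are in the section
+ referenced above), selecting the x32 tune in your configuration is
+ the core of the setup; the tune name below is assumed from the
+ x86-64 tune include files:
+
+     # Assumed tune name; verify against the referenced section
+     MACHINE = "qemux86-64"
+     DEFAULTTUNE = "x86-64-x32"
+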
++ The + BitBake + task executor together with various types of configuration files + form the OpenEmbedded Core. + This section overviews these components by describing their use and + how they interact. +
++ BitBake handles the parsing and execution of the data files. + The data itself is of various types: +
++ Recipes: + Provides details about particular pieces of software. +
+ Class Data: + Abstracts common build information (e.g. how to build a + Linux kernel). +
+ Configuration Data: + Defines machine-specific settings, policy decisions, and + so forth. + Configuration data acts as the glue to bind everything + together. +
+
++ BitBake knows how to combine multiple data sources together and + refers to each data source as a layer. + For information on layers, see the + "Understanding and Creating Layers" + section of the Yocto Project Development Tasks Manual. +
++ Following are some brief details on these core components. + For additional information on how these components interact during + a build, see the + "Development Concepts" + section. +
+Copyright © 2010-2018 Linux Foundation
+ Permission is granted to copy, distribute and/or modify this document under + the terms of the + Creative Commons Attribution-Share Alike 2.0 UK: England & Wales as published by + Creative Commons. +
++ This version of the + Yocto Project Overview Manual + is for the 2.5 release of the + Yocto Project. + To be sure you have the latest version of the manual + for this release, use the manual from the + Yocto Project documentation page. +
+ For manuals associated with other releases of the Yocto + Project, go to the + Yocto Project documentation page + and use the drop-down "Active Releases" button + and choose the manual associated with the desired + Yocto Project. +
+ To report any inaccuracies or problems with this
+ manual, send an email to the Yocto Project
+ discussion group at
+ yocto@yoctoproject.org or log into
+ the freenode #yocto channel.
+
Revision History

| Revision | Date | Notes |
|---|---|---|
| 2.5 | April 2018 | The initial document released with the Yocto Project 2.5 Release. |
+ Welcome to the Yocto Project Overview Manual! + This manual introduces the Yocto Project by providing concepts, + software overviews, best-known-methods (BKMs), and any other + high-level introductory information suitable for a new Yocto + Project user. +
+ The following list describes what you can get from this manual: +
+ Major Topic: + Provide a high-level description of this major topic. +
+ Major Topic: + Provide a high-level description of this major topic. +
+ Major Topic: + Provide a high-level description of this major topic. +
+ Major Topic: + Provide a high-level description of this major topic. +
+
+ This manual does not give you the following: +
+ Step-by-step Instructions for Development Tasks: + Instructional procedures reside in other manuals within + the Yocto Project documentation set. + For example, the + Yocto Project Development Tasks Manual + provides examples on how to perform various development + tasks. + As another example, the + Yocto Project Application Development and the Extensible Software Development Kit (eSDK) + manual contains detailed instructions on how to install an + SDK, which is used to develop applications for target + hardware. +
+ Reference Material: + This type of material resides in an appropriate reference + manual. + For example, system variables are documented in the + Yocto Project Reference Manual. + As another example, the + Yocto Project Board Support Package (BSP) Developer's Guide + contains reference information on BSPs. +
+ Detailed Public Information Not Specific to the + Yocto Project: + For example, exhaustive information on how to use the + Source Control Manager Git is better covered with Internet + searches and official Git Documentation than through the + Yocto Project documentation. +
+
+ Because this manual presents information for many different
+ topics, supplemental information is recommended for full
+ comprehension.
+ For additional introductory information on the Yocto Project, see
+ the Yocto Project Website.
+ You can find an introduction to using the Yocto Project by working
+ through the
+ Yocto Project Quick Start.
+
+ For a comprehensive list of links and other documentation, see the + "Links and Related Documentation" + section in the Yocto Project Reference Manual. +
+ This chapter takes a look at the Yocto Project development
+ environment and also provides a detailed look at what goes on during
+ development in that environment.
+ The chapter provides Yocto Project development environment concepts that
+ help you understand how work is accomplished in an open source environment,
+ which is very different from work accomplished in a closed,
+ proprietary environment.
+
+ Specifically, this chapter addresses open source philosophy, workflows, + Git, source repositories, licensing, recipe syntax, and development + syntax. +
+ The Yocto Project is an open-source collaboration project whose + focus is for developers of embedded Linux systems. + Among other things, the Yocto Project uses an + OpenEmbedded build system. + The build system, which is based on the OpenEmbedded (OE) project and + uses the + BitBake tool, + constructs complete Linux images for architectures based on ARM, MIPS, + PowerPC, x86 and x86-64. +
+ The Yocto Project provides various ancillary tools for the embedded + developer and also features the Sato reference User Interface, which + is optimized for stylus-driven, low-resolution screens. +
+ Here are some highlights for the Yocto Project: +
+ Provides a recent Linux kernel along with a set of system + commands and libraries suitable for the embedded + environment. +
+ Makes available system components such as X11, GTK+, Qt, + Clutter, and SDL (among others) so you can create a rich user + experience on devices that have display hardware. + For devices that do not have a display or where you wish to + use alternative UI frameworks, these components need not be + installed. +
+ Creates a focused and stable core compatible with the + OpenEmbedded project with which you can easily and reliably + build and develop. +
+ Fully supports a wide range of hardware and device emulation + through the Quick EMUlator (QEMU). +
+ Provides a layer mechanism that allows you to easily extend + the system, make customizations, and keep them organized. +
+ You can use the Yocto Project to generate images for many kinds + of devices. + As mentioned earlier, the Yocto Project supports creation of + reference images that you can boot within and emulate using QEMU. + The standard example machines target QEMU full-system + emulation for 32-bit and 64-bit variants of x86, ARM, MIPS, and + PowerPC architectures. + Beyond emulation, you can use the layer mechanism to extend + support to just about any platform that Linux can run on and that + a toolchain can target. +
+ Another Yocto Project feature is the Sato reference User + Interface. + This optional UI that is based on GTK+ is intended for devices with + restricted screen sizes and is included as part of the + OpenEmbedded Core layer so that developers can test parts of the + software stack. +
+ While the Yocto Project does not provide a strict testing framework, + it does provide or generate for you artifacts that let you perform + target-level and emulated testing and debugging. + Additionally, if you are an + Eclipse™ IDE user, you can + install an Eclipse Yocto Plug-in to allow you to develop within that + familiar environment. +
+ By default, using the Yocto Project to build an image creates a Poky + distribution. + However, you can create your own distribution by providing key + Metadata. + A good example is Angstrom, which has had a distribution + based on the Yocto Project since its inception. + Other examples include commercial distributions like + Wind River Linux, + Mentor Embedded Linux, + ENEA Linux + and others. + See the "Creating Your Own Distribution" + section in the Yocto Project Development Tasks Manual for more + information. +
+ Open source philosophy is characterized by software development + directed by peer production and collaboration through an active + community of developers. + Contrast this to the more standard centralized development models + used by commercial software companies where a finite set of developers + produces a product for sale using a defined set of procedures that + ultimately result in an end product whose architecture and source + material are closed to the public. +
+ Open source projects conceptually have differing concurrent agendas, + approaches, and production. + These facets of the development process can come from anyone in the + public (community) that has a stake in the software project. + The open source environment contains new copyright, licensing, domain, + and consumer issues that differ from the more traditional development + environment. + In an open source environment, the end product, source material, + and documentation are all available to the public at no cost. +
+ A benchmark example of an open source project is the Linux kernel, + which was initially conceived and created by Finnish computer science + student Linus Torvalds in 1991. + Conversely, a good example of a non-open source project is the + Windows® family of operating + systems developed by + Microsoft® Corporation. +
+ Wikipedia has a good historical description of the Open Source + Philosophy + here. + You can also find helpful information on how to participate in the + Linux Community + here. +
+ This section provides workflow concepts using the Yocto Project and + Git. + In particular, the information covers basic practices that describe + roles and actions in a collaborative development environment. +
+
+ The Yocto Project files are maintained using Git in "master"
+ branches whose Git histories track every change and whose structure
+ provides branches for all diverging functionality.
+ Although there is no need to use Git, many open source projects do so.
+
+ +
+ For the Yocto Project, a key individual called the "maintainer" is + responsible for the "master" branch of a given Git repository. + The "master" branch is the “upstream” repository from which final or + most recent builds of the project occur. + The maintainer is responsible for accepting changes from other + developers and for organizing the underlying branch structure to + reflect release strategies and so forth. +
+
+ The Yocto Project poky Git repository also has an
+ upstream contribution Git repository named
+ poky-contrib.
+ You can see all the branches in this repository using the web interface
+ of the
+ Source Repositories organized
+ within the "Poky Support" area.
+ These branches temporarily hold changes to the project that have been
+ submitted or committed by the Yocto Project development team and by
+ community members who contribute to the project.
+ The maintainer determines if the changes are qualified to be moved
+ from the "contrib" branches into the "master" branch of the Git
+ repository.
+
+ Developers (including contributing community members) create and + maintain cloned repositories of the upstream "master" branch. + The cloned repositories are local to their development platforms and + are used to develop changes. + When a developer is satisfied with a particular feature or change, + they "push" the changes to the appropriate "contrib" repository. +
+ Developers are responsible for keeping their local repository + up-to-date with "master". + They are also responsible for straightening out any conflicts that + might arise within files that are being worked on simultaneously by + more than one person. + All this work is done locally on the developer’s machine before + anything is pushed to a "contrib" area and examined at the maintainer’s + level. +
+ A somewhat formal method exists by which developers commit changes + and push them into the "contrib" area and subsequently request that + the maintainer include them into "master". + This process is called “submitting a patch” or "submitting a change." + For information on submitting patches and changes, see the + "Submitting a Change to the Yocto Project" + section in the Yocto Project Development Tasks Manual. +
+ To summarize the development workflow: a single point of entry + exists for changes into the project’s "master" branch of the + Git repository, which is controlled by the project’s maintainer. + And, a set of developers exist who independently develop, test, and + submit changes to "contrib" areas for the maintainer to examine. + The maintainer then chooses which changes are going to become a + permanent part of the project. +
+
+
+ While each development environment is unique, there are some best + practices or methods that help development run smoothly. + The following list describes some of these practices. + For more information about Git workflows, see the workflow topics in + the + Git Community Book. +
+ Make Small Changes: + It is best to keep the changes you commit small as compared to + bundling many disparate changes into a single commit. + This practice not only keeps things manageable but also allows + the maintainer to more easily include or refuse changes.
It is also good practice to leave the repository in a + state that allows you to still successfully build your project. + In other words, do not commit half of a feature, + then add the other half as a separate, later commit. + Each commit should take you from one buildable project state + to another buildable state. +
+ Use Branches Liberally: + It is very easy to create, use, and delete local branches in + your working Git repository. + You can name these branches anything you like. + It is helpful to give them names associated with the particular + feature or change on which you are working. + Once you are done with a feature or change and have merged it + into your local master branch, simply discard the temporary + branch. +
+ Merge Changes:
+ The git merge command allows you to take
+ the changes from one branch and fold them into another branch.
+ This process is especially helpful when more than a single
+ developer might be working on different parts of the same
+ feature.
+ Merging changes also automatically identifies any collisions
+ or "conflicts" that might happen as a result of the same lines
+ of code being altered by two different developers.
+
+ Manage Branches: + Because branches are easy to use, you should use a system + where branches indicate varying levels of code readiness. + For example, you can have a "work" branch to develop in, a + "test" branch where the code or change is tested, a "stage" + branch where changes are ready to be committed, and so forth. + As your project develops, you can merge code across the + branches to reflect ever-increasing stable states of the + development. +
+ Use Push and Pull:
+ The push-pull workflow is based on the concept of developers
+ "pushing" local commits to a remote repository, which is
+ usually a contribution repository.
+ This workflow is also based on developers "pulling" known
+ states of the project down into their local development
+ repositories.
+ The workflow easily allows you to pull changes submitted by
+ other developers from the upstream repository into your
+ work area ensuring that you have the most recent software
+ on which to develop.
+ The Yocto Project has two scripts named
+ create-pull-request and
+ send-pull-request that ship with the
+ release to facilitate this workflow.
+ You can find these scripts in the scripts
+ folder of the
+ Source Directory.
+ For information on how to use these scripts, see the
+ "Using Scripts to Push a Change Upstream and Request a Pull"
+ section in the Yocto Project Development Tasks Manual.
+
+ Patch Workflow:
+ This workflow allows you to notify the maintainer through an
+ email that you have a change (or patch) you would like
+ considered for the "master" branch of the Git repository.
+ To send this type of change, you format the patch and then
+ send the email using the Git commands
+ git format-patch and
+ git send-email.
+ For information on how to use these scripts, see the
+ "Submitting a Change to the Yocto Project"
+ section in the Yocto Project Development Tasks Manual.
+
+
+ The Yocto Project makes extensive use of Git, which is a + free, open source distributed version control system. + Git supports distributed development, non-linear development, + and can handle large projects. + It is best that you have some fundamental understanding + of how Git tracks projects and how to work with Git if + you are going to use the Yocto Project for development. + This section provides a quick overview of how Git works and + provides you with a summary of some essential Git commands. +
+ For more information on Git, see + http://git-scm.com/documentation. +
+ If you need to download Git, it is recommended that you add + Git to your system through your distribution's "software + store" (e.g. for Ubuntu, use the Ubuntu Software feature). + For the Git download page, see + http://git-scm.com/download. +
+ For examples beyond the limited few in this section on how + to use Git with the Yocto Project, see the + "Working With Yocto Project Source Files" + section in the Yocto Project Development Tasks Manual. +
+
+ As mentioned briefly in the previous section and also in the + "Workflows" section, + the Yocto Project maintains source repositories at + http://git.yoctoproject.org/cgit.cgi. + If you look at this web-interface of the repositories, each item + is a separate Git repository. +
+ Git repositories use branching techniques that track content + change (not files) within a project (e.g. a new feature or updated + documentation). + Creating a tree-like structure based on project divergence allows + for excellent historical information over the life of a project. + This methodology also allows for an environment from which you can + do lots of local experimentation on projects as you develop + changes or new features. +
+ A Git repository represents all development efforts for a given
+ project.
+ For example, the Git repository poky contains
+ all changes and developments for Poky over the course of its
+ entire life.
+ That means that all changes that make up all releases are captured.
+ The repository maintains a complete history of changes.
+
+ You can create a local copy of any repository by "cloning" it
+ with the git clone command.
+ When you clone a Git repository, you end up with an identical
+ copy of the repository on your development system.
+ Once you have a local copy of a repository, you can take steps to
+ develop locally.
+ For examples on how to clone Git repositories, see the
+ "Working With Yocto Project Source Files"
+ section in the Yocto Project Development Tasks Manual.
+
+ It is important to understand that Git tracks content change and
+ not files.
+ Git uses "branches" to organize different development efforts.
+ For example, the poky repository has
+ several branches that include the current "sumo"
+ branch, the "master" branch, and many branches for past
+ Yocto Project releases.
+ You can see all the branches by going to
+ http://git.yoctoproject.org/cgit.cgi/poky/ and
+ clicking on the
+ [...]
+ link beneath the "Branch" heading.
+
+ Each of these branches represents a specific area of development. + The "master" branch represents the current or most recent + development. + All other branches represent offshoots of the "master" branch. +
+ When you create a local copy of a Git repository, the copy has
+ the same set of branches as the original.
+ This means you can use Git to create a local working area
+ (also called a branch) that tracks a specific development branch
+ from the upstream source Git repository.
+ In other words, you can define your local Git environment to
+ work on any development branch in the repository.
+ To help illustrate, consider the following example Git commands:
+
+ $ cd ~ + $ git clone git://git.yoctoproject.org/poky + $ cd poky + $ git checkout -b sumo origin/sumo +
+ In the previous example after moving to the home directory, the
+ git clone command creates a
+ local copy of the upstream poky Git repository.
+ By default, Git checks out the "master" branch for your work.
+ After changing the working directory to the new local repository
+ (i.e. poky), the
+ git checkout command creates
+ and checks out a local branch named "sumo", which
+ tracks the upstream "origin/sumo" branch.
+ Changes you make while in this branch would ultimately affect
+ the upstream "sumo" branch of the
+ poky repository.
+
+ It is important to understand that when you create and check out a
+ local working branch based on a branch name,
+ your local environment matches the "tip" of that particular
+ development branch at the time you created your local branch,
+ which could be different from the files in the "master" branch
+ of the upstream repository.
+ In other words, creating and checking out a local branch based on
+ the "sumo" branch name is not the same as
+ cloning and checking out the "master" branch of the repository.
+ Keep reading to see how you create a local snapshot of a Yocto
+ Project Release.
+
+ Git uses "tags" to mark specific changes in a repository.
+ Typically, a tag is used to mark a special point such as the final
+ change before a project is released.
+ You can see the tags used with the poky Git
+ repository by going to
+ http://git.yoctoproject.org/cgit.cgi/poky/ and
+ clicking on the
+ [...]
+ link beneath the "Tag" heading.
+
+ Some key tags for the poky repository are
+ jethro-14.0.3,
+ morty-16.0.1,
+ pyro-17.0.0, and
+ sumo-20.0.0.
+ These tags represent Yocto Project releases.
+
+ When you create a local copy of the Git repository, you also + have access to all the tags in the upstream repository. + Similar to branches, you can create and checkout a local working + Git branch based on a tag name. + When you do this, you get a snapshot of the Git repository that + reflects the state of the files when the change was made associated + with that tag. + The most common use is to checkout a working branch that matches + a specific Yocto Project release. + Here is an example: +
+ $ cd ~ + $ git clone git://git.yoctoproject.org/poky + $ cd poky + $ git fetch --all --tags --prune + $ git checkout tags/pyro-17.0.0 -b my-pyro-17.0.0 +
+ In this example, the name of the top-level directory of your
+ local Yocto Project repository is poky.
+ After moving to the poky directory, the
+ git fetch command makes all the upstream
+ tags available locally in your repository.
+ Finally, the git checkout command
+ creates and checks out a branch named "my-pyro-17.0.0" that is
+ based on the specific change upstream in the repository
+ associated with the "pyro-17.0.0" tag.
+ The files in your repository now exactly match that particular
+ Yocto Project release as it is tagged in the upstream Git
+ repository.
+ It is important to understand that when you create and
+ checkout a local working branch based on a tag, your environment
+ matches a specific point in time and not the entire development
+ branch (i.e. the "tip" of the branch).
+
+ Git has an extensive set of commands that lets you manage changes + and perform collaboration over the life of a project. + Conveniently though, you can manage with a small set of basic + operations and workflows once you understand the basic + philosophy behind Git. + You do not have to be an expert in Git to be functional. + A good place to look for instruction on a minimal set of Git + commands is + here. +
+ If you do not know much about Git, you should educate + yourself by visiting the links previously mentioned. +
+ The following list of Git commands briefly describes some basic + Git operations as a way to get started. + As with any set of commands, this list (in most cases) simply shows + the base command and omits the many arguments they support. + See the Git documentation for complete descriptions and strategies + on how to use these commands: +
+ git init:
+ Initializes an empty Git repository.
+ You cannot use Git commands unless you have a
+ .git repository.
+
+ git clone:
+ Creates a local clone of a Git repository that is on
+ equal footing with a fellow developer’s Git repository
+ or an upstream repository.
+
+ git add:
+ Locally stages updated file contents to the index that
+ Git uses to track changes.
+ You must stage all files that have changed before you
+ can commit them.
+
+ git commit:
+ Creates a local "commit" that documents the changes you
+ made.
+ Only changes that have been staged can be committed.
+ Commits are used for historical purposes, for determining
+ if a maintainer of a project will allow the change,
+ and for ultimately pushing the change from your local
+ Git repository into the project’s upstream repository.
+
+ git status:
+ Reports any modified files that possibly need to be
+ staged and gives you a status of where you stand regarding
+ local commits as compared to the upstream repository.
+
+ git checkout branch-name:
+ Changes your working branch.
+ This command is analogous to "cd".
+
git checkout -b working-branch:
+ Creates and checks out a working branch on your local
+ machine that you can use to isolate your work.
+ It is a good idea to use local branches when adding
+ specific features or changes.
+ Using isolated branches facilitates easy removal of
+ changes if they do not work out.
+
git branch:
+ Displays the existing local branches associated with your
+ local repository.
+ The branch that you have currently checked out is noted
+ with an asterisk character.
+
+ git branch -D branch-name:
+ Deletes an existing local branch.
+ You need to be in a local branch other than the one you
+ are deleting in order to delete
+ branch-name.
+
+ git pull:
+ Retrieves information from an upstream Git repository
+ and places it in your local Git repository.
+ You use this command to make sure you are synchronized with
+ the repository from which you are basing changes
+ (e.g. the "master" branch).
+
+ git push:
+ Sends all your committed local changes to the upstream Git
+ repository that your local repository is tracking
+ (e.g. a contribution repository).
+ The maintainer of the project draws from these repositories
+ to merge changes (commits) into the appropriate branch
+ of the project's upstream repository.
+
+ git merge:
+ Combines or adds changes from one
+ local branch of your repository with another branch.
+ When you create a local Git repository, the default branch
+ is named "master".
+ A typical workflow is to create a temporary branch that is
+ based off "master" that you would use for isolated work.
+ You would make your changes in that isolated branch,
+ stage and commit them locally, switch to the "master"
+ branch, and then use the git merge
+ command to apply the changes from your isolated branch
+ into the currently checked out branch (e.g. "master").
+ After the merge is complete and if you are done with
+ working in that isolated branch, you can safely delete
+ the isolated branch.
+
+ git cherry-pick:
+ Choose and apply specific commits from one branch
+ into another branch.
+ There are times when you might not be able to merge
+ all the changes in one branch with
+ another but need to pick out certain ones.
+
+ gitk:
+ Provides a GUI view of the branches and changes in your
+ local Git repository.
+ This command is a good way to graphically see where things
+ have diverged in your local repository.
+
+ Note that you must install the gitk
+ package on your development system to use this
+ command.
+ +
+ git log:
+ Reports a history of your commits to the repository.
+ This report lists all commits regardless of whether you
+ have pushed them upstream or not.
+
+ git diff:
+ Displays line-by-line differences between a local
+ working file and the same file as understood by Git.
+ This command is useful to see what you have changed
+ in any given file.
+
+
+ The Yocto Project team maintains complete source repositories for all + Yocto Project files at + http://git.yoctoproject.org/cgit/cgit.cgi. + This web-based source code browser is organized into categories by + function such as IDE Plugins, Matchbox, Poky, Yocto Linux Kernel, and + so forth. + From the interface, you can click on any particular item in the "Name" + column and see the URL at the bottom of the page that you need to clone + a Git repository for that particular item. + Having a local Git repository of the + Source Directory, + which is usually named "poky", allows + you to make changes, contribute to the history, and ultimately enhance + the Yocto Project's tools, Board Support Packages, and so forth. +
+ For any supported release of Yocto Project, you can also go to the
+ Yocto Project Website and
+ select the "Downloads" tab and get a released tarball of the
+ poky repository or any supported BSP tarballs.
+ Unpacking these tarballs gives you a snapshot of the released
+ files.
+
+ The recommended method for setting up the Yocto Project
+ Source Directory
+ and the files for supported BSPs
+ (e.g., meta-intel) is to use
+ Git to create a local copy of
+ the upstream repositories.
+
+ Be sure to always work in matching branches for both
+ the selected BSP repository and the
+ Source Directory
+ (i.e. poky) repository.
+ For example, if you have checked out the "master" branch
+ of poky and you are going to use
+ meta-intel, be sure to checkout the
+ "master" branch of meta-intel.
+
+
+ In summary, here is where you can get the project files needed for + development: +
+ + Source Repositories: + + This area contains IDE Plugins, Matchbox, Poky, Poky Support, + Tools, Yocto Linux Kernel, and Yocto Metadata Layers. + You can create local copies of Git repositories for each of + these areas.
+
+ For steps on how to view and access these upstream Git + repositories, see the + "Accessing Source Repositories" + Section in the Yocto Project Development Tasks Manual. +
+ + Index of /releases: + + This is an index of releases such as + the Eclipse™ + Yocto Plug-in, miscellaneous support, Poky, Pseudo, installers + for cross-development toolchains, and all released versions of + Yocto Project in the form of images or tarballs. + Downloading and extracting these files does not produce a local + copy of the Git repository but rather a snapshot of a + particular release or image.
+
+ For steps on how to view and access these files, see the + "Accessing Index of Releases" + section in the Yocto Project Development Tasks Manual. +
+ "Downloads" page for the + Yocto Project Website: +
This section will change due to + reworking of the YP Website.
The Yocto Project website includes a "Downloads" tab + that allows you to download any Yocto Project + release and Board Support Package (BSP) in tarball form. + The tarballs are similar to those found in the + Index of /releases: area.
+
+ For steps on how to use the "Downloads" page, see the + "Using the Downloads Page" + section in the Yocto Project Development Tasks Manual. +
+
+ Because open source projects are open to the public, they have + different licensing structures in place. + License evolution for both Open Source and Free Software has an + interesting history. + If you are interested in this history, you can find basic information + here: +
+
+ In general, the Yocto Project is broadly licensed under the + Massachusetts Institute of Technology (MIT) License. + MIT licensing permits the reuse of software within proprietary + software as long as the license is distributed with that software. + MIT is also compatible with the GNU General Public License (GPL). + Patches to the Yocto Project follow the upstream licensing scheme. + You can find information on the MIT license + here. + You can find information on the GNU GPL + here. +
+ When you build an image using the Yocto Project, the build process
+ uses a known list of licenses to ensure compliance.
+ You can find this list in the
+ Source Directory
+ at meta/files/common-licenses.
+ Once the build completes, the list of all licenses found and used
+ during that build is kept in the
+ Build Directory
+ at tmp/deploy/licenses.
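+
+ For example, after a build you could list the license material
+ collected for the build's recipes directly from that location:
+
+     $ ls tmp/deploy/licenses/
+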
+
+ If a module requires a license that is not in the base list, the + build process generates a warning during the build. + These tools make it easier for a developer to be certain of the + licenses with which their shipped products must comply. + However, even with these tools it is still up to the developer to + resolve potential licensing issues. +
+ The base list of licenses used by the build process is a combination + of the Software Package Data Exchange (SPDX) list and the Open + Source Initiative (OSI) projects. + SPDX Group is a working group of + the Linux Foundation that maintains a specification for a standard + format for communicating the components, licenses, and copyrights + associated with a software package. + OSI is a corporation + dedicated to the Open Source Definition and the effort for reviewing + and approving licenses that conform to the Open Source Definition + (OSD). +
+ You can find a list of the combined SPDX and OSI licenses that the
+ Yocto Project uses in the
+ meta/files/common-licenses directory in your
+ Source Directory.
+
+ For information that can help you maintain compliance with various + open source licensing during the lifecycle of a product created using + the Yocto Project, see the + "Maintaining Open Source License Compliance During Your Product's Lifecycle" + section in the Yocto Project Development Tasks Manual. +
+ Understanding recipe file syntax is important for + writing recipes. + The following list overviews the basic items that make up a + BitBake recipe file. + For more complete BitBake syntax descriptions, see the + "Syntax and Operators" + chapter of the BitBake User Manual. +
Variable Assignments and Manipulations: + Variable assignments allow a value to be assigned to a + variable. + The assignment can be static text or might include + the contents of other variables. + In addition to the assignment, appending and prepending + operations are also supported.
The following example shows some of the ways + you can use variables in recipes: +
+ S = "${WORKDIR}/postfix-${PV}"
+ CFLAGS += "-DNO_ASM"
+ SRC_URI_append = " file://fixup.patch"
+ +
Functions:
+ Functions provide a series of actions to be performed.
+ You usually use functions to override the default
+ implementation of a task function or to complement
+ a default function (i.e. append or prepend to an
+ existing function).
+ Standard functions use sh shell
+ syntax, although access to OpenEmbedded variables and
+ internal methods is also available.
The following is an example function from the
+ sed recipe:
+
+ do_install () {
+ autotools_do_install
+ install -d ${D}${base_bindir}
+ mv ${D}${bindir}/sed ${D}${base_bindir}/sed
+ rmdir ${D}${bindir}/
+ }
+ + It is also possible to implement new functions that + are called between existing tasks as long as the + new functions are not replacing or complementing the + default functions. + You can implement functions in Python + instead of shell. + Both of these options are not seen in the majority of + recipes.
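+ For illustration, here is a minimal sketch of a Python-style
+ function wired in as a new task (the task name and message are
+ hypothetical; addtask is the BitBake keyword that schedules it):
+
+     # Hypothetical example task; prints a warning during the build
+     python do_display_banner() {
+         bb.warn("This recipe is only a demonstration")
+     }
+     addtask display_banner before do_build
+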
Keywords:
+ BitBake recipes use only a few keywords.
+ You use keywords to include common
+ functions (inherit), load parts
+ of a recipe from other files
+ (include and
+ require) and export variables
+ to the environment (export).
The following example shows the use of some of + these keywords: +
+ export POSTCONF = "${STAGING_BINDIR}/postconf"
+ inherit autotools
+ require otherfile.inc
+ +
Comments:
+ Any lines that begin with the hash character
+ (#) are treated as comment lines
+ and are ignored:
+
+ # This is a comment +
+
+
+ This next list summarizes the most important and most commonly + used parts of the recipe syntax. + For more information on these parts of the syntax, you can + reference the + Syntax and Operators + chapter in the BitBake User Manual. +
Line Continuation: \ -
+ Use the backward slash (\)
+ character to split a statement over multiple lines.
+ Place the slash character at the end of the line that
+ is to be continued on the next line:
+
+ VAR = "A really long \ + line" +
+
+
+ Using Variables: ${...} -
+ Use the ${VARNAME} syntax to
+ access the contents of a variable:
+ SRC_URI = "${SOURCEFORGE_MIRROR}/libpng/zlib-${PV}.tar.gz"
+ +
+ If you want a variable's value to be expanded at the time of
+ assignment, you can use the
+ := operator instead of
+ = when you make the
+ assignment, but this is not generally needed.
+ +
Quote All Assignments: "value" -
+ Use double quotes around the value in all variable
+ assignments.
+ VAR1 = "${OTHERVAR}"
+ VAR2 = "The version is ${PV}"
+ +
Conditional Assignment: ?= -
+ Conditional assignment is used to assign a value to
+ a variable, but only when the variable is currently
+ unset.
+ Use the question mark followed by the equal sign
+ (?=) to make a "soft" assignment
+ used for conditional assignment.
+ Typically, "soft" assignments are used in the
+ local.conf file for variables
+ that are allowed to come through from the external
+ environment.
+
Here is an example where
+ VAR1 is set to "New value" if
+ it is currently empty.
+ However, if VAR1 has already been
+ set, it remains unchanged:
+
+ VAR1 ?= "New value" +
+ In this next example, VAR1
+ is left with the value "Original value":
+
+ VAR1 = "Original value" + VAR1 ?= "New value" +
+
Appending: += -
+ Use the plus character followed by the equals sign
+ (+=) to append values to existing
+ variables.
+
Here is an example: +
+ SRC_URI += "file://fix-makefile.patch" +
+
Prepending: =+ -
+ Use the equals sign followed by the plus character
+ (=+) to prepend values to existing
+ variables.
+
Here is an example: +
+ VAR =+ "Starts" +
+
Appending: _append -
+ Use the _append operator to
+ append values to existing variables.
+ This operator does not add any additional space.
+ Also, the operator is applied after all the
+ +=, and
+ =+ operators have been applied and
+ after all = assignments have
+ occurred.
+
The following example shows the space being explicitly
added to the start to ensure the appended value is not
merged with the existing value:
SRC_URI_append = " file://fix-makefile.patch"
+ You can also use the _append
+ operator with overrides, which results in the actions
+ only being performed for the specified target or
+ machine:
+
SRC_URI_append_sh4 = " file://fix-makefile.patch"
+
Prepending: _prepend -
+ Use the _prepend operator to
+ prepend values to existing variables.
+ This operator does not add any additional space.
+ Also, the operator is applied after all the
+= and
+ =+ operators have been applied and
+ after all = assignments have
+ occurred.
+
The following example shows the space being explicitly
added to the end to ensure the prepended value is not
merged with the existing value:
+ CFLAGS_prepend = "-I${S}/myincludes "
+
+ You can also use the _prepend
+ operator with overrides, which results in the actions
+ only being performed for the specified target or
+ machine:
+
+ CFLAGS_prepend_sh4 = "-I${S}/myincludes "
+ +
Overrides: -
+ You can use overrides to set a value conditionally,
+ typically based on how the recipe is being built.
+ For example, to set the
+ KBRANCH
+ variable's value to "standard/base" for any target
+ MACHINE,
+ except for qemuarm where it should be set to
+ "standard/arm-versatile-926ejs", you would do the
+ following:
+
+ KBRANCH = "standard/base" + KBRANCH_qemuarm = "standard/arm-versatile-926ejs" +
+ Overrides are also used to separate alternate values
+ of a variable in other situations.
+ For example, when setting variables such as
+ FILES
+ and
+ RDEPENDS
+ that are specific to individual packages produced by
+ a recipe, you should always use an override that
+ specifies the name of the package.
+
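For illustration, package-specific settings use the package name as
part of the override; in the following hedged sketch the paths and
package choices are hypothetical, and ${PN}-dev is one of the
default packages produced by a recipe:
FILES_${PN}-dev += "${datadir}/${PN}/examples"
RDEPENDS_${PN} += "bash"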
Indentation:
Use spaces for indentation rather than tabs.
For shell functions, both currently work.
However, it is a policy decision of the Yocto Project to use
tabs in shell functions.
Realize that some layers have a policy to use spaces for all
indentation.
Using Python for Complex Operations: ${@python_code} -
For more advanced processing, it is possible to use
Python code during variable assignments (e.g.
search and replacement on a variable).
You indicate Python code using the
${@python_code}
syntax for the variable assignment:
SRC_URI = "ftp://ftp.info-zip.org/pub/infozip/src/zip${@d.getVar('PV',1).replace('.', '')}.tgz"
+ +
Shell Function Syntax:
+ Write shell functions as if you were writing a shell
+ script when you describe a list of actions to take.
+ You should ensure that your script works with a generic
+ sh and that it does not require
+ any bash or other shell-specific
+ functionality.
+ The same considerations apply to various system
+ utilities (e.g. sed,
+ grep, awk,
+ and so forth) that you might wish to use.
+ If in doubt, you should check with multiple
+ implementations - including those from BusyBox.
+
+
This section takes a more detailed look inside the development
process.
The following diagram represents development at a high level.
The remainder of this chapter expands on the fundamental input,
output, process, and Metadata blocks that make up development
in the Yocto Project environment.
+
+ In general, development consists of several functional areas: +
User Configuration: + Metadata you can use to control the build process. +
Metadata Layers: + Various layers that provide software, machine, and + distro Metadata.
Source Files: + Upstream releases, local projects, and SCMs.
Build System: + Processes under the control of + BitBake. + This block expands on how BitBake fetches source, applies + patches, completes compilation, analyzes output for package + generation, creates and tests packages, generates images, and + generates cross-development tools.
Package Feeds:
Directories containing output packages (RPM, DEB, or IPK)
produced by the build system, which are subsequently used in
the construction of an image or SDK.
These feeds can also be copied and shared using a web server or
other means to facilitate extending or updating existing
images on devices at runtime if runtime package management is
enabled.
Images: + Images produced by the development process. +
Application Development SDK: + Cross-development tools that are produced along with an image + or separately with BitBake.
+
+ User configuration helps define the build. + Through user configuration, you can tell BitBake the + target architecture for which you are building the image, + where to store downloaded source, and other build properties. +
+ The following figure shows an expanded representation of the + "User Configuration" box of the + general Yocto Project Development Environment figure: +
+
+
+ BitBake needs some basic configuration files in order to complete
+ a build.
+ These files are *.conf files.
+ The minimally necessary ones reside as example files in the
+ Source Directory.
+ For simplicity, this section refers to the Source Directory as
+ the "Poky Directory."
+
+ When you clone the poky Git repository or you
+ download and unpack a Yocto Project release, you can set up the
+ Source Directory to be named anything you want.
+ For this discussion, the cloned repository uses the default
+ name poky.
+
+
+ The meta-poky layer inside Poky contains
+ a conf directory that has example
+ configuration files.
+ These example files are used as a basis for creating actual
+ configuration files when you source the build environment
+ script
+ (i.e.
+ oe-init-build-env).
+
+ Sourcing the build environment script creates a
+ Build Directory
+ if one does not already exist.
+ BitBake uses the Build Directory for all its work during builds.
+ The Build Directory has a conf directory that
+ contains default versions of your local.conf
+ and bblayers.conf configuration files.
+ These default configuration files are created only if versions
+ do not already exist in the Build Directory at the time you
+ source the build environment setup script.
+
+ Because the Poky repository is fundamentally an aggregation of
+ existing repositories, some users might be familiar with running
+ the oe-init-build-env script in the context
+ of separate OpenEmbedded-Core and BitBake repositories rather than a
+ single Poky repository.
+ This discussion assumes the script is executed from within a cloned
+ or unpacked version of Poky.
+
+ Depending on where the script is sourced, different sub-scripts
+ are called to set up the Build Directory (Yocto or OpenEmbedded).
+ Specifically, the script
+ scripts/oe-setup-builddir inside the
+ poky directory sets up the Build Directory and seeds the directory
+ (if necessary) with configuration files appropriate for the
+ Yocto Project development environment.
+
The scripts/oe-setup-builddir script
uses the $TEMPLATECONF variable to
determine which sample configuration files to locate.
+ +
+ The local.conf file provides many
+ basic variables that define a build environment.
+ Here is a list of a few.
+ To see the default configurations in a local.conf
+ file created by the build environment script, see the
+ local.conf.sample in the
+ meta-poky layer:
+
Parallelism Options:
+ Controlled by the
+ BB_NUMBER_THREADS,
+ PARALLEL_MAKE,
+ and
+ BB_NUMBER_PARSE_THREADS
+ variables.
Target Machine Selection:
+ Controlled by the
+ MACHINE
+ variable.
Download Directory:
+ Controlled by the
+ DL_DIR
+ variable.
Shared State Directory:
+ Controlled by the
+ SSTATE_DIR
+ variable.
Build Output:
+ Controlled by the
+ TMPDIR
+ variable.
+
Settings in the conf/local.conf
file can also be set in the
conf/site.conf and
conf/auto.conf configuration files.
+ +
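As a point of reference, these variables might appear in a
local.conf file roughly as follows; the values shown here are
illustrative only, not defaults:
MACHINE ?= "qemux86"
BB_NUMBER_THREADS ?= "4"
PARALLEL_MAKE ?= "-j 4"
DL_DIR ?= "${TOPDIR}/downloads"
SSTATE_DIR ?= "${TOPDIR}/sstate-cache"
TMPDIR = "${TOPDIR}/tmp"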
+ The bblayers.conf file tells BitBake what
+ layers you want considered during the build.
+ By default, the layers listed in this file include layers
+ minimally needed by the build system.
+ However, you must manually add any custom layers you have created.
+ You can find more information on working with the
+ bblayers.conf file in the
+ "Enabling Your Layer"
+ section in the Yocto Project Development Tasks Manual.
+
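For example, a bblayers.conf file that adds one custom layer might
look like the following sketch; the paths and the meta-mylayer
name are placeholders:
BBLAYERS ?= " \
  /home/user/poky/meta \
  /home/user/poky/meta-poky \
  /home/user/poky/meta-yocto-bsp \
  /home/user/poky/meta-mylayer \
  "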
+ The files site.conf and
+ auto.conf are not created by the environment
+ initialization script.
+ If you want the site.conf file, you need to
+ create that yourself.
+ The auto.conf file is typically created by
+ an autobuilder:
+
site.conf:
+ You can use the conf/site.conf
+ configuration file to configure multiple build directories.
+ For example, suppose you had several build environments and
+ they shared some common features.
+ You can set these default build properties here.
+ A good example is perhaps the packaging format to use
+ through the
+ PACKAGE_CLASSES
+ variable.
One useful scenario for using the
+ conf/site.conf file is to extend your
+ BBPATH
+ variable to include the path to a
+ conf/site.conf.
+ Then, when BitBake looks for Metadata using
+ BBPATH, it finds the
+ conf/site.conf file and applies your
+ common configurations found in the file.
+ To override configurations in a particular build directory,
+ alter the similar configurations within that build
+ directory's conf/local.conf file.
+
auto.conf:
+ The file is usually created and written to by
+ an autobuilder.
+ The settings put into the file are typically the same as
+ you would find in the conf/local.conf
+ or the conf/site.conf files.
+
+
You can edit all configuration files to further define any
particular build environment.
This process is represented by the "User Configuration Edits"
box in the figure.
+ When you launch your build with the
+ bitbake
+ command, BitBake sorts out the configurations to ultimately
+ define your build environment.
+ It is important to understand that the OpenEmbedded build system
+ reads the configuration files in a specific order:
site.conf, auto.conf,
+ and local.conf.
+ And, the build system applies the normal assignment statement
+ rules.
+ Because the files are parsed in a specific order, variable
+ assignments for the same variable could be affected.
+ For example, if the auto.conf file and
the local.conf file set
+ variable1 to different values, because
+ the build system parses local.conf after
+ auto.conf,
+ variable1 is assigned the value from
+ the local.conf file.
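As a small illustration of that ordering, assume both files use
plain assignments; the value from local.conf wins because that
file is parsed last:
# conf/auto.conf (parsed before local.conf)
variable1 = "value-from-auto"

# conf/local.conf (parsed last, so this assignment takes effect)
variable1 = "value-from-local"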
+
The previous section described the user configurations that
define BitBake's global behavior.
This section takes a closer look at the layers the build system
uses to further control the build.
These layers provide Metadata for the software, machine, and
policy.
+ In general, three types of layer input exist: +
Policy Configuration: + Distribution Layers provide top-level or general + policies for the image or SDK being built. + For example, this layer would dictate whether BitBake + produces RPM or IPK packages.
Machine Configuration: + Board Support Package (BSP) layers provide machine + configurations. + This type of information is specific to a particular + target architecture.
Metadata: + Software layers contain user-supplied recipe files, + patches, and append files. +
+
+ The following figure shows an expanded representation of the + Metadata, Machine Configuration, and Policy Configuration input + (layers) boxes of the + general Yocto Project Development Environment figure: +
+
+
+ In general, all layers have a similar structure.
+ They all contain a licensing file
+ (e.g. COPYING) if the layer is to be
+ distributed, a README file as good practice
+ and especially if the layer is to be distributed, a
+ configuration directory, and recipe directories.
+
+ The Yocto Project has many layers that can be used. + You can see a web-interface listing of them on the + Source Repositories + page. + The layers are shown at the bottom categorized under + "Yocto Metadata Layers." + These layers are fundamentally a subset of the + OpenEmbedded Metadata Index, + which lists all layers provided by the OpenEmbedded community. +
+
+ BitBake uses the conf/bblayers.conf file,
+ which is part of the user configuration, to find what layers it
+ should be using as part of the build.
+
+ For more information on layers, see the + "Understanding and Creating Layers" + section in the Yocto Project Development Tasks Manual. +
+ The distribution layer provides policy configurations for your
+ distribution.
+ Best practices dictate that you isolate these types of
+ configurations into their own layer.
Settings you provide in
conf/distro/distro.conf override similar
settings that BitBake finds in your
conf/local.conf file in the Build
Directory.
+
+ The following list provides some explanation and references + for what you typically find in the distribution layer: +
classes:
+ Class files (.bbclass) hold
+ common functionality that can be shared among
+ recipes in the distribution.
+ When your recipes inherit a class, they take on the
+ settings and functions for that class.
+ You can read more about class files in the
+ "Classes"
+ section of the Yocto Reference Manual.
+
conf:
This area holds configuration files for the
layer (conf/layer.conf),
the distribution
(conf/distro/distro.conf),
and any distribution-wide include files.
recipes-*: + Recipes and append files that affect common + functionality across the distribution. + This area could include recipes and append files + to add distribution-specific configuration, + initialization scripts, custom image recipes, + and so forth.
+
+ The BSP Layer provides machine configurations. + Everything in this layer is specific to the machine for which + you are building the image or the SDK. + A common structure or form is defined for BSP layers. + You can learn more about this structure in the + Yocto Project Board Support Package (BSP) Developer's Guide. +
+
The BSP Layer's configuration directory contains
configuration files for the machine
(conf/machine/machine.conf) and,
of course, the layer (conf/layer.conf).
+
+ The remainder of the layer is dedicated to specific recipes
+ by function: recipes-bsp,
+ recipes-core,
+ recipes-graphics, and
+ recipes-kernel.
+ Metadata can exist for multiple formfactors, graphics
+ support systems, and so forth.
+
Not all of these recipes-*
directories appear in all
BSP layers.
+ +
+ The software layer provides the Metadata for additional + software packages used during the build. + This layer does not include Metadata that is specific to the + distribution or the machine, which are found in their + respective layers. +
+ This layer contains any new recipes that your project needs + in the form of recipe files. +
+ In order for the OpenEmbedded build system to create an image or + any target, it must be able to access source files. + The + general Yocto Project Development Environment figure + represents source files using the "Upstream Project Releases", + "Local Projects", and "SCMs (optional)" boxes. + The figure represents mirrors, which also play a role in locating + source files, with the "Source Mirror(s)" box. +
+ The method by which source files are ultimately organized is + a function of the project. + For example, for released software, projects tend to use tarballs + or other archived files that can capture the state of a release + guaranteeing that it is statically represented. + On the other hand, for a project that is more dynamic or + experimental in nature, a project might keep source files in a + repository controlled by a Source Control Manager (SCM) such as + Git. + Pulling source from a repository allows you to control + the point in the repository (the revision) from which you want to + build software. + Finally, a combination of the two might exist, which would give the + consumer a choice when deciding where to get source files. +
+ BitBake uses the
+ SRC_URI
+ variable to point to source files regardless of their location.
+ Each recipe must have a SRC_URI variable
+ that points to the source.
+
+ Another area that plays a significant role in where source files
+ come from is pointed to by the
+ DL_DIR
+ variable.
+ This area is a cache that can hold previously downloaded source.
+ You can also instruct the OpenEmbedded build system to create
+ tarballs from Git repositories, which is not the default behavior,
+ and store them in the DL_DIR by using the
+ BB_GENERATE_MIRROR_TARBALLS
+ variable.
+
+ Judicious use of a DL_DIR directory can
+ save the build system a trip across the Internet when looking
+ for files.
+ A good method for using a download directory is to have
+ DL_DIR point to an area outside of your
+ Build Directory.
+ Doing so allows you to safely delete the Build Directory
+ if needed without fear of removing any downloaded source file.
+
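A minimal local.conf sketch that follows this advice might look
like the following; the path is illustrative:
DL_DIR ?= "/home/user/yocto-downloads"
# Optionally also archive Git checkouts as tarballs in DL_DIR.
BB_GENERATE_MIRROR_TARBALLS = "1"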
+ The remainder of this section provides a deeper look into the + source files and the mirrors. + Here is a more detailed look at the source file area of the + base figure: +
+
+ Upstream project releases exist anywhere in the form of an + archived file (e.g. tarball or zip file). + These files correspond to individual recipes. + For example, the figure uses specific releases each for + BusyBox, Qt, and Dbus. + An archive file can be for any released product that can be + built using a recipe. +
+ Local projects are custom bits of software the user provides. + These bits reside somewhere local to a project - perhaps + a directory into which the user checks in items (e.g. + a local directory containing a development source tree + used by the group). +
+ The canonical method through which to include a local project
+ is to use the
+ externalsrc
+ class to include that local project.
+ You use either the local.conf or a
+ recipe's append file to override or set the
+ recipe to point to the local directory on your disk to pull
+ in the whole source tree.
+
+ For information on how to use the
+ externalsrc class, see the
+ "externalsrc.bbclass"
+ section.
+
+ Another place the build system can get source files from is
+ through an SCM such as Git or Subversion.
+ In this case, a repository is cloned or checked out.
+ The
+ do_fetch
+ task inside BitBake uses
+ the SRC_URI
+ variable and the argument's prefix to determine the correct
+ fetcher module.
+
For information on how to have the build system generate
tarballs for Git repositories and place them in the
DL_DIR directory, see the
BB_GENERATE_MIRROR_TARBALLS
variable.
+
+ When fetching a repository, BitBake uses the
+ SRCREV
+ variable to determine the specific revision from which to
+ build.
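A hypothetical recipe fragment that pins a Git revision might look
like this sketch; the repository URL and commit hash are
placeholders:
SRC_URI = "git://git.example.com/myproject.git;protocol=https;branch=master"
SRCREV = "0123456789abcdef0123456789abcdef01234567"
S = "${WORKDIR}/git"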
+
+ Two kinds of mirrors exist: pre-mirrors and regular mirrors.
+ The
+ PREMIRRORS
+ and
+ MIRRORS
+ variables point to these, respectively.
+ BitBake checks pre-mirrors before looking upstream for any
+ source files.
+ Pre-mirrors are appropriate when you have a shared directory
+ that is not a directory defined by the
+ DL_DIR
+ variable.
A pre-mirror typically points to a shared directory that is
+ local to your organization.
+
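One hedged way to point BitBake at such an organizational mirror is
a PREMIRRORS entry similar to the following sketch; the mirror URL
is a placeholder:
PREMIRRORS_prepend = "\
    git://.*/.*   http://mirror.example.com/sources/ \n \
    ftp://.*/.*   http://mirror.example.com/sources/ \n \
    http://.*/.*  http://mirror.example.com/sources/ \n \
    https://.*/.* http://mirror.example.com/sources/ \n"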
+ Regular mirrors can be any site across the Internet that is + used as an alternative location for source code should the + primary site not be functioning for some reason or another. +
+ When the OpenEmbedded build system generates an image or an SDK, + it gets the packages from a package feed area located in the + Build Directory. + The + general Yocto Project Development Environment figure + shows this package feeds area in the upper-right corner. +
+ This section looks a little closer into the package feeds area used + by the build system. + Here is a more detailed look at the area: +
+
+ Package feeds are an intermediary step in the build process.
+ The OpenEmbedded build system provides classes to generate
+ different package types, and you specify which classes to enable
+ through the
+ PACKAGE_CLASSES
+ variable.
+ Before placing the packages into package feeds,
+ the build process validates them with generated output quality
+ assurance checks through the
+ insane
+ class.
+
+ The package feed area resides in the Build Directory. + The directory the build system uses to temporarily store packages + is determined by a combination of variables and the particular + package manager in use. + See the "Package Feeds" box in the illustration and note the + information to the right of that area. + In particular, the following defines where package files are + kept: +
DEPLOY_DIR:
+ Defined as tmp/deploy in the Build
+ Directory.
+
DEPLOY_DIR_*:
+ Depending on the package manager used, the package type
+ sub-folder.
+ Given RPM, IPK, or DEB packaging and tarball creation, the
+ DEPLOY_DIR_RPM,
+ DEPLOY_DIR_IPK,
+ DEPLOY_DIR_DEB,
+ or
DEPLOY_DIR_TAR
+ variables are used, respectively.
+
PACKAGE_ARCH:
+ Defines architecture-specific sub-folders.
+ For example, packages could exist for the i586 or qemux86
+ architectures.
+
+
+ BitBake uses the do_package_write_* tasks to
+ generate packages and place them into the package holding area (e.g.
+ do_package_write_ipk for IPK packages).
+ See the
+ "do_package_write_deb",
+ "do_package_write_ipk",
+ "do_package_write_rpm",
+ and
+ "do_package_write_tar"
+ sections for additional information.
+ As an example, consider a scenario where an IPK packaging manager
+ is being used and package architecture support for both i586
+ and qemux86 exist.
+ Packages for the i586 architecture are placed in
+ build/tmp/deploy/ipk/i586, while packages for
+ the qemux86 architecture are placed in
+ build/tmp/deploy/ipk/qemux86.
+
+ The OpenEmbedded build system uses + BitBake + to produce images. + You can see from the + general Yocto Project Development Environment figure, + the BitBake area consists of several functional areas. + This section takes a closer look at each of those areas. +
+ Separate documentation exists for the BitBake tool. + See the + BitBake User Manual + for reference material on BitBake. +
+ The first stages of building a recipe are to fetch and unpack + the source code: +
+
+ The
+ do_fetch
+ and
+ do_unpack
+ tasks fetch the source files and unpack them into the work
+ directory.
+
For every local file (e.g. file://)
that is part of a recipe's
SRC_URI
statement, the OpenEmbedded build system takes a checksum
of the file for the recipe and inserts the checksum into
the signature for the do_fetch task.
+ If any local file has been modified, the
+ do_fetch task and all tasks that
+ depend on it are re-executed.
+
+ By default, everything is accomplished in the
+ Build Directory,
+ which has a defined structure.
+ For additional general information on the Build Directory,
+ see the
+ "build/"
+ section in the Yocto Project Reference Manual.
+
+ Unpacked source files are pointed to by the
+ S
+ variable.
+ Each recipe has an area in the Build Directory where the
+ unpacked source code resides.
+ The name of that directory for any given recipe is defined from
+ several different variables.
+ You can see the variables that define these directories
+ by looking at the figure:
+
TMPDIR -
+ The base directory where the OpenEmbedded build system
+ performs all its work during the build.
+
PACKAGE_ARCH -
+ The architecture of the built package or packages.
+
TARGET_OS -
+ The operating system of the target device.
+
PN -
+ The name of the built package.
+
PV -
+ The version of the recipe used to build the package.
+
PR -
+ The revision of the recipe used to build the package.
+
WORKDIR -
+ The location within TMPDIR where
+ a specific package is built.
+
S -
+ Contains the unpacked source files for a given recipe.
+
+
+ Once source code is fetched and unpacked, BitBake locates + patch files and applies them to the source files: +
+
+ The
+ do_patch
+ task processes recipes by
+ using the
+ SRC_URI
+ variable to locate applicable patch files, which by default
+ are *.patch or
+ *.diff files, or any file if
+ "apply=yes" is specified for the file in
+ SRC_URI.
+
+ BitBake finds and applies multiple patches for a single recipe
+ in the order in which it finds the patches.
+ Patches are applied to the recipe's source files located in the
+ S
+ directory.
+
+ For more information on how the source directories are + created, see the + "Source Fetching" + section. +
+ After source code is patched, BitBake executes tasks that + configure and compile the source code: +
+
This step in the build process consists of the following tasks:
+ do_prepare_recipe_sysroot:
+ This task sets up the two sysroots in
+ ${WORKDIR}
+ (i.e. recipe-sysroot and
+ recipe-sysroot-native) so that
+ the sysroots contain the contents of the
+ do_populate_sysroot
+ tasks of the recipes on which the recipe
+ containing the tasks depends.
+ A sysroot exists for both the target and for the native
+ binaries, which run on the host system.
+
do_configure:
+ This task configures the source by enabling and
+ disabling any build-time and configuration options for
+ the software being built.
+ Configurations can come from the recipe itself as well
+ as from an inherited class.
+ Additionally, the software itself might configure itself
+ depending on the target for which it is being built.
+
The configurations handled by the
+ do_configure
+ task are specific
+ to source code configuration for the source code
+ being built by the recipe.
If you are using the
+ autotools
+ class,
+ you can add additional configuration options by using
+ the
+ EXTRA_OECONF
+ or
+ PACKAGECONFIG_CONFARGS
+ variables.
For information on how these variables work within
that class, see the
meta/classes/autotools.bbclass file.
A brief sketch of adding such an option appears after
this list.
+
do_compile:
+ Once a configuration task has been satisfied, BitBake
+ compiles the source using the
+ do_compile
+ task.
+ Compilation occurs in the directory pointed to by the
+ B
+ variable.
+ Realize that the B directory is, by
+ default, the same as the
+ S
+ directory.
do_install:
+ Once compilation is done, BitBake executes the
+ do_install
+ task.
+ This task copies files from the B
+ directory and places them in a holding area pointed to
+ by the
+ D
+ variable.
+
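As referenced in the do_configure entry above, a minimal sketch of
passing an extra configure option from a recipe that inherits the
autotools class could be as simple as the following; the option
itself is illustrative:
EXTRA_OECONF += "--disable-static"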
+ After source code is configured and compiled, the + OpenEmbedded build system analyzes + the results and splits the output into packages: +
+
+ The
+ do_package
+ and
+ do_packagedata
+ tasks combine to analyze
+ the files found in the
+ D directory
+ and split them into subsets based on available packages and
+ files.
+ The analyzing process involves the following as well as other
+ items: splitting out debugging symbols,
+ looking at shared library dependencies between packages,
+ and looking at package relationships.
+ The do_packagedata task creates package
+ metadata based on the analysis such that the
+ OpenEmbedded build system can generate the final packages.
+ Working, staged, and intermediate results of the analysis
+ and package splitting process use these areas:
+
PKGD -
+ The destination directory for packages before they are
+ split.
+
PKGDATA_DIR -
+ A shared, global-state directory that holds data
+ generated during the packaging process.
+
PKGDESTWORK -
+ A temporary work area used by the
+ do_package task.
+
PKGDEST -
+ The parent directory for packages after they have
+ been split.
+
+ The FILES
+ variable defines the files that go into each package in
+ PACKAGES.
+ If you want details on how this is accomplished, you can
+ look at the
+ package
+ class.
+
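For illustration, a recipe might carve an extra package out of its
output with a sketch like this; the package name and file pattern
are hypothetical:
PACKAGES =+ "${PN}-tools"
FILES_${PN}-tools = "${bindir}/mytool*"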
+ Depending on the type of packages being created (RPM, DEB, or
+ IPK), the do_package_write_* task
+ creates the actual packages and places them in the
+ Package Feed area, which is
+ ${TMPDIR}/deploy.
+ You can see the
+ "Package Feeds"
+ section for more detail on that part of the build process.
+
Support for creating feeds directly from the
deploy/* directories does not exist.
+ Creating such feeds usually requires some kind of feed
+ maintenance mechanism that would upload the new packages
+ into an official package feed (e.g. the
+ Ångström distribution).
+ This functionality is highly distribution-specific
+ and thus is not provided out of the box.
+ +
+ Once packages are split and stored in the Package Feeds area, + the OpenEmbedded build system uses BitBake to generate the + root filesystem image: +
+
+ The image generation process consists of several stages and
+ depends on several tasks and variables.
+ The
+ do_rootfs
+ task creates the root filesystem (file and directory structure)
+ for an image.
+ This task uses several key variables to help create the list
+ of packages to actually install:
+
IMAGE_INSTALL:
+ Lists out the base set of packages to install from
+ the Package Feeds area.
PACKAGE_EXCLUDE:
+ Specifies packages that should not be installed.
+
IMAGE_FEATURES:
+ Specifies features to include in the image.
+ Most of these features map to additional packages for
+ installation.
PACKAGE_CLASSES:
+ Specifies the package backend to use and consequently
+ helps determine where to locate packages within the
+ Package Feeds area.
IMAGE_LINGUAS:
+ Determines the language(s) for which additional
+ language support packages are installed.
+
PACKAGE_INSTALL:
+ The final list of packages passed to the package manager
+ for installation into the image.
+
+
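To make the roles of these variables concrete, here is a hedged
sketch of additions an image recipe might make; the specific
package, feature, and locale are illustrative:
IMAGE_INSTALL_append = " strace"
IMAGE_FEATURES += "ssh-server-dropbear"
IMAGE_LINGUAS = "en-us"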
+ With
+ IMAGE_ROOTFS
+ pointing to the location of the filesystem under construction and
+ the PACKAGE_INSTALL variable providing the
+ final list of packages to install, the root file system is
+ created.
+
+ Package installation is under control of the package manager + (e.g. dnf/rpm, opkg, or apt/dpkg) regardless of whether or + not package management is enabled for the target. + At the end of the process, if package management is not + enabled for the target, the package manager's data files + are deleted from the root filesystem. + As part of the final stage of package installation, postinstall + scripts that are part of the packages are run. + Any scripts that fail to run + on the build host are run on the target when the target system + is first booted. + If you are using a + read-only root filesystem, + all the post installation scripts must succeed during the + package installation phase since the root filesystem is + read-only. +
+ The final stages of the do_rootfs task
+ handle post processing.
+ Post processing includes creation of a manifest file and
+ optimizations.
+
+ The manifest file (.manifest) resides
+ in the same directory as the root filesystem image.
+ This file lists out, line-by-line, the installed packages.
+ The manifest file is useful for the
+ testimage
+ class, for example, to determine whether or not to run
+ specific tests.
+ See the
+ IMAGE_MANIFEST
+ variable for additional information.
+
+ Optimizing processes run across the image include
+ mklibs, prelink,
+ and any other post-processing commands as defined by the
+ ROOTFS_POSTPROCESS_COMMAND
+ variable.
+ The mklibs process optimizes the size
+ of the libraries, while the
+ prelink process optimizes the dynamic
+ linking of shared libraries to reduce start up time of
+ executables.
+
+ After the root filesystem is built, processing begins on
+ the image through the
+ do_image
+ task.
+ The build system runs any pre-processing commands as defined
+ by the
+ IMAGE_PREPROCESS_COMMAND
+ variable.
+ This variable specifies a list of functions to call before
+ the OpenEmbedded build system creates the final image output
+ files.
+
+ The OpenEmbedded build system dynamically creates
+ do_image_* tasks as needed, based
+ on the image types specified in the
+ IMAGE_FSTYPES
+ variable.
+ The process turns everything into an image file or a set of
+ image files and compresses the root filesystem image to reduce
+ the overall size of the image.
+ The formats used for the root filesystem depend on the
+ IMAGE_FSTYPES variable.
+
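For instance, requesting both an ext4 image and a compressed
tarball might be done with a setting such as the following sketch:
IMAGE_FSTYPES = "ext4 tar.bz2"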
+ As an example, a dynamically created task when creating a
+ particular image type would take the
+ following form:
+
+ do_image_type[depends]
+
+ So, if the type as specified by the
+ IMAGE_FSTYPES were
+ ext4, the dynamically generated task
+ would be as follows:
+
+ do_image_ext4[depends] +
+
+ The final task involved in image creation is the
+ do_image_complete
+ task.
+ This task completes the image by applying any image
+ post processing as defined through the
+ IMAGE_POSTPROCESS_COMMAND
+ variable.
+ The variable specifies a list of functions to call once the
+ OpenEmbedded build system has created the final image output
+ files.
+
+ The OpenEmbedded build system uses BitBake to generate the
+ Software Development Kit (SDK) installer script for both the
+ standard and extensible SDKs:
+
+
For information on the do_populate_sdk
+ task, see the
+ "Building an SDK Installer"
+ section in the Yocto Project Application Development and the
+ Extensible Software Development Kit (SDK) manual.
+
+ Like image generation, the SDK script process consists of
+ several stages and depends on many variables.
+ The do_populate_sdk and
+ do_populate_sdk_ext tasks use these
+ key variables to help create the list of packages to actually
+ install.
+ For information on the variables listed in the figure, see the
+ "Application Development SDK"
+ section.
+
+ The do_populate_sdk task helps create
+ the standard SDK and handles two parts: a target part and a
+ host part.
+ The target part is the part built for the target hardware and
+ includes libraries and headers.
+ The host part is the part of the SDK that runs on the
+ SDKMACHINE.
+
+ The do_populate_sdk_ext task helps create
+ the extensible SDK and handles host and target parts
differently than its counterpart does for the standard SDK.
+ For the extensible SDK, the task encapsulates the build system,
+ which includes everything needed (host and target) for the SDK.
+
+ Regardless of the type of SDK being constructed, the
+ tasks perform some cleanup after which a cross-development
+ environment setup script and any needed configuration files
+ are created.
+ The final output is the Cross-development
+ toolchain installation script (.sh file),
+ which includes the environment setup script.
+
+ For each task that completes successfully, BitBake writes a
+ stamp file into the
+ STAMPS_DIR
+ directory.
+ The beginning of the stamp file's filename is determined by the
+ STAMP
+ variable, and the end of the name consists of the task's name
+ and current
+ input checksum.
+
This naming scheme assumes that
BB_SIGNATURE_HANDLER
is "OEBasicHash", which is almost always the case in
current OpenEmbedded.
To determine if a task needs to be rerun, BitBake checks if a
stamp file with a matching input checksum exists for the task.
If such a stamp file exists, the task's output is assumed to
exist and still be valid.
If the file does not exist, the task is rerun.
The stamp mechanism is more general than the shared + state (sstate) cache mechanism described in the + "Setscene Tasks and Shared State" + section. + BitBake avoids rerunning any task that has a valid + stamp file, not just tasks that can be accelerated through + the sstate cache.
However, you should realize that stamp files only
+ serve as a marker that some work has been done and that
+ these files do not record task output.
+ The actual task output would usually be somewhere in
+ TMPDIR
+ (e.g. in some recipe's
+ WORKDIR.)
+ What the sstate cache mechanism adds is a way to cache task
+ output that can then be shared between build machines.
+
+ Since STAMPS_DIR is usually a subdirectory
+ of TMPDIR, removing
+ TMPDIR will also remove
+ STAMPS_DIR, which means tasks will
+ properly be rerun to repopulate TMPDIR.
+
+ If you want some task to always be considered "out of date",
+ you can mark it with the
+ nostamp
+ varflag.
+ If some other task depends on such a task, then that task will
+ also always be considered out of date, which might not be what
+ you want.
+
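A minimal sketch of applying the flag to a hypothetical task named
do_deploy_to_testrig:
do_deploy_to_testrig[nostamp] = "1"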
+ For details on how to view information about a task's + signature, see the + "Viewing Task Variable Dependencies" + section in the Yocto Project Development Tasks Manual. +
+ The description of tasks so far assumes that BitBake needs to + build everything and there are no prebuilt objects available. + BitBake does support skipping tasks if prebuilt objects are + available. + These objects are usually made available in the form of a + shared state (sstate) cache. +
The locations BitBake checks for a shared state cache are
controlled by the
SSTATE_DIR
and
SSTATE_MIRRORS
variables.
+ +
The idea of a setscene task (i.e.
+ do_taskname_setscene)
+ is a version of the task where
+ instead of building something, BitBake can skip to the end
+ result and simply place a set of files into specific locations
+ as needed.
+ In some cases, it makes sense to have a setscene task variant
+ (e.g. generating package files in the
+ do_package_write_* task).
+ In other cases, it does not make sense, (e.g. a
+ do_patch
+ task or
+ do_unpack
+ task) since the work involved would be equal to or greater than
+ the underlying task.
+
+ In the OpenEmbedded build system, the common tasks that have
+ setscene variants are
+ do_package,
+ do_package_write_*,
+ do_deploy,
+ do_packagedata,
+ and
+ do_populate_sysroot.
+ Notice that these are most of the tasks whose output is an
+ end result.
+
+ The OpenEmbedded build system has knowledge of the relationship
+ between these tasks and other tasks that precede them.
+ For example, if BitBake runs
+ do_populate_sysroot_setscene for
+ something, there is little point in running any of the
+ do_fetch, do_unpack,
+ do_patch,
+ do_configure,
+ do_compile, and
+ do_install tasks.
+ However, if do_package needs to be run,
+ BitBake would need to run those other tasks.
+
+ It becomes more complicated if everything can come from an
+ sstate cache because some objects are simply not required at
+ all.
+ For example, you do not need a compiler or native tools, such
+ as quilt, if there is nothing to compile or patch.
+ If the do_package_write_* packages are
+ available from sstate, BitBake does not need the
+ do_package task data.
+
To handle all these complexities, BitBake runs in two phases.
The first is the "setscene" stage.
During this stage, BitBake first checks the sstate cache for
any targets it is planning to build.
BitBake does a fast check to see if the object exists rather
than doing a complete download.
If nothing exists, the setscene stage completes and the main
build, which is the second phase, proceeds.
+ If objects are found in the sstate cache, the OpenEmbedded + build system works backwards from the end targets specified + by the user. + For example, if an image is being built, the OpenEmbedded build + system first looks for the packages needed for that image and + the tools needed to construct an image. + If those are available, the compiler is not needed. + Thus, the compiler is not even downloaded. + If something was found to be unavailable, or the download or + setscene task fails, the OpenEmbedded build system then tries + to install dependencies, such as the compiler, from the cache. +
+ The availability of objects in the sstate cache is handled by
+ the function specified by the
+ BB_HASHCHECK_FUNCTION
+ variable and returns a list of the objects that are available.
+ The function specified by the
+ BB_SETSCENE_DEPVALID
+ variable is the function that determines whether a given
+ dependency needs to be followed, and whether for any given
+ relationship the function needs to be passed.
+ The function returns a True or False value.
+
+ The images produced by the OpenEmbedded build system + are compressed forms of the + root filesystem that are ready to boot on a target device. + You can see from the + general Yocto Project Development Environment figure + that BitBake output, in part, consists of images. + This section is going to look more closely at this output: +
+
+ For a list of example images that the Yocto Project provides, + see the + "Images" + chapter in the Yocto Project Reference Manual. +
+ Images are written out to the
+ Build Directory
+ inside the
tmp/deploy/images/machine/
folder as shown in the figure.
This folder contains any files expected to be loaded on the
target device.
The
DEPLOY_DIR
variable points to the deploy directory,
+ while the
+ DEPLOY_DIR_IMAGE
+ variable points to the appropriate directory containing images for
+ the current configuration.
+
kernel-image:
A kernel binary file.
The
KERNEL_IMAGETYPE
variable setting determines the naming scheme for the
kernel image file.
Depending on that variable, the file could begin with
a variety of naming strings.
The deploy/images/machine
directory can contain multiple image files for the
machine.
root-filesystem-image:
Root filesystems for the target device (e.g.
*.ext3 or *.bz2
files).
The
IMAGE_FSTYPES
variable setting determines the root filesystem image
type.
The deploy/images/machine
directory can contain multiple root filesystems for the
machine.
kernel-modules:
Tarballs that contain all the modules built for the kernel.
Kernel module tarballs exist for legacy purposes and
can be suppressed by setting the
MODULE_TARBALL_DEPLOY
variable to "0".
The deploy/images/machine
directory can contain multiple kernel module tarballs
for the machine.
bootloaders:
Bootloaders supporting the image, if applicable to the
target machine.
The deploy/images/machine
directory can contain multiple bootloaders for the
machine.
symlinks:
The deploy/images/machine
folder contains
a symbolic link that points to the most recently built file
for each machine.
These links might be useful for external scripts that
need to obtain the latest version of each file.
+
+ In the
+ general Yocto Project Development Environment figure,
+ the output labeled "Application Development SDK" represents an
+ SDK.
+ The SDK generation process differs depending on whether you build
+ a standard SDK
+ (e.g. bitbake -c populate_sdk imagename)
+ or an extensible SDK
+ (e.g. bitbake -c populate_sdk_ext imagename).
+ This section is going to take a closer look at this output:
+
+
+ The specific form of this output is a self-extracting
+ SDK installer (*.sh) that, when run,
+ installs the SDK, which consists of a cross-development
+ toolchain, a set of libraries and headers, and an SDK
+ environment setup script.
+ Running this installer essentially sets up your
+ cross-development environment.
+ You can think of the cross-toolchain as the "host"
+ part because it runs on the SDK machine.
+ You can think of the libraries and headers as the "target"
+ part because they are built for the target hardware.
+ The environment setup script is added so that you can initialize
+ the environment before using the tools.
+
+ The Yocto Project supports several methods by which you can + set up this cross-development environment. + These methods include downloading pre-built SDK installers + or building and installing your own SDK installer. +
+ For background information on cross-development toolchains + in the Yocto Project development environment, see the + "Cross-Development Toolchain Generation" + section. +
+ For information on setting up a cross-development + environment, see the + Yocto Project Application Development and the Extensible Software Development Kit (eSDK) + manual. +
+ Once built, the SDK installers are written out to the
+ deploy/sdk folder inside the
+ Build Directory
+ as shown in the figure at the beginning of this section.
+ Depending on the type of SDK, several variables exist that help
+ configure these files.
+ The following list shows the variables associated with a standard
+ SDK:
+
DEPLOY_DIR:
+ Points to the deploy
+ directory.
SDKMACHINE:
+ Specifies the architecture of the machine
+ on which the cross-development tools are run to
+ create packages for the target hardware.
+
SDKIMAGE_FEATURES:
+ Lists the features to include in the "target" part
+ of the SDK.
+
TOOLCHAIN_HOST_TASK:
+ Lists packages that make up the host
+ part of the SDK (i.e. the part that runs on
+ the SDKMACHINE).
When you use
bitbake -c populate_sdk imagename
to create the SDK, a set of default packages
apply.
This variable allows you to add more packages.
TOOLCHAIN_TARGET_TASK:
+ Lists packages that make up the target part
+ of the SDK (i.e. the part built for the
+ target hardware).
+
SDKPATH:
+ Defines the default SDK installation path offered by the
+ installation script.
+
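For instance, extra packages might be pulled into the two halves of
a standard SDK with additions like the following sketch; the
package names are illustrative:
TOOLCHAIN_HOST_TASK_append = " nativesdk-cmake"
TOOLCHAIN_TARGET_TASK_append = " zlib-dev"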
This next list shows the variables associated with an
extensible SDK:
DEPLOY_DIR:
+ Points to the deploy directory.
+
SDK_EXT_TYPE:
+ Controls whether or not shared state artifacts are copied
+ into the extensible SDK.
+ By default, all required shared state artifacts are copied
+ into the SDK.
+
SDK_INCLUDE_PKGDATA:
+ Specifies whether or not packagedata will be included in
+ the extensible SDK for all recipes in the "world" target.
+
SDK_INCLUDE_TOOLCHAIN:
+ Specifies whether or not the toolchain will be included
+ when building the extensible SDK.
+
SDK_LOCAL_CONF_WHITELIST:
+ A list of variables allowed through from the build system
+ configuration into the extensible SDK configuration.
+
SDK_LOCAL_CONF_BLACKLIST:
+ A list of variables not allowed through from the build
+ system configuration into the extensible SDK configuration.
+
SDK_INHERIT_BLACKLIST:
+ A list of classes to remove from the
+ INHERIT
+ value globally within the extensible SDK configuration.
+
+
+ This chapter describes concepts for various areas of the Yocto Project. + Currently, topics include Yocto Project components, cross-development + generation, shared state (sstate) cache, runtime dependencies, + Pseudo and Fakeroot, x32 psABI, Wayland support, and Licenses. +
+ The + BitBake + task executor together with various types of configuration files + form the OpenEmbedded Core. + This section overviews these components by describing their use and + how they interact. +
+ BitBake handles the parsing and execution of the data files. + The data itself is of various types: +
+ Recipes: + Provides details about particular pieces of software. +
+ Class Data: + Abstracts common build information (e.g. how to build a + Linux kernel). +
+ Configuration Data: + Defines machine-specific settings, policy decisions, and + so forth. + Configuration data acts as the glue to bind everything + together. +
+
+ BitBake knows how to combine multiple data sources together and + refers to each data source as a layer. + For information on layers, see the + "Understanding and Creating Layers" + section of the Yocto Project Development Tasks Manual. +
+ Following are some brief details on these core components. + For additional information on how these components interact during + a build, see the + "Development Concepts" + section. +
+ BitBake is the tool at the heart of the OpenEmbedded build + system and is responsible for parsing the + Metadata, + generating a list of tasks from it, and then executing those + tasks. +
+ This section briefly introduces BitBake. + If you want more information on BitBake, see the + BitBake User Manual. +
+ To see a list of the options BitBake supports, use either of + the following commands: +
$ bitbake -h
$ bitbake --help
+
+ The most common usage for BitBake is
bitbake packagename,
where packagename is the name of the
+ package you want to build (referred to as the "target" in this
+ manual).
+ The target often equates to the first part of a recipe's
+ filename (e.g. "foo" for a recipe named
+ foo_1.3.0-r0.bb).
+ So, to process the
+ matchbox-desktop_1.2.3.bb recipe file, you
+ might type the following:
+
$ bitbake matchbox-desktop
+ Several different versions of
+ matchbox-desktop might exist.
+ BitBake chooses the one selected by the distribution
+ configuration.
+ You can get more details about how BitBake chooses between
+ different target versions and providers in the
+ "Preferences"
+ section of the BitBake User Manual.
+
+ BitBake also tries to execute any dependent tasks first.
+ So for example, before building
+ matchbox-desktop, BitBake would build a
+ cross compiler and glibc if they had not
+ already been built.
+
+ A useful BitBake option to consider is the
+ -k or --continue
+ option.
+ This option instructs BitBake to try and continue processing
+ the job as long as possible even after encountering an error.
+ When an error occurs, the target that failed and those that
+ depend on it cannot be remade.
+ However, when you use this option other dependencies can
+ still be processed.
+
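For example, to keep going past failures while building a typical
image target, you might run something like:
$ bitbake -k core-image-minimal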
+ Files that have the .bb suffix are
+ "recipes" files.
+ In general, a recipe contains information about a single piece
+ of software.
+ This information includes the location from which to download
+ the unaltered source, any source patches to be applied to that
+ source (if needed), which special configuration options to
+ apply, how to compile the source files, and how to package the
+ compiled output.
+
+ The term "package" is sometimes used to refer to recipes.
+ However, since the word "package" is used for the packaged
+ output from the OpenEmbedded build system (i.e.
+ .ipk or .deb files),
+ this document avoids using the term "package" when referring
+ to recipes.
+
+ Prior to the build, if you know that several different recipes
+ provide the same functionality, you can use a virtual provider
+ (i.e. virtual/*) as a placeholder for the
+ actual provider.
+ The actual provider would be determined at build time.
+ In this case, you should add virtual/*
+ to
+ DEPENDS,
+ rather than listing the specified provider.
+ You would select the actual provider by setting the
+ PREFERRED_PROVIDER
+ variable (i.e.
+ PREFERRED_PROVIDER_virtual/*)
+ in the build's configuration file (e.g.
+ poky/build/conf/local.conf).
+
Any recipe that provides a virtual/*
item that is ultimately not selected through
PREFERRED_PROVIDER does not get built.
+ Preventing these recipes from building is usually the
+ desired behavior since this mechanism's purpose is to
+ select between mutually exclusive alternative providers.
+ +
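The pattern, sketched below, is that the recipe depends on the
virtual name while the configuration picks the concrete provider;
mesa is shown here only as a plausible choice:
# In a recipe that needs an OpenGL library:
DEPENDS += "virtual/libgl"

# In the build's configuration file (e.g. local.conf):
PREFERRED_PROVIDER_virtual/libgl = "mesa"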
+ The following lists specific examples of virtual providers: +
+ virtual/mesa:
+ Provides gbm.pc.
+
+ virtual/egl:
+ Provides egl.pc and possibly
+ wayland-egl.pc.
+
+ virtual/libgl:
+ Provides gl.pc (i.e. libGL).
+
+ virtual/libgles1:
+ Provides glesv1_cm.pc
+ (i.e. libGLESv1_CM).
+
+ virtual/libgles2:
+ Provides glesv2.pc
+ (i.e. libGLESv2).
+
+
+ Class files (.bbclass) contain information
+ that is useful to share between
+ Metadata
+ files.
+ An example is the
+ autotools
+ class, which contains common settings for any application that
+ Autotools uses.
+ The
+ "Classes"
+ chapter in the Yocto Project Reference Manual provides
+ details about classes and how to use them.
+
+ The configuration files (.conf) define
+ various configuration variables that govern the OpenEmbedded
+ build process.
+ These files fall into several areas that define machine
+ configuration options, distribution configuration options,
+ compiler tuning options, general common configuration options,
+ and user configuration options in
+ local.conf, which is found in the
+ Build Directory.
+
+ The Yocto Project does most of the work for you when it comes to + creating + cross-development toolchains. + This section provides some technical background on how + cross-development toolchains are created and used. + For more information on toolchains, you can also see the + Yocto Project Application Development and the Extensible Software Development Kit (eSDK) + manual. +
+ In the Yocto Project development environment, cross-development + toolchains are used to build the image and applications that run + on the target hardware. + With just a few commands, the OpenEmbedded build system creates + these necessary toolchains for you. +
+ The following figure shows a high-level build environment regarding + toolchain construction and use. +
+
![]() |
+
+ Most of the work occurs on the Build Host.
This is the machine used to build images and generally work within
the Yocto Project environment.
+ When you run BitBake to create an image, the OpenEmbedded build system
+ uses the host gcc compiler to bootstrap a
+ cross-compiler named gcc-cross.
+ The gcc-cross compiler is what BitBake uses to
+ compile source files when creating the target image.
+ You can think of gcc-cross simply as an
+ automatically generated cross-compiler that is used internally within
+ BitBake only.
+
The extensible SDK does not use
gcc-cross-canadian since this SDK
ships a copy of the OpenEmbedded build system and the sysroot
within it contains gcc-cross.
+ +
+ The chain of events that occurs when gcc-cross is
+ bootstrapped is as follows:
+
+ gcc -> binutils-cross -> gcc-cross-initial -> linux-libc-headers -> glibc-initial -> glibc -> gcc-cross -> gcc-runtime +
+
+ gcc:
+ The build host's GNU Compiler Collection (GCC).
+
+ binutils-cross:
+ The bare minimum binary utilities needed in order to run
+ the gcc-cross-initial phase of the
+ bootstrap operation.
+
+ gcc-cross-initial:
+ An early stage of the bootstrap process for creating
+ the cross-compiler.
+ This stage builds enough of the gcc-cross,
+ the C library, and other pieces needed to finish building the
+ final cross-compiler in later stages.
+ This tool is a "native" package (i.e. it is designed to run on
+ the build host).
+
+ linux-libc-headers:
+ Headers needed for the cross-compiler.
+
+ glibc-initial:
+ An initial version of the Embedded GLIBC needed to bootstrap
+ glibc.
+
+ gcc-cross:
+ The final stage of the bootstrap process for the
+ cross-compiler.
+ This stage results in the actual cross-compiler that
+ BitBake uses when it builds an image for a targeted
+ device.
+
If you are replacing this cross compiler toolchain
with a custom version, you must replace
gcc-cross.
This tool is also a "native" package (i.e. it is
designed to run on the build host).
+ gcc-runtime:
+ Runtime libraries resulting from the toolchain bootstrapping
+ process.
+ This tool produces a binary that consists of the
runtime libraries needed for the targeted device.
+
+
+ You can use the OpenEmbedded build system to build an installer for
+ the relocatable SDK used to develop applications.
When you run the installer, it installs the toolchain, which contains
the development tools you need to cross-compile and test your software
(e.g., gcc-cross-canadian,
binutils-cross-canadian, and other
nativesdk-* tools, which are tools native to the SDK,
i.e. native to
SDK_ARCH).
+ The figure shows the commands you use to easily build out this
+ toolchain.
+ This cross-development toolchain is built to execute on the
+ SDKMACHINE,
+ which might or might not be the same
+ machine as the Build Host.
+
+
+ Here is the bootstrap process for the relocatable toolchain: +
gcc -> binutils-crosssdk -> gcc-crosssdk-initial -> linux-libc-headers ->
    glibc-initial -> nativesdk-glibc -> gcc-crosssdk -> gcc-cross-canadian
+
+ gcc:
+ The build host's GNU Compiler Collection (GCC).
+
+ binutils-crosssdk:
+ The bare minimum binary utilities needed in order to run
+ the gcc-crosssdk-initial phase of the
+ bootstrap operation.
+
+ gcc-crosssdk-initial:
+ An early stage of the bootstrap process for creating
+ the cross-compiler.
+ This stage builds enough of the
+ gcc-crosssdk and supporting pieces so that
+ the final stage of the bootstrap process can produce the
+ finished cross-compiler.
+ This tool is a "native" binary that runs on the build host.
+
+ linux-libc-headers:
+ Headers needed for the cross-compiler.
+
+ glibc-initial:
+ An initial version of the Embedded GLIBC needed to bootstrap
+ nativesdk-glibc.
+
+ nativesdk-glibc:
+ The Embedded GLIBC needed to bootstrap the
+ gcc-crosssdk.
+
+ gcc-crosssdk:
+ The final stage of the bootstrap process for the
+ relocatable cross-compiler.
+ The gcc-crosssdk is a transitory compiler
+ and never leaves the build host.
+ Its purpose is to help in the bootstrap process to create the
+ eventual relocatable gcc-cross-canadian
+ compiler, which is relocatable.
+ This tool is also a "native" package (i.e. it is
+ designed to run on the build host).
+
+ gcc-cross-canadian:
+ The final relocatable cross-compiler.
+ When run on the
+ SDKMACHINE,
+ this tool
+ produces executable code that runs on the target device.
+ Only one cross-canadian compiler is produced per architecture
+ since they can be targeted at different processor optimizations
+ using configurations passed to the compiler through the
+ compile commands.
+ This circumvents the need for multiple compilers and thus
+ reduces the size of the toolchains.
+
+
+ By design, the OpenEmbedded build system builds everything from + scratch unless BitBake can determine that parts do not need to be + rebuilt. + Fundamentally, building from scratch is attractive as it means all + parts are built fresh and there is no possibility of stale data + causing problems. + When developers hit problems, they typically default back to + building from scratch so they know the state of things from the + start. +
+ Building an image from scratch is both an advantage and a + disadvantage to the process. + As mentioned in the previous paragraph, building from scratch + ensures that everything is current and starts from a known state. + However, building from scratch also takes much longer as it + generally means rebuilding things that do not necessarily need + to be rebuilt. +
+ The Yocto Project implements shared state code that supports + incremental builds. + The implementation of the shared state code answers the following + questions that were fundamental roadblocks within the OpenEmbedded + incremental build support system: +
+ What pieces of the system have changed and what pieces have + not changed? +
+ How are changed pieces of software removed and replaced? +
+ How are pre-built components that do not need to be rebuilt + from scratch used when they are available? +
+
+ For the first question, the build system detects changes in the + "inputs" to a given task by creating a checksum (or signature) of + the task's inputs. + If the checksum changes, the system assumes the inputs have changed + and the task needs to be rerun. + For the second question, the shared state (sstate) code tracks + which tasks add which output to the build process. + This means the output from a given task can be removed, upgraded + or otherwise manipulated. + The third question is partly addressed by the solution for the + second question assuming the build system can fetch the sstate + objects from remote locations and install them if they are deemed + to be valid. +
+ The build system does not maintain
+ PR
+ information as part of the shared state packages.
+ Consequently, considerations exist that affect maintaining
+ shared state feeds.
+ For information on how the OpenEmbedded build system
+ works with packages and can track incrementing
+ PR information, see the
+ "Automatically Incrementing a Binary Package Revision Number"
+ section in the Yocto Project Development Tasks Manual.
+ +
+ The rest of this section goes into detail about the overall + incremental build architecture, the checksums (signatures), shared + state, and some tips and tricks. +
+ When determining what parts of the system need to be built,
+ BitBake works on a per-task basis rather than a per-recipe
+ basis.
+ You might wonder why using a per-task basis is preferred over
+ a per-recipe basis.
+ To help explain, consider having the IPK packaging backend
+ enabled and then switching to DEB.
+ In this case, the
+ do_install
+ and
+ do_package
+ task outputs are still valid.
+ However, with a per-recipe approach, the build would not
+ include the .deb files.
+ Consequently, you would have to invalidate the whole build and
+ rerun it.
+ Rerunning everything is not the best solution.
+ Also, in this case, the core must be "taught" much about
+ specific tasks.
+ This methodology does not scale well and does not allow users
+ to easily add new tasks in layers or as external recipes
+ without touching the packaged-staging core.
+
+ The shared state code uses a checksum, which is a unique + signature of a task's inputs, to determine if a task needs to + be run again. + Because it is a change in a task's inputs that triggers a + rerun, the process needs to detect all the inputs to a given + task. + For shell tasks, this turns out to be fairly easy because + the build process generates a "run" shell script for each task + and it is possible to create a checksum that gives you a good + idea of when the task's data changes. +
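+ For example (the path below is illustrative; the exact
+ WORKDIR layout depends on your build
+ configuration, and busybox is just an assumed
+ recipe name), you can inspect the generated "run" scripts under a
+ task's temp directory:
+
+ ls tmp/work/*/busybox/*/temp/run.do_*
+
+ Comparing two versions of a "run" script is a quick way to see why a
+ shell task's checksum changed.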
+ To complicate the problem, there are things that should not be
+ included in the checksum.
+ First, there is the actual specific build path of a given
+ task - the
+ WORKDIR.
+ It does not matter if the work directory changes because it
+ should not affect the output for target packages.
+ Also, the build process has the objective of making native
+ or cross packages relocatable.
+
+ The checksum therefore needs to exclude
+ WORKDIR.
+ The simplistic approach for excluding the work directory is to
+ set WORKDIR to some fixed value and
+ create the checksum for the "run" script.
+
+ Another problem results from the "run" scripts containing + functions that might or might not get called. + The incremental build solution contains code that figures out + dependencies between shell functions. + This code is used to prune the "run" scripts down to the + minimum set, thereby alleviating this problem and making the + "run" scripts much more readable as a bonus. +
+ So far we have solutions for shell scripts. + What about Python tasks? + The same approach applies even though these tasks are more + difficult. + The process needs to figure out what variables a Python + function accesses and what functions it calls. + Again, the incremental build solution contains code that first + figures out the variable and function dependencies, and then + creates a checksum for the data used as the input to the task. +
+ Like the WORKDIR case, situations exist
+ where dependencies should be ignored.
+ For these cases, you can instruct the build process to
+ ignore a dependency by using a line like the following:
+
+ PACKAGE_ARCHS[vardepsexclude] = "MACHINE" +
+ This example ensures that the
+ PACKAGE_ARCHS
+ variable does not depend on the value of
+ MACHINE,
+ even if it does reference it.
+
+ Equally, there are cases where we need to add dependencies + BitBake is not able to find. + You can accomplish this by using a line like the following: +
+ PACKAGE_ARCHS[vardeps] = "MACHINE" +
+ This example explicitly adds the MACHINE
+ variable as a dependency for
+ PACKAGE_ARCHS.
+
+ Consider a case with in-line Python, for example, where
+ BitBake is not able to figure out dependencies.
+ When running in debug mode (i.e. using
+ -DDD), BitBake produces output when it
+ discovers something for which it cannot figure out dependencies.
+ The Yocto Project team has currently not managed to cover
+ those dependencies in detail and is aware of the need to fix
+ this situation.
+
+ Thus far, this section has limited discussion to the direct + inputs into a task. + Information based on direct inputs is referred to as the + "basehash" in the code. + However, there is still the question of a task's indirect + inputs - the things that were already built and present in the + Build Directory. + The checksum (or signature) for a particular task needs to add + the hashes of all the tasks on which the particular task + depends. + Choosing which dependencies to add is a policy decision. + However, the effect is to generate a master checksum that + combines the basehash and the hashes of the task's + dependencies. +
+ At the code level, there are a variety of ways both the + basehash and the dependent task hashes can be influenced. + Within the BitBake configuration file, we can give BitBake + some extra information to help it construct the basehash. + The following statement effectively results in a list of + global variable dependency excludes - variables never + included in any checksum: +
+ BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
+     SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL TERM \
+     USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST \
+     PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN PARALLEL_MAKE \
+     CCACHE_DIR EXTERNAL_TOOLCHAIN CCACHE CCACHE_DISABLE LICENSE_PATH SDKPKGSUFFIX"
+ The previous example excludes
+ WORKDIR
+ since that variable is actually constructed as a path within
+ TMPDIR,
+ which is on the whitelist.
+
+ The rules for deciding which hashes of dependent tasks to
+ include through dependency chains are more complex and are
+ generally accomplished with a Python function.
+ The code in meta/lib/oe/sstatesig.py shows
+ two examples of this and also illustrates how you can insert
+ your own policy into the system if so desired.
+ This file defines the two basic signature generators
+ OE-Core
+ uses: "OEBasic" and "OEBasicHash".
+ By default, there is a dummy "noop" signature handler enabled
+ in BitBake.
+ This means that behavior is unchanged from previous versions.
+ OE-Core uses the "OEBasicHash" signature handler by default
+ through this setting in the bitbake.conf
+ file:
+
+ BB_SIGNATURE_HANDLER ?= "OEBasicHash" +
+ The "OEBasicHash" BB_SIGNATURE_HANDLER
+ is the same as the "OEBasic" version but adds the task hash to
+ the stamp files.
+ This results in any
+ Metadata
+ change that alters the task hash automatically causing
+ the task to be rerun.
+ This removes the need to bump
+ PR
+ values, and changes to Metadata automatically ripple across
+ the build.
+
+ It is also worth noting that the end result of these + signature generators is to make some dependency and hash + information available to the build. + This information includes: +
+ BB_BASEHASH_task-taskname:
+ The base hashes for each task in the recipe.
+
+ BB_BASEHASH_filename:taskname:
+ The base hashes for each dependent task.
+
+ BBHASHDEPS_filename:taskname:
+ The task dependencies for each task.
+
+ BB_TASKHASH:
+ The hash of the currently running task.
+
+
+ Checksums and dependencies, as discussed in the previous + section, solve half the problem of supporting a shared state. + The other part of the problem is being able to use checksum + information during the build and being able to reuse or rebuild + specific components. +
+ The
+ sstate
+ class is a relatively generic implementation of how to
+ "capture" a snapshot of a given task.
+ The idea is that the build process does not care about the
+ source of a task's output.
+ Output could be freshly built or it could be downloaded and
+ unpacked from somewhere - the build process does not need to
+ worry about its origin.
+
+ There are two types of output. One type simply involves creating a
+ directory in
+ WORKDIR.
+ A good example is the output of either
+ do_install
+ or
+ do_package.
+ The other type of output occurs when a set of data is merged
+ into a shared directory tree such as the sysroot.
+
+ The Yocto Project team has tried to keep the details of the
+ implementation hidden in the sstate class.
+ From a user's perspective, adding shared state wrapping to a task
+ is as simple as this
+ do_deploy
+ example taken from the
+ deploy
+ class:
+
+ DEPLOYDIR = "${WORKDIR}/deploy-${PN}"
+ SSTATETASKS += "do_deploy"
+ do_deploy[sstate-inputdirs] = "${DEPLOYDIR}"
+ do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
+
+ python do_deploy_setscene () {
+ sstate_setscene(d)
+ }
+ addtask do_deploy_setscene
+ do_deploy[dirs] = "${DEPLOYDIR} ${B}"
+ The following list explains the previous example:
+ Adding "do_deploy" to SSTATETASKS
+ adds some required sstate-related processing, which is
+ implemented in the
+ sstate
+ class, to before and after the
+ do_deploy
+ task.
+
+ The
+ do_deploy[sstate-inputdirs] = "${DEPLOYDIR}"
+ declares that do_deploy places its
+ output in ${DEPLOYDIR} when run
+ normally (i.e. when not using the sstate cache).
+ This output becomes the input to the shared state cache.
+
+ The
+ do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
+ line causes the contents of the shared state cache to be
+ copied to ${DEPLOY_DIR_IMAGE}.
+
+ If do_deploy is not already in
+ the shared state cache or if its input checksum
+ (signature) has changed from when the output was
+ cached, the task will be run to populate the shared
+ state cache, after which the contents of the shared
+ state cache is copied to
+ ${DEPLOY_DIR_IMAGE}.
+ If do_deploy is in the shared
+ state cache and its signature indicates that the
+ cached output is still valid (i.e. if no
+ relevant task inputs have changed), then the
+ contents of the shared state cache will be copied
+ directly to
+ ${DEPLOY_DIR_IMAGE} by the
+ do_deploy_setscene task
+ instead, skipping the
+ do_deploy task.
+ +
+ The following task definition is glue logic needed to + make the previous settings effective: +
+ python do_deploy_setscene () {
+ sstate_setscene(d)
+ }
+ addtask do_deploy_setscene
+
+ sstate_setscene() takes the flags
+ above as input and accelerates the
+ do_deploy task through the
+ shared state cache if possible.
+ If the task was accelerated,
+ sstate_setscene() returns True.
+ Otherwise, it returns False, and the normal
+ do_deploy task runs.
+ For more information, see the
+ "setscene"
+ section in the BitBake User Manual.
+
+ The do_deploy[dirs] = "${DEPLOYDIR} ${B}"
+ line creates ${DEPLOYDIR} and
+ ${B} before the
+ do_deploy task runs, and also sets
+ the current working directory of
+ do_deploy to
+ ${B}.
+ For more information, see the
+ "Variable Flags"
+ section in the BitBake User Manual.
+
+ If sstate-inputdirs and
+ sstate-outputdirs would be the
+ same, you can use
+ sstate-plaindirs.
+ For example, to preserve the
+ ${PKGD} and
+ ${PKGDEST} output from the
+ do_package
+ task, use the following:
+
+ do_package[sstate-plaindirs] = "${PKGD} ${PKGDEST}"
+ +
+ sstate-inputdirs and
+ sstate-outputdirs can also be used
+ with multiple directories.
+ For example, the following declares
+ PKGDESTWORK and
+ SHLIBWORK as shared state
+ input directories, which populates the shared state
+ cache, and PKGDATA_DIR and
+ SHLIBSDIR as the corresponding
+ shared state output directories:
+
+ do_package[sstate-inputdirs] = "${PKGDESTWORK} ${SHLIBSWORKDIR}"
+ do_package[sstate-outputdirs] = "${PKGDATA_DIR} ${SHLIBSDIR}"
+ +
+ These methods also include the ability to take a + lockfile when manipulating shared state directory + structures, for cases where file additions or removals + are sensitive: +
+ do_package[sstate-lockfile] = "${PACKAGELOCK}"
+ +
+
+ Behind the scenes, the shared state code works by looking in
+ SSTATE_DIR
+ and
+ SSTATE_MIRRORS
+ for shared state files.
+ Here is an example:
+
+ SSTATE_MIRRORS ?= "\
+     file://.* http://someserver.tld/share/sstate/PATH;downloadfilename=PATH \n \
+     file://.* file:///some/local/dir/sstate/PATH"
+
+ The shared state directory
+ (SSTATE_DIR) is organized into
+ two-character subdirectories, where the subdirectory
+ names are based on the first two characters of the hash.
+ If the shared state directory structure for a mirror has the
+ same structure as SSTATE_DIR, you must
+ specify "PATH" as part of the URI to enable the build system
+ to map to the appropriate subdirectory.
+ +
+ The shared state package validity can be detected just by + looking at the filename since the filename contains the task + checksum (or signature) as described earlier in this section. + If a valid shared state package is found, the build process + downloads it and uses it to accelerate the task. +
+ The build processes use the *_setscene
+ tasks for the task acceleration phase.
+ BitBake goes through this phase before the main execution
+ code and tries to accelerate any tasks for which it can find
+ shared state packages.
+ If a shared state package for a task is available, the
+ shared state package is used.
+ This means the task and any tasks on which it is dependent
+ are not executed.
+
+ As a real-world example, the aim is that when building an IPK-based
+ image, only the
+ do_package_write_ipk
+ tasks would have their shared state packages fetched and
+ extracted.
+ Since the sysroot is not used, it would never get extracted.
+ This is another reason why a task-based approach is preferred
+ over a recipe-based approach, which would have to install the
+ output from every task.
+
+ The code in the build system that supports incremental builds + is not simple code. + This section presents some tips and tricks that help you work + around issues related to shared state code. +
+ Seeing what metadata went into creating the input signature
+ of a shared state (sstate) task can be a useful debugging
+ aid.
+ This information is available in signature information
+ (siginfo) files in
+ SSTATE_DIR.
+ For information on how to view and interpret information in
+ siginfo files, see the
+ "Viewing Task Variable Dependencies"
+ section in the Yocto Project Development Tasks Manual.
+
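+ For example (file names below are placeholders), the
+ bitbake-dumpsig and
+ bitbake-diffsigs utilities print the
+ contents of a single siginfo file or compare two of them:
+
+ bitbake-dumpsig siginfo_file
+ bitbake-diffsigs siginfo_file_1 siginfo_file_2
+
+ The second command is useful for seeing exactly which variable or
+ dependency change caused two task signatures to differ.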
+ The OpenEmbedded build system uses checksums and shared + state cache to avoid unnecessarily rebuilding tasks. + Collectively, this scheme is known as "shared state code." +
+ As with all schemes, this one has some drawbacks.
+ It is possible that you could make implicit changes to your
+ code that the checksum calculations do not take into
+ account.
+ These implicit changes affect a task's output but do not
+ trigger the shared state code into rebuilding a recipe.
+ Consider an example during which a tool changes its output.
+ Assume that the output of rpmdeps
+ changes.
+ The result of the change should be that all the
+ package and
+ package_write_rpm shared state cache
+ items become invalid.
+ However, because the change to the output is
+ external to the code and therefore implicit,
+ the associated shared state cache items do not become
+ invalidated.
+ In this case, the build process uses the cached items
+ rather than running the task again.
+ Obviously, these types of implicit changes can cause
+ problems.
+
+ To avoid these problems during the build, you need to + understand the effects of any changes you make. + Realize that changes you make directly to a function + are automatically factored into the checksum calculation. + Thus, these explicit changes invalidate the associated + area of shared state cache. + However, you need to be aware of any implicit changes that + are not obvious changes to the code and could affect + the output of a given task. +
+ When you identify an implicit change, you can easily
+ take steps to invalidate the cache and force the tasks
+ to run.
+ The steps you can take are as simple as changing a
+ function's comments in the source code.
+ For example, to invalidate package shared state files,
+ change the comment statements of
+ do_package
+ or the comments of one of the functions it calls.
+ Even though the change is purely cosmetic, it causes the
+ checksum to be recalculated and forces the OpenEmbedded
+ build system to run the task again.
+
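+ As a contrived illustration (a hypothetical recipe fragment), even
+ editing nothing but a comment inside a task function changes that
+ function's checksum and therefore invalidates its shared state:
+
+ do_install() {
+     # Tweak this comment to force the task (and its sstate) to be regenerated
+     install -d ${D}${bindir}
+     install -m 0755 myapp ${D}${bindir}
+ }
+
+ The same idea applies to do_package: changing
+ a comment in the function, or in one of the functions it calls,
+ recalculates the checksum and causes the task to run again.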
+
+ The OpenEmbedded build system automatically adds common types of
+ runtime dependencies between packages, which means that you do not
+ need to explicitly declare the packages using
+ RDEPENDS.
+ Three automatic mechanisms exist (shlibdeps,
+ pcdeps, and depchains)
+ that handle shared libraries, package configuration (pkg-config)
+ modules, and -dev and
+ -dbg packages, respectively.
+ For other types of runtime dependencies, you must manually declare
+ the dependencies.
+
+ shlibdeps:
+ During the
+ do_package
+ task of each recipe, all shared libraries installed by the
+ recipe are located.
+ For each shared library, the package that contains the
+ shared library is registered as providing the shared
+ library.
+ More specifically, the package is registered as providing
+ the
+ soname
+ of the library.
+ The resulting shared-library-to-package mapping
+ is saved globally in
+ PKGDATA_DIR
+ by the
+ do_packagedata
+ task.
Simultaneously, all executables and shared libraries
+ installed by the recipe are inspected to see what shared
+ libraries they link against.
+ For each shared library dependency that is found,
+ PKGDATA_DIR is queried to
+ see if some package (likely from a different recipe)
+ contains the shared library.
+ If such a package is found, a runtime dependency is added
+ from the package that depends on the shared library to the
+ package that contains the library.
The automatically added runtime dependency also
+ includes a version restriction.
+ This version restriction specifies that at least the
+ current version of the package that provides the shared
+ library must be used, as if
+ "package (>= version)"
+ had been added to
+ RDEPENDS.
+ This forces an upgrade of the package containing the shared
+ library when installing the package that depends on the
+ library, if needed.
If you want to avoid a package being registered as
+ providing a particular shared library (e.g. because the library
+ is for internal use only), then add the library to
+ PRIVATE_LIBS
+ inside the package's recipe.
+
+ pcdeps:
+ During the
+ do_package
+ task of each recipe, all pkg-config modules
+ (*.pc files) installed by the recipe
+ are located.
+ For each module, the package that contains the module is
+ registered as providing the module.
+ The resulting module-to-package mapping is saved globally in
+ PKGDATA_DIR
+ by the
+ do_packagedata
+ task.
Simultaneously, all pkg-config modules installed by
+ the recipe are inspected to see what other pkg-config
+ modules they depend on.
+ A module is seen as depending on another module if it
+ contains a "Requires:" line that specifies the other module.
+ For each module dependency,
+ PKGDATA_DIR is queried to see if some
+ package contains the module.
+ If such a package is found, a runtime dependency is added
+ from the package that depends on the module to the package
+ that contains the module.
+
+ The pcdeps mechanism most often
+ infers dependencies between -dev
+ packages.
+ +
+ depchains:
+ If a package foo depends on a package
+ bar, then foo-dev
+ and foo-dbg are also made to depend on
+ bar-dev and
+ bar-dbg, respectively.
+ Taking the -dev packages as an
+ example, the bar-dev package might
+ provide headers and shared library symlinks needed by
+ foo-dev, which shows the need
+ for a dependency between the packages.
The dependencies added by
+ depchains are in the form of
+ RRECOMMENDS.
+
foo-dev also has an
+ RDEPENDS-style dependency on
+ foo, because the default value of
+ RDEPENDS_${PN}-dev (set in
+ bitbake.conf) includes
+ "${PN}".
+ To ensure that the dependency chain is never broken,
+ -dev and -dbg
+ packages are always generated by default, even if the
+ packages turn out to be empty.
+ See the
+ ALLOW_EMPTY
+ variable for more information.
+
+
+ The do_package task depends on the
+ do_packagedata
+ task of each recipe in
+ DEPENDS
+ through use of a
+ [deptask]
+ declaration, which guarantees that the required
+ shared-library/module-to-package mapping information will be available
+ when needed as long as DEPENDS has been
+ correctly set.
+
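+ For reference, here is a minimal sketch of what such a declaration
+ looks like in a class or recipe:
+
+ do_package[deptask] = "do_packagedata"
+
+ This flag tells BitBake that, for every recipe listed in
+ DEPENDS, the
+ do_packagedata task must complete before
+ this recipe's do_package task runs.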
+ Some tasks are easier to implement when allowed to perform certain
+ operations that are normally reserved for the root user (e.g.
+ do_install,
+ do_package_write*,
+ do_rootfs,
+ and
+ do_image*).
+ For example, the do_install task benefits
+ from being able to set the UID and GID of installed files to
+ arbitrary values.
+
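+ For example (a hypothetical recipe fragment), because
+ do_install runs in the fake root
+ environment, a recipe can set ownership on the files it installs and
+ have that ownership recorded in the resulting package:
+
+ do_install_append() {
+     install -d ${D}${localstatedir}/lib/myapp
+     # Recorded by the fake root environment, not applied to the build host
+     chown -R daemon:daemon ${D}${localstatedir}/lib/myapp
+ }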
+ One approach to allowing tasks to perform root-only operations + would be to require BitBake to run as root. + However, this method is cumbersome and has security issues. + The approach that is actually used is to run tasks that benefit + from root privileges in a "fake" root environment. + Within this environment, the task and its child processes believe + that they are running as the root user, and see an internally + consistent view of the filesystem. + As long as generating the final output (e.g. a package or an image) + does not require root privileges, the fact that some earlier + steps ran in a fake root environment does not cause problems. +
+ The capability to run tasks in a fake root environment is known as + "fakeroot", + which is derived from the BitBake keyword/variable + flag that requests a fake root environment for a task. +
+ In the OpenEmbedded build system, the program that implements
+ fakeroot is known as Pseudo.
+ Pseudo overrides system calls by using the environment variable
+ LD_PRELOAD, which results in the illusion
+ of running as root.
+ To keep track of "fake" file ownership and permissions resulting
+ from operations that require root permissions, Pseudo uses
+ an SQLite 3 database.
+ This database is stored in
+ ${WORKDIR}/pseudo/files.db
+ for individual recipes.
+ Storing the database in a file as opposed to in memory
+ gives persistence between tasks and builds, which is not
+ accomplished using fakeroot.
+
+ If you add your own task that manipulates the same files or
+ directories as a fakeroot task, then that task also needs the
+ fakeroot keyword as well as a dependency on
+ virtual/fakeroot-native:do_populate_sysroot,
+ giving the following:
+
+ fakeroot do_mytask () {
+ ...
+ }
+ do_mytask[depends] += "virtual/fakeroot-native:do_populate_sysroot"
+
+ For more information, see the
+ FAKEROOT*
+ variables in the BitBake User Manual.
+ You can also reference the
+ "Pseudo"
+ and
+ "Why Not Fakeroot?"
+ articles for background information on Pseudo.
+
+ Wayland + is a computer display server protocol that + provides a method for compositing window managers to communicate + directly with applications and video hardware and expects them to + communicate with input hardware using other libraries. + Using Wayland with supporting targets can result in better control + over graphics frame rendering than an application might otherwise + achieve. +
+ The Yocto Project provides the Wayland protocol libraries and the + reference + Weston + compositor as part of its release. + This section describes what you need to do to implement Wayland and + use the compositor when building an image for a supporting target. +
+ The Wayland protocol libraries and the reference Weston
+ compositor ship as integrated packages in the
+ meta layer of the
+ Source Directory.
+ Specifically, you can find the recipes that build both Wayland
+ and Weston at
+ meta/recipes-graphics/wayland.
+
+ You can build both the Wayland and Weston packages for use only + with targets that accept the + Mesa 3D and Direct Rendering Infrastructure, + which is also known as Mesa DRI. + This implies that you cannot build and use the packages if your + target uses, for example, the + Intel® Embedded Media + and Graphics Driver + (Intel® EMGD) that + overrides Mesa DRI. +
+
+ To enable Wayland, you need to enable it to be built and enable + it to be included in the image. +
+ To cause Mesa to build the wayland-egl
+ platform and Weston to build Wayland with Kernel Mode
+ Setting
+ (KMS)
+ support, include the "wayland" flag in the
+ DISTRO_FEATURES
+ statement in your local.conf file:
+
+ DISTRO_FEATURES_append = " wayland" +
+
+
+ To install the Wayland feature into an image, you must
+ include the following
+ CORE_IMAGE_EXTRA_INSTALL
+ statement in your local.conf file:
+
+ CORE_IMAGE_EXTRA_INSTALL += "wayland weston" +
+
+ To run Weston inside X11, enabling it as described earlier and + building a Sato image is sufficient. + If you are running your image under Sato, a Weston Launcher + appears in the "Utility" category. +
+ Alternatively, you can run Weston through the command-line
+ interpreter (CLI), which is better suited for development work.
+ To run Weston under the CLI, you need to do the following after
+ your image is built:
+ Run these commands to export
+ XDG_RUNTIME_DIR:
+
+ mkdir -p /tmp/$USER-weston
+ chmod 0700 /tmp/$USER-weston
+ export XDG_RUNTIME_DIR=/tmp/$USER-weston
+
+ Launch Weston in the shell: +
+ weston +
+
+ This section describes the mechanism by which the OpenEmbedded + build system tracks changes to licensing text. + The section also describes how to enable commercially licensed + recipes, which by default are disabled. +
+ For information that can help you maintain compliance with + various open source licensing during the lifecycle of the product, + see the + "Maintaining Open Source License Compliance During Your Project's Lifecycle" + section in the Yocto Project Development Tasks Manual. +
+ The license of an upstream project might change in the future.
+ In order to prevent these changes going unnoticed, the
+ LIC_FILES_CHKSUM
+ variable tracks changes to the license text. The checksums are
+ validated at the end of the configure step, and if the
+ checksums do not match, the build will fail.
+
+ Specifying the LIC_FILES_CHKSUM Variable
+ The LIC_FILES_CHKSUM
+ variable contains checksums of the license text in the
+ source code for the recipe.
+ Following is an example of how to specify
+ LIC_FILES_CHKSUM:
+
+ LIC_FILES_CHKSUM = "file://COPYING;md5=xxxx \ + file://licfile1.txt;beginline=5;endline=29;md5=yyyy \ + file://licfile2.txt;endline=50;md5=zzzz \ + ..." +
+
+ When using "beginline" and "endline", realize
+ that line numbering begins with one and not
+ zero.
+ Also, both endpoints are inclusive (i.e.
+ lines five through and including 29 in the
+ previous example for
+ licfile1.txt).
+
+ When a license check fails, the selected license + text is included as part of the QA message. + Using this output, you can determine the exact + start and finish for the needed license text. +
+
+ The build system uses the
+ S
+ variable as the default directory when searching files
+ listed in LIC_FILES_CHKSUM.
+ The previous example employs the default directory.
+
+ Consider this next example: +
+ LIC_FILES_CHKSUM = "file://src/ls.c;beginline=5;endline=16;\
+ md5=bb14ed3c4cda583abc85401304b5cd4e"
+ LIC_FILES_CHKSUM = "file://${WORKDIR}/license.html;md5=5c94767cedb5d6987c902ac850ded2c6"
+ +
+ The first line locates a file in
+ ${S}/src/ls.c and isolates lines five
+ through 16 as license text.
+ The second line refers to a file in
+ WORKDIR.
+
+ Note that the LIC_FILES_CHKSUM variable is
+ mandatory for all recipes, unless the
+ LICENSE variable is set to "CLOSED".
+
+ As mentioned in the previous section, the
+ LIC_FILES_CHKSUM variable lists all
+ the important files that contain the license text for the
+ source code.
+ It is possible to specify a checksum for an entire file,
+ or a specific section of a file (specified by beginning and
+ ending line numbers with the "beginline" and "endline"
+ parameters, respectively).
+ The latter is useful for source files with a license
+ notice header, README documents, and so forth.
+ If you do not use the "beginline" parameter, then it is
+ assumed that the text begins on the first line of the file.
+ Similarly, if you do not use the "endline" parameter,
+ it is assumed that the license text ends with the last
+ line of the file.
+
+ The "md5" parameter stores the md5 checksum of the license + text. + If the license text changes in any way as compared to + this parameter then a mismatch occurs. + This mismatch triggers a build failure and notifies + the developer. + Notification allows the developer to review and address + the license text changes. + Also note that if a mismatch occurs during the build, + the correct md5 checksum is placed in the build log and + can be easily copied to the recipe. +
+ There is no limit to how many files you can specify using
+ the LIC_FILES_CHKSUM variable.
+ Generally, however, every project requires a few
+ specifications for license tracking.
+ Many projects have a "COPYING" file that stores the
+ license information for all the source code files.
+ This practice allows you to just track the "COPYING"
+ file as long as it is kept up to date.
+
+ If you specify an empty or invalid "md5" + parameter, BitBake returns an md5 mis-match + error and displays the correct "md5" parameter + value during the build. + The correct parameter is also captured in + the build log. +
+ If the whole file contains only license text, + you do not need to use the "beginline" and + "endline" parameters. +
+
+ By default, the OpenEmbedded build system disables
+ components that have commercial or other special licensing
+ requirements.
+ Such requirements are defined on a
+ recipe-by-recipe basis through the
+ LICENSE_FLAGS
+ variable definition in the affected recipe.
+ For instance, the
+ poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly
+ recipe contains the following statement:
+
+ LICENSE_FLAGS = "commercial" +
+ Here is a slightly more complicated example that contains both + an explicit recipe name and version (after variable expansion): +
+ LICENSE_FLAGS = "license_${PN}_${PV}"
+
+ In order for a component restricted by a
+ LICENSE_FLAGS definition to be enabled and
+ included in an image, it needs to have a matching entry in the
+ global
+ LICENSE_FLAGS_WHITELIST
+ variable, which is a variable typically defined in your
+ local.conf file.
+ For example, to enable the
+ poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly
+ package, you could add either the string
+ "commercial_gst-plugins-ugly" or the more general string
+ "commercial" to LICENSE_FLAGS_WHITELIST.
+ See the
+ "License Flag Matching"
+ section for a full
+ explanation of how LICENSE_FLAGS matching
+ works.
+ Here is the example:
+
+ LICENSE_FLAGS_WHITELIST = "commercial_gst-plugins-ugly" +
+ Likewise, to additionally enable the package built from the
+ recipe containing
+ LICENSE_FLAGS = "license_${PN}_${PV}",
+ and assuming that the actual recipe name was
+ emgd_1.10.bb, the following string would
+ enable that package as well as the original
+ gst-plugins-ugly package:
+
+ LICENSE_FLAGS_WHITELIST = "commercial_gst-plugins-ugly license_emgd_1.10" +
+ As a convenience, you do not need to specify the complete + license string in the whitelist for every package. + You can use an abbreviated form, which consists + of just the first portion or portions of the license + string before the initial underscore character or characters. + A partial string will match any license that contains the + given string as the first portion of its license. + For example, the following whitelist string will also match + both of the packages previously mentioned as well as any other + packages that have licenses starting with "commercial" or + "license". +
+ LICENSE_FLAGS_WHITELIST = "commercial license" +
+
+ License flag matching allows you to control what recipes
+ the OpenEmbedded build system includes in the build.
+ Fundamentally, the build system attempts to match
+ LICENSE_FLAGS
+ strings found in recipes against
+ LICENSE_FLAGS_WHITELIST
+ strings found in the whitelist.
+ A match causes the build system to include a recipe in the
+ build, while failure to find a match causes the build
+ system to exclude a recipe.
+
+ In general, license flag matching is simple. + However, understanding some concepts will help you + correctly and effectively use matching. +
+ Before a flag
+ defined by a particular recipe is tested against the
+ contents of the whitelist, the expanded string
+ _${PN} is appended to the flag.
+ This expansion makes each
+ LICENSE_FLAGS value recipe-specific.
+ After expansion, the string is then matched against the
+ whitelist.
+ Thus, specifying
+ LICENSE_FLAGS = "commercial"
+ in recipe "foo", for example, results in the string
+ "commercial_foo".
+ And, to create a match, that string must appear in the
+ whitelist.
+
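+ As a short sketch (assuming a hypothetical recipe named
+ foo), the two pieces fit together like this:
+
+ # In foo_1.0.bb
+ LICENSE_FLAGS = "commercial"
+
+ # In local.conf -- matches the expanded "commercial_foo" exactly
+ LICENSE_FLAGS_WHITELIST = "commercial_foo"
+
+ Alternatively, whitelisting just "commercial" would match this recipe
+ along with any other recipe whose expanded flag begins with
+ "commercial".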
+ Judicious use of the LICENSE_FLAGS
+ strings and the contents of the
+ LICENSE_FLAGS_WHITELIST variable
+ allows you a lot of flexibility for including or excluding
+ recipes based on licensing.
+ For example, you can broaden the matching capabilities by
+ using license flags string subsets in the whitelist.
+
+ When using a string subset, be sure to use the part of the
+ expanded string that precedes the appended underscore
+ character (e.g. usethispart_1.3,
+ usethispart_1.4, and so forth).
+
+ For example, simply specifying the string "commercial" in
+ the whitelist matches any expanded
+ LICENSE_FLAGS definition that starts
+ with the string "commercial" such as "commercial_foo" and
+ "commercial_bar", which are the strings the build system
+ automatically generates for hypothetical recipes named
+ "foo" and "bar" assuming those recipes simply specify the
+ following:
+
+ LICENSE_FLAGS = "commercial" +
+ Thus, you can choose to exhaustively + enumerate each license flag in the whitelist and + allow only specific recipes into the image, or + you can use a string subset that causes a broader range of + matches to allow a range of recipes into the image. +
+ This scheme works even if the
+ LICENSE_FLAGS string already
+ has _${PN} appended.
+ For example, the build system turns the license flag
+ "commercial_1.2_foo" into "commercial_1.2_foo_foo" and
+ would match both the general "commercial" and the specific
+ "commercial_1.2_foo" strings found in the whitelist, as
+ expected.
+
+ Here are some other scenarios: +
+ You can specify a versioned string in the recipe + such as "commercial_foo_1.2" in a "foo" recipe. + The build system expands this string to + "commercial_foo_1.2_foo". + Combine this license flag with a whitelist that has + the string "commercial" and you match the flag + along with any other flag that starts with the + string "commercial". +
+ Under the same circumstances, you can use + "commercial_foo" in the whitelist and the build + system not only matches "commercial_foo_1.2" but + also matches any license flag with the string + "commercial_foo", regardless of the version. +
+ You can be very specific and use both the + package and version parts in the whitelist (e.g. + "commercial_foo_1.2") to specifically match a + versioned recipe. +
+
+ Other helpful variables related to commercial
+ license handling exist and are defined in the
+ poky/meta/conf/distro/include/default-distrovars.inc file:
+
+ COMMERCIAL_AUDIO_PLUGINS ?= ""
+ COMMERCIAL_VIDEO_PLUGINS ?= ""
+ If you want to enable these components, you can do so by
+ making sure you have statements similar to the following
+ in your local.conf configuration file:
+
+ COMMERCIAL_AUDIO_PLUGINS = "gst-plugins-ugly-mad \
+     gst-plugins-ugly-mpegaudioparse"
+ COMMERCIAL_VIDEO_PLUGINS = "gst-plugins-ugly-mpeg2dec \
+     gst-plugins-ugly-mpegstream gst-plugins-bad-mpegvideoparse"
+ LICENSE_FLAGS_WHITELIST = "commercial_gst-plugins-ugly commercial_gst-plugins-bad commercial_qmmp"
+ Of course, you could also create a matching whitelist
+ for those components using the more general "commercial"
+ in the whitelist, but that would also enable all the
+ other packages with
+ LICENSE_FLAGS
+ containing "commercial", which you may or may not want:
+
+ LICENSE_FLAGS_WHITELIST = "commercial" +
+
+ Specifying audio and video plug-ins as part of the
+ COMMERCIAL_AUDIO_PLUGINS and
+ COMMERCIAL_VIDEO_PLUGINS statements
+ (along with the enabling
+ LICENSE_FLAGS_WHITELIST) includes the
+ plug-ins or components into built images, thus adding
+ support for media formats or components.
+
+ x32 processor-specific Application Binary Interface + (x32 psABI) + is a native 32-bit processor-specific ABI for + Intel® 64 (x86-64) + architectures. + An ABI defines the calling conventions between functions in a + processing environment. + The interface determines what registers are used and what the sizes are + for various C data types. +
+ Some processing environments prefer using 32-bit applications even
+ when running on Intel 64-bit platforms.
+ Consider the i386 psABI, which is a very old 32-bit ABI for Intel
+ 64-bit platforms.
+ The i386 psABI does not provide efficient use and access of the
+ Intel 64-bit processor resources, leaving the system underutilized.
+ Now consider the x86_64 psABI.
+ This ABI is newer and uses 64 bits for data sizes and program
+ pointers.
+ The extra bits increase the footprint size of the programs and
+ libraries, and also increase the memory and file system size
+ requirements.
+ Executing under the x32 psABI enables user programs to utilize CPU
+ and system resources more efficiently while keeping the memory
+ footprint of the applications low.
+ Extra bits are used for registers but not for addressing mechanisms.
+ The Yocto Project supports the final specifications of x32 psABI + as follows: +
+ You can create packages and images in x32 psABI format on + x86_64 architecture targets. +
+ You can successfully build recipes with the x32 toolchain. +
+ You can create and boot
+ core-image-minimal and
+ core-image-sato images.
+
+ RPM Package Manager (RPM) support exists for x32 binaries. +
+ Support for large images exists. +
+
+ For steps on how to use x32 psABI, see the + "Using x32 psABI" + section in the Yocto Project Development Tasks Manual. +
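+ As a quick illustration (see the section referenced above for the
+ authoritative steps), enabling x32 typically amounts to settings
+ such as the following in your local.conf
+ file:
+
+ MACHINE = "qemux86-64"
+ DEFAULTTUNE = "x86-64-x32"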