Compare commits
55 Commits
master...pinky-3.1

| SHA1 |
|---|
| 0a6bec9469 |
| b89647279a |
| 7a833289f8 |
| 19da401a19 |
| 740db47160 |
| 39bb1e9235 |
| 7688bca5df |
| ba5e395dfa |
| 3e23549d0a |
| 23212b4c3d |
| 8f3e41b0f6 |
| f4a6877b1b |
| 402a5b2ab8 |
| 4b13072f18 |
| 1c332abfe2 |
| 31bdfe582e |
| d2c268aec8 |
| f73dc1bf2a |
| 961589f1a4 |
| 99e4fde8fc |
| abf40bb82c |
| a4dd5f68db |
| 7a7d1df42e |
| 5e32eb124c |
| 668c274e39 |
| f0ff94fe07 |
| 7fe3c30100 |
| 1bcb06beee |
| c2f46bd918 |
| 71e368844f |
| 6bf7f1479d |
| 2fbc6e8044 |
| 4d4c63f2db |
| bb3786088c |
| 7c1b438d40 |
| d1e8a463a3 |
| 67063df452 |
| f82f89d48a |
| 2ab930bf33 |
| 79414eecf7 |
| 4422694de0 |
| fa7434d685 |
| 007253df78 |
| 23220b52b1 |
| fbf6b9543c |
| f3b60f7bda |
| fcbca8870c |
| a634019532 |
| 4168c08285 |
| f026861423 |
| 602d17d8cb |
| a1868835fe |
| 2f6c30fc9a |
| 0f9a67f1cf |
| 7d846ee9bb |
.gitignore (vendored, 39 changed lines)
@@ -1,39 +0,0 @@
*.pyc
*.pyo
/*.patch
/.repo/
/build*/
pyshtables.py
pstage/
scripts/oe-git-proxy-socks
sources/
meta-*/
buildtools/
!meta-skeleton
!meta-selftest
hob-image-*.bb
*.swp
*.orig
*.rej
*~
!meta-poky
!meta-yocto
!meta-yocto-bsp
!meta-yocto-imported
/documentation/*/eclipse/
/documentation/*/*.html
/documentation/*/*.pdf
/documentation/*/*.tgz
/bitbake/doc/bitbake-user-manual/bitbake-user-manual.html
/bitbake/doc/bitbake-user-manual/bitbake-user-manual.pdf
/bitbake/doc/bitbake-user-manual/bitbake-user-manual.tgz
pull-*/
bitbake/lib/toaster/contrib/tts/backlog.txt
bitbake/lib/toaster/contrib/tts/log/*
bitbake/lib/toaster/contrib/tts/.cache/*
bitbake/lib/bb/tests/runqueue-tests/bitbake-cookerdaemon.log
_toaster_clones/
downloads/
sstate-cache/
toaster.sqlite
.vscode/
LICENSE (new file, 11 changed lines)
@@ -0,0 +1,11 @@
Different components of Poky are under different licenses (a mix of
MIT and GPLv2). Please see:

bitbake/COPYING (GPLv2)
meta/COPYING.MIT (MIT)
meta-extras/COPYING.MIT (MIT)

which cover the components in those subdirectories.

License information for any other files is either explicitly stated
or defaults to GPL version 2.
README (123 changed lines)
@@ -1,114 +1,15 @@
The poky repository master branch is no longer being updated.
Poky
====

You can either:
Poky platform builder is a combined cross build system and development
environment. It features support for building X11/Matchbox/GTK based
filesystem images for various embedded devices and boards. It also
supports cross-architecture application development using QEMU emulation
and a standalone toolchain and SDK with IDE integration.

a) switch to individual clones of bitbake, openembedded-core, meta-yocto and yocto-docs

   https://docs.yoctoproject.org/dev-manual/poky-manual-setup.html

b) use the new bitbake-setup

   https://docs.yoctoproject.org/bitbake/bitbake-user-manual/bitbake-user-manual-environment-setup.html

You can find more information in our documentation: https://docs.yoctoproject.org/

Note that "poky" the distro setting is still available in meta-yocto as
before and we continue to use and maintain that.

Long live Poky!



Some further information on the background of this change follows. The
details are taken from:
https://lists.openembedded.org/g/openembedded-architecture/message/2179

TLDR: People have complained about the combo-layer built poky
repository for years. It was meant to be a temporary thing, we now have
an alternative and I'm therefore doing what I promised I'd do. Change
is tough, things may break but this is the right point to at least try
it.

I'd like to note that:
* setting up builds with a separate oe-core and bitbake clone
  works as it always has done
* you can change your CI just to use those two repos instead of poky
* bitbake-setup isn't mandatory, it will just be what the yocto-docs
  presents to users
* we don't have to stop maintaining the poky repository,
  however nobody will test the new approach/code unless we do
* we are optionally exposing sstate mirrors in the new config
* we are also exposing config fragments to users
* poky as a DISTRO in meta-yocto remains

A bit more about the history and background for those who are
interested and then some FAQs:

Back around 2010 when we split up openembedded-classic and started
developing layers, we made the artificial "poky" repository construct
as a way to let people easily and quickly get started with the project
without cloning and managing multiple repositories. Layers were a new
idea with lots of rough edges. kas didn't exist, I think repo was only
just created and it was a different world. For us, it meant hacking up
a quick tool, "combo-layer", and it was really a temporary solution to
fill a gap and it was at least as functional as repo of the era. It was
assumed we'd work it out properly in the future.

At developer meetings there are inevitable questions about why
poky/combo-layer exist and few seem to actually like/support it. There
are continual questions about why a tool doesn't exist or why we don't
adopt one too.

15 years later, a bit longer than we might have thought, we are finally
in a position where there may be a viable way forward to change.

It has taken us a bit of time to get to this point. I wrote the
original description of something like bitbake-setup about 7-8 years
ago. I shared it privately with a few people; the review feedback
stopped me pushing it further as I simply did not have the bandwidth.
We were fortunate to get funding from the Sovereign Tech Fund to start
the work and whilst I'd probably prefer to avoid the issue, the time
had come to start. Since then, Alexander Kanavin has put a lot of work
into getting it to the point where it would be possible to switch. A
huge thanks to him for getting this to the current point.

Why not use kas/submodules/repo?

This topic has been discussed in depth several times. Very roughly,
these are either difficult to focus on our use cases or have specific
designs and intent which we as a project would struggle to influence.
We are taking significant influence from some of them but also trying
to build something where we can benefit from tight direct integration
with bitbake and the metadata. For example, fragment support is generic
and hopefully something other approaches can also benefit from. We want
to provide something we can switch the project's docs and autobuilder to,
which we can control and develop as we need it to. We are not aiming to
force anyone to switch; you can use whichever tool you want.

Can we not keep poky [repository master branch] around?

If we do that, nobody will use the new tooling and it will be a
disaster as issues won't get resolved. We need our CI to use the same
thing we promote to our new and experienced users. We need this new
tooling to be usable by our experienced developers too. We have tried
for months to get people to try it and they simply don't. Making a
release with it won't change much either. It needs people using it and
for that, poky has to stop being updated.

What happens to poky [repository]?

The LTS branches continue their lifetime as planned. For master, I'll
probably put a final commit in changing to just a README which points
people at the bitbake-setup changes and explains what happened.

What are the timelines? Why now?

If we're going to make a change, we really want this in the next LTS
release, which is April 2026. We only have one release before that,
which is now, October 2025. We therefore need to switch now, which
then gives us time to update docs, fix issues that arise and so on,
and have it in a release cycle. Whilst it means delaying the Oct 2025
release slightly, that is the right thing to do in the context of the
bigger picture.
Poky has an extensive handbook, the source of which is contained in
the handbook directory. For compiled HTML or pdf versions of this,
see the Poky website http://pokylinux.org.

Additional information on the specifics of hardware that Poky supports
is available in README.hardware.
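Option (a) above amounts to a handful of separate clone commands. The sketch below is only illustrative: the repository URLs are assumptions about the usual upstream git hosting (verify them against the poky-manual-setup documentation linked above), and the commands are echoed rather than executed so the sketch has no side effects.

```shell
# Sketch of option (a): individual clones instead of the combined poky repo.
# NOTE: URLs are assumed upstream locations, not taken from this README;
# check the poky-manual-setup documentation before relying on them.
repos="git://git.openembedded.org/bitbake
git://git.openembedded.org/openembedded-core
git://git.yoctoproject.org/meta-yocto
git://git.yoctoproject.org/yocto-docs"

# Echo the commands instead of running them so the sketch is side-effect free.
for repo in $repos; do
    echo "git clone $repo"
done
```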
||||
436
README.hardware
Normal file
@@ -0,0 +1,436 @@
|
||||
Poky Hardware Reference Guide
|
||||
=============================
|
||||
|
||||
This file gives details about using Poky with different hardware reference
|
||||
boards and consumer devices. A full list of target machines can be found by
|
||||
looking in the meta/conf/machine/ directory. If in doubt about using Poky with
|
||||
your hardware, consult the documentation for your board/device. To discuss
|
||||
support for further hardware reference boards/devices please contact OpenedHand.
|
||||
|
||||
QEMU Emulation Images (qemuarm and qemux86)
|
||||
===========================================
|
||||
|
||||
To simplify development Poky supports building images to work with the QEMU
|
||||
emulator in system emulation mode. Two architectures are currently supported,
|
||||
ARM (via qemuarm) and x86 (via qemux86). Use of the QEMU images is covered
|
||||
in the Poky Handbook.
|
||||
|
||||
Hardware Reference Boards
|
||||
=========================
|
||||
|
||||
The following boards are supported by Poky:
|
||||
|
||||
* Compulab CM-X270 (cm-x270)
|
||||
* Compulab EM-X270 (em-x270)
|
||||
* FreeScale iMX31ADS (mx31ads)
|
||||
* Marvell PXA3xx Zylonite (zylonite)
|
||||
* Logic iMX31 Lite Kit (mx31litekit)
|
||||
* Phytec phyCORE-iMX31 (mx31phy)
|
||||
|
||||
For more information see board's section below. The Poky MACHINE setting
|
||||
corresponding to the board is given in brackets.
|
||||
|
||||
Consumer Devices
|
||||
================
|
||||
|
||||
The following consumer devices are supported by Poky:
|
||||
|
||||
* FIC Neo1973 GTA01 smartphone (fic-gta01)
|
||||
* HTC Universal (htcuniversal)
|
||||
* Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)
|
||||
* Sharp Zaurus SL-C7x0 series (c7x0)
|
||||
* Sharp Zaurus SL-C1000 (akita)
|
||||
* Sharp Zaurus SL-C3x00 series (spitz)
|
||||
|
||||
For more information see board's section below. The Poky MACHINE setting
|
||||
corresponding to the board is given in brackets.
|
||||
|
||||
Poky Boot CD (bootcdx86)
|
||||
========================
|
||||
|
||||
The Poky boot CD iso images are designed as a demonstration of the Poky
|
||||
environment and to show the versatile image formats Poky can generate. It will
|
||||
run on Pentium2 or greater PC style computers. The iso image can be
|
||||
burnt to CD and then booted from.
|
||||
|
||||
|
||||
Hardware Reference Boards
|
||||
=========================
|
||||
|
||||
Compulab CM-X270 (cm-x270)
==========================

The bootloader on this board doesn't support writing jffs2 images directly to
NAND and normally uses a proprietary kernel flash driver. To allow the use of
jffs2 images, a two stage updating procedure is needed. Firstly, an initramfs
is booted which contains mtd utilities and this is then used to write the main
filesystem.

It is assumed the board is connected to a network where a TFTP server is
available and that a serial terminal is available to communicate with the
bootloader (38400, 8N1). If a DHCP server is available the device will use it
to obtain an IP address. If not, run:

ARMmon > setip dhcp off
ARMmon > setip ip 192.168.1.203
ARMmon > setip mask 255.255.255.0

To reflash the kernel:

ARMmon > download kernel tftp zimage 192.168.1.202
ARMmon > flash kernel

where zimage is the name of the kernel on the TFTP server and its IP address is
192.168.1.202. The names of the files must be all lowercase.

To reflash the initrd/initramfs:

ARMmon > download ramdisk tftp diskimage 192.168.1.202
ARMmon > flash ramdisk

where diskimage is the name of the initramfs image (a cpio.gz file).

To boot the initramfs:

ARMmon > ramdisk on
ARMmon > bootos "console=ttyS0,38400 rdinit=/sbin/init"

To reflash the main image, log in to the system as user "root", then run:

# ifconfig eth0 192.168.1.203
# tftp -g -r mainimage 192.168.1.202
# flash_eraseall /dev/mtd1
# nandwrite /dev/mtd1 mainimage

which configures the network interface with the IP address 192.168.1.203,
downloads the "mainimage" file from the TFTP server at 192.168.1.202, erases
the flash and then writes the new image to the flash.

The main image can then be booted with:

ARMmon > bootos "console=ttyS0,38400 root=/dev/mtdblock1 rootfstype=jffs2"

Note that the initramfs image is built by poky in a slightly different mode to
normal since it uses uclibc. To generate this use a command like:

IMAGE_FSTYPES=cpio.gz MACHINE=cm-x270 POKYLIBC=uclibc bitbake poky-image-minimal-mtdutils

Compulab EM-X270 (em-x270)
==========================

Fetch the "Linux - kernel and run-time image (Angstrom)" ZIP file from the
Compulab website. Inside the images directory of this ZIP file is another ZIP
file called 'LiveDisk.zip'. Extract this over a cleanly formatted vfat USB flash
drive. Replace the 'em_x270.img' file with the 'updater-em-x270.ext2' file.

Insert this USB disk into the supplied adapter and connect this to the
board. Whilst holding down the suspend button, press the reset button. The
board will now boot off the USB key and into a version of Angstrom. On the
desktop is an icon labelled "Updater". Run this program to launch the updater
that will flash the Poky kernel and rootfs to the board.

FreeScale iMX31ADS (mx31ads)
============================

The correct serial port is the top-most female connector to the right of the
ethernet socket.

For uploading data to RedBoot we are going to use TFTP. In this example we
assume that the TFTP server is on 192.168.9.1 and the board is on 192.168.9.2.

To set the IP address, run:

ip_address -l 192.168.9.2/24 -h 192.168.9.1

To download a kernel called "zimage" from the TFTP server, run:

load -r -b 0x100000 zimage

To write the kernel to flash run:

fis create kernel

To download a rootfs jffs2 image "rootfs" from the TFTP server, run:

load -r -b 0x100000 rootfs

To write the root filesystem to flash run:

fis create root

To load and boot a kernel and rootfs from flash:

fis load kernel
exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/mtdblock2 rootfstype=jffs2 init=linuxrc ip=none"

To load and boot a kernel from a TFTP server with the rootfs over NFS:

load -r -b 0x100000 zimage
exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/nfs nfsroot=192.168.9.1:/mnt/nfsmx31 rw ip=192.168.9.2::192.168.9.1:255.255.255.0"

The instructions above are for using the (default) NOR flash on the board;
there is also 128M of NAND flash. It is possible to install Poky to the NAND
flash, which gives more space for the rootfs, and instructions for using this
are given below. To switch to the NAND flash:

factive NAND

This will then restart RedBoot using the NAND rather than the NOR. If you
have not used the NAND before then it is unlikely that there will be a
partition table yet. You can get the list of partitions with 'fis list'.

If this shows no partitions then you can create them with:

fis init

The output of 'fis list' should now show:

Name              FLASH addr  Mem addr    Length      Entry point
RedBoot           0xE0000000  0xE0000000  0x00040000  0x00000000
FIS directory     0xE7FF4000  0xE7FF4000  0x00003000  0x00000000
RedBoot config    0xE7FF7000  0xE7FF7000  0x00001000  0x00000000

Partitions for the kernel and rootfs need to be created:

fis create -l 0x1A0000 -e 0x00100000 kernel
fis create -l 0x5000000 -e 0x00100000 root

You may now use the instructions above for flashing. However it is important
to note that the erase block size for the NAND is different to the NOR, so the
JFFS erase size will need to be changed to 0x4000. Standard images are built
for NOR and you will need to build custom images for NAND.

You will also need to update the kernel command line to use the correct root
filesystem. This should be '/dev/mtdblock7' if you adhere to the partitioning
scheme shown above. If this fails then you can double-check against the output
from the kernel when it evaluates the available mtd partitions.

Marvell PXA3xx Zylonite (zylonite)
==================================

These instructions assume the Zylonite is connected to a machine running a TFTP
server at address 192.168.123.5 and that a serial link (38400 8N1) is available
to access the blob bootloader. The kernel is on the TFTP server as
"zylonite-kernel", the root filesystem jffs2 file is "zylonite-rootfs" and
the images are to be saved in NAND flash.

The following commands set up blob:

blob> setip client 192.168.123.4
blob> setip server 192.168.123.5

To flash the kernel:

blob> tftp zylonite-kernel
blob> nandwrite -j 0x80800000 0x60000 0x200000

To flash the rootfs:

blob> tftp zylonite-rootfs
blob> nanderase -j 0x260000 0x5000000
blob> nandwrite -j 0x80800000 0x260000 <length>

(where <length> is the rootfs size which will be printed by the tftp step)

To boot the board:

blob> nkernel
blob> boot

Logic iMX31 Lite Kit (mx31litekit)
==================================

The easiest method to boot this board is to take an MMC/SD card and format
the first partition as ext2, then extract the poky image onto this as root.
Assuming the board is network connected, a TFTP server is available at
192.168.1.33 and a serial terminal is available (115200 8N1), the following
commands will boot a kernel called "mx31kern" from the TFTP server:

losh> ifconfig sm0 192.168.1.203 255.255.255.0 192.168.1.33
losh> load raw 0x80100000 0x200000 /tftp/192.168.1.33:mx31kern
losh> exec 0x80100000 -

Phytec phyCORE-iMX31 (mx31phy)
==============================

Support for this board is currently being developed. Experimental jffs2
images and a suitable kernel are available and are known to work with the
board.


Consumer Devices
================

FIC Neo1973 GTA01 smartphone (fic-gta01)
========================================

To install Poky on a GTA01 smartphone you will need the "dfu-util" tool,
which you can build with the "bitbake dfu-util-native" command.

Flashing requires these steps:

1. Power down the device.
2. Connect the device to the host machine via USB.
3. Hold the AUX key and press the Power key. A boot menu should appear
   on screen.
4. Run "dfu-util -l" to check if the phone is visible on the USB bus.
   The output should look like this:

   dfu-util - (C) 2007 by OpenMoko Inc.
   This program is Free Software and has ABSOLUTELY NO WARRANTY

   Found Runtime: [0x1457:0x5119] devnum=19, cfg=0, intf=2, alt=0, name="USB Device Firmware Upgrade"

5. Flash the kernel with "dfu-util -a kernel -D uImage-2.6.21.6-moko11-r2-fic-gta01.bin"
6. Flash the rootfs with "dfu-util -a rootfs -D <image>", where <image> is the
   jffs2 image file to use as the root filesystem
   (e.g. ./tmp/deploy/images/poky-image-sato-fic-gta01.jffs2)

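The numbered steps above reduce to three dfu-util invocations. The sketch below only prints the sequence (a real run needs the phone attached in DFU mode); the filenames are the examples from the steps, not required names.

```shell
# Dry-run of the GTA01 flash sequence: list devices, then kernel, then rootfs.
# Filenames are the examples used in the steps above.
KERNEL=uImage-2.6.21.6-moko11-r2-fic-gta01.bin
ROOTFS=./tmp/deploy/images/poky-image-sato-fic-gta01.jffs2

SEQUENCE="dfu-util -l
dfu-util -a kernel -D $KERNEL
dfu-util -a rootfs -D $ROOTFS"

# Print rather than execute; real flashing requires the device on the USB bus.
echo "$SEQUENCE"
```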
HTC Universal (htcuniversal)
============================

Note: HTC Universal support is highly experimental.

On the HTC Universal, entirely replacing the Windows installation is not
supported; instead Poky is booted from an MMC/SD card from Windows. Once Poky
has booted, Windows is no longer in memory or active, but when power is removed
the user will be returned to Windows and will need to return to Linux from
there.

Once an MMC/SD card is available it is suggested it be split into two
partitions: one for a program called HaRET, which lets you boot Linux from
within Windows, and the second for the rootfs. The HaRET partition should be
the first partition on the card and be vfat formatted. It doesn't need to be
large, just enough for HaRET and a kernel (say 5MB max). The rootfs should be
ext2 and is usually the second partition. The first partition should be vfat
so Windows recognises it; if it doesn't, it has been known to reformat cards.

On the first partition you need three files:

* a HaRET binary (version 0.5.1 works well and a working version
  should be part of the last Poky release)
* a kernel renamed to "zImage"
* a default.txt which contains:

set kernel "zImage"
set mtype "855"
set cmdline "root=/dev/mmcblk0p2 rw console=ttyS0,115200n8 console=tty0 rootdelay=5 fbcon=rotate:1"
boot2

On the second partition the root file system is extracted as root. A different
partition layout or other kernel options can be changed in the default.txt file.

When inserted into the device, Windows should see the card and let you browse
its contents using File Explorer. Running the HaRET binary will present a dialog
box (maybe after messages warning about running unsigned binaries) where you
select OK and you should then see Poky boot. Kernel messages can be seen by
adding psplash=false to the kernel commandline.

Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)
============================================================

Note: Nokia tablet support is highly experimental.

The Nokia internet tablet devices are OMAP based tablet formfactor devices
with large screens (800x480), wifi and touchscreen.

To flash images to these devices you need the "flasher" utility, which can be
downloaded from http://tablets-dev.nokia.com/d3.php?f=flasher-3.0. This
utility needs to be run as root and the usb filesystem needs to be mounted,
although most distributions will have done this for you. Once you have this,
follow these steps:

1. Power down the device.
2. Connect the device to the host machine via USB
   (connecting power to the device doesn't hurt either).
3. Run "flasher -i"
4. Power on the device.
5. The program should give an indication it's found
   a tablet device. If not, recheck the cables, make sure you're
   root and usbfs/usbdevfs is mounted.
6. Run "flasher -r <image> -k <kernel> -f", where <image> is the
   jffs2 image file to use as the root filesystem
   (e.g. ./tmp/deploy/images/poky-image-sato-nokia800.jffs2)
   and <kernel> is the kernel to use
   (e.g. ./tmp/deploy/images/zImage-nokia800.bin).
7. Run "flasher -R" to reboot the device.
8. The device should boot into Poky.

The nokia800 images and kernel will run on both the N800 and N810.

Sharp Zaurus SL-C7x0 series (c7x0)
==================================

The Sharp Zaurus c7x0 series (SL-C700, SL-C750, SL-C760, SL-C860, SL-7500)
are PXA25x based handheld PDAs with VGA screens. To install Poky images on
these devices follow these steps:

1. Obtain an SD/MMC or CF card with a vfat or ext2 filesystem.
2. Copy a jffs2 image file (e.g. poky-image-sato-c7x0.jffs2) onto the
   card as "initrd.bin":

   $ cp ./tmp/deploy/images/poky-image-sato-c7x0.jffs2 /path/to/my-cf-card/initrd.bin

3. Copy a Linux kernel file (zImage-c7x0.bin) onto the card as
   "zImage.bin":

   $ cp ./tmp/deploy/images/zImage-c7x0.bin /path/to/my-cf-card/zImage.bin

4. Copy an updater script (updater.sh.c7x0) onto the card
   as "updater.sh":

   $ cp ./tmp/deploy/images/updater.sh.c7x0 /path/to/my-cf-card/updater.sh

5. Power down the Zaurus.
6. Hold the "OK" key and power on the device. An update menu should appear
   (in Japanese).
7. Choose "Update" (item 4).
8. The next screen will ask for the source; choose the appropriate
   card (CF or SD).
9. Make sure AC power is connected.
10. The next screen asks for confirmation; choose "Yes" (the left button).
11. The update process will start, flash the files on the card onto
    the device, and the device will then reboot into Poky.

Sharp Zaurus SL-C1000 (akita)
=============================

The Sharp Zaurus SL-C1000 is a PXA270 based device otherwise similar to the
c7x0. To install Poky images on this device follow the instructions for
the c7x0 but replace "c7x0" with "akita" where appropriate.

Sharp Zaurus SL-C3x00 series (spitz)
====================================

The Sharp Zaurus SL-C3x00 devices are PXA270 based devices similar
to akita but with an internal microdrive. The installation procedure
assumes a standard microdrive based device where the root (first)
partition has been enlarged to fit the image (at least 100MB,
400MB for the SDK).

The procedure is the same as for the c7x0 and akita models with the
following differences:

1. Instead of a jffs2 image you need to copy a compressed tarball of the
   root filesystem (e.g. poky-image-sato-spitz.tar.gz) onto the
   card as "hdimage1.tgz":

   $ cp ./tmp/deploy/images/poky-image-sato-spitz.tar.gz /path/to/my-cf-card/hdimage1.tgz

2. You additionally need to copy a special tar utility (gnu-tar) onto
   the card as "gnu-tar":

   $ cp ./tmp/deploy/images/gnu-tar /path/to/my-cf-card/gnu-tar

bitbake/AUTHORS (new file, 10 changed lines)
@@ -0,0 +1,10 @@
Tim Ansell <mithro@mithis.net>
Phil Blundell <pb@handhelds.org>
Seb Frankengul <seb@frankengul.org>
Holger Freyther <zecke@handhelds.org>
Marcin Juszkiewicz <hrw@hrw.one.pl>
Chris Larson <kergoth@handhelds.org>
Ulrich Luckas <luckas@musoft.de>
Mickey Lauer <mickey@Vanille.de>
Richard Purdie <rpurdie@rpsys.net>
Holger Schurig <holgerschurig@gmx.de>
bitbake/COPYING (new file, 339 changed lines)
@@ -0,0 +1,339 @@
                    GNU GENERAL PUBLIC LICENSE
                       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.

  We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.

  Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.

  Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.

  The precise terms and conditions for copying, distribution and
modification follow.

                    GNU GENERAL PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".

Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.

  1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.

You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.

  2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:

    a) You must cause the modified files to carry prominent notices
|
||||
stating that you changed the files and the date of any change.
|
||||
|
||||
b) You must cause any work that you distribute or publish, that in
|
||||
whole or in part contains or is derived from the Program or any
|
||||
part thereof, to be licensed as a whole at no charge to all third
|
||||
parties under the terms of this License.
|
||||
|
||||
c) If the modified program normally reads commands interactively
|
||||
when run, you must cause it, when started running for such
|
||||
interactive use in the most ordinary way, to print or display an
|
||||
announcement including an appropriate copyright notice and a
|
||||
notice that there is no warranty (or else, saying that you provide
|
||||
a warranty) and that users may redistribute the program under
|
||||
these conditions, and telling the user how to view a copy of this
|
||||
License. (Exception: if the Program itself is interactive but
|
||||
does not normally print such an announcement, your work based on
|
||||
the Program is not required to print an announcement.)
|
||||
|
||||
These requirements apply to the modified work as a whole. If
|
||||
identifiable sections of that work are not derived from the Program,
|
||||
and can be reasonably considered independent and separate works in
|
||||
themselves, then this License, and its terms, do not apply to those
|
||||
sections when you distribute them as separate works. But when you
|
||||
distribute the same sections as part of a whole which is a work based
|
||||
on the Program, the distribution of the whole must be on the terms of
|
||||
this License, whose permissions for other licensees extend to the
|
||||
entire whole, and thus to each and every part regardless of who wrote it.
|
||||
|
||||
Thus, it is not the intent of this section to claim rights or contest
|
||||
your rights to work written entirely by you; rather, the intent is to
|
||||
exercise the right to control the distribution of derivative or
|
||||
collective works based on the Program.
|
||||
|
||||
In addition, mere aggregation of another work not based on the Program
|
||||
with the Program (or with a work based on the Program) on a volume of
|
||||
a storage or distribution medium does not bring the other work under
|
||||
the scope of this License.
|
||||
|
||||
3. You may copy and distribute the Program (or a work based on it,
|
||||
under Section 2) in object code or executable form under the terms of
|
||||
Sections 1 and 2 above provided that you also do one of the following:
|
||||
|
||||
a) Accompany it with the complete corresponding machine-readable
|
||||
source code, which must be distributed under the terms of Sections
|
||||
1 and 2 above on a medium customarily used for software interchange; or,
|
||||
|
||||
b) Accompany it with a written offer, valid for at least three
|
||||
years, to give any third party, for a charge no more than your
|
||||
cost of physically performing source distribution, a complete
|
||||
machine-readable copy of the corresponding source code, to be
|
||||
distributed under the terms of Sections 1 and 2 above on a medium
|
||||
customarily used for software interchange; or,
|
||||
|
||||
c) Accompany it with the information you received as to the offer
|
||||
to distribute corresponding source code. (This alternative is
|
||||
allowed only for noncommercial distribution and only if you
|
||||
received the program in object code or executable form with such
|
||||
an offer, in accord with Subsection b above.)
|
||||
|
||||
The source code for a work means the preferred form of the work for
|
||||
making modifications to it. For an executable work, complete source
|
||||
code means all the source code for all modules it contains, plus any
|
||||
associated interface definition files, plus the scripts used to
|
||||
control compilation and installation of the executable. However, as a
|
||||
special exception, the source code distributed need not include
|
||||
anything that is normally distributed (in either source or binary
|
||||
form) with the major components (compiler, kernel, and so on) of the
|
||||
operating system on which the executable runs, unless that component
|
||||
itself accompanies the executable.
|
||||
|
||||
If distribution of executable or object code is made by offering
|
||||
access to copy from a designated place, then offering equivalent
|
||||
access to copy the source code from the same place counts as
|
||||
distribution of the source code, even though third parties are not
|
||||
compelled to copy the source along with the object code.
|
||||
|
||||
4. You may not copy, modify, sublicense, or distribute the Program
|
||||
except as expressly provided under this License. Any attempt
|
||||
otherwise to copy, modify, sublicense or distribute the Program is
|
||||
void, and will automatically terminate your rights under this License.
|
||||
However, parties who have received copies, or rights, from you under
|
||||
this License will not have their licenses terminated so long as such
|
||||
parties remain in full compliance.
|
||||
|
||||
5. You are not required to accept this License, since you have not
|
||||
signed it. However, nothing else grants you permission to modify or
|
||||
distribute the Program or its derivative works. These actions are
|
||||
prohibited by law if you do not accept this License. Therefore, by
|
||||
modifying or distributing the Program (or any work based on the
|
||||
Program), you indicate your acceptance of this License to do so, and
|
||||
all its terms and conditions for copying, distributing or modifying
|
||||
the Program or works based on it.
|
||||
|
||||
6. Each time you redistribute the Program (or any work based on the
|
||||
Program), the recipient automatically receives a license from the
|
||||
original licensor to copy, distribute or modify the Program subject to
|
||||
these terms and conditions. You may not impose any further
|
||||
restrictions on the recipients' exercise of the rights granted herein.
|
||||
You are not responsible for enforcing compliance by third parties to
|
||||
this License.
|
||||
|
||||
7. If, as a consequence of a court judgment or allegation of patent
|
||||
infringement or for any other reason (not limited to patent issues),
|
||||
conditions are imposed on you (whether by court order, agreement or
|
||||
otherwise) that contradict the conditions of this License, they do not
|
||||
excuse you from the conditions of this License. If you cannot
|
||||
distribute so as to satisfy simultaneously your obligations under this
|
||||
License and any other pertinent obligations, then as a consequence you
|
||||
may not distribute the Program at all. For example, if a patent
|
||||
license would not permit royalty-free redistribution of the Program by
|
||||
all those who receive copies directly or indirectly through you, then
|
||||
the only way you could satisfy both it and this License would be to
|
||||
refrain entirely from distribution of the Program.
|
||||
|
||||
If any portion of this section is held invalid or unenforceable under
|
||||
any particular circumstance, the balance of the section is intended to
|
||||
apply and the section as a whole is intended to apply in other
|
||||
circumstances.
|
||||
|
||||
It is not the purpose of this section to induce you to infringe any
|
||||
patents or other property right claims or to contest validity of any
|
||||
such claims; this section has the sole purpose of protecting the
|
||||
integrity of the free software distribution system, which is
|
||||
implemented by public license practices. Many people have made
|
||||
generous contributions to the wide range of software distributed
|
||||
through that system in reliance on consistent application of that
|
||||
system; it is up to the author/donor to decide if he or she is willing
|
||||
to distribute software through any other system and a licensee cannot
|
||||
impose that choice.
|
||||
|
||||
This section is intended to make thoroughly clear what is believed to
|
||||
be a consequence of the rest of this License.
|
||||
|
||||
8. If the distribution and/or use of the Program is restricted in
|
||||
certain countries either by patents or by copyrighted interfaces, the
|
||||
original copyright holder who places the Program under this License
|
||||
may add an explicit geographical distribution limitation excluding
|
||||
those countries, so that distribution is permitted only in or among
|
||||
countries not thus excluded. In such case, this License incorporates
|
||||
the limitation as if written in the body of this License.
|
||||
|
||||
9. The Free Software Foundation may publish revised and/or new versions
|
||||
of the General Public License from time to time. Such new versions will
|
||||
be similar in spirit to the present version, but may differ in detail to
|
||||
address new problems or concerns.
|
||||
|
||||
Each version is given a distinguishing version number. If the Program
|
||||
specifies a version number of this License which applies to it and "any
|
||||
later version", you have the option of following the terms and conditions
|
||||
either of that version or of any later version published by the Free
|
||||
Software Foundation. If the Program does not specify a version number of
|
||||
this License, you may choose any version ever published by the Free Software
|
||||
Foundation.
|
||||
|
||||
10. If you wish to incorporate parts of the Program into other free
|
||||
programs whose distribution conditions are different, write to the author
|
||||
to ask for permission. For software which is copyrighted by the Free
|
||||
Software Foundation, write to the Free Software Foundation; we sometimes
|
||||
make exceptions for this. Our decision will be guided by the two goals
|
||||
of preserving the free status of all derivatives of our free software and
|
||||
of promoting the sharing and reuse of software generally.
|
||||
|
||||
NO WARRANTY
|
||||
|
||||
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
|
||||
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
|
||||
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
|
||||
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
|
||||
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
|
||||
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
|
||||
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
|
||||
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
|
||||
REPAIR OR CORRECTION.
|
||||
|
||||
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
|
||||
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
|
||||
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
|
||||
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
|
||||
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
|
||||
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
|
||||
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
|
||||
POSSIBILITY OF SUCH DAMAGES.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
How to Apply These Terms to Your New Programs
|
||||
|
||||
If you develop a new program, and you want it to be of the greatest
|
||||
possible use to the public, the best way to achieve this is to make it
|
||||
free software which everyone can redistribute and change under these terms.
|
||||
|
||||
To do so, attach the following notices to the program. It is safest
|
||||
to attach them to the start of each source file to most effectively
|
||||
convey the exclusion of warranty; and each file should have at least
|
||||
the "copyright" line and a pointer to where the full notice is found.
|
||||
|
||||
<one line to give the program's name and a brief idea of what it does.>
|
||||
Copyright (C) <year> <name of author>
|
||||
|
||||
This program is free software; you can redistribute it and/or modify
|
||||
it under the terms of the GNU General Public License as published by
|
||||
the Free Software Foundation; either version 2 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU General Public License along
|
||||
with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
|
||||
Also add information on how to contact you by electronic and paper mail.
|
||||
|
||||
If the program is interactive, make it output a short notice like this
|
||||
when it starts in an interactive mode:
|
||||
|
||||
Gnomovision version 69, Copyright (C) year name of author
|
||||
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
|
||||
This is free software, and you are welcome to redistribute it
|
||||
under certain conditions; type `show c' for details.
|
||||
|
||||
The hypothetical commands `show w' and `show c' should show the appropriate
|
||||
parts of the General Public License. Of course, the commands you use may
|
||||
be called something other than `show w' and `show c'; they could even be
|
||||
mouse-clicks or menu items--whatever suits your program.
|
||||
|
||||
You should also get your employer (if you work as a programmer) or your
|
||||
school, if any, to sign a "copyright disclaimer" for the program, if
|
||||
necessary. Here is a sample; alter the names:
|
||||
|
||||
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
|
||||
`Gnomovision' (which makes passes at compilers) written by James Hacker.
|
||||
|
||||
<signature of Ty Coon>, 1 April 1989
|
||||
Ty Coon, President of Vice
|
||||
|
||||
This General Public License does not permit incorporating your program into
|
||||
proprietary programs. If your program is a subroutine library, you may
|
||||
consider it more useful to permit linking proprietary applications with the
|
||||
library. If this is what you want to do, use the GNU Lesser General
|
||||
Public License instead of this License.
|
||||
bitbake/ChangeLog (new file, 242 lines)
@@ -0,0 +1,242 @@
Changes in BitBake 1.8.x:
- Fix exit code for build failures in --continue mode
- Fix git branch tags fetching

Changes in BitBake 1.8.10:
- Psyco is available only for x86 - do not use it on other architectures.
- Fix a bug in bb.decodeurl where http://some.where.com/somefile.tgz decoded to host="" (#1530)
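The bb.decodeurl fix above concerns splitting a source URI into its components; bb.decodeurl itself is BitBake-internal, but the host/path decomposition the bug broke can be illustrated with the Python standard library (a sketch of the expected split, not BitBake's code):

```python
from urllib.parse import urlparse

# The bug meant a URL like this one came back with an empty host;
# the correct decomposition looks like this:
parts = urlparse("http://some.where.com/somefile.tgz")
print(parts.scheme)    # http
print(parts.hostname)  # some.where.com
print(parts.path)      # /somefile.tgz
```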
- Warn about malformed PREFERRED_PROVIDERS (#1072)
- Add support for BB_NICE_LEVEL option (#1627)
- Sort initial providers list by default preference (#1145, #2024)
- Improve provider sorting so preferred versions have preference over latest versions (#768)
- Detect builds of tasks with overlapping providers and warn (will become a fatal error) (#1359)
- Add MULTI_PROVIDER_WHITELIST variable to allow known safe multiple providers to be listed
- Handle paths in svn fetcher module parameter
- Support the syntax "export VARIABLE"
- Add bzr fetcher
- Add support for cleaning directories before a task in the form:
  do_taskname[cleandirs] = "dir"
- bzr fetcher tweaks from Robert Schuster (#2913)
- Add mercurial (hg) fetcher from Robert Schuster (#2913)
- Fix bogus preferred_version return values
- Fix 'depends' flag splitting
- Fix unexport handling (#3135)
- Add bb.copyfile function similar to bb.movefile (and improve movefile error reporting)
- Allow multiple options for deptask flag
- Use git-fetch instead of git-pull, removing any need for merges when
  fetching (we don't care about the index). Fixes fetch errors.
- Add BB_GENERATE_MIRROR_TARBALLS option; set to 0 to make git fetches
  faster at the expense of not creating mirror tarballs.
- SRCREV handling updates, improvements and fixes from Poky
- Add bb.utils.lockfile() and bb.utils.unlockfile() from Poky
- Add support for task selfstamp and lockfiles flags
- Disable task number acceleration since it can allow the tasks to run
  out of sequence
- Improve runqueue code comments
- Add task scheduler abstraction and some example schedulers
- Improve circular dependency chain debugging code and user feedback
- Don't give a stacktrace for invalid tasks; print a user-friendly message instead (#3431)
- Add support for "-e target" (#3432)
- Fix shell showdata command (#3259)
- Fix shell data updating problems (#1880)
- Properly raise errors for invalid source URI protocols
- Change the wget fetcher failure handling to avoid lockfile problems
- Add git branch support
- Add support for branches in git fetcher (Otavio Salvador, Michael Lauer)
- Make taskdata and runqueue errors more user-friendly
- Add norecurse and fullpath options to cvs fetcher

Changes in BitBake 1.8.8:
- Rewrite svn fetcher to make adding extra operations easier
  as part of future SRCDATE="now" fixes
  (requires new FETCHCMD_svn definition in bitbake.conf)
- Change SVNDIR layout to be more unique (fixes #2644 and #2624)
- Import persistent data store from trunk
- Sync fetcher code with that in trunk, adding SRCREV support for svn
- Add ConfigParsed Event after configuration parsing is complete
- data.emit_var() - only call getVar if we need the variable
- Stop generating the A variable (seems to be legacy code)
- Make sure intertask depends get processed correctly in recursive depends
- Add pn-PN to overrides when evaluating PREFERRED_VERSION
- Improve the progress indicator by skipping tasks that have
  already run before starting the build rather than during it
- Add profiling option (-P)
- Add BB_SRCREV_POLICY variable (clear or cache) to control SRCREV cache
- Add SRCREV_FORMAT support
- Fix local fetcher's localpath return values
- Apply OVERRIDES before performing immediate expansions
- Allow the -b -e option combination to take regular expressions
- Add plain message function to bb.msg
- Sort the list of providers before processing so dependency problems are
  reproducible rather than effectively random
- Add locking for fetchers so only one tries to fetch a given file at a given time
- Fix int(0)/None confusion in runqueue.py which causes random gaps in dependency chains
- Fix handling of variables with expansion in the name using _append/_prepend,
  e.g. RRECOMMENDS_${PN}_append_xyz = "abc"
- Expand data in addtasks
- Print the list of missing DEPENDS/RDEPENDS for the "No buildable providers available for required...."
  error message.
- Rework add_task to be more efficient (6% speedup, 7% reduction in number of function calls)
- Sort digraph output to make builds more reproducible
- Split expandKeys into two for loops to benefit from the expand_cache (12% speedup)
- runqueue.py: Fix idepends handling to avoid dependency errors
- Clear the terminal TOSTOP flag if set (and warn the user)
- Fix regression from r653 and make SRCDATE/CVSDATE work for packages again

Changes in BitBake 1.8.6:
- Correctly redirect stdin when forking
- If parsing errors are found, exit; too many users miss the errors
- Remove spurious PREFERRED_PROVIDER warnings

Changes in BitBake 1.8.4:
- Make sure __inherit_cache is updated before calling include() (from Michael Krelin)
- Fix bug when target was in ASSUME_PROVIDED (#2236)
- Raise ParseError for filenames with multiple underscores instead of infinitely looping (#2062)
- Fix invalid regexp in BBMASK error handling (missing import) (#1124)
- Don't run build sanity checks on incomplete builds
- Promote certain warnings from debug to note 2 level
- Update manual

Changes in BitBake 1.8.2:
- Catch truncated cache file errors
- Add PE (Package Epoch) support from Philipp Zabel (pH5)
- Add code to handle inter-task dependencies
- Allow operations other than assignment on flag variables
- Fix cache errors when generating dotGraphs

Changes in BitBake 1.8.0:
- Release 1.7.x as a stable series

Changes in BitBake 1.7.x:
- Major updates of the dependency handling and execution
  of tasks. Code from bin/bitbake replaced with runqueue.py
  and taskdata.py
- New task execution code supports multithreading with a simplistic
  threading algorithm controlled by BB_NUMBER_THREADS
- Change of the SVN Fetcher to keep the checkout around,
  courtesy of Paul Sokolovsky (#1367)
- PATH fix to bbimage (#1108)
- Allow debug domains to be specified on the commandline (-l)
- Allow 'interactive' tasks
- Logging message improvements
- Drop now unneeded BUILD_ALL_DEPS variable
- Add support for wildcards to -b option
- Major overhaul of the fetchers making a large amount of code common,
  including mirroring code
- Fetchers now touch md5 stamps upon access (to show activity)
- Fix -f force option when used without -b (long-standing bug)
- Add expand_cache to data_cache.py, caching expanded data (speedup)
- Allow version field in DEPENDS (ignored for now)
- Add abort flag support to the shell
- Make inherit fail if the class doesn't exist (#1478)
- Fix data.emit_env() to expand keynames as well as values
- Add ssh fetcher
- Add perforce fetcher
- Make PREFERRED_PROVIDER_foobar default to foobar if available
- Share the parser's mtime_cache, reducing the number of stat syscalls
- Compile all anonfuncs at once!
  *** Anonfuncs must now use common spacing format ***
- Memorise the list of handlers in __BBHANDLERS and tasks in __BBTASKS.
  This removes 2 million function calls resulting in a 5-10% speedup
- Add manpage
- Update generateDotGraph to use taskData/runQueue, improving accuracy
  and also adding a task dependency graph
- Fix/standardise on GPLv2 licence
- Move most functionality from bin/bitbake to cooker.py and split into
  separate functions
- CVS fetcher: Added support for non-default port
- Add BBINCLUDELOGS_LINES, the number of lines to read from any logfile
- Drop shebangs from lib/bb scripts

Changes in BitBake 1.6.0:
- Better msg handling
- COW dict implementation from Tim Ansell (mithro) leading
  to better performance
- Speed up of -s

Changes in BitBake 1.4.4:
- SRCDATE now handled, courtesy of Justin Patrin
- #1017 fix to work with rm_work

Changes in BitBake 1.4.2:
- Send logs to oe.pastebin.com instead of pastebin.com;
  fixes #856
- Copy the internal bitbake data before building the
  dependency graph. This fixes nano not having a
  virtual/libc dependency
- Allow multiple TARBALL_STASH entries
- Cache: check if the directory exists before changing
  into it
- Speed up git cloning by not doing a checkout
- Allow spaces in filenames (.conf, .bb, .bbclass)

Changes in BitBake 1.4.0:
- Fix to check both RDEPENDS and RDEPENDS_${PN}
- Fix a RDEPENDS parsing bug in utils:explode_deps()
- Update git fetcher behaviour to match git changes
- ASSUME_PROVIDED allowed to include runtime packages
- git fetcher cleanup and efficiency improvements
- Change the format of the cache
- Update usermanual to document the Fetchers
- Major changes to caching with a new strategy
  giving a major performance increase when reparsing
  with few data changes

Changes in BitBake 1.3.3:
- Create a new Fetcher module to ease the
  development of new Fetchers.
  Issue #438 fixed by rpurdie@openedhand.com
- Make the Subversion fetcher honor the SRC Date
  (CVSDATE).
  Issue #555 fixed by chris@openedhand.com
- Expand PREFERRED_PROVIDER properly.
  Issue #436 fixed by rpurdie@openedhand.com
- Typo fix for Issue #531 by Philipp Zabel for the
  BitBake Shell
- Introduce a new special variable SRCDATE as
  a generic naming to replace CVSDATE.
- Introduce a new keyword 'require'. In contrast
  to 'include', parsing will fail if a file to be
  included cannot be found.
- Remove hardcoding of the STAMP directory. Patch
  courtesy of Philipp Zabel
- Track the RDEPENDS of each package (rpurdie@openedhand.com)
- Introduce BUILD_ALL_DEPS to build all RDEPENDS. E.g.
  this is used by the OpenEmbedded Meta Packages.
  (rpurdie@openedhand.com)

Changes in BitBake 1.3.2:
- Reintegration of make.py into BitBake
- bbread is gone, use bitbake -e
- Lots of shell updates and bugfixes
- Introduction of the .= and =. operators
- Sort variables, keys and groups in bitdoc
- Fix regression in the handling of BBCOLLECTIONS
- Update the bitbake usermanual

Changes in BitBake 1.3.0:
- Add bitbake interactive shell (bitbake -i)
- Refactor bitbake utility in OO style
- Kill default arguments in methods in the bb.data module
- Kill default arguments in methods in the bb.fetch module
- The http/https/ftp fetcher will fail if the file to be
  downloaded was not found in DL_DIR (this is needed
  to avoid unpacking the sourceforge mirror page)
- Switch to a COW-like data instance for persistent and
  non-persisting mode (called data_smart.py)
- Changed the callback of bb.make.collect_bbfiles to carry
  additional parameters
- Drastically reduced the amount of needed RAM by not holding
  each data instance in memory when using a cache/persistent
  storage

Changes in BitBake 1.2.1:
The 1.2.1 release is meant as an intermediate release to lay the
ground for more radical changes. The most notable changes are:

- Do not hardcode {}; use bb.data.init() instead if you want to
  get an instance of a data class
- bb.data.init() is a factory and the old bb.data methods are delegates
- Do not use deepcopy; use bb.data.createCopy() instead.
- Removed default arguments in bb.fetch
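The COW ("copy on write") dict credited to Tim Ansell in the 1.6.0 notes above, and the "COW-like data instance" from 1.3.0, can be sketched in plain Python. This is an illustration of the idea only, not BitBake's actual lib/bb/COW.py:

```python
class CowDict:
    """Minimal copy-on-write mapping: a copy shares its parent's data and
    only stores keys written after the copy was made (illustrative sketch)."""

    def __init__(self, parent=None):
        self._parent = parent   # shared data, never mutated through us
        self._local = {}        # this instance's own writes

    def __getitem__(self, key):
        if key in self._local:
            return self._local[key]
        if self._parent is not None:
            return self._parent[key]   # fall through to shared data
        raise KeyError(key)

    def __setitem__(self, key, value):
        self._local[key] = value       # writes stay local to this copy

    def copy(self):
        return CowDict(parent=self)    # O(1): nothing is duplicated up front


base = CowDict()
base["PN"] = "example"
child = base.copy()
child["PN"] = "modified"          # does not touch base
print(base["PN"], child["PN"])    # example modified
```

The payoff is the 1.3.0 note about RAM: copies cost nothing until written to, so many mostly-identical data instances can coexist cheaply.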
bitbake/HEADER (new file, 19 lines)
@@ -0,0 +1,19 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# <one line to give the program's name and a brief idea of what it does.>
# Copyright (C) <year> <name of author>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
bitbake/MANIFEST (new file, 52 lines)
@@ -0,0 +1,52 @@
AUTHORS
COPYING
ChangeLog
MANIFEST
setup.py
bin/bitdoc
bin/bbimage
bin/bitbake
lib/bb/__init__.py
lib/bb/build.py
lib/bb/cache.py
lib/bb/cooker.py
lib/bb/COW.py
lib/bb/data.py
lib/bb/data_smart.py
lib/bb/event.py
lib/bb/fetch/__init__.py
lib/bb/fetch/bzr.py
lib/bb/fetch/cvs.py
lib/bb/fetch/git.py
lib/bb/fetch/hg.py
lib/bb/fetch/local.py
lib/bb/fetch/perforce.py
lib/bb/fetch/ssh.py
lib/bb/fetch/svk.py
lib/bb/fetch/svn.py
lib/bb/fetch/wget.py
lib/bb/manifest.py
lib/bb/methodpool.py
lib/bb/msg.py
lib/bb/parse/__init__.py
lib/bb/parse/parse_py/__init__.py
lib/bb/parse/parse_py/BBHandler.py
lib/bb/parse/parse_py/ConfHandler.py
lib/bb/persist_data.py
lib/bb/providers.py
lib/bb/runqueue.py
lib/bb/shell.py
lib/bb/taskdata.py
lib/bb/utils.py
setup.py
doc/COPYING.GPL
doc/COPYING.MIT
doc/bitbake.1
doc/manual/html.css
doc/manual/Makefile
doc/manual/usermanual.xml
contrib/bbdev.sh
contrib/vim/syntax/bitbake.vim
contrib/vim/ftdetect/bitbake.vim
conf/bitbake.conf
classes/base.bbclass
bitbake/bin/bbimage (new executable file, 155 lines)
@@ -0,0 +1,155 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2003 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import sys, os
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb
from bb import *

__version__ = 1.1
type = "jffs2"
cfg_bb = data.init()
cfg_oespawn = data.init()

bb.msg.set_debug_level(0)

def usage():
    print "Usage: bbimage [options ...]"
    print "Creates an image for a target device from a root filesystem,"
    print "obeying configuration parameters from the BitBake"
    print "configuration files, thereby easing handling of deviceisms."
    print ""
    print " %s\t\t%s" % ("-r [arg], --root [arg]", "root directory (default=${IMAGE_ROOTFS})")
    print " %s\t\t%s" % ("-t [arg], --type [arg]", "image type (jffs2[default], cramfs)")
    print " %s\t\t%s" % ("-n [arg], --name [arg]", "image name (override IMAGE_NAME variable)")
    print " %s\t\t%s" % ("-v, --version", "output version information and exit")
    sys.exit(0)

def version():
    print "BitBake Build Tool Core version %s" % bb.__version__
    print "BBImage version %s" % __version__

def emit_bb(d, base_d = {}):
    for v in d.keys():
        if d[v] != base_d[v]:
            data.emit_var(v, d)

def getopthash(l):
    h = {}
    for (opt, val) in l:
|
||||
h[opt] = val
|
||||
return h
|
||||
|
||||
import getopt
|
||||
try:
|
||||
(opts, args) = getopt.getopt(sys.argv[1:], 'vr:t:e:n:', [ 'version', 'root=', 'type=', 'bbfile=', 'name=' ])
|
||||
except getopt.GetoptError:
|
||||
usage()
|
||||
|
||||
# handle opts
|
||||
opthash = getopthash(opts)
|
||||
|
||||
if '--version' in opthash or '-v' in opthash:
|
||||
version()
|
||||
sys.exit(0)
|
||||
|
||||
try:
|
||||
cfg_bb = parse.handle(os.path.join('conf', 'bitbake.conf'), cfg_bb)
|
||||
except IOError:
|
||||
fatal("Unable to open bitbake.conf")
|
||||
|
||||
# sanity check
|
||||
if cfg_bb is None:
|
||||
fatal("Unable to open/parse %s" % os.path.join('conf', 'bitbake.conf'))
|
||||
usage(1)
|
||||
|
||||
rootfs = None
|
||||
extra_files = []
|
||||
|
||||
if '--root' in opthash:
|
||||
rootfs = opthash['--root']
|
||||
if '-r' in opthash:
|
||||
rootfs = opthash['-r']
|
||||
|
||||
if '--type' in opthash:
|
||||
type = opthash['--type']
|
||||
if '-t' in opthash:
|
||||
type = opthash['-t']
|
||||
|
||||
if '--bbfile' in opthash:
|
||||
extra_files.append(opthash['--bbfile'])
|
||||
if '-e' in opthash:
|
||||
extra_files.append(opthash['-e'])
|
||||
|
||||
for f in extra_files:
|
||||
try:
|
||||
cfg_bb = parse.handle(f, cfg_bb)
|
||||
except IOError:
|
||||
print "unable to open %s" % f
|
||||
|
||||
if not rootfs:
|
||||
rootfs = data.getVar('IMAGE_ROOTFS', cfg_bb, 1)
|
||||
|
||||
if not rootfs:
|
||||
bb.fatal("IMAGE_ROOTFS not defined")
|
||||
|
||||
data.setVar('IMAGE_ROOTFS', rootfs, cfg_bb)
|
||||
|
||||
from copy import copy, deepcopy
|
||||
localdata = data.createCopy(cfg_bb)
|
||||
|
||||
overrides = data.getVar('OVERRIDES', localdata)
|
||||
if not overrides:
|
||||
bb.fatal("OVERRIDES not defined.")
|
||||
data.setVar('OVERRIDES', '%s:%s' % (overrides, type), localdata)
|
||||
data.update_data(localdata)
|
||||
data.setVar('OVERRIDES', overrides, localdata)
|
||||
|
||||
if '-n' in opthash:
|
||||
data.setVar('IMAGE_NAME', opthash['-n'], localdata)
|
||||
if '--name' in opthash:
|
||||
data.setVar('IMAGE_NAME', opthash['--name'], localdata)
|
||||
|
||||
topdir = data.getVar('TOPDIR', localdata, 1) or os.getcwd()
|
||||
|
||||
cmd = data.getVar('IMAGE_CMD', localdata, 1)
|
||||
if not cmd:
|
||||
bb.fatal("IMAGE_CMD not defined")
|
||||
|
||||
outdir = data.getVar('DEPLOY_DIR_IMAGE', localdata, 1)
|
||||
if not outdir:
|
||||
bb.fatal('DEPLOY_DIR_IMAGE not defined')
|
||||
mkdirhier(outdir)
|
||||
|
||||
#depends = data.getVar('IMAGE_DEPENDS', localdata, 1) or ""
|
||||
#if depends:
|
||||
# bb.note("Spawning bbmake to satisfy dependencies: %s" % depends)
|
||||
# ret = os.system('bbmake %s' % depends)
|
||||
# if ret != 0:
|
||||
# bb.error("executing bbmake to satisfy dependencies")
|
||||
|
||||
bb.note("Executing %s" % cmd)
|
||||
data.setVar('image_cmd', cmd, localdata)
|
||||
data.setVarFlag('image_cmd', 'func', 1, localdata)
|
||||
try:
|
||||
bb.build.exec_func('image_cmd', localdata)
|
||||
except bb.build.FuncFailed:
|
||||
sys.exit(1)
|
||||
#ret = os.system(cmd)
|
||||
#sys.exit(ret)
|
||||
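The option handling above collapses getopt's list of `(option, value)` pairs into a dict so the script can test for both short and long spellings of each flag. A minimal Python 3 sketch of the same pattern (the argument list and values here are hypothetical, not from the diff):

```python
import getopt

def getopthash(pairs):
    """Collapse getopt's list of (option, value) pairs into a dict."""
    return {opt: val for (opt, val) in pairs}

# Hypothetical command line, using the same option spec as bbimage.
argv = ['-t', 'cramfs', '--root', '/tmp/rootfs', 'extra']
opts, args = getopt.getopt(argv, 'vr:t:e:n:',
                           ['version', 'root=', 'type=', 'bbfile=', 'name='])
opthash = getopthash(opts)

# Check both spellings, falling back to the built-in default.
image_type = opthash.get('-t') or opthash.get('--type') or 'jffs2'
rootfs = opthash.get('-r') or opthash.get('--root')
```

Here `image_type` resolves to `'cramfs'` and `rootfs` to `'/tmp/rootfs'`, while the positional `'extra'` remains in `args`.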
134
bitbake/bin/bitbake
Executable file
@@ -0,0 +1,134 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2003, 2004  Chris Larson
# Copyright (C) 2003, 2004  Phil Blundell
# Copyright (C) 2003 - 2005 Michael 'Mickey' Lauer
# Copyright (C) 2005        Holger Hans Peter Freyther
# Copyright (C) 2005        ROAD GmbH
# Copyright (C) 2006        Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import sys, os, getopt, re, time, optparse
sys.path.insert(0,os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb
from bb import cooker

__version__ = "1.8.11"

#============================================================================#
# BBOptions
#============================================================================#
class BBConfiguration( object ):
    """
    Manages build options and configurations for one run
    """
    def __init__( self, options ):
        for key, val in options.__dict__.items():
            setattr( self, key, val )


#============================================================================#
# main
#============================================================================#

def main():
    parser = optparse.OptionParser( version = "BitBake Build Tool Core version %s, %%prog version %s" % ( bb.__version__, __version__ ),
    usage = """%prog [options] [package ...]

Executes the specified task (default is 'build') for a given set of BitBake files.
It expects that BBFILES is defined, which is a space separated list of files to
be executed.  BBFILES does support wildcards.
Default BBFILES are the .bb files in the current directory.""" )

    parser.add_option( "-b", "--buildfile", help = "execute the task against this .bb file, rather than a package from BBFILES.",
               action = "store", dest = "buildfile", default = None )

    parser.add_option( "-k", "--continue", help = "continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other dependencies of these targets can be processed all the same.",
               action = "store_false", dest = "abort", default = True )

    parser.add_option( "-f", "--force", help = "force run of specified cmd, regardless of stamp status",
               action = "store_true", dest = "force", default = False )

    parser.add_option( "-i", "--interactive", help = "drop into the interactive mode also called the BitBake shell.",
               action = "store_true", dest = "interactive", default = False )

    parser.add_option( "-c", "--cmd", help = "Specify task to execute. Note that this only executes the specified task for the providee and the packages it depends on, i.e. 'compile' does not implicitly call stage for the dependencies (IOW: use only if you know what you are doing). Depending on the base.bbclass a listtasks tasks is defined and will show available tasks",
               action = "store", dest = "cmd" )

    parser.add_option( "-r", "--read", help = "read the specified file before bitbake.conf",
               action = "append", dest = "file", default = [] )

    parser.add_option( "-v", "--verbose", help = "output more chit-chat to the terminal",
               action = "store_true", dest = "verbose", default = False )

    parser.add_option( "-D", "--debug", help = "Increase the debug level. You can specify this more than once.",
               action = "count", dest="debug", default = 0)

    parser.add_option( "-n", "--dry-run", help = "don't execute, just go through the motions",
               action = "store_true", dest = "dry_run", default = False )

    parser.add_option( "-p", "--parse-only", help = "quit after parsing the BB files (developers only)",
               action = "store_true", dest = "parse_only", default = False )

    parser.add_option( "-d", "--disable-psyco", help = "disable using the psyco just-in-time compiler (not recommended)",
               action = "store_true", dest = "disable_psyco", default = False )

    parser.add_option( "-s", "--show-versions", help = "show current and preferred versions of all packages",
               action = "store_true", dest = "show_versions", default = False )

    parser.add_option( "-e", "--environment", help = "show the global or per-package environment (this is what used to be bbread)",
               action = "store_true", dest = "show_environment", default = False )

    parser.add_option( "-g", "--graphviz", help = "emit the dependency trees of the specified packages in the dot syntax",
               action = "store_true", dest = "dot_graph", default = False )

    parser.add_option( "-I", "--ignore-deps", help = """Stop processing at the given list of dependencies when generating dependency graphs. This can help to make the graph more appealing""",
               action = "append", dest = "ignored_dot_deps", default = [] )

    parser.add_option( "-l", "--log-domains", help = """Show debug logging for the specified logging domains""",
               action = "append", dest = "debug_domains", default = [] )

    parser.add_option( "-P", "--profile", help = "profile the command and print a report",
               action = "store_true", dest = "profile", default = False )

    options, args = parser.parse_args(sys.argv)

    configuration = BBConfiguration(options)
    configuration.pkgs_to_build = []
    configuration.pkgs_to_build.extend(args[1:])

    cooker = bb.cooker.BBCooker(configuration)

    if configuration.profile:
        try:
            import cProfile as profile
        except:
            import profile

        profile.runctx("cooker.cook()", globals(), locals(), "profile.log")
        import pstats
        p = pstats.Stats('profile.log')
        p.sort_stats('time')
        p.print_stats()
        p.print_callers()
        p.sort_stats('cumulative')
        p.print_stats()
    else:
        cooker.cook()

if __name__ == "__main__":
    main()
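The `BBConfiguration` class above simply copies every parsed option onto the configuration object, so later code can read plain attributes instead of going through the parser result. A minimal runnable sketch of that pattern (the two options and the argument list are illustrative, not the full bitbake option set; `optparse` is still available in the Python 3 standard library):

```python
import optparse

class BBConfiguration(object):
    """Manages build options and configurations for one run."""
    def __init__(self, options):
        # Copy each parsed option value onto this object as an attribute.
        for key, val in options.__dict__.items():
            setattr(self, key, val)

parser = optparse.OptionParser()
parser.add_option("-f", "--force", action="store_true", dest="force", default=False)
parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True)

# Hypothetical command line: force the run, continue on errors, build "world".
options, args = parser.parse_args(["-f", "-k", "world"])
configuration = BBConfiguration(options)
configuration.pkgs_to_build = args
```

After this, `configuration.force` is `True`, `configuration.abort` is `False`, and the remaining positional arguments become the package list, mirroring how `main()` hands the configuration to `BBCooker`.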
532
bitbake/bin/bitdoc
Executable file
@@ -0,0 +1,532 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2005 Holger Hans Peter Freyther
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import optparse, os, sys

# bitbake
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb
import bb.parse
from string import split, join

__version__ = "0.0.2"

class HTMLFormatter:
    """
    Simple class to help to generate some sort of HTML files. It is
    quite an inferior solution compared to docbook, gtkdoc, doxygen but it
    should work for now.
    We have a global introduction site (index.html) and then one site for
    the list of keys (alphabetically sorted) and one for the list of groups,
    one site for each key with links to the relations and groups.

        index.html
        all_keys.html
        all_groups.html
        groupNAME.html
        keyNAME.html
    """

    def replace(self, text, *pairs):
        """
        From pydoc... almost identical at least
        """
        while pairs:
            (a, b) = pairs[0]
            text = join(split(text, a), b)
            pairs = pairs[1:]
        return text

    def escape(self, text):
        """
        Escape string to be conforming HTML
        """
        return self.replace(text,
                            ('&', '&amp;'),
                            ('<', '&lt;'),
                            ('>', '&gt;'))

    def createNavigator(self):
        """
        Create the navigator
        """
        return """<table class="navigation" width="100%" summary="Navigation header" cellpadding="2" cellspacing="2">
<tr valign="middle">
<td><a accesskey="g" href="index.html">Home</a></td>
<td><a accesskey="n" href="all_groups.html">Groups</a></td>
<td><a accesskey="u" href="all_keys.html">Keys</a></td>
</tr></table>
"""

    def relatedKeys(self, item):
        """
        Create HTML to link to foreign keys
        """

        if len(item.related()) == 0:
            return ""

        txt = "<p><b>See also:</b><br>"
        txts = []
        for it in item.related():
            txts.append("""<a href="key%(it)s.html">%(it)s</a>""" % vars() )

        return txt + ",".join(txts)

    def groups(self, item):
        """
        Create HTML to link to related groups
        """

        if len(item.groups()) == 0:
            return ""

        txt = "<p><b>See also:</b><br>"
        txts = []
        for group in item.groups():
            txts.append( """<a href="group%s.html">%s</a> """ % (group, group) )

        return txt + ",".join(txts)


    def createKeySite(self, item):
        """
        Create a site for a key. It contains the header/navigator, a heading,
        the description, links to related keys and to the groups.
        """

        return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Key %s</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
<h2><span class="refentrytitle">%s</span></h2>

<div class="refsynopsisdiv">
<h2>Synopsis</h2>
<p>
%s
</p>
</div>

<div class="refsynopsisdiv">
<h2>Related Keys</h2>
<p>
%s
</p>
</div>

<div class="refsynopsisdiv">
<h2>Groups</h2>
<p>
%s
</p>
</div>


</body>
""" % (item.name(), self.createNavigator(), item.name(),
       self.escape(item.description()), self.relatedKeys(item), self.groups(item))

    def createGroupsSite(self, doc):
        """
        Create the Group Overview site
        """

        groups = ""
        sorted_groups = doc.groups()
        sorted_groups.sort()
        for group in sorted_groups:
            groups += """<a href="group%s.html">%s</a><br>""" % (group, group)

        return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Group overview</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
<h2>Available Groups</h2>
%s
</body>
""" % (self.createNavigator(), groups)

    def createIndex(self):
        """
        Create the index file
        """

        return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Bitbake Documentation</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
<h2>Documentation Entrance</h2>
<a href="all_groups.html">All available groups</a><br>
<a href="all_keys.html">All available keys</a><br>
</body>
""" % self.createNavigator()

    def createKeysSite(self, doc):
        """
        Create an overview of all available keys
        """
        keys = ""
        sorted_keys = doc.doc_keys()
        sorted_keys.sort()
        for key in sorted_keys:
            keys += """<a href="key%s.html">%s</a><br>""" % (key, key)

        return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Key overview</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
<h2>Available Keys</h2>
%s
</body>
""" % (self.createNavigator(), keys)

    def createGroupSite(self, gr, items, _description = None):
        """
        Create a site for a group:
        gr is the name of the group, items contains the names of the keys
        inside this group
        """
        groups = ""
        description = ""

        # create a section with the group descriptions
        if _description:
            description += "<h2>Description of Group %s</h2>" % gr
            description += _description

        items.sort(lambda x, y: cmp(x.name(), y.name()))
        for group in items:
            groups += """<a href="key%s.html">%s</a><br>""" % (group.name(), group.name())

        return """<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Group %s</title></head>
<link rel="stylesheet" href="style.css" type="text/css">
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
%s
%s
<div class="refsynopsisdiv">
<h2>Keys in Group %s</h2>
<pre class="synopsis">
%s
</pre>
</div>
</body>
""" % (gr, self.createNavigator(), description, gr, groups)



    def createCSS(self):
        """
        Create the CSS file
        """
        return """.synopsis, .classsynopsis
{
  background: #eeeeee;
  border: solid 1px #aaaaaa;
  padding: 0.5em;
}
.programlisting
{
  background: #eeeeff;
  border: solid 1px #aaaaff;
  padding: 0.5em;
}
.variablelist
{
  padding: 4px;
  margin-left: 3em;
}
.variablelist td:first-child
{
  vertical-align: top;
}
table.navigation
{
  background: #ffeeee;
  border: solid 1px #ffaaaa;
  margin-top: 0.5em;
  margin-bottom: 0.5em;
}
.navigation a
{
  color: #770000;
}
.navigation a:visited
{
  color: #550000;
}
.navigation .title
{
  font-size: 200%;
}
div.refnamediv
{
  margin-top: 2em;
}
div.gallery-float
{
  float: left;
  padding: 10px;
}
div.gallery-float img
{
  border-style: none;
}
div.gallery-spacer
{
  clear: both;
}
a
{
  text-decoration: none;
}
a:hover
{
  text-decoration: underline;
  color: #FF0000;
}
"""



class DocumentationItem:
    """
    A class to hold information about a configuration
    item. It contains the key name, description, a list of related names,
    and the group this item is contained in.
    """

    def __init__(self):
        self._groups = []
        self._related = []
        self._name = ""
        self._desc = ""

    def groups(self):
        return self._groups

    def name(self):
        return self._name

    def description(self):
        return self._desc

    def related(self):
        return self._related

    def setName(self, name):
        self._name = name

    def setDescription(self, desc):
        self._desc = desc

    def addGroup(self, group):
        self._groups.append(group)

    def addRelation(self, relation):
        self._related.append(relation)

    def sort(self):
        self._related.sort()
        self._groups.sort()


class Documentation:
    """
    Holds the documentation... with mappings from key to items...
    """

    def __init__(self):
        self.__keys = {}
        self.__groups = {}

    def insert_doc_item(self, item):
        """
        Insert the Doc Item into the internal list
        of representation
        """
        item.sort()
        self.__keys[item.name()] = item

        for group in item.groups():
            if not group in self.__groups:
                self.__groups[group] = []
            self.__groups[group].append(item)
            self.__groups[group].sort()


    def doc_item(self, key):
        """
        Return the DocumentationInstance describing the key
        """
        try:
            return self.__keys[key]
        except KeyError:
            return None

    def doc_keys(self):
        """
        Return the documented KEYS (names)
        """
        return self.__keys.keys()

    def groups(self):
        """
        Return the names of available groups
        """
        return self.__groups.keys()

    def group_content(self, group_name):
        """
        Return a list of keys/names that are in a specific
        group, or the empty list
        """
        try:
            return self.__groups[group_name]
        except KeyError:
            return []


def parse_cmdline(args):
    """
    Parse the CMD line and return the result as an n-tuple
    """

    parser = optparse.OptionParser( version = "Bitbake Documentation Tool Core version %s, %%prog version %s" % (bb.__version__, __version__))
    usage  = """%prog [options]

Create a set of html pages (documentation) for a bitbake.conf....
"""

    # Add the needed options
    parser.add_option( "-c", "--config", help = "Use the specified configuration file as source",
               action = "store", dest = "config", default = os.path.join("conf", "documentation.conf") )

    parser.add_option( "-o", "--output", help = "Output directory for html files",
               action = "store", dest = "output", default = "html/" )

    parser.add_option( "-D", "--debug", help = "Increase the debug level",
               action = "count", dest = "debug", default = 0 )

    parser.add_option( "-v", "--verbose", help = "output more chit-chat to the terminal",
               action = "store_true", dest = "verbose", default = False )

    options, args = parser.parse_args( sys.argv )

    if options.debug:
        bb.msg.set_debug_level(options.debug)

    return options.config, options.output

def main():
    """
    The main Method
    """

    (config_file, output_dir) = parse_cmdline( sys.argv )

    # right, let us load the file now
    try:
        documentation = bb.parse.handle( config_file, bb.data.init() )
    except IOError:
        bb.fatal( "Unable to open %s" % config_file )
    except bb.parse.ParseError:
        bb.fatal( "Unable to parse %s" % config_file )


    # Assuming we've got the file loaded now, we will initialize the 'tree'
    doc = Documentation()

    # defined states
    state_begin = 0
    state_see   = 1
    state_group = 2

    for key in bb.data.keys(documentation):
        data = bb.data.getVarFlag(key, "doc", documentation)
        if not data:
            continue

        # The Documentation now starts
        doc_ins = DocumentationItem()
        doc_ins.setName(key)

        tokens = data.split(' ')
        state = state_begin
        string = ""
        for token in tokens:
            token = token.strip(',')

            if not state == state_see and token == "@see":
                state = state_see
                continue
            elif not state == state_group and token == "@group":
                state = state_group
                continue

            if state == state_begin:
                string += " %s" % token
            elif state == state_see:
                doc_ins.addRelation(token)
            elif state == state_group:
                doc_ins.addGroup(token)

        # set the description
        doc_ins.setDescription(string)
        doc.insert_doc_item(doc_ins)

    # let us create the HTML now
    bb.mkdirhier(output_dir)
    os.chdir(output_dir)

    # Let us create the sites now. We do it in the following order:
    # Start with the index.html. It will point to sites explaining all
    # keys and groups
    html_slave = HTMLFormatter()

    f = file('style.css', 'w')
    print >> f, html_slave.createCSS()

    f = file('index.html', 'w')
    print >> f, html_slave.createIndex()

    f = file('all_groups.html', 'w')
    print >> f, html_slave.createGroupsSite(doc)

    f = file('all_keys.html', 'w')
    print >> f, html_slave.createKeysSite(doc)

    # now for each group create the site
    for group in doc.groups():
        f = file('group%s.html' % group, 'w')
        print >> f, html_slave.createGroupSite(group, doc.group_content(group))

    # now for the keys
    for key in doc.doc_keys():
        f = file('key%s.html' % doc.doc_item(key).name(), 'w')
        print >> f, html_slave.createKeySite(doc.doc_item(key))


if __name__ == "__main__":
    main()
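The core of bitdoc is a small token state machine: it walks the space-separated tokens of a variable's "doc" flag, and switches state whenever it meets an `@see` or `@group` marker, routing subsequent tokens to the related-keys or groups list. A self-contained Python 3 sketch of that loop (the doc string fed in is a made-up example, not from the diff):

```python
def parse_doc(data):
    """Split a doc-flag string into (description, related keys, groups),
    switching state at @see and @group markers, as bitdoc's main loop does."""
    description, related, groups = [], [], []
    state = 'begin'
    for token in data.split(' '):
        token = token.strip(',')  # drop separating commas, e.g. "A," -> "A"
        if token == '@see':
            state = 'see'
            continue
        elif token == '@group':
            state = 'group'
            continue
        if state == 'begin':
            description.append(token)
        elif state == 'see':
            related.append(token)
        elif state == 'group':
            groups.append(token)
    return ' '.join(description), related, groups

# Hypothetical doc flag for illustration:
desc, related, groups = parse_doc(
    "Target CPU family @see TARGET_ARCH, TARGET_OS @group target")
```

This yields the description `"Target CPU family"`, the related keys `TARGET_ARCH` and `TARGET_OS`, and the single group `target`, matching what `DocumentationItem` would store.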
1
bitbake/contrib/README
Normal file
@@ -0,0 +1 @@
This directory is for additional contributed files which may be useful.
31
bitbake/contrib/bbdev.sh
Normal file
@@ -0,0 +1,31 @@
# This is a shell function to be sourced into your shell or placed in your .profile,
# which makes setting things up for BitBake a bit easier.
#
# The author disclaims copyright to the contents of this file and places it in the
# public domain.

bbdev () {
	local BBDIR PKGDIR BUILDDIR
	if test x"$1" = "x--help"; then echo >&2 "syntax: bbdev [bbdir [pkgdir [builddir]]]"; return 1; fi
	if test x"$1" = x; then BBDIR=`pwd`; else BBDIR=$1; fi
	if test x"$2" = x; then PKGDIR=`pwd`; else PKGDIR=$2; fi
	if test x"$3" = x; then BUILDDIR=`pwd`; else BUILDDIR=$3; fi

	BBDIR=`readlink -f $BBDIR`
	PKGDIR=`readlink -f $PKGDIR`
	BUILDDIR=`readlink -f $BUILDDIR`
	if ! (test -d $BBDIR && test -d $PKGDIR && test -d $BUILDDIR); then
		echo >&2 "syntax: bbdev [bbdir [pkgdir [builddir]]]"
		return 1
	fi

	PATH=$BBDIR/bin:$PATH
	BBPATH=$BBDIR
	if test x"$BBDIR" != x"$PKGDIR"; then
		BBPATH=$PKGDIR:$BBPATH
	fi
	if test x"$PKGDIR" != x"$BUILDDIR"; then
		BBPATH=$BUILDDIR:$BBPATH
	fi
	export BBPATH
}
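The bbdev function assembles BBPATH from the BitBake, package, and build directories, prepending each later directory only when it differs from the one before it, so the search path has no duplicate entries when the same directory serves several roles. The same logic in a small Python sketch (directory names are hypothetical):

```python
def build_bbpath(bbdir, pkgdir, builddir):
    """Mirror bbdev's BBPATH assembly: start with the BitBake directory,
    prepend the package and build directories only when they differ."""
    bbpath = bbdir
    if bbdir != pkgdir:
        bbpath = pkgdir + ':' + bbpath
    if pkgdir != builddir:
        bbpath = builddir + ':' + bbpath
    return bbpath

# Separate directories yield a three-entry path, build dir first:
build_bbpath('/src/bitbake', '/src/packages', '/src/build')
# One directory for everything collapses to a single entry:
build_bbpath('/src/all', '/src/all', '/src/all')
```

The build directory ends up first so its configuration shadows the package and BitBake defaults, matching the shell version's prepend order.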
4
bitbake/contrib/vim/ftdetect/bitbake.vim
Normal file
@@ -0,0 +1,4 @@
au BufNewFile,BufRead *.bb      setfiletype bitbake
au BufNewFile,BufRead *.bbclass setfiletype bitbake
au BufNewFile,BufRead *.inc     setfiletype bitbake
" au BufNewFile,BufRead *.conf  setfiletype bitbake
120
bitbake/contrib/vim/syntax/bitbake.vim
Normal file
@@ -0,0 +1,120 @@
" Vim syntax file
"
" Copyright (C) 2004  Chris Larson <kergoth@handhelds.org>
" This file is licensed under the MIT license, see COPYING.MIT in
" this source distribution for the terms.
"
" Language:     BitBake
" Maintainer:   Chris Larson <kergoth@handhelds.org>
" Filenames:    *.bb, *.bbclass

if version < 600
	syntax clear
elseif exists("b:current_syntax")
	finish
endif

syn case match


" Catch incorrect syntax (only matches if nothing else does)
"
syn match bbUnmatched "."


" Other

syn match bbComment        "^#.*$" display contains=bbTodo
syn keyword bbTodo         TODO FIXME XXX contained
syn match bbDelimiter      "[(){}=]" contained
syn match bbQuote          /['"]/ contained
syn match bbArrayBrackets  "[\[\]]" contained


" BitBake strings

syn match bbContinue       "\\$"
syn region bbString        matchgroup=bbQuote start=/"/ skip=/\\$/ excludenl end=/"/ contained keepend contains=bbTodo,bbContinue,bbVarDeref
syn region bbString        matchgroup=bbQuote start=/'/ skip=/\\$/ excludenl end=/'/ contained keepend contains=bbTodo,bbContinue,bbVarDeref


" BitBake variable metadata

syn keyword bbExportFlag   export contained nextgroup=bbIdentifier skipwhite
syn match bbVarDeref       "${[a-zA-Z0-9\-_\.]\+}" contained
syn match bbVarDef         "^\(export\s*\)\?\([a-zA-Z0-9\-_\.]\+\(_[${}a-zA-Z0-9\-_\.]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq

syn match bbIdentifier     "[a-zA-Z0-9\-_\.]\+" display contained
"syn keyword bbVarEq       = display contained nextgroup=bbVarValue
syn match bbVarEq          "\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)" contained nextgroup=bbVarValue
syn match bbVarValue       ".*$" contained contains=bbString,bbVarDeref


" BitBake variable metadata flags
syn match bbVarFlagDef     "^\([a-zA-Z0-9\-_\.]\+\)\(\[[a-zA-Z0-9\-_\.]\+\]\)\@=" contains=bbIdentifier nextgroup=bbVarFlagFlag
syn region bbVarFlagFlag   matchgroup=bbArrayBrackets start="\[" end="\]\s*\(=\)\@=" keepend excludenl contained contains=bbIdentifier nextgroup=bbVarEq
"syn match bbVarFlagFlag   "\[\([a-zA-Z0-9\-_\.]\+\)\]\s*\(=\)\@=" contains=bbIdentifier nextgroup=bbVarEq


" Functions!
syn match bbFunction       "\h\w*" display contained


" BitBake python metadata
syn include @python syntax/python.vim
if exists("b:current_syntax")
	unlet b:current_syntax
endif

syn keyword bbPythonFlag   python contained nextgroup=bbFunction
syn match bbPythonFuncDef  "^\(python\s\+\)\(\w\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbPythonFlag,bbFunction,bbDelimiter nextgroup=bbPythonFuncRegion skipwhite
syn region bbPythonFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@python
"hi def link bbPythonFuncRegion Comment


" BitBake shell metadata
syn include @shell syntax/sh.vim
if exists("b:current_syntax")
	unlet b:current_syntax
endif

syn keyword bbFakerootFlag fakeroot contained nextgroup=bbFunction
syn match bbShellFuncDef   "^\(fakeroot\s*\)\?\(\w\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbFakerootFlag,bbFunction,bbDelimiter nextgroup=bbShellFuncRegion skipwhite
syn region bbShellFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@shell
"hi def link bbShellFuncRegion Comment


" BitBake 'def'd python functions
syn keyword bbDef          def contained
syn region bbDefRegion     start='^def\s\+\w\+\s*([^)]*)\s*:\s*$' end='^\(\s\|$\)\@!' contains=@python


" BitBake statements
syn keyword bbStatement    include inherit require addtask addhandler EXPORT_FUNCTIONS display contained
syn match bbStatementLine  "^\(include\|inherit\|require\|addtask\|addhandler\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
syn match bbStatementRest  ".*$" contained contains=bbString,bbVarDeref

" Highlight
"
hi def link bbArrayBrackets  Statement
hi def link bbUnmatched      Error
hi def link bbVarDeref       String
hi def link bbContinue       Special
hi def link bbDef            Statement
hi def link bbPythonFlag     Type
hi def link bbExportFlag     Type
hi def link bbFakerootFlag   Type
hi def link bbStatement      Statement
hi def link bbString         String
hi def link bbTodo           Todo
hi def link bbComment        Comment
hi def link bbOperator Operator
|
||||
hi def link bbError Error
|
||||
hi def link bbFunction Function
|
||||
hi def link bbDelimiter Delimiter
|
||||
hi def link bbIdentifier Identifier
|
||||
hi def link bbVarEq Operator
|
||||
hi def link bbQuote String
|
||||
hi def link bbVarValue String
|
||||
|
||||
let b:current_syntax = "bb"
|
||||
339  bitbake/doc/COPYING.GPL  Normal file
@@ -0,0 +1,339 @@
                    GNU GENERAL PUBLIC LICENSE
                       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The licenses for most software are designed to take away your
freedom to share and change it.  By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users.  This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it.  (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.)  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have.  You must make sure that they, too, receive or can get the
source code.  And you must show them these terms so they know their
rights.

  We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.

  Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software.  If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.

  Finally, any free program is threatened constantly by software
patents.  We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary.  To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.

  The precise terms and conditions for copying, distribution and
modification follow.

                    GNU GENERAL PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License.  The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language.  (Hereinafter, translation is included without limitation in
the term "modification".)  Each licensee is addressed as "you".

Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope.  The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.

  1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.

You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.

  2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:

    a) You must cause the modified files to carry prominent notices
    stating that you changed the files and the date of any change.

    b) You must cause any work that you distribute or publish, that in
    whole or in part contains or is derived from the Program or any
    part thereof, to be licensed as a whole at no charge to all third
    parties under the terms of this License.

    c) If the modified program normally reads commands interactively
    when run, you must cause it, when started running for such
    interactive use in the most ordinary way, to print or display an
    announcement including an appropriate copyright notice and a
    notice that there is no warranty (or else, saying that you provide
    a warranty) and that users may redistribute the program under
    these conditions, and telling the user how to view a copy of this
    License.  (Exception: if the Program itself is interactive but
    does not normally print such an announcement, your work based on
    the Program is not required to print an announcement.)

These requirements apply to the modified work as a whole.  If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works.  But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.

Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.

In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.

  3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:

    a) Accompany it with the complete corresponding machine-readable
    source code, which must be distributed under the terms of Sections
    1 and 2 above on a medium customarily used for software interchange; or,

    b) Accompany it with a written offer, valid for at least three
    years, to give any third party, for a charge no more than your
    cost of physically performing source distribution, a complete
    machine-readable copy of the corresponding source code, to be
    distributed under the terms of Sections 1 and 2 above on a medium
    customarily used for software interchange; or,

    c) Accompany it with the information you received as to the offer
    to distribute corresponding source code.  (This alternative is
    allowed only for noncommercial distribution and only if you
    received the program in object code or executable form with such
    an offer, in accord with Subsection b above.)

The source code for a work means the preferred form of the work for
making modifications to it.  For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable.  However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.

If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.

  4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License.  Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.

  5. You are not required to accept this License, since you have not
signed it.  However, nothing else grants you permission to modify or
distribute the Program or its derivative works.  These actions are
prohibited by law if you do not accept this License.  Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.

  6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions.  You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.

  7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all.  For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.

It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices.  Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.

This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.

  8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded.  In such case, this License incorporates
the limitation as if written in the body of this License.

  9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number.  If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation.  If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.

  10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission.  For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this.  Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.

                            NO WARRANTY

  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.

  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation; either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License along
    with this program; if not, write to the Free Software Foundation, Inc.,
    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:

    Gnomovision version 69, Copyright (C) year name of author
    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License.  Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary.  Here is a sample; alter the names:

  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
  `Gnomovision' (which makes passes at compilers) written by James Hacker.

  <signature of Ty Coon>, 1 April 1989
  Ty Coon, President of Vice

This General Public License does not permit incorporating your program into
proprietary programs.  If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library.  If this is what you want to do, use the GNU Lesser General
Public License instead of this License.
17  bitbake/doc/COPYING.MIT  Normal file
@@ -0,0 +1,17 @@
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
THE USE OR OTHER DEALINGS IN THE SOFTWARE.
117  bitbake/doc/bitbake.1  Normal file
@@ -0,0 +1,117 @@
.\" Hey, EMACS: -*- nroff -*-
.\" First parameter, NAME, should be all caps
.\" Second parameter, SECTION, should be 1-8, maybe w/ subsection
.\" other parameters are allowed: see man(7), man(1)
.TH BITBAKE 1 "November 19, 2006"
.\" Please adjust this date whenever revising the manpage.
.\"
.\" Some roff macros, for reference:
.\" .nh        disable hyphenation
.\" .hy        enable hyphenation
.\" .ad l      left justify
.\" .ad b      justify to both left and right margins
.\" .nf        disable filling
.\" .fi        enable filling
.\" .br        insert line break
.\" .sp <n>    insert n+1 empty lines
.\" for manpage-specific macros, see man(7)
.SH NAME
BitBake \- simple tool for the execution of tasks
.SH SYNOPSIS
.B bitbake
.RI [ options ] " packagenames"
.br
.SH DESCRIPTION
This manual page documents briefly the
.B bitbake
command.
.PP
.\" TeX users may be more comfortable with the \fB<whatever>\fP and
.\" \fI<whatever>\fP escape sequences to invoke bold face and italics,
.\" respectively.
\fBbitbake\fP is a program that executes the specified task (default is 'build')
for a given set of BitBake files.
.br
It expects that BBFILES is defined, which is a space-separated list of files to
be executed. BBFILES does support wildcards.
.br
By default, BBFILES contains the .bb files in the current directory.
.SH OPTIONS
This program follows the usual GNU command-line syntax, with long
options starting with two dashes (`--').
.TP
.B \-h, \-\-help
Show summary of options.
.TP
.B \-\-version
Show version of program.
.TP
.B \-bBUILDFILE, \-\-buildfile=BUILDFILE
Execute the task against this .bb file, rather than a package from BBFILES.
.TP
.B \-k, \-\-continue
Continue as much as possible after an error. While the target that failed, and
those that depend on it, cannot be remade, the other dependencies of these
targets can be processed all the same.
.TP
.B \-f, \-\-force
Force a run of the specified command, regardless of stamp status.
.TP
.B \-i, \-\-interactive
Drop into the interactive mode, also called the BitBake shell.
.TP
.B \-cCMD, \-\-cmd=CMD
Specify the task to execute. Note that this only executes the specified task for
the providee and the packages it depends on, i.e. 'compile' does not implicitly
call stage for the dependencies (IOW: use only if you know what you are doing).
Depending on base.bbclass, a listtasks task is defined and will show
available tasks.
.TP
.B \-rFILE, \-\-read=FILE
Read the specified file before bitbake.conf.
.TP
.B \-v, \-\-verbose
Output more chit-chat to the terminal.
.TP
.B \-D, \-\-debug
Increase the debug level. You can specify this more than once.
.TP
.B \-n, \-\-dry-run
Don't execute; just go through the motions.
.TP
.B \-p, \-\-parse-only
Quit after parsing the BB files (developers only).
.TP
.B \-d, \-\-disable-psyco
Disable using the psyco just-in-time compiler (not recommended).
.TP
.B \-s, \-\-show-versions
Show current and preferred versions of all packages.
.TP
.B \-e, \-\-environment
Show the global or per-package environment (this is what used to be bbread).
.TP
.B \-g, \-\-graphviz
Emit the dependency trees of the specified packages in the dot syntax.
.TP
.B \-IIGNORED\_DOT\_DEPS, \-\-ignore-deps=IGNORED_DOT_DEPS
Stop processing at the given list of dependencies when generating dependency
graphs. This can help to make the graph more appealing.
.\"
.\" Next option is only in BitBake 1.7.x (trunk)
.\"
.\".TP
.\".B \-lDEBUG_DOMAINS, \-\-log-domains=DEBUG_DOMAINS
.\"Show debug logging for the specified logging domains

.SH AUTHORS
BitBake was written by
Phil Blundell,
Holger Freyther,
Chris Larson,
Mickey Lauer,
Richard Purdie,
Holger Schurig
.PP
This manual page was written by Marcin Juszkiewicz <marcin@hrw.one.pl>
for the Debian project (but may be used by others).
56  bitbake/doc/manual/Makefile  Normal file
@@ -0,0 +1,56 @@
topdir = .
manual = $(topdir)/usermanual.xml
# types = pdf txt rtf ps xhtml html man tex texi dvi
# types = pdf txt
types = $(xmltotypes) $(htmltypes)
xmltotypes = pdf txt
htmltypes = html xhtml
htmlxsl = $(if $(filter $@,$(foreach type,$(htmltypes),$(type)-nochunks)),http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl,http://docbook.sourceforge.net/release/xsl/current/$@/chunk.xsl)
htmlcssfile = docbook.css
htmlcss = $(topdir)/html.css
# htmlcssfile =
# htmlcss =
cleanfiles = $(foreach i,$(types),$(topdir)/$(i))

ifdef DEBUG
define command
	$(1)
endef
else
define command
	@echo $(2) $(3) $(4)
	@$(1) >/dev/null
endef
endif

all: $(types)

lint: $(manual) FORCE
	$(call command,xmllint --xinclude --postvalid --noout $(manual),XMLLINT $(manual))

$(types) $(foreach type,$(htmltypes),$(type)-nochunks): lint FORCE

$(foreach type,$(htmltypes),$(type)-nochunks): $(if $(htmlcss),$(htmlcss)) $(manual)
	@mkdir -p $@
ifdef htmlcss
	$(call command,install -m 0644 $(htmlcss) $@/$(htmlcssfile),CP $(htmlcss) $@/$(htmlcssfile))
endif
	$(call command,xsltproc --stringparam base.dir $@/ $(if $(htmlcssfile),--stringparam html.stylesheet $(htmlcssfile)) $(htmlxsl) $(manual) > $@/index.$(patsubst %-nochunks,%,$@),XSLTPROC $@ $(manual))

$(htmltypes): $(if $(htmlcss),$(htmlcss)) $(manual)
	@mkdir -p $@
ifdef htmlcss
	$(call command,install -m 0644 $(htmlcss) $@/$(htmlcssfile),CP $(htmlcss) $@/$(htmlcssfile))
endif
	$(call command,xsltproc --stringparam base.dir $@/ $(if $(htmlcssfile),--stringparam html.stylesheet $(htmlcssfile)) $(htmlxsl) $(manual),XSLTPROC $@ $(manual))

$(xmltotypes): $(manual)
	$(call command,xmlto --extensions -o $(topdir)/$@ $@ $(manual),XMLTO $@ $(manual))

clean:
	rm -rf $(cleanfiles)

$(foreach i,$(types) $(foreach type,$(htmltypes),$(type)-nochunks),clean-$(i)):
	rm -rf $(patsubst clean-%,%,$@)

FORCE:
281
bitbake/doc/manual/html.css
Normal file
@@ -0,0 +1,281 @@
/* Feuille de style DocBook du projet Traduc.org */
/* DocBook CSS stylesheet of the Traduc.org project */

/* (c) Jean-Philippe Guérard - 14 août 2004 */
/* (c) Jean-Philippe Guérard - 14 August 2004 */

/* Cette feuille de style est libre, vous pouvez la */
/* redistribuer et la modifier selon les termes de la Licence */
/* Art Libre. Vous trouverez un exemplaire de cette Licence sur */
/* http://tigreraye.org/Petit-guide-du-traducteur.html#licence-art-libre */

/* This work of art is free, you can redistribute it and/or */
/* modify it according to terms of the Free Art license. You */
/* will find a specimen of this license on the Copyleft */
/* Attitude web site: http://artlibre.org as well as on other */
/* sites. */
/* Please note that the French version of this licence as shown */
/* on http://tigreraye.org/Petit-guide-du-traducteur.html#licence-art-libre */
/* is the only official licence of this document. The English */
/* version is only provided to help you understand this licence. */

/* La dernière version de cette feuille de style est toujours */
/* disponible sur : http://tigreraye.org/style.css */
/* Elle est également disponible sur : */
/* http://www.traduc.org/docs/HOWTO/lecture/style.css */

/* The latest version of this stylesheet is available from: */
/* http://tigreraye.org/style.css */
/* It is also available on: */
/* http://www.traduc.org/docs/HOWTO/lecture/style.css */

/* N'hésitez pas à envoyer vos commentaires et corrections à */
/* Jean-Philippe Guérard <jean-philippe.guerard@tigreraye.org> */

/* Please send feedback and bug reports to */
/* Jean-Philippe Guérard <jean-philippe.guerard@tigreraye.org> */

/* $Id: style.css,v 1.14 2004/09/10 20:12:09 fevrier Exp fevrier $ */

/* Présentation générale du document */
/* Overall document presentation */

body {
    /*
    font-family: Apolline, "URW Palladio L", Garamond, jGaramond,
                 "Bitstream Cyberbit", "Palatino Linotype", serif;
    */
    margin: 7%;
    background-color: white;
}

/* Taille du texte */
/* Text size */

* { font-size: 100%; }
/* Gestion des textes mis en relief imbriqués */
/* Embedded emphasis */

em { font-style: italic; }
em em { font-style: normal; }
em em em { font-style: italic; }

/* Titres */
/* Titles */

h1 { font-size: 200%; font-weight: 900; }
h2 { font-size: 160%; font-weight: 900; }
h3 { font-size: 130%; font-weight: bold; }
h4 { font-size: 115%; font-weight: bold; }
h5 { font-size: 108%; font-weight: bold; }
h6 { font-weight: bold; }

/* Nom de famille en petites majuscules (uniquement en français) */
/* Last names in small caps (for French only) */

*[class~="surname"]:lang(fr) { font-variant: small-caps; }

/* Blocs de citation */
/* Quotation blocks */

div[class~="blockquote"] {
    border: solid 2px #AAA;
    padding: 5px;
    margin: 5px;
}

div[class~="blockquote"] > table {
    border: none;
}

/* Blocs littéraux : fond gris clair */
/* Literal blocks: light gray background */

*[class~="literallayout"] {
    background: #f0f0f0;
    padding: 5px;
    margin: 5px;
}

/* Programmes et captures texte : fond bleu clair */
/* Listing and text screen snapshots: light blue background */

*[class~="programlisting"], *[class~="screen"] {
    background: #f0f0ff;
    padding: 5px;
    margin: 5px;
}

/* Les textes à remplacer sont surlignés en vert pâle */
/* Replaceable text is highlighted in pale green */

*[class~="replaceable"] {
    background-color: #98fb98;
    font-style: normal; }

/* Tables : fonds gris clair & bords simples */
/* Tables: light gray background and solid borders */

*[class~="table"] *[class~="title"] { width:100%; border: 0px; }

table {
    border: 1px solid #aaa;
    border-collapse: collapse;
    padding: 2px;
    margin: 5px;
}

/* Listes simples en style table */
/* Simple lists in table presentation */

table[class~="simplelist"] {
    background-color: #F0F0F0;
    margin: 5px;
    border: solid 1px #AAA;
}

table[class~="simplelist"] td {
    border: solid 1px #AAA;
}

/* Les tables */
/* Tables */

*[class~="table"] table {
    background-color: #F0F0F0;
    border: solid 1px #AAA;
}
*[class~="informaltable"] table { background-color: #F0F0F0; }

th,td {
    vertical-align: baseline;
    text-align: left;
    padding: 0.1em 0.3em;
    empty-cells: show;
}

/* Alignement des colonnes */
/* Column alignment */

td[align=center] , th[align=center] { text-align: center; }
td[align=right] , th[align=right] { text-align: right; }
td[align=left] , th[align=left] { text-align: left; }
td[align=justify] , th[align=justify] { text-align: justify; }
/* Pas de marge autour des images */
/* No inside margins for images */

img { border: 0; }

/* Les liens ne sont pas soulignés */
/* No underlines for links */

:link , :visited , :active { text-decoration: none; }

/* Prudence : cadre jaune et fond jaune clair */
/* Caution: yellow border and light yellow background */

*[class~="caution"] {
    border: solid 2px yellow;
    background-color: #ffffe0;
    padding: 1em 6px 1em;
    margin: 5px;
}

*[class~="caution"] th {
    vertical-align: middle
}

*[class~="caution"] table {
    background-color: #ffffe0;
    border: none;
}

/* Note importante : cadre jaune et fond jaune clair */
/* Important: yellow border and light yellow background */

*[class~="important"] {
    border: solid 2px yellow;
    background-color: #ffffe0;
    padding: 1em 6px 1em;
    margin: 5px;
}

*[class~="important"] th {
    vertical-align: middle
}

*[class~="important"] table {
    background-color: #ffffe0;
    border: none;
}

/* Mise en évidence : texte légèrement plus grand */
/* Highlights: slightly larger texts */

*[class~="highlights"] {
    font-size: 110%;
}

/* Note : cadre bleu et fond bleu clair */
/* Notes: blue border and light blue background */

*[class~="note"] {
    border: solid 2px #7099C5;
    background-color: #f0f0ff;
    padding: 1em 6px 1em;
    margin: 5px;
}

*[class~="note"] th {
    vertical-align: middle
}

*[class~="note"] table {
    background-color: #f0f0ff;
    border: none;
}

/* Astuce : cadre vert et fond vert clair */
/* Tip: green border and light green background */

*[class~="tip"] {
    border: solid 2px #00ff00;
    background-color: #f0ffff;
    padding: 1em 6px 1em;
    margin: 5px;
}

*[class~="tip"] th {
    vertical-align: middle;
}

*[class~="tip"] table {
    background-color: #f0ffff;
    border: none;
}

/* Avertissement : cadre rouge et fond rouge clair */
/* Warning: red border and light red background */

*[class~="warning"] {
    border: solid 2px #ff0000;
    background-color: #fff0f0;
    padding: 1em 6px 1em;
    margin: 5px;
}

*[class~="warning"] th {
    vertical-align: middle;
}


*[class~="warning"] table {
    background-color: #fff0f0;
    border: none;
}

/* Fin */
/* The End */
518
bitbake/doc/manual/usermanual.xml
Normal file
@@ -0,0 +1,518 @@
<?xml version="1.0"?>
<!--
  ex:ts=4:sw=4:sts=4:et
  -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
-->
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<book>
<bookinfo>
<title>BitBake User Manual</title>
<authorgroup>
<corpauthor>BitBake Team</corpauthor>
</authorgroup>
<copyright>
<year>2004, 2005, 2006</year>
<holder>Chris Larson</holder>
<holder>Phil Blundell</holder>
</copyright>
<legalnotice>
<para>This work is licensed under the Creative Commons Attribution License. To view a copy of this license, visit <ulink url="http://creativecommons.org/licenses/by/2.5/">http://creativecommons.org/licenses/by/2.5/</ulink> or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.</para>
</legalnotice>
</bookinfo>
<chapter>
<title>Introduction</title>
<section>
<title>Overview</title>
<para>BitBake is, at its simplest, a tool for executing tasks and managing metadata. As such, its similarities to GNU make and other build tools are readily apparent. It was inspired by Portage, the package management system used by the Gentoo Linux distribution. BitBake is the basis of the <ulink url="http://www.openembedded.org/">OpenEmbedded</ulink> project, which is being used to build and maintain a number of embedded Linux distributions, including OpenZaurus and Familiar.</para>
</section>
<section>
<title>Background and Goals</title>
<para>Prior to BitBake, no other build tool adequately met the needs of an aspiring embedded Linux distribution. All of the build systems used by traditional desktop Linux distributions lacked important functionality, and none of the ad hoc <emphasis>buildroot</emphasis> systems, prevalent in the embedded space, were scalable or maintainable.</para>

<para>Some important goals for BitBake were:
<itemizedlist>
<listitem><para>Handle cross-compilation.</para></listitem>
<listitem><para>Handle inter-package dependencies (build time on target architecture, build time on native architecture, and runtime).</para></listitem>
<listitem><para>Support running any number of tasks within a given package, including, but not limited to, fetching upstream sources, unpacking them, patching them, configuring them, et cetera.</para></listitem>
<listitem><para>Must be Linux distribution agnostic (both build and target).</para></listitem>
<listitem><para>Must be architecture agnostic.</para></listitem>
<listitem><para>Must support multiple build and target operating systems (including Cygwin, the BSDs, etc.).</para></listitem>
<listitem><para>Must be able to be self-contained, rather than tightly integrated into the build machine's root filesystem.</para></listitem>
<listitem><para>There must be a way to handle conditional metadata (on target architecture, operating system, distribution, machine).</para></listitem>
<listitem><para>It must be easy for the person using the tools to supply their own local metadata and packages to operate against.</para></listitem>
<listitem><para>Must make it easy to collaborate between multiple projects using BitBake for their builds.</para></listitem>
<listitem><para>Should provide an inheritance mechanism to share common metadata between many packages.</para></listitem>
<listitem><para>Et cetera...</para></listitem>
</itemizedlist>
</para>
<para>BitBake satisfies all these and many more. Flexibility and power have always been the priorities. It is highly extensible, supporting embedded Python code and execution of any arbitrary tasks.</para>
</section>
</chapter>
<chapter>
<title>Metadata</title>
<section>
<title>Description</title>
<para>BitBake metadata can be classified into 3 major areas:</para>
<itemizedlist>
<listitem>
<para>Configuration Files</para>
</listitem>
<listitem>
<para>.bb Files</para>
</listitem>
<listitem>
<para>Classes</para>
</listitem>
</itemizedlist>
<para>What follows are a large number of examples of BitBake metadata. Any syntax which isn't supported in any of the aforementioned areas will be documented as such.</para>
<section>
<title>Basic variable setting</title>
<para><screen><varname>VARIABLE</varname> = "value"</screen></para>
<para>In this example, <varname>VARIABLE</varname> is <literal>value</literal>.</para>
</section>
<section>
<title>Variable expansion</title>
<para>BitBake supports variables referencing one another's contents using a syntax similar to shell scripting.</para>
<para><screen><varname>A</varname> = "aval"
<varname>B</varname> = "pre${A}post"</screen></para>
<para>This results in <varname>A</varname> containing <literal>aval</literal> and <varname>B</varname> containing <literal>preavalpost</literal>.</para>
</section>
<section>
<title>Immediate variable expansion (:=)</title>
<para><literal>:=</literal> results in a variable's contents being expanded immediately, rather than when the variable is actually used.</para>
<para><screen><varname>T</varname> = "123"
<varname>A</varname> := "${B} ${A} test ${T}"
<varname>T</varname> = "456"
<varname>B</varname> = "${T} bval"

<varname>C</varname> = "cval"
<varname>C</varname> := "${C}append"</screen></para>
<para>In that example, <varname>A</varname> would contain <literal> test 123</literal>, <varname>B</varname> would contain <literal>456 bval</literal>, and <varname>C</varname> would be <literal>cvalappend</literal>.</para>
</section>
<section>
<title>Appending (+=) and prepending (=+)</title>
<para><screen><varname>B</varname> = "bval"
<varname>B</varname> += "additionaldata"
<varname>C</varname> = "cval"
<varname>C</varname> =+ "test"</screen></para>
<para>In this example, <varname>B</varname> is now <literal>bval additionaldata</literal> and <varname>C</varname> is <literal>test cval</literal>.</para>
</section>
<section>
<title>Appending (.=) and prepending (=.) without spaces</title>
<para><screen><varname>B</varname> = "bval"
<varname>B</varname> .= "additionaldata"
<varname>C</varname> = "cval"
<varname>C</varname> =. "test"</screen></para>
<para>In this example, <varname>B</varname> is now <literal>bvaladditionaldata</literal> and <varname>C</varname> is <literal>testcval</literal>. In contrast to the append and prepend operators above, no additional space is introduced.</para>
</section>
<section>
<title>Conditional metadata set</title>
<para>OVERRIDES is a <quote>:</quote>-separated variable containing each condition you want to satisfy. So, if you have a variable which is conditional on <quote>arm</quote>, and <quote>arm</quote> is in OVERRIDES, then the <quote>arm</quote>-specific version of the variable is used rather than the non-conditional version. Example:</para>
<para><screen><varname>OVERRIDES</varname> = "architecture:os:machine"
<varname>TEST</varname> = "defaultvalue"
<varname>TEST_os</varname> = "osspecificvalue"
<varname>TEST_condnotinoverrides</varname> = "othercondvalue"</screen></para>
<para>In this example, <varname>TEST</varname> would be <literal>osspecificvalue</literal>, due to the condition <quote>os</quote> being in <varname>OVERRIDES</varname>.</para>
</section>
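The override lookup described above can be sketched in Python. This is purely illustrative — the function and variable names are invented, and BitBake's real precedence rules are more involved; here we simply assume that conditions listed later in OVERRIDES take priority:

```python
def resolve(name, variables, overrides):
    """Pick the most specific value of `name` given the active OVERRIDES.

    Conditions later in the overrides string win (an assumption of this
    toy model); fall back to the unconditional value if none match.
    """
    for cond in reversed(overrides.split(":")):
        key = "%s_%s" % (name, cond)
        if key in variables:
            return variables[key]
    return variables[name]

variables = {
    "TEST": "defaultvalue",
    "TEST_os": "osspecificvalue",
    "TEST_condnotinoverrides": "othercondvalue",
}
print(resolve("TEST", variables, "architecture:os:machine"))  # osspecificvalue
```

With <quote>os</quote> in the overrides string, the `TEST_os` value wins; with overrides that match no conditional variant, the plain `TEST` value is returned.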
<section>
<title>Conditional appending</title>
<para>BitBake also supports appending and prepending to variables based on whether something is in OVERRIDES. Example:</para>
<para><screen><varname>DEPENDS</varname> = "glibc ncurses"
<varname>OVERRIDES</varname> = "machine:local"
<varname>DEPENDS_append_machine</varname> = " libmad"</screen></para>
<para>In this example, <varname>DEPENDS</varname> is set to <literal>glibc ncurses libmad</literal>.</para>
</section>
<section>
<title>Inclusion</title>
<para>Next, there is the <literal>include</literal> directive, which causes BitBake to parse whatever file you specify and insert it at that location, not unlike <command>make</command>. However, if the path specified on the <literal>include</literal> line is a relative path, BitBake will locate the first one it can find within <envar>BBPATH</envar>.</para>
</section>
<section>
<title>Requiring Inclusion</title>
<para>In contrast to the <literal>include</literal> directive, <literal>require</literal> will raise a ParseError if the file to be included cannot be found. Otherwise it behaves just like the <literal>include</literal> directive.</para>
</section>
<section>
<title>Python variable expansion</title>
<para><screen><varname>DATE</varname> = "${@time.strftime('%Y%m%d',time.gmtime())}"</screen></para>
<para>This would result in the <varname>DATE</varname> variable containing today's date.</para>
</section>
<section>
<title>Defining executable metadata</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para><screen>do_mytask () {
    echo "Hello, world!"
}</screen></para>
<para>This is essentially identical to setting a variable, except that this variable happens to be executable shell code.</para>
<para><screen>python do_printdate () {
    import time
    print time.strftime('%Y%m%d', time.gmtime())
}</screen></para>
<para>This is similar to the previous example, but the function is flagged as python so that BitBake knows it is Python code.</para>
</section>
<section>
<title>Defining python functions into the global python namespace</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para><screen>def get_depends(bb, d):
    if bb.data.getVar('SOMECONDITION', d, True):
        return "dependencywithcond"
    else:
        return "dependency"

<varname>SOMECONDITION</varname> = "1"
<varname>DEPENDS</varname> = "${@get_depends(bb, d)}"</screen></para>
<para>This would result in <varname>DEPENDS</varname> containing <literal>dependencywithcond</literal>.</para>
</section>
<section>
<title>Variable Flags</title>
<para>Variables can have associated flags which provide a way of tagging extra information onto a variable. Several flags are used internally by BitBake, but they can be used externally too if needed. The standard operations mentioned above also work on flags.</para>
<para><screen><varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] = "value"</screen></para>
<para>In this example, <varname>VARIABLE</varname> has a flag, <varname>SOMEFLAG</varname>, which is set to <literal>value</literal>.</para>
</section>
<section>
<title>Inheritance</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>The <literal>inherit</literal> directive is a means of specifying what classes of functionality your .bb requires. It is a rudimentary form of inheritance. For example, you can easily abstract out the tasks involved in building a package that uses autoconf and automake, and put that into a bbclass for your packages to make use of. A given bbclass is located by searching for classes/filename.oeclass in <envar>BBPATH</envar>, where filename is what you inherited.</para>
</section>
<section>
<title>Tasks</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>In BitBake, each step that needs to be run for a given .bb is known as a task. There is a command <literal>addtask</literal> to add new tasks (each task must be defined as executable metadata and its name must start with <quote>do_</quote>) and to describe intertask dependencies.</para>
<para><screen>python do_printdate () {
    import time
    print time.strftime('%Y%m%d', time.gmtime())
}

addtask printdate before do_build</screen></para>
<para>This defines the necessary python function and adds it as a task which is now a dependency of do_build (the default task). If anyone executes the do_build task, do_printdate will be run first.</para>
</section>
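The before/after ordering that <literal>addtask</literal> establishes amounts to a dependency graph walked in topological order: a task runs only after everything it depends on has run. A minimal sketch of that idea (illustrative only; the names here are invented and this is not BitBake's scheduler):

```python
def schedule(deps, goal):
    """Return tasks in an order where every dependency runs before its
    dependent -- a plain depth-first topological sort.

    `deps` maps a task name to the set of tasks it depends on.
    """
    order, seen = [], set()

    def visit(task):
        if task in seen:
            return
        seen.add(task)
        for dep in sorted(deps.get(task, ())):
            visit(dep)
        order.append(task)

    visit(goal)
    return order

# "addtask printdate before do_build" means do_build depends on do_printdate
deps = {"do_build": {"do_printdate"}}
print(schedule(deps, "do_build"))  # ['do_printdate', 'do_build']
```

Running the default `do_build` goal thus executes `do_printdate` first, matching the behaviour described above.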
<section>
|
||||
<title>Events</title>
|
||||
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
|
||||
<para>BitBake allows to install event handlers. Events are triggered at certain points during operation, such as, the beginning of operation against a given .bb, the start of a given task, task failure, task success, et cetera. The intent was to make it easy to do things like email notifications on build failure.</para>
|
||||
<para><screen>addhandler myclass_eventhandler
|
||||
python myclass_eventhandler() {
|
||||
from bb.event import NotHandled, getName
|
||||
from bb import data
|
||||
|
||||
print "The name of the Event is %s" % getName(e)
|
||||
print "The file we run for is %s" % data.getVar('FILE', e.data, True)
|
||||
|
||||
return NotHandled
|
||||
}
|
||||
</screen></para><para>
|
||||
This event handler gets called every time an event is triggered. A global variable <varname>e</varname> is defined. <varname>e</varname>.data contains an instance of bb.data. With the getName(<varname>e</varname>)
|
||||
method one can get the name of the triggered event.</para><para>The above event handler prints the name
|
||||
of the event and the content of the <varname>FILE</varname> variable.</para>
|
||||
</section>
|
||||
</section>
|
||||
<section>
|
||||
<title>Dependency Handling</title>
|
||||
<para>Bitbake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifing task dependencies is therefore needed. </para>
|
||||
<section>
|
||||
<title>Dependencies internal to the .bb file</title>
|
||||
<para>Where the dependencies are internal to a given .bb file, the dependencies are handled by the previously detailed addtask directive.</para>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>DEPENDS</title>
|
||||
<para>DEPENDS is taken to specify build time dependencies. The 'deptask' flag for tasks is used to signify the task of each DEPENDS which must have completed before that task can be executed.</para>
|
||||
<para><screen>do_configure[deptask] = "do_populate_staging"</screen></para>
|
||||
<para>means the do_populate_staging task of each item in DEPENDS must have completed before do_configure can execute.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>RDEPENDS</title>
|
||||
<para>RDEPENDS is taken to specify runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each RDEPENDS which must have completed before that task can be executed.</para>
|
||||
<para><screen>do_package_write[rdeptask] = "do_package"</screen></para>
|
||||
<para>means the do_package task of each item in RDEPENDS must have completed before do_package_write can execute.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Recursive DEPENDS</title>
|
||||
<para>These are specified with the 'recdeptask' flag and is used signify the task(s) of each DEPENDS which must have completed before that task can be executed. It applies recursively so also, the DEPENDS of each item in the original DEPENDS must be met and so on.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Recursive RDEPENDS</title>
|
||||
<para>These are specified with the 'recrdeptask' flag and is used signify the task(s) of each RDEPENDS which must have completed before that task can be executed. It applies recursively so also, the RDEPENDS of each item in the original RDEPENDS must be met and so on. It also runs all DEPENDS first too.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Inter Task</title>
|
||||
<para>The 'depends' flag for tasks is a more generic form of which allows an interdependency on specific tasks rather than specifying the data in DEPENDS or RDEPENDS.</para>
|
||||
<para><screen>do_patch[depends] = "quilt-native:do_populate_staging"</screen></para>
|
||||
<para>means the do_populate_staging task of the target quilt-native must have completed before the do_patch can execute.</para>
|
||||
</section>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>Parsing</title>
|
||||
<section>
|
||||
<title>Configuration Files</title>
|
||||
<para>The first of the classifications of metadata in BitBake is configuration metadata. This metadata is global, and therefore affects <emphasis>all</emphasis> packages and tasks which are executed. Currently, BitBake has hardcoded knowledge of a single configuration file. It expects to find 'conf/bitbake.conf' somewhere in the user specified <envar>BBPATH</envar>. That configuration file generally has include directives to pull in any other metadata (generally files specific to architecture, machine, <emphasis>local</emphasis> and so on.</para>
|
||||
<para>Only variable definitions and include directives are allowed in .conf files.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Classes</title>
|
||||
<para>BitBake classes are our rudamentary inheritence mechanism. As briefly mentioned in the metadata introduction, they're parsed when an <literal>inherit</literal> directive is encountered, and they are located in classes/ relative to the dirs in <envar>BBPATH</envar>.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>.bb Files</title>
|
||||
<para>A BitBake (.bb) file is a logical unit of tasks to be executed. Normally this is a package to be built. Inter-.bb dependencies are obeyed. The files themselves are located via the <varname>BBFILES</varname> variable, which is set to a space seperated list of .bb files, and does handle wildcards.</para>
|
||||
</section>
|
||||
</section>
|
||||
</chapter>
|
||||
|
||||
<chapter>
|
||||
<title>File Download support</title>
|
||||
<section>
|
||||
<title>Overview</title>
|
||||
<para>BitBake provides support to download files this procedure is called fetching. The SRC_URI is normally used to indicate BitBake which files to fetch. The next sections will describe th available fetchers and the options they have. Each Fetcher honors a set of Variables and
|
||||
a per URI parameters separated by a <quote>;</quote> consisting of a key and a value. The semantic of the Variables and Parameters are defined by the Fetcher. BitBakes tries to have a consistent semantic between the different Fetchers.
|
||||
</para>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>Local File Fetcher</title>
|
||||
<para>The URN for the Local File Fetcher is <emphasis>file</emphasis>. The filename can be either absolute or relative. If the filename is relative <varname>FILESPATH</varname> and <varname>FILESDIR</varname> will be used to find the appropriate relative file depending on the <varname>OVERRIDES</varname>. Single files and complete directories can be specified.
|
||||
<screen><varname>SRC_URI</varname>= "file://relativefile.patch"
|
||||
<varname>SRC_URI</varname>= "file://relativefile.patch;this=ignored"
|
||||
<varname>SRC_URI</varname>= "file:///Users/ich/very_important_software"
|
||||
</screen>
|
||||
</para>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>CVS File Fetcher</title>
|
||||
<para>The URN for the CVS Fetcher is <emphasis>cvs</emphasis>. This Fetcher honors the variables <varname>DL_DIR</varname>, <varname>SRCDATE</varname>, <varname>FETCHCOMMAND_cvs</varname>, <varname>UPDATECOMMAND_cvs</varname>. <varname>DL_DIRS</varname> specifies where a temporary checkout is saved, <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build), <varname>FETCHCOMMAND</varname> and <varname>UPDATECOMMAND</varname> specify which executables should be used when doing the CVS checkout or update.
|
||||
</para>
|
||||
<para>The supported Parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname>. The <varname>module</varname> specifies which module to check out, the <varname>tag</varname> describes which CVS TAG should be used for the checkout by default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration to checkout a specific date. The special value of "now" will cause the checkout to be updated on every build.<varname>method</varname> is by default <emphasis>pserver</emphasis>, if <emphasis>ext</emphasis> is used the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally <varname>localdir</varname> is used to checkout into a special directory relative to <varname>CVSDIR></varname>.
|
||||
<screen><varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
|
||||
<varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
|
||||
</screen>
|
||||
</para>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>HTTP/FTP Fetcher</title>
|
||||
<para>The URNs for the HTTP/FTP are <emphasis>http</emphasis>, <emphasis>https</emphasis> and <emphasis>ftp</emphasis>. This Fetcher honors the variables <varname>DL_DIR</varname>, <varname>FETCHCOMMAND_wget</varname>, <varname>PREMIRRORS</varname>, <varname>MIRRORS</varname>. The <varname>DL_DIR</varname> defines where to store the fetched file, <varname>FETCHCOMMAND</varname> contains the command used for fetching. <quote>${URI}</quote> and <quote>${FILES}</quote> will be replaced by the uri and basename of the to be fetched file. <varname>PREMIRRORS</varname>
|
||||
will be tried first when fetching a file if that fails the actual file will be tried and finally all <varname>MIRRORS</varname> will be tried.
|
||||
</para>
|
||||
<para>The only supported Parameter is <varname>md5sum</varname>. After a fetch the <varname>md5sum</varname> of the file will be calculated and the two sums will be compared.
|
||||
</para>
|
||||
<para><screen><varname>SRC_URI</varname> = "http://oe.handhelds.org/not_there.aac;md5sum=12343"
|
||||
<varname>SRC_URI</varname> = "ftp://oe.handhelds.org/not_there_as_well.aac;md5sum=1234"
|
||||
<varname>SRC_URI</varname> = "ftp://you@oe.handheld.sorg/home/you/secret.plan;md5sum=1234"
|
||||
</screen></para>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>SVK Fetcher</title>
|
||||
<para>
|
||||
<emphasis>Currently NOT supported</emphasis>
|
||||
</para>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>SVN Fetcher</title>
|
||||
<para>The URN for the SVN Fetcher is <emphasis>svn</emphasis>.
|
||||
</para>
|
||||
<para>This Fetcher honors the variables <varname>FETCHCOMMAND_svn</varname>, <varname>DL_DIR</varname>, <varname>SRCDATE</varname>. <varname>FETCHCOMMAND</varname> contains the subversion command, <varname>DL_DIR</varname> is the directory where tarballs will be saved, <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build).
|
||||
</para>
|
||||
<para>The supported Parameters are <varname>proto</varname>, <varname>rev</varname>. <varname>proto</varname> is the subversion prototype, <varname>rev</varname> is the subversions revision.
|
||||
</para>
|
||||
<para><screen><varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn;module=vip;proto=http;rev=667"
|
||||
<varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn/;module=opie;proto=svn+ssh;date=20060126"
|
||||
</screen></para>
|
||||
</section>
|
||||
|
||||
<section>
<title>GIT Fetcher</title>
<para>The URN for the GIT Fetcher is <emphasis>git</emphasis>.
</para>
<para>The variables <varname>DL_DIR</varname> and <varname>GITDIR</varname> are used. <varname>DL_DIR</varname> is used to store the checked-out version, and <varname>GITDIR</varname> is the base directory into which the git tree is cloned.
</para>
<para>The parameters are <emphasis>tag</emphasis> and <emphasis>protocol</emphasis>. <emphasis>tag</emphasis> is a git tag; the default is <quote>master</quote>. <emphasis>protocol</emphasis> is the git protocol to use and defaults to <quote>rsync</quote>.
</para>
<para><screen><varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
<varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
</screen></para>
</section>
</chapter>


<chapter>
<title>Commands</title>
<section>
<title>bbread</title>
<para>bbread is a command for displaying BitBake metadata. When run with no arguments, it has the core parse 'conf/bitbake.conf', as located via BBPATH, and displays the result. If you supply a file on the command line, such as a .bb file, it is parsed afterwards, using the aforementioned configuration metadata.</para>
<para><emphasis>NOTE: the standalone bbread command has been removed. Use bitbake -e instead.
</emphasis></para>
</section>
<section>
<title>bitbake</title>
<section>
<title>Introduction</title>
<para>bitbake is the primary command in the system. It facilitates executing tasks in a single .bb file, or executing a given task on a set of multiple .bb files, accounting for interdependencies amongst them.</para>
</section>
<section>
<title>Usage and Syntax</title>
<para>
<screen><prompt>$ </prompt>bitbake --help
usage: bitbake [options] [package ...]

Executes the specified task (default is 'build') for a given set of BitBake files.
It expects that BBFILES is defined, which is a space separated list of files to
be executed. BBFILES does support wildcards.
Default BBFILES are the .bb files in the current directory.

options:
  --version            show program's version number and exit
  -h, --help           show this help message and exit
  -b BUILDFILE, --buildfile=BUILDFILE
                       execute the task against this .bb file, rather than a
                       package from BBFILES.
  -k, --continue       continue as much as possible after an error. While the
                       target that failed, and those that depend on it,
                       cannot be remade, the other dependencies of these
                       targets can be processed all the same.
  -f, --force          force run of specified cmd, regardless of stamp status
  -i, --interactive    drop into the interactive mode, also called the BitBake
                       shell.
  -c CMD, --cmd=CMD    Specify task to execute. Note that this only executes
                       the specified task for the providee and the packages
                       it depends on, i.e. 'compile' does not implicitly call
                       stage for the dependencies (IOW: use only if you know
                       what you are doing). Depending on the base.bbclass a
                       listtasks task is defined and will show available
                       tasks
  -r FILE, --read=FILE read the specified file before bitbake.conf
  -v, --verbose        output more chit-chat to the terminal
  -D, --debug          Increase the debug level. You can specify this more
                       than once.
  -n, --dry-run        don't execute, just go through the motions
  -p, --parse-only     quit after parsing the BB files (developers only)
  -d, --disable-psyco  disable using the psyco just-in-time compiler (not
                       recommended)
  -s, --show-versions  show current and preferred versions of all packages
  -e, --environment    show the global or per-package environment (this is
                       what used to be bbread)
  -g, --graphviz       emit the dependency trees of the specified packages in
                       the dot syntax
  -I IGNORED_DOT_DEPS, --ignore-deps=IGNORED_DOT_DEPS
                       Stop processing at the given list of dependencies when
                       generating dependency graphs. This can help to make
                       the graph more appealing
  -l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
                       Show debug logging for the specified logging domains

</screen>
</para>
<para>
<example>
<title>Executing a task against a single .bb</title>
<para>Executing tasks for a single file is relatively simple. You specify the file in question, and bitbake parses it and executes the specified task (or <quote>build</quote> by default). It obeys intertask dependencies when doing so.</para>
<para><quote>clean</quote> task:</para>
<para><screen><prompt>$ </prompt>bitbake -b blah_1.0.bb -c clean</screen></para>
<para><quote>build</quote> task:</para>
<para><screen><prompt>$ </prompt>bitbake -b blah_1.0.bb</screen></para>
</example>
</para>
<para>
<example>
<title>Executing tasks against a set of .bb files</title>
<para>There are a number of additional complexities introduced when one wants to manage multiple .bb files. Clearly there needs to be a way to tell bitbake which files are available and, of those, which we want to execute at this time. There also needs to be a way for each .bb to express its dependencies, both for build time and runtime. There must be a way for the user to express their preferences when multiple .bb's provide the same functionality, or when there are multiple versions of a .bb.</para>
<para>The next section, Metadata, outlines how one goes about specifying such things.</para>
<para>Note that the bitbake command, when not using --buildfile, accepts a <varname>PROVIDER</varname>, not a filename or anything else. By default, a .bb generally PROVIDES its packagename, packagename-version, and packagename-version-revision.</para>
<screen><prompt>$ </prompt>bitbake blah</screen>
<screen><prompt>$ </prompt>bitbake blah-1.0</screen>
<screen><prompt>$ </prompt>bitbake blah-1.0-r0</screen>
<screen><prompt>$ </prompt>bitbake -c clean blah</screen>
<screen><prompt>$ </prompt>bitbake virtual/whatever</screen>
<screen><prompt>$ </prompt>bitbake -c clean virtual/whatever</screen>
</example>
<example>
<title>Generating dependency graphs</title>
<para>BitBake is able to generate dependency graphs using the dot syntax. These graphs can be converted
to images using the <application>dot</application> application from <ulink url="http://www.graphviz.org">graphviz</ulink>.
Two files will be written into the current working directory: <emphasis>depends.dot</emphasis>, containing dependency information at the package level, and <emphasis>task-depends.dot</emphasis>, containing a breakdown of the dependencies at the task level. Common dependencies can be omitted from the graph with the <prompt>-I</prompt> option, which can make the graph more readable; for example, <varname>DEPENDS</varname> from inherited classes such as base.bbclass can be removed this way.</para>
<screen><prompt>$ </prompt>bitbake -g blah</screen>
<screen><prompt>$ </prompt>bitbake -g -I virtual/whatever -I bloom blah</screen>
</example>
</para>
</section>
<section>
<title>Special variables</title>
<para>Certain variables affect bitbake operation:</para>
<section>
<title><varname>BB_NUMBER_THREADS</varname></title>
<para> The number of threads bitbake should run at once (default: 1).</para>
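<para>For example, to let bitbake run up to four tasks in parallel, one might set the following in a .conf file that bitbake parses (the value of 4 is only illustrative; choose one suited to the build host):</para>
<para><screen>BB_NUMBER_THREADS = "4"</screen></para>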
</section>
</section>
<section>
<title>Metadata</title>
<para>As you may have seen in the usage information, or in the information about .bb files, the BBFILES variable is how the bitbake tool locates its files. This variable is a space separated list of files that are available, and supports wildcards.
<example>
<title>Setting BBFILES</title>
<programlisting><varname>BBFILES</varname> = "/path/to/bbfiles/*.bb"</programlisting>
</example></para>
<para>With regard to dependencies, it expects the .bb to define a <varname>DEPENDS</varname> variable, which contains a space separated list of <quote>package names</quote>, which themselves are the <varname>PN</varname> variable. The <varname>PN</varname> variable is, by default, set to a component of the .bb filename.</para>
<example>
<title>Depending on another .bb</title>
<para>a.bb:
<screen>PN = "package-a"
DEPENDS += "package-b"</screen>
</para>
<para>b.bb:
<screen>PN = "package-b"</screen>
</para>
</example>
<example>
<title>Using PROVIDES</title>
<para>This example shows the usage of the PROVIDES variable, which allows a given .bb to specify what functionality it provides.</para>
<para>package1.bb:
<screen>PROVIDES += "virtual/package"</screen>
</para>
<para>package2.bb:
<screen>DEPENDS += "virtual/package"</screen>
</para>
<para>package3.bb:
<screen>PROVIDES += "virtual/package"</screen>
</para>
<para>As you can see, there are two different .bb's that provide the same functionality (virtual/package). Clearly, there needs to be a way for the person running bitbake to control which of those providers gets used. There is, indeed, such a way.</para>
<para>The following would go into a .conf file, to select package1:
<screen>PREFERRED_PROVIDER_virtual/package = "package1"</screen>
</para>
</example>
<example>
<title>Specifying version preference</title>
<para>When there are multiple <quote>versions</quote> of a given package, bitbake defaults to selecting the most recent version, unless otherwise specified. If the .bb in question has a <varname>DEFAULT_PREFERENCE</varname> set lower than the other .bb's (default is 0), then it will not be selected. This allows the person or persons maintaining the repository of .bb files to specify their preference for the default selected version. In addition, the user can specify their preferred version.</para>
<para>If the first .bb is named <filename>a_1.1.bb</filename>, then the <varname>PN</varname> variable will be set to <quote>a</quote>, and the <varname>PV</varname> variable will be set to 1.1.</para>
<para>If we then have an <filename>a_1.2.bb</filename>, bitbake will choose 1.2 by default. However, if we define the following variable in a .conf file that bitbake parses, we can change that.
<screen>PREFERRED_VERSION_a = "1.1"</screen>
</para>
</example>
<example>
<title>Using <quote>bbfile collections</quote></title>
<para>bbfile collections exist to allow the user to have multiple repositories of bbfiles that contain the same exact package. For example, one could easily use them to make one's own local copy of an upstream repository, but with custom modifications that one does not want upstream. Usage:</para>
<screen>BBFILES = "/stuff/openembedded/*/*.bb /stuff/openembedded.modified/*/*.bb"
BBFILE_COLLECTIONS = "upstream local"
BBFILE_PATTERN_upstream = "^/stuff/openembedded/"
BBFILE_PATTERN_local = "^/stuff/openembedded.modified/"
BBFILE_PRIORITY_upstream = "5"
BBFILE_PRIORITY_local = "10"</screen>
</example>
</section>
</section>
</chapter>
</book>
bitbake/lib/bb/COW.py (new file, 320 lines)
@@ -0,0 +1,320 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# This is a copy on write dictionary and set which abuses classes to try and be nice and fast.
#
# Copyright (C) 2006 Tim Ansell
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Please Note:
# Be careful when using mutable types (i.e. Dicts and Lists) - operations involving these are SLOW.
# Assign a file to __warn__ to get warnings about slow operations.
#

from inspect import getmro

import copy
import types
types.ImmutableTypes = tuple([ \
    types.BooleanType, \
    types.ComplexType, \
    types.FloatType, \
    types.IntType, \
    types.LongType, \
    types.NoneType, \
    types.TupleType, \
    frozenset] + \
    list(types.StringTypes))

MUTABLE = "__mutable__"

class COWMeta(type):
    pass

class COWDictMeta(COWMeta):
    __warn__ = False
    __hasmutable__ = False
    __marker__ = tuple()

    def __str__(cls):
        # FIXME: I have magic numbers!
        return "<COWDict Level: %i Current Keys: %i>" % (cls.__count__, len(cls.__dict__) - 3)
    __repr__ = __str__

    def cow(cls):
        class C(cls):
            __count__ = cls.__count__ + 1
        return C
    copy = cow
    __call__ = cow

    def __setitem__(cls, key, value):
        if not isinstance(value, types.ImmutableTypes):
            if not isinstance(value, COWMeta):
                cls.__hasmutable__ = True
            key += MUTABLE
        setattr(cls, key, value)

    def __getmutable__(cls, key, readonly=False):
        nkey = key + MUTABLE
        try:
            return cls.__dict__[nkey]
        except KeyError:
            pass

        value = getattr(cls, nkey)
        if readonly:
            return value

        if not cls.__warn__ is False and not isinstance(value, COWMeta):
            print >> cls.__warn__, "Warning: Doing a copy because %s is a mutable type." % key
        try:
            value = value.copy()
        except AttributeError, e:
            value = copy.copy(value)
        setattr(cls, nkey, value)
        return value

    __getmarker__ = []
    def __getreadonly__(cls, key, default=__getmarker__):
        """\
        Get a value (even if mutable) which you promise not to change.
        """
        return cls.__getitem__(key, default, True)

    def __getitem__(cls, key, default=__getmarker__, readonly=False):
        try:
            try:
                value = getattr(cls, key)
            except AttributeError:
                value = cls.__getmutable__(key, readonly)

            # This is for values which have been deleted
            if value is cls.__marker__:
                raise AttributeError("key %s does not exist." % key)

            return value
        except AttributeError, e:
            if not default is cls.__getmarker__:
                return default

            raise KeyError(str(e))

    def __delitem__(cls, key):
        cls.__setitem__(key, cls.__marker__)

    def __revertitem__(cls, key):
        if not cls.__dict__.has_key(key):
            key += MUTABLE
        delattr(cls, key)

    def has_key(cls, key):
        value = cls.__getreadonly__(key, cls.__marker__)
        if value is cls.__marker__:
            return False
        return True

    def iter(cls, type, readonly=False):
        for key in dir(cls):
            if key.startswith("__"):
                continue

            if key.endswith(MUTABLE):
                key = key[:-len(MUTABLE)]

            if type == "keys":
                yield key

            try:
                if readonly:
                    value = cls.__getreadonly__(key)
                else:
                    value = cls[key]
            except KeyError:
                continue

            if type == "values":
                yield value
            if type == "items":
                yield (key, value)
        raise StopIteration()

    def iterkeys(cls):
        return cls.iter("keys")
    def itervalues(cls, readonly=False):
        if not cls.__warn__ is False and cls.__hasmutable__ and readonly is False:
            print >> cls.__warn__, "Warning: If you aren't going to change any of the values call with True."
        return cls.iter("values", readonly)
    def iteritems(cls, readonly=False):
        if not cls.__warn__ is False and cls.__hasmutable__ and readonly is False:
            print >> cls.__warn__, "Warning: If you aren't going to change any of the values call with True."
        return cls.iter("items", readonly)

class COWSetMeta(COWDictMeta):
    def __str__(cls):
        # FIXME: I have magic numbers!
        return "<COWSet Level: %i Current Keys: %i>" % (cls.__count__, len(cls.__dict__) - 3)
    __repr__ = __str__

    def cow(cls):
        class C(cls):
            __count__ = cls.__count__ + 1
        return C

    def add(cls, value):
        COWDictMeta.__setitem__(cls, repr(hash(value)), value)

    def remove(cls, value):
        COWDictMeta.__delitem__(cls, repr(hash(value)))

    def __in__(cls, value):
        return COWDictMeta.has_key(cls, repr(hash(value)))

    def iterkeys(cls):
        raise TypeError("sets don't have keys")

    def iteritems(cls):
        raise TypeError("sets don't have 'items'")

# These are the actual classes you use!
class COWDictBase(object):
    __metaclass__ = COWDictMeta
    __count__ = 0

class COWSetBase(object):
    __metaclass__ = COWSetMeta
    __count__ = 0

if __name__ == "__main__":
    import sys
    COWDictBase.__warn__ = sys.stderr
    a = COWDictBase()
    print "a", a

    a['a'] = 'a'
    a['b'] = 'b'
    a['dict'] = {}

    b = a.copy()
    print "b", b
    b['c'] = 'b'

    print

    print "a", a
    for x in a.iteritems():
        print x
    print "--"
    print "b", b
    for x in b.iteritems():
        print x
    print

    b['dict']['a'] = 'b'
    b['a'] = 'c'

    print "a", a
    for x in a.iteritems():
        print x
    print "--"
    print "b", b
    for x in b.iteritems():
        print x
    print

    try:
        b['dict2']
    except KeyError, e:
        print "Okay!"

    a['set'] = COWSetBase()
    a['set'].add("o1")
    a['set'].add("o1")
    a['set'].add("o2")

    print "a", a
    for x in a['set'].itervalues():
        print x
    print "--"
    print "b", b
    for x in b['set'].itervalues():
        print x
    print

    b['set'].add('o3')

    print "a", a
    for x in a['set'].itervalues():
        print x
    print "--"
    print "b", b
    for x in b['set'].itervalues():
        print x
    print

    a['set2'] = set()
    a['set2'].add("o1")
    a['set2'].add("o1")
    a['set2'].add("o2")

    print "a", a
    for x in a.iteritems():
        print x
    print "--"
    print "b", b
    for x in b.iteritems(readonly=True):
        print x
    print

    del b['b']
    try:
        print b['b']
    except KeyError:
        print "Yay! deleted key raises error"

    if b.has_key('b'):
        print "Boo!"
    else:
        print "Yay - has_key with delete works!"

    print "a", a
    for x in a.iteritems():
        print x
    print "--"
    print "b", b
    for x in b.iteritems(readonly=True):
        print x
    print

    b.__revertitem__('b')

    print "a", a
    for x in a.iteritems():
        print x
    print "--"
    print "b", b
    for x in b.iteritems(readonly=True):
        print x
    print

    b.__revertitem__('dict')
    print "a", a
    for x in a.iteritems():
        print x
    print "--"
    print "b", b
    for x in b.iteritems(readonly=True):
        print x
    print
bitbake/lib/bb/__init__.py (new file, 1311 lines)

bitbake/lib/bb/build.py (new file, 471 lines)
@@ -0,0 +1,471 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake 'Build' implementation
#
# Core code for function execution and task handling in the
# BitBake build tools.
#
# Copyright (C) 2003, 2004 Chris Larson
#
# Based on Gentoo's portage.py.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

from bb import data, fetch, event, mkdirhier, utils
import bb, os

# events
class FuncFailed(Exception):
    """Executed function failed"""

class EventException(Exception):
    """Exception which is associated with an Event."""

    def __init__(self, msg, event):
        self.args = msg, event

class TaskBase(event.Event):
    """Base class for task events"""

    def __init__(self, t, d ):
        self._task = t
        event.Event.__init__(self, d)

    def getTask(self):
        return self._task

    def setTask(self, task):
        self._task = task

    task = property(getTask, setTask, None, "task property")

class TaskStarted(TaskBase):
    """Task execution started"""

class TaskSucceeded(TaskBase):
    """Task execution completed"""

class TaskFailed(TaskBase):
    """Task execution failed"""

class InvalidTask(TaskBase):
    """Invalid Task"""

# functions

def exec_func(func, d, dirs = None):
    """Execute a BB 'function'"""

    body = data.getVar(func, d)
    if not body:
        return

    cleandirs = (data.expand(data.getVarFlag(func, 'cleandirs', d), d) or "").split()
    for cdir in cleandirs:
        os.system("rm -rf %s" % cdir)

    if not dirs:
        dirs = (data.expand(data.getVarFlag(func, 'dirs', d), d) or "").split()
    for adir in dirs:
        mkdirhier(adir)

    if len(dirs) > 0:
        adir = dirs[-1]
    else:
        adir = data.getVar('B', d, 1)

    adir = data.expand(adir, d)

    try:
        prevdir = os.getcwd()
    except OSError:
        prevdir = data.expand('${TOPDIR}', d)
    if adir and os.access(adir, os.F_OK):
        os.chdir(adir)

    locks = []
    lockfiles = (data.expand(data.getVarFlag(func, 'lockfiles', d), d) or "").split()
    for lock in lockfiles:
        locks.append(bb.utils.lockfile(lock))

    if data.getVarFlag(func, "python", d):
        exec_func_python(func, d)
    else:
        exec_func_shell(func, d)

    for lock in locks:
        bb.utils.unlockfile(lock)

    if os.path.exists(prevdir):
        os.chdir(prevdir)

def exec_func_python(func, d):
    """Execute a python BB 'function'"""
    import re, os

    tmp = "def " + func + "():\n%s" % data.getVar(func, d)
    tmp += '\n' + func + '()'
    comp = utils.better_compile(tmp, func, bb.data.getVar('FILE', d, 1) )
    prevdir = os.getcwd()
    g = {} # globals
    g['bb'] = bb
    g['os'] = os
    g['d'] = d
    utils.better_exec(comp, g, tmp, bb.data.getVar('FILE', d, 1))
    if os.path.exists(prevdir):
        os.chdir(prevdir)

def exec_func_shell(func, d):
    """Execute a shell BB 'function'. Returns true if execution was successful.

    For this, it creates a shell script in the tmp directory, writes the local
    data into it and finally executes it. The output of the shell will end up in a log file and stdout.

    Note on directory behavior: the 'dirs' varflag should contain a list
    of the directories you need created prior to execution. The last
    item in the list is where we will chdir/cd to.
    """
    import sys

    deps = data.getVarFlag(func, 'deps', d)
    check = data.getVarFlag(func, 'check', d)
    interact = data.getVarFlag(func, 'interactive', d)
    if check in globals():
        if globals()[check](func, deps):
            return

    global logfile
    t = data.getVar('T', d, 1)
    if not t:
        return 0
    mkdirhier(t)
    logfile = "%s/log.%s.%s" % (t, func, str(os.getpid()))
    runfile = "%s/run.%s.%s" % (t, func, str(os.getpid()))

    f = open(runfile, "w")
    f.write("#!/bin/sh -e\n")
    if bb.msg.debug_level['default'] > 0: f.write("set -x\n")
    data.emit_env(f, d)

    f.write("cd %s\n" % os.getcwd())
    if func: f.write("%s\n" % func)
    f.close()
    os.chmod(runfile, 0775)
    if not func:
        bb.msg.error(bb.msg.domain.Build, "Function not specified")
        raise FuncFailed()

    # open logs
    si = file('/dev/null', 'r')
    try:
        if bb.msg.debug_level['default'] > 0:
            so = os.popen("tee \"%s\"" % logfile, "w")
        else:
            so = file(logfile, 'w')
    except OSError, e:
        bb.msg.error(bb.msg.domain.Build, "opening log file: %s" % e)
        pass

    se = so

    if not interact:
        # dup the existing fds so we don't lose them
        osi = [os.dup(sys.stdin.fileno()), sys.stdin.fileno()]
        oso = [os.dup(sys.stdout.fileno()), sys.stdout.fileno()]
        ose = [os.dup(sys.stderr.fileno()), sys.stderr.fileno()]

        # replace those fds with our own
        os.dup2(si.fileno(), osi[1])
        os.dup2(so.fileno(), oso[1])
        os.dup2(se.fileno(), ose[1])

    # execute function
    prevdir = os.getcwd()
    if data.getVarFlag(func, "fakeroot", d):
        maybe_fakeroot = "PATH=\"%s\" fakeroot " % bb.data.getVar("PATH", d, 1)
    else:
        maybe_fakeroot = ''
    lang_environment = "LC_ALL=C "
    ret = os.system('%s%ssh -e %s' % (lang_environment, maybe_fakeroot, runfile))
    try:
        os.chdir(prevdir)
    except:
        pass

    if not interact:
        # restore the backups
        os.dup2(osi[0], osi[1])
        os.dup2(oso[0], oso[1])
        os.dup2(ose[0], ose[1])

        # close our logs
        si.close()
        so.close()
        se.close()

        # close the backup fds
        os.close(osi[0])
        os.close(oso[0])
        os.close(ose[0])

    if ret == 0:
        if bb.msg.debug_level['default'] > 0:
            os.remove(runfile)
            # os.remove(logfile)
        return
    else:
        bb.msg.error(bb.msg.domain.Build, "function %s failed" % func)
        if data.getVar("BBINCLUDELOGS", d):
            bb.msg.error(bb.msg.domain.Build, "log data follows (%s)" % logfile)
            number_of_lines = data.getVar("BBINCLUDELOGS_LINES", d)
            if number_of_lines:
                os.system('tail -n%s %s' % (number_of_lines, logfile))
            else:
                f = open(logfile, "r")
                while True:
                    l = f.readline()
                    if l == '':
                        break
                    l = l.rstrip()
                    print '| %s' % l
                f.close()
        else:
            bb.msg.error(bb.msg.domain.Build, "see log in %s" % logfile)
        raise FuncFailed( logfile )


def exec_task(task, d):
    """Execute a BB 'task'

    The primary difference between executing a task versus executing
    a function is that a task exists in the task digraph, and therefore
    has dependencies amongst other tasks."""

    # check if the task is in the graph..
    task_graph = data.getVar('_task_graph', d)
    if not task_graph:
        task_graph = bb.digraph()
        data.setVar('_task_graph', task_graph, d)
    task_cache = data.getVar('_task_cache', d)
    if not task_cache:
        task_cache = []
        data.setVar('_task_cache', task_cache, d)
    if not task_graph.hasnode(task):
        raise EventException("Missing node in task graph", InvalidTask(task, d))

    # check whether this task needs executing..
    if stamp_is_current(task, d):
        return 1

    # follow digraph path up, then execute our way back down
    def execute(graph, item):
        if data.getVarFlag(item, 'task', d):
            if item in task_cache:
                return 1

            if task != item:
                # deeper than toplevel, exec w/ deps
                exec_task(item, d)
                return 1

            try:
                bb.msg.debug(1, bb.msg.domain.Build, "Executing task %s" % item)
                old_overrides = data.getVar('OVERRIDES', d, 0)
                localdata = data.createCopy(d)
                data.setVar('OVERRIDES', 'task_%s:%s' % (item, old_overrides), localdata)
                data.update_data(localdata)
                event.fire(TaskStarted(item, localdata))
                exec_func(item, localdata)
                event.fire(TaskSucceeded(item, localdata))
                task_cache.append(item)
                data.setVar('_task_cache', task_cache, d)
            except FuncFailed, reason:
                bb.msg.note(1, bb.msg.domain.Build, "Task failed: %s" % reason )
                failedevent = TaskFailed(item, d)
                event.fire(failedevent)
                raise EventException("Function failed in task: %s" % reason, failedevent)

    if data.getVarFlag(task, 'dontrundeps', d):
        execute(None, task)
    else:
        task_graph.walkdown(task, execute)

    # make stamp, or cause event and raise exception
    if not data.getVarFlag(task, 'nostamp', d) and not data.getVarFlag(task, 'selfstamp', d):
        make_stamp(task, d)

def extract_stamp_data(d, fn):
    """
    Extracts stamp data from d which is either a data dictionary (fn unset)
    or a dataCache entry (fn set).
    """
    if fn:
        return (d.task_queues[fn], d.stamp[fn], d.task_deps[fn])
    task_graph = data.getVar('_task_graph', d)
    if not task_graph:
        task_graph = bb.digraph()
        data.setVar('_task_graph', task_graph, d)
    return (task_graph, data.getVar('STAMP', d, 1), None)

def extract_stamp(d, fn):
    """
    Extracts the stamp format, which is either a data dictionary (fn unset)
    or a dataCache entry (fn set).
    """
    if fn:
        return d.stamp[fn]
    return data.getVar('STAMP', d, 1)

def stamp_is_current(task, d, file_name = None, checkdeps = 1):
    """
    Check status of a given task's stamp.
    Returns 0 if it is not current and needs updating.
    (d can be a data dict or dataCache)
    """

    (task_graph, stampfn, taskdep) = extract_stamp_data(d, file_name)

    if not stampfn:
        return 0

    stampfile = "%s.%s" % (stampfn, task)
    if not os.access(stampfile, os.F_OK):
        return 0

    if checkdeps == 0:
        return 1

    import stat
    tasktime = os.stat(stampfile)[stat.ST_MTIME]

    _deps = []
    def checkStamp(graph, task):
        # check for existence
        if file_name:
            if 'nostamp' in taskdep and task in taskdep['nostamp']:
                return 1
        else:
            if data.getVarFlag(task, 'nostamp', d):
                return 1

        if not stamp_is_current(task, d, file_name, 0 ):
            return 0

        depfile = "%s.%s" % (stampfn, task)
        deptime = os.stat(depfile)[stat.ST_MTIME]
        if deptime > tasktime:
            return 0
        return 1

    return task_graph.walkdown(task, checkStamp)

def stamp_internal(task, d, file_name):
    """
    Internal stamp helper function
    Removes any stamp for the given task
    Makes sure the stamp directory exists
    Returns the stamp path+filename
    """
    stamp = extract_stamp(d, file_name)
    if not stamp:
        return
    stamp = "%s.%s" % (stamp, task)
    mkdirhier(os.path.dirname(stamp))
    # Remove the file and recreate to force timestamp
    # change on broken NFS filesystems
    if os.access(stamp, os.F_OK):
        os.remove(stamp)
    return stamp

def make_stamp(task, d, file_name = None):
    """
    Creates/updates a stamp for a given task
    (d can be a data dict or dataCache)
    """
    stamp = stamp_internal(task, d, file_name)
    if stamp:
        f = open(stamp, "w")
        f.close()

def del_stamp(task, d, file_name = None):
    """
    Removes a stamp for a given task
    (d can be a data dict or dataCache)
    """
    stamp_internal(task, d, file_name)

def add_tasks(tasklist, d):
    task_graph = data.getVar('_task_graph', d)
    task_deps = data.getVar('_task_deps', d)
    if not task_graph:
        task_graph = bb.digraph()
    if not task_deps:
        task_deps = {}

    for task in tasklist:
        deps = tasklist[task]
        task = data.expand(task, d)

        data.setVarFlag(task, 'task', 1, d)
        task_graph.addnode(task, None)
        for dep in deps:
            dep = data.expand(dep, d)
            if not task_graph.hasnode(dep):
                task_graph.addnode(dep, None)
            task_graph.addnode(task, dep)

        flags = data.getVarFlags(task, d)
        def getTask(name):
            if name in flags:
                deptask = data.expand(flags[name], d)
                if not name in task_deps:
                    task_deps[name] = {}
                task_deps[name][task] = deptask
        getTask('depends')
        getTask('deptask')
        getTask('rdeptask')
        getTask('recrdeptask')
        getTask('nostamp')
|
||||
|
||||
# don't assume holding a reference
|
||||
data.setVar('_task_graph', task_graph, d)
|
||||
data.setVar('_task_deps', task_deps, d)
|
||||
|
||||
def remove_task(task, kill, d):
|
||||
"""Remove an BB 'task'.
|
||||
|
||||
If kill is 1, also remove tasks that depend on this task."""
|
||||
|
||||
task_graph = data.getVar('_task_graph', d)
|
||||
if not task_graph:
|
||||
task_graph = bb.digraph()
|
||||
if not task_graph.hasnode(task):
|
||||
return
|
||||
|
||||
data.delVarFlag(task, 'task', d)
|
||||
ref = 1
|
||||
if kill == 1:
|
||||
ref = 2
|
||||
task_graph.delnode(task, ref)
|
||||
data.setVar('_task_graph', task_graph, d)
|
||||
|
||||
def task_exists(task, d):
|
||||
task_graph = data.getVar('_task_graph', d)
|
||||
if not task_graph:
|
||||
task_graph = bb.digraph()
|
||||
data.setVar('_task_graph', task_graph, d)
|
||||
return task_graph.hasnode(task)
|
||||
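The stamp helpers above reduce to a simple scheme: completing a task touches an empty file named `<STAMP>.<task>`, and a task is "current" when that file exists (with `checkdeps`, also newer than its dependencies' stamps). A minimal modern-Python sketch of that naming and the fast existence check (function and variable names here are illustrative, not BitBake's API):

```python
import os

def make_stamp(stampbase, task):
    # Touch an empty '<stampbase>.<task>' file, removing any old copy
    # first so the mtime always changes (the original code does this
    # as a workaround for broken NFS filesystems).
    stampfile = "%s.%s" % (stampbase, task)
    os.makedirs(os.path.dirname(stampfile), exist_ok=True)
    if os.path.exists(stampfile):
        os.remove(stampfile)
    open(stampfile, "w").close()
    return stampfile

def stamp_is_current(stampbase, task):
    # Fast check only (checkdeps == 0): the stamp file merely has to exist.
    return os.access("%s.%s" % (stampbase, task), os.F_OK)
```

The dependency-aware check in the original additionally walks the task graph and compares the mtimes of the dependencies' stamp files against the task's own stamp.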
bitbake/lib/bb/cache.py (new file, 435 lines)
@@ -0,0 +1,435 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake 'Cache' implementation
#
# Caching of bitbake variables before task execution

# Copyright (C) 2006        Richard Purdie

# but small sections based on code from bin/bitbake:
# Copyright (C) 2003, 2004  Chris Larson
# Copyright (C) 2003, 2004  Phil Blundell
# Copyright (C) 2003 - 2005 Michael 'Mickey' Lauer
# Copyright (C) 2005        Holger Hans Peter Freyther
# Copyright (C) 2005        ROAD GmbH
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.


import os, re
import bb.data
import bb.utils

try:
    import cPickle as pickle
except ImportError:
    import pickle
    bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")

__cache_version__ = "127"

class Cache:
    """
    BitBake Cache implementation
    """
    def __init__(self, cooker):

        self.cachedir = bb.data.getVar("CACHE", cooker.configuration.data, True)
        self.clean = {}
        self.depends_cache = {}
        self.data = None
        self.data_fn = None

        if self.cachedir in [None, '']:
            self.has_cache = False
            bb.msg.note(1, bb.msg.domain.Cache, "Not using a cache. Set CACHE = <directory> to enable.")
        else:
            self.has_cache = True
            self.cachefile = os.path.join(self.cachedir, "bb_cache.dat")

            bb.msg.debug(1, bb.msg.domain.Cache, "Using cache in '%s'" % self.cachedir)
            try:
                os.stat( self.cachedir )
            except OSError:
                bb.mkdirhier( self.cachedir )

        if self.has_cache and (self.mtime(self.cachefile)):
            try:
                p = pickle.Unpickler(file(self.cachefile, "rb"))
                self.depends_cache, version_data = p.load()
                if version_data['CACHE_VER'] != __cache_version__:
                    raise ValueError, 'Cache Version Mismatch'
                if version_data['BITBAKE_VER'] != bb.__version__:
                    raise ValueError, 'Bitbake Version Mismatch'
            except EOFError:
                bb.msg.note(1, bb.msg.domain.Cache, "Truncated cache found, rebuilding...")
                self.depends_cache = {}
            except (ValueError, KeyError):
                bb.msg.note(1, bb.msg.domain.Cache, "Invalid cache found, rebuilding...")
                self.depends_cache = {}

        if self.depends_cache:
            for fn in self.depends_cache.keys():
                self.clean[fn] = ""
                self.cacheValidUpdate(fn)

    def getVar(self, var, fn, exp = 0):
        """
        Gets the value of a variable
        (similar to getVar in the data class)

        There are two scenarios:
          1. We have cached data - serve from depends_cache[fn]
          2. We're learning what data to cache - serve from the data
             backend but add a copy of the data to the cache.
        """
        if fn in self.clean:
            return self.depends_cache[fn][var]

        if not fn in self.depends_cache:
            self.depends_cache[fn] = {}

        if fn != self.data_fn:
            # We're trying to access data in the cache which doesn't exist
            # yet setData hasn't been called to set up the right access. Very bad.
            bb.msg.error(bb.msg.domain.Cache, "Parsing error: data_fn %s and fn %s don't match" % (self.data_fn, fn))

        result = bb.data.getVar(var, self.data, exp)
        self.depends_cache[fn][var] = result
        return result
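getVar's two scenarios can be sketched in a few lines: serve from the cache when the recipe is known-clean, otherwise read the live datastore and record which variables were accessed. A toy illustration under assumed names (a plain dict stands in for the parsed datastore; this is not BitBake's API):

```python
class MiniCache:
    """Sketch of the cache's learn-vs-serve split."""
    def __init__(self, live):
        self.live = live          # fn -> {var: value}: live parse results
        self.depends_cache = {}   # learned variables per recipe file
        self.clean = set()        # recipe files whose cache entry is valid

    def getVar(self, var, fn):
        if fn in self.clean:
            # Scenario 1: known-clean, serve straight from the cache
            return self.depends_cache[fn][var]
        # Scenario 2: learning mode, read live data and remember it
        value = self.live[fn].get(var)
        self.depends_cache.setdefault(fn, {})[var] = value
        return value
```

The point of the split is that after a warm start only the variables that were actually accessed during the first parse need to exist in the cache.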
    def setData(self, fn, data):
        """
        Called to prime bb_cache ready to learn which variables to cache.
        Will be followed by calls to self.getVar which aren't cached
        but can be fulfilled from self.data.
        """
        self.data_fn = fn
        self.data = data

        # Make sure __depends makes it into the depends_cache
        self.getVar("__depends", fn, True)
        self.depends_cache[fn]["CACHETIMESTAMP"] = bb.parse.cached_mtime(fn)

    def loadDataFull(self, fn, cfgData):
        """
        Return a complete set of data for fn.
        To do this, we need to parse the file.
        """
        bb_data, skipped = self.load_bbfile(fn, cfgData)
        return bb_data

    def loadData(self, fn, cfgData):
        """
        Load a subset of data for fn.
        If the cached data is valid we do nothing.
        Otherwise we need to parse the file and set the system
        up to record the variables accessed.
        Return the cache status and whether the file was skipped when parsed.
        """
        if self.cacheValid(fn):
            if "SKIPPED" in self.depends_cache[fn]:
                return True, True
            return True, False

        bb_data, skipped = self.load_bbfile(fn, cfgData)
        self.setData(fn, bb_data)
        return False, skipped

    def cacheValid(self, fn):
        """
        Is the cache valid for fn?
        Fast version, no timestamps checked.
        """
        # Is cache enabled?
        if not self.has_cache:
            return False
        if fn in self.clean:
            return True
        return False

    def cacheValidUpdate(self, fn):
        """
        Is the cache valid for fn?
        Make thorough (slower) checks including timestamps.
        """
        # Is cache enabled?
        if not self.has_cache:
            return False

        # Check the file still exists
        if self.mtime(fn) == 0:
            bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s no longer exists" % fn)
            self.remove(fn)
            return False

        # File isn't in depends_cache
        if not fn in self.depends_cache:
            bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s is not cached" % fn)
            self.remove(fn)
            return False

        # Check the file's timestamp
        if bb.parse.cached_mtime(fn) > self.getVar("CACHETIMESTAMP", fn, True):
            bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s changed" % fn)
            self.remove(fn)
            return False

        # Check dependencies are still valid
        depends = self.getVar("__depends", fn, True)
        for f, old_mtime in depends:
            # Check if the file still exists
            if self.mtime(f) == 0:
                return False

            new_mtime = bb.parse.cached_mtime(f)
            if (new_mtime > old_mtime):
                bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s's dependency %s changed" % (fn, f))
                self.remove(fn)
                return False

        bb.msg.debug(2, bb.msg.domain.Cache, "Depends Cache: %s is clean" % fn)
        if not fn in self.clean:
            self.clean[fn] = ""

        return True
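The dependency walk in cacheValidUpdate boils down to one rule: an entry is stale if any recorded (path, mtime) pair no longer matches the filesystem. A sketch under illustrative names, with a plain dict standing in for a depends_cache entry:

```python
import os

def entry_is_valid(entry):
    # entry["depends"] holds (path, recorded_mtime) pairs, playing the
    # role of the __depends variable in the cache above.
    for path, old_mtime in entry["depends"]:
        if not os.path.exists(path):             # dependency vanished
            return False
        if os.stat(path).st_mtime > old_mtime:   # dependency changed
            return False
    return True
```

As in the original, a missing or newer dependency invalidates the whole entry; the recipe is then reparsed and its entry relearned.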
    def skip(self, fn):
        """
        Mark fn as skipped
        Called from the parser
        """
        if not fn in self.depends_cache:
            self.depends_cache[fn] = {}
        self.depends_cache[fn]["SKIPPED"] = "1"

    def remove(self, fn):
        """
        Remove fn from the cache
        Called from the parser in error cases
        """
        bb.msg.debug(1, bb.msg.domain.Cache, "Removing %s from cache" % fn)
        if fn in self.depends_cache:
            del self.depends_cache[fn]
        if fn in self.clean:
            del self.clean[fn]

    def sync(self):
        """
        Save the cache
        Called from the parser when complete (or exiting)
        """
        if not self.has_cache:
            return

        version_data = {}
        version_data['CACHE_VER'] = __cache_version__
        version_data['BITBAKE_VER'] = bb.__version__

        p = pickle.Pickler(file(self.cachefile, "wb"), -1)
        p.dump([self.depends_cache, version_data])

    def mtime(self, cachefile):
        return bb.parse.cached_mtime_noerror(cachefile)

    def handle_data(self, file_name, cacheData):
        """
        Save the data we need into the cache
        """
        pn = self.getVar('PN', file_name, True)
        pe = self.getVar('PE', file_name, True) or "0"
        pv = self.getVar('PV', file_name, True)
        pr = self.getVar('PR', file_name, True)
        dp = int(self.getVar('DEFAULT_PREFERENCE', file_name, True) or "0")
        provides = set([pn] + (self.getVar("PROVIDES", file_name, True) or "").split())
        depends = bb.utils.explode_deps(self.getVar("DEPENDS", file_name, True) or "")
        packages = (self.getVar('PACKAGES', file_name, True) or "").split()
        packages_dynamic = (self.getVar('PACKAGES_DYNAMIC', file_name, True) or "").split()
        rprovides = (self.getVar("RPROVIDES", file_name, True) or "").split()

        cacheData.task_queues[file_name] = self.getVar("_task_graph", file_name, True)
        cacheData.task_deps[file_name] = self.getVar("_task_deps", file_name, True)

        # build PackageName to FileName lookup table
        if pn not in cacheData.pkg_pn:
            cacheData.pkg_pn[pn] = []
        cacheData.pkg_pn[pn].append(file_name)

        cacheData.stamp[file_name] = self.getVar('STAMP', file_name, True)

        # build FileName to PackageName lookup table
        cacheData.pkg_fn[file_name] = pn
        cacheData.pkg_pepvpr[file_name] = (pe, pv, pr)
        cacheData.pkg_dp[file_name] = dp

        # Build forward and reverse provider hashes
        # Forward: virtual -> [filenames]
        # Reverse: PN -> [virtuals]
        if pn not in cacheData.pn_provides:
            cacheData.pn_provides[pn] = set()
        cacheData.pn_provides[pn] |= provides

        cacheData.fn_provides[file_name] = set()
        for provide in provides:
            if provide not in cacheData.providers:
                cacheData.providers[provide] = []
            cacheData.providers[provide].append(file_name)
            cacheData.fn_provides[file_name].add(provide)

        cacheData.deps[file_name] = set()
        for dep in depends:
            cacheData.all_depends.add(dep)
            cacheData.deps[file_name].add(dep)

        # Build reverse hash for PACKAGES, so runtime dependencies
        # can be resolved (RDEPENDS, RRECOMMENDS etc.)
        for package in packages:
            if not package in cacheData.packages:
                cacheData.packages[package] = []
            cacheData.packages[package].append(file_name)
            rprovides += (self.getVar("RPROVIDES_%s" % package, file_name, 1) or "").split()

        for package in packages_dynamic:
            if not package in cacheData.packages_dynamic:
                cacheData.packages_dynamic[package] = []
            cacheData.packages_dynamic[package].append(file_name)

        for rprovide in rprovides:
            if not rprovide in cacheData.rproviders:
                cacheData.rproviders[rprovide] = []
            cacheData.rproviders[rprovide].append(file_name)

        # Build hash of runtime depends and recommends

        def add_dep(deplist, deps):
            for dep in deps:
                if not dep in deplist:
                    deplist[dep] = ""

        if not file_name in cacheData.rundeps:
            cacheData.rundeps[file_name] = {}
        if not file_name in cacheData.runrecs:
            cacheData.runrecs[file_name] = {}

        for package in packages + [pn]:
            if not package in cacheData.rundeps[file_name]:
                cacheData.rundeps[file_name][package] = {}
            if not package in cacheData.runrecs[file_name]:
                cacheData.runrecs[file_name][package] = {}

            add_dep(cacheData.rundeps[file_name][package], bb.utils.explode_deps(self.getVar('RDEPENDS', file_name, True) or ""))
            add_dep(cacheData.runrecs[file_name][package], bb.utils.explode_deps(self.getVar('RRECOMMENDS', file_name, True) or ""))
            add_dep(cacheData.rundeps[file_name][package], bb.utils.explode_deps(self.getVar("RDEPENDS_%s" % package, file_name, True) or ""))
            add_dep(cacheData.runrecs[file_name][package], bb.utils.explode_deps(self.getVar("RRECOMMENDS_%s" % package, file_name, True) or ""))

        # Collect files we may need for possible world-dep
        # calculations
        if not self.getVar('BROKEN', file_name, True) and not self.getVar('EXCLUDE_FROM_WORLD', file_name, True):
            cacheData.possible_world.append(file_name)

    def load_bbfile(self, bbfile, config):
        """
        Load and parse one .bb build file
        Return the data and whether parsing resulted in the file being skipped
        """

        import bb
        from bb import utils, data, parse, debug, event, fatal

        # expand tmpdir to include this topdir
        data.setVar('TMPDIR', data.getVar('TMPDIR', config, 1) or "", config)
        bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
        oldpath = os.path.abspath(os.getcwd())
        if self.mtime(bbfile_loc):
            os.chdir(bbfile_loc)
        bb_data = data.init_db(config)
        try:
            bb_data = parse.handle(bbfile, bb_data) # read .bb data
            os.chdir(oldpath)
            return bb_data, False
        except bb.parse.SkipPackage:
            os.chdir(oldpath)
            return bb_data, True
        except:
            os.chdir(oldpath)
            raise

def init(cooker):
    """
    The Objective: Cache the minimum amount of data possible yet get to the
    stage of building packages (i.e. tryBuild) without reparsing any .bb files.

    To do this, we intercept getVar calls and only cache the variables we see
    being accessed. We rely on the cache getVar calls being made for all
    variables bitbake might need to use to reach this stage. For each cached
    file we need to track:

    * Its mtime
    * The mtimes of all its dependencies
    * Whether it caused a parse.SkipPackage exception

    Files causing parsing errors are evicted from the cache.

    """
    return Cache(cooker)


#============================================================================#
# CacheData
#============================================================================#
class CacheData:
    """
    The data structures we compile from the cached data
    """

    def __init__(self):
        """
        Direct cache variables
        (from Cache.handle_data)
        """
        self.providers = {}
        self.rproviders = {}
        self.packages = {}
        self.packages_dynamic = {}
        self.possible_world = []
        self.pkg_pn = {}
        self.pkg_fn = {}
        self.pkg_pepvpr = {}
        self.pkg_dp = {}
        self.pn_provides = {}
        self.fn_provides = {}
        self.all_depends = set()
        self.deps = {}
        self.rundeps = {}
        self.runrecs = {}
        self.task_queues = {}
        self.task_deps = {}
        self.stamp = {}
        self.preferred = {}

        """
        Indirect Cache variables
        (set elsewhere)
        """
        self.ignored_dependencies = []
        self.world_target = set()
        self.bbfile_priority = {}
        self.bbfile_config_priorities = []
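The sync()/__init__ pair above amounts to a version-stamped pickle round trip: the data is saved alongside CACHE_VER (and BITBAKE_VER), and any mismatch or corruption on load discards the cache and forces a rebuild. A modern-Python sketch of just that round trip (function names are illustrative, and the BITBAKE_VER check is omitted):

```python
import pickle

CACHE_VER = "127"

def save_cache(path, depends_cache):
    # Mirror Cache.sync(): dump the data together with a version stamp,
    # using the highest available pickle protocol (-1).
    with open(path, "wb") as f:
        pickle.dump([depends_cache, {"CACHE_VER": CACHE_VER}], f, -1)

def load_cache(path):
    # Mirror Cache.__init__(): a version mismatch, truncated file, or
    # missing file all mean "start with an empty cache".
    try:
        with open(path, "rb") as f:
            depends_cache, version_data = pickle.load(f)
        if version_data["CACHE_VER"] != CACHE_VER:
            raise ValueError("Cache Version Mismatch")
        return depends_cache
    except (OSError, EOFError, ValueError, KeyError):
        return {}
```

Treating every load failure as "rebuild" keeps the cache purely an optimization: a bad cache file can slow a run down but never corrupt it.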
bitbake/lib/bb/cooker.py (new file, 731 lines)
@@ -0,0 +1,731 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2003, 2004  Chris Larson
# Copyright (C) 2003, 2004  Phil Blundell
# Copyright (C) 2003 - 2005 Michael 'Mickey' Lauer
# Copyright (C) 2005        Holger Hans Peter Freyther
# Copyright (C) 2005        ROAD GmbH
# Copyright (C) 2006        Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import sys, os, getopt, glob, copy, os.path, re, time
import bb
from bb import utils, data, parse, event, cache, providers, taskdata, runqueue
import itertools, sre_constants

parsespin = itertools.cycle( r'|/-\\' )

#============================================================================#
# BBCooker
#============================================================================#
class BBCooker:
    """
    Manages one bitbake build run
    """

    def __init__(self, configuration):
        self.status = None

        self.cache = None
        self.bb_cache = None

        self.configuration = configuration

        if self.configuration.verbose:
            bb.msg.set_verbose(True)

        if self.configuration.debug:
            bb.msg.set_debug_level(self.configuration.debug)
        else:
            bb.msg.set_debug_level(0)

        if self.configuration.debug_domains:
            bb.msg.set_debug_domains(self.configuration.debug_domains)

        self.configuration.data = bb.data.init()

        for f in self.configuration.file:
            self.parseConfigurationFile( f )

        self.parseConfigurationFile( os.path.join( "conf", "bitbake.conf" ) )

        if not self.configuration.cmd:
            self.configuration.cmd = bb.data.getVar("BB_DEFAULT_TASK", self.configuration.data) or "build"

        bbpkgs = bb.data.getVar('BBPKGS', self.configuration.data, True)
        if bbpkgs:
            self.configuration.pkgs_to_build.extend(bbpkgs.split())

        #
        # Special updated configuration we use for firing events
        #
        self.configuration.event_data = bb.data.createCopy(self.configuration.data)
        bb.data.update_data(self.configuration.event_data)

        #
        # TOSTOP must not be set or our children will hang when they output
        #
        fd = sys.stdout.fileno()
        if os.isatty(fd):
            import termios
            tcattr = termios.tcgetattr(fd)
            if tcattr[3] & termios.TOSTOP:
                bb.msg.note(1, bb.msg.domain.Build, "The terminal had the TOSTOP bit set, clearing...")
                tcattr[3] = tcattr[3] & ~termios.TOSTOP
                termios.tcsetattr(fd, termios.TCSANOW, tcattr)

        # Change the nice level if we're asked to
        nice = bb.data.getVar("BB_NICE_LEVEL", self.configuration.data, True)
        if nice:
            curnice = os.nice(0)
            nice = int(nice) - curnice
            bb.msg.note(2, bb.msg.domain.Build, "Renice to %s " % os.nice(nice))


    def tryBuildPackage(self, fn, item, task, the_data, build_depends):
        """
        Build one task of a package, optionally build following task depends
        """
        bb.event.fire(bb.event.PkgStarted(item, the_data))
        try:
            if not build_depends:
                bb.data.setVarFlag('do_%s' % task, 'dontrundeps', 1, the_data)
            if not self.configuration.dry_run:
                bb.build.exec_task('do_%s' % task, the_data)
            bb.event.fire(bb.event.PkgSucceeded(item, the_data))
            return True
        except bb.build.FuncFailed:
            bb.msg.error(bb.msg.domain.Build, "task stack execution failed")
            bb.event.fire(bb.event.PkgFailed(item, the_data))
            raise
        except bb.build.EventException, e:
            event = e.args[1]
            bb.msg.error(bb.msg.domain.Build, "%s event exception, aborting" % bb.event.getName(event))
            bb.event.fire(bb.event.PkgFailed(item, the_data))
            raise

    def tryBuild(self, fn, build_depends):
        """
        Build a provider and its dependencies.
        build_depends is a list of previous build dependencies (not runtime)
        If build_depends is empty, we're dealing with a runtime depends
        """

        the_data = self.bb_cache.loadDataFull(fn, self.configuration.data)

        item = self.status.pkg_fn[fn]

        if bb.build.stamp_is_current('do_%s' % self.configuration.cmd, the_data):
            return True

        return self.tryBuildPackage(fn, item, self.configuration.cmd, the_data, build_depends)

    def showVersions(self):
        pkg_pn = self.status.pkg_pn
        preferred_versions = {}
        latest_versions = {}

        # Sort by priority
        for pn in pkg_pn.keys():
            (last_ver, last_file, pref_ver, pref_file) = bb.providers.findBestProvider(pn, self.configuration.data, self.status)
            preferred_versions[pn] = (pref_ver, pref_file)
            latest_versions[pn] = (last_ver, last_file)

        pkg_list = pkg_pn.keys()
        pkg_list.sort()

        for p in pkg_list:
            pref = preferred_versions[p]
            latest = latest_versions[p]

            if pref != latest:
                prefstr = pref[0][0] + ":" + pref[0][1] + '-' + pref[0][2]
            else:
                prefstr = ""

            print "%-30s %20s %20s" % (p, latest[0][0] + ":" + latest[0][1] + "-" + latest[0][2],
                                        prefstr)


    def showEnvironment(self, buildfile = None, pkgs_to_build = []):
        """
        Show the outer or per-package environment
        """
        fn = None
        envdata = None

        if 'world' in pkgs_to_build:
            print "'world' is not a valid target for --environment."
            sys.exit(1)

        if len(pkgs_to_build) > 1:
            print "Only one target can be used with the --environment option."
            sys.exit(1)

        if buildfile:
            if len(pkgs_to_build) > 0:
                print "No target should be used with the --environment and --buildfile options."
                sys.exit(1)
            self.cb = None
            self.bb_cache = bb.cache.init(self)
            fn = self.matchFile(buildfile)
        elif len(pkgs_to_build) == 1:
            self.updateCache()

            localdata = data.createCopy(self.configuration.data)
            bb.data.update_data(localdata)
            bb.data.expandKeys(localdata)

            taskdata = bb.taskdata.TaskData(self.configuration.abort)

            try:
                taskdata.add_provider(localdata, self.status, pkgs_to_build[0])
                taskdata.add_unresolved(localdata, self.status)
            except bb.providers.NoProvider:
                sys.exit(1)

            targetid = taskdata.getbuild_id(pkgs_to_build[0])
            fnid = taskdata.build_targets[targetid][0]
            fn = taskdata.fn_index[fnid]
        else:
            envdata = self.configuration.data

        if fn:
            try:
                envdata = self.bb_cache.loadDataFull(fn, self.configuration.data)
            except IOError, e:
                bb.msg.fatal(bb.msg.domain.Parsing, "Unable to read %s: %s" % (fn, e))
            except Exception, e:
                bb.msg.fatal(bb.msg.domain.Parsing, "%s" % e)

        # emit variables and shell functions
        try:
            data.update_data( envdata )
            data.emit_env(sys.__stdout__, envdata, True)
        except Exception, e:
            bb.msg.fatal(bb.msg.domain.Parsing, "%s" % e)
        # emit the metadata which isn't valid shell
        data.expandKeys( envdata )
        for e in envdata.keys():
            if data.getVarFlag( e, 'python', envdata ):
                sys.__stdout__.write("\npython %s () {\n%s}\n" % (e, data.getVar(e, envdata, 1)))

    def generateDotGraph(self, pkgs_to_build, ignore_deps):
        """
        Generate a task dependency graph.

        pkgs_to_build A list of packages that need to be built
        ignore_deps   A list of names where processing of dependencies
                      should be stopped, e.g. dependencies that get
        """

        for dep in ignore_deps:
            self.status.ignored_dependencies.add(dep)

        localdata = data.createCopy(self.configuration.data)
        bb.data.update_data(localdata)
        bb.data.expandKeys(localdata)
        taskdata = bb.taskdata.TaskData(self.configuration.abort)

        runlist = []
        try:
            for k in pkgs_to_build:
                taskdata.add_provider(localdata, self.status, k)
                runlist.append([k, "do_%s" % self.configuration.cmd])
            taskdata.add_unresolved(localdata, self.status)
        except bb.providers.NoProvider:
            sys.exit(1)
        rq = bb.runqueue.RunQueue(self, self.configuration.data, self.status, taskdata, runlist)
        rq.prepare_runqueue()

        seen_fnids = []
        depends_file = file('depends.dot', 'w')
        tdepends_file = file('task-depends.dot', 'w')
        print >> depends_file, "digraph depends {"
        print >> tdepends_file, "digraph depends {"

        for task in range(len(rq.runq_fnid)):
            taskname = rq.runq_task[task]
            fnid = rq.runq_fnid[task]
            fn = taskdata.fn_index[fnid]
            pn = self.status.pkg_fn[fn]
            version = "%s:%s-%s" % self.status.pkg_pepvpr[fn]
            print >> tdepends_file, '"%s.%s" [label="%s %s\\n%s\\n%s"]' % (pn, taskname, pn, taskname, version, fn)
            for dep in rq.runq_depends[task]:
                depfn = taskdata.fn_index[rq.runq_fnid[dep]]
                deppn = self.status.pkg_fn[depfn]
                print >> tdepends_file, '"%s.%s" -> "%s.%s"' % (pn, rq.runq_task[task], deppn, rq.runq_task[dep])
            if fnid not in seen_fnids:
                seen_fnids.append(fnid)
                packages = []
                print >> depends_file, '"%s" [label="%s %s\\n%s"]' % (pn, pn, version, fn)
                for depend in self.status.deps[fn]:
                    print >> depends_file, '"%s" -> "%s"' % (pn, depend)
                rdepends = self.status.rundeps[fn]
                for package in rdepends:
                    for rdepend in rdepends[package]:
                        print >> depends_file, '"%s" -> "%s" [style=dashed]' % (package, rdepend)
                    packages.append(package)
                rrecs = self.status.runrecs[fn]
                for package in rrecs:
                    for rdepend in rrecs[package]:
                        print >> depends_file, '"%s" -> "%s" [style=dashed]' % (package, rdepend)
                    if not package in packages:
                        packages.append(package)
                for package in packages:
                    if package != pn:
                        print >> depends_file, '"%s" [label="%s(%s) %s\\n%s"]' % (package, package, pn, version, fn)
                        for depend in self.status.deps[fn]:
                            print >> depends_file, '"%s" -> "%s"' % (package, depend)
                # Prints a flattened form of the above where subpackages of a package are merged into the main pn
                #print >> depends_file, '"%s" [label="%s %s\\n%s\\n%s"]' % (pn, pn, taskname, version, fn)
                #for rdep in taskdata.rdepids[fnid]:
                #    print >> depends_file, '"%s" -> "%s" [style=dashed]' % (pn, taskdata.run_names_index[rdep])
                #for dep in taskdata.depids[fnid]:
                #    print >> depends_file, '"%s" -> "%s"' % (pn, taskdata.build_names_index[dep])
        print >> depends_file, "}"
        print >> tdepends_file, "}"
        bb.msg.note(1, bb.msg.domain.Collection, "Dependencies saved to 'depends.dot'")
        bb.msg.note(1, bb.msg.domain.Collection, "Task dependencies saved to 'task-depends.dot'")

    def buildDepgraph(self):
        all_depends = self.status.all_depends
        pn_provides = self.status.pn_provides

        localdata = data.createCopy(self.configuration.data)
        bb.data.update_data(localdata)
        bb.data.expandKeys(localdata)

        def calc_bbfile_priority(filename):
            for (regex, pri) in self.status.bbfile_config_priorities:
                if regex.match(filename):
                    return pri
            return 0

        # Handle PREFERRED_PROVIDERS
        for p in (bb.data.getVar('PREFERRED_PROVIDERS', localdata, 1) or "").split():
            try:
                (providee, provider) = p.split(':')
            except:
                bb.msg.error(bb.msg.domain.Provider, "Malformed option in PREFERRED_PROVIDERS variable: %s" % p)
                continue
            if providee in self.status.preferred and self.status.preferred[providee] != provider:
                bb.msg.error(bb.msg.domain.Provider, "conflicting preferences for %s: both %s and %s specified" % (providee, provider, self.status.preferred[providee]))
            self.status.preferred[providee] = provider

        # Calculate priorities for each file
        for p in self.status.pkg_fn.keys():
            self.status.bbfile_priority[p] = calc_bbfile_priority(p)
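The PREFERRED_PROVIDERS handling in buildDepgraph parses space-separated "providee:provider" pairs, reporting malformed entries and conflicting preferences while letting the later entry win. A standalone sketch of that parse (the function name and the returned error list are illustrative; the original logs via bb.msg.error instead):

```python
def parse_preferred_providers(value):
    # Parse 'providee:provider' pairs from a space-separated string,
    # collecting error messages rather than logging them.
    preferred = {}
    errors = []
    for p in (value or "").split():
        try:
            providee, provider = p.split(':')
        except ValueError:
            errors.append("Malformed option in PREFERRED_PROVIDERS variable: %s" % p)
            continue
        if providee in preferred and preferred[providee] != provider:
            errors.append("conflicting preferences for %s" % providee)
        preferred[providee] = provider   # last entry wins, as in the original
    return preferred, errors
```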
    def buildWorldTargetList(self):
        """
        Build the package list for "bitbake world"
        """
        all_depends = self.status.all_depends
        pn_provides = self.status.pn_provides
        bb.msg.debug(1, bb.msg.domain.Parsing, "collating packages for \"world\"")
        for f in self.status.possible_world:
            terminal = True
            pn = self.status.pkg_fn[f]

            for p in pn_provides[pn]:
                if p.startswith('virtual/'):
                    bb.msg.debug(2, bb.msg.domain.Parsing, "World build skipping %s due to %s provider starting with virtual/" % (f, p))
                    terminal = False
                    break
                for pf in self.status.providers[p]:
                    if self.status.pkg_fn[pf] != pn:
                        bb.msg.debug(2, bb.msg.domain.Parsing, "World build skipping %s due to both us and %s providing %s" % (f, pf, p))
                        terminal = False
                        break
            if terminal:
                self.status.world_target.add(pn)

        # drop the reference count now
        self.status.possible_world = None
        self.status.all_depends = None

    def myProgressCallback(self, x, y, f, from_cache):
        """Update any tty with the progress change"""
        if os.isatty(sys.stdout.fileno()):
            sys.stdout.write("\rNOTE: Handling BitBake files: %s (%04d/%04d) [%2d %%]" % ( parsespin.next(), x, y, x*100/y ) )
            sys.stdout.flush()
        else:
            if x == 1:
                sys.stdout.write("Parsing .bb files, please wait...")
                sys.stdout.flush()
            if x == y:
                sys.stdout.write("done.")
                sys.stdout.flush()

    def interactiveMode(self):
        """Drop off into a shell"""
        try:
            from bb import shell
        except ImportError, details:
            bb.msg.fatal(bb.msg.domain.Parsing, "Sorry, shell not available (%s)" % details )
        else:
            shell.start( self )
            sys.exit( 0 )

    def parseConfigurationFile(self, afile):
        try:
            self.configuration.data = bb.parse.handle( afile, self.configuration.data )

            # Add the handlers we inherited by INHERIT
            # we need to do this manually as it is not guaranteed
            # that we will pick up these classes... as we only INHERIT
            # on .inc and .bb files but not on .conf
            data = bb.data.createCopy( self.configuration.data )
|
||||
inherits = ["base"] + (bb.data.getVar('INHERIT', data, True ) or "").split()
|
||||
for inherit in inherits:
|
||||
data = bb.parse.handle( os.path.join('classes', '%s.bbclass' % inherit ), data, True )
|
||||
|
||||
# FIXME: This assumes that we included at least one .inc file
|
||||
for var in bb.data.keys(data):
|
||||
if bb.data.getVarFlag(var, 'handler', data):
|
||||
bb.event.register(var,bb.data.getVar(var, data))
|
||||
|
||||
bb.fetch.fetcher_init(self.configuration.data)
|
||||
|
||||
bb.event.fire(bb.event.ConfigParsed(self.configuration.data))
|
||||
|
||||
except IOError:
|
||||
bb.msg.fatal(bb.msg.domain.Parsing, "Unable to open %s" % afile )
|
||||
except bb.parse.ParseError, details:
|
||||
bb.msg.fatal(bb.msg.domain.Parsing, "Unable to parse %s (%s)" % (afile, details) )
|
||||
|
||||
def handleCollections( self, collections ):
|
||||
"""Handle collections"""
|
||||
if collections:
|
||||
collection_list = collections.split()
|
||||
for c in collection_list:
|
||||
regex = bb.data.getVar("BBFILE_PATTERN_%s" % c, self.configuration.data, 1)
|
||||
if regex == None:
|
||||
bb.msg.error(bb.msg.domain.Parsing, "BBFILE_PATTERN_%s not defined" % c)
|
||||
continue
|
||||
priority = bb.data.getVar("BBFILE_PRIORITY_%s" % c, self.configuration.data, 1)
|
||||
if priority == None:
|
||||
bb.msg.error(bb.msg.domain.Parsing, "BBFILE_PRIORITY_%s not defined" % c)
|
||||
continue
|
||||
try:
|
||||
cre = re.compile(regex)
|
||||
except re.error:
|
||||
bb.msg.error(bb.msg.domain.Parsing, "BBFILE_PATTERN_%s \"%s\" is not a valid regular expression" % (c, regex))
|
||||
continue
|
||||
try:
|
||||
pri = int(priority)
|
||||
self.status.bbfile_config_priorities.append((cre, pri))
|
||||
except ValueError:
|
||||
bb.msg.error(bb.msg.domain.Parsing, "invalid value for BBFILE_PRIORITY_%s: \"%s\"" % (c, priority))
|
||||
|
||||
def buildSetVars(self):
|
||||
"""
|
||||
Setup any variables needed before starting a build
|
||||
"""
|
||||
if not bb.data.getVar("BUILDNAME", self.configuration.data):
|
||||
bb.data.setVar("BUILDNAME", os.popen('date +%Y%m%d%H%M').readline().strip(), self.configuration.data)
|
||||
bb.data.setVar("BUILDSTART", time.strftime('%m/%d/%Y %H:%M:%S',time.gmtime()),self.configuration.data)
|
||||
|
||||
def matchFile(self, buildfile):
|
||||
"""
|
||||
Convert the fragment buildfile into a real file
|
||||
Error if there are too many matches
|
||||
"""
|
||||
bf = os.path.abspath(buildfile)
|
||||
try:
|
||||
os.stat(bf)
|
||||
return bf
|
||||
except OSError:
|
||||
(filelist, masked) = self.collect_bbfiles()
|
||||
regexp = re.compile(buildfile)
|
||||
matches = []
|
||||
for f in filelist:
|
||||
if regexp.search(f) and os.path.isfile(f):
|
||||
bf = f
|
||||
matches.append(f)
|
||||
if len(matches) != 1:
|
||||
bb.msg.error(bb.msg.domain.Parsing, "Unable to match %s (%s matches found):" % (buildfile, len(matches)))
|
||||
for f in matches:
|
||||
bb.msg.error(bb.msg.domain.Parsing, " %s" % f)
|
||||
sys.exit(1)
|
||||
return matches[0]
|
||||
|
||||
def buildFile(self, buildfile):
|
||||
"""
|
||||
Build the file matching regexp buildfile
|
||||
"""
|
||||
|
||||
bf = self.matchFile(buildfile)
|
||||
|
||||
bbfile_data = bb.parse.handle(bf, self.configuration.data)
|
||||
|
||||
# Remove stamp for target if force mode active
|
||||
if self.configuration.force:
|
||||
bb.msg.note(2, bb.msg.domain.RunQueue, "Remove stamp %s, %s" % (self.configuration.cmd, bf))
|
||||
bb.build.del_stamp('do_%s' % self.configuration.cmd, bbfile_data)
|
||||
|
||||
item = bb.data.getVar('PN', bbfile_data, 1)
|
||||
try:
|
||||
self.tryBuildPackage(bf, item, self.configuration.cmd, bbfile_data, True)
|
||||
except bb.build.EventException:
|
||||
bb.msg.error(bb.msg.domain.Build, "Build of '%s' failed" % item )
|
||||
|
||||
sys.exit(0)
|
||||
|
||||
def buildTargets(self, targets):
|
||||
"""
|
||||
Attempt to build the targets specified
|
||||
"""
|
||||
|
||||
buildname = bb.data.getVar("BUILDNAME", self.configuration.data)
|
||||
bb.event.fire(bb.event.BuildStarted(buildname, targets, self.configuration.event_data))
|
||||
|
||||
localdata = data.createCopy(self.configuration.data)
|
||||
bb.data.update_data(localdata)
|
||||
bb.data.expandKeys(localdata)
|
||||
|
||||
taskdata = bb.taskdata.TaskData(self.configuration.abort)
|
||||
|
||||
runlist = []
|
||||
try:
|
||||
for k in targets:
|
||||
taskdata.add_provider(localdata, self.status, k)
|
||||
runlist.append([k, "do_%s" % self.configuration.cmd])
|
||||
taskdata.add_unresolved(localdata, self.status)
|
||||
except bb.providers.NoProvider:
|
||||
sys.exit(1)
|
||||
|
||||
rq = bb.runqueue.RunQueue(self, self.configuration.data, self.status, taskdata, runlist)
|
||||
rq.prepare_runqueue()
|
||||
try:
|
||||
failures = rq.execute_runqueue()
|
||||
except runqueue.TaskFailure, fnids:
|
||||
for fnid in fnids:
|
||||
bb.msg.error(bb.msg.domain.Build, "'%s' failed" % taskdata.fn_index[fnid])
|
||||
sys.exit(1)
|
||||
bb.event.fire(bb.event.BuildCompleted(buildname, targets, self.configuration.event_data, failures))
|
||||
|
||||
sys.exit(0)
|
||||
|
||||
def updateCache(self):
|
||||
# Import Psyco if available and not disabled
|
||||
import platform
|
||||
if platform.machine() in ['i386', 'i486', 'i586', 'i686']:
|
||||
if not self.configuration.disable_psyco:
|
||||
try:
|
||||
import psyco
|
||||
except ImportError:
|
||||
bb.msg.note(1, bb.msg.domain.Collection, "Psyco JIT Compiler (http://psyco.sf.net) not available. Install it to increase performance.")
|
||||
else:
|
||||
psyco.bind( self.parse_bbfiles )
|
||||
else:
|
||||
bb.msg.note(1, bb.msg.domain.Collection, "You have disabled Psyco. This decreases performance.")
|
||||
|
||||
self.status = bb.cache.CacheData()
|
||||
|
||||
ignore = bb.data.getVar("ASSUME_PROVIDED", self.configuration.data, 1) or ""
|
||||
self.status.ignored_dependencies = set( ignore.split() )
|
||||
|
||||
self.handleCollections( bb.data.getVar("BBFILE_COLLECTIONS", self.configuration.data, 1) )
|
||||
|
||||
bb.msg.debug(1, bb.msg.domain.Collection, "collecting .bb files")
|
||||
(filelist, masked) = self.collect_bbfiles()
|
||||
self.parse_bbfiles(filelist, masked, self.myProgressCallback)
|
||||
bb.msg.debug(1, bb.msg.domain.Collection, "parsing complete")
|
||||
|
||||
self.buildDepgraph()
|
||||
|
||||
def cook(self):
|
||||
"""
|
||||
We are building stuff here. We do the building
|
||||
from here. By default we try to execute task
|
||||
build.
|
||||
"""
|
||||
|
||||
if self.configuration.show_environment:
|
||||
self.showEnvironment(self.configuration.buildfile, self.configuration.pkgs_to_build)
|
||||
sys.exit( 0 )
|
||||
|
||||
self.buildSetVars()
|
||||
|
||||
if self.configuration.interactive:
|
||||
self.interactiveMode()
|
||||
|
||||
if self.configuration.buildfile is not None:
|
||||
return self.buildFile(self.configuration.buildfile)
|
||||
|
||||
# initialise the parsing status now we know we will need deps
|
||||
self.updateCache()
|
||||
|
||||
if self.configuration.parse_only:
|
||||
bb.msg.note(1, bb.msg.domain.Collection, "Requested parsing .bb files only. Exiting.")
|
||||
return 0
|
||||
|
||||
pkgs_to_build = self.configuration.pkgs_to_build
|
||||
|
||||
if len(pkgs_to_build) == 0 and not self.configuration.show_versions:
|
||||
print "Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help'"
|
||||
print "for usage information."
|
||||
sys.exit(0)
|
||||
|
||||
try:
|
||||
if self.configuration.show_versions:
|
||||
self.showVersions()
|
||||
sys.exit( 0 )
|
||||
if 'world' in pkgs_to_build:
|
||||
self.buildWorldTargetList()
|
||||
pkgs_to_build.remove('world')
|
||||
for t in self.status.world_target:
|
||||
pkgs_to_build.append(t)
|
||||
|
||||
if self.configuration.dot_graph:
|
||||
self.generateDotGraph( pkgs_to_build, self.configuration.ignored_dot_deps )
|
||||
sys.exit( 0 )
|
||||
|
||||
return self.buildTargets(pkgs_to_build)
|
||||
|
||||
except KeyboardInterrupt:
|
||||
bb.msg.note(1, bb.msg.domain.Collection, "KeyboardInterrupt - Build not completed.")
|
||||
sys.exit(1)
|
||||
|
||||
def get_bbfiles( self, path = os.getcwd() ):
|
||||
"""Get list of default .bb files by reading out the current directory"""
|
||||
contents = os.listdir(path)
|
||||
bbfiles = []
|
||||
for f in contents:
|
||||
(root, ext) = os.path.splitext(f)
|
||||
if ext == ".bb":
|
||||
bbfiles.append(os.path.abspath(os.path.join(os.getcwd(),f)))
|
||||
return bbfiles
|
||||
|
||||
def find_bbfiles( self, path ):
|
||||
"""Find all the .bb files in a directory"""
|
||||
from os.path import join
|
||||
|
||||
found = []
|
||||
for dir, dirs, files in os.walk(path):
|
||||
for ignored in ('SCCS', 'CVS', '.svn'):
|
||||
if ignored in dirs:
|
||||
dirs.remove(ignored)
|
||||
found += [join(dir,f) for f in files if f.endswith('.bb')]
|
||||
|
||||
return found
|
||||
|
||||
def collect_bbfiles( self ):
|
||||
"""Collect all available .bb build files"""
|
||||
parsed, cached, skipped, masked = 0, 0, 0, 0
|
||||
self.bb_cache = bb.cache.init(self)
|
||||
|
||||
files = (data.getVar( "BBFILES", self.configuration.data, 1 ) or "").split()
|
||||
data.setVar("BBFILES", " ".join(files), self.configuration.data)
|
||||
|
||||
if not len(files):
|
||||
files = self.get_bbfiles()
|
||||
|
||||
if not len(files):
|
||||
bb.msg.error(bb.msg.domain.Collection, "no files to build.")
|
||||
|
||||
newfiles = []
|
||||
for f in files:
|
||||
if os.path.isdir(f):
|
||||
dirfiles = self.find_bbfiles(f)
|
||||
if dirfiles:
|
||||
newfiles += dirfiles
|
||||
continue
|
||||
newfiles += glob.glob(f) or [ f ]
|
||||
|
||||
bbmask = bb.data.getVar('BBMASK', self.configuration.data, 1)
|
||||
|
||||
if not bbmask:
|
||||
return (newfiles, 0)
|
||||
|
||||
try:
|
||||
bbmask_compiled = re.compile(bbmask)
|
||||
except sre_constants.error:
|
||||
bb.msg.fatal(bb.msg.domain.Collection, "BBMASK is not a valid regular expression.")
|
||||
|
||||
finalfiles = []
|
||||
for i in xrange( len( newfiles ) ):
|
||||
f = newfiles[i]
|
||||
if bbmask and bbmask_compiled.search(f):
|
||||
bb.msg.debug(1, bb.msg.domain.Collection, "skipping masked file %s" % f)
|
||||
masked += 1
|
||||
continue
|
||||
finalfiles.append(f)
|
||||
|
||||
return (finalfiles, masked)
|
||||
|
||||
def parse_bbfiles(self, filelist, masked, progressCallback = None):
|
||||
parsed, cached, skipped, error = 0, 0, 0, 0
|
||||
for i in xrange( len( filelist ) ):
|
||||
f = filelist[i]
|
||||
|
||||
bb.msg.debug(1, bb.msg.domain.Collection, "parsing %s" % f)
|
||||
|
||||
# read a file's metadata
|
||||
try:
|
||||
fromCache, skip = self.bb_cache.loadData(f, self.configuration.data)
|
||||
if skip:
|
||||
skipped += 1
|
||||
bb.msg.debug(2, bb.msg.domain.Collection, "skipping %s" % f)
|
||||
self.bb_cache.skip(f)
|
||||
continue
|
||||
elif fromCache: cached += 1
|
||||
else: parsed += 1
|
||||
deps = None
|
||||
|
||||
# Disabled by RP as was no longer functional
|
||||
# allow metadata files to add items to BBFILES
|
||||
#data.update_data(self.pkgdata[f])
|
||||
#addbbfiles = self.bb_cache.getVar('BBFILES', f, False) or None
|
||||
#if addbbfiles:
|
||||
# for aof in addbbfiles.split():
|
||||
# if not files.count(aof):
|
||||
# if not os.path.isabs(aof):
|
||||
# aof = os.path.join(os.path.dirname(f),aof)
|
||||
# files.append(aof)
|
||||
|
||||
self.bb_cache.handle_data(f, self.status)
|
||||
|
||||
# now inform the caller
|
||||
if progressCallback is not None:
|
||||
progressCallback( i + 1, len( filelist ), f, fromCache )
|
||||
|
||||
except IOError, e:
|
||||
self.bb_cache.remove(f)
|
||||
bb.msg.error(bb.msg.domain.Collection, "opening %s: %s" % (f, e))
|
||||
pass
|
||||
except KeyboardInterrupt:
|
||||
self.bb_cache.sync()
|
||||
raise
|
||||
except Exception, e:
|
||||
error += 1
|
||||
self.bb_cache.remove(f)
|
||||
bb.msg.error(bb.msg.domain.Collection, "%s while parsing %s" % (e, f))
|
||||
except:
|
||||
self.bb_cache.remove(f)
|
||||
raise
|
||||
|
||||
if progressCallback is not None:
|
||||
print "\r" # need newline after Handling Bitbake files message
|
||||
bb.msg.note(1, bb.msg.domain.Collection, "Parsing finished. %d cached, %d parsed, %d skipped, %d masked." % ( cached, parsed, skipped, masked ))
|
||||
|
||||
self.bb_cache.sync()
|
||||
|
||||
if error > 0:
|
||||
bb.msg.fatal(bb.msg.domain.Collection, "Parsing errors found, exiting...")
|
||||
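The `collect_bbfiles` method above globs each BBFILES entry and then drops any path matching the BBMASK regular expression. A minimal modern-Python sketch of just that mask step follows; the function name `apply_bbmask` is illustrative, not BitBake API:

```python
import re

def apply_bbmask(files, bbmask):
    """Drop any file whose path matches the BBMASK regex.

    Returns (kept_files, masked_count). An empty/unset mask keeps
    everything, mirroring the early return in collect_bbfiles.
    """
    if not bbmask:
        return files, 0
    mask = re.compile(bbmask)  # BitBake treats an invalid regex as a fatal error
    kept = [f for f in files if not mask.search(f)]
    return kept, len(files) - len(kept)
```

Note that, like the original, the mask uses `search` rather than `match`, so the pattern may hit anywhere in the path.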
570
bitbake/lib/bb/data.py
Normal file
@@ -0,0 +1,570 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Data' implementations

Functions for interacting with the data structure used by the
BitBake build tools.

The expandData and update_data are the most expensive
operations. At night the cookie monster came by and
suggested 'give me cookies on setting the variables and
things will work out'. Taking this suggestion into account
applying the skills from the not yet passed 'Entwurf und
Analyse von Algorithmen' lecture and the cookie
monster seems to be right. We will track setVar more carefully
to have faster update_data and expandKeys operations.

This is a trade-off between speed and memory again but
the speed is more critical here.
"""

# Copyright (C) 2003, 2004  Chris Larson
# Copyright (C) 2005        Holger Hans Peter Freyther
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import sys, os, re, time, types
if sys.argv[0][-5:] == "pydoc":
    path = os.path.dirname(os.path.dirname(sys.argv[1]))
else:
    path = os.path.dirname(os.path.dirname(sys.argv[0]))
sys.path.insert(0, path)

from bb import data_smart
import bb

_dict_type = data_smart.DataSmart

def init():
    return _dict_type()

def init_db(parent = None):
    if parent:
        return parent.createCopy()
    else:
        return _dict_type()

def createCopy(source):
    """Link the source set to the destination
    If one does not find the value in the destination set,
    search will go on to the source set to get the value.
    Values from the source are copy-on-write, i.e. any attempt to
    modify one of them will end up putting the modified value
    in the destination set.
    """
    return source.createCopy()

def initVar(var, d):
    """Non-destructive var init for data structure"""
    d.initVar(var)


def setVar(var, value, d):
    """Set a variable to a given value

    Example:
        >>> d = init()
        >>> setVar('TEST', 'testcontents', d)
        >>> print getVar('TEST', d)
        testcontents
    """
    d.setVar(var, value)


def getVar(var, d, exp = 0):
    """Gets the value of a variable

    Example:
        >>> d = init()
        >>> setVar('TEST', 'testcontents', d)
        >>> print getVar('TEST', d)
        testcontents
    """
    return d.getVar(var, exp)


def renameVar(key, newkey, d):
    """Renames a variable from key to newkey

    Example:
        >>> d = init()
        >>> setVar('TEST', 'testcontents', d)
        >>> renameVar('TEST', 'TEST2', d)
        >>> print getVar('TEST2', d)
        testcontents
    """
    d.renameVar(key, newkey)

def delVar(var, d):
    """Removes a variable from the data set

    Example:
        >>> d = init()
        >>> setVar('TEST', 'testcontents', d)
        >>> print getVar('TEST', d)
        testcontents
        >>> delVar('TEST', d)
        >>> print getVar('TEST', d)
        None
    """
    d.delVar(var)

def setVarFlag(var, flag, flagvalue, d):
    """Set a flag for a given variable to a given value

    Example:
        >>> d = init()
        >>> setVarFlag('TEST', 'python', 1, d)
        >>> print getVarFlag('TEST', 'python', d)
        1
    """
    d.setVarFlag(var, flag, flagvalue)

def getVarFlag(var, flag, d):
    """Gets given flag from given var

    Example:
        >>> d = init()
        >>> setVarFlag('TEST', 'python', 1, d)
        >>> print getVarFlag('TEST', 'python', d)
        1
    """
    return d.getVarFlag(var, flag)

def delVarFlag(var, flag, d):
    """Removes a given flag from the variable's flags

    Example:
        >>> d = init()
        >>> setVarFlag('TEST', 'testflag', 1, d)
        >>> print getVarFlag('TEST', 'testflag', d)
        1
        >>> delVarFlag('TEST', 'testflag', d)
        >>> print getVarFlag('TEST', 'testflag', d)
        None
    """
    d.delVarFlag(var, flag)

def setVarFlags(var, flags, d):
    """Set the flags for a given variable

    Note:
        setVarFlags will not clear previous
        flags. Think of this method as
        addVarFlags

    Example:
        >>> d = init()
        >>> myflags = {}
        >>> myflags['test'] = 'blah'
        >>> setVarFlags('TEST', myflags, d)
        >>> print getVarFlag('TEST', 'test', d)
        blah
    """
    d.setVarFlags(var, flags)

def getVarFlags(var, d):
    """Gets a variable's flags

    Example:
        >>> d = init()
        >>> setVarFlag('TEST', 'test', 'blah', d)
        >>> print getVarFlags('TEST', d)['test']
        blah
    """
    return d.getVarFlags(var)

def delVarFlags(var, d):
    """Removes a variable's flags

    Example:
        >>> data = init()
        >>> setVarFlag('TEST', 'testflag', 1, data)
        >>> print getVarFlag('TEST', 'testflag', data)
        1
        >>> delVarFlags('TEST', data)
        >>> print getVarFlags('TEST', data)
        None
    """
    d.delVarFlags(var)

def keys(d):
    """Return a list of keys in d

    Example:
        >>> d = init()
        >>> setVar('TEST', 1, d)
        >>> setVar('MOO', 2, d)
        >>> setVarFlag('TEST', 'test', 1, d)
        >>> keys(d)
        ['TEST', 'MOO']
    """
    return d.keys()

def getData(d):
    """Returns the data object used"""
    return d

def setData(newData, d):
    """Sets the data object to the supplied value"""
    d = newData


##
## Cookie Monsters' query functions
##
def _get_override_vars(d, override):
    """
    Internal!!!

    Get the names of variables that have a specific
    override. This function returns an iterable
    set or an empty list.
    """
    return []

def _get_var_flags_triple(d):
    """
    Internal!!!
    """
    return []

__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")

def expand(s, d, varname = None):
    """Variable expansion using the data store.

    Example:
        Standard expansion:
        >>> d = init()
        >>> setVar('A', 'sshd', d)
        >>> print expand('/usr/bin/${A}', d)
        /usr/bin/sshd

        Python expansion:
        >>> d = init()
        >>> print expand('result: ${@37 * 72}', d)
        result: 2664

        Shell expansion:
        >>> d = init()
        >>> print expand('${TARGET_MOO}', d)
        ${TARGET_MOO}
        >>> setVar('TARGET_MOO', 'yupp', d)
        >>> print expand('${TARGET_MOO}', d)
        yupp
        >>> setVar('SRC_URI', 'http://somebug.${TARGET_MOO}', d)
        >>> delVar('TARGET_MOO', d)
        >>> print expand('${SRC_URI}', d)
        http://somebug.${TARGET_MOO}
    """
    return d.expand(s, varname)

def expandKeys(alterdata, readdata = None):
    if readdata == None:
        readdata = alterdata

    todolist = {}
    for key in keys(alterdata):
        if not '${' in key:
            continue

        ekey = expand(key, readdata)
        if key == ekey:
            continue
        todolist[key] = ekey

    # These two for loops are split for performance to maximise the
    # usefulness of the expand cache

    for key in todolist:
        ekey = todolist[key]
        renameVar(key, ekey, alterdata)

def expandData(alterdata, readdata = None):
    """For each variable in alterdata, expand it, and update the var contents.
    Replacements use data from readdata.

    Example:
        >>> a = init()
        >>> b = init()
        >>> setVar("dlmsg", "dl_dir is ${DL_DIR}", a)
        >>> setVar("DL_DIR", "/path/to/whatever", b)
        >>> expandData(a, b)
        >>> print getVar("dlmsg", a)
        dl_dir is /path/to/whatever
    """
    if readdata == None:
        readdata = alterdata

    for key in keys(alterdata):
        val = getVar(key, alterdata)
        if type(val) is not types.StringType:
            continue
        expanded = expand(val, readdata)
        # print "key is %s, val is %s, expanded is %s" % (key, val, expanded)
        if val != expanded:
            setVar(key, expanded, alterdata)

import os

def inheritFromOS(d):
    """Inherit variables from the environment."""
    # fakeroot needs to be able to set these
    non_inherit_vars = [ "LD_LIBRARY_PATH", "LD_PRELOAD" ]
    for s in os.environ.keys():
        if not s in non_inherit_vars:
            try:
                setVar(s, os.environ[s], d)
                setVarFlag(s, 'matchesenv', '1', d)
            except TypeError:
                pass

import sys

def emit_var(var, o=sys.__stdout__, d = init(), all=False):
    """Emit a variable to be sourced by a shell."""
    if getVarFlag(var, "python", d):
        return 0

    export = getVarFlag(var, "export", d)
    unexport = getVarFlag(var, "unexport", d)
    func = getVarFlag(var, "func", d)
    if not all and not export and not unexport and not func:
        return 0

    try:
        if all:
            oval = getVar(var, d, 0)
        val = getVar(var, d, 1)
    except KeyboardInterrupt:
        raise
    except:
        excname = str(sys.exc_info()[0])
        if excname == "bb.build.FuncFailed":
            raise
        o.write('# expansion of %s threw %s\n' % (var, excname))
        return 0

    if all:
        o.write('# %s=%s\n' % (var, oval))

    if type(val) is not types.StringType:
        return 0

    if (var.find("-") != -1 or var.find(".") != -1 or var.find('{') != -1 or var.find('}') != -1 or var.find('+') != -1) and not all:
        return 0

    varExpanded = expand(var, d)

    if unexport:
        o.write('unset %s\n' % varExpanded)
        return 1

    if getVarFlag(var, 'matchesenv', d):
        return 0

    val.rstrip()
    if not val:
        return 0

    if func:
        # NOTE: should probably check for unbalanced {} within the var
        o.write("%s() {\n%s\n}\n" % (varExpanded, val))
        return 1

    if export:
        o.write('export ')

    # if we're going to output this within doublequotes,
    # to a shell, we need to escape the quotes in the var
    alter = re.sub('"', '\\"', val.strip())
    o.write('%s="%s"\n' % (varExpanded, alter))
    return 1


def emit_env(o=sys.__stdout__, d = init(), all=False):
    """Emits all items in the data store in a format such that it can be sourced by a shell."""

    env = keys(d)

    for e in env:
        if getVarFlag(e, "func", d):
            continue
        emit_var(e, o, d, all) and o.write('\n')

    for e in env:
        if not getVarFlag(e, "func", d):
            continue
        emit_var(e, o, d) and o.write('\n')

def update_data(d):
    """Modifies the environment vars according to local overrides and commands.
    Examples:
        Appending to a variable:
        >>> d = init()
        >>> setVar('TEST', 'this is a', d)
        >>> setVar('TEST_append', ' test', d)
        >>> setVar('TEST_append', ' of the emergency broadcast system.', d)
        >>> update_data(d)
        >>> print getVar('TEST', d)
        this is a test of the emergency broadcast system.

        Prepending to a variable:
        >>> setVar('TEST', 'virtual/libc', d)
        >>> setVar('TEST_prepend', 'virtual/tmake ', d)
        >>> setVar('TEST_prepend', 'virtual/patcher ', d)
        >>> update_data(d)
        >>> print getVar('TEST', d)
        virtual/patcher virtual/tmake virtual/libc

        Overrides:
        >>> setVar('TEST_arm', 'target', d)
        >>> setVar('TEST_ramses', 'machine', d)
        >>> setVar('TEST_local', 'local', d)
        >>> setVar('OVERRIDES', 'arm', d)

        >>> setVar('TEST', 'original', d)
        >>> update_data(d)
        >>> print getVar('TEST', d)
        target

        >>> setVar('OVERRIDES', 'arm:ramses:local', d)
        >>> setVar('TEST', 'original', d)
        >>> update_data(d)
        >>> print getVar('TEST', d)
        local

        CopyMonster:
        >>> e = d.createCopy()
        >>> setVar('TEST_foo', 'foo', e)
        >>> update_data(e)
        >>> print getVar('TEST', e)
        local

        >>> setVar('OVERRIDES', 'arm:ramses:local:foo', e)
        >>> update_data(e)
        >>> print getVar('TEST', e)
        foo

        >>> f = d.createCopy()
        >>> setVar('TEST_moo', 'something', f)
        >>> setVar('OVERRIDES', 'moo:arm:ramses:local:foo', e)
        >>> update_data(e)
        >>> print getVar('TEST', e)
        foo


        >>> h = init()
        >>> setVar('SRC_URI', 'file://append.foo;patch=1 ', h)
        >>> g = h.createCopy()
        >>> setVar('SRC_URI_append_arm', 'file://other.foo;patch=1', g)
        >>> setVar('OVERRIDES', 'arm:moo', g)
        >>> update_data(g)
        >>> print getVar('SRC_URI', g)
        file://append.foo;patch=1 file://other.foo;patch=1

    """
    bb.msg.debug(2, bb.msg.domain.Data, "update_data()")

    # now ask the cookie monster for help
    #print "Cookie Monster"
    #print "Append/Prepend %s" % d._special_values
    #print "Overrides %s" % d._seen_overrides

    overrides = (getVar('OVERRIDES', d, 1) or "").split(':') or []

    #
    # Well let us see what breaks here. We used to iterate
    # over each variable and apply the override and then
    # do the line expanding.
    # If we have bad luck - which we will have - the keys
    # were in some order that is so important for this
    # method which we don't have anymore.
    # Anyway we will fix that and write test cases this
    # time.

    #
    # First we apply all overrides
    # Then we will handle _append and _prepend
    #

    for o in overrides:
        # calculate '_'+override
        l = len(o) + 1

        # see if one should even try
        if not d._seen_overrides.has_key(o):
            continue

        vars = d._seen_overrides[o]
        for var in vars:
            name = var[:-l]
            try:
                d[name] = d[var]
            except:
                bb.msg.note(1, bb.msg.domain.Data, "Untracked delVar")

    # now on to the appends and prepends
    if d._special_values.has_key('_append'):
        appends = d._special_values['_append'] or []
        for append in appends:
            for (a, o) in getVarFlag(append, '_append', d) or []:
                # maybe the OVERRIDE was not yet added so keep the append
                if (o and o in overrides) or not o:
                    delVarFlag(append, '_append', d)
                if o and not o in overrides:
                    continue

                sval = getVar(append, d) or ""
                sval += a
                setVar(append, sval, d)


    if d._special_values.has_key('_prepend'):
        prepends = d._special_values['_prepend'] or []

        for prepend in prepends:
            for (a, o) in getVarFlag(prepend, '_prepend', d) or []:
                # maybe the OVERRIDE was not yet added so keep the prepend
                if (o and o in overrides) or not o:
                    delVarFlag(prepend, '_prepend', d)
                if o and not o in overrides:
                    continue

                sval = a + (getVar(prepend, d) or "")
                setVar(prepend, sval, d)


def inherits_class(klass, d):
    val = getVar('__inherit_cache', d) or []
    if os.path.join('classes', '%s.bbclass' % klass) in val:
        return True
    return False

def _test():
    """Start a doctest run on this module"""
    import doctest
    from bb import data
    doctest.testmod(data)

if __name__ == "__main__":
    _test()
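The `update_data` doctests above encode the resolution order: `VAR_<override>` values are applied first, with later entries in `OVERRIDES` winning, and only then are `VAR_append` / `VAR_prepend` suffixes folded in. A toy Python 3 model of that order (illustrative only; the real implementation is the `DataSmart` class below, and this sketch ignores per-override appends like `_append_arm`):

```python
def resolve(d, overrides):
    """Toy model of update_data(): apply VAR_<override> values
    (later overrides win), then fold in VAR_append / VAR_prepend.
    Not BitBake API -- a sketch of the documented semantics.
    """
    out = dict(d)
    # Pass 1: overrides, in OVERRIDES order, so the last match wins.
    for var in list(d):
        for o in overrides:
            key = "%s_%s" % (var, o)
            if key in d:
                out[var] = d[key]
    # Pass 2: unconditional appends and prepends.
    for var in list(d):
        if var.endswith("_append"):
            base = var[:-len("_append")]
            out[base] = out.get(base, "") + d[var]
        elif var.endswith("_prepend"):
            base = var[:-len("_prepend")]
            out[base] = d[var] + out.get(base, "")
    return out
```

This reproduces, for example, the doctest where `OVERRIDES = 'arm:ramses:local'` makes `TEST_local` win over `TEST_arm`.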
291
bitbake/lib/bb/data_smart.py
Normal file
@@ -0,0 +1,291 @@
|
||||
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake Smart Dictionary Implementation

Functions for interacting with the data structure used by the
BitBake build tools.

"""

# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2004, 2005 Seb Frankengul
# Copyright (C) 2005, 2006 Holger Hans Peter Freyther
# Copyright (C) 2005 Uli Luckas
# Copyright (C) 2005 ROAD GmbH
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import copy, os, re, sys, time, types
import bb
from bb import utils, methodpool
from COW import COWDictBase
from new import classobj


__setvar_keyword__ = ["_append", "_prepend"]
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend)(_(?P<add>.*))?')
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")


class DataSmart:
    def __init__(self, special = COWDictBase.copy(), seen = COWDictBase.copy() ):
        self.dict = {}

        # cookie monster tribute
        self._special_values = special
        self._seen_overrides = seen

        self.expand_cache = {}

    def expand(self, s, varname):
        def var_sub(match):
            key = match.group()[2:-1]
            if varname and key:
                if varname == key:
                    raise Exception("variable %s references itself!" % varname)
            var = self.getVar(key, 1)
            if var is not None:
                return var
            else:
                return match.group()

        def python_sub(match):
            import bb
            code = match.group()[3:-1]
            locals()['d'] = self
            s = eval(code)
            if type(s) == types.IntType: s = str(s)
            return s

        if type(s) is not types.StringType: # sanity check
            return s

        if varname and varname in self.expand_cache:
            return self.expand_cache[varname]

        while s.find('${') != -1:
            olds = s
            try:
                s = __expand_var_regexp__.sub(var_sub, s)
                s = __expand_python_regexp__.sub(python_sub, s)
                if s == olds: break
                if type(s) is not types.StringType: # sanity check
                    bb.msg.error(bb.msg.domain.Data, 'expansion of %s returned non-string %s' % (olds, s))
            except KeyboardInterrupt:
                raise
            except:
                bb.msg.note(1, bb.msg.domain.Data, "%s:%s while evaluating:\n%s" % (sys.exc_info()[0], sys.exc_info()[1], s))
                raise

        if varname:
            self.expand_cache[varname] = s

        return s

    def initVar(self, var):
        self.expand_cache = {}
        if not var in self.dict:
            self.dict[var] = {}

    def _findVar(self, var):
        _dest = self.dict

        while (_dest and var not in _dest):
            if not "_data" in _dest:
                _dest = None
                break
            _dest = _dest["_data"]

        if _dest and var in _dest:
            return _dest[var]
        return None

    def _makeShadowCopy(self, var):
        if var in self.dict:
            return

        local_var = self._findVar(var)

        if local_var:
            self.dict[var] = copy.copy(local_var)
        else:
            self.initVar(var)

    def setVar(self, var, value):
        self.expand_cache = {}
        match = __setvar_regexp__.match(var)
        if match and match.group("keyword") in __setvar_keyword__:
            base = match.group('base')
            keyword = match.group("keyword")
            override = match.group('add')
            l = self.getVarFlag(base, keyword) or []
            l.append([value, override])
            self.setVarFlag(base, keyword, l)

            # todo make sure keyword is not __doc__ or __module__
            # pay the cookie monster
            try:
                self._special_values[keyword].add( base )
            except:
                self._special_values[keyword] = set()
                self._special_values[keyword].add( base )

            return

        if not var in self.dict:
            self._makeShadowCopy(var)
        if self.getVarFlag(var, 'matchesenv'):
            self.delVarFlag(var, 'matchesenv')
            self.setVarFlag(var, 'export', 1)

        # more cookies for the cookie monster
        if '_' in var:
            override = var[var.rfind('_')+1:]
            if not self._seen_overrides.has_key(override):
                self._seen_overrides[override] = set()
            self._seen_overrides[override].add( var )

        # setting var
        self.dict[var]["content"] = value

    def getVar(self, var, exp):
        value = self.getVarFlag(var, "content")

        if exp and value:
            return self.expand(value, var)
        return value

    def renameVar(self, key, newkey):
        """
        Rename the variable key to newkey
        """
        val = self.getVar(key, 0)
        if val is None:
            return

        self.setVar(newkey, val)

        for i in ('_append', '_prepend'):
            dest = self.getVarFlag(newkey, i) or []
            src = self.getVarFlag(key, i) or []
            dest.extend(src)
            self.setVarFlag(newkey, i, dest)

            if self._special_values.has_key(i) and key in self._special_values[i]:
                self._special_values[i].remove(key)
                self._special_values[i].add(newkey)

        self.delVar(key)

    def delVar(self, var):
        self.expand_cache = {}
        self.dict[var] = {}

    def setVarFlag(self, var, flag, flagvalue):
        if not var in self.dict:
            self._makeShadowCopy(var)
        self.dict[var][flag] = flagvalue

    def getVarFlag(self, var, flag):
        local_var = self._findVar(var)
        if local_var:
            if flag in local_var:
                return copy.copy(local_var[flag])
        return None

    def delVarFlag(self, var, flag):
        local_var = self._findVar(var)
        if not local_var:
            return
        if not var in self.dict:
            self._makeShadowCopy(var)

        if var in self.dict and flag in self.dict[var]:
            del self.dict[var][flag]

    def setVarFlags(self, var, flags):
        if not var in self.dict:
            self._makeShadowCopy(var)

        for i in flags.keys():
            if i == "content":
                continue
            self.dict[var][i] = flags[i]

    def getVarFlags(self, var):
        local_var = self._findVar(var)
        flags = {}

        if local_var:
            for i in self.dict[var].keys():
                if i == "content":
                    continue
                flags[i] = self.dict[var][i]

        if len(flags) == 0:
            return None
        return flags


    def delVarFlags(self, var):
        if not var in self.dict:
            self._makeShadowCopy(var)

        if var in self.dict:
            content = None

            # try to save the content
            if "content" in self.dict[var]:
                content = self.dict[var]["content"]
                self.dict[var] = {}
                self.dict[var]["content"] = content
            else:
                del self.dict[var]


    def createCopy(self):
        """
        Create a copy of self by setting _data to self
        """
        # we really want this to be a DataSmart...
        data = DataSmart(seen=self._seen_overrides.copy(), special=self._special_values.copy())
        data.dict["_data"] = self.dict

        return data

    # Dictionary Methods
    def keys(self):
        def _keys(d, mykey):
            if "_data" in d:
                _keys(d["_data"], mykey)

            for key in d.keys():
                if key != "_data":
                    mykey[key] = None
        keytab = {}
        _keys(self.dict, keytab)
        return keytab.keys()

    def __getitem__(self, item):
        #print "Warning deprecated"
        return self.getVar(item, False)

    def __setitem__(self, var, data):
        #print "Warning deprecated"
        self.setVar(var, data)
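The heart of `DataSmart.expand()` is a fixed-point loop: substitute `${VAR}` references with a regex until either nothing contains `${` or a pass makes no progress (unknown references are left in place). A standalone sketch of just that loop, using a plain dict instead of the data store (names here are illustrative only):

```python
import re

# Same variable-reference pattern as data_smart.py's __expand_var_regexp__.
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")

def expand(s, d):
    def var_sub(match):
        key = match.group()[2:-1]          # strip the ${ and }
        val = d.get(key)
        return val if val is not None else match.group()

    while s.find('${') != -1:
        olds = s
        s = __expand_var_regexp__.sub(var_sub, s)
        if s == olds:                       # no progress: unknown refs remain
            break
    return s

d = {"TARGET": "${ARCH}-linux", "ARCH": "arm"}
print(expand("gcc for ${TARGET}", d))  # gcc for arm-linux
```

Note the loop handles nested definitions (`${TARGET}` expands to a string that itself contains `${ARCH}`) because each pass re-scans the whole string.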
bitbake/lib/bb/event.py (new file, 266 lines)
@@ -0,0 +1,266 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Event' implementation

Classes and functions for manipulating 'events' in the
BitBake build tools.
"""

# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os, re
import bb.utils

class Event:
    """Base class for events"""
    type = "Event"

    def __init__(self, d):
        self._data = d

    def getData(self):
        return self._data

    def setData(self, data):
        self._data = data

    data = property(getData, setData, None, "data property")

NotHandled = 0
Handled = 1

Registered = 10
AlreadyRegistered = 14

# Internal
_handlers = []
_handlers_dict = {}

def tmpHandler(event):
    """Default handler for code events"""
    return NotHandled

def defaultTmpHandler():
    tmp = "def tmpHandler(e):\n\t\"\"\"heh\"\"\"\n\treturn NotHandled"
    comp = bb.utils.better_compile(tmp, "tmpHandler(e)", "bb.event.defaultTmpHandler")
    return comp

def fire(event):
    """Fire off an Event"""
    for h in _handlers:
        if type(h).__name__ == "code":
            exec(h)
            if tmpHandler(event) == Handled:
                return Handled
        else:
            if h(event) == Handled:
                return Handled
    return NotHandled

def register(name, handler):
    """Register an Event handler"""

    # already registered
    if name in _handlers_dict:
        return AlreadyRegistered

    if handler is not None:
        # handle string containing python code
        if type(handler).__name__ == "str":
            _registerCode(handler)
        else:
            _handlers.append(handler)

    _handlers_dict[name] = 1
    return Registered

def _registerCode(handlerStr):
    """Register a 'code' Event.
    Deprecated interface; call register instead.

    Expects to be passed python code as a string, which will
    be passed in turn to compile() and then exec(). Note that
    the code will be within a function, so should have had
    appropriate tabbing put in place."""
    tmp = "def tmpHandler(e):\n%s" % handlerStr
    comp = bb.utils.better_compile(tmp, "tmpHandler(e)", "bb.event._registerCode")
    # prevent duplicate registration
    _handlers.append(comp)

def remove(name, handler):
    """Remove an Event handler"""

    _handlers_dict.pop(name)
    if type(handler).__name__ == "str":
        return _removeCode(handler)
    else:
        _handlers.remove(handler)

def _removeCode(handlerStr):
    """Remove a 'code' Event handler
    Deprecated interface; call remove instead."""
    tmp = "def tmpHandler(e):\n%s" % handlerStr
    comp = bb.utils.better_compile(tmp, "tmpHandler(e)", "bb.event._removeCode")
    _handlers.remove(comp)

def getName(e):
    """Returns the name of a class or class instance"""
    if getattr(e, "__name__", None) == None:
        return e.__class__.__name__
    else:
        return e.__name__

class ConfigParsed(Event):
    """Configuration Parsing Complete"""

class PkgBase(Event):
    """Base class for package events"""

    def __init__(self, t, d):
        self._pkg = t
        Event.__init__(self, d)

    def getPkg(self):
        return self._pkg

    def setPkg(self, pkg):
        self._pkg = pkg

    pkg = property(getPkg, setPkg, None, "pkg property")


class BuildBase(Event):
    """Base class for bbmake run events"""

    def __init__(self, n, p, c, failures = 0):
        self._name = n
        self._pkgs = p
        Event.__init__(self, c)
        self._failures = failures

    def getPkgs(self):
        return self._pkgs

    def setPkgs(self, pkgs):
        self._pkgs = pkgs

    def getName(self):
        return self._name

    def setName(self, name):
        self._name = name

    def getCfg(self):
        return self.data

    def setCfg(self, cfg):
        self.data = cfg

    def getFailures(self):
        """
        Return the number of failed packages
        """
        return self._failures

    pkgs = property(getPkgs, setPkgs, None, "pkgs property")
    name = property(getName, setName, None, "name property")
    cfg = property(getCfg, setCfg, None, "cfg property")


class DepBase(PkgBase):
    """Base class for dependency events"""

    def __init__(self, t, data, d):
        self._dep = d
        PkgBase.__init__(self, t, data)

    def getDep(self):
        return self._dep

    def setDep(self, dep):
        self._dep = dep

    dep = property(getDep, setDep, None, "dep property")


class PkgStarted(PkgBase):
    """Package build started"""


class PkgFailed(PkgBase):
    """Package build failed"""


class PkgSucceeded(PkgBase):
    """Package build completed"""


class BuildStarted(BuildBase):
    """bbmake build run started"""


class BuildCompleted(BuildBase):
    """bbmake build run completed"""


class UnsatisfiedDep(DepBase):
    """Unsatisfied Dependency"""


class RecursiveDep(DepBase):
    """Recursive Dependency"""

class NoProvider(Event):
    """No Provider for an Event"""

    def __init__(self, item, data, runtime=False):
        Event.__init__(self, data)
        self._item = item
        self._runtime = runtime

    def getItem(self):
        return self._item

    def isRuntime(self):
        return self._runtime

class MultipleProviders(Event):
    """Multiple Providers"""

    def __init__(self, item, candidates, data, runtime = False):
        Event.__init__(self, data)
        self._item = item
        self._candidates = candidates
        self._is_runtime = runtime

    def isRuntime(self):
        """
        Is this a runtime issue?
        """
        return self._is_runtime

    def getItem(self):
        """
        The name of the item to be built
        """
        return self._item

    def getCandidates(self):
        """
        Get the possible candidates for a PROVIDER.
        """
        return self._candidates
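bb.event keeps a module-level handler list plus a name-keyed dict to detect duplicate registration; `fire()` simply walks the list until a handler claims the event. A trimmed-down standalone sketch of that registry (illustrative only; it omits the compiled-code-string handler path):

```python
# Minimal sketch of bb.event's register()/fire() flow with callable handlers.

NotHandled = 0
Handled = 1
Registered = 10
AlreadyRegistered = 14

_handlers = []
_handlers_dict = {}

def register(name, handler):
    """Record a handler; refuse a second registration under the same name."""
    if name in _handlers_dict:
        return AlreadyRegistered
    if handler is not None:
        _handlers.append(handler)
    _handlers_dict[name] = 1
    return Registered

def fire(event):
    """Offer the event to each handler in turn; stop at the first Handled."""
    for h in _handlers:
        if h(event) == Handled:
            return Handled
    return NotHandled

seen = []
def logger(e):
    seen.append(e)
    return Handled

register("logger", logger)
print(fire("BuildStarted"), seen)  # 1 ['BuildStarted']
```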
bitbake/lib/bb/fetch/__init__.py (new file, 522 lines)
@@ -0,0 +1,522 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

Classes for obtaining upstream sources for the
BitBake build tools.
"""

# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os, re, fcntl
import bb
from bb import data
from bb import persist_data

try:
    import cPickle as pickle
except ImportError:
    import pickle

class FetchError(Exception):
    """Exception raised when a download fails"""

class NoMethodError(Exception):
    """Exception raised when there is no method to obtain a supplied url or set of urls"""

class MissingParameterError(Exception):
    """Exception raised when a fetch method is missing a critical parameter in the url"""

class ParameterError(Exception):
    """Exception raised when a url cannot be processed due to invalid parameters."""

class MD5SumError(Exception):
    """Exception raised when the MD5SUM of a file does not match the expected one"""

def uri_replace(uri, uri_find, uri_replace, d):
#   bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: operating on %s" % uri)
    if not uri or not uri_find or not uri_replace:
        bb.msg.debug(1, bb.msg.domain.Fetcher, "uri_replace: passed an undefined value, not replacing")
    uri_decoded = list(bb.decodeurl(uri))
    uri_find_decoded = list(bb.decodeurl(uri_find))
    uri_replace_decoded = list(bb.decodeurl(uri_replace))
    result_decoded = ['', '', '', '', '', {}]
    for i in uri_find_decoded:
        loc = uri_find_decoded.index(i)
        result_decoded[loc] = uri_decoded[loc]
        import types
        if type(i) == types.StringType:
            import re
            if (re.match(i, uri_decoded[loc])):
                result_decoded[loc] = re.sub(i, uri_replace_decoded[loc], uri_decoded[loc])
                if uri_find_decoded.index(i) == 2:
                    if d:
                        localfn = bb.fetch.localpath(uri, d)
                        if localfn:
                            result_decoded[loc] = os.path.dirname(result_decoded[loc]) + "/" + os.path.basename(bb.fetch.localpath(uri, d))
#               bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: matching %s against %s and replacing with %s" % (i, uri_decoded[loc], uri_replace_decoded[loc]))
            else:
#               bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: no match")
                return uri
#       else:
#           for j in i.keys():
    # FIXME: apply replacements against options
    return bb.encodeurl(result_decoded)

methods = []
urldata_cache = {}

def fetcher_init(d):
    """
    Called to initialize the fetchers once the configuration data is known.
    Calls before this must not hit the cache.
    """
    pd = persist_data.PersistData(d)
    # When to drop SCM head revisions is controlled by user policy
    srcrev_policy = bb.data.getVar('BB_SRCREV_POLICY', d, 1) or "clear"
    if srcrev_policy == "cache":
        bb.msg.debug(1, bb.msg.domain.Fetcher, "Keeping SRCREV cache due to cache policy of: %s" % srcrev_policy)
    elif srcrev_policy == "clear":
        bb.msg.debug(1, bb.msg.domain.Fetcher, "Clearing SRCREV cache due to cache policy of: %s" % srcrev_policy)
        pd.delDomain("BB_URI_HEADREVS")
    else:
        bb.msg.fatal(bb.msg.domain.Fetcher, "Invalid SRCREV cache policy of: %s" % srcrev_policy)
    # Make sure our domains exist
    pd.addDomain("BB_URI_HEADREVS")
    pd.addDomain("BB_URI_LOCALCOUNT")

# Function call order is usually:
#   1. init
#   2. go
#   3. localpaths
# localpath can be called at any time

def init(urls, d, setup = True):
    urldata = {}
    fn = bb.data.getVar('FILE', d, 1)
    if fn in urldata_cache:
        urldata = urldata_cache[fn]

    for url in urls:
        if url not in urldata:
            urldata[url] = FetchData(url, d)

    if setup:
        for url in urldata:
            if not urldata[url].setup:
                urldata[url].setup_localpath(d)

    urldata_cache[fn] = urldata
    return urldata

def go(d):
    """
    Fetch all urls
    init must have previously been called
    """
    urldata = init([], d, True)

    for u in urldata:
        ud = urldata[u]
        m = ud.method
        if ud.localfile:
            if not m.forcefetch(u, ud, d) and os.path.exists(ud.md5):
                # File already present along with md5 stamp file
                # Touch md5 file to show activity
                os.utime(ud.md5, None)
                continue
            lf = bb.utils.lockfile(ud.lockfile)
            if not m.forcefetch(u, ud, d) and os.path.exists(ud.md5):
                # If someone else fetched this before we got the lock,
                # notice and don't try again
                os.utime(ud.md5, None)
                bb.utils.unlockfile(lf)
                continue
        m.go(u, ud, d)
        if ud.localfile:
            if not m.forcefetch(u, ud, d):
                Fetch.write_md5sum(u, ud, d)
            bb.utils.unlockfile(lf)

def localpaths(d):
    """
    Return a list of the local filenames, assuming successful fetch
    """
    local = []
    urldata = init([], d, True)

    for u in urldata:
        ud = urldata[u]
        local.append(ud.localpath)

    return local

srcrev_internal_call = False

def get_srcrev(d):
    """
    Return the version string for the current package
    (usually to be used as PV)
    Most packages usually only have one SCM so we just pass on the call.
    In the multi SCM case, we build a value based on SRCREV_FORMAT which must
    have been set.
    """

    #
    # Ugly code alert. localpath in the fetchers will try to evaluate SRCREV which
    # could translate into a call to here. If it does, we need to catch this
    # and provide some way so it knows get_srcrev is active instead of being
    # some number etc. hence the srcrev_internal_call tracking and the magic
    # "SRCREVINACTION" return value.
    #
    # Neater solutions welcome!
    #
    if bb.fetch.srcrev_internal_call:
        return "SRCREVINACTION"

    scms = []

    # Only call setup_localpath on URIs which suppports_srcrev()
    urldata = init(bb.data.getVar('SRC_URI', d, 1).split(), d, False)
    for u in urldata:
        ud = urldata[u]
        if ud.method.suppports_srcrev():
            if not ud.setup:
                ud.setup_localpath(d)
            scms.append(u)

    if len(scms) == 0:
        bb.msg.error(bb.msg.domain.Fetcher, "SRCREV was used yet no valid SCM was found in SRC_URI")
        raise ParameterError

    if len(scms) == 1:
        return urldata[scms[0]].method.sortable_revision(scms[0], urldata[scms[0]], d)

    #
    # Multiple SCMs are in SRC_URI so we resort to SRCREV_FORMAT
    #
    format = bb.data.getVar('SRCREV_FORMAT', d, 1)
    if not format:
        bb.msg.error(bb.msg.domain.Fetcher, "The SRCREV_FORMAT variable must be set when multiple SCMs are used.")
        raise ParameterError

    for scm in scms:
        if 'name' in urldata[scm].parm:
            name = urldata[scm].parm["name"]
            rev = urldata[scm].method.sortable_revision(scm, urldata[scm], d)
            format = format.replace(name, rev)

    return format

def localpath(url, d, cache = True):
|
||||
"""
|
||||
Called from the parser with cache=False since the cache isn't ready
|
||||
at this point. Also called from classed in OE e.g. patch.bbclass
|
||||
"""
|
||||
ud = init([url], d)
|
||||
if ud[url].method:
|
||||
return ud[url].localpath
|
||||
return url
|
||||
|
||||
def runfetchcmd(cmd, d, quiet = False):
|
||||
"""
|
||||
Run cmd returning the command output
|
||||
Raise an error if interrupted or cmd fails
|
||||
Optionally echo command output to stdout
|
||||
"""
|
||||
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % cmd)
|
||||
|
||||
# Need to export PATH as binary could be in metadata paths
|
||||
# rather than host provided
|
||||
pathcmd = 'export PATH=%s; %s' % (data.expand('${PATH}', d), cmd)
|
||||
|
||||
stdout_handle = os.popen(pathcmd, "r")
|
||||
output = ""
|
||||
|
||||
while 1:
|
||||
line = stdout_handle.readline()
|
||||
if not line:
|
||||
break
|
||||
if not quiet:
|
||||
print line,
|
||||
output += line
|
||||
|
||||
status = stdout_handle.close() or 0
|
||||
signal = status >> 8
|
||||
exitstatus = status & 0xff
|
||||
|
||||
if signal:
|
||||
raise FetchError("Fetch command %s failed with signal %s, output:\n%s" % (pathcmd, signal, output))
|
||||
elif status != 0:
|
||||
raise FetchError("Fetch command %s failed with exit code %s, output:\n%s" % (pathcmd, status, output))
|
||||
|
||||
return output
|
||||
|
||||
class FetchData(object):
|
||||
"""
|
||||
A class which represents the fetcher state for a given URI.
|
||||
"""
|
||||
def __init__(self, url, d):
|
||||
self.localfile = ""
|
||||
(self.type, self.host, self.path, self.user, self.pswd, self.parm) = bb.decodeurl(data.expand(url, d))
|
||||
self.date = Fetch.getSRCDate(self, d)
|
||||
self.url = url
|
||||
self.setup = False
|
||||
for m in methods:
|
||||
if m.supports(url, self, d):
|
||||
self.method = m
|
||||
return
|
||||
raise NoMethodError("Missing implementation for url %s" % url)
|
||||
|
||||
def setup_localpath(self, d):
|
||||
self.setup = True
|
||||
if "localpath" in self.parm:
|
||||
# if user sets localpath for file, use it instead.
|
||||
self.localpath = self.parm["localpath"]
|
||||
else:
|
||||
bb.fetch.srcrev_internal_call = True
|
||||
self.localpath = self.method.localpath(self.url, self, d)
|
||||
bb.fetch.srcrev_internal_call = False
|
||||
# We have to clear data's internal caches since the cached value of SRCREV is now wrong.
|
||||
# Horrible...
|
||||
bb.data.delVar("ISHOULDNEVEREXIST", d)
|
||||
self.md5 = self.localpath + '.md5'
|
||||
self.lockfile = self.localpath + '.lock'
|
||||
|
||||
|
||||
class Fetch(object):
|
||||
"""Base class for 'fetch'ing data"""
|
||||
|
||||
def __init__(self, urls = []):
|
||||
self.urls = []
|
||||
|
||||
def supports(self, url, urldata, d):
|
||||
"""
|
||||
Check to see if this fetch class supports a given url.
|
||||
"""
|
||||
return 0
|
||||
|
||||
def localpath(self, url, urldata, d):
|
||||
"""
|
||||
Return the local filename of a given url assuming a successful fetch.
|
||||
Can also setup variables in urldata for use in go (saving code duplication
|
||||
and duplicate code execution)
|
||||
"""
|
||||
return url
|
||||
|
||||
def setUrls(self, urls):
|
||||
self.__urls = urls
|
||||
|
||||
def getUrls(self):
|
||||
return self.__urls
|
||||
|
||||
urls = property(getUrls, setUrls, None, "Urls property")
|
||||
|
||||
def forcefetch(self, url, urldata, d):
|
||||
"""
|
||||
Force a fetch, even if localpath exists?
|
||||
"""
|
||||
return False
|
||||
|
||||
def suppports_srcrev(self):
|
||||
"""
|
||||
The fetcher supports auto source revisions (SRCREV)
|
||||
"""
|
||||
return False
|
||||
|
||||
def go(self, url, urldata, d):
|
||||
"""
|
||||
Fetch urls
|
||||
Assumes localpath was called first
|
||||
"""
|
||||
raise NoMethodError("Missing implementation for url")
|
||||
|
||||
def getSRCDate(urldata, d):
|
||||
"""
|
||||
Return the SRC Date for the component
|
||||
|
||||
d the bb.data module
|
||||
"""
|
||||
if "srcdate" in urldata.parm:
|
||||
return urldata.parm['srcdate']
|
||||
|
||||
pn = data.getVar("PN", d, 1)
|
||||
|
||||
if pn:
|
||||
return data.getVar("SRCDATE_%s" % pn, d, 1) or data.getVar("CVSDATE_%s" % pn, d, 1) or data.getVar("SRCDATE", d, 1) or data.getVar("CVSDATE", d, 1) or data.getVar("DATE", d, 1)
|
||||
|
||||
return data.getVar("SRCDATE", d, 1) or data.getVar("CVSDATE", d, 1) or data.getVar("DATE", d, 1)
|
||||
getSRCDate = staticmethod(getSRCDate)
|
||||
|
||||
def srcrev_internal_helper(ud, d):
|
||||
"""
|
||||
Return:
|
||||
a) a source revision if specified
|
||||
b) True if auto srcrev is in action
|
||||
c) False otherwise
|
||||
"""
|
||||
|
||||
if 'rev' in ud.parm:
|
||||
return ud.parm['rev']
|
||||
|
||||
if 'tag' in ud.parm:
|
||||
return ud.parm['tag']
|
||||
|
||||
rev = None
|
||||
if 'name' in ud.parm:
|
||||
pn = data.getVar("PN", d, 1)
|
||||
rev = data.getVar("SRCREV_pn-" + pn + "_" + ud.parm['name'], d, 1)
|
||||
if not rev:
|
||||
rev = data.getVar("SRCREV", d, 1)
|
||||
if not rev:
|
||||
return False
|
||||
if rev is "SRCREVINACTION":
|
||||
return True
|
||||
return rev
|
||||
|
||||
srcrev_internal_helper = staticmethod(srcrev_internal_helper)
|
||||
|
||||
def try_mirror(d, tarfn):
|
||||
"""
|
||||
Try to use a mirrored version of the sources. We do this
|
||||
to avoid massive loads on foreign cvs and svn servers.
|
||||
This method will be used by the different fetcher
|
||||
implementations.
|
||||
|
||||
d Is a bb.data instance
|
||||
tarfn is the name of the tarball
|
||||
"""
|
||||
tarpath = os.path.join(data.getVar("DL_DIR", d, 1), tarfn)
|
||||
if os.access(tarpath, os.R_OK):
|
||||
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists, skipping checkout." % tarfn)
|
||||
return True
|
||||
|
||||
pn = data.getVar('PN', d, True)
|
||||
src_tarball_stash = None
|
||||
if pn:
|
||||
        src_tarball_stash = (data.getVar('SRC_TARBALL_STASH_%s' % pn, d, True) or data.getVar('CVS_TARBALL_STASH_%s' % pn, d, True) or data.getVar('SRC_TARBALL_STASH', d, True) or data.getVar('CVS_TARBALL_STASH', d, True) or "").split()

        for stash in src_tarball_stash:
            fetchcmd = data.getVar("FETCHCOMMAND_mirror", d, True) or data.getVar("FETCHCOMMAND_wget", d, True)
            uri = stash + tarfn
            bb.msg.note(1, bb.msg.domain.Fetcher, "fetch " + uri)
            fetchcmd = fetchcmd.replace("${URI}", uri)
            ret = os.system(fetchcmd)
            if ret == 0:
                bb.msg.note(1, bb.msg.domain.Fetcher, "Fetched %s from tarball stash, skipping checkout" % tarfn)
                return True
        return False
    try_mirror = staticmethod(try_mirror)

    def verify_md5sum(ud, got_sum):
        """
        Verify the md5sum we wanted with the one we got
        """
        wanted_sum = None
        if 'md5sum' in ud.parm:
            wanted_sum = ud.parm['md5sum']
        if not wanted_sum:
            return True

        return wanted_sum == got_sum
    verify_md5sum = staticmethod(verify_md5sum)

    def write_md5sum(url, ud, d):
        if bb.which(data.getVar('PATH', d), 'md5sum'):
            try:
                md5pipe = os.popen('md5sum ' + ud.localpath)
                md5data = (md5pipe.readline().split() or [ "" ])[0]
                md5pipe.close()
            except OSError:
                md5data = ""

        # verify the md5sum
        if not Fetch.verify_md5sum(ud, md5data):
            raise MD5SumError(url)

        md5out = file(ud.md5, 'w')
        md5out.write(md5data)
        md5out.close()
    write_md5sum = staticmethod(write_md5sum)

    def latest_revision(self, url, ud, d):
        """
        Look in the cache for the latest revision, if not present ask the SCM.
        """
        if not hasattr(self, "_latest_revision"):
            raise ParameterError

        pd = persist_data.PersistData(d)
        key = self._revision_key(url, ud, d)
        rev = pd.getValue("BB_URI_HEADREVS", key)
        if rev != None:
            return str(rev)

        rev = self._latest_revision(url, ud, d)
        pd.setValue("BB_URI_HEADREVS", key, rev)
        return rev

    def sortable_revision(self, url, ud, d):
        """
        Create a sortable revision string of the form "<count>+<revision>".
        """
        if hasattr(self, "_sortable_revision"):
            return self._sortable_revision(url, ud, d)

        pd = persist_data.PersistData(d)
        key = self._revision_key(url, ud, d)
        latest_rev = self._build_revision(url, ud, d)
        last_rev = pd.getValue("BB_URI_LOCALCOUNT", key + "_rev")
        count = pd.getValue("BB_URI_LOCALCOUNT", key + "_count")

        if last_rev == latest_rev:
            return str(count + "+" + latest_rev)

        if count is None:
            count = "0"
        else:
            count = str(int(count) + 1)

        pd.setValue("BB_URI_LOCALCOUNT", key + "_rev", latest_rev)
        pd.setValue("BB_URI_LOCALCOUNT", key + "_count", count)

        return str(count + "+" + latest_rev)


import cvs
import git
import local
import svn
import wget
import svk
import ssh
import perforce
import bzr
import hg

methods.append(local.Local())
methods.append(wget.Wget())
methods.append(svn.Svn())
methods.append(git.Git())
methods.append(cvs.Cvs())
methods.append(svk.Svk())
methods.append(ssh.SSH())
methods.append(perforce.Perforce())
methods.append(bzr.Bzr())
methods.append(hg.Hg())
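The "count+revision" scheme in sortable_revision above can be sketched standalone; here a plain dict stands in for BitBake's PersistData store (the `cache` parameter and key names are illustrative, not BitBake API):

```python
# Minimal sketch of sortable_revision's "count+revision" scheme.
# `cache` is a stand-in for the persistent BB_URI_LOCALCOUNT store.
def sortable_revision(cache, key, latest_rev):
    # If the upstream revision is unchanged, reuse the stored count.
    if cache.get(key + "_rev") == latest_rev:
        return cache[key + "_count"] + "+" + latest_rev
    # Otherwise bump the local count and remember the new revision.
    count = str(int(cache.get(key + "_count", "-1")) + 1)
    cache[key + "_rev"] = latest_rev
    cache[key + "_count"] = count
    return count + "+" + latest_rev

cache = {}
print(sortable_revision(cache, "git:example", "abc"))  # -> 0+abc (first revision)
print(sortable_revision(cache, "git:example", "abc"))  # -> 0+abc (unchanged)
print(sortable_revision(cache, "git:example", "def"))  # -> 1+def (count bumped)
```

The point of the scheme is that the local count gives PV a monotonically increasing prefix even when the SCM's own revision identifiers (e.g. git hashes) do not sort.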
bitbake/lib/bb/fetch/bzr.py (new file, 154 lines)
@@ -0,0 +1,154 @@
"""
BitBake 'Fetch' implementation for bzr.

"""

# Copyright (C) 2007 Ross Burton
# Copyright (C) 2007 Richard Purdie
#
# Classes for obtaining upstream sources for the
# BitBake build tools.
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os
import sys
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd

class Bzr(Fetch):
    def supports(self, url, ud, d):
        return ud.type in ['bzr']

    def localpath(self, url, ud, d):

        # Create paths to bzr checkouts
        relpath = ud.path
        if relpath.startswith('/'):
            # Remove leading slash as os.path.join can't cope
            relpath = relpath[1:]
        ud.pkgdir = os.path.join(data.expand('${BZRDIR}', d), ud.host, relpath)

        revision = Fetch.srcrev_internal_helper(ud, d)
        if revision is True:
            ud.revision = self.latest_revision(url, ud, d)
        elif revision:
            ud.revision = revision

        if not ud.revision:
            ud.revision = self.latest_revision(url, ud, d)

        ud.localfile = data.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def _buildbzrcommand(self, ud, d, command):
        """
        Build up a bzr commandline based on ud
        command is "fetch", "update", "revno"
        """

        basecmd = data.expand('${FETCHCMD_bzr}', d)

        proto = "http"
        if "proto" in ud.parm:
            proto = ud.parm["proto"]

        bzrroot = ud.host + ud.path

        options = []

        if command == "revno":
            bzrcmd = "%s revno %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
        else:
            if ud.revision:
                options.append("-r %s" % ud.revision)

            if command == "fetch":
                bzrcmd = "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
            elif command == "update":
                bzrcmd = "%s pull %s --overwrite" % (basecmd, " ".join(options))
            else:
                raise FetchError("Invalid bzr command %s" % command)

        return bzrcmd

    def go(self, loc, ud, d):
        """Fetch url"""

        # try to use the tarball stash
        if Fetch.try_mirror(d, ud.localfile):
            bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping bzr checkout." % ud.localpath)
            return

        if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
            bzrcmd = self._buildbzrcommand(ud, d, "update")
            bb.msg.debug(1, bb.msg.domain.Fetcher, "BZR Update %s" % loc)
            os.chdir(os.path.join(ud.pkgdir, os.path.basename(ud.path)))
            runfetchcmd(bzrcmd, d)
        else:
            os.system("rm -rf %s" % os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)))
            bzrcmd = self._buildbzrcommand(ud, d, "fetch")
            bb.msg.debug(1, bb.msg.domain.Fetcher, "BZR Checkout %s" % loc)
            bb.mkdirhier(ud.pkgdir)
            os.chdir(ud.pkgdir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % bzrcmd)
            runfetchcmd(bzrcmd, d)

        os.chdir(ud.pkgdir)
        # tar them up to a defined filename
        try:
            runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.basename(ud.pkgdir)), d)
        except:
            t, v, tb = sys.exc_info()
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise t, v, tb

    def suppports_srcrev(self):
        return True

    def _revision_key(self, url, ud, d):
        """
        Return a unique key for the url
        """
        return "bzr:" + ud.pkgdir

    def _latest_revision(self, url, ud, d):
        """
        Return the latest upstream revision number
        """
        bb.msg.debug(2, bb.msg.domain.Fetcher, "BZR fetcher hitting network for %s" % url)

        output = runfetchcmd(self._buildbzrcommand(ud, d, "revno"), d, True)

        return output.strip()

    def _sortable_revision(self, url, ud, d):
        """
        Return a sortable revision number which in our case is the revision number
        """

        return self._build_revision(url, ud, d)

    def _build_revision(self, url, ud, d):
        return ud.revision
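A rough standalone illustration of the command strings _buildbzrcommand assembles (the `bzr` basecmd and the host/path values here are placeholders, not real configuration):

```python
# Standalone sketch of _buildbzrcommand's command assembly.
def build_bzr_command(host, path, revision, command, proto="http", basecmd="bzr"):
    bzrroot = host + path
    options = []
    if command == "revno":
        # revno never takes a -r option in this scheme
        return "%s revno %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
    if revision:
        options.append("-r %s" % revision)
    if command == "fetch":
        return "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
    elif command == "update":
        return "%s pull %s --overwrite" % (basecmd, " ".join(options))
    raise ValueError("Invalid bzr command %s" % command)

print(build_bzr_command("bzr.example.com", "/trunk", "42", "fetch"))
# -> bzr co -r 42 http://bzr.example.com/trunk
```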
bitbake/lib/bb/fetch/cvs.py (new file, 171 lines)
@@ -0,0 +1,171 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

Classes for obtaining upstream sources for the
BitBake build tools.

"""

# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
#

import os, re
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError

class Cvs(Fetch):
    """
    Class to fetch a module or modules from cvs repositories
    """
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with cvs.
        """
        return ud.type in ['cvs', 'pserver']

    def localpath(self, url, ud, d):
        if not "module" in ud.parm:
            raise MissingParameterError("cvs method needs a 'module' parameter")
        ud.module = ud.parm["module"]

        ud.tag = ""
        if 'tag' in ud.parm:
            ud.tag = ud.parm['tag']

        # Override the default date in certain cases
        if 'date' in ud.parm:
            ud.date = ud.parm['date']
        elif ud.tag:
            ud.date = ""

        norecurse = ''
        if 'norecurse' in ud.parm:
            norecurse = '_norecurse'

        fullpath = ''
        if 'fullpath' in ud.parm:
            fullpath = '_fullpath'

        ud.localfile = data.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def forcefetch(self, url, ud, d):
        if (ud.date == "now"):
            return True
        return False

    def go(self, loc, ud, d):

        # try to use the tarball stash
        if not self.forcefetch(loc, ud, d) and Fetch.try_mirror(d, ud.localfile):
            bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping cvs checkout." % ud.localpath)
            return

        method = "pserver"
        if "method" in ud.parm:
            method = ud.parm["method"]

        localdir = ud.module
        if "localdir" in ud.parm:
            localdir = ud.parm["localdir"]

        cvs_port = ""
        if "port" in ud.parm:
            cvs_port = ud.parm["port"]

        cvs_rsh = None
        if method == "ext":
            if "rsh" in ud.parm:
                cvs_rsh = ud.parm["rsh"]

        if method == "dir":
            cvsroot = ud.path
        else:
            cvsroot = ":" + method + ":" + ud.user
            if ud.pswd:
                cvsroot += ":" + ud.pswd
            cvsroot += "@" + ud.host + ":" + cvs_port + ud.path

        options = []
        if 'norecurse' in ud.parm:
            options.append("-l")
        if ud.date:
            options.append("-D %s" % ud.date)
        if ud.tag:
            options.append("-r %s" % ud.tag)

        localdata = data.createCopy(d)
        data.setVar('OVERRIDES', "cvs:%s" % data.getVar('OVERRIDES', localdata), localdata)
        data.update_data(localdata)

        data.setVar('CVSROOT', cvsroot, localdata)
        data.setVar('CVSCOOPTS', " ".join(options), localdata)
        data.setVar('CVSMODULE', ud.module, localdata)
        cvscmd = data.getVar('FETCHCOMMAND', localdata, 1)
        cvsupdatecmd = data.getVar('UPDATECOMMAND', localdata, 1)

        if cvs_rsh:
            cvscmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvscmd)
            cvsupdatecmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvsupdatecmd)

        # create module directory
        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory")
        pkg = data.expand('${PN}', d)
        pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)
        moddir = os.path.join(pkgdir, localdir)
        if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
            bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
            # update sources there
            os.chdir(moddir)
            myret = os.system(cvsupdatecmd)
        else:
            bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
            # check out sources there
            bb.mkdirhier(pkgdir)
            os.chdir(pkgdir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % cvscmd)
            myret = os.system(cvscmd)

        if myret != 0 or not os.access(moddir, os.R_OK):
            try:
                os.rmdir(moddir)
            except OSError:
                pass
            raise FetchError(ud.module)

        # tar them up to a defined filename
        if 'fullpath' in ud.parm:
            os.chdir(pkgdir)
            myret = os.system("tar -czf %s %s" % (ud.localpath, localdir))
        else:
            os.chdir(moddir)
            os.chdir('..')
            myret = os.system("tar -czf %s %s" % (ud.localpath, os.path.basename(moddir)))

        if myret != 0:
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise FetchError(ud.module)
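The CVSROOT assembly in go() above can be sketched in isolation; the host, user, and path values below are placeholders for whatever the URL parser supplies:

```python
# Sketch of the CVSROOT string the cvs fetcher builds.
# For method "dir" the raw path is the root; otherwise the classic
# :method:user[:pswd]@host:port/path form is assembled.
def build_cvsroot(method, user, pswd, host, port, path):
    if method == "dir":
        return path
    cvsroot = ":" + method + ":" + user
    if pswd:
        cvsroot += ":" + pswd
    cvsroot += "@" + host + ":" + port + path
    return cvsroot

print(build_cvsroot("pserver", "anonymous", "", "cvs.example.org", "", "/cvsroot/proj"))
# -> :pserver:anonymous@cvs.example.org:/cvsroot/proj
```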
bitbake/lib/bb/fetch/git.py (new file, 142 lines)
@@ -0,0 +1,142 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' git implementation

"""

# Copyright (C) 2005 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os, re
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import runfetchcmd

def prunedir(topdir):
    # Delete everything reachable from the directory named in 'topdir'.
    # CAUTION: This is dangerous!
    for root, dirs, files in os.walk(topdir, topdown=False):
        for name in files:
            os.remove(os.path.join(root, name))
        for name in dirs:
            os.rmdir(os.path.join(root, name))

class Git(Fetch):
    """Class to fetch a module or modules from git repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with git.
        """
        return ud.type in ['git']

    def localpath(self, url, ud, d):

        ud.proto = "rsync"
        if 'protocol' in ud.parm:
            ud.proto = ud.parm['protocol']

        ud.branch = ud.parm.get("branch", "")

        tag = Fetch.srcrev_internal_helper(ud, d)
        if tag is True:
            ud.tag = self.latest_revision(url, ud, d)
        elif tag:
            ud.tag = tag

        if not ud.tag:
            ud.tag = self.latest_revision(url, ud, d)

        if ud.tag == "master":
            ud.tag = self.latest_revision(url, ud, d)

        ud.localfile = data.expand('git_%s%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.tag), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def go(self, loc, ud, d):
        """Fetch url"""

        if Fetch.try_mirror(d, ud.localfile):
            bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists (or was stashed). Skipping git checkout." % ud.localpath)
            return

        gitsrcname = '%s%s' % (ud.host, ud.path.replace('/', '.'))

        repofilename = 'git_%s.tar.gz' % (gitsrcname)
        repofile = os.path.join(data.getVar("DL_DIR", d, 1), repofilename)
        repodir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)

        coname = '%s' % (ud.tag)
        codir = os.path.join(repodir, coname)

        if not os.path.exists(repodir):
            if Fetch.try_mirror(d, repofilename):
                bb.mkdirhier(repodir)
                os.chdir(repodir)
                runfetchcmd("tar -xzf %s" % (repofile), d)
            else:
                runfetchcmd("git clone -n %s://%s%s %s" % (ud.proto, ud.host, ud.path, repodir), d)

        os.chdir(repodir)
        # Remove all but the .git directory
        runfetchcmd("rm * -Rf", d)
        runfetchcmd("git fetch %s://%s%s" % (ud.proto, ud.host, ud.path), d)
        runfetchcmd("git fetch --tags %s://%s%s" % (ud.proto, ud.host, ud.path), d)
        runfetchcmd("git prune-packed", d)
        runfetchcmd("git pack-redundant --all | xargs -r rm", d)

        os.chdir(repodir)
        mirror_tarballs = data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True)
        if mirror_tarballs != "0":
            bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git repository")
            runfetchcmd("tar -czf %s %s" % (repofile, os.path.join(".", ".git", "*")), d)

        if os.path.exists(codir):
            prunedir(codir)

        bb.mkdirhier(codir)
        os.chdir(repodir)
        runfetchcmd("git read-tree %s" % (ud.tag), d)
        runfetchcmd("git checkout-index -q -f --prefix=%s -a" % (os.path.join(codir, "git", "")), d)

        os.chdir(codir)
        bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git checkout")
        runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.join(".", "*")), d)

        os.chdir(repodir)
        prunedir(codir)

    def suppports_srcrev(self):
        return True

    def _revision_key(self, url, ud, d):
        """
        Return a unique key for the url
        """
        return "git:" + ud.host + ud.path.replace('/', '.')

    def _latest_revision(self, url, ud, d):
        """
        Compute the HEAD revision for the url
        """
        output = runfetchcmd("git ls-remote %s://%s%s %s" % (ud.proto, ud.host, ud.path, ud.branch), d, True)
        return output.split()[0]

    def _build_revision(self, url, ud, d):
        return ud.tag
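The git fetcher's _latest_revision simply takes the first whitespace-separated field of `git ls-remote` output, which is the object hash. A minimal sketch with an illustrative (made-up) sample line:

```python
# git ls-remote prints "<sha>\t<refname>" per ref; the fetcher only needs
# the sha of the first matching line. The sample output here is fabricated.
def latest_revision(ls_remote_output):
    return ls_remote_output.split()[0]

sample = "3f786850e387550fdab836ed7e6dc881de23001b\trefs/heads/master\n"
print(latest_revision(sample))  # -> 3f786850e387550fdab836ed7e6dc881de23001b
```

Note this is why the fetcher passes `ud.branch` to ls-remote: restricting the ref pattern ensures the first line is the branch head being tracked.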
bitbake/lib/bb/fetch/hg.py (new file, 141 lines)
@@ -0,0 +1,141 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for mercurial DRCS (hg).

"""

# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2004 Marcin Juszkiewicz
# Copyright (C) 2007 Robert Schuster
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os, re
import sys
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd

class Hg(Fetch):
    """Class to fetch from mercurial repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with mercurial.
        """
        return ud.type in ['hg']

    def localpath(self, url, ud, d):
        if not "module" in ud.parm:
            raise MissingParameterError("hg method needs a 'module' parameter")

        ud.module = ud.parm["module"]

        # Create paths to mercurial checkouts
        relpath = ud.path
        if relpath.startswith('/'):
            # Remove leading slash as os.path.join can't cope
            relpath = relpath[1:]
        ud.pkgdir = os.path.join(data.expand('${HGDIR}', d), ud.host, relpath)
        ud.moddir = os.path.join(ud.pkgdir, ud.module)

        if 'rev' in ud.parm:
            ud.revision = ud.parm['rev']

        ud.localfile = data.expand('%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def _buildhgcommand(self, ud, d, command):
        """
        Build up a hg commandline based on ud
        command is "fetch", "update", "info"
        """

        basecmd = data.expand('${FETCHCMD_hg}', d)

        proto = "http"
        if "proto" in ud.parm:
            proto = ud.parm["proto"]

        host = ud.host
        if proto == "file":
            host = "/"
            ud.host = "localhost"

        hgroot = host + ud.path

        if command == "info":
            return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, ud.module)

        options = []
        if ud.revision:
            options.append("-r %s" % ud.revision)

        if command == "fetch":
            cmd = "%s clone %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
        elif command == "pull":
            cmd = "%s pull %s" % (basecmd, " ".join(options))
        elif command == "update":
            cmd = "%s update -C %s" % (basecmd, " ".join(options))
        else:
            raise FetchError("Invalid hg command %s" % command)

        return cmd

    def go(self, loc, ud, d):
        """Fetch url"""

        # try to use the tarball stash
        if Fetch.try_mirror(d, ud.localfile):
            bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping hg checkout." % ud.localpath)
            return

        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")

        if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
            updatecmd = self._buildhgcommand(ud, d, "pull")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
            # update sources there
            os.chdir(ud.moddir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % updatecmd)
            runfetchcmd(updatecmd, d)

            updatecmd = self._buildhgcommand(ud, d, "update")
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % updatecmd)
            runfetchcmd(updatecmd, d)
        else:
            fetchcmd = self._buildhgcommand(ud, d, "fetch")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
            # check out sources there
            bb.mkdirhier(ud.pkgdir)
            os.chdir(ud.pkgdir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % fetchcmd)
            runfetchcmd(fetchcmd, d)

        os.chdir(ud.pkgdir)
        try:
            runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
        except:
            t, v, tb = sys.exc_info()
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise t, v, tb
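As with the bzr fetcher, _buildhgcommand's string assembly can be sketched standalone; hostnames, module names, and the `hg` basecmd below are placeholder values:

```python
# Standalone sketch of _buildhgcommand's "info" vs "fetch"/"pull"/"update" paths.
def build_hg_command(host, path, module, revision, command, proto="http", basecmd="hg"):
    hgroot = host + path
    if command == "info":
        # "info" maps to `hg identify -i` against the remote repo
        return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, module)
    options = []
    if revision:
        options.append("-r %s" % revision)
    if command == "fetch":
        return "%s clone %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, hgroot, module, module)
    elif command == "pull":
        return "%s pull %s" % (basecmd, " ".join(options))
    elif command == "update":
        return "%s update -C %s" % (basecmd, " ".join(options))
    raise ValueError("Invalid hg command %s" % command)

print(build_hg_command("hg.example.org", "/repos", "proj", "10", "fetch"))
# -> hg clone -r 10 http://hg.example.org/repos/proj proj
```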
bitbake/lib/bb/fetch/local.py (new file, 61 lines)
@@ -0,0 +1,61 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

Classes for obtaining upstream sources for the
BitBake build tools.

"""

# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os, re
import bb
from bb import data
from bb.fetch import Fetch

class Local(Fetch):
    def supports(self, url, urldata, d):
        """
        Check to see if a given url can be fetched locally.
        """
        return urldata.type in ['file', 'patch']

    def localpath(self, url, urldata, d):
        """
        Return the local filename of a given url assuming a successful fetch.
        """
        path = url.split("://")[1]
        path = path.split(";")[0]
        newpath = path
        if path[0] != "/":
            filespath = data.getVar('FILESPATH', d, 1)
            if filespath:
                newpath = bb.which(filespath, path)
            if not newpath:
                filesdir = data.getVar('FILESDIR', d, 1)
                if filesdir:
                    newpath = os.path.join(filesdir, path)
        # We don't set localfile as for this fetcher the file is already local!
        return newpath

    def go(self, url, urldata, d):
        """Fetch urls (no-op for Local method)"""
        # no need to fetch local files, we'll deal with them in place.
        return 1
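The Local fetcher's resolution order (absolute path as-is, then FILESPATH search, then FILESDIR join) can be sketched without BitBake; `bb.which` is emulated here with a simple existence-checking loop, and the paths are illustrative:

```python
import os

# Sketch of Local.localpath's resolution order. filespath is a
# colon-separated search path, emulating bb.which; filesdir is a
# single fallback directory.
def local_path(url, filespath=None, filesdir=None):
    path = url.split("://")[1].split(";")[0]
    if path.startswith("/"):
        return path                      # absolute: use as-is
    if filespath:
        for directory in filespath.split(":"):
            candidate = os.path.join(directory, path)
            if os.path.exists(candidate):
                return candidate         # first hit on the search path wins
    if filesdir:
        return os.path.join(filesdir, path)  # fallback join, existence not checked
    return path

print(local_path("file:///etc/hostname"))  # -> /etc/hostname
```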
bitbake/lib/bb/fetch/perforce.py (new file, 213 lines)
@@ -0,0 +1,213 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

Classes for obtaining upstream sources for the
BitBake build tools.

"""

# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os, re
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError

class Perforce(Fetch):
    def supports(self, url, ud, d):
        return ud.type in ['p4']

    def doparse(url, d):
        parm = {}
        path = url.split("://")[1]
        delim = path.find("@")
        if delim != -1:
            (user, pswd, host, port) = path.split('@')[0].split(":")
            path = path.split('@')[1]
        else:
            (host, port) = data.getVar('P4PORT', d).split(':')
            user = ""
            pswd = ""

        if path.find(";") != -1:
            keys = []
            values = []
            plist = path.split(';')
            for item in plist:
                if item.count('='):
                    (key, value) = item.split('=')
                    keys.append(key)
                    values.append(value)

            parm = dict(zip(keys, values))
        path = "//" + path.split(';')[0]
        host += ":%s" % (port)
        parm["cset"] = Perforce.getcset(d, path, host, user, pswd, parm)

        return host, path, user, pswd, parm
    doparse = staticmethod(doparse)

    def getcset(d, depot, host, user, pswd, parm):
        if "cset" in parm:
            return parm["cset"]
        if user:
            data.setVar('P4USER', user, d)
        if pswd:
            data.setVar('P4PASSWD', pswd, d)
        if host:
            data.setVar('P4PORT', host, d)

        p4date = data.getVar("P4DATE", d, 1)
        if "revision" in parm:
            depot += "#%s" % (parm["revision"])
        elif "label" in parm:
            depot += "@%s" % (parm["label"])
        elif p4date:
            depot += "@%s" % (p4date)

        p4cmd = data.getVar('FETCHCOMMAND_p4', d, 1)
        bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s changes -m 1 %s" % (p4cmd, depot))
        p4file = os.popen("%s changes -m 1 %s" % (p4cmd, depot))
        cset = p4file.readline().strip()
        bb.msg.debug(1, bb.msg.domain.Fetcher, "READ %s" % (cset))
        if not cset:
            return -1

        return cset.split(' ')[1]
    getcset = staticmethod(getcset)

    def localpath(self, url, ud, d):

        (host, path, user, pswd, parm) = Perforce.doparse(url, d)

        # If a label is specified, we use that as our filename

        if "label" in parm:
            ud.localfile = "%s.tar.gz" % (parm["label"])
            return os.path.join(data.getVar("DL_DIR", d, 1), ud.localfile)

        base = path
        which = path.find('/...')
        if which != -1:
            base = path[:which]

        if base[0] == "/":
            base = base[1:]

        cset = Perforce.getcset(d, path, host, user, pswd, parm)

        ud.localfile = data.expand('%s+%s+%s.tar.gz' % (host, base.replace('/', '.'), cset), d)

        return os.path.join(data.getVar("DL_DIR", d, 1), ud.localfile)

    def go(self, loc, ud, d):
        """
        Fetch urls
        """

        # try to use the tarball stash
        if Fetch.try_mirror(d, ud.localfile):
            bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping perforce checkout." % ud.localpath)
            return

        (host, depot, user, pswd, parm) = Perforce.doparse(loc, d)

        if depot.find('/...') != -1:
            path = depot[:depot.find('/...')]
        else:
            path = depot

        if "module" in parm:
            module = parm["module"]
        else:
            module = os.path.basename(path)

        localdata = data.createCopy(d)
        data.setVar('OVERRIDES', "p4:%s" % data.getVar('OVERRIDES', localdata), localdata)
        data.update_data(localdata)

        # Get the p4 command
        if user:
            data.setVar('P4USER', user, localdata)

        if pswd:
            data.setVar('P4PASSWD', pswd, localdata)

        if host:
            data.setVar('P4PORT', host, localdata)

        p4cmd = data.getVar('FETCHCOMMAND', localdata, 1)

        # create temp directory
        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: creating temporary directory")
        bb.mkdirhier(data.expand('${WORKDIR}', localdata))
        data.setVar('TMPBASE', data.expand('${WORKDIR}/oep4.XXXXXX', localdata), localdata)
        tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, 1) or "false")
        tmpfile = tmppipe.readline().strip()
        if not tmpfile:
            bb.error("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.")
            raise FetchError(module)

        if "label" in parm:
            depot = "%s@%s" % (depot, parm["label"])
        else:
            cset = Perforce.getcset(d, depot, host, user, pswd, parm)
            depot = "%s@%s" % (depot, cset)

        os.chdir(tmpfile)
        bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
        bb.msg.note(1, bb.msg.domain.Fetcher, "%s files %s" % (p4cmd, depot))
        p4file = os.popen("%s files %s" % (p4cmd, depot))

        if not p4file:
            bb.error("Fetch: unable to get the P4 files from %s" % (depot))
            raise FetchError(module)

        count = 0

        for file in p4file:
            list = file.split()

            if list[2] == "delete":
                continue

            dest = list[0][len(path)+1:]
            where = dest.find("#")

            os.system("%s print -o %s/%s %s" % (p4cmd, module, dest[:where], list[0]))
            count = count + 1

        if count == 0:
            bb.error("Fetch: No files gathered from the P4 fetch")
            raise FetchError(module)

        myret = os.system("tar -czf %s %s" % (ud.localpath, module))
        if myret != 0:
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise FetchError(module)
        # cleanup
        os.system('rm -rf %s' % tmpfile)
bitbake/lib/bb/fetch/ssh.py (new file, 120 lines)
@@ -0,0 +1,120 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
'''
BitBake 'Fetch' implementations

This implementation is for Secure Shell (SSH), and attempts to comply with the
IETF secsh internet draft:
    http://tools.ietf.org/wg/secsh/draft-ietf-secsh-scp-sftp-ssh-uri/

Currently does not support the sftp parameters, as this uses scp.
Also does not support the 'fingerprint' connection parameter.

'''

# Copyright (C) 2006 OpenedHand Ltd.
#
# Based in part on svk.py:
#    Copyright (C) 2006 Holger Hans Peter Freyther
#    Based on svn.py:
#        Copyright (C) 2003, 2004 Chris Larson
#    Based on functions from the base bb module:
#        Copyright 2003 Holger Schurig
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import re, os
import bb
from   bb import data
from   bb.fetch import Fetch
from   bb.fetch import FetchError
from   bb.fetch import MissingParameterError


__pattern__ = re.compile(r'''
 \s*                     # Skip leading whitespace
 ssh://                  # scheme
 (                       # Optional username/password block
  (?P<user>\S+)          # username
  (:(?P<pass>\S+))?      # colon followed by the password (optional)
 )?
 (?P<cparam>(;[^;]+)*)?  # connection parameters block (optional)
 @
 (?P<host>\S+?)          # non-greedy match of the host
 (:(?P<port>[0-9]+))?    # colon followed by the port (optional)
 /
 (?P<path>[^;]+)         # path on the remote system, may be absolute or relative,
                         # and may include the use of '~' to reference the remote home
                         # directory
 (?P<sparam>(;[^;]+)*)?  # parameters block (optional)
 $
''', re.VERBOSE)

class SSH(Fetch):
    '''Class to fetch a module or modules via Secure Shell'''

    def supports(self, url, urldata, d):
        return __pattern__.match(url) is not None

    def localpath(self, url, urldata, d):
        m = __pattern__.match(url)
        path = m.group('path')
        host = m.group('host')
        lpath = os.path.join(data.getVar('DL_DIR', d, True), host, os.path.basename(path))
        return lpath

    def go(self, url, urldata, d):
        dldir = data.getVar('DL_DIR', d, 1)

        m = __pattern__.match(url)
        path = m.group('path')
        host = m.group('host')
        port = m.group('port')
        user = m.group('user')
        password = m.group('pass')

        ldir = os.path.join(dldir, host)
        lpath = os.path.join(ldir, os.path.basename(path))

        if not os.path.exists(ldir):
            os.makedirs(ldir)

        if port:
            port = '-P %s' % port
        else:
            port = ''

        if user:
            fr = user
            if password:
                fr += ':%s' % password
            fr += '@%s' % host
        else:
            fr = host
        fr += ':%s' % path

        import commands
        cmd = 'scp -B -r %s %s %s/' % (
            port,
            commands.mkarg(fr),
            commands.mkarg(ldir)
        )

        (exitstatus, output) = commands.getstatusoutput(cmd)
        if exitstatus != 0:
            print output
            raise FetchError('Unable to fetch %s' % url)
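As a quick check of how the verbose pattern above decomposes a URL, the snippet below reproduces the same regular expression (so it is self-contained) and matches a made-up ssh URL; the host, port, and user values are purely illustrative:

```python
import re

# Same pattern as in ssh.py above, reproduced so this snippet is
# self-contained; group names match what the fetcher reads in go().
SSH_URL = re.compile(r'''
 \s* ssh://                          # scheme
 ((?P<user>\S+)(:(?P<pass>\S+))?)?   # optional user[:password]
 (?P<cparam>(;[^;]+)*)?              # optional connection parameters
 @
 (?P<host>\S+?)                      # non-greedy host
 (:(?P<port>[0-9]+))?                # optional port
 /
 (?P<path>[^;]+)                     # remote path
 (?P<sparam>(;[^;]+)*)?              # optional parameters
 $
''', re.VERBOSE)

m = SSH_URL.match('ssh://joe@build.example.com:2222/srv/files/app.tar.gz')
user = m.group('user')
host = m.group('host')
port = m.group('port')
path = m.group('path')
```

These are exactly the pieces `go()` recombines into the `scp -B -r -P <port> <user>@<host>:<path> <dldir>/` command line.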
bitbake/lib/bb/fetch/svk.py (new file, 109 lines)
@@ -0,0 +1,109 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

This implementation is for svk. It is based on the svn implementation.

"""

# Copyright (C) 2006 Holger Hans Peter Freyther
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os, re
import bb
from   bb import data
from   bb.fetch import Fetch
from   bb.fetch import FetchError
from   bb.fetch import MissingParameterError

class Svk(Fetch):
    """Class to fetch a module or modules from svk repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with svk.
        """
        return ud.type in ['svk']

    def localpath(self, url, ud, d):
        if not "module" in ud.parm:
            raise MissingParameterError("svk method needs a 'module' parameter")
        else:
            ud.module = ud.parm["module"]

        ud.revision = ""
        if 'rev' in ud.parm:
            ud.revision = ud.parm['rev']

        ud.localfile = data.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ud.date), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def forcefetch(self, url, ud, d):
        if (ud.date == "now"):
            return True
        return False

    def go(self, loc, ud, d):
        """Fetch urls"""

        if not self.forcefetch(loc, ud, d) and Fetch.try_mirror(d, ud.localfile):
            return

        svkroot = ud.host + ud.path

        svkcmd = "svk co -r {%s} %s/%s" % (ud.date, svkroot, ud.module)

        if ud.revision:
            svkcmd = "svk co -r %s %s/%s" % (ud.revision, svkroot, ud.module)

        # create temp directory
        localdata = data.createCopy(d)
        data.update_data(localdata)
        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: creating temporary directory")
        bb.mkdirhier(data.expand('${WORKDIR}', localdata))
        data.setVar('TMPBASE', data.expand('${WORKDIR}/oesvk.XXXXXX', localdata), localdata)
        tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, 1) or "false")
        tmpfile = tmppipe.readline().strip()
        if not tmpfile:
            bb.msg.error(bb.msg.domain.Fetcher, "Fetch: unable to create temporary directory. Make sure 'mktemp' is in the PATH.")
            raise FetchError(ud.module)

        # check out sources there
        os.chdir(tmpfile)
        bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
        bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svkcmd)
        myret = os.system(svkcmd)
        if myret != 0:
            try:
                os.rmdir(tmpfile)
            except OSError:
                pass
            raise FetchError(ud.module)

        os.chdir(os.path.join(tmpfile, os.path.dirname(ud.module)))
        # tar them up to a defined filename
        myret = os.system("tar -czf %s %s" % (ud.localpath, os.path.basename(ud.module)))
        if myret != 0:
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise FetchError(ud.module)
        # cleanup
        os.system('rm -rf %s' % tmpfile)
bitbake/lib/bb/fetch/svn.py (new file, 204 lines)
@@ -0,0 +1,204 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for svn.

"""

# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2004 Marcin Juszkiewicz
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os, re
import sys
import bb
from   bb import data
from   bb.fetch import Fetch
from   bb.fetch import FetchError
from   bb.fetch import MissingParameterError
from   bb.fetch import runfetchcmd

class Svn(Fetch):
    """Class to fetch a module or modules from svn repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with svn.
        """
        return ud.type in ['svn']

    def localpath(self, url, ud, d):
        if not "module" in ud.parm:
            raise MissingParameterError("svn method needs a 'module' parameter")

        ud.module = ud.parm["module"]

        # Create paths to svn checkouts
        relpath = ud.path
        if relpath.startswith('/'):
            # Remove leading slash as os.path.join can't cope
            relpath = relpath[1:]
        ud.pkgdir = os.path.join(data.expand('${SVNDIR}', d), ud.host, relpath)
        ud.moddir = os.path.join(ud.pkgdir, ud.module)

        if 'rev' in ud.parm:
            ud.date = ""
            ud.revision = ud.parm['rev']
        elif 'date' in ud.parm:
            ud.date = ud.parm['date']
            ud.revision = ""
        else:
            #
            # ***Nasty hack***
            # If DATE is in the unexpanded PV, use ud.date (which is set from SRCDATE)
            # Should warn people to switch to SRCREV here
            #
            pv = data.getVar("PV", d, 0)
            if "DATE" in pv:
                ud.revision = ""
            else:
                rev = Fetch.srcrev_internal_helper(ud, d)
                if rev is True:
                    ud.revision = self.latest_revision(url, ud, d)
                    ud.date = ""
                elif rev:
                    ud.revision = rev
                    ud.date = ""
                else:
                    ud.revision = ""

        ud.localfile = data.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ud.date), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def _buildsvncommand(self, ud, d, command):
        """
        Build up an svn commandline based on ud
        command is "fetch", "update", "info"
        """

        basecmd = data.expand('${FETCHCMD_svn}', d)

        proto = "svn"
        if "proto" in ud.parm:
            proto = ud.parm["proto"]

        svn_rsh = None
        if proto == "svn+ssh" and "rsh" in ud.parm:
            svn_rsh = ud.parm["rsh"]

        svnroot = ud.host + ud.path

        # either use the revision, or SRCDATE in braces
        options = []

        if ud.user:
            options.append("--username %s" % ud.user)

        if ud.pswd:
            options.append("--password %s" % ud.pswd)

        if command == "info":
            svncmd = "%s info %s %s://%s/%s/" % (basecmd, " ".join(options), proto, svnroot, ud.module)
        else:
            if ud.revision:
                options.append("-r %s" % ud.revision)
            elif ud.date:
                options.append("-r {%s}" % ud.date)

            if command == "fetch":
                svncmd = "%s co %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, svnroot, ud.module, ud.module)
            elif command == "update":
                svncmd = "%s update %s" % (basecmd, " ".join(options))
            else:
                raise FetchError("Invalid svn command %s" % command)

        if svn_rsh:
            svncmd = "SVN_RSH=\"%s\" %s" % (svn_rsh, svncmd)

        return svncmd

    def go(self, loc, ud, d):
        """Fetch url"""

        # try to use the tarball stash
        if Fetch.try_mirror(d, ud.localfile):
            bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping svn checkout." % ud.localpath)
            return

        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")

        if os.access(os.path.join(ud.moddir, '.svn'), os.R_OK):
            svnupdatecmd = self._buildsvncommand(ud, d, "update")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
            # update sources there
            os.chdir(ud.moddir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svnupdatecmd)
            runfetchcmd(svnupdatecmd, d)
        else:
            svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
            # check out sources there
            bb.mkdirhier(ud.pkgdir)
            os.chdir(ud.pkgdir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svnfetchcmd)
            runfetchcmd(svnfetchcmd, d)

        os.chdir(ud.pkgdir)
        # tar them up to a defined filename
        try:
            runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
        except:
            t, v, tb = sys.exc_info()
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise t, v, tb

    def supports_srcrev(self):
        return True

    def _revision_key(self, url, ud, d):
        """
        Return a unique key for the url
        """
        return "svn:" + ud.moddir

    def _latest_revision(self, url, ud, d):
        """
        Return the latest upstream revision number
        """
        bb.msg.debug(2, bb.msg.domain.Fetcher, "SVN fetcher hitting network for %s" % url)

        output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "info"), d, True)

        revision = None
        for line in output.splitlines():
            if "Last Changed Rev" in line:
                revision = line.split(":")[1].strip()

        return revision

    def _sortable_revision(self, url, ud, d):
        """
        Return a sortable revision number which in our case is the revision number
        """

        return self._build_revision(url, ud, d)

    def _build_revision(self, url, ud, d):
        return ud.revision
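The `Last Changed Rev` scrape in `_latest_revision` above can be exercised standalone; the `svn info` dump below is a made-up sample with the typical field layout:

```python
# Standalone version of the "Last Changed Rev" scrape in _latest_revision;
# sample_output imitates typical `svn info` output (values are invented).
sample_output = """Path: trunk
URL: svn://svn.example.org/repo/trunk
Revision: 1240
Last Changed Author: builder
Last Changed Rev: 1234
Last Changed Date: 2007-01-01 12:00:00 +0000
"""

revision = None
for line in sample_output.splitlines():
    if "Last Changed Rev" in line:
        # "Last Changed Rev: 1234" -> ["Last Changed Rev", " 1234"]
        revision = line.split(":")[1].strip()
```

Running `svn info` with `LANG=C LC_ALL=C`, as the fetcher does, is what makes this English field name safe to match on.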
bitbake/lib/bb/fetch/wget.py (new file, 99 lines)
@@ -0,0 +1,99 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

Classes for obtaining upstream sources for the
BitBake build tools.

"""

# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os, re
import bb
from   bb import data
from   bb.fetch import Fetch
from   bb.fetch import FetchError
from   bb.fetch import uri_replace

class Wget(Fetch):
    """Class to fetch urls via 'wget'"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with wget.
        """
        return ud.type in ['http', 'https', 'ftp']

    def localpath(self, url, ud, d):

        url = bb.encodeurl([ud.type, ud.host, ud.path, ud.user, ud.pswd, {}])
        ud.basename = os.path.basename(ud.path)
        ud.localfile = data.expand(os.path.basename(url), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def go(self, uri, ud, d):
        """Fetch urls"""

        def fetch_uri(uri, ud, d):
            if os.path.exists(ud.localpath):
                # file exists, but we didn't complete it, so try again
                fetchcmd = data.getVar("RESUMECOMMAND", d, 1)
            else:
                fetchcmd = data.getVar("FETCHCOMMAND", d, 1)

            bb.msg.note(1, bb.msg.domain.Fetcher, "fetch " + uri)
            fetchcmd = fetchcmd.replace("${URI}", uri)
            fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
            bb.msg.debug(2, bb.msg.domain.Fetcher, "executing " + fetchcmd)
            ret = os.system(fetchcmd)
            if ret != 0:
                return False

            # Sanity check since wget can pretend it succeeded when it didn't
            # Also, this used to happen if sourceforge sent us to the mirror page
            if not os.path.exists(ud.localpath):
                bb.msg.debug(2, bb.msg.domain.Fetcher, "The fetch command for %s returned success but %s doesn't exist?..." % (uri, ud.localpath))
                return False

            return True

        localdata = data.createCopy(d)
        data.setVar('OVERRIDES', "wget:" + data.getVar('OVERRIDES', localdata), localdata)
        data.update_data(localdata)

        premirrors = [ i.split() for i in (data.getVar('PREMIRRORS', localdata, 1) or "").split('\n') if i ]
        for (find, replace) in premirrors:
            newuri = uri_replace(uri, find, replace, d)
            if newuri != uri:
                if fetch_uri(newuri, ud, localdata):
                    return

        if fetch_uri(uri, ud, localdata):
            return

        # try mirrors
        mirrors = [ i.split() for i in (data.getVar('MIRRORS', localdata, 1) or "").split('\n') if i ]
        for (find, replace) in mirrors:
            newuri = uri_replace(uri, find, replace, d)
            if newuri != uri:
                if fetch_uri(newuri, ud, localdata):
                    return

        raise FetchError(uri)
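The PREMIRRORS/MIRRORS parsing in `go()` above turns a newline-separated variable of whitespace-separated `find replace` pairs into a list of two-element lists; a standalone sketch, with made-up mirror URLs:

```python
# Standalone sketch of the PREMIRRORS/MIRRORS parsing in Wget.go():
# each non-empty line holds a "find replace" pair. The regexes and
# mirror URL below are invented for illustration.
PREMIRRORS = """\
ftp://.*/.*     http://mirror.example.org/sources/
http://.*/.*    http://mirror.example.org/sources/
"""

premirrors = [ i.split() for i in (PREMIRRORS or "").split('\n') if i ]
```

Each resulting `(find, replace)` pair is then handed to `uri_replace()` to derive a candidate mirror URI before the original URI is tried.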
bitbake/lib/bb/manifest.py (new file, 144 lines)
@@ -0,0 +1,144 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os, sys
import bb, bb.data

def getfields(line):
    fields = {}
    fieldmap = ( "pkg", "src", "dest", "type", "mode", "uid", "gid", "major", "minor", "start", "inc", "count" )
    for f in xrange(len(fieldmap)):
        fields[fieldmap[f]] = None

    if not line:
        return None

    splitline = line.split()
    if not len(splitline):
        return None

    try:
        for f in xrange(len(fieldmap)):
            if splitline[f] == '-':
                continue
            fields[fieldmap[f]] = splitline[f]
    except IndexError:
        pass
    return fields
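The field splitting above maps whitespace-separated columns onto a fixed field order, with `-` meaning "unset" and short lines simply leaving the tail unset. A standalone sketch (using `range` where the Python 2 original uses `xrange`; the sample manifest line is invented):

```python
# Standalone sketch of manifest field splitting: columns map onto a
# fixed field order; '-' leaves a field unset, as do missing columns.
FIELDMAP = ("pkg", "src", "dest", "type", "mode", "uid", "gid",
            "major", "minor", "start", "inc", "count")

def getfields(line):
    fields = dict((name, None) for name in FIELDMAP)  # all unset
    splitline = line.split()
    if not splitline:
        return None
    try:
        for f in range(len(FIELDMAP)):
            if splitline[f] == '-':
                continue              # '-' means "leave this field unset"
            fields[FIELDMAP[f]] = splitline[f]
    except IndexError:
        pass                          # a short line leaves the tail unset
    return fields

fields = getfields("mypkg myfile.conf /etc/myfile.conf f 0644")
```

With five columns present, only `pkg` through `mode` are populated; `uid` onward stay `None`.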
def parse (mfile, d):
    manifest = []
    while 1:
        line = mfile.readline()
        if not line:
            break
        if line.startswith("#"):
            continue
        fields = getfields(line)
        if not fields:
            continue
        manifest.append(fields)
    return manifest

def emit (func, manifest, d):
#    str = "%s () {\n" % func
    str = ""
    for line in manifest:
        emittedline = emit_line(func, line, d)
        if not emittedline:
            continue
        str += emittedline + "\n"
#    str += "}\n"
    return str

def mangle (func, line, d):
    import copy
    newline = copy.copy(line)
    src = bb.data.expand(newline["src"], d)

    if src:
        if not os.path.isabs(src):
            src = "${WORKDIR}/" + src

    dest = newline["dest"]
    if not dest:
        return

    if dest.startswith("/"):
        dest = dest[1:]

    if func == "do_install":
        dest = "${D}/" + dest

    elif func == "do_populate":
        dest = "${WORKDIR}/install/" + newline["pkg"] + "/" + dest

    elif func == "do_stage":
        varmap = {}
        varmap["${bindir}"] = "${STAGING_DIR}/${HOST_SYS}/bin"
        varmap["${libdir}"] = "${STAGING_DIR}/${HOST_SYS}/lib"
        varmap["${includedir}"] = "${STAGING_DIR}/${HOST_SYS}/include"
        varmap["${datadir}"] = "${STAGING_DATADIR}"

        matched = 0
        for key in varmap.keys():
            if dest.startswith(key):
                dest = varmap[key] + "/" + dest[len(key):]
                matched = 1
        if not matched:
            return
    else:
        return

    newline["src"] = src
    newline["dest"] = dest
    return newline

def emit_line (func, line, d):
    import copy
    newline = copy.deepcopy(line)
    newline = mangle(func, newline, d)
    if not newline:
        return None

    str = ""
    type = newline["type"]
    mode = newline["mode"]
    src = newline["src"]
    dest = newline["dest"]
    if type == "d":
        str = "install -d "
        if mode:
            str += "-m %s " % mode
        str += dest
    elif type == "f":
        if not src:
            return None
        if dest.endswith("/"):
            str = "install -d "
            str += dest + "\n"
            str += "install "
        else:
            str = "install -D "
        if mode:
            str += "-m %s " % mode
        str += src + " " + dest
    del newline
    return str
bitbake/lib/bb/methodpool.py (new file, 84 lines)
@@ -0,0 +1,84 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2006 Holger Hans Peter Freyther
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.


"""
What is a method pool?

BitBake has a global method scope where .bb, .inc and .bbclass
files can install methods. These methods are parsed from strings.
To avoid recompiling and executing these strings we introduce
a method pool to do this task.

This pool will be used to compile and execute the functions. It
will be smart enough to
"""

from bb.utils import better_compile, better_exec
from bb import error

# A dict of modules we have handled
# it is the number of .bbclasses + x in size
_parsed_methods = { }
_parsed_fns     = { }

def insert_method(modulename, code, fn):
    """
    Add the code of a module. The methods are simply added;
    no checking is done.
    """
    comp = better_compile(code, "<bb>", fn )
    better_exec(comp, __builtins__, code, fn)

    # now some instrumentation
    code = comp.co_names
    for name in code:
        if name in ['None', 'False']:
            continue
        elif name in _parsed_fns and not _parsed_fns[name] == modulename:
            error("Error: method already seen: %s in '%s' now in '%s'" % (name, _parsed_fns[name], modulename))
        else:
            _parsed_fns[name] = modulename

def check_insert_method(modulename, code, fn):
    """
    Add the code only if it wasn't added before. The module
    name is used for that check.

    Variables:
        @modulename a short name e.g. base.bbclass
        @code       The actual python code
        @fn         The filename from the outer file
    """
    if not modulename in _parsed_methods:
        _parsed_methods[modulename] = 1
        return insert_method(modulename, code, fn)

def parsed_module(modulename):
    """
    Inform me whether file xyz was parsed
    """
    return modulename in _parsed_methods


def get_parsed_dict():
    """
    shortcut
    """
    return _parsed_methods
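The "compile once per module" idea above can be sketched standalone with plain `compile`/`exec` in place of BitBake's `better_compile`/`better_exec`, and a dict cache keyed by module name (names below are invented for illustration):

```python
# Minimal sketch of a method pool: compile and exec a code string into a
# namespace only the first time its module name is seen.
_parsed_methods = {}

def check_insert_method(modulename, code, namespace):
    """Return True if the code was compiled/executed, False if cached."""
    if modulename in _parsed_methods:
        return False
    _parsed_methods[modulename] = 1
    exec(compile(code, modulename, 'exec'), namespace)
    return True

ns = {}
src = "def greet():\n    return 'hi'\n"
first = check_insert_method("demo.bbclass", src, ns)   # compiles and runs
again = check_insert_method("demo.bbclass", src, ns)   # served from cache
```

The real module additionally records which function names each module defined (`_parsed_fns`) so it can warn when two classes define the same method.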
bitbake/lib/bb/msg.py (new file, 129 lines)
@@ -0,0 +1,129 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'msg' implementation

Message handling infrastructure for bitbake

"""

# Copyright (C) 2006 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import sys, os, re, bb
from bb import utils, event

debug_level = {}

verbose = False

domain = bb.utils.Enum(
    'Build',
    'Cache',
    'Collection',
    'Data',
    'Depends',
    'Fetcher',
    'Parsing',
    'PersistData',
    'Provider',
    'RunQueue',
    'TaskData',
    'Util')


class MsgBase(bb.event.Event):
    """Base class for messages"""

    def __init__(self, msg, d ):
        self._message = msg
        event.Event.__init__(self, d)

class MsgDebug(MsgBase):
    """Debug Message"""

class MsgNote(MsgBase):
    """Note Message"""

class MsgWarn(MsgBase):
    """Warning Message"""

class MsgError(MsgBase):
    """Error Message"""

class MsgFatal(MsgBase):
    """Fatal Message"""

class MsgPlain(MsgBase):
    """General output"""

#
# Message control functions
#

def set_debug_level(level):
    bb.msg.debug_level = {}
    for domain in bb.msg.domain:
        bb.msg.debug_level[domain] = level
    bb.msg.debug_level['default'] = level

def set_verbose(level):
    bb.msg.verbose = level

def set_debug_domains(domains):
    for domain in domains:
        found = False
        for ddomain in bb.msg.domain:
            if domain == str(ddomain):
                bb.msg.debug_level[ddomain] = bb.msg.debug_level[ddomain] + 1
                found = True
        if not found:
            bb.msg.warn(None, "Logging domain %s is not valid, ignoring" % domain)

#
# Message handling functions
#

def debug(level, domain, msg, fn = None):
    bb.event.fire(MsgDebug(msg, None))
    if not domain:
        domain = 'default'
    if debug_level[domain] >= level:
        print 'DEBUG: ' + msg

def note(level, domain, msg, fn = None):
    bb.event.fire(MsgNote(msg, None))
    if not domain:
        domain = 'default'
    if level == 1 or verbose or debug_level[domain] >= 1:
        print 'NOTE: ' + msg

def warn(domain, msg, fn = None):
    bb.event.fire(MsgWarn(msg, None))
    print 'WARNING: ' + msg

def error(domain, msg, fn = None):
    bb.event.fire(MsgError(msg, None))
    print 'ERROR: ' + msg

def fatal(domain, msg, fn = None):
    bb.event.fire(MsgFatal(msg, None))
    print 'ERROR: ' + msg
    sys.exit(1)

def plain(msg, fn = None):
    bb.event.fire(MsgPlain(msg, None))
    print msg
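The gating in `debug()` above prints a message only when the configured level for its domain is at least the message's level; a standalone sketch of just that comparison (domain names and levels here are illustrative):

```python
# Sketch of the per-domain level gate in bb.msg.debug(): a message at
# `level` is shown when the domain's configured threshold is >= level.
debug_level = {'Fetcher': 2, 'default': 0}

def should_print(level, domain):
    # Fall back to 'default' when no domain is given, as debug() does.
    return debug_level.get(domain or 'default', 0) >= level

shown = [lvl for lvl in (1, 2, 3) if should_print(lvl, 'Fetcher')]
```

So with `Fetcher` at level 2, level-1 and level-2 debug output is shown while level-3 is suppressed, matching how `-D -D` on the command line raises a domain's threshold via `set_debug_domains()`.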
bitbake/lib/bb/parse/__init__.py (new file, 80 lines)
@@ -0,0 +1,80 @@
|
||||
"""
|
||||
BitBake Parsers
|
||||
|
||||
File parsers for the BitBake build tools.
|
||||
|
||||
"""
|
||||
|
||||
|
||||
# Copyright (C) 2003, 2004 Chris Larson
|
||||
# Copyright (C) 2003, 2004 Phil Blundell
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License version 2 as
|
||||
# published by the Free Software Foundation.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License along
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
#
|
||||
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
|
||||
|
||||
__all__ = [ 'ParseError', 'SkipPackage', 'cached_mtime', 'mark_dependency',
|
||||
'supports', 'handle', 'init' ]
|
||||
handlers = []
|
||||
|
||||
import bb, os
|
||||
|
||||
class ParseError(Exception):
|
||||
"""Exception raised when parsing fails"""
|
||||
|
||||
class SkipPackage(Exception):
|
||||
"""Exception raised to skip this package"""
|
||||
|
||||
__mtime_cache = {}
|
||||
def cached_mtime(f):
|
||||
if not __mtime_cache.has_key(f):
|
||||
__mtime_cache[f] = os.stat(f)[8]
|
||||
return __mtime_cache[f]
|
||||
|
||||
def cached_mtime_noerror(f):
|
||||
if not __mtime_cache.has_key(f):
|
||||
try:
|
||||
__mtime_cache[f] = os.stat(f)[8]
|
||||
except OSError:
|
||||
return 0
|
||||
return __mtime_cache[f]
|
||||
|
||||
def mark_dependency(d, f):
|
||||
if f.startswith('./'):
|
||||
f = "%s/%s" % (os.getcwd(), f[2:])
|
||||
deps = bb.data.getVar('__depends', d) or []
|
||||
deps.append( (f, cached_mtime(f)) )
|
||||
bb.data.setVar('__depends', deps, d)
|
||||
|
||||
def supports(fn, data):
|
||||
"""Returns true if we have a handler for this file, false otherwise"""
|
||||
for h in handlers:
|
||||
if h['supports'](fn, data):
|
||||
return 1
|
||||
return 0
|
||||
|
||||
def handle(fn, data, include = 0):
|
||||
"""Call the handler that is appropriate for this file"""
|
||||
for h in handlers:
|
||||
if h['supports'](fn, data):
|
||||
return h['handle'](fn, data, include)
|
||||
raise ParseError("%s is not a BitBake file" % fn)
|
||||
|
||||
def init(fn, data):
|
||||
for h in handlers:
|
||||
if h['supports'](fn):
|
||||
return h['init'](data)
|
||||
|
||||
|
||||
from parse_py import __version__, ConfHandler, BBHandler
|
||||
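The `supports`/`handle` pair above is a simple chain-of-responsibility dispatch: each parser module registers a dict of callbacks, and `bb.parse.handle` asks each registered handler in turn whether it supports the file. A minimal Python 3 sketch of the same pattern (names and the toy `.conf` handler are illustrative, not BitBake's real API):

```python
# Sketch of the handler-registry dispatch used by bb.parse: handlers
# advertise which filenames they support, and handle() tries each one
# in registration order until a match is found.

class ParseError(Exception):
    """Raised when no registered handler supports a file."""

handlers = []

def register(supports, handle):
    # Mirrors "handlers.append({'supports': ..., 'handle': ...})"
    handlers.append({'supports': supports, 'handle': handle})

def handle(fn, data):
    for h in handlers:
        if h['supports'](fn, data):
            return h['handle'](fn, data)
    raise ParseError("%s is not a BitBake file" % fn)

# A toy ".conf" handler, mimicking how ConfHandler registers itself.
register(lambda fn, d: fn.endswith(".conf"),
         lambda fn, d: ("conf", fn))

result = handle("local.conf", {})
```

A file with no matching handler (e.g. a `.bb` file here, since only the toy `.conf` handler is registered) raises `ParseError`, which is exactly how the real `handle` reports "not a BitBake file".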
427  bitbake/lib/bb/parse/parse_py/BBHandler.py  Normal file
@@ -0,0 +1,427 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
class for handling .bb files

Reads a .bb file and obtains its metadata

"""

# Copyright (C) 2003, 2004  Chris Larson
# Copyright (C) 2003, 2004  Phil Blundell
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import re, bb, os, sys, time
import bb.fetch, bb.build, bb.utils
from bb import data, fetch, methodpool

from ConfHandler import include, localpath, obtain, init
from bb.parse import ParseError

__func_start_regexp__  = re.compile( r"(((?P<py>python)|(?P<fr>fakeroot))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$" )
__inherit_regexp__     = re.compile( r"inherit\s+(.+)" )
__export_func_regexp__ = re.compile( r"EXPORT_FUNCTIONS\s+(.+)" )
__addtask_regexp__     = re.compile("addtask\s+(?P<func>\w+)\s*((before\s*(?P<before>((.*(?=after))|(.*))))|(after\s*(?P<after>((.*(?=before))|(.*)))))*")
__addhandler_regexp__  = re.compile( r"addhandler\s+(.+)" )
__def_regexp__         = re.compile( r"def\s+(\w+).*:" )
__python_func_regexp__ = re.compile( r"(\s+.*)|(^$)" )
__word__ = re.compile(r"\S+")

__infunc__ = ""
__inpython__ = False
__body__ = []
__classname__ = ""
classes = [ None, ]

# We need to indicate EOF to the feeder. This code is so messy that
# factoring it out to a close_parse_file method is out of question.
# We will use the IN_PYTHON_EOF as an indicator to just close the method
#
# The two parts using it are tightly integrated anyway
IN_PYTHON_EOF = -9999999999999

__parsed_methods__ = methodpool.get_parsed_dict()

def supports(fn, d):
    localfn = localpath(fn, d)
    return localfn[-3:] == ".bb" or localfn[-8:] == ".bbclass" or localfn[-4:] == ".inc"

def inherit(files, d):
    __inherit_cache = data.getVar('__inherit_cache', d) or []
    fn = ""
    lineno = 0
    files = data.expand(files, d)
    for file in files:
        if file[0] != "/" and file[-8:] != ".bbclass":
            file = os.path.join('classes', '%s.bbclass' % file)

        if not file in __inherit_cache:
            bb.msg.debug(2, bb.msg.domain.Parsing, "BB %s:%d: inheriting %s" % (fn, lineno, file))
            __inherit_cache.append( file )
            data.setVar('__inherit_cache', __inherit_cache, d)
            include(fn, file, d, "inherit")
            __inherit_cache = data.getVar('__inherit_cache', d) or []

def handle(fn, d, include = 0):
    global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __infunc__, __body__, __residue__
    __body__ = []
    __infunc__ = ""
    __classname__ = ""
    __residue__ = []

    if include == 0:
        bb.msg.debug(2, bb.msg.domain.Parsing, "BB " + fn + ": handle(data)")
    else:
        bb.msg.debug(2, bb.msg.domain.Parsing, "BB " + fn + ": handle(data, include)")

    (root, ext) = os.path.splitext(os.path.basename(fn))
    base_name = "%s%s" % (root,ext)
    init(d)

    if ext == ".bbclass":
        __classname__ = root
        classes.append(__classname__)

    if include != 0:
        oldfile = data.getVar('FILE', d)
    else:
        oldfile = None

    fn = obtain(fn, d)
    bbpath = (data.getVar('BBPATH', d, 1) or '').split(':')
    if not os.path.isabs(fn):
        f = None
        for p in bbpath:
            j = os.path.join(p, fn)
            if os.access(j, os.R_OK):
                abs_fn = j
                f = open(j, 'r')
                break
        if f is None:
            raise IOError("file not found")
    else:
        f = open(fn,'r')
        abs_fn = fn

    if ext != ".bbclass":
        bbpath.insert(0, os.path.dirname(abs_fn))
        data.setVar('BBPATH', ":".join(bbpath), d)

    if include:
        bb.parse.mark_dependency(d, abs_fn)

    if ext != ".bbclass":
        data.setVar('FILE', fn, d)
        i = (data.getVar("INHERIT", d, 1) or "").split()
        if not "base" in i and __classname__ != "base":
            i[0:0] = ["base"]
        inherit(i, d)

    lineno = 0
    while 1:
        lineno = lineno + 1
        s = f.readline()
        if not s: break
        s = s.rstrip()
        feeder(lineno, s, fn, base_name, d)
    if __inpython__:
        # add a blank line to close out any python definition
        feeder(IN_PYTHON_EOF, "", fn, base_name, d)
    if ext == ".bbclass":
        classes.remove(__classname__)
    else:
        if include == 0:
            data.expandKeys(d)
            data.update_data(d)
            anonqueue = data.getVar("__anonqueue", d, 1) or []
            body = [x['content'] for x in anonqueue]
            flag = { 'python' : 1, 'func' : 1 }
            data.setVar("__anonfunc", "\n".join(body), d)
            data.setVarFlags("__anonfunc", flag, d)
            from bb import build
            try:
                t = data.getVar('T', d)
                data.setVar('T', '${TMPDIR}/', d)
                build.exec_func("__anonfunc", d)
                data.delVar('T', d)
                if t:
                    data.setVar('T', t, d)
            except Exception, e:
                bb.msg.debug(1, bb.msg.domain.Parsing, "Exception when executing anonymous function: %s" % e)
                raise
            data.delVar("__anonqueue", d)
            data.delVar("__anonfunc", d)
            set_additional_vars(fn, d, include)
            data.update_data(d)

            all_handlers = {}
            for var in data.getVar('__BBHANDLERS', d) or []:
                # try to add the handler
                # if we added it remember the choice
                handler = data.getVar(var,d)
                if bb.event.register(var,handler) == bb.event.Registered:
                    all_handlers[var] = handler

            tasklist = {}
            for var in data.getVar('__BBTASKS', d) or []:
                if var not in tasklist:
                    tasklist[var] = []
                deps = data.getVarFlag(var, 'deps', d) or []
                for p in deps:
                    if p not in tasklist[var]:
                        tasklist[var].append(p)

                postdeps = data.getVarFlag(var, 'postdeps', d) or []
                for p in postdeps:
                    if p not in tasklist:
                        tasklist[p] = []
                    if var not in tasklist[p]:
                        tasklist[p].append(var)

            bb.build.add_tasks(tasklist, d)

            # now add the handlers
            if not len(all_handlers) == 0:
                data.setVar('__all_handlers__', all_handlers, d)

        bbpath.pop(0)
        if oldfile:
            bb.data.setVar("FILE", oldfile, d)

    # we have parsed the bb class now
    if ext == ".bbclass" or ext == ".inc":
        __parsed_methods__[base_name] = 1

    return d

def feeder(lineno, s, fn, root, d):
    global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__, __infunc__, __body__, classes, bb, __residue__
    if __infunc__:
        if s == '}':
            __body__.append('')
            data.setVar(__infunc__, '\n'.join(__body__), d)
            data.setVarFlag(__infunc__, "func", 1, d)
            if __infunc__ == "__anonymous":
                anonqueue = bb.data.getVar("__anonqueue", d) or []
                anonitem = {}
                anonitem["content"] = bb.data.getVar("__anonymous", d)
                anonitem["flags"] = bb.data.getVarFlags("__anonymous", d)
                anonqueue.append(anonitem)
                bb.data.setVar("__anonqueue", anonqueue, d)
                bb.data.delVarFlags("__anonymous", d)
                bb.data.delVar("__anonymous", d)
            __infunc__ = ""
            __body__ = []
        else:
            __body__.append(s)
        return

    if __inpython__:
        m = __python_func_regexp__.match(s)
        if m and lineno != IN_PYTHON_EOF:
            __body__.append(s)
            return
        else:
            # Note we will add root to parsedmethods after having parsed
            # 'this' file. This means we will not parse methods from
            # bb classes twice
            if not root in __parsed_methods__:
                text = '\n'.join(__body__)
                methodpool.insert_method( root, text, fn )
                funcs = data.getVar('__functions__', d) or {}
                if not funcs.has_key( root ):
                    funcs[root] = text
                else:
                    funcs[root] = "%s\n%s" % (funcs[root], text)

                data.setVar('__functions__', funcs, d)
            __body__ = []
            __inpython__ = False

            if lineno == IN_PYTHON_EOF:
                return

            # fall through

    if s == '' or s[0] == '#': return          # skip comments and empty lines

    if s[-1] == '\\':
        __residue__.append(s[:-1])
        return

    s = "".join(__residue__) + s
    __residue__ = []

    m = __func_start_regexp__.match(s)
    if m:
        __infunc__ = m.group("func") or "__anonymous"
        key = __infunc__
        if data.getVar(key, d):
            # clean up old version of this piece of metadata, as its
            # flags could cause problems
            data.setVarFlag(key, 'python', None, d)
            data.setVarFlag(key, 'fakeroot', None, d)
        if m.group("py") is not None:
            data.setVarFlag(key, "python", "1", d)
        else:
            data.delVarFlag(key, "python", d)
        if m.group("fr") is not None:
            data.setVarFlag(key, "fakeroot", "1", d)
        else:
            data.delVarFlag(key, "fakeroot", d)
        return

    m = __def_regexp__.match(s)
    if m:
        __body__.append(s)
        __inpython__ = True
        return

    m = __export_func_regexp__.match(s)
    if m:
        fns = m.group(1)
        n = __word__.findall(fns)
        for f in n:
            allvars = []
            allvars.append(f)
            allvars.append(classes[-1] + "_" + f)

            vars = [[ allvars[0], allvars[1] ]]
            if len(classes) > 1 and classes[-2] is not None:
                allvars.append(classes[-2] + "_" + f)
                vars = []
                vars.append([allvars[2], allvars[1]])
                vars.append([allvars[0], allvars[2]])

            for (var, calledvar) in vars:
                if data.getVar(var, d) and not data.getVarFlag(var, 'export_func', d):
                    continue

                if data.getVar(var, d):
                    data.setVarFlag(var, 'python', None, d)
                    data.setVarFlag(var, 'func', None, d)

                for flag in [ "func", "python" ]:
                    if data.getVarFlag(calledvar, flag, d):
                        data.setVarFlag(var, flag, data.getVarFlag(calledvar, flag, d), d)
                for flag in [ "dirs" ]:
                    if data.getVarFlag(var, flag, d):
                        data.setVarFlag(calledvar, flag, data.getVarFlag(var, flag, d), d)

                if data.getVarFlag(calledvar, "python", d):
                    data.setVar(var, "\tbb.build.exec_func('" + calledvar + "', d)\n", d)
                else:
                    data.setVar(var, "\t" + calledvar + "\n", d)
                data.setVarFlag(var, 'export_func', '1', d)

        return

    m = __addtask_regexp__.match(s)
    if m:
        func = m.group("func")
        before = m.group("before")
        after = m.group("after")
        if func is None:
            return
        var = "do_" + func

        data.setVarFlag(var, "task", 1, d)

        bbtasks = data.getVar('__BBTASKS', d) or []
        bbtasks.append(var)
        data.setVar('__BBTASKS', bbtasks, d)

        if after is not None:
            # set up deps for function
            data.setVarFlag(var, "deps", after.split(), d)
        if before is not None:
            # set up things that depend on this func
            data.setVarFlag(var, "postdeps", before.split(), d)
        return

    m = __addhandler_regexp__.match(s)
    if m:
        fns = m.group(1)
        hs = __word__.findall(fns)
        bbhands = data.getVar('__BBHANDLERS', d) or []
        for h in hs:
            bbhands.append(h)
            data.setVarFlag(h, "handler", 1, d)
        data.setVar('__BBHANDLERS', bbhands, d)
        return

    m = __inherit_regexp__.match(s)
    if m:
        files = m.group(1)
        n = __word__.findall(files)
        inherit(n, d)
        return

    from bb.parse import ConfHandler
    return ConfHandler.feeder(lineno, s, fn, d)

__pkgsplit_cache__={}
def vars_from_file(mypkg, d):
    if not mypkg:
        return (None, None, None)
    if mypkg in __pkgsplit_cache__:
        return __pkgsplit_cache__[mypkg]

    myfile = os.path.splitext(os.path.basename(mypkg))
    parts = myfile[0].split('_')
    __pkgsplit_cache__[mypkg] = parts
    if len(parts) > 3:
        raise ParseError("Unable to generate default variables from the filename: %s (too many underscores)" % mypkg)
    exp = 3 - len(parts)
    tmplist = []
    while exp != 0:
        exp -= 1
        tmplist.append(None)
    parts.extend(tmplist)
    return parts

def set_additional_vars(file, d, include):
    """Deduce rest of variables, e.g. ${A} out of ${SRC_URI}"""

    return
    # Nothing seems to use this variable
    #bb.msg.debug(2, bb.msg.domain.Parsing, "BB %s: set_additional_vars" % file)

    #src_uri = data.getVar('SRC_URI', d, 1)
    #if not src_uri:
    #    return

    #a = (data.getVar('A', d, 1) or '').split()

    #from bb import fetch
    #try:
    #    ud = fetch.init(src_uri.split(), d)
    #    a += fetch.localpaths(d, ud)
    #except fetch.NoMethodError:
    #    pass
    #except bb.MalformedUrl,e:
    #    raise ParseError("Unable to generate local paths for SRC_URI due to malformed uri: %s" % e)
    #del fetch

    #data.setVar('A', " ".join(a), d)


# Add us to the handlers list
from bb.parse import handlers
handlers.append({'supports': supports, 'handle': handle, 'init': init})
del handlers
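The `__addtask_regexp__` above does the heavy lifting for `addtask` lines: it pulls out the task name and the optional `before`/`after` dependency lists, in either order, using lookaheads so each list stops where the other keyword begins. A small standalone Python 3 check of that behavior (the regexp is copied verbatim from BBHandler.py; the input line is just an example):

```python
# Demonstrates how BBHandler's __addtask_regexp__ splits an "addtask"
# line into the task name and its before/after dependency lists.
import re

__addtask_regexp__ = re.compile(
    r"addtask\s+(?P<func>\w+)\s*"
    r"((before\s*(?P<before>((.*(?=after))|(.*))))"
    r"|(after\s*(?P<after>((.*(?=before))|(.*)))))*")

m = __addtask_regexp__.match(
    "addtask compile after do_configure before do_install")
func = m.group("func")
# The handler calls .split() on these, which also strips the trailing
# space the lookahead leaves behind on the "after" capture.
after = (m.group("after") or "").split()
before = (m.group("before") or "").split()
```

In the real `feeder` the task then becomes `do_` + `func`, `after` is stored as its `deps` flag, and `before` as `postdeps`, which `handle` later folds into the task list passed to `bb.build.add_tasks`.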
228  bitbake/lib/bb/parse/parse_py/ConfHandler.py  Normal file
@@ -0,0 +1,228 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
class for handling configuration data files

Reads a .conf file and obtains its metadata

"""

# Copyright (C) 2003, 2004  Chris Larson
# Copyright (C) 2003, 2004  Phil Blundell
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import re, bb.data, os, sys
from bb.parse import ParseError

#__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}]+)\s*(?P<colon>:)?(?P<ques>\?)?=\s*(?P<apo>['\"]?)(?P<value>.*)(?P=apo)$")
__config_regexp__  = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?P<apo>['\"]?)(?P<value>.*)(?P=apo)$")
__include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__  = re.compile( r"export\s+(.+)" )

def init(data):
    if not bb.data.getVar('TOPDIR', data):
        bb.data.setVar('TOPDIR', os.getcwd(), data)
    if not bb.data.getVar('BBPATH', data):
        bb.data.setVar('BBPATH', os.path.join(sys.prefix, 'share', 'bitbake'), data)

def supports(fn, d):
    return localpath(fn, d)[-5:] == ".conf"

def localpath(fn, d):
    if os.path.exists(fn):
        return fn

    if "://" not in fn:
        return fn

    localfn = None
    try:
        localfn = bb.fetch.localpath(fn, d, False)
    except bb.MalformedUrl:
        pass

    if not localfn:
        return fn
    return localfn

def obtain(fn, data):
    import sys, bb
    fn = bb.data.expand(fn, data)
    localfn = bb.data.expand(localpath(fn, data), data)

    if localfn != fn:
        dldir = bb.data.getVar('DL_DIR', data, 1)
        if not dldir:
            bb.msg.debug(1, bb.msg.domain.Parsing, "obtain: DL_DIR not defined")
            return localfn
        bb.mkdirhier(dldir)
        try:
            bb.fetch.init([fn], data)
        except bb.fetch.NoMethodError:
            (type, value, traceback) = sys.exc_info()
            bb.msg.debug(1, bb.msg.domain.Parsing, "obtain: no method: %s" % value)
            return localfn

        try:
            bb.fetch.go(data)
        except bb.fetch.MissingParameterError:
            (type, value, traceback) = sys.exc_info()
            bb.msg.debug(1, bb.msg.domain.Parsing, "obtain: missing parameters: %s" % value)
            return localfn
        except bb.fetch.FetchError:
            (type, value, traceback) = sys.exc_info()
            bb.msg.debug(1, bb.msg.domain.Parsing, "obtain: failed: %s" % value)
            return localfn
    return localfn


def include(oldfn, fn, data, error_out):
    """
    error_out: if non-empty, a ParseError is raised when the file to be
    included cannot be handled; the string is used in the error message.
    """
    if oldfn == fn: # prevent infinite recursion
        return None

    import bb
    fn = bb.data.expand(fn, data)
    oldfn = bb.data.expand(oldfn, data)

    from bb.parse import handle
    try:
        ret = handle(fn, data, True)
    except IOError:
        if error_out:
            raise ParseError("Could not %(error_out)s file %(fn)s" % vars() )
        bb.msg.debug(2, bb.msg.domain.Parsing, "CONF file '%s' not found" % fn)

def handle(fn, data, include = 0):
    if include:
        inc_string = "including"
    else:
        inc_string = "reading"
    init(data)

    if include == 0:
        bb.data.inheritFromOS(data)
        oldfile = None
    else:
        oldfile = bb.data.getVar('FILE', data)

    fn = obtain(fn, data)
    if not os.path.isabs(fn):
        f = None
        bbpath = bb.data.getVar("BBPATH", data, 1) or []
        for p in bbpath.split(":"):
            currname = os.path.join(p, fn)
            if os.access(currname, os.R_OK):
                f = open(currname, 'r')
                abs_fn = currname
                bb.msg.debug(2, bb.msg.domain.Parsing, "CONF %s %s" % (inc_string, currname))
                break
        if f is None:
            raise IOError("file '%s' not found" % fn)
    else:
        f = open(fn,'r')
        bb.msg.debug(1, bb.msg.domain.Parsing, "CONF %s %s" % (inc_string,fn))
        abs_fn = fn

    if include:
        bb.parse.mark_dependency(data, abs_fn)

    lineno = 0
    bb.data.setVar('FILE', fn, data)
    while 1:
        lineno = lineno + 1
        s = f.readline()
        if not s: break
        w = s.strip()
        if not w: continue # skip empty lines
        s = s.rstrip()
        if s[0] == '#': continue # skip comments
        while s[-1] == '\\':
            s2 = f.readline()[:-1].strip()
            lineno = lineno + 1
            s = s[:-1] + s2
        feeder(lineno, s, fn, data)

    if oldfile:
        bb.data.setVar('FILE', oldfile, data)
    return data

def feeder(lineno, s, fn, data):
    def getFunc(groupd, key, data):
        if 'flag' in groupd and groupd['flag'] != None:
            return bb.data.getVarFlag(key, groupd['flag'], data)
        else:
            return bb.data.getVar(key, data)

    m = __config_regexp__.match(s)
    if m:
        groupd = m.groupdict()
        key = groupd["var"]
        if "exp" in groupd and groupd["exp"] != None:
            bb.data.setVarFlag(key, "export", 1, data)
        if "ques" in groupd and groupd["ques"] != None:
            val = getFunc(groupd, key, data)
            if val == None:
                val = groupd["value"]
        elif "colon" in groupd and groupd["colon"] != None:
            e = data.createCopy()
            bb.data.update_data(e)
            val = bb.data.expand(groupd["value"], e)
        elif "append" in groupd and groupd["append"] != None:
            val = "%s %s" % ((getFunc(groupd, key, data) or ""), groupd["value"])
        elif "prepend" in groupd and groupd["prepend"] != None:
            val = "%s %s" % (groupd["value"], (getFunc(groupd, key, data) or ""))
        elif "postdot" in groupd and groupd["postdot"] != None:
            val = "%s%s" % ((getFunc(groupd, key, data) or ""), groupd["value"])
        elif "predot" in groupd and groupd["predot"] != None:
            val = "%s%s" % (groupd["value"], (getFunc(groupd, key, data) or ""))
        else:
            val = groupd["value"]
        if 'flag' in groupd and groupd['flag'] != None:
            bb.msg.debug(3, bb.msg.domain.Parsing, "setVarFlag(%s, %s, %s, data)" % (key, groupd['flag'], val))
            bb.data.setVarFlag(key, groupd['flag'], val, data)
        else:
            bb.data.setVar(key, val, data)
        return

    m = __include_regexp__.match(s)
    if m:
        s = bb.data.expand(m.group(1), data)
        bb.msg.debug(3, bb.msg.domain.Parsing, "CONF %s:%d: including %s" % (fn, lineno, s))
        include(fn, s, data, False)
        return

    m = __require_regexp__.match(s)
    if m:
        s = bb.data.expand(m.group(1), data)
        include(fn, s, data, "include required")
        return

    m = __export_regexp__.match(s)
    if m:
        bb.data.setVarFlag(m.group(1), "export", 1, data)
        return

    raise ParseError("%s:%d: unparsed line: '%s'" % (fn, lineno, s));

# Add us to the handlers list
from bb.parse import handlers
handlers.append({'supports': supports, 'handle': handle, 'init': init})
del handlers
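The branches of `feeder` above implement BitBake's assignment operators: `?=` assigns only if the variable is unset, `+=` appends with a space, and `.=` appends without one. A reduced Python 3 sketch of those three branches, using a plain dict in place of the `bb.data` store (the regexp here is a simplified subset of `__config_regexp__`, and the `feed` helper is illustrative, not BitBake's API):

```python
# Reduced model of ConfHandler.feeder's assignment handling:
# "?=" assigns only when unset, "+=" appends with a space, ".=" without.
import re

_assign = re.compile(
    r"(?P<var>[a-zA-Z0-9\-_+.${}/]+)\s*"
    r"((?P<ques>\?=)|(?P<append>\+=)|(?P<postdot>\.=)|=)\s*"
    r"(?P<apo>['\"]?)(?P<value>.*)(?P=apo)$")

def feed(line, store):
    g = _assign.match(line).groupdict()
    key, value = g["var"], g["value"]
    if g["ques"]:
        store.setdefault(key, value)          # keep existing value if set
    elif g["append"]:
        store[key] = "%s %s" % (store.get(key, ""), value)
    elif g["postdot"]:
        store[key] = "%s%s" % (store.get(key, ""), value)
    else:
        store[key] = value

d = {}
feed('MACHINE = "qemuarm"', d)
feed('MACHINE ?= "x86"', d)         # ignored: MACHINE is already set
feed('EXTRA_FLAGS = "-O2"', d)
feed('EXTRA_FLAGS += "-Wall"', d)
feed('SUFFIX = "recipe"', d)
feed('SUFFIX .= ".bb"', d)
```

The real feeder additionally handles `:=` (immediate expansion against a copy of the datastore), `=+`/`=.` (the prepend variants), `export`, and per-variable flags in `VAR[flag]` form.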
33  bitbake/lib/bb/parse/parse_py/__init__.py  Normal file
@@ -0,0 +1,33 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake Parsers

File parsers for the BitBake build tools.

"""

# Copyright (C) 2003, 2004  Chris Larson
# Copyright (C) 2003, 2004  Phil Blundell
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
__version__ = '1.0'

__all__ = [ 'ConfHandler', 'BBHandler']

import ConfHandler
import BBHandler
110  bitbake/lib/bb/persist_data.py  Normal file
@@ -0,0 +1,110 @@
# BitBake Persistent Data Store
#
# Copyright (C) 2007        Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import bb, os

try:
    import sqlite3
except ImportError:
    try:
        from pysqlite2 import dbapi2 as sqlite3
    except ImportError:
        bb.msg.fatal(bb.msg.domain.PersistData, "Importing sqlite3 and pysqlite2 failed, please install one of them. Python 2.5 or a 'python-pysqlite2' like package is likely to be what you need.")

sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
    bb.msg.fatal(bb.msg.domain.PersistData, "sqlite3 version 3.3.0 or later is required.")

class PersistData:
    """
    BitBake Persistent Data Store

    Used to store data in a central location such that other threads/tasks can
    access them at some future date.

    The "domain" is used as a key to isolate each data pool and in this
    implementation corresponds to an SQL table. The SQL table consists of a
    simple key and value pair.

    Why sqlite? It handles all the locking issues for us.
    """
    def __init__(self, d):
        self.cachedir = bb.data.getVar("CACHE", d, True)
        if self.cachedir in [None, '']:
            bb.msg.fatal(bb.msg.domain.PersistData, "Please set the 'CACHE' variable.")
        try:
            os.stat(self.cachedir)
        except OSError:
            bb.mkdirhier(self.cachedir)

        self.cachefile = os.path.join(self.cachedir,"bb_persist_data.sqlite3")
        bb.msg.debug(1, bb.msg.domain.PersistData, "Using '%s' as the persistent data cache" % self.cachefile)

        self.connection = sqlite3.connect(self.cachefile, timeout=5, isolation_level=None)

    def addDomain(self, domain):
        """
        Should be called before any domain is used
        Creates it if it doesn't exist.
        """
        self.connection.execute("CREATE TABLE IF NOT EXISTS %s(key TEXT, value TEXT);" % domain)

    def delDomain(self, domain):
        """
        Removes a domain and all the data it contains
        """
        self.connection.execute("DROP TABLE IF EXISTS %s;" % domain)

    def getValue(self, domain, key):
        """
        Return the value of a key for a domain
        """
        data = self.connection.execute("SELECT * from %s where key=?;" % domain, [key])
        for row in data:
            return row[1]

    def setValue(self, domain, key, value):
        """
        Sets the value of a key for a domain
        """
        data = self.connection.execute("SELECT * from %s where key=?;" % domain, [key])
        rows = 0
        for row in data:
            rows = rows + 1
        if rows:
            self._execute("UPDATE %s SET value=? WHERE key=?;" % domain, [value, key])
        else:
            self._execute("INSERT into %s(key, value) values (?, ?);" % domain, [key, value])

    def delValue(self, domain, key):
        """
        Deletes a key/value pair
        """
        self._execute("DELETE from %s where key=?;" % domain, [key])

    def _execute(self, *query):
        while True:
            try:
                self.connection.execute(*query)
                return
            except sqlite3.OperationalError, e:
                if 'database is locked' in str(e):
                    continue
                raise
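As the `PersistData` docstring says, each "domain" is just an SQL table of key/value pairs, and sqlite provides the cross-process locking. A standalone Python 3 usage sketch of the same idea, using an in-memory database instead of `${CACHE}/bb_persist_data.sqlite3` (the `KVStore` name is illustrative, not BitBake's class):

```python
# Standalone model of the PersistData pattern: one SQL table per
# domain, each holding simple key/value rows.
import sqlite3

class KVStore:
    def __init__(self, path=":memory:"):
        # autocommit mode, matching PersistData's isolation_level=None
        self.connection = sqlite3.connect(path, timeout=5, isolation_level=None)

    def addDomain(self, domain):
        self.connection.execute(
            "CREATE TABLE IF NOT EXISTS %s(key TEXT, value TEXT);" % domain)

    def setValue(self, domain, key, value):
        # Update-or-insert, as in PersistData.setValue
        rows = self.connection.execute(
            "SELECT * from %s where key=?;" % domain, [key]).fetchall()
        if rows:
            self.connection.execute(
                "UPDATE %s SET value=? WHERE key=?;" % domain, [value, key])
        else:
            self.connection.execute(
                "INSERT into %s(key, value) values (?, ?);" % domain, [key, value])

    def getValue(self, domain, key):
        # Returns None for a missing key, like PersistData.getValue
        for row in self.connection.execute(
                "SELECT * from %s where key=?;" % domain, [key]):
            return row[1]

store = KVStore()
store.addDomain("GitRevs")
store.setValue("GitRevs", "meta", "abc123")
store.setValue("GitRevs", "meta", "def456")   # updates the existing row
```

Note the table name is interpolated with `%` because SQL placeholders cannot name tables; callers are therefore trusted to pass sane domain names, as in the original.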
325
bitbake/lib/bb/providers.py
Normal file
@@ -0,0 +1,325 @@
|
||||
# ex:ts=4:sw=4:sts=4:et
|
||||
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
|
||||
#
|
||||
# Copyright (C) 2003, 2004 Chris Larson
|
||||
# Copyright (C) 2003, 2004 Phil Blundell
|
||||
# Copyright (C) 2003 - 2005 Michael 'Mickey' Lauer
|
||||
# Copyright (C) 2005 Holger Hans Peter Freyther
|
||||
# Copyright (C) 2005 ROAD GmbH
|
||||
# Copyright (C) 2006 Richard Purdie
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License version 2 as
|
||||
# published by the Free Software Foundation.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License along
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
|
||||
import os, re
|
||||
from bb import data, utils
|
||||
import bb
|
||||
|
||||
class NoProvider(Exception):
|
||||
"""Exception raised when no provider of a build dependency can be found"""
|
||||
|
||||
class NoRProvider(Exception):
|
||||
"""Exception raised when no provider of a runtime dependency can be found"""
|
||||
|
||||
|
||||
def sortPriorities(pn, dataCache, pkg_pn = None):
|
||||
"""
|
||||
Reorder pkg_pn by file priority and default preference
|
||||
"""
|
||||
|
||||
if not pkg_pn:
|
||||
pkg_pn = dataCache.pkg_pn
|
||||
|
||||
files = pkg_pn[pn]
|
||||
priorities = {}
|
||||
for f in files:
|
||||
priority = dataCache.bbfile_priority[f]
|
||||
preference = dataCache.pkg_dp[f]
|
||||
if priority not in priorities:
|
||||
priorities[priority] = {}
|
||||
if preference not in priorities[priority]:
|
||||
priorities[priority][preference] = []
|
||||
priorities[priority][preference].append(f)
|
||||
pri_list = priorities.keys()
|
||||
pri_list.sort(lambda a, b: a - b)
|
||||
tmp_pn = []
|
||||
for pri in pri_list:
|
||||
pref_list = priorities[pri].keys()
|
||||
pref_list.sort(lambda a, b: b - a)
|
||||
tmp_pref = []
|
||||
for pref in pref_list:
|
||||
tmp_pref.extend(priorities[pri][pref])
|
||||
tmp_pn = [tmp_pref] + tmp_pn
|
||||
|
||||
return tmp_pn
|
||||
|
||||
|
||||
def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
    """
    Find the first provider in pkg_pn with a PREFERRED_VERSION set.
    """

    preferred_file = None
    preferred_ver = None

    localdata = data.createCopy(cfgData)
    bb.data.setVar('OVERRIDES', "pn-%s:%s:%s" % (pn, pn, data.getVar('OVERRIDES', localdata)), localdata)
    bb.data.update_data(localdata)

    preferred_v = bb.data.getVar('PREFERRED_VERSION_%s' % pn, localdata, True)
    if preferred_v:
        m = re.match('(\d+:)*(.*)(_.*)*', preferred_v)
        if m:
            if m.group(1):
                preferred_e = int(m.group(1)[:-1])
            else:
                preferred_e = None
            preferred_v = m.group(2)
            if m.group(3):
                preferred_r = m.group(3)[1:]
            else:
                preferred_r = None
        else:
            preferred_e = None
            preferred_r = None

        for file_set in pkg_pn:
            for f in file_set:
                pe, pv, pr = dataCache.pkg_pepvpr[f]
                if preferred_v == pv and (preferred_r == pr or preferred_r is None) and (preferred_e == pe or preferred_e is None):
                    preferred_file = f
                    preferred_ver = (pe, pv, pr)
                    break
            if preferred_file:
                break
        if preferred_r:
            pv_str = '%s-%s' % (preferred_v, preferred_r)
        else:
            pv_str = preferred_v
        if not (preferred_e is None):
            pv_str = '%s:%s' % (preferred_e, pv_str)
        itemstr = ""
        if item:
            itemstr = " (for item %s)" % item
        if preferred_file is None:
            bb.msg.note(1, bb.msg.domain.Provider, "preferred version %s of %s not available%s" % (pv_str, pn, itemstr))
        else:
            bb.msg.debug(1, bb.msg.domain.Provider, "selecting %s as PREFERRED_VERSION %s of package %s%s" % (preferred_file, pv_str, pn, itemstr))

    return (preferred_ver, preferred_file)

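A PREFERRED_VERSION string has the shape `epoch:version_revision`, with the epoch prefix and `_revision` suffix both optional. A standalone sketch of that split (the helper name is invented; note it uses a non-greedy version group so the `_revision` suffix is actually captured, whereas a greedy `(.*)` would swallow it):

```python
import re

def split_preferred_version(s):
    """Split 'epoch:version_revision' into (epoch, version, revision);
    epoch and revision are optional and come back as None when absent."""
    m = re.match(r'(\d+:)?(.*?)(_.*)?$', s)
    epoch = int(m.group(1)[:-1]) if m.group(1) else None   # strip trailing ':'
    version = m.group(2)
    revision = m.group(3)[1:] if m.group(3) else None      # strip leading '_'
    return (epoch, version, revision)

print(split_preferred_version("2:1.4.1_r3"))  # -> (2, '1.4.1', 'r3')
print(split_preferred_version("1.0"))         # -> (None, '1.0', None)
```

The parsed triple is then compared field-by-field against each provider's (PE, PV, PR), with `None` fields acting as wildcards, exactly as the loop above does.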
def findLatestProvider(pn, cfgData, dataCache, file_set):
    """
    Return the highest version of the providers in file_set.
    Take default preferences into account.
    """
    latest = None
    latest_p = 0
    latest_f = None
    for file_name in file_set:
        pe, pv, pr = dataCache.pkg_pepvpr[file_name]
        dp = dataCache.pkg_dp[file_name]

        if (latest is None) or ((latest_p == dp) and (utils.vercmp(latest, (pe, pv, pr)) < 0)) or (dp > latest_p):
            latest = (pe, pv, pr)
            latest_f = file_name
            latest_p = dp

    return (latest, latest_f)

def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
    """
    If there is a PREFERRED_VERSION, find the highest-priority bbfile
    providing that version. If not, find the latest version provided by
    a bbfile in the highest-priority set.
    """

    sortpkg_pn = sortPriorities(pn, dataCache, pkg_pn)
    # Find the highest priority provider with a PREFERRED_VERSION set
    (preferred_ver, preferred_file) = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn, item)
    # Find the latest version of the highest priority provider
    (latest, latest_f) = findLatestProvider(pn, cfgData, dataCache, sortpkg_pn[0])

    if preferred_file is None:
        preferred_file = latest_f
        preferred_ver = latest

    return (latest, latest_f, preferred_ver, preferred_file)

def _filterProviders(providers, item, cfgData, dataCache):
    """
    Take a list of providers and filter/reorder according to the
    environment variables and previous build results
    """
    eligible = []
    preferred_versions = {}
    sortpkg_pn = {}

    # The order of providers depends on the order of the files on the disk
    # up to here. Sort pkg_pn to make dependency issues reproducible rather
    # than effectively random.
    providers.sort()

    # Collate providers by PN
    pkg_pn = {}
    for p in providers:
        pn = dataCache.pkg_fn[p]
        if pn not in pkg_pn:
            pkg_pn[pn] = []
        pkg_pn[pn].append(p)

    bb.msg.debug(1, bb.msg.domain.Provider, "providers for %s are: %s" % (item, pkg_pn.keys()))

    # First add PREFERRED_VERSIONS
    for pn in pkg_pn.keys():
        sortpkg_pn[pn] = sortPriorities(pn, dataCache, pkg_pn)
        preferred_versions[pn] = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn[pn], item)
        if preferred_versions[pn][1]:
            eligible.append(preferred_versions[pn][1])

    # Now add latest versions
    for pn in pkg_pn.keys():
        if pn in preferred_versions and preferred_versions[pn][1]:
            continue
        preferred_versions[pn] = findLatestProvider(pn, cfgData, dataCache, sortpkg_pn[pn][0])
        eligible.append(preferred_versions[pn][1])

    if len(eligible) == 0:
        bb.msg.error(bb.msg.domain.Provider, "no eligible providers for %s" % item)
        return 0

    # If pn == item, give it a slight default preference
    # This means PREFERRED_PROVIDER_foobar defaults to foobar if available
    for p in providers:
        pn = dataCache.pkg_fn[p]
        if pn != item:
            continue
        (newvers, fn) = preferred_versions[pn]
        if not fn in eligible:
            continue
        eligible.remove(fn)
        eligible = [fn] + eligible

    # Look to see if one of them is already staged, or marked as preferred.
    # If so, bump it to the head of the queue.
    for p in providers:
        pn = dataCache.pkg_fn[p]
        pe, pv, pr = dataCache.pkg_pepvpr[p]

        stamp = '%s.do_populate_staging' % dataCache.stamp[p]
        if os.path.exists(stamp):
            (newvers, fn) = preferred_versions[pn]
            if not fn in eligible:
                # package was made ineligible by already-failed check
                continue
            oldver = "%s-%s" % (pv, pr)
            if pe > 0:
                oldver = "%s:%s" % (pe, oldver)
            newver = "%s-%s" % (newvers[1], newvers[2])
            if newvers[0] > 0:
                newver = "%s:%s" % (newvers[0], newver)
            if (newver != oldver):
                extra_chat = "%s (%s) already staged but upgrading to %s to satisfy %s" % (pn, oldver, newver, item)
            else:
                extra_chat = "Selecting already-staged %s (%s) to satisfy %s" % (pn, oldver, item)

            bb.msg.note(2, bb.msg.domain.Provider, "%s" % extra_chat)
            eligible.remove(fn)
            eligible = [fn] + eligible
            break

    return eligible

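Both the "pn == item" preference and the already-staged check above use the same remove-and-prepend idiom to move one provider to the head of the eligible list. As a minimal standalone sketch (the function name and file names are made up):

```python
def bump_to_head(eligible, fn):
    """Move fn to the front of eligible if present, leaving the relative
    order of everything else intact -- the idiom _filterProviders uses."""
    if fn in eligible:
        eligible.remove(fn)
        eligible = [fn] + eligible
    return eligible

print(bump_to_head(["a.bb", "b.bb", "c.bb"], "b.bb"))  # -> ['b.bb', 'a.bb', 'c.bb']
print(bump_to_head(["a.bb"], "missing.bb"))            # -> ['a.bb'] (no-op)
```

Because each bump only prepends, a later bump (e.g. the staged check) overrides an earlier one, which is why the ordering of the two loops matters.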
def filterProviders(providers, item, cfgData, dataCache):
    """
    Take a list of providers and filter/reorder according to the
    environment variables and previous build results
    Takes a "normal" target item
    """

    eligible = _filterProviders(providers, item, cfgData, dataCache)

    prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % item, cfgData, 1)
    if prefervar:
        dataCache.preferred[item] = prefervar

    foundUnique = False
    if item in dataCache.preferred:
        for p in eligible:
            pn = dataCache.pkg_fn[p]
            if dataCache.preferred[item] == pn:
                bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy %s due to PREFERRED_PROVIDERS" % (pn, item))
                eligible.remove(p)
                eligible = [p] + eligible
                foundUnique = True
                break

    bb.msg.debug(1, bb.msg.domain.Provider, "sorted providers for %s are: %s" % (item, eligible))

    return eligible, foundUnique

def filterProvidersRunTime(providers, item, cfgData, dataCache):
    """
    Take a list of providers and filter/reorder according to the
    environment variables and previous build results
    Takes a "runtime" target item
    """

    eligible = _filterProviders(providers, item, cfgData, dataCache)

    # Should use dataCache.preferred here?
    preferred = []
    for p in eligible:
        pn = dataCache.pkg_fn[p]
        provides = dataCache.pn_provides[pn]
        for provide in provides:
            prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % provide, cfgData, 1)
            if prefervar == pn:
                bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy runtime %s due to PREFERRED_PROVIDERS" % (pn, item))
                eligible.remove(p)
                eligible = [p] + eligible
                preferred.append(p)
                break

    numberPreferred = len(preferred)

    bb.msg.debug(1, bb.msg.domain.Provider, "sorted providers for %s are: %s" % (item, eligible))

    return eligible, numberPreferred

def getRuntimeProviders(dataCache, rdepend):
    """
    Return any providers of a runtime dependency
    """
    rproviders = []

    if rdepend in dataCache.rproviders:
        rproviders += dataCache.rproviders[rdepend]

    if rdepend in dataCache.packages:
        rproviders += dataCache.packages[rdepend]

    if rproviders:
        return rproviders

    # Only search dynamic packages if we can't find anything in other variables
    for pattern in dataCache.packages_dynamic:
        regexp = re.compile(pattern)
        if regexp.match(rdepend):
            rproviders += dataCache.packages_dynamic[pattern]

    return rproviders
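The lookup order above matters: exact RPROVIDES/PACKAGES matches win outright, and the regex-based PACKAGES_DYNAMIC patterns are only consulted as a fallback. A standalone sketch with plain dicts standing in for the dataCache (the names and the kernel-module pattern are illustrative only):

```python
import re

def runtime_providers(rdepend, rproviders_map, packages_map, dynamic_map):
    """Mirror getRuntimeProviders: collect exact matches first and return
    them if any exist; otherwise fall back to regex PACKAGES_DYNAMIC."""
    found = []
    found += rproviders_map.get(rdepend, [])
    found += packages_map.get(rdepend, [])
    if found:
        return found
    for pattern in dynamic_map:
        if re.match(pattern, rdepend):
            found += dynamic_map[pattern]
    return found

dynamic = {r"^kernel-module-.*": ["linux.bb"]}
print(runtime_providers("kernel-module-usbcore", {}, {}, dynamic))  # -> ['linux.bb']
print(runtime_providers("foo", {"foo": ["foo.bb"]}, {}, dynamic))   # -> ['foo.bb']
```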

857 bitbake/lib/bb/runqueue.py (new file)
@@ -0,0 +1,857 @@

#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'RunQueue' implementation

Handles preparation and execution of a queue of tasks
"""

# Copyright (C) 2006-2007  Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

from bb import msg, data, event, mkdirhier, utils
import bb, os, sys
import signal

class TaskFailure(Exception):
    """Exception raised when a task in a runqueue fails"""
    def __init__(self, x):
        self.args = x


class RunQueueStats:
    """
    Holds statistics on the tasks handled by the associated runQueue
    """
    def __init__(self):
        self.completed = 0
        self.skipped = 0
        self.failed = 0

    def taskFailed(self):
        self.failed = self.failed + 1

    def taskCompleted(self):
        self.completed = self.completed + 1

    def taskSkipped(self):
        self.skipped = self.skipped + 1

class RunQueueScheduler:
    """
    Control the order tasks are scheduled in.
    """
    def __init__(self, runqueue):
        """
        The default scheduler just returns the first buildable task (the
        priority map is sorted by task number)
        """
        self.rq = runqueue
        numTasks = len(self.rq.runq_fnid)

        self.prio_map = []
        self.prio_map.extend(range(numTasks))

    def next(self):
        """
        Return the id of the first task we find that is buildable
        """
        for task1 in range(len(self.rq.runq_fnid)):
            task = self.prio_map[task1]
            if self.rq.runq_running[task] == 1:
                continue
            if self.rq.runq_buildable[task] == 1:
                return task

class RunQueueSchedulerSpeed(RunQueueScheduler):
    """
    A scheduler optimised for speed. The priority map is sorted by task
    weight; heavier tasks (those needed by the most other tasks) are run first.
    """
    def __init__(self, runqueue):
        """
        The priority map is sorted by task weight.
        """
        from copy import deepcopy

        self.rq = runqueue

        sortweight = deepcopy(self.rq.runq_weight)
        sortweight.sort()
        copyweight = deepcopy(self.rq.runq_weight)
        self.prio_map = []

        for weight in sortweight:
            idx = copyweight.index(weight)
            self.prio_map.append(idx)
            copyweight[idx] = -1

        self.prio_map.reverse()

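The sort/index/mark-used trick above builds a permutation of task indices ordered heaviest-first, handling duplicate weights by overwriting used slots with a sentinel. A self-contained sketch (the function name is invented; note the -1 sentinel assumes real weights are never negative, which holds for the weights computed by the runqueue):

```python
from copy import deepcopy

def weight_priority_map(weights):
    """Return task indices ordered by descending weight, duplicates kept
    in first-seen order, using the same algorithm as RunQueueSchedulerSpeed."""
    sortweight = sorted(weights)
    copyweight = deepcopy(weights)
    prio_map = []
    for weight in sortweight:
        idx = copyweight.index(weight)  # first unused slot with this weight
        prio_map.append(idx)
        copyweight[idx] = -1            # mark slot as used
    prio_map.reverse()
    return prio_map

print(weight_priority_map([3, 10, 10, 1]))  # -> [2, 1, 0, 3]
```

Tasks 1 and 2 (weight 10) come first, then task 0 (weight 3), then task 3 (weight 1).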
class RunQueueSchedulerCompletion(RunQueueSchedulerSpeed):
    """
    A scheduler optimised to complete .bb files as quickly as possible. The
    priority map is sorted by task weight, but then reordered so once a given
    .bb file starts to build, it is completed as quickly as possible. This
    works well where disk space is at a premium and classes like OE's rm_work
    are in force.
    """
    def __init__(self, runqueue):
        RunQueueSchedulerSpeed.__init__(self, runqueue)
        from copy import deepcopy

        # FIXME - whilst this groups all fnids together it does not reorder
        # the fnid groups optimally.

        basemap = deepcopy(self.prio_map)
        self.prio_map = []
        while (len(basemap) > 0):
            entry = basemap.pop(0)
            self.prio_map.append(entry)
            fnid = self.rq.runq_fnid[entry]
            todel = []
            for entry in basemap:
                entry_fnid = self.rq.runq_fnid[entry]
                if entry_fnid == fnid:
                    todel.append(basemap.index(entry))
                    self.prio_map.append(entry)
            todel.reverse()
            for idx in todel:
                del basemap[idx]

class RunQueue:
    """
    BitBake Run Queue implementation
    """
    def __init__(self, cooker, cfgData, dataCache, taskData, targets):
        self.reset_runqueue()
        self.cooker = cooker
        self.dataCache = dataCache
        self.taskData = taskData
        self.targets = targets

        self.number_tasks = int(bb.data.getVar("BB_NUMBER_THREADS", cfgData) or 1)
        self.multi_provider_whitelist = (bb.data.getVar("MULTI_PROVIDER_WHITELIST", cfgData) or "").split()

    def reset_runqueue(self):
        self.runq_fnid = []
        self.runq_task = []
        self.runq_depends = []
        self.runq_revdeps = []

    def get_user_idstring(self, task):
        fn = self.taskData.fn_index[self.runq_fnid[task]]
        taskname = self.runq_task[task]
        return "%s, %s" % (fn, taskname)

    def circular_depchains_handler(self, tasks):
        """
        Some tasks aren't buildable, likely due to circular dependency issues.
        Identify the circular dependencies and print them in a user-readable format.
        """
        from copy import deepcopy

        valid_chains = []
        explored_deps = {}
        msgs = []

        def chain_reorder(chain):
            """
            Reorder a dependency chain so the lowest task id is first
            """
            lowest = 0
            new_chain = []
            for entry in range(len(chain)):
                if chain[entry] < chain[lowest]:
                    lowest = entry
            new_chain.extend(chain[lowest:])
            new_chain.extend(chain[:lowest])
            return new_chain

        def chain_compare_equal(chain1, chain2):
            """
            Compare two dependency chains and see if they're the same
            """
            if len(chain1) != len(chain2):
                return False
            for index in range(len(chain1)):
                if chain1[index] != chain2[index]:
                    return False
            return True

        def chain_array_contains(chain, chain_array):
            """
            Return True if chain_array contains chain
            """
            for ch in chain_array:
                if chain_compare_equal(ch, chain):
                    return True
            return False

        def find_chains(taskid, prev_chain):
            prev_chain.append(taskid)
            total_deps = []
            total_deps.extend(self.runq_revdeps[taskid])
            for revdep in self.runq_revdeps[taskid]:
                if revdep in prev_chain:
                    idx = prev_chain.index(revdep)
                    # To prevent duplicates, reorder the chain to start with
                    # the lowest taskid and search through an array of those
                    # we've already printed
                    chain = prev_chain[idx:]
                    new_chain = chain_reorder(chain)
                    if not chain_array_contains(new_chain, valid_chains):
                        valid_chains.append(new_chain)
                        msgs.append("Dependency loop #%d found:\n" % len(valid_chains))
                        for dep in new_chain:
                            msgs.append("  Task %s (%s) (depends: %s)\n" % (dep, self.get_user_idstring(dep), self.runq_depends[dep]))
                        msgs.append("\n")
                    if len(valid_chains) > 10:
                        msgs.append("Aborted dependency loops search after 10 matches.\n")
                        return msgs
                    continue
                scan = False
                if revdep not in explored_deps:
                    scan = True
                elif revdep in explored_deps[revdep]:
                    scan = True
                else:
                    for dep in prev_chain:
                        if dep in explored_deps[revdep]:
                            scan = True
                if scan:
                    find_chains(revdep, deepcopy(prev_chain))
                for dep in explored_deps[revdep]:
                    if dep not in total_deps:
                        total_deps.append(dep)

            explored_deps[taskid] = total_deps

        for task in tasks:
            find_chains(task, [])

        return msgs

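The deduplication above relies on rotating each discovered cycle into a canonical form: whichever entry point the search came in from, the same loop always compares equal once the lowest task id is moved to the front. A standalone sketch of that rotation (same algorithm, simplified with builtins):

```python
def chain_reorder(chain):
    """Rotate a dependency cycle so its smallest task id comes first,
    giving every traversal of the same loop one canonical representation."""
    lowest = chain.index(min(chain))
    return chain[lowest:] + chain[:lowest]

# The same 3-task loop found from two different entry points:
print(chain_reorder([7, 2, 5]))  # -> [2, 5, 7]
print(chain_reorder([5, 7, 2]))  # -> [2, 5, 7]
```

Both rotations canonicalise to the same list, so `chain_array_contains` only reports each loop once.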
    def calculate_task_weights(self, endpoints):
        """
        Calculate a number representing the "weight" of each task. Heavier
        weighted tasks have more dependencies and hence should be executed
        sooner for maximum speed.

        This function also sanity checks the task list, finding tasks that
        are not possible to execute due to circular dependencies.
        """

        numTasks = len(self.runq_fnid)
        weight = []
        deps_left = []
        task_done = []

        for listid in range(numTasks):
            task_done.append(False)
            weight.append(0)
            deps_left.append(len(self.runq_revdeps[listid]))

        for listid in endpoints:
            weight[listid] = 1
            task_done[listid] = True

        while 1:
            next_points = []
            for listid in endpoints:
                for revdep in self.runq_depends[listid]:
                    weight[revdep] = weight[revdep] + weight[listid]
                    deps_left[revdep] = deps_left[revdep] - 1
                    if deps_left[revdep] == 0:
                        next_points.append(revdep)
                        task_done[revdep] = True
            endpoints = next_points
            if len(next_points) == 0:
                break

        # Circular dependency sanity check
        problem_tasks = []
        for task in range(numTasks):
            if task_done[task] is False or deps_left[task] != 0:
                problem_tasks.append(task)
                bb.msg.debug(2, bb.msg.domain.RunQueue, "Task %s (%s) is not buildable\n" % (task, self.get_user_idstring(task)))
                bb.msg.debug(2, bb.msg.domain.RunQueue, "(Complete marker was %s and the remaining dependency count was %s)\n\n" % (task_done[task], deps_left[task]))

        if problem_tasks:
            message = "Unbuildable tasks were found.\n"
            message = message + "These are usually caused by circular dependencies and any circular dependency chains found will be printed below. Increase the debug level to see a list of unbuildable tasks.\n\n"
            message = message + "Identifying dependency loops (this may take a short while)...\n"
            bb.msg.error(bb.msg.domain.RunQueue, message)

            msgs = self.circular_depchains_handler(problem_tasks)

            message = "\n"
            for msg in msgs:
                message = message + msg
            bb.msg.fatal(bb.msg.domain.RunQueue, message)

        return weight

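The weighting is a reverse topological sweep: endpoints (tasks nothing depends on) weigh 1, and each task accumulates the weights of its direct dependents as the frontier walks backwards, so a task sitting under many build paths ends up heaviest. A self-contained sketch on a toy dependency table (the function name and the example DAG are illustrative):

```python
def calculate_weights(depends):
    """depends[i] is the set of task ids task i depends on. Endpoints get
    weight 1 and push their weight onto their dependencies level by level,
    mirroring calculate_task_weights (minus the BitBake sanity checks)."""
    n = len(depends)
    revdeps = [set() for _ in range(n)]
    for i, deps in enumerate(depends):
        for d in deps:
            revdeps[d].add(i)
    weight = [0] * n
    deps_left = [len(revdeps[i]) for i in range(n)]
    endpoints = [i for i in range(n) if not revdeps[i]]
    for i in endpoints:
        weight[i] = 1
    while endpoints:
        next_points = []
        for i in endpoints:
            for d in depends[i]:
                weight[d] += weight[i]
                deps_left[d] -= 1
                if deps_left[d] == 0:   # all dependents processed
                    next_points.append(d)
        endpoints = next_points
    return weight

# Diamond: tasks 1 and 2 depend on 0; task 3 depends on 1 and 2.
print(calculate_weights([set(), {0}, {0}, {1, 2}]))  # -> [2, 1, 1, 1]
```

Task 0 ends up heaviest because two build paths flow through it; any task whose `deps_left` never reaches zero would be stuck in a cycle, which is what the sanity check in the real method detects.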
    def prepare_runqueue(self):
        """
        Turn a set of taskData into a RunQueue and compute data needed
        to optimise the execution order.
        """

        depends = []
        runq_build = []

        taskData = self.taskData

        if len(taskData.tasks_name) == 0:
            # Nothing to do
            return

        bb.msg.note(1, bb.msg.domain.RunQueue, "Preparing runqueue")

        # Step A - Work out a list of tasks to run
        #
        # Taskdata gives us a list of possible providers for every target
        # ordered by priority (build_targets, run_targets). It also gives
        # information on each of those providers.
        #
        # To create the actual list of tasks to execute we fix the list of
        # providers and then resolve the dependencies into task IDs. This
        # process is repeated for each type of dependency (tdepends, deptask,
        # rdeptask, recrdeptask, idepends).
        for task in range(len(taskData.tasks_name)):
            fnid = taskData.tasks_fnid[task]
            fn = taskData.fn_index[fnid]
            task_deps = self.dataCache.task_deps[fn]

            if fnid not in taskData.failed_fnids:

                # Resolve task internal dependencies
                #
                # e.g. addtask before X after Y
                depends = taskData.tasks_tdepends[task]

                # Resolve 'deptask' dependencies
                #
                # e.g. do_sometask[deptask] = "do_someothertask"
                # (makes sure sometask runs after someothertask of all DEPENDS)
                if 'deptask' in task_deps and taskData.tasks_name[task] in task_deps['deptask']:
                    tasknames = task_deps['deptask'][taskData.tasks_name[task]].split()
                    for depid in taskData.depids[fnid]:
                        # Won't be in build_targets if ASSUME_PROVIDED
                        if depid in taskData.build_targets:
                            depdata = taskData.build_targets[depid][0]
                            if depdata is not None:
                                dep = taskData.fn_index[depdata]
                                for taskname in tasknames:
                                    depends.append(taskData.gettask_id(dep, taskname))

                # Resolve 'rdeptask' dependencies
                #
                # e.g. do_sometask[rdeptask] = "do_someothertask"
                # (makes sure sometask runs after someothertask of all RDEPENDS)
                if 'rdeptask' in task_deps and taskData.tasks_name[task] in task_deps['rdeptask']:
                    taskname = task_deps['rdeptask'][taskData.tasks_name[task]]
                    for depid in taskData.rdepids[fnid]:
                        if depid in taskData.run_targets:
                            depdata = taskData.run_targets[depid][0]
                            if depdata is not None:
                                dep = taskData.fn_index[depdata]
                                depends.append(taskData.gettask_id(dep, taskname))

                # Resolve inter-task dependencies
                #
                # e.g. do_sometask[depends] = "targetname:do_someothertask"
                # (makes sure sometask runs after targetname's someothertask)
                idepends = taskData.tasks_idepends[task]
                for idepend in idepends:
                    depid = int(idepend.split(":")[0])
                    if depid in taskData.build_targets:
                        # Won't be in build_targets if ASSUME_PROVIDED
                        depdata = taskData.build_targets[depid][0]
                        if depdata is not None:
                            dep = taskData.fn_index[depdata]
                            depends.append(taskData.gettask_id(dep, idepend.split(":")[1]))
                def add_recursive_build(depid, depfnid):
                    """
                    Add build depends of depid to depends
                    (if we've not seen it before)
                    (calls itself recursively)
                    """
                    if str(depid) in dep_seen:
                        return
                    dep_seen.append(depid)
                    if depid in taskData.build_targets:
                        depdata = taskData.build_targets[depid][0]
                        if depdata is not None:
                            dep = taskData.fn_index[depdata]
                            idepends = []
                            # Need to avoid creating new tasks here
                            taskid = taskData.gettask_id(dep, taskname, False)
                            if taskid is not None:
                                depends.append(taskid)
                                fnid = taskData.tasks_fnid[taskid]
                                idepends = taskData.tasks_idepends[taskid]
                                #print "Added %s (%s) due to %s" % (taskid, taskData.fn_index[fnid], taskData.fn_index[depfnid])
                            else:
                                fnid = taskData.getfn_id(dep)
                            for nextdepid in taskData.depids[fnid]:
                                if nextdepid not in dep_seen:
                                    add_recursive_build(nextdepid, fnid)
                            for nextdepid in taskData.rdepids[fnid]:
                                if nextdepid not in rdep_seen:
                                    add_recursive_run(nextdepid, fnid)
                            for idepend in idepends:
                                nextdepid = int(idepend.split(":")[0])
                                if nextdepid not in dep_seen:
                                    add_recursive_build(nextdepid, fnid)

                def add_recursive_run(rdepid, depfnid):
                    """
                    Add runtime depends of rdepid to depends
                    (if we've not seen it before)
                    (calls itself recursively)
                    """
                    if str(rdepid) in rdep_seen:
                        return
                    rdep_seen.append(rdepid)
                    if rdepid in taskData.run_targets:
                        depdata = taskData.run_targets[rdepid][0]
                        if depdata is not None:
                            dep = taskData.fn_index[depdata]
                            idepends = []
                            # Need to avoid creating new tasks here
                            taskid = taskData.gettask_id(dep, taskname, False)
                            if taskid is not None:
                                depends.append(taskid)
                                fnid = taskData.tasks_fnid[taskid]
                                idepends = taskData.tasks_idepends[taskid]
                                #print "Added %s (%s) due to %s" % (taskid, taskData.fn_index[fnid], taskData.fn_index[depfnid])
                            else:
                                fnid = taskData.getfn_id(dep)
                            for nextdepid in taskData.depids[fnid]:
                                if nextdepid not in dep_seen:
                                    add_recursive_build(nextdepid, fnid)
                            for nextdepid in taskData.rdepids[fnid]:
                                if nextdepid not in rdep_seen:
                                    add_recursive_run(nextdepid, fnid)
                            for idepend in idepends:
                                nextdepid = int(idepend.split(":")[0])
                                if nextdepid not in dep_seen:
                                    add_recursive_build(nextdepid, fnid)
                # Resolve recursive 'recrdeptask' dependencies
                #
                # e.g. do_sometask[recrdeptask] = "do_someothertask"
                # (makes sure sometask runs after someothertask of all DEPENDS,
                # RDEPENDS and intertask dependencies, recursively)
                if 'recrdeptask' in task_deps and taskData.tasks_name[task] in task_deps['recrdeptask']:
                    for taskname in task_deps['recrdeptask'][taskData.tasks_name[task]].split():
                        dep_seen = []
                        rdep_seen = []
                        idep_seen = []
                        for depid in taskData.depids[fnid]:
                            add_recursive_build(depid, fnid)
                        for rdepid in taskData.rdepids[fnid]:
                            add_recursive_run(rdepid, fnid)
                        for idepend in idepends:
                            depid = int(idepend.split(":")[0])
                            add_recursive_build(depid, fnid)

                # Remove all self references
                if task in depends:
                    newdep = []
                    bb.msg.debug(2, bb.msg.domain.RunQueue, "Task %s (%s %s) contains self reference! %s" % (task, taskData.fn_index[taskData.tasks_fnid[task]], taskData.tasks_name[task], depends))
                    for dep in depends:
                        if task != dep:
                            newdep.append(dep)
                    depends = newdep

            self.runq_fnid.append(taskData.tasks_fnid[task])
            self.runq_task.append(taskData.tasks_name[task])
            self.runq_depends.append(set(depends))
            self.runq_revdeps.append(set())

            runq_build.append(0)
        # Step B - Mark all active tasks
        #
        # Start with the tasks we were asked to run and mark all dependencies
        # as active too. If the task is to be 'forced', clear its stamp. Once
        # all active tasks are marked, prune the ones we don't need.

        bb.msg.note(2, bb.msg.domain.RunQueue, "Marking Active Tasks")

        def mark_active(listid, depth):
            """
            Mark an item as active along with its depends
            (calls itself recursively)
            """

            if runq_build[listid] == 1:
                return

            runq_build[listid] = 1

            depends = self.runq_depends[listid]
            for depend in depends:
                mark_active(depend, depth+1)

        for target in self.targets:
            targetid = taskData.getbuild_id(target[0])

            if targetid not in taskData.build_targets:
                continue

            if targetid in taskData.failed_deps:
                continue

            fnid = taskData.build_targets[targetid][0]

            # Remove stamps for targets if force mode active
            if self.cooker.configuration.force:
                fn = taskData.fn_index[fnid]
                bb.msg.note(2, bb.msg.domain.RunQueue, "Remove stamp %s, %s" % (target[1], fn))
                bb.build.del_stamp(target[1], self.dataCache, fn)

            if fnid in taskData.failed_fnids:
                continue

            if target[1] not in taskData.tasks_lookup[fnid]:
                bb.msg.fatal(bb.msg.domain.RunQueue, "Task %s does not exist for target %s" % (target[1], target[0]))

            listid = taskData.tasks_lookup[fnid][target[1]]

            mark_active(listid, 1)
        # Step C - Prune all inactive tasks
        #
        # Once all active tasks are marked, prune the ones we don't need.

        maps = []
        delcount = 0
        for listid in range(len(self.runq_fnid)):
            if runq_build[listid-delcount] == 1:
                maps.append(listid-delcount)
            else:
                del self.runq_fnid[listid-delcount]
                del self.runq_task[listid-delcount]
                del self.runq_depends[listid-delcount]
                del runq_build[listid-delcount]
                del self.runq_revdeps[listid-delcount]
                delcount = delcount + 1
                maps.append(-1)
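The pruning loop deletes in place while recording a remap table: for each original index, `maps` holds either the entry's new (shifted) index or -1 if it was deleted, which is what Step D later uses to rewrite `runq_depends`. A standalone sketch of the same bookkeeping (the names and sample data are illustrative; here the activity flags are read from an unmutated list rather than the shrinking one, which is equivalent):

```python
def prune_inactive(items, active):
    """Delete inactive entries from items in place, returning maps where
    maps[old_index] is the new index of a kept entry, or -1 if deleted."""
    maps = []
    delcount = 0
    for i in range(len(active)):
        if active[i]:
            maps.append(i - delcount)  # shifted left by deletions so far
        else:
            del items[i - delcount]
            delcount += 1
            maps.append(-1)
    return maps

tasks = ["a", "b", "c", "d"]
maps = prune_inactive(tasks, [True, False, True, True])
print(tasks, maps)  # -> ['a', 'c', 'd'] [0, -1, 1, 2]
```

Any surviving dependency that maps to -1 would mean an active task depended on a pruned one, which is exactly the "Invalid mapping" fatal check below.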
        #
        # Step D - Sanity checks and computation
        #

        # Check to make sure we still have tasks to run
        if len(self.runq_fnid) == 0:
            if not taskData.abort:
                bb.msg.fatal(bb.msg.domain.RunQueue, "All buildable tasks have been run but the build is incomplete (--continue mode). Errors for the tasks that failed will have been printed above.")
            else:
                bb.msg.fatal(bb.msg.domain.RunQueue, "No active tasks and not in --continue mode?! Please report this bug.")

        bb.msg.note(2, bb.msg.domain.RunQueue, "Pruned %s inactive tasks, %s left" % (delcount, len(self.runq_fnid)))

        # Remap the dependencies to account for the deleted tasks
        # Check we didn't delete a task we depend on
        for listid in range(len(self.runq_fnid)):
            newdeps = []
            origdeps = self.runq_depends[listid]
            for origdep in origdeps:
                if maps[origdep] == -1:
                    bb.msg.fatal(bb.msg.domain.RunQueue, "Invalid mapping - Should never happen!")
                newdeps.append(maps[origdep])
            self.runq_depends[listid] = set(newdeps)

        bb.msg.note(2, bb.msg.domain.RunQueue, "Assign Weightings")

        # Generate a list of reverse dependencies to ease future calculations
        for listid in range(len(self.runq_fnid)):
            for dep in self.runq_depends[listid]:
                self.runq_revdeps[dep].add(listid)

        # Identify tasks at the end of dependency chains
        # Error on circular dependency loops (length two)
        endpoints = []
        for listid in range(len(self.runq_fnid)):
            revdeps = self.runq_revdeps[listid]
            if len(revdeps) == 0:
                endpoints.append(listid)
            for dep in revdeps:
                if dep in self.runq_depends[listid]:
                    #self.dump_data(taskData)
                    bb.msg.fatal(bb.msg.domain.RunQueue, "Task %s (%s) has circular dependency on %s (%s)" % (taskData.fn_index[self.runq_fnid[dep]], self.runq_task[dep], taskData.fn_index[self.runq_fnid[listid]], self.runq_task[listid]))

        bb.msg.note(2, bb.msg.domain.RunQueue, "Compute totals (have %s endpoint(s))" % len(endpoints))

        # Calculate task weights
        # Check for higher length circular dependencies
        self.runq_weight = self.calculate_task_weights(endpoints)

        # Decide what order to execute the tasks in, pick a scheduler
        # FIXME - Allow user selection
        #self.sched = RunQueueScheduler(self)
        self.sched = RunQueueSchedulerSpeed(self)
        #self.sched = RunQueueSchedulerCompletion(self)

        # Sanity Check - Check for multiple tasks building the same provider
        prov_list = {}
        seen_fn = []
        for task in range(len(self.runq_fnid)):
            fn = taskData.fn_index[self.runq_fnid[task]]
            if fn in seen_fn:
                continue
            seen_fn.append(fn)
            for prov in self.dataCache.fn_provides[fn]:
                if prov not in prov_list:
                    prov_list[prov] = [fn]
                elif fn not in prov_list[prov]:
                    prov_list[prov].append(fn)
        error = False
        for prov in prov_list:
            if len(prov_list[prov]) > 1 and prov not in self.multi_provider_whitelist:
                error = True
                bb.msg.error(bb.msg.domain.RunQueue, "Multiple .bb files are due to be built which each provide %s (%s).\n This usually means one provides something the other doesn't and should." % (prov, " ".join(prov_list[prov])))
        #if error:
        #    bb.msg.fatal(bb.msg.domain.RunQueue, "Corrupted metadata configuration detected, aborting...")

        #self.dump_data(taskData)
    def execute_runqueue(self):
        """
        Run the tasks in a queue prepared by prepare_runqueue
        Upon failure, optionally try to recover the build using any alternate providers
        (if the abort on failure configuration option isn't set)
        """

        failures = 0
        while 1:
            failed_fnids = []
            try:
                self.execute_runqueue_internal()
            finally:
                if self.master_process:
                    failed_fnids = self.finish_runqueue()
            if len(failed_fnids) == 0:
                return failures
            if self.taskData.abort:
                raise bb.runqueue.TaskFailure(failed_fnids)
            for fnid in failed_fnids:
                #print "Failure: %s %s %s" % (fnid, self.taskData.fn_index[fnid], self.runq_task[fnid])
                self.taskData.fail_fnid(fnid)
                failures = failures + 1
            self.reset_runqueue()
            self.prepare_runqueue()

    def execute_runqueue_initVars(self):

        self.stats = RunQueueStats()

        self.active_builds = 0
        self.runq_buildable = []
        self.runq_running = []
        self.runq_complete = []
        self.build_pids = {}
        self.failed_fnids = []
        self.master_process = True

        # Mark initial buildable tasks
        for task in range(len(self.runq_fnid)):
            self.runq_running.append(0)
            self.runq_complete.append(0)
            if len(self.runq_depends[task]) == 0:
                self.runq_buildable.append(1)
            else:
                self.runq_buildable.append(0)
    def task_complete(self, task):
        """
        Mark a task as completed
        Look at the reverse dependencies and mark any task with
        completed dependencies as buildable
        """
        self.runq_complete[task] = 1
        for revdep in self.runq_revdeps[task]:
            if self.runq_running[revdep] == 1:
                continue
            if self.runq_buildable[revdep] == 1:
                continue
            alldeps = 1
            for dep in self.runq_depends[revdep]:
                if self.runq_complete[dep] != 1:
                    alldeps = 0
            if alldeps == 1:
                self.runq_buildable[revdep] = 1
                fn = self.taskData.fn_index[self.runq_fnid[revdep]]
                taskname = self.runq_task[revdep]
                bb.msg.debug(1, bb.msg.domain.RunQueue, "Marking task %s (%s, %s) as buildable" % (revdep, fn, taskname))
    def execute_runqueue_internal(self):
        """
        Run the tasks in a queue prepared by prepare_runqueue
        """

        bb.msg.note(1, bb.msg.domain.RunQueue, "Executing runqueue")

        self.execute_runqueue_initVars()

        if len(self.runq_fnid) == 0:
            # nothing to do
            return []

        def sigint_handler(signum, frame):
            raise KeyboardInterrupt

        # RP - this code allows tasks to run out of the correct order - disabled, FIXME
        # Find any tasks with current stamps and remove them from the queue
        #for task1 in range(len(self.runq_fnid)):
        #    task = self.prio_map[task1]
        #    fn = self.taskData.fn_index[self.runq_fnid[task]]
        #    taskname = self.runq_task[task]
        #    if bb.build.stamp_is_current(taskname, self.dataCache, fn):
        #        bb.msg.debug(2, bb.msg.domain.RunQueue, "Stamp current task %s (%s)" % (task, self.get_user_idstring(task)))
        #        self.runq_running[task] = 1
        #        self.task_complete(task)
        #        self.stats.taskCompleted()
        #        self.stats.taskSkipped()

        while True:
            task = self.sched.next()
            if task is not None:
                fn = self.taskData.fn_index[self.runq_fnid[task]]

                taskname = self.runq_task[task]
                if bb.build.stamp_is_current(taskname, self.dataCache, fn):
                    bb.msg.debug(2, bb.msg.domain.RunQueue, "Stamp current task %s (%s)" % (task, self.get_user_idstring(task)))
                    self.runq_running[task] = 1
                    self.task_complete(task)
                    self.stats.taskCompleted()
                    self.stats.taskSkipped()
                    continue

                bb.msg.note(1, bb.msg.domain.RunQueue, "Running task %d of %d (ID: %s, %s)" % (self.stats.completed + self.active_builds + 1, len(self.runq_fnid), task, self.get_user_idstring(task)))
                try:
                    pid = os.fork()
                except OSError, e:
                    bb.msg.fatal(bb.msg.domain.RunQueue, "fork failed: %d (%s)" % (e.errno, e.strerror))
                if pid == 0:
                    # Bypass master process' handling
                    self.master_process = False
                    # Stop Ctrl+C being sent to children
                    # signal.signal(signal.SIGINT, signal.SIG_IGN)
                    # Make the child the process group leader
                    os.setpgid(0, 0)
                    newsi = os.open('/dev/null', os.O_RDWR)
                    os.dup2(newsi, sys.stdin.fileno())
                    self.cooker.configuration.cmd = taskname[3:]
                    try:
                        self.cooker.tryBuild(fn, False)
                    except bb.build.EventException:
                        bb.msg.error(bb.msg.domain.Build, "Build of " + fn + " " + taskname + " failed")
                        sys.exit(1)
                    except:
                        bb.msg.error(bb.msg.domain.Build, "Build of " + fn + " " + taskname + " failed")
                        raise
                    sys.exit(0)
                self.build_pids[pid] = task
                self.runq_running[task] = 1
                self.active_builds = self.active_builds + 1
                if self.active_builds < self.number_tasks:
                    continue
            if self.active_builds > 0:
                result = os.waitpid(-1, 0)
                self.active_builds = self.active_builds - 1
                task = self.build_pids[result[0]]
                if result[1] != 0:
                    del self.build_pids[result[0]]
                    bb.msg.error(bb.msg.domain.RunQueue, "Task %s (%s) failed" % (task, self.get_user_idstring(task)))
                    self.failed_fnids.append(self.runq_fnid[task])
                    self.stats.taskFailed()
                    break
                self.task_complete(task)
                self.stats.taskCompleted()
                del self.build_pids[result[0]]
                continue
            return
    def finish_runqueue(self):
        try:
            while self.active_builds > 0:
                bb.msg.note(1, bb.msg.domain.RunQueue, "Waiting for %s active tasks to finish" % self.active_builds)
                tasknum = 1
                for k, v in self.build_pids.iteritems():
                    bb.msg.note(1, bb.msg.domain.RunQueue, "%s: %s (%s)" % (tasknum, self.get_user_idstring(v), k))
                    tasknum = tasknum + 1
                result = os.waitpid(-1, 0)
                task = self.build_pids[result[0]]
                if result[1] != 0:
                    bb.msg.error(bb.msg.domain.RunQueue, "Task %s (%s) failed" % (task, self.get_user_idstring(task)))
                    self.failed_fnids.append(self.runq_fnid[task])
                    self.stats.taskFailed()
                del self.build_pids[result[0]]
                self.active_builds = self.active_builds - 1
            bb.msg.note(1, bb.msg.domain.RunQueue, "Tasks Summary: Attempted %d tasks of which %d didn't need to be rerun and %d failed." % (self.stats.completed, self.stats.skipped, self.stats.failed))
            return self.failed_fnids
        except KeyboardInterrupt:
            bb.msg.note(1, bb.msg.domain.RunQueue, "Sending SIGINT to remaining %s tasks" % self.active_builds)
            for k, v in self.build_pids.iteritems():
                try:
                    os.kill(-k, signal.SIGINT)
                except:
                    pass
            raise

        # Sanity Checks
        for task in range(len(self.runq_fnid)):
            if self.runq_buildable[task] == 0:
                bb.msg.error(bb.msg.domain.RunQueue, "Task %s never buildable!" % task)
            if self.runq_running[task] == 0:
                bb.msg.error(bb.msg.domain.RunQueue, "Task %s never ran!" % task)
            if self.runq_complete[task] == 0:
                bb.msg.error(bb.msg.domain.RunQueue, "Task %s never completed!" % task)

        bb.msg.note(1, bb.msg.domain.RunQueue, "Tasks Summary: Attempted %d tasks of which %d didn't need to be rerun and %d failed." % (self.stats.completed, self.stats.skipped, self.stats.failed))

        return self.failed_fnids

    def dump_data(self, taskQueue):
        """
        Dump some debug information on the internal data structures
        """
        bb.msg.debug(3, bb.msg.domain.RunQueue, "run_tasks:")
        for task in range(len(self.runq_fnid)):
            bb.msg.debug(3, bb.msg.domain.RunQueue, " (%s)%s - %s: %s Deps %s RevDeps %s" % (task,
                taskQueue.fn_index[self.runq_fnid[task]],
                self.runq_task[task],
                self.runq_weight[task],
                self.runq_depends[task],
                self.runq_revdeps[task]))

        bb.msg.debug(3, bb.msg.domain.RunQueue, "sorted_tasks:")
        for task1 in range(len(self.runq_fnid)):
            if task1 in self.prio_map:
                task = self.prio_map[task1]
                bb.msg.debug(3, bb.msg.domain.RunQueue, " (%s)%s - %s: %s Deps %s RevDeps %s" % (task,
                    taskQueue.fn_index[self.runq_fnid[task]],
                    self.runq_task[task],
                    self.runq_weight[task],
                    self.runq_depends[task],
                    self.runq_revdeps[task]))
840 bitbake/lib/bb/shell.py (new file)
@@ -0,0 +1,840 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
##########################################################################
#
# Copyright (C) 2005-2006 Michael 'Mickey' Lauer <mickey@Vanille.de>
# Copyright (C) 2005-2006 Vanille Media
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
##########################################################################
#
# Thanks to:
# * Holger Freyther <zecke@handhelds.org>
# * Justin Patrin <papercrane@reversefold.com>
#
##########################################################################

"""
BitBake Shell

IDEAS:
    * list defined tasks per package
    * list classes
    * toggle force
    * command to reparse just one (or more) bbfile(s)
    * automatic check if reparsing is necessary (inotify?)
    * frontend for bb file manipulation
    * more shell-like features:
        - output control, i.e. pipe output into grep, sort, etc.
        - job control, i.e. bring running commands into background and foreground
    * start parsing in background right after startup
    * ncurses interface

PROBLEMS:
    * force doesn't always work
    * readline completion for commands with more than one parameter

"""

##########################################################################
# Import and setup global variables
##########################################################################

try:
    set
except NameError:
    from sets import Set as set
import sys, os, readline, socket, httplib, urllib, commands, popen2, copy, shlex, Queue, fnmatch
from bb import data, parse, build, fatal, cache, taskdata, runqueue, providers as Providers

__version__ = "0.5.3.1"
__credits__ = """BitBake Shell Version %s (C) 2005 Michael 'Mickey' Lauer <mickey@Vanille.de>
Type 'help' for more information, press CTRL-D to exit.""" % __version__

cmds = {}
leave_mainloop = False
last_exception = None
cooker = None
parsed = False
debug = os.environ.get( "BBSHELL_DEBUG", "" )
##########################################################################
# Class BitBakeShellCommands
##########################################################################

class BitBakeShellCommands:
    """This class contains the valid commands for the shell"""

    def __init__( self, shell ):
        """Register all the commands"""
        self._shell = shell
        for attr in BitBakeShellCommands.__dict__:
            if not attr.startswith( "_" ):
                if attr.endswith( "_" ):
                    command = attr[:-1].lower()
                else:
                    command = attr[:].lower()
                method = getattr( BitBakeShellCommands, attr )
                debugOut( "registering command '%s'" % command )
                # scan number of arguments
                usage = getattr( method, "usage", "" )
                if usage != "<...>":
                    numArgs = len( usage.split() )
                else:
                    numArgs = -1
                shell.registerCommand( command, method, numArgs, "%s %s" % ( command, usage ), method.__doc__ )

    def _checkParsed( self ):
        if not parsed:
            print "SHELL: This command needs to parse bbfiles..."
            self.parse( None )

    def _findProvider( self, item ):
        self._checkParsed()
        # Need to use taskData for this information
        preferred = data.getVar( "PREFERRED_PROVIDER_%s" % item, cooker.configuration.data, 1 )
        if not preferred: preferred = item
        try:
            lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status)
        except KeyError:
            if item in cooker.status.providers:
                pf = cooker.status.providers[item][0]
            else:
                pf = None
        return pf
    def alias( self, params ):
        """Register a new name for a command"""
        new, old = params
        if not old in cmds:
            print "ERROR: Command '%s' not known" % old
        else:
            cmds[new] = cmds[old]
            print "OK"
    alias.usage = "<alias> <command>"

    def buffer( self, params ):
        """Dump specified output buffer"""
        index = params[0]
        print self._shell.myout.buffer( int( index ) )
    buffer.usage = "<index>"

    def buffers( self, params ):
        """Show the available output buffers"""
        commands = self._shell.myout.bufferedCommands()
        if not commands:
            print "SHELL: No buffered commands available yet. Start doing something."
        else:
            print "="*35, "Available Output Buffers", "="*27
            for index, cmd in enumerate( commands ):
                print "| %s %s" % ( str( index ).ljust( 3 ), cmd )
            print "="*88

    def build( self, params, cmd = "build" ):
        """Build a providee"""
        global last_exception
        globexpr = params[0]
        self._checkParsed()
        names = globfilter( cooker.status.pkg_pn.keys(), globexpr )
        if len( names ) == 0: names = [ globexpr ]
        print "SHELL: Building %s" % ' '.join( names )

        oldcmd = cooker.configuration.cmd
        cooker.configuration.cmd = cmd

        td = taskdata.TaskData(cooker.configuration.abort)
        localdata = data.createCopy(cooker.configuration.data)
        data.update_data(localdata)
        data.expandKeys(localdata)

        try:
            tasks = []
            for name in names:
                td.add_provider(localdata, cooker.status, name)
                providers = td.get_provider(name)

                if len(providers) == 0:
                    raise Providers.NoProvider

                tasks.append([name, "do_%s" % cooker.configuration.cmd])

            td.add_unresolved(localdata, cooker.status)

            rq = runqueue.RunQueue(cooker, localdata, cooker.status, td, tasks)
            rq.prepare_runqueue()
            rq.execute_runqueue()

        except Providers.NoProvider:
            print "ERROR: No Provider"
            last_exception = Providers.NoProvider

        except runqueue.TaskFailure, fnids:
            for fnid in fnids:
                print "ERROR: '%s' failed" % td.fn_index[fnid]
            last_exception = runqueue.TaskFailure

        except build.EventException, e:
            print "ERROR: Couldn't build '%s'" % names
            last_exception = e

        cooker.configuration.cmd = oldcmd
    build.usage = "<providee>"

    def clean( self, params ):
        """Clean a providee"""
        self.build( params, "clean" )
    clean.usage = "<providee>"
    def compile( self, params ):
        """Execute 'compile' on a providee"""
        self.build( params, "compile" )
    compile.usage = "<providee>"

    def configure( self, params ):
        """Execute 'configure' on a providee"""
        self.build( params, "configure" )
    configure.usage = "<providee>"

    def edit( self, params ):
        """Call $EDITOR on a providee"""
        name = params[0]
        bbfile = self._findProvider( name )
        if bbfile is not None:
            os.system( "%s %s" % ( os.environ.get( "EDITOR", "vi" ), bbfile ) )
        else:
            print "ERROR: Nothing provides '%s'" % name
    edit.usage = "<providee>"

    def environment( self, params ):
        """Dump out the outer BitBake environment"""
        cooker.showEnvironment()

    def exit_( self, params ):
        """Leave the BitBake Shell"""
        debugOut( "setting leave_mainloop to true" )
        global leave_mainloop
        leave_mainloop = True

    def fetch( self, params ):
        """Fetch a providee"""
        self.build( params, "fetch" )
    fetch.usage = "<providee>"

    def fileBuild( self, params, cmd = "build" ):
        """Parse and build a .bb file"""
        global last_exception
        name = params[0]
        bf = completeFilePath( name )
        print "SHELL: Calling '%s' on '%s'" % ( cmd, bf )

        oldcmd = cooker.configuration.cmd
        cooker.configuration.cmd = cmd

        thisdata = data.createCopy(cooker.configuration.data)
        data.update_data(thisdata)
        data.expandKeys(thisdata)

        try:
            bbfile_data = parse.handle( bf, thisdata )
        except parse.ParseError:
            print "ERROR: Unable to open or parse '%s'" % bf
        else:
            # Remove stamp for target if force mode active
            if cooker.configuration.force:
                bb.msg.note(2, bb.msg.domain.RunQueue, "Remove stamp %s, %s" % (cmd, bf))
                bb.build.del_stamp('do_%s' % cmd, bbfile_data)

            item = data.getVar('PN', bbfile_data, 1)
            data.setVar( "_task_cache", [], bbfile_data ) # force
            try:
                cooker.tryBuildPackage( os.path.abspath( bf ), item, cmd, bbfile_data, True )
            except build.EventException, e:
                print "ERROR: Couldn't build '%s'" % name
                last_exception = e

        cooker.configuration.cmd = oldcmd
    fileBuild.usage = "<bbfile>"
    def fileClean( self, params ):
        """Clean a .bb file"""
        self.fileBuild( params, "clean" )
    fileClean.usage = "<bbfile>"

    def fileEdit( self, params ):
        """Call $EDITOR on a .bb file"""
        name = params[0]
        os.system( "%s %s" % ( os.environ.get( "EDITOR", "vi" ), completeFilePath( name ) ) )
    fileEdit.usage = "<bbfile>"

    def fileRebuild( self, params ):
        """Rebuild (clean & build) a .bb file"""
        self.fileBuild( params, "rebuild" )
    fileRebuild.usage = "<bbfile>"

    def fileReparse( self, params ):
        """(re)Parse a bb file"""
        bbfile = params[0]
        print "SHELL: Parsing '%s'" % bbfile
        parse.update_mtime( bbfile )
        cooker.bb_cache.cacheValidUpdate(bbfile)
        fromCache = cooker.bb_cache.loadData(bbfile, cooker.configuration.data)
        cooker.bb_cache.sync()
        if False: #fromCache:
            print "SHELL: File has not been updated, not reparsing"
        else:
            print "SHELL: Parsed"
    fileReparse.usage = "<bbfile>"

    def abort( self, params ):
        """Toggle abort task execution flag (see bitbake -k)"""
        cooker.configuration.abort = not cooker.configuration.abort
        print "SHELL: Abort Flag is now '%s'" % repr( cooker.configuration.abort )

    def force( self, params ):
        """Toggle force task execution flag (see bitbake -f)"""
        cooker.configuration.force = not cooker.configuration.force
        print "SHELL: Force Flag is now '%s'" % repr( cooker.configuration.force )
    def help( self, params ):
        """Show a comprehensive list of commands and their purpose"""
        print "="*30, "Available Commands", "="*30
        allcmds = cmds.keys()
        allcmds.sort()
        for cmd in allcmds:
            function,numparams,usage,helptext = cmds[cmd]
            print "| %s | %s" % (usage.ljust(30), helptext)
        print "="*78

    def lastError( self, params ):
        """Show the reason or log that was produced by the last BitBake event exception"""
        if last_exception is None:
            print "SHELL: No Errors yet (Phew)..."
        else:
            reason, event = last_exception.args
            print "SHELL: Reason for the last error: '%s'" % reason
            if ':' in reason:
                msg, filename = reason.split( ':' )
                filename = filename.strip()
                print "SHELL: Dumping log file for last error:"
                try:
                    print open( filename ).read()
                except IOError:
                    print "ERROR: Couldn't open '%s'" % filename

    def match( self, params ):
        """Dump all files or providers matching a glob expression"""
        what, globexpr = params
        if what == "files":
            self._checkParsed()
            for key in globfilter( cooker.status.pkg_fn.keys(), globexpr ): print key
        elif what == "providers":
            self._checkParsed()
            for key in globfilter( cooker.status.pkg_pn.keys(), globexpr ): print key
        else:
            print "Usage: match %s" % self.match.usage
    match.usage = "<files|providers> <glob>"

    def new( self, params ):
        """Create a new .bb file and open the editor"""
        dirname, filename = params
        packages = '/'.join( data.getVar( "BBFILES", cooker.configuration.data, 1 ).split('/')[:-2] )
        fulldirname = "%s/%s" % ( packages, dirname )

        if not os.path.exists( fulldirname ):
            print "SHELL: Creating '%s'" % fulldirname
            os.mkdir( fulldirname )
        if os.path.exists( fulldirname ) and os.path.isdir( fulldirname ):
            if os.path.exists( "%s/%s" % ( fulldirname, filename ) ):
                print "SHELL: ERROR: %s/%s already exists" % ( fulldirname, filename )
                return False
            print "SHELL: Creating '%s/%s'" % ( fulldirname, filename )
            newpackage = open( "%s/%s" % ( fulldirname, filename ), "w" )
            print >>newpackage,"""DESCRIPTION = ""
SECTION = ""
AUTHOR = ""
HOMEPAGE = ""
MAINTAINER = ""
LICENSE = "GPL"
PR = "r0"

SRC_URI = ""

#inherit base

#do_configure() {
#
#}

#do_compile() {
#
#}

#do_stage() {
#
#}

#do_install() {
#
#}
"""
            newpackage.close()
            os.system( "%s %s/%s" % ( os.environ.get( "EDITOR" ), fulldirname, filename ) )
    new.usage = "<directory> <filename>"
    def package( self, params ):
        """Execute 'package' on a providee"""
        self.build( params, "package" )
    package.usage = "<providee>"

    def pasteBin( self, params ):
        """Send a command + output buffer to the pastebin at http://rafb.net/paste"""
        index = params[0]
        contents = self._shell.myout.buffer( int( index ) )
        sendToPastebin( "output of " + params[0], contents )
    pasteBin.usage = "<index>"

    def pasteLog( self, params ):
        """Send the last event exception error log (if there is one) to http://rafb.net/paste"""
        if last_exception is None:
            print "SHELL: No Errors yet (Phew)..."
        else:
            reason, event = last_exception.args
            print "SHELL: Reason for the last error: '%s'" % reason
            if ':' in reason:
                msg, filename = reason.split( ':' )
                filename = filename.strip()
                print "SHELL: Pasting log file to pastebin..."

                file = open( filename ).read()
                sendToPastebin( "contents of " + filename, file )

    def patch( self, params ):
        """Execute 'patch' command on a providee"""
        self.build( params, "patch" )
    patch.usage = "<providee>"

    def parse( self, params ):
        """(Re-)parse .bb files and calculate the dependency graph"""
        cooker.status = cache.CacheData()
        ignore = data.getVar("ASSUME_PROVIDED", cooker.configuration.data, 1) or ""
        cooker.status.ignored_dependencies = set( ignore.split() )
        cooker.handleCollections( data.getVar("BBFILE_COLLECTIONS", cooker.configuration.data, 1) )

        (filelist, masked) = cooker.collect_bbfiles()
        cooker.parse_bbfiles(filelist, masked, cooker.myProgressCallback)
        cooker.buildDepgraph()
        global parsed
        parsed = True
        print

    def reparse( self, params ):
        """(re)Parse a providee's bb file"""
        bbfile = self._findProvider( params[0] )
        if bbfile is not None:
            print "SHELL: Found bbfile '%s' for '%s'" % ( bbfile, params[0] )
            self.fileReparse( [ bbfile ] )
        else:
            print "ERROR: Nothing provides '%s'" % params[0]
    reparse.usage = "<providee>"

    def getvar( self, params ):
        """Dump the contents of an outer BitBake environment variable"""
        var = params[0]
        value = data.getVar( var, cooker.configuration.data, 1 )
        print value
    getvar.usage = "<variable>"

    def peek( self, params ):
        """Dump contents of variable defined in providee's metadata"""
        name, var = params
        bbfile = self._findProvider( name )
        if bbfile is not None:
            the_data = cooker.bb_cache.loadDataFull(bbfile, cooker.configuration.data)
            value = the_data.getVar( var, 1 )
            print value
        else:
            print "ERROR: Nothing provides '%s'" % name
    peek.usage = "<providee> <variable>"

    def poke( self, params ):
        """Set contents of variable defined in providee's metadata"""
        name, var, value = params
        bbfile = self._findProvider( name )
        if bbfile is not None:
            print "ERROR: Sorry, this functionality is currently broken"
            #d = cooker.pkgdata[bbfile]
            #data.setVar( var, value, d )

            # mark the change semi-persistent
            #cooker.pkgdata.setDirty(bbfile, d)
            #print "OK"
        else:
            print "ERROR: Nothing provides '%s'" % name
    poke.usage = "<providee> <variable> <value>"
    def print_( self, params ):
        """Dump all files or providers"""
        what = params[0]
        if what == "files":
            self._checkParsed()
            for key in cooker.status.pkg_fn.keys(): print key
        elif what == "providers":
            self._checkParsed()
            for key in cooker.status.providers.keys(): print key
        else:
            print "Usage: print %s" % self.print_.usage
    print_.usage = "<files|providers>"

    def python( self, params ):
        """Enter the expert mode - an interactive BitBake Python Interpreter"""
        sys.ps1 = "EXPERT BB>>> "
        sys.ps2 = "EXPERT BB... "
        import code
        interpreter = code.InteractiveConsole( dict( globals() ) )
        interpreter.interact( "SHELL: Expert Mode - BitBake Python %s\nType 'help' for more information, press CTRL-D to switch back to BBSHELL." % sys.version )

    def showdata( self, params ):
        """Show the parsed metadata for a given providee"""
        cooker.showEnvironment(None, params)
    showdata.usage = "<providee>"

    def setVar( self, params ):
        """Set an outer BitBake environment variable"""
        var, value = params
        data.setVar( var, value, cooker.configuration.data )
        print "OK"
    setVar.usage = "<variable> <value>"

    def rebuild( self, params ):
        """Clean and rebuild a .bb file or a providee"""
        self.build( params, "clean" )
        self.build( params, "build" )
    rebuild.usage = "<providee>"

    def shell( self, params ):
        """Execute a shell command and dump the output"""
        if params != "":
            print commands.getoutput( " ".join( params ) )
    shell.usage = "<...>"

    def stage( self, params ):
        """Execute 'stage' on a providee"""
        self.build( params, "stage" )
    stage.usage = "<providee>"

    def status( self, params ):
        """<just for testing>"""
        print "-" * 78
        print "building list = '%s'" % cooker.building_list
        print "build path = '%s'" % cooker.build_path
        print "consider_msgs_cache = '%s'" % cooker.consider_msgs_cache
        print "build stats = '%s'" % cooker.stats
        if last_exception is not None: print "last_exception = '%s'" % repr( last_exception.args )
        print "memory output contents = '%s'" % self._shell.myout._buffer

    def test( self, params ):
        """<just for testing>"""
        print "testCommand called with '%s'" % params

    def unpack( self, params ):
        """Execute 'unpack' on a providee"""
        self.build( params, "unpack" )
    unpack.usage = "<providee>"

    def which( self, params ):
        """Computes the providers for a given providee"""
        # Need to use taskData for this information
        item = params[0]

        self._checkParsed()

        preferred = data.getVar( "PREFERRED_PROVIDER_%s" % item, cooker.configuration.data, 1 )
        if not preferred: preferred = item

        try:
            lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status)
        except KeyError:
            lv, lf, pv, pf = (None,)*4

        try:
            providers = cooker.status.providers[item]
        except KeyError:
            print "SHELL: ERROR: Nothing provides", preferred
        else:
            for provider in providers:
                if provider == pf: provider = " (***) %s" % provider
                else: provider = "       %s" % provider
                print provider
    which.usage = "<providee>"

##########################################################################
# Common helper functions
##########################################################################
def completeFilePath( bbfile ):
    """Get the complete bbfile path"""
    if not cooker.status.pkg_fn: return bbfile
    for key in cooker.status.pkg_fn.keys():
        if key.endswith( bbfile ):
            return key
    return bbfile

def sendToPastebin( desc, content ):
    """Send content to the pastebin at http://rafb.net/paste"""
    mydata = {}
    mydata["lang"] = "Plain Text"
    mydata["desc"] = desc
    mydata["cvt_tabs"] = "No"
    mydata["nick"] = "%s@%s" % ( os.environ.get( "USER", "unknown" ), socket.gethostname() or "unknown" )
    mydata["text"] = content
    params = urllib.urlencode( mydata )
    headers = {"Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain"}

    host = "rafb.net"
    conn = httplib.HTTPConnection( "%s:80" % host )
    conn.request("POST", "/paste/paste.php", params, headers )

    response = conn.getresponse()
    conn.close()

    if response.status == 302:
        location = response.getheader( "location" ) or "unknown"
        print "SHELL: Pasted to http://%s%s" % ( host, location )
    else:
        print "ERROR: %s %s" % ( response.status, response.reason )

def completer( text, state ):
    """Return a possible readline completion"""
    debugOut( "completer called with text='%s', state='%d'" % ( text, state ) )

    if state == 0:
        line = readline.get_line_buffer()
        if " " in line:
            line = line.split()
            # we are in second (or more) argument
            if line[0] in cmds and hasattr( cmds[line[0]][0], "usage" ): # known command and usage
                u = getattr( cmds[line[0]][0], "usage" ).split()[0]
                if u == "<variable>":
                    allmatches = cooker.configuration.data.keys()
                elif u == "<bbfile>":
                    if cooker.status.pkg_fn is None: allmatches = [ "(No Matches Available. Parsed yet?)" ]
                    else: allmatches = [ x.split("/")[-1] for x in cooker.status.pkg_fn.keys() ]
                elif u == "<providee>":
                    if cooker.status.pkg_fn is None: allmatches = [ "(No Matches Available. Parsed yet?)" ]
                    else: allmatches = cooker.status.providers.iterkeys()
                else: allmatches = [ "(No tab completion available for this command)" ]
            else: allmatches = [ "(No tab completion available for this command)" ]
        else:
            # we are in first argument
            allmatches = cmds.iterkeys()

        completer.matches = [ x for x in allmatches if x[:len(text)] == text ]
        #print "completer.matches = '%s'" % completer.matches
    if len( completer.matches ) > state:
        return completer.matches[state]
    else:
        return None

def debugOut( text ):
    if debug:
        sys.stderr.write( "( %s )\n" % text )

def columnize( alist, width = 80 ):
    """
    A word-wrap function that preserves existing line breaks
|
||||
and most spaces in the text. Expects that existing line
|
||||
breaks are posix newlines (\n).
|
||||
"""
|
||||
return reduce(lambda line, word, width=width: '%s%s%s' %
|
||||
(line,
|
||||
' \n'[(len(line[line.rfind('\n')+1:])
|
||||
+ len(word.split('\n',1)[0]
|
||||
) >= width)],
|
||||
word),
|
||||
alist
|
||||
)
|
||||
|
||||
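The `reduce`-based word wrapper above is dense. As a rough behaviour check, here is a self-contained Python 3 port (the only change is that `reduce` now lives in `functools`); the two-character string `' \n'` is indexed by the boolean "does the word still fit on the current line":

```python
from functools import reduce

def columnize(alist, width=80):
    """Word-wrap a list of words: join with ' ' while the current line
    still fits in `width`, otherwise start a new line with '\n'."""
    return reduce(lambda line, word: '%s%s%s' %
                  (line,
                   # ' \n'[False] == ' ', ' \n'[True] == '\n'
                   ' \n'[(len(line[line.rfind('\n') + 1:])
                          + len(word.split('\n', 1)[0])) >= width],
                   word),
                  alist)

print(repr(columnize(["aaa", "bbb", "ccc"], width=7)))
```

With `width=7`, "aaa" and "bbb" fit on one line (3 + 3 < 7), but appending "ccc" would start from column 7, so it wraps.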
def globfilter( names, pattern ):
    return fnmatch.filter( names, pattern )

##########################################################################
# Class MemoryOutput
##########################################################################

class MemoryOutput:
    """File-like output class buffering the output of the last 10 commands"""
    def __init__( self, delegate ):
        self.delegate = delegate
        self._buffer = []
        self.text = []
        self._command = None

    def startCommand( self, command ):
        self._command = command
        self.text = []
    def endCommand( self ):
        if self._command is not None:
            if len( self._buffer ) == 10: del self._buffer[0]
            self._buffer.append( ( self._command, self.text ) )
    def removeLast( self ):
        if self._buffer:
            del self._buffer[ len( self._buffer ) - 1 ]
        self.text = []
        self._command = None
    def lastBuffer( self ):
        if self._buffer:
            return self._buffer[ len( self._buffer ) - 1 ][1]
    def bufferedCommands( self ):
        return [ cmd for cmd, output in self._buffer ]
    def buffer( self, i ):
        if i < len( self._buffer ):
            return "BB>> %s\n%s" % ( self._buffer[i][0], "".join( self._buffer[i][1] ) )
        else: return "ERROR: Invalid buffer number. Buffer needs to be in (0, %d)" % ( len( self._buffer ) - 1 )
    def write( self, text ):
        if self._command is not None and text != "BB>> ": self.text.append( text )
        if self.delegate is not None: self.delegate.write( text )
    def flush( self ):
        return self.delegate.flush()
    def fileno( self ):
        return self.delegate.fileno()
    def isatty( self ):
        return self.delegate.isatty()

##########################################################################
# Class BitBakeShell
##########################################################################

class BitBakeShell:

    def __init__( self ):
        """Register commands and set up readline"""
        self.commandQ = Queue.Queue()
        self.commands = BitBakeShellCommands( self )
        self.myout = MemoryOutput( sys.stdout )
        self.historyfilename = os.path.expanduser( "~/.bbsh_history" )
        self.startupfilename = os.path.expanduser( "~/.bbsh_startup" )

        readline.set_completer( completer )
        readline.set_completer_delims( " " )
        readline.parse_and_bind("tab: complete")

        try:
            readline.read_history_file( self.historyfilename )
        except IOError:
            pass  # It doesn't exist yet.

        print __credits__

    def cleanup( self ):
        """Write readline history and clean up resources"""
        debugOut( "writing command history" )
        try:
            readline.write_history_file( self.historyfilename )
        except:
            print "SHELL: Unable to save command history"

    def registerCommand( self, command, function, numparams = 0, usage = "", helptext = "" ):
        """Register a command"""
        if usage == "": usage = command
        if helptext == "": helptext = function.__doc__ or "<not yet documented>"
        cmds[command] = ( function, numparams, usage, helptext )

    def processCommand( self, command, params ):
        """Process a command. Check number of params and print a usage string, if appropriate"""
        debugOut( "processing command '%s'..." % command )
        try:
            function, numparams, usage, helptext = cmds[command]
        except KeyError:
            print "SHELL: ERROR: '%s' command is not a valid command." % command
            self.myout.removeLast()
        else:
            if (numparams != -1) and (not len( params ) == numparams):
                print "Usage: '%s'" % usage
                return

            result = function( self.commands, params )
            debugOut( "result was '%s'" % result )

    def processStartupFile( self ):
        """Read and execute all commands found in $HOME/.bbsh_startup"""
        if os.path.exists( self.startupfilename ):
            startupfile = open( self.startupfilename, "r" )
            for cmdline in startupfile:
                debugOut( "processing startup line '%s'" % cmdline )
                if not cmdline:
                    continue
                if "|" in cmdline:
                    print "ERROR: '|' in startup file is not allowed. Ignoring line"
                    continue
                self.commandQ.put( cmdline.strip() )

    def main( self ):
        """The main command loop"""
        while not leave_mainloop:
            try:
                if self.commandQ.empty():
                    sys.stdout = self.myout.delegate
                    cmdline = raw_input( "BB>> " )
                    sys.stdout = self.myout
                else:
                    cmdline = self.commandQ.get()
                if cmdline:
                    allCommands = cmdline.split( ';' )
                    for command in allCommands:
                        pipecmd = None
                        #
                        # special case for expert mode
                        if command == 'python':
                            sys.stdout = self.myout.delegate
                            self.processCommand( command, "" )
                            sys.stdout = self.myout
                        else:
                            self.myout.startCommand( command )
                            if '|' in command: # disable output
                                command, pipecmd = command.split( '|' )
                                delegate = self.myout.delegate
                                self.myout.delegate = None
                            tokens = shlex.split( command, True )
                            self.processCommand( tokens[0], tokens[1:] or "" )
                            self.myout.endCommand()
                            if pipecmd is not None: # restore output
                                self.myout.delegate = delegate

                                pipe = popen2.Popen4( pipecmd )
                                pipe.tochild.write( "\n".join( self.myout.lastBuffer() ) )
                                pipe.tochild.close()
                                sys.stdout.write( pipe.fromchild.read() )
            except EOFError:
                print
                return
            except KeyboardInterrupt:
                print

##########################################################################
# Start function - called from the BitBake command line utility
##########################################################################

def start( aCooker ):
    global cooker
    cooker = aCooker
    bbshell = BitBakeShell()
    bbshell.processStartupFile()
    bbshell.main()
    bbshell.cleanup()

if __name__ == "__main__":
    print "SHELL: Sorry, this program should only be called by BitBake."
581 bitbake/lib/bb/taskdata.py (new file)
@@ -0,0 +1,581 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'TaskData' implementation

Task data collection and handling

"""

# Copyright (C) 2006 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

from bb import data, event, mkdirhier, utils
import bb, os

class TaskData:
    """
    BitBake Task Data implementation
    """
    def __init__(self, abort = True):
        self.build_names_index = []
        self.run_names_index = []
        self.fn_index = []

        self.build_targets = {}
        self.run_targets = {}

        self.external_targets = []

        self.tasks_fnid = []
        self.tasks_name = []
        self.tasks_tdepends = []
        self.tasks_idepends = []
        # Cache to speed up task ID lookups
        self.tasks_lookup = {}

        self.depids = {}
        self.rdepids = {}

        self.consider_msgs_cache = []

        self.failed_deps = []
        self.failed_rdeps = []
        self.failed_fnids = []

        self.abort = abort

    def getbuild_id(self, name):
        """
        Return an ID number for the build target name.
        If it doesn't exist, create one.
        """
        if not name in self.build_names_index:
            self.build_names_index.append(name)
            return len(self.build_names_index) - 1

        return self.build_names_index.index(name)

    def getrun_id(self, name):
        """
        Return an ID number for the run target name.
        If it doesn't exist, create one.
        """
        if not name in self.run_names_index:
            self.run_names_index.append(name)
            return len(self.run_names_index) - 1

        return self.run_names_index.index(name)

    def getfn_id(self, name):
        """
        Return an ID number for the filename.
        If it doesn't exist, create one.
        """
        if not name in self.fn_index:
            self.fn_index.append(name)
            return len(self.fn_index) - 1

        return self.fn_index.index(name)
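The three `get*_id` methods above share one idiom: a name's ID is simply its position in an append-only list, so new names get the next free integer and known names return their existing slot. A minimal standalone sketch of that scheme (the `get_id` helper is hypothetical, not part of BitBake):

```python
def get_id(index, name):
    """Return the ID for name, appending it to the index list if new.
    The ID is the name's position in the append-only list."""
    if name not in index:
        index.append(name)
        return len(index) - 1
    return index.index(name)

names = []
print(get_id(names, "glibc"), get_id(names, "gcc"), get_id(names, "glibc"))
```

Repeated lookups are stable: "glibc" keeps ID 0 no matter how many names are added after it, which is what lets TaskData store cross-references as plain integers.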
    def gettask_id(self, fn, task, create = True):
        """
        Return an ID number for the task matching fn and task.
        If it doesn't exist, create one by default.
        Optionally return None instead.
        """
        fnid = self.getfn_id(fn)

        if fnid in self.tasks_lookup:
            if task in self.tasks_lookup[fnid]:
                return self.tasks_lookup[fnid][task]

        if not create:
            return None

        self.tasks_name.append(task)
        self.tasks_fnid.append(fnid)
        self.tasks_tdepends.append([])
        self.tasks_idepends.append([])

        listid = len(self.tasks_name) - 1

        if fnid not in self.tasks_lookup:
            self.tasks_lookup[fnid] = {}
        self.tasks_lookup[fnid][task] = listid

        return listid

    def add_tasks(self, fn, dataCache):
        """
        Add tasks for a given fn to the database
        """

        task_graph = dataCache.task_queues[fn]
        task_deps = dataCache.task_deps[fn]

        fnid = self.getfn_id(fn)

        if fnid in self.failed_fnids:
            bb.msg.fatal(bb.msg.domain.TaskData, "Trying to re-add a failed file? Something is broken...")

        # Check if we've already seen this fn
        if fnid in self.tasks_fnid:
            return

        for task in task_graph.allnodes():

            # Work out task dependencies
            parentids = []
            for dep in task_graph.getparents(task):
                parentid = self.gettask_id(fn, dep)
                parentids.append(parentid)
            taskid = self.gettask_id(fn, task)
            self.tasks_tdepends[taskid].extend(parentids)

            # Touch all intertask dependencies
            if 'depends' in task_deps and task in task_deps['depends']:
                ids = []
                for dep in task_deps['depends'][task].split():
                    if dep:
                        ids.append(str(self.getbuild_id(dep.split(":")[0])) + ":" + dep.split(":")[1])
                self.tasks_idepends[taskid].extend(ids)

        # Work out build dependencies
        if not fnid in self.depids:
            dependids = {}
            for depend in dataCache.deps[fn]:
                bb.msg.debug(2, bb.msg.domain.TaskData, "Added dependency %s for %s" % (depend, fn))
                dependids[self.getbuild_id(depend)] = None
            self.depids[fnid] = dependids.keys()

        # Work out runtime dependencies
        if not fnid in self.rdepids:
            rdependids = {}
            rdepends = dataCache.rundeps[fn]
            rrecs = dataCache.runrecs[fn]
            for package in rdepends:
                for rdepend in rdepends[package]:
                    bb.msg.debug(2, bb.msg.domain.TaskData, "Added runtime dependency %s for %s" % (rdepend, fn))
                    rdependids[self.getrun_id(rdepend)] = None
            for package in rrecs:
                for rdepend in rrecs[package]:
                    bb.msg.debug(2, bb.msg.domain.TaskData, "Added runtime recommendation %s for %s" % (rdepend, fn))
                    rdependids[self.getrun_id(rdepend)] = None
            self.rdepids[fnid] = rdependids.keys()

        for dep in self.depids[fnid]:
            if dep in self.failed_deps:
                self.fail_fnid(fnid)
                return
        for dep in self.rdepids[fnid]:
            if dep in self.failed_rdeps:
                self.fail_fnid(fnid)
                return

    def have_build_target(self, target):
        """
        Have we a build target matching this name?
        """
        targetid = self.getbuild_id(target)

        if targetid in self.build_targets:
            return True
        return False

    def have_runtime_target(self, target):
        """
        Have we a runtime target matching this name?
        """
        targetid = self.getrun_id(target)

        if targetid in self.run_targets:
            return True
        return False

    def add_build_target(self, fn, item):
        """
        Add a build target.
        If already present, append the provider fn to the list
        """
        targetid = self.getbuild_id(item)
        fnid = self.getfn_id(fn)

        if targetid in self.build_targets:
            if fnid in self.build_targets[targetid]:
                return
            self.build_targets[targetid].append(fnid)
            return
        self.build_targets[targetid] = [fnid]

    def add_runtime_target(self, fn, item):
        """
        Add a runtime target.
        If already present, append the provider fn to the list
        """
        targetid = self.getrun_id(item)
        fnid = self.getfn_id(fn)

        if targetid in self.run_targets:
            if fnid in self.run_targets[targetid]:
                return
            self.run_targets[targetid].append(fnid)
            return
        self.run_targets[targetid] = [fnid]

    def mark_external_target(self, item):
        """
        Mark a build target as being externally requested
        """
        targetid = self.getbuild_id(item)

        if targetid not in self.external_targets:
            self.external_targets.append(targetid)

    def get_unresolved_build_targets(self, dataCache):
        """
        Return a list of build targets whose providers
        are unknown.
        """
        unresolved = []
        for target in self.build_names_index:
            if target in dataCache.ignored_dependencies:
                continue
            if self.build_names_index.index(target) in self.failed_deps:
                continue
            if not self.have_build_target(target):
                unresolved.append(target)
        return unresolved

    def get_unresolved_run_targets(self, dataCache):
        """
        Return a list of runtime targets whose providers
        are unknown.
        """
        unresolved = []
        for target in self.run_names_index:
            if target in dataCache.ignored_dependencies:
                continue
            if self.run_names_index.index(target) in self.failed_rdeps:
                continue
            if not self.have_runtime_target(target):
                unresolved.append(target)
        return unresolved

    def get_provider(self, item):
        """
        Return a list of providers of item
        """
        targetid = self.getbuild_id(item)

        return self.build_targets[targetid]

    def get_dependees(self, itemid):
        """
        Return a list of targets which depend on item
        """
        dependees = []
        for fnid in self.depids:
            if itemid in self.depids[fnid]:
                dependees.append(fnid)
        return dependees

    def get_dependees_str(self, item):
        """
        Return a list of targets which depend on item as a user-readable string
        """
        itemid = self.getbuild_id(item)
        dependees = []
        for fnid in self.depids:
            if itemid in self.depids[fnid]:
                dependees.append(self.fn_index[fnid])
        return dependees

    def get_rdependees(self, itemid):
        """
        Return a list of targets which depend on runtime item
        """
        dependees = []
        for fnid in self.rdepids:
            if itemid in self.rdepids[fnid]:
                dependees.append(fnid)
        return dependees

    def get_rdependees_str(self, item):
        """
        Return a list of targets which depend on runtime item as a user-readable string
        """
        itemid = self.getrun_id(item)
        dependees = []
        for fnid in self.rdepids:
            if itemid in self.rdepids[fnid]:
                dependees.append(self.fn_index[fnid])
        return dependees

    def add_provider(self, cfgData, dataCache, item):
        try:
            self.add_provider_internal(cfgData, dataCache, item)
        except bb.providers.NoProvider:
            if self.abort:
                bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (item, self.get_dependees_str(item)))
                raise
            targetid = self.getbuild_id(item)
            self.remove_buildtarget(targetid)

        self.mark_external_target(item)

    def add_provider_internal(self, cfgData, dataCache, item):
        """
        Add the providers of item to the task data
        Mark entries that were specifically added externally, as distinct
        from dependencies added internally during dependency resolution
        """

        if item in dataCache.ignored_dependencies:
            return

        if not item in dataCache.providers:
            bb.msg.note(2, bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (item, self.get_dependees_str(item)))
            bb.event.fire(bb.event.NoProvider(item, cfgData))
            raise bb.providers.NoProvider(item)

        if self.have_build_target(item):
            return

        all_p = dataCache.providers[item]

        eligible, foundUnique = bb.providers.filterProviders(all_p, item, cfgData, dataCache)

        for p in eligible:
            fnid = self.getfn_id(p)
            if fnid in self.failed_fnids:
                eligible.remove(p)

        if not eligible:
            bb.msg.note(2, bb.msg.domain.Provider, "No buildable provider PROVIDES '%s' but '%s' DEPENDS on or otherwise requires it. Enable debugging and see earlier logs to find unbuildable providers." % (item, self.get_dependees_str(item)))
            bb.event.fire(bb.event.NoProvider(item, cfgData))
            raise bb.providers.NoProvider(item)

        if len(eligible) > 1 and foundUnique == False:
            if item not in self.consider_msgs_cache:
                providers_list = []
                for fn in eligible:
                    providers_list.append(dataCache.pkg_fn[fn])
                bb.msg.note(1, bb.msg.domain.Provider, "multiple providers are available for %s (%s);" % (item, ", ".join(providers_list)))
                bb.msg.note(1, bb.msg.domain.Provider, "consider defining PREFERRED_PROVIDER_%s" % item)
                bb.event.fire(bb.event.MultipleProviders(item, providers_list, cfgData))
                self.consider_msgs_cache.append(item)

        for fn in eligible:
            fnid = self.getfn_id(fn)
            if fnid in self.failed_fnids:
                continue
            bb.msg.debug(2, bb.msg.domain.Provider, "adding %s to satisfy %s" % (fn, item))
            self.add_build_target(fn, item)
            self.add_tasks(fn, dataCache)

            #item = dataCache.pkg_fn[fn]

    def add_rprovider(self, cfgData, dataCache, item):
        """
        Add the runtime providers of item to the task data
        (takes item names from RDEPENDS/PACKAGES namespace)
        """

        if item in dataCache.ignored_dependencies:
            return

        if self.have_runtime_target(item):
            return

        all_p = bb.providers.getRuntimeProviders(dataCache, item)

        if not all_p:
            bb.msg.error(bb.msg.domain.Provider, "'%s' RDEPENDS/RRECOMMENDS or otherwise requires the runtime entity '%s' but it wasn't found in any PACKAGE or RPROVIDES variables" % (self.get_rdependees_str(item), item))
            bb.event.fire(bb.event.NoProvider(item, cfgData, runtime=True))
            raise bb.providers.NoRProvider(item)

        eligible, numberPreferred = bb.providers.filterProvidersRunTime(all_p, item, cfgData, dataCache)

        for p in eligible:
            fnid = self.getfn_id(p)
            if fnid in self.failed_fnids:
                eligible.remove(p)

        if not eligible:
            bb.msg.error(bb.msg.domain.Provider, "'%s' RDEPENDS/RRECOMMENDS or otherwise requires the runtime entity '%s' but it wasn't found in any PACKAGE or RPROVIDES variables of any buildable targets.\nEnable debugging and see earlier logs to find unbuildable targets." % (self.get_rdependees_str(item), item))
            bb.event.fire(bb.event.NoProvider(item, cfgData, runtime=True))
            raise bb.providers.NoRProvider(item)

        if len(eligible) > 1 and numberPreferred == 0:
            if item not in self.consider_msgs_cache:
                providers_list = []
                for fn in eligible:
                    providers_list.append(dataCache.pkg_fn[fn])
                bb.msg.note(2, bb.msg.domain.Provider, "multiple providers are available for runtime %s (%s);" % (item, ", ".join(providers_list)))
                bb.msg.note(2, bb.msg.domain.Provider, "consider defining a PREFERRED_PROVIDER entry to match runtime %s" % item)
                bb.event.fire(bb.event.MultipleProviders(item, providers_list, cfgData, runtime=True))
                self.consider_msgs_cache.append(item)

        if numberPreferred > 1:
            if item not in self.consider_msgs_cache:
                providers_list = []
                for fn in eligible:
                    providers_list.append(dataCache.pkg_fn[fn])
                bb.msg.note(2, bb.msg.domain.Provider, "multiple providers are available for runtime %s (top %s entries preferred) (%s);" % (item, numberPreferred, ", ".join(providers_list)))
                bb.msg.note(2, bb.msg.domain.Provider, "consider defining only one PREFERRED_PROVIDER entry to match runtime %s" % item)
                bb.event.fire(bb.event.MultipleProviders(item, providers_list, cfgData, runtime=True))
                self.consider_msgs_cache.append(item)

        # run through the list until we find one that we can build
        for fn in eligible:
            fnid = self.getfn_id(fn)
            if fnid in self.failed_fnids:
                continue
            bb.msg.debug(2, bb.msg.domain.Provider, "adding '%s' to satisfy runtime '%s'" % (fn, item))
            self.add_runtime_target(fn, item)
            self.add_tasks(fn, dataCache)

    def fail_fnid(self, fnid, missing_list = []):
        """
        Mark a file as failed (unbuildable)
        Remove any references from build and runtime provider lists

        missing_list, A list of missing requirements for this target
        """
        if fnid in self.failed_fnids:
            return
        if not missing_list:
            missing_list = [fnid]
        bb.msg.debug(1, bb.msg.domain.Provider, "File '%s' is unbuildable, removing..." % self.fn_index[fnid])
        self.failed_fnids.append(fnid)
        for target in self.build_targets:
            if fnid in self.build_targets[target]:
                self.build_targets[target].remove(fnid)
                if len(self.build_targets[target]) == 0:
                    self.remove_buildtarget(target, missing_list)
        for target in self.run_targets:
            if fnid in self.run_targets[target]:
                self.run_targets[target].remove(fnid)
                if len(self.run_targets[target]) == 0:
                    self.remove_runtarget(target, missing_list)

    def remove_buildtarget(self, targetid, missing_list = []):
        """
        Mark a build target as failed (unbuildable)
        Trigger removal of any files that have this as a dependency
        """
        if not missing_list:
            missing_list = [self.build_names_index[targetid]]
        else:
            missing_list = [self.build_names_index[targetid]] + missing_list
        bb.msg.note(2, bb.msg.domain.Provider, "Target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s" % (self.build_names_index[targetid], missing_list))
        self.failed_deps.append(targetid)
        dependees = self.get_dependees(targetid)
        for fnid in dependees:
            self.fail_fnid(fnid, missing_list)
        if self.abort and targetid in self.external_targets:
            bb.msg.error(bb.msg.domain.Provider, "Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s" % (self.build_names_index[targetid], missing_list))
            raise bb.providers.NoProvider

    def remove_runtarget(self, targetid, missing_list = []):
        """
        Mark a run target as failed (unbuildable)
        Trigger removal of any files that have this as a dependency
        """
        if not missing_list:
            missing_list = [self.run_names_index[targetid]]
        else:
            missing_list = [self.run_names_index[targetid]] + missing_list

        bb.msg.note(1, bb.msg.domain.Provider, "Runtime target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s" % (self.run_names_index[targetid], missing_list))
        self.failed_rdeps.append(targetid)
        dependees = self.get_rdependees(targetid)
        for fnid in dependees:
            self.fail_fnid(fnid, missing_list)

    def add_unresolved(self, cfgData, dataCache):
        """
        Resolve all unresolved build and runtime targets
        """
        bb.msg.note(1, bb.msg.domain.TaskData, "Resolving any missing task queue dependencies")
        while 1:
            added = 0
            for target in self.get_unresolved_build_targets(dataCache):
                try:
                    self.add_provider_internal(cfgData, dataCache, target)
                    added = added + 1
                except bb.providers.NoProvider:
                    targetid = self.getbuild_id(target)
                    if self.abort and targetid in self.external_targets:
                        bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (target, self.get_dependees_str(target)))
                        raise
                    self.remove_buildtarget(targetid)
            for target in self.get_unresolved_run_targets(dataCache):
                try:
                    self.add_rprovider(cfgData, dataCache, target)
                    added = added + 1
                except bb.providers.NoRProvider:
                    self.remove_runtarget(self.getrun_id(target))
            bb.msg.debug(1, bb.msg.domain.TaskData, "Resolved " + str(added) + " extra dependencies")
            if added == 0:
                break
        # self.dump_data()

    def dump_data(self):
        """
        Dump some debug information on the internal data structures
        """
        bb.msg.debug(3, bb.msg.domain.TaskData, "build_names:")
        bb.msg.debug(3, bb.msg.domain.TaskData, ", ".join(self.build_names_index))

        bb.msg.debug(3, bb.msg.domain.TaskData, "run_names:")
        bb.msg.debug(3, bb.msg.domain.TaskData, ", ".join(self.run_names_index))

        bb.msg.debug(3, bb.msg.domain.TaskData, "build_targets:")
        for buildid in range(len(self.build_names_index)):
            target = self.build_names_index[buildid]
            targets = "None"
            if buildid in self.build_targets:
                targets = self.build_targets[buildid]
            bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s: %s" % (buildid, target, targets))

        bb.msg.debug(3, bb.msg.domain.TaskData, "run_targets:")
        for runid in range(len(self.run_names_index)):
            target = self.run_names_index[runid]
            targets = "None"
            if runid in self.run_targets:
                targets = self.run_targets[runid]
            bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s: %s" % (runid, target, targets))

        bb.msg.debug(3, bb.msg.domain.TaskData, "tasks:")
        for task in range(len(self.tasks_name)):
            bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s - %s: %s" % (
                task,
                self.fn_index[self.tasks_fnid[task]],
                self.tasks_name[task],
                self.tasks_tdepends[task]))

        bb.msg.debug(3, bb.msg.domain.TaskData, "dependency ids (per fn):")
        for fnid in self.depids:
            bb.msg.debug(3, bb.msg.domain.TaskData, " %s %s: %s" % (fnid, self.fn_index[fnid], self.depids[fnid]))

        bb.msg.debug(3, bb.msg.domain.TaskData, "runtime dependency ids (per fn):")
        for fnid in self.rdepids:
            bb.msg.debug(3, bb.msg.domain.TaskData, " %s %s: %s" % (fnid, self.fn_index[fnid], self.rdepids[fnid]))
239 bitbake/lib/bb/utils.py (new file)
@@ -0,0 +1,239 @@
|
||||
# ex:ts=4:sw=4:sts=4:et
|
||||
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
|
||||
"""
|
||||
BitBake Utility Functions
|
||||
"""
|
||||
|
||||
# Copyright (C) 2004 Michael Lauer
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License version 2 as
|
||||
# published by the Free Software Foundation.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License along
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
|
||||
digits = "0123456789"
ascii_letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

import re, fcntl, os

def explode_version(s):
    r = []
    alpha_regexp = re.compile('^([a-zA-Z]+)(.*)$')
    numeric_regexp = re.compile('^(\d+)(.*)$')
    while (s != ''):
        if s[0] in digits:
            m = numeric_regexp.match(s)
            r.append(int(m.group(1)))
            s = m.group(2)
            continue
        if s[0] in ascii_letters:
            m = alpha_regexp.match(s)
            r.append(m.group(1))
            s = m.group(2)
            continue
        s = s[1:]
    return r

def vercmp_part(a, b):
    va = explode_version(a)
    vb = explode_version(b)
    while True:
        if va == []:
            ca = None
        else:
            ca = va.pop(0)
        if vb == []:
            cb = None
        else:
            cb = vb.pop(0)
        if ca == None and cb == None:
            return 0
        if ca > cb:
            return 1
        if ca < cb:
            return -1

def vercmp(ta, tb):
    (ea, va, ra) = ta
    (eb, vb, rb) = tb

    r = int(ea)-int(eb)
    if (r == 0):
        r = vercmp_part(va, vb)
    if (r == 0):
        r = vercmp_part(ra, rb)
    return r

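The version helpers above can be exercised standalone. The sketch below is a Python 3 adaptation, not the original code: the original is Python 2 and relies on default cross-type ordering (`None` below integers, integers below strings), which Python 3 forbids, so that ordering is made explicit here with a key function.

```python
import re

digits = "0123456789"
ascii_letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def explode_version(s):
    # Split a version string into alternating integer and alphabetic
    # chunks, discarding separators such as '.' or '-'.
    r = []
    alpha_regexp = re.compile(r'^([a-zA-Z]+)(.*)$')
    numeric_regexp = re.compile(r'^(\d+)(.*)$')
    while s != '':
        if s[0] in digits:
            m = numeric_regexp.match(s)
            r.append(int(m.group(1)))
            s = m.group(2)
            continue
        if s[0] in ascii_letters:
            m = alpha_regexp.match(s)
            r.append(m.group(1))
            s = m.group(2)
            continue
        s = s[1:]
    return r

def vercmp_part(a, b):
    # Python 2 compared None/int/str directly; emulate that ordering
    # explicitly: exhausted side < integer chunk < string chunk.
    def key(c):
        if c is None:
            return (0, 0, '')
        if isinstance(c, int):
            return (1, c, '')
        return (2, 0, c)
    va, vb = explode_version(a), explode_version(b)
    while va or vb:
        ca = va.pop(0) if va else None
        cb = vb.pop(0) if vb else None
        if key(ca) > key(cb):
            return 1
        if key(ca) < key(cb):
            return -1
    return 0

print(explode_version("1.2.7rc2"))   # [1, 2, 7, 'rc', 2]
print(vercmp_part("1.10", "1.9"))    # 1 -- chunks compare numerically, not lexically
print(vercmp_part("1.2", "1.2rc1"))  # -1 -- the shorter version sorts first
```

Note how "1.10" beats "1.9": chunks are compared as integers, which is the point of exploding the string rather than comparing it lexically.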
def explode_deps(s):
    """
    Take an RDEPENDS style string of format:
    "DEPEND1 (optional version) DEPEND2 (optional version) ..."
    and return a list of dependencies.
    Version information is ignored.
    """
    r = []
    l = s.split()
    flag = False
    for i in l:
        if i[0] == '(':
            flag = True
            j = []
        if flag:
            j.append(i)
        else:
            r.append(i)
        if flag and i.endswith(')'):
            flag = False
            # Ignore version
            #r[-1] += ' ' + ' '.join(j)
    return r


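explode_deps has no dependencies on the rest of bitbake, so it can be tried directly. This sketch (with added comments) shows the flag-driven scan consuming and discarding a parenthesised version constraint:

```python
def explode_deps(s):
    # Split an RDEPENDS-style string into dependency names, consuming
    # and discarding any "(...)" version constraints between them.
    r = []
    flag = False
    j = []
    for i in s.split():
        if i[0] == '(':
            flag = True
            j = []
        if flag:
            j.append(i)      # token belongs to a version constraint
        else:
            r.append(i)      # token is a dependency name
        if flag and i.endswith(')'):
            flag = False     # constraint closed; j is ignored
    return r

print(explode_deps("glibc (>= 2.5) dropbear update-rc.d"))
# ['glibc', 'dropbear', 'update-rc.d']
```

The collected constraint tokens in `j` are deliberately thrown away, matching the commented-out `#r[-1] += ...` line in the original.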
def _print_trace(body, line):
    """
    Print the Environment of a Text Body
    """
    import bb

    # print the environment of the method
    bb.msg.error(bb.msg.domain.Util, "Printing the environment of the function")
    min_line = max(1, line-4)
    max_line = min(line+4, len(body)-1)
    for i in range(min_line, max_line+1):
        bb.msg.error(bb.msg.domain.Util, "\t%.4d:%s" % (i, body[i-1]))


def better_compile(text, file, realfile):
    """
    A better compile method. This method
    will print the offending lines.
    """
    try:
        return compile(text, file, "exec")
    except Exception, e:
        import bb, sys

        # split the text into lines again
        body = text.split('\n')
        bb.msg.error(bb.msg.domain.Util, "Error in compiling: ", realfile)
        bb.msg.error(bb.msg.domain.Util, "The lines resulting into this error were:")
        bb.msg.error(bb.msg.domain.Util, "\t%d:%s:'%s'" % (e.lineno, e.__class__.__name__, body[e.lineno-1]))

        _print_trace(body, e.lineno)

        # exit now
        sys.exit(1)

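The reporting in better_compile can be tried without the bb module. The sketch below is an adaptation, not the original: it substitutes plain print() for bb.msg.error and re-raises the SyntaxError instead of calling sys.exit(1), so callers can observe the failure.

```python
def print_trace(body, line):
    # Show a few lines of context around the failing line,
    # mirroring _print_trace() above but with plain print().
    for i in range(max(1, line - 4), min(line + 4, len(body) - 1) + 1):
        print("\t%.4d:%s" % (i, body[i - 1]))

def better_compile(text, file, realfile):
    # Compile `text`, reporting the offending line on failure.
    # Unlike the original, this re-raises rather than exiting.
    try:
        return compile(text, file, "exec")
    except SyntaxError as e:
        body = text.split('\n')
        print("Error in compiling: %s" % realfile)
        print("\t%d:%s:'%s'" % (e.lineno, e.__class__.__name__, body[e.lineno - 1]))
        print_trace(body, e.lineno)
        raise

# A well-formed snippet compiles to a code object and runs.
code = better_compile("x = 1\ny = x + 1\n", "<snippet>", "snippet.py")
exec(code, {})
```

The value of keeping the raw text around is exactly this: `e.lineno` indexes into `body` so the user sees the offending metadata line, not just a traceback.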
def better_exec(code, context, text, realfile):
    """
    Similar to better_compile, better_exec will
    print the lines that are responsible for the
    error.
    """
    import bb, sys
    try:
        exec code in context
    except:
        (t, value, tb) = sys.exc_info()

        if t in [bb.parse.SkipPackage, bb.build.FuncFailed]:
            raise

        # print the Header of the Error Message
        bb.msg.error(bb.msg.domain.Util, "Error in executing: ", realfile)
        bb.msg.error(bb.msg.domain.Util, "Exception:%s Message:%s" % (t, value))

        # let us find the line number now
        while tb.tb_next:
            tb = tb.tb_next

        import traceback
        line = traceback.tb_lineno(tb)

        _print_trace(text.split('\n'), line)

        raise

def Enum(*names):
    """
    A simple class to give Enum support
    """

    assert names, "Empty enums are not supported"

    class EnumClass(object):
        __slots__ = names
        def __iter__(self):        return iter(constants)
        def __len__(self):         return len(constants)
        def __getitem__(self, i):  return constants[i]
        def __repr__(self):        return 'Enum' + str(names)
        def __str__(self):         return 'enum ' + str(constants)

    class EnumValue(object):
        __slots__ = ('__value')
        def __init__(self, value): self.__value = value
        Value = property(lambda self: self.__value)
        EnumType = property(lambda self: EnumType)
        def __hash__(self):        return hash(self.__value)
        def __cmp__(self, other):
            # C fans might want to remove the following assertion
            # to make all enums comparable by ordinal value {;))
            assert self.EnumType is other.EnumType, "Only values from the same enum are comparable"
            return cmp(self.__value, other.__value)
        def __invert__(self):      return constants[maximum - self.__value]
        def __nonzero__(self):     return bool(self.__value)
        def __repr__(self):        return str(names[self.__value])

    maximum = len(names) - 1
    constants = [None] * len(names)
    for i, each in enumerate(names):
        val = EnumValue(i)
        setattr(EnumClass, each, val)
        constants[i] = val
    constants = tuple(constants)
    EnumType = EnumClass()
    return EnumType

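The Enum factory above is Python 2 idiom (`cmp`, `__nonzero__`). A Python 3 sketch of the same pattern is shown below; the names and simplifications are illustrative, not BitBake API. Each name becomes an ordered, hashable singleton attribute of an anonymous class whose single instance is the enum:

```python
class _EnumValue(object):
    # One ordered, hashable singleton per enum name.
    __slots__ = ('value', 'name')
    def __init__(self, value, name):
        self.value = value
        self.name = name
    def __repr__(self):
        return self.name
    def __lt__(self, other):
        return self.value < other.value
    def __hash__(self):
        return hash(self.value)

def Enum(*names):
    # Python 3 sketch of the factory above: build the constants, hang
    # each one off an anonymous class, and return that class's instance.
    assert names, "Empty enums are not supported"
    constants = tuple(_EnumValue(i, n) for i, n in enumerate(names))

    class EnumClass(object):
        def __iter__(self):
            return iter(constants)
        def __len__(self):
            return len(constants)
        def __getitem__(self, i):
            return constants[i]

    for c in constants:
        setattr(EnumClass, c.name, c)
    return EnumClass()

domain = Enum("Build", "Cache", "Util")
print(domain.Util, len(domain))   # Util 3
```

The closure over `constants` is what makes iteration and indexing work on the returned instance; the standard-library `enum` module (added in Python 3.4) now covers this use case.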
def lockfile(name):
    """
    Use the file fn as a lock file, return when the lock has been acquired.
    Returns a variable to pass to unlockfile().
    """
    while True:
        # If we leave the lockfiles lying around there is no problem
        # but we should clean up after ourselves. This gives potential
        # for races though. To work around this, when we acquire the lock
        # we check the file we locked was still the lock file on disk
        # by comparing inode numbers. If they don't match or the lockfile
        # no longer exists, we start again.

        # This implementation is unfair since the last person to request the
        # lock is the most likely to win it.

        lf = open(name, "a+")
        fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
        statinfo = os.fstat(lf.fileno())
        if os.path.exists(lf.name):
            statinfo2 = os.stat(lf.name)
            if statinfo.st_ino == statinfo2.st_ino:
                return lf
        # File no longer exists or changed, retry
        lf.close()

def unlockfile(lf):
    """
    Unlock a file locked using lockfile()
    """
    os.unlink(lf.name)
    fcntl.flock(lf.fileno(), fcntl.LOCK_UN)
    lf.close()

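The locking pair above is self-contained and POSIX-only (fcntl). This runnable sketch applies the `close()` parentheses fix (the original `lf.close` was a no-op attribute access) and demonstrates the acquire/release cycle on a temporary path:

```python
import fcntl
import os
import tempfile

def lockfile(name):
    # Acquire an exclusive flock on `name`. After locking, re-check
    # that the file we hold is still the lock file on disk (by inode),
    # because unlockfile() deletes it, so another process may have
    # recreated it between our open() and flock().
    while True:
        lf = open(name, "a+")
        fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
        statinfo = os.fstat(lf.fileno())
        if os.path.exists(lf.name):
            statinfo2 = os.stat(lf.name)
            if statinfo.st_ino == statinfo2.st_ino:
                return lf
        lf.close()   # stale handle; retry

def unlockfile(lf):
    # Remove the lock file first so waiters see the inode change,
    # then release the flock and drop the handle.
    os.unlink(lf.name)
    fcntl.flock(lf.fileno(), fcntl.LOCK_UN)
    lf.close()

name = os.path.join(tempfile.mkdtemp(), "bitbake.lock")
lf = lockfile(name)
print(os.path.exists(name))   # True
unlockfile(lf)
print(os.path.exists(name))   # False
```

The inode comparison is the interesting design point: unlink-on-release means a waiter may hold a lock on an orphaned inode, and the re-check turns that into a retry rather than a silent double-grant.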
build/conf/local.conf.sample (new file, 137 lines)
@@ -0,0 +1,137 @@
# Where to cache the files Poky downloads
DL_DIR ?= "${OEROOT}/sources"
BBFILES = "${OEROOT}/meta/packages/*/*.bb"

# Poky has various extra metadata collections (openmoko, extras).
# To enable these, uncomment all (or some of) the following lines:
# BBFILES = "\
#   ${OEROOT}/meta/packages/*/*.bb \
#   ${OEROOT}/meta-extras/packages/*/*.bb \
#   ${OEROOT}/meta-openmoko/packages/*/*.bb \
#   "
# BBFILE_COLLECTIONS = "normal extras openmoko"
# BBFILE_PATTERN_normal = "^${OEROOT}/meta/"
# BBFILE_PATTERN_extras = "^${OEROOT}/meta-extras/"
# BBFILE_PATTERN_openmoko = "^${OEROOT}/meta-openmoko/"
# BBFILE_PRIORITY_normal = "5"
# BBFILE_PRIORITY_extras = "5"
# BBFILE_PRIORITY_openmoko = "5"

BBMASK = ""

# The machine to target
MACHINE ?= "qemuarm"

# Other supported machines
#MACHINE ?= "qemux86"
#MACHINE ?= "c7x0"
#MACHINE ?= "akita"
#MACHINE ?= "spitz"
#MACHINE ?= "nokia770"
#MACHINE ?= "nokia800"
#MACHINE ?= "fic-gta01"
#MACHINE ?= "bootcdx86"
#MACHINE ?= "cm-x270"
#MACHINE ?= "em-x270"
#MACHINE ?= "htcuniversal"
#MACHINE ?= "mx31ads"
#MACHINE ?= "mx31litekit"
#MACHINE ?= "mx31phy"
#MACHINE ?= "zylonite"

DISTRO ?= "poky"
# For bleeding edge / experimental / unstable package versions
# DISTRO ?= "poky-bleeding"

# EXTRA_IMAGE_FEATURES allows extra packages to be added to the generated images
# (Some of these are automatically added to certain image types)
# "dbg-pkgs"       - add -dbg packages for all installed packages
#                    (adds symbol information for debugging/profiling)
# "dev-pkgs"       - add -dev packages for all installed packages
#                    (useful if you want to develop against libs in the image)
# "tools-sdk"      - add development tools (gcc, make, pkgconfig etc.)
# "tools-debug"    - add debugging tools (gdb, strace)
# "tools-profile"  - add profiling tools (oprofile, exmap, lttng, valgrind (x86 only))
# "tools-testapps" - add useful testing tools (ts_print, aplay, arecord etc.)
# "debug-tweaks"   - make an image suitable for development,
#                    e.g. ssh root access has a blank password
# There are other application targets too, see meta/classes/poky-image.bbclass
# and meta/packages/tasks/task-poky.bb for more details.

EXTRA_IMAGE_FEATURES = "tools-debug tools-profile tools-testapps debug-tweaks"

# The default IMAGE_FEATURES above are too large for the mx31phy and
# c700/c750 machines which have limited space. The code below limits
# the default features for those machines.
EXTRA_IMAGE_FEATURES_c7x0 = "tools-testapps debug-tweaks"
EXTRA_IMAGE_FEATURES_mx31phy = "debug-tweaks"
EXTRA_IMAGE_FEATURES_mx31ads = "tools-testapps debug-tweaks"

# A list of packaging systems used in generated images
# The first package type listed will be used for rootfs generation
# include 'package_deb' for debs
# include 'package_ipk' for ipks
#PACKAGE_CLASSES ?= "package_deb package_ipk"
PACKAGE_CLASSES ?= "package_ipk"

# POKYMODE controls the characteristics of the generated packages/images by
# telling poky which type of toolchain to use.
#
# Options include several different EABI combinations and a compatibility
# mode for the OABI mode poky previously used.
#
# The default is "eabi"
# Use "oabi" for machines with kernels < 2.6.18 on ARM, for example.
# Use "external-MODE" to use the precompiled external toolchains, where MODE
# is the type of external toolchain to use, e.g. eabi.
# POKYMODE = "external-eabi"

# Uncomment this to specify where BitBake should create its temporary files.
# Note that a full build of everything in OpenEmbedded will take gigabytes of hard
# disk space, so make sure to free enough space. The default TMPDIR is
# <build directory>/tmp
TMPDIR = "${OEROOT}/build/tmp"

# Uncomment and set to allow bitbake to execute multiple tasks at once.
# Note, this option is currently experimental - YMMV.
# BB_NUMBER_THREADS = "1"
# Also, make can be passed flags so it runs parallel threads, e.g.:
# PARALLEL_MAKE = "-j 4"

# Uncomment this if you are using the OpenedHand provided qemu deb - see README
# ASSUME_PROVIDED += "qemu-native"

# Comment this out if you don't have a 3.x gcc version available and wish
# poky to build one for you. The 3.x gcc is required to build qemu-native.
ASSUME_PROVIDED += "gcc3-native"

# Uncomment these two if you want BitBake to build images useful for debugging.
# DEBUG_BUILD = "1"
# INHIBIT_PACKAGE_STRIP = "1"

# Uncomment these to build a package such that you can use gprof to profile it.
# NOTE: This will only work with 'linux' targets, not
#       'linux-uclibc', as uClibc doesn't provide the necessary
#       object files. Also, don't build glibc itself with these
#       flags, or it'll fail to build.
#
# PROFILE_OPTIMIZATION = "-pg"
# SELECTED_OPTIMIZATION = "${PROFILE_OPTIMIZATION}"
# LDFLAGS =+ "-pg"

# Uncomment this if you want BitBake to emit debugging output
# BBDEBUG = "yes"
# Uncomment this if you want BitBake to emit the log if a build fails.
BBINCLUDELOGS = "yes"

# Specifies a location to search for pre-generated tarballs when fetching
# a cvs:// URI. Uncomment this if you do not want to pull directly from CVS.
CVS_TARBALL_STASH = "http://folks.o-hand.com/~richard/poky/sources/"

# Set this if you wish to make pkgconfig libraries from your system available
# for native builds. Combined with extra ASSUME_PROVIDEDs this can allow
# native builds of applications like oprofileui-native (unsupported feature).
#EXTRA_NATIVE_PKGCONFIG_PATH = ":/usr/lib/pkgconfig"
#ASSUME_PROVIDED += "gtk+-native libglade-native"

ENABLE_BINARY_LOCALE_GENERATION = "1"
handbook/ChangeLog (new file, 38 lines)
@@ -0,0 +1,38 @@
2008-02-29  Matthew Allum  <mallum@openedhand.com>

	* development.xml:
	Disable images too big / lack context for now.
	* introduction.xml:
	Remove some OH specific stuff.
	* style.css:
	Remove limit on image size

2008-02-15  Matthew Allum  <mallum@openedhand.com>

	* introduction.xml:
	Minor tweaks to 'What is Poky'

2008-02-15  Matthew Allum  <mallum@openedhand.com>

	* poky-handbook.xml:
	* poky-handbook.png:
	* poky-beaver.png:
	* poky-logo.svg:
	* style.css:
	Add some title images.

2008-02-14  Matthew Allum  <mallum@openedhand.com>

	* development.xml:
	Remove URIs
	* style.css:
	Fix glossary

2008-02-06  Matthew Allum  <mallum@openedhand.com>

	* Makefile:
	Add various xsltproc options for html.
	* introduction.xml:
	Remove link in title.
	* style.css:
	Add initial version
handbook/Makefile (new file, 25 lines)
@@ -0,0 +1,25 @@
all: html pdf

pdf:
	poky-docbook-to-pdf poky-handbook.xml
# -- old way --
#	dblatex poky-handbook.xml

html:
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
	xsltproc --stringparam html.stylesheet style.css \
	         --stringparam chapter.autolabel 1 \
	         --stringparam appendix.autolabel 1 \
	         --stringparam section.autolabel 1 \
	         -o poky-handbook.html \
	         --xinclude /usr/share/xml/docbook/stylesheet/nwalsh/html/docbook.xsl \
	         poky-handbook.xml
# -- old way --
#	xmlto xhtml-nochunks poky-handbook.xml

tarball: html
	tar -cvzf poky-handbook.tgz poky-handbook.html style.css screenshots/ss-sato.png poky-beaver.png poky-handbook.png

validate:
	xmllint --postvalid --xinclude --noout poky-handbook.xml
handbook/TODO (new file, 10 lines)
@@ -0,0 +1,10 @@
Handbook Todo List:

* Document adding a new IMAGE_FEATURE to the customising images section
* Add instructions about using zaurus/openmoko emulation
* Add component overview/block diagrams
* The Software Development intro should mention it is software development for
  the intended target, which could be a different arch etc. and thus a special case.
* Expand insane.bbclass documentation to cover tests
* Document remaining classes (see list in ref-classes)

handbook/contactus.xml (new file, 30 lines)
@@ -0,0 +1,30 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<appendix id='contact'>
<title>OpenedHand Contact Information</title>

<literallayout>
OpenedHand Ltd
Unit R, Homesdale Business Center
216-218 Homesdale Rd
Bromley, BR1 2QZ
England
+44 (0) 208 819 6559
info@openedhand.com</literallayout>

<!-- Fop messes this up so we do like above
<address>
OpenedHand Ltd
Unit R, Homesdale Business Center
<street>216-218 Homesdale Rd</street>
<city>Bromley</city>, <postcode>BR1 2QZ</postcode>
<country>England</country>
<phone> +44 (0) 208 819 6559</phone>
<email>info@openedhand.com</email>
</address>
-->
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->
handbook/development.xml (new file, 853 lines)
@@ -0,0 +1,853 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<chapter id="platdev">
<title>Platform Development with Poky</title>

<section id="platdev-appdev">
<title>Software development</title>
<para>
Poky supports several methods of software development. These different
forms of development are explained below and can be switched
between as needed.
</para>

<section id="platdev-appdev-external-sdk">
<title>Developing externally using the Poky SDK</title>

<para>
The meta-toolchain and meta-toolchain-sdk targets (<link linkend='ref-images'>see
the images section</link>) build tarballs which contain toolchains and
libraries suitable for application development outside Poky. These unpack into the
<filename class="directory">/usr/local/poky</filename> directory and contain
a setup script, e.g.
<filename>/usr/local/poky/eabi-glibc/arm/environment-setup</filename>, which
can be sourced to initialise a suitable environment. After sourcing this, the
compiler, QEMU scripts, QEMU binary, a special version of pkgconfig and other
useful utilities are added to the PATH. Variables to assist pkgconfig and
autotools are also set so that, for example, configure can find pre-generated test
results for tests which need target hardware to run.
</para>

<para>
Using the toolchain with autotools-enabled packages is straightforward: just pass the
appropriate host option to configure, e.g. "./configure --host=arm-poky-linux-gnueabi".
For other projects it is usually a case of ensuring the cross tools are used, e.g.
CC=arm-poky-linux-gnueabi-gcc and LD=arm-poky-linux-gnueabi-ld.
</para>
</section>

<section id="platdev-appdev-external-anjuta">
<title>Developing externally using the Anjuta plugin</title>

<para>
An Anjuta IDE plugin exists to make developing software within the Poky framework
easier for the application developer. It presents a graphical IDE from which the
developer can cross-compile an application, then deploy and execute the output in a QEMU
emulation session. It also supports cross-debugging and profiling.
</para>
<!-- DISABLED, TOO BIG!
<screenshot>
<mediaobject>
<imageobject>
<imagedata fileref="screenshots/ss-anjuta-poky-1.png" format="PNG"/>
</imageobject>
<caption>
<para>The Anjuta Poky SDK plugin showing an active QEMU session running Sato</para>
</caption>
</mediaobject>
</screenshot>
-->
<para>
To use the plugin, a toolchain and SDK built by Poky is required, along with Anjuta and the Anjuta
plugin. The Poky Anjuta plugin is available from the OpenedHand SVN repository located at
http://svn.o-hand.com/repos/anjuta-poky/trunk/anjuta-plugin-sdk/; a web interface
to the repository can be accessed at <ulink url='http://svn.o-hand.com/view/anjuta-poky/'/>.
See the README file contained in the project for more information
about the dependencies and how to get them, along with details of
the prebuilt packages.
</para>

<section id="platdev-appdev-external-anjuta-setup">
<title>Setting up the Anjuta plugin</title>

<para>Extract the tarball for the toolchain into / as root. The
toolchain will be installed into
<filename class="directory">/usr/local/poky</filename>.</para>

<para>To use the plugin, first open an existing project or create a
new one. If creating a new project, the "C GTK+" project type
will allow itself to be cross-compiled. However, you should be
aware that this uses glade for the UI.</para>

<para>To activate the plugin go to
<menuchoice><guimenu>Edit</guimenu><guimenuitem>Preferences</guimenuitem></menuchoice>,
then choose <guilabel>General</guilabel> from the left hand side. Choose the
Installed plugins tab, scroll down to <guilabel>Poky
SDK</guilabel> and check the box. The plugin is now activated,
but before use it must be configured.</para>
</section>

<section id="platdev-appdev-external-anjuta-configuration">
<title>Configuring the Anjuta plugin</title>

<para>The configuration options for the SDK can be found by choosing
the <guilabel>Poky SDK</guilabel> icon from the left hand side. The following options
need to be set:</para>

<itemizedlist>

<listitem><para><guilabel>SDK root</guilabel>: this is the root directory of the SDK;
for an ARM EABI SDK this will be <filename
class="directory">/usr/local/poky/eabi-glibc/arm</filename>.
This directory will contain directories named like "bin",
"include", "var", etc. With the file chooser it is important
to enter into the "arm" subdirectory for this
example.</para></listitem>

<listitem><para><guilabel>Toolchain triplet</guilabel>: this is the cross compile
triplet, e.g. "arm-poky-linux-gnueabi".</para></listitem>

<listitem><para><guilabel>Kernel</guilabel>: use the file chooser to select the kernel
to use with QEMU</para></listitem>

<listitem><para><guilabel>Root filesystem</guilabel>: use the file chooser to select
the root filesystem image; this should be an image (not a
tarball)</para></listitem>
</itemizedlist>
<!-- DISABLED, TOO BIG!
<screenshot>
<mediaobject>
<imageobject>
<imagedata fileref="screenshots/ss-anjuta-poky-2.png" format="PNG"/>
</imageobject>
<caption>
<para>Anjuta Preferences Dialog</para>
</caption>
</mediaobject>
</screenshot>
-->

</section>

<section id="platdev-appdev-external-anjuta-usage">
<title>Using the Anjuta plugin</title>

<para>As an example, we will cross-compile a project, deploy it into
QEMU, run a debugger against it and then do a system-wide
profile.</para>

<para>Choose <menuchoice><guimenu>Build</guimenu><guimenuitem>Run
Configure</guimenuitem></menuchoice> or
<menuchoice><guimenu>Build</guimenu><guimenuitem>Run
Autogenerate</guimenuitem></menuchoice> to run "configure"
(or to run "autogen") for the project. This passes command line
arguments to instruct it to cross-compile.</para>

<para>Next do
<menuchoice><guimenu>Build</guimenu><guimenuitem>Build
Project</guimenuitem></menuchoice> to build and compile the
project. If you have previously built the project in the same
tree without using the cross-compiler you may find that your
project fails to link. Simply do
<menuchoice><guimenu>Build</guimenu><guimenuitem>Clean
Project</guimenuitem></menuchoice> to remove the old
binaries. You may then try building again.</para>

<para>Next start QEMU by using
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Start
QEMU</guimenuitem></menuchoice>. This will start QEMU and
will show any error messages in the message view. Once Poky has
fully booted within QEMU you may deploy into it.</para>

<para>Once built and QEMU is running, choose
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Deploy</guimenuitem></menuchoice>.
This will install the package into a temporary directory and
then copy it using rsync over SSH into the target. Progress and
messages will be shown in the message view.</para>

<para>To debug a program installed onto the target choose
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Debug
remote</guimenuitem></menuchoice>. This prompts for the
local binary to debug and also the command line to run on the
target. The command line to run should include the full path to
the binary installed on the target. This will start a
gdbserver over SSH on the target and also an instance of a
cross-gdb in a local terminal. This will be preloaded to connect
to the server and use the <guilabel>SDK root</guilabel> to find
symbols. This gdb will connect to the target and load in
various libraries and the target program. You should set up any
breakpoints or watchpoints now since you might not be able to
interrupt the execution later. You may stop
the debugger on the target using
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Stop
debugger</guimenuitem></menuchoice>.</para>

<para>It is also possible to execute a command on the target over
SSH; the appropriate environment will be set for the
execution. Choose
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Run
remote</guimenuitem></menuchoice> to do this. This will open
a terminal with the SSH command inside.</para>

<para>To do a system-wide profile against the system running in
QEMU choose
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Profile
remote</guimenuitem></menuchoice>. This will start up
OProfileUI with the appropriate parameters to connect to the
server running inside QEMU and will also supply the path to the
debug information necessary to get a useful profile.</para>

</section>
</section>

<section id="platdev-appdev-qemu">
<title>Developing externally in QEMU</title>
<para>
Running Poky QEMU images is covered in the <link
linkend='intro-quickstart-qemu'>Running an Image</link> section.
</para>
<para>
Poky's QEMU images contain a complete native toolchain. This means
that applications can be developed within QEMU in the same way as on a
normal system. Using qemux86 on an x86 machine is fast since the
guest and host architectures match; qemuarm is slower but gives
faithful emulation of ARM-specific issues. To speed things up, these
images support using distcc to call a cross-compiler outside the
emulated system too. If <command>runqemu</command> was used to start
QEMU, and distccd is present on the host system, any bitbake cross-compiling
toolchain available from the build system will automatically
be used from within QEMU simply by calling distcc
(<command>export CC="distcc"</command> can be set in the environment).
Alternatively, if a suitable SDK/toolchain is present in
<filename class="directory">/usr/local/poky</filename> it will also
automatically be used.
</para>

<para>
There are several options for connecting into the emulated system.
QEMU provides a framebuffer interface which has standard consoles
available. There is also a serial connection available which has a
console to the system running on it and IP networking as standard.
The images have a dropbear ssh server running with the root password
disabled, allowing standard ssh and scp commands to work. The images
also contain an NFS server exporting the guest's root filesystem,
allowing that to be made available to the host.
</para>
</section>

<section id="platdev-appdev-chroot">
<title>Developing externally in a chroot</title>
<para>
If you have a system that matches the architecture of the Poky machine you're using,
such as qemux86, you can run binaries directly from the image on the host system
using a chroot combined with tools like <ulink url='http://projects.o-hand.com/xephyr'>Xephyr</ulink>.
</para>
<para>
Poky has some scripts to make using its qemux86 images within a chroot easier. To use
these you need to install the poky-scripts package or otherwise obtain the
<filename>poky-chroot-setup</filename> and <filename>poky-chroot-run</filename> scripts.
You also need Xephyr and chrootuid binaries available. To initialize a system use the setup script:
</para>
<para>
<literallayout class='monospaced'>
# poky-chroot-setup &lt;qemux86-rootfs.tgz&gt; &lt;target-directory&gt;
</literallayout>
</para>
<para>
which will unpack the specified qemux86 rootfs tarball into the target-directory.
You can then start the system with:
</para>
<para>
<literallayout class='monospaced'>
# poky-chroot-run &lt;target-directory&gt; &lt;command&gt;
</literallayout>
</para>
<para>
where the target-directory is the place the rootfs was unpacked to and command is
an optional command to run. If no command is specified, the system will drop you
into a bash shell. A Xephyr window will be displayed containing the emulated
system and you may be asked for a password, since some of the commands used for
bind-mounting directories need to be run using sudo.
</para>
<para>
There are limits as to how far the realism of the chroot environment extends.
It is useful for simple development work or quick tests, but full system emulation
with QEMU offers a much more realistic environment for more complex development
tasks. Note that chroot support within Poky is still experimental.
</para>
</section>

<section id="platdev-appdev-insitu">
<title>Developing in Poky directly</title>
<para>
Working directly in Poky is a fast and effective development technique.
The idea is that you can directly edit files in
<glossterm><link linkend='var-WORKDIR'>WORKDIR</link></glossterm>
or the source directory <glossterm><link linkend='var-S'>S</link></glossterm>
and then force specific tasks to rerun in order to test the changes.
An example session working on the matchbox-desktop package might
look like this:
</para>

<para>
<literallayout class='monospaced'>
$ bitbake matchbox-desktop
$ sh
$ cd tmp/work/armv5te-poky-linux-gnueabi/matchbox-desktop-2.0+svnr1708-r0/
$ cd matchbox-desktop-2
$ vi src/main.c
$ exit
$ bitbake matchbox-desktop -c compile -f
$ bitbake matchbox-desktop
</literallayout>
</para>

<para>
Here, we build the package, change into the work directory for the package,
change a file, then recompile the package. Instead of using sh like this,
you can also use two different terminals. The risk with working like this
is that a command like unpack could wipe out the changes you've made to the
work directory, so you need to work carefully.
</para>

<para>
When making changes directly to the work directory files, it is useful to do
so using quilt, as detailed in the <link linkend='usingpoky-modifying-packages-quilt'>
modifying packages with quilt</link> section. The resulting patches can be copied
into the recipe directory and used directly in the <glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm>.
</para>
<para>
For a review of the skills used in this section, see Sections <link
linkend="usingpoky-components-bitbake">2.1.1</link> and <link
linkend="usingpoky-debugging-taskrunning">2.4.2</link>.
</para>

</section>

<section id="platdev-appdev-devshell">
<title>Developing with 'devshell'</title>

<para>
When debugging certain commands, or even just to edit packages, the
'devshell' can be a useful tool. To start it, run a command like:
</para>

<para>
<literallayout class='monospaced'>
$ bitbake matchbox-desktop -c devshell
</literallayout>
</para>

<para>
This will open a terminal with a shell prompt within the Poky
environment. This means PATH is set up to include the cross toolchain,
the pkgconfig variables are set up to find the right .pc files,
configure will be able to find the Poky site files, and so on. Within this
environment, you can run configure or compile commands as if they
were being run by Poky itself. You are also changed into the
source (<glossterm><link linkend='var-S'>S</link></glossterm>)
directory automatically. When finished with the shell, just exit it
or close the terminal window.
</para>

<para>
The default shell used by devshell is gnome-terminal. Other
forms of terminal can also be used by setting the <glossterm>
<link linkend='var-TERMCMD'>TERMCMD</link></glossterm> and <glossterm>
<link linkend='var-TERMCMDRUN'>TERMCMDRUN</link></glossterm> variables
in local.conf. For examples of the other options available, see
<filename>meta/conf/bitbake.conf</filename>. An external shell is
launched rather than opening directly into the original terminal
window to make interaction with bitbake's multiple threads easier,
and also to allow a client/server split of bitbake in the future
(devshell will still work over X11 forwarding or similar).
</para>

<para>
It is worth remembering that inside devshell you need to use the full
compiler name, such as <command>arm-poky-linux-gnueabi-gcc</command>,
instead of just <command>gcc</command>, and the same applies to other
tools from gcc, binutils, libtool and so on. Poky will have set up
environment variables such as CC to help applications, such as make,
find the correct tools.
</para>
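
<para>
As a sketch of what this looks like in practice, a manual compile inside
devshell can use the CC variable rather than spelling out the full
compiler name (the file names here are purely illustrative):
</para>
<para>
<literallayout class='monospaced'>
$ echo $CC
$ $CC $CFLAGS -o helloworld helloworld.c
</literallayout>
</para>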

</section>

<section id="platdev-appdev-srcrev">
<title>Developing within Poky with an external SCM based package</title>

<para>
If you're working on a recipe which pulls from an external SCM, it
is possible to have Poky notice new changes added to the
SCM and then build the latest version. This only works for SCMs
where it's possible to get a sensible revision number for changes.
Currently it works for svn, git and bzr repositories.
</para>

<para>
To enable this behaviour, simply add <glossterm>
<link linkend='var-SRCREV'>SRCREV</link></glossterm>_pn-<glossterm>
<link linkend='var-PN'>PN</link></glossterm> = "${AUTOREV}" to
local.conf, where <glossterm><link linkend='var-PN'>PN</link></glossterm>
is the name of the package for which you want to enable automatic source
revision updating.
</para>
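
<para>
For example, to always build the latest revision of the matchbox-desktop
package (used here purely as an illustration), local.conf would gain a
line such as:
</para>
<programlisting>
SRCREV_pn-matchbox-desktop = "${AUTOREV}"
</programlisting>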
</section>

</section>

<section id="platdev-gdb-remotedebug">
<title>Debugging with GDB Remotely</title>

<para>
<ulink url="http://sourceware.org/gdb/">GDB</ulink> (The GNU Project Debugger)
allows you to examine running programs to understand and fix problems, and
also to perform postmortem style analysis of program crashes. It is available
as a package within Poky and is installed by default in sdk images. It works best
when -dbg packages for the application being debugged are installed, as the
extra symbols give more meaningful output from GDB.
</para>

<para>
Sometimes, due to memory or disk space constraints, it is not possible
to use GDB directly on the remote target to debug applications. This is
because GDB needs to load the debugging information and the binaries of the
process being debugged, and then perform many
computations to locate information such as function names, variable
names and values, stack traces, etc. even before starting the debugging
process. This places load on the target system and can alter the
characteristics of the program being debugged.
</para>
<para>
This is where GDBSERVER comes into play, as it runs on the remote target
and does not load any debugging information from the debugged process.
Instead, the debugging information processing is done by a GDB instance
running on a distant computer - the host GDB. The host GDB then sends
control commands to GDBSERVER to make it stop or start the debugged
program, as well as read or write some memory regions of that debugged
program. All the debugging information loading and processing, as well
as the heavy debugging duty, is done by the host GDB, allowing the
GDBSERVER running on the target to remain small and fast.
</para>
<para>
As the host GDB is responsible for loading the debugging information and
doing the necessary processing to make actual debugging happen, the
user has to make sure it can access the unstripped binaries, complete
with their debugging information and compiled with no optimisations. The
host GDB must also have local access to all the libraries used by the
debugged program. On the remote target the binaries can remain stripped,
as GDBSERVER does not need any debugging information there. However, they
must be compiled without optimisation, matching the host's binaries.
</para>

<para>
The binary being debugged on the remote target machine is hence referred
to as the 'inferior', in keeping with GDB documentation and terminology.
Further documentation on GDB is available
<ulink url="http://sourceware.org/gdb/documentation/">on their site</ulink>.
</para>

<section id="platdev-gdb-remotedebug-launch-gdbserver">
<title>Launching GDBSERVER on the target</title>
<para>
First, make sure gdbserver is installed on the target. If not,
install the gdbserver package (which needs the libthread-db1
package).
</para>
<para>
To launch GDBSERVER on the target and make it ready to "debug" a
program located at <emphasis>/path/to/inferior</emphasis>, connect
to the target and launch:
<programlisting>$ gdbserver localhost:2345 /path/to/inferior</programlisting>
After that, gdbserver should be listening on port 2345 for debugging
commands coming from a remote GDB process running on the host computer.
Communication between GDBSERVER and the host GDB is done using
TCP. To use other communication protocols, please refer to the
GDBSERVER documentation.
</para>
</section>

<section id="platdev-gdb-remotedebug-launch-gdb">
<title>Launching GDB on the host computer</title>

<para>
Running GDB on the host computer takes a number of stages, described in the
following sections.
</para>

<section id="platdev-gdb-remotedebug-launch-gdb-buildcross">
<title>Build the cross GDB package</title>
<para>
A suitable gdb cross binary is required which runs on your host computer but
knows about the ABI of the remote target. This can be obtained from
the Poky toolchain, e.g.
<filename>/usr/local/poky/eabi-glibc/arm/bin/arm-poky-linux-gnueabi-gdb</filename>
where "arm" is the target architecture and "linux-gnueabi" the target ABI.
</para>

<para>
Alternatively this can be built directly by Poky. To do this you would build
the gdb-cross package, so for example you would run:
<programlisting>bitbake gdb-cross</programlisting>
Once built, the cross gdb binary can be found at
<programlisting>tmp/cross/bin/&lt;target-abi&gt;-gdb</programlisting>
</para>

</section>
<section id="platdev-gdb-remotedebug-launch-gdb-inferiorbins">

<title>Making the inferior binaries available</title>

<para>
The inferior binary needs to be available to GDB complete with all debugging
symbols in order to get the best possible results, along with any libraries
the inferior depends on and their debugging symbols. There are a number of
ways this can be done.
</para>

<para>
Perhaps the easiest is to have an 'sdk' image corresponding to the plain
image installed on the device. In the case of 'poky-image-sato',
'poky-image-sdk' would contain suitable symbols. The sdk images already
have the debugging symbols installed, so it's just a question of expanding the
archive to some location and telling GDB where this is.
</para>

<para>
Alternatively, Poky can build a custom directory of files for a specific
debugging purpose by reusing its tmp/rootfs directory on the host computer
in a slightly different way to normal. This directory contains the contents
of the last built image. This process assumes the image running on the
target was the last image to be built by Poky, and that the package <emphasis>foo</emphasis>
containing the inferior binary to be debugged has been built without
optimisation and has debugging information available.
</para>
<para>
First, install the <emphasis>foo</emphasis> package to tmp/rootfs
by running:
</para>
<programlisting>tmp/staging/i686-linux/usr/bin/ipkg-cl -f \
tmp/work/&lt;target-abi&gt;/poky-image-sato-1.0-r0/temp/ipkg.conf -o \
tmp/rootfs/ update</programlisting>
<para>
then:
</para>
<programlisting>tmp/staging/i686-linux/usr/bin/ipkg-cl -f \
tmp/work/&lt;target-abi&gt;/poky-image-sato-1.0-r0/temp/ipkg.conf \
-o tmp/rootfs install foo

tmp/staging/i686-linux/usr/bin/ipkg-cl -f \
tmp/work/&lt;target-abi&gt;/poky-image-sato-1.0-r0/temp/ipkg.conf \
-o tmp/rootfs install foo-dbg</programlisting>
<para>
which installs the debugging information too.
</para>

</section>
<section id="platdev-gdb-remotedebug-launch-gdb-launchhost">

<title>Launch the host GDB</title>
<para>
To launch the host GDB, run the cross gdb binary identified above with
the inferior binary specified on the command line:
<programlisting>&lt;target-abi&gt;-gdb rootfs/usr/bin/foo</programlisting>
This loads the binary of program <emphasis>foo</emphasis>
as well as its debugging information. Once the gdb prompt
appears, you must instruct GDB to load all the libraries
of the inferior from tmp/rootfs:
<programlisting>set solib-absolute-prefix /path/to/tmp/rootfs</programlisting>
where <filename>/path/to/tmp/rootfs</filename> must be
the absolute path to <filename>tmp/rootfs</filename> or wherever the
binaries with debugging information are located.
</para>
<para>
Now, tell GDB to connect to the GDBSERVER running on the remote target:
<programlisting>target remote remote-target-ip-address:2345</programlisting>
where remote-target-ip-address is the IP address of the
remote target where the GDBSERVER is running, and 2345 is the
port on which the GDBSERVER is listening.
</para>

</section>
<section id="platdev-gdb-remotedebug-launch-gdb-using">

<title>Using the Debugger</title>
<para>
Debugging can now proceed as normal, as if the debugging were being done on the
local machine. For example, to tell GDB to break in the <emphasis>main</emphasis>
function:
<programlisting>break main</programlisting>
and then to tell GDB to "continue" the inferior execution:
<programlisting>continue</programlisting>
</para>
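
<para>
Once a breakpoint is hit, the usual GDB commands work over the remote
connection in the same way, for example:
</para>
<programlisting>backtrace
info locals
step</programlisting>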
<para>
For more information about using GDB please see the
project's online documentation at <ulink
url="http://sourceware.org/gdb/download/onlinedocs/"/>.
</para>
</section>
</section>

</section>

<section id="platdev-oprofile">
<title>Profiling with OProfile</title>

<para>
<ulink url="http://oprofile.sourceforge.net/">OProfile</ulink> is a
statistical profiler well suited to finding performance
bottlenecks in both userspace software and the kernel. It provides
answers to questions like "Which functions does my application spend
the most time in when doing X?". Poky is well integrated with OProfile
to make profiling applications on target hardware straightforward.
</para>

<para>
To use OProfile you need an image with OProfile installed. The easiest
way to do this is with "tools-profile" in <glossterm><link
linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm>. You also
need debugging symbols to be available on the system where the analysis
will take place. This can be achieved with "dbg-pkgs" in <glossterm><link
linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm> or by
installing the appropriate -dbg packages. For
successful call graph analysis the binaries must preserve the frame
pointer register and hence should be compiled with the
"-fno-omit-frame-pointer" flag. In Poky this can be achieved with
<glossterm><link linkend='var-SELECTED_OPTIMIZATION'>SELECTED_OPTIMIZATION
</link></glossterm> = "-fexpensive-optimizations -fno-omit-frame-pointer
-frename-registers -O2" or by setting <glossterm><link
linkend='var-DEBUG_BUILD'>DEBUG_BUILD</link></glossterm> = "1" in
local.conf (the latter will also add extra debug information, making the
debug packages large).
</para>
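
<para>
Putting these options together, a local.conf prepared for profiling might
contain lines such as the following (a sketch only; adjust to suit your
image and optimisation requirements):
</para>
<programlisting>
IMAGE_FEATURES += "tools-profile dbg-pkgs"
DEBUG_BUILD = "1"
</programlisting>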

<section id="platdev-oprofile-target">
<title>Profiling on the target</title>

<para>
All the profiling work can be performed on the target device. A
simple OProfile session might look like:
</para>

<para>
<literallayout class='monospaced'>
# opcontrol --reset
# opcontrol --start --separate=lib --no-vmlinux -c 5
[do whatever is being profiled]
# opcontrol --stop
$ opreport -cl
</literallayout>
</para>

<para>
Here, the reset command clears any previously profiled data and
OProfile is then started. The options used to start OProfile mean
dynamic library data is kept separately per application, kernel
profiling is disabled and callgraphing is enabled up to 5 levels
deep. To profile the kernel, you would specify the
<parameter>--vmlinux=/path/to/vmlinux</parameter> option (the vmlinux file is usually in
<filename class="directory">/boot/</filename> in Poky and must match the running kernel). The profile is
then stopped and the results viewed with opreport with options
to see the separate library symbols and callgraph information.
</para>
<para>
Callgraphing means OProfile not only logs information about which
functions time is being spent in, but also which functions
called those functions (their parents) and which functions that
function calls (its children). The higher the callgraphing depth,
the more accurate the results, but this also increases the logging
overhead, so it should be used with caution. On ARM, binaries need
to have the frame pointer enabled for callgraphing to work (compile
with the gcc option -fno-omit-frame-pointer).
</para>
<para>
For more information on using OProfile please see the OProfile
online documentation at <ulink
url="http://oprofile.sourceforge.net/docs/"/>.
</para>
</section>

<section id="platdev-oprofile-oprofileui">
<title>Using OProfileUI</title>

<para>
A graphical user interface for OProfile is also available. You can
either use prebuilt Debian packages from the <ulink
url='http://debian.o-hand.com/'>OpenedHand repository</ulink> or
download and build from svn at
http://svn.o-hand.com/repos/oprofileui/trunk/. If the
"tools-profile" image feature is selected, all necessary binaries
are installed onto the target device for OProfileUI interaction.
</para>

<!-- DISABLED, need a more 'contextual' shot?
<screenshot>
<mediaobject>
<imageobject>
<imagedata fileref="screenshots/ss-oprofile-viewer.png" format="PNG"/>
</imageobject>
<caption>
<para>OProfileUI Viewer showing an application being profiled on a remote device</para>
</caption>
</mediaobject>
</screenshot>
-->
<para>
In order to convert the data in the sample format from the target
to the host, the <filename>opimport</filename> program is needed.
This is not included in standard Debian OProfile packages, but an
OProfile package with this addition is also available from the <ulink
url='http://debian.o-hand.com/'>OpenedHand repository</ulink>.
We recommend using OProfile 0.9.3 or greater. Other patches to
OProfile may be needed for recent OProfileUI features, but Poky
usually includes all needed patches on the target device. Please
see the <ulink
url='http://svn.o-hand.com/repos/oprofileui/trunk/README'>
OProfileUI README</ulink> for up to date information, and the
<ulink url="http://labs.o-hand.com/oprofileui">OProfileUI website
</ulink> for more information on the OProfileUI project.
</para>

<section id="platdev-oprofile-oprofileui-online">
<title>Online mode</title>

<para>
This assumes a working network connection with the target
hardware. In this case you just need to run <command>
"oprofile-server"</command> on the device. By default it listens
on port 4224. This can be changed with the <parameter>--port</parameter> command line
option.
</para>

<para>
The client program is called <command>oprofile-viewer</command>. The
UI is relatively straightforward: the key functionality is accessed
through the buttons on the toolbar (which are duplicated in the
menus). These buttons are:
</para>

<itemizedlist>
<listitem>
<para>
Connect - connect to the remote host; the IP address or hostname for the
target can be supplied here.
</para>
</listitem>
<listitem>
<para>
Disconnect - disconnect from the target.
</para>
</listitem>
<listitem>
<para>
Start - start the profiling on the device.
</para>
</listitem>
<listitem>
<para>
Stop - stop the profiling on the device and download the data to the local
host. This will generate the profile and show it in the viewer.
</para>
</listitem>
<listitem>
<para>
Download - download the data from the target, generate the profile and show it
in the viewer.
</para>
</listitem>
<listitem>
<para>
Reset - reset the sample data on the device. This will remove the sample
information that was collected on a previous sampling run. Ensure you do this
if you do not want to include old sample information.
</para>
</listitem>
<listitem>
<para>
Save - save the data downloaded from the target to another directory for later
examination.
</para>
</listitem>
<listitem>
<para>
Open - load data that was previously saved.
</para>
</listitem>
</itemizedlist>

<para>
The behaviour of the client is to download the complete 'profile archive' from
the target to the host for processing. This archive is a directory containing
the sample data, the object files and the debug information for said object
files. This archive is then converted using a script included in this
distribution ('oparchconv') that uses 'opimport' to convert the archive from
the target into something that can be processed on the host.
</para>

<para>
Downloaded archives are kept in /tmp and cleared up when they are no longer in
use.
</para>

<para>
It is possible to profile into the kernel; you just need to ensure
a vmlinux file matching the running kernel is available. In Poky this is usually
located in /boot/vmlinux-KERNELVERSION, where KERNELVERSION is the version of
the kernel, e.g. 2.6.23. Poky generates separate vmlinux packages for each kernel
it builds, so it should just be a question of ensuring a matching package is
installed (<command>ipkg install kernel-vmlinux</command>). These are automatically
installed into development and profiling images alongside OProfile. There is a
configuration option within the OProfileUI settings page where the location of
the vmlinux file can be entered.
</para>

<para>
Waiting for debug symbols to transfer from the device can be slow, and it's not
always necessary to actually have them on the device for OProfile use. All that is
needed is a copy of the filesystem with the debug symbols present on the viewer
system. The <link linkend='platdev-gdb-remotedebug-launch-gdb'>GDB remote debug
section</link> covers how to create such a directory with Poky, and the location
of this directory can again be specified in the OProfileUI settings dialog. If
specified, it will be used where the file checksums match those on the system
being profiled.
</para>
</section>
<section id="platdev-oprofile-oprofileui-offline">
<title>Offline mode</title>

<para>
If no network access to the target is available, an archive for processing in
'oprofile-viewer' can be generated with the following set of commands:
</para>

<para>
<literallayout class='monospaced'>
# opcontrol --reset
# opcontrol --start --separate=lib --no-vmlinux -c 5
[do whatever is being profiled]
# opcontrol --stop
# oparchive -o my_archive
</literallayout>
</para>

<para>
Here, my_archive is the name of the archive directory where you would like the
profile archive to be kept. The directory will be created for you. This can
then be copied to another host and loaded using the open
functionality of 'oprofile-viewer'. The archive will be converted if necessary.
</para>
</section>
</section>
</section>

</chapter>
<!--
vim: expandtab tw=80 ts=4
-->
7
handbook/examples/hello-autotools/hello_2.3.bb
Normal file
@@ -0,0 +1,7 @@
DESCRIPTION = "GNU Helloworld application"
SECTION = "examples"
LICENSE = "GPLv3"

SRC_URI = "${GNU_MIRROR}/hello/hello-${PV}.tar.bz2"

inherit autotools
8
handbook/examples/hello-single/files/helloworld.c
Normal file
@@ -0,0 +1,8 @@
#include <stdio.h>

int main(void)
{
    printf("Hello world!\n");

    return 0;
}
16
handbook/examples/hello-single/hello.bb
Normal file
@@ -0,0 +1,16 @@
DESCRIPTION = "Simple helloworld application"
SECTION = "examples"
LICENSE = "MIT"

SRC_URI = "file://helloworld.c"

S = "${WORKDIR}"

do_compile() {
    ${CC} helloworld.c -o helloworld
}

do_install() {
    install -d ${D}${bindir}
    install -m 0755 helloworld ${D}${bindir}
}
13
handbook/examples/libxpm/libxpm_3.5.6.bb
Normal file
@@ -0,0 +1,13 @@
require xorg-lib-common.inc

DESCRIPTION = "X11 Pixmap library"
LICENSE = "X-BSD"
DEPENDS += "libxext"
PR = "r2"
PE = "1"

XORG_PN = "libXpm"

PACKAGES =+ "sxpm cxpm"
FILES_cxpm = "${bindir}/cxpm"
FILES_sxpm = "${bindir}/sxpm"
13
handbook/examples/mtd-makefile/mtd-utils_1.0.0.bb
Normal file
@@ -0,0 +1,13 @@
DESCRIPTION = "Tools for managing memory technology devices."
SECTION = "base"
DEPENDS = "zlib"
HOMEPAGE = "http://www.linux-mtd.infradead.org/"
LICENSE = "GPLv2"

SRC_URI = "ftp://ftp.infradead.org/pub/mtd-utils/mtd-utils-${PV}.tar.gz"

CFLAGS_prepend = "-I ${S}/include "

do_install() {
    oe_runmake install DESTDIR=${D}
}
726
handbook/extendpoky.xml
Normal file
@@ -0,0 +1,726 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<chapter id='extendpoky'>

<title>Extending Poky</title>

<para>
This section gives information about how to extend the functionality
already present in Poky, documenting standard tasks such as adding new
software packages, extending or customising images or porting Poky to
new hardware (adding a new machine). It also contains advice about how
to manage the process of making changes to Poky to achieve best results.
</para>

<section id='usingpoky-extend-addpkg'>
<title>Adding a Package</title>

<para>
To add a package to Poky you need to write a recipe for it.
Writing a recipe means creating a .bb file which sets various
variables. The variables
useful for recipes are detailed in the <link linkend='ref-varlocality-recipe-required'>
recipe reference</link> section, along with more detailed information
about issues such as recipe naming.
</para>

<para>
The simplest way to add a new package is to base it on a similar
pre-existing recipe. There are some examples below of how to add
standard types of packages:
</para>

<section id='usingpoky-extend-addpkg-singlec'>
<title>Single .c File Package (Hello World!)</title>

<para>
To build an application from a single file stored locally requires a
recipe which has the file listed in the <glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm> variable. In addition,
the <function>do_compile</function> and <function>do_install</function>
tasks need to be written manually. The <glossterm><link linkend='var-S'>
S</link></glossterm> variable defines the directory containing the source
code, which in this case is set equal to <glossterm><link linkend='var-WORKDIR'>
WORKDIR</link></glossterm>, the directory BitBake uses for the build.
</para>
<programlisting>
DESCRIPTION = "Simple helloworld application"
SECTION = "examples"
LICENSE = "MIT"

SRC_URI = "file://helloworld.c"

S = "${WORKDIR}"

do_compile() {
    ${CC} helloworld.c -o helloworld
}

do_install() {
    install -d ${D}${bindir}
    install -m 0755 helloworld ${D}${bindir}
}
</programlisting>

<para>
As a result of the build process, "helloworld" and "helloworld-dbg"
packages will be built.
</para>
</section>

<section id='usingpoky-extend-addpkg-autotools'>
<title>Autotooled Package</title>

<para>
Applications which use autotools (autoconf, automake)
require a recipe which has a source archive listed in
<glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm> and an
<command>inherit autotools</command> line to instruct BitBake to use the
<filename>autotools.bbclass</filename>, which has
definitions of all the steps
needed to build an autotooled application.
The result of the build will be automatically packaged, and if
the application uses NLS to localise, then packages with
locale information will be generated (one package per
language).
</para>

<programlisting>
DESCRIPTION = "GNU Helloworld application"
SECTION = "examples"
LICENSE = "GPLv2"

SRC_URI = "${GNU_MIRROR}/hello/hello-${PV}.tar.bz2"

inherit autotools
</programlisting>

</section>

<section id='usingpoky-extend-addpkg-makefile'>
<title>Makefile-Based Package</title>

<para>
Applications which use GNU make require a recipe which has
the source archive listed in <glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm>.
Adding a <function>do_compile</function> step
is not needed, as by default BitBake will start the "make"
command to compile the application. If there is a need for
additional options to make, then they should be stored in the
<glossterm><link
linkend='var-EXTRA_OEMAKE'>EXTRA_OEMAKE</link></glossterm> variable - BitBake
will pass them into the GNU
make invocation. A <function>do_install</function> task is required
- otherwise BitBake will run an empty <function>do_install</function>
task by default.
</para>

<para>
Some applications may require extra parameters to be passed to
the compiler, for example an additional header path. This can
be done by adding to the <glossterm><link
linkend='var-CFLAGS'>CFLAGS</link></glossterm> variable, as in the example below.
</para>

<programlisting>
DESCRIPTION = "Tools for managing memory technology devices."
SECTION = "base"
DEPENDS = "zlib"
HOMEPAGE = "http://www.linux-mtd.infradead.org/"
LICENSE = "GPLv2"

SRC_URI = "ftp://ftp.infradead.org/pub/mtd-utils/mtd-utils-${PV}.tar.gz"

CFLAGS_prepend = "-I ${S}/include "

do_install() {
    oe_runmake install DESTDIR=${D}
}
</programlisting>

</section>

<section id='usingpoky-extend-addpkg-files'>
<title>Controlling package content</title>

<para>
The variables <glossterm><link
linkend='var-PACKAGES'>PACKAGES</link></glossterm> and
<glossterm><link linkend='var-FILES'>FILES</link></glossterm> are used to split an
application into multiple packages.
</para>

<para>
Below, the "libXpm" recipe is used as an example. By
default the "libXpm" recipe generates one package
which contains the library
and also a few binaries. The recipe can be adapted to
split the binaries into separate packages.
</para>

<programlisting>
require xorg-lib-common.inc

DESCRIPTION = "X11 Pixmap library"
LICENSE = "X-BSD"
DEPENDS += "libxext"
PE = "1"

XORG_PN = "libXpm"

PACKAGES =+ "sxpm cxpm"
FILES_cxpm = "${bindir}/cxpm"
FILES_sxpm = "${bindir}/sxpm"
</programlisting>

<para>
|
||||
In this example we want to ship the "sxpm" and "cxpm" binaries
|
||||
in separate packages. Since "bindir" would be packaged into the
|
||||
main <glossterm><link linkend='var-PN'>PN</link></glossterm>
|
||||
package as standard we prepend the <glossterm><link
|
||||
linkend='var-PACKAGES'>PACKAGES</link></glossterm> variable so
|
||||
additional package names are added to the start of list. The
|
||||
extra <glossterm><link linkend='var-PN'>FILES</link></glossterm>_*
|
||||
variables then contain information to specify which files and
|
||||
directories goes into which package.
|
||||
</para>
|
||||
</section>
|
||||
|
||||
<section id='usingpoky-extend-addpkg-postinstalls'>
<title>Post Install Scripts</title>

<para>
To add a post-installation script to a package, add
a <function>pkg_postinst_PACKAGENAME()</function>
function to the .bb file,
where PACKAGENAME is the name of the package to attach
the postinst script to. A post-installation function has the following structure:
</para>
<programlisting>
pkg_postinst_PACKAGENAME () {
        #!/bin/sh -e
        # Commands to carry out
}
</programlisting>
<para>
The script defined in the post-installation function
is called when the rootfs is made. If the script succeeds,
the package is marked as installed. If the script fails,
the package is marked as unpacked and the script will be
executed again on the first boot of the image.
</para>

<para>
Sometimes it is necessary to delay the execution of a post-installation
script until the first boot, because the script
needs to be executed on the device itself. To delay script execution
until boot time, the post-installation function should have the
following structure:
</para>

<programlisting>
pkg_postinst_PACKAGENAME () {
        #!/bin/sh -e
        if [ x"$D" = "x" ]; then
                # Actions to carry out on the device go here
        else
                exit 1
        fi
}
</programlisting>

<para>
The structure above delays execution until first boot
because the <glossterm><link
linkend='var-D'>D</link></glossterm> variable points
to the 'image'
directory when the rootfs is being made at build time but
is unset when the script is executed on the first boot.
</para>
</section>

</section>

<section id='usingpoky-extend-customimage'>
<title>Customising Images</title>

<para>
Poky images can be customised to satisfy
particular requirements. Several methods are detailed below
along with guidelines on when to use them.
</para>

<section id='usingpoky-extend-customimage-custombb'>
<title>Customising Images through custom image .bb files</title>

<para>
One way to get additional software into an image is by creating a
custom image. The recipe will contain two lines:
</para>

<programlisting>
IMAGE_INSTALL = "task-poky-x11-base package1 package2"

inherit poky-image
</programlisting>

<para>
By creating a custom image, a developer has total control
over the contents of the image. It is important to use
the correct names of packages in the <glossterm><link
linkend='var-IMAGE_INSTALL'>IMAGE_INSTALL</link></glossterm> variable.
The names must be in
the OpenEmbedded notation rather than the Debian notation, for example
"glibc-dev" instead of "libc6-dev".
</para>

<para>
The other method of creating a new image is by modifying
an existing image. For example, if a developer wants to add
"strace" to "poky-image-sato", the following recipe can
be used:
</para>

<programlisting>
require poky-image-sato.bb

IMAGE_INSTALL += "strace"
</programlisting>

</section>

<section id='usingpoky-extend-customimage-customtasks'>
<title>Customising Images through custom tasks</title>

<para>
For complex custom images, the best approach is to create a custom
task package which is then used to build the image (or images). A good
example of a task package is <filename>meta/packages/tasks/task-poky.bb
</filename>. The <glossterm><link linkend='var-PACKAGES'>PACKAGES</link></glossterm>
variable lists the task packages to build (along with the complementary
-dbg and -dev packages). For each package added,
<glossterm><link linkend='var-RDEPENDS'>RDEPENDS</link></glossterm> and
<glossterm><link linkend='var-RRECOMMENDS'>RRECOMMENDS</link></glossterm>
entries can then be added, each containing a list of packages the parent
task package should contain. An example would be:
</para>

<para>
<programlisting>
DESCRIPTION = "My Custom Tasks"

PACKAGES = "\
    task-custom-apps \
    task-custom-apps-dbg \
    task-custom-apps-dev \
    task-custom-tools \
    task-custom-tools-dbg \
    task-custom-tools-dev \
    "

RDEPENDS_task-custom-apps = "\
    dropbear \
    portmap \
    psplash"

RDEPENDS_task-custom-tools = "\
    oprofile \
    oprofileui-server \
    lttng-control \
    lttng-viewer"

RRECOMMENDS_task-custom-tools = "\
    kernel-module-oprofile"
</programlisting>
</para>

<para>
In this example, two task packages are created, task-custom-apps and
task-custom-tools, with their dependencies and recommended package
dependencies listed. To build an image using these task packages, you would
then add "task-custom-apps" and/or "task-custom-tools" to <glossterm><link
linkend='var-IMAGE_INSTALL'>IMAGE_INSTALL</link></glossterm> or other forms
of image dependencies as described elsewhere in this section.
</para>
</section>

<section id='usingpoky-extend-customimage-imagefeatures'>
<title>Customising Images through custom <glossterm><link linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm></title>

<para>
Ultimately users may want to add extra image "features", as used by Poky with the
<glossterm><link linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm>
variable. To create these, the best reference is <filename>meta/classes/poky-image.bbclass</filename>,
which illustrates how Poky achieves this. In summary, the file looks at the contents of the
<glossterm><link linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm>
variable and, based on this, generates the <glossterm><link linkend='var-IMAGE_INSTALL'>
IMAGE_INSTALL</link></glossterm> variable automatically. Extra features can be added by
extending the class or creating a custom class for use with specialised image .bb files.
</para>
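<para>
As a hedged sketch of how such a feature-to-package mapping can work (the
feature name "ssh-server" and the use of the base_contains helper here are
illustrative assumptions, not taken from poky-image.bbclass):
</para>
<programlisting>
# Adds dropbear only when "ssh-server" appears in IMAGE_FEATURES
IMAGE_INSTALL += '${@base_contains("IMAGE_FEATURES", "ssh-server", "dropbear", "", d)}'
</programlisting>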
</section>

<section id='usingpoky-extend-customimage-localconf'>
<title>Customising Images through local.conf</title>

<para>
It is possible to customise image contents by abusing
variables intended for distribution maintainers in local.conf.
This method only allows the addition of packages and
is not recommended.
</para>

<para>
To add the "strace" package to the image, the following is
added to local.conf:
</para>

<programlisting>
DISTRO_EXTRA_RDEPENDS += "strace"
</programlisting>

<para>
However, since the <glossterm><link linkend='var-DISTRO_EXTRA_RDEPENDS'>
DISTRO_EXTRA_RDEPENDS</link></glossterm> variable is intended for
distribution maintainers, this method does not make
adding packages as simple as a custom .bb file. Using
this method, a few packages need to be re-created
and then the image rebuilt:
</para>
<programlisting>
bitbake -cclean task-boot task-base task-poky
bitbake poky-image-sato
</programlisting>

<para>
Cleaning the task-* packages is required because they use the
<glossterm><link linkend='var-DISTRO_EXTRA_RDEPENDS'>
DISTRO_EXTRA_RDEPENDS</link></glossterm> variable. There is no need to
build them by hand, as Poky images depend on the packages they contain, so
dependencies will be built automatically. For this reason we don't use the
"rebuild" task in this case, since "rebuild" does not care about
dependencies - it only rebuilds the specified package.
</para>

</section>

</section>

<section id="platdev-newmachine">
<title>Porting Poky to a new machine</title>
<para>
Adding a new machine to Poky is a straightforward process and
this section gives an idea of the changes that are needed. This guide is
meant to cover adding machines similar to those Poky already supports.
Adding a totally new architecture might require gcc/glibc changes as
well as updates to the site information and, whilst well within Poky's
capabilities, is outside the scope of this section.
</para>

<section id="platdev-newmachine-conffile">
<title>Adding the machine configuration file</title>
<para>
A .conf file needs to be added to conf/machine/ with details of the
device being added. The name of the file determines the name Poky will
use to reference this machine.
</para>

<para>
The most important variables to set in this file are <glossterm>
<link linkend='var-TARGET_ARCH'>TARGET_ARCH</link></glossterm>
(e.g. "arm"), <glossterm><link linkend='var-PREFERRED_PROVIDER'>
PREFERRED_PROVIDER</link></glossterm>_virtual/kernel (see below) and
<glossterm><link linkend='var-MACHINE_FEATURES'>MACHINE_FEATURES
</link></glossterm> (e.g. "kernel26 apm screen wifi"). Other variables
like <glossterm><link linkend='var-SERIAL_CONSOLE'>SERIAL_CONSOLE
</link></glossterm> (e.g. "115200 ttyS0"), <glossterm>
<link linkend='var-KERNEL_IMAGETYPE'>KERNEL_IMAGETYPE</link>
</glossterm> (e.g. "zImage") and <glossterm><link linkend='var-IMAGE_FSTYPES'>
IMAGE_FSTYPES</link></glossterm> (e.g. "tar.gz jffs2") might also be
needed. Full details on what these variables do and the meaning of
their contents are available through the links.
</para>
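<para>
Pulling the example values above together, a minimal machine configuration
file might look like the sketch below (the machine name and the kernel
recipe name are hypothetical):
</para>
<programlisting>
# conf/machine/mymachine.conf
TARGET_ARCH = "arm"
PREFERRED_PROVIDER_virtual/kernel = "linux-mymachine"
MACHINE_FEATURES = "kernel26 apm screen wifi"
SERIAL_CONSOLE = "115200 ttyS0"
KERNEL_IMAGETYPE = "zImage"
IMAGE_FSTYPES = "tar.gz jffs2"
</programlisting>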
</section>

<section id="platdev-newmachine-kernel">
<title>Adding a kernel for the machine</title>
<para>
Poky needs to be able to build a kernel for the machine. You need
to either create a new kernel recipe for this machine or extend an
existing recipe. There are plenty of kernel examples in the
packages/linux directory which can be used as references.
</para>
<para>
If creating a new recipe, the "normal" recipe writing rules apply
for setting up a <glossterm><link linkend='var-SRC_URI'>SRC_URI
</link></glossterm>, including any patches, and setting <glossterm>
<link linkend='var-S'>S</link></glossterm> to point at the source
code. You will need to create a configure task which configures the
unpacked kernel with a defconfig, whether through a "make defconfig"
command or, more usually, through copying in a suitable defconfig and
running "make oldconfig". By making use of "inherit kernel" and also
maybe some of the linux-*.inc files, most other functionality is
centralised and the defaults of the class normally work well.
</para>
<para>
If extending an existing kernel, it is usually a case of adding a
suitable defconfig file in a location similar to that used by other
machines' defconfig files in a given kernel, possibly listing it in
the SRC_URI and adding the machine to the expression in <glossterm>
<link linkend='var-COMPATIBLE_MACHINES'>COMPATIBLE_MACHINES</link>
</glossterm>.
</para>
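<para>
As a sketch of the second approach (the machine name and file layout are
hypothetical), the existing kernel recipe could be extended with:
</para>
<programlisting>
SRC_URI += "file://mymachine/defconfig"
COMPATIBLE_MACHINES = "(existingmachine|mymachine)"
</programlisting>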
</section>

<section id="platdev-newmachine-formfactor">
<title>Adding a formfactor configuration file</title>
<para>
A formfactor configuration file provides information about the
target hardware on which Poky is running which Poky cannot
obtain from other sources such as the kernel. Some examples of
information contained in a formfactor configuration file include
framebuffer orientation, whether or not the system has a keyboard,
the positioning of the keyboard in relation to the screen, and
screen resolution.
</para>
<para>
Sane defaults should be used in most cases, but if customisation is
necessary you need to create a <filename>machconfig</filename> file
under <filename>meta/packages/formfactor/files/MACHINENAME/</filename>
where <literal>MACHINENAME</literal> is the machine name to which this
information applies. For information about the settings available and the defaults, please see
<filename>meta/packages/formfactor/files/config</filename>.
</para>
</section>
</section>

<section id='usingpoky-changes'>
<title>Making and Maintaining Changes</title>

<para>
We recognise that people will want to extend/configure/optimise Poky for
their specific uses, especially due to the extreme configurability and
flexibility Poky offers. To make it easy to keep pace with future
changes in Poky, we recommend making changes to Poky in a controlled way.
</para>
<para>
Poky supports the idea of <link
linkend='usingpoky-changes-collections'>"collections"</link> which, when used
properly, can massively ease future upgrades and allow segregation
between the Poky core and a given developer's changes. Some other advice on
managing changes to Poky is also given in the following section.
</para>

<section id="usingpoky-changes-collections">
<title>Bitbake Collections</title>

<para>
Often, people want to extend Poky, either by adding packages
or by overriding files contained within Poky to add their own
functionality. Bitbake has a powerful mechanism called
collections, which provides a way to handle this and is fully
supported and actively encouraged within Poky.
</para>
<para>
In the standard tree, meta-extras is an example of how you can
do this. By default the data in meta-extras is not used in a
Poky build, but local.conf.sample shows how to enable it:
</para>
<para>
<literallayout class='monospaced'>
BBFILES := "${OEROOT}/meta/packages/*/*.bb ${OEROOT}/meta-extras/packages/*/*.bb"
BBFILE_COLLECTIONS = "normal extras"
BBFILE_PATTERN_normal = "^${OEROOT}/meta/"
BBFILE_PATTERN_extras = "^${OEROOT}/meta-extras/"
BBFILE_PRIORITY_normal = "5"
BBFILE_PRIORITY_extras = "5"</literallayout>
</para>
<para>
As can be seen, the extra recipes are added to BBFILES. The
BBFILE_COLLECTIONS variable is then set to a list of
collection names. The BBFILE_PATTERN variables are regular
expressions used to match files from BBFILES into a particular
collection, in this case by using the base pathname.
The BBFILE_PRIORITY variable then assigns different
priorities to the files in different collections. This is useful
in situations where the same package might appear in more than one
collection and allows you to choose which collection should
'win'.
</para>
<para>
This works well for recipes. For bbclasses and configuration
files, you can use the BBPATH variable instead. In this
case, the first file with a matching name found in BBPATH is
the one that is used, just like the PATH variable for binaries.
</para>
</section>

<section id='usingpoky-changes-commits'>
<title>Committing Changes</title>

<para>
Modifications to Poky are often managed under some kind of source
revision control system. The policy for committing to such systems
is important, as some simple policies can significantly improve
usability. The tips below are based on the policy that OpenedHand
uses for commits to Poky.
</para>

<para>
It helps to use a consistent style for commit messages when committing
changes. We've found that a style where the first line of the commit message
summarises the change and starts with the name of any package affected
works well. Not all changes are to specific packages, so the prefix could
also be a machine name or class name instead. If a change needs a longer
description, this should follow the summary.
</para>

<para>
Any commit should be self-contained in that it should leave the
metadata in a consistent state, buildable before and after the
commit. This helps ensure the autobuilder test results are valid
but is good practice regardless.
</para>
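<para>
A hypothetical commit message following this style (the package name and
the details are invented for illustration):
</para>
<para>
<literallayout class='monospaced'>
matchbox-desktop: fix startup crash when no applications are installed

The application list can be empty, which previously led to a NULL
pointer dereference on startup. Check for the empty list before
dereferencing it.</literallayout>
</para>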
</section>

<section id='usingpoky-changes-prbump'>
<title>Package Revision Incrementing</title>

<para>
If a committed change will result in changing the package output,
then the value of the <glossterm><link linkend='var-PR'>PR</link>
</glossterm> variable needs to be increased (commonly referred to
as 'bumped') as part of that commit. Only integer values are used,
and <glossterm><link linkend='var-PR'>PR</link></glossterm> =
"r0" should not be added into new recipes as this is the default value.
When upgrading the version of a package (<glossterm><link
linkend='var-PV'>PV</link></glossterm>), the <glossterm><link
linkend='var-PR'>PR</link></glossterm> variable should be removed.
</para>
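<para>
For example, if a hypothetical recipe <filename>myapp_1.2.bb</filename>
currently contains:
</para>
<programlisting>
PR = "r1"
</programlisting>
<para>
then a commit which changes its package output should, in the same commit,
change this line to PR = "r2".
</para>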

<para>
The aim is that the package version will only ever increase. If
for some reason <glossterm><link linkend='var-PV'>PV</link></glossterm>
changes but does not increase, the <glossterm><link
linkend='var-PE'>PE</link></glossterm> (Package Epoch) variable can
be increased (it defaults to '0'). The version numbers aim to
follow the <ulink url='http://www.debian.org/doc/debian-policy/ch-controlfields.html'>
Debian Version Field Policy Guidelines</ulink>, which define how
versions are compared and hence what "increasing" means.
</para>

<para>
There are two reasons for doing this. The first is to ensure that
when a developer updates and rebuilds, they get all the changes to
the repository and don't have to remember to rebuild any sections.
The second is to ensure that target users are able to upgrade their
devices via their package manager, such as with the <command>
ipkg update;ipkg upgrade</command> commands (or similar for
dpkg/apt or rpm based systems). The aim is to ensure Poky has
upgradable packages in all cases.
</para>
</section>
</section>

<section id='usingpoky-modifing-packages'>
<title>Modifying Package Source Code</title>

<para>
Poky is usually used to build software rather than to modify
it. However, there are ways Poky can be used to modify software.
</para>

<para>
During building, the sources are available in the <glossterm><link
linkend='var-WORKDIR'>WORKDIR</link></glossterm> directory.
Where exactly this is depends on the type of package and the
architecture of the target device. For a standard recipe not
related to <glossterm><link
linkend='var-MACHINE'>MACHINE</link></glossterm> it will be
<filename>tmp/work/PACKAGE_ARCH-poky-TARGET_OS/PN-PV-PR/</filename>.
Target device dependent packages use <glossterm><link
linkend='var-MACHINE'>MACHINE
</link></glossterm>
instead of <glossterm><link linkend='var-PACKAGE_ARCH'>PACKAGE_ARCH
</link></glossterm>
in the directory name.
</para>

<tip>
<para>
Check whether the package recipe sets the <glossterm><link
linkend='var-S'>S</link></glossterm> variable to something
other than the standard <filename>WORKDIR/PN-PV/</filename> value.
</para>
</tip>
<para>
After building a package, a user can modify the package source code
without problems. The easiest way to test changes is by calling the
"compile" task:
</para>

<programlisting>
bitbake --cmd compile --force NAME_OF_PACKAGE
</programlisting>

<para>
Other tasks may also be called this way.
</para>

<section id='usingpoky-modifying-packages-quilt'>
<title>Modifying Package Source Code with quilt</title>

<para>
By default Poky uses <ulink
url='http://savannah.nongnu.org/projects/quilt'>quilt</ulink>
to manage patches in the <function>do_patch</function> task.
It is a powerful tool which can be used to track all
modifications made to package sources.
</para>

<para>
Before modifying source code it is important to
notify quilt so that it tracks the changes in a new patch
file:
<programlisting>
quilt new NAME-OF-PATCH.patch
</programlisting>

Then add all files which will be modified into that
patch:
<programlisting>
quilt add file1 file2 file3
</programlisting>

Now start editing. When finished, quilt needs to be used
to generate the final patch, which will contain all the
modifications:
<programlisting>
quilt refresh
</programlisting>

The resulting patch file can be found in the
<filename class="directory">patches/</filename> subdirectory of the source
(<glossterm><link linkend='var-S'>S</link></glossterm>) directory. For future builds it
should be copied into the
Poky metadata and added to the <glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm> of the recipe:
<programlisting>
SRC_URI += "file://NAME-OF-PATCH.patch;patch=1"
</programlisting>

This also requires bumping the <glossterm><link
linkend='var-PR'>PR</link></glossterm> value in the same recipe, since the resulting packages have changed.
</para>

</section>

</section>

</chapter>
<!--
vim: expandtab tw=80 ts=4
-->

234 handbook/faq.xml (Normal file)
@@ -0,0 +1,234 @@

<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<appendix id='faq'>
<title>FAQ</title>
<qandaset>
<qandaentry>
<question>
<para>
How does Poky differ from <ulink url='http://www.openembedded.org/'>OpenEmbedded</ulink>?
</para>
</question>
<answer>
<para>
Poky is a derivative of <ulink
url='http://www.openembedded.org/'>OpenEmbedded</ulink>: a stable,
smaller subset focused on the GNOME Mobile environment. Development
in Poky is closely tied to OpenEmbedded, with features being merged
regularly between the two for mutual benefit.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
How can you claim Poky is stable?
</para>
</question>
<answer>
<para>
There are three areas that help with stability:

<itemizedlist>
<listitem>
<para>
We keep Poky small and focused - around 650 packages compared to over 5000 for full OE
</para>
</listitem>
<listitem>
<para>
We only support hardware that we have access to for testing
</para>
</listitem>
<listitem>
<para>
We have a Buildbot which provides continuous build and integration tests
</para>
</listitem>
</itemizedlist>
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
How do I get support for my board added to Poky?
</para>
</question>
<answer>
<para>
There are two main ways to get a board supported in Poky:
<itemizedlist>
<listitem>
<para>
Send us the board if we don't have it yet
</para>
</listitem>
<listitem>
<para>
Send us BitBake recipes if you have them (see the Poky handbook to find out how to create recipes)
</para>
</listitem>
</itemizedlist>
Usually, if it's not a completely exotic board, adding support to Poky should be fairly straightforward.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
Are there any products running Poky?
</para>
</question>
<answer>
<para>
The <ulink url='http://vernier.com/labquest/'>Vernier Labquest</ulink> is using Poky (for more about the Labquest, see the case study at OpenedHand). There are a number of pre-production devices using Poky and we will announce those as soon as they are released.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
What is the output of a Poky build?
</para>
</question>
<answer>
<para>
The output of a Poky build depends on how it was started, as the same set of recipes can be used to output various formats. Usually the output is a flashable image ready for the target device.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
How do I add my package to Poky?
</para>
</question>
<answer>
<para>
To add a package you need to create a BitBake recipe - see the Poky handbook to find out how to create a recipe.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
Do I have to reflash my entire board with a new Poky image when recompiling a package?
</para>
</question>
<answer>
<para>
Poky can build packages in various formats: ipkg, Debian package, or RPM. The package can then be upgraded using the package tools on the device, much like on a desktop distribution such as Ubuntu or Fedora.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
What is GNOME Mobile? What's the difference between GNOME Mobile and GNOME?
</para>
</question>
<answer>
<para>
<ulink url='http://www.gnome.org/mobile/'>GNOME Mobile</ulink> is a subset of the GNOME platform targeted at mobile and embedded devices. The main difference between GNOME Mobile and standard GNOME is that desktop-orientated libraries have been removed, along with deprecated libraries, creating a much smaller footprint.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
How do I make Poky work in RHEL/CentOS?
</para>
</question>
<answer>
<para>
To get Poky working under RHEL/CentOS 5.1 you need to first install some required packages. The standard CentOS packages needed are:
<itemizedlist>
<listitem>
<para>
"Development tools" (selected during installation)
</para>
</listitem>
<listitem>
<para>
texi2html
</para>
</listitem>
<listitem>
<para>
compat-gcc-34
</para>
</listitem>
</itemizedlist>
</para>

<para>
On top of those, the following external packages are needed:
<itemizedlist>
<listitem>
<para>
python-sqlite2 from the <ulink
url='http://dag.wieers.com/rpm/packages/python-sqlite2/'>DAG
repository</ulink>
</para>
</listitem>
<listitem>
<para>
help2man from the <ulink
url='http://centos.karan.org/el5/extras/testing/i386/RPMS/help2man-1.33.1-2.noarch.rpm'>Karan
repository</ulink>
</para>
</listitem>
</itemizedlist>
</para>

<para>
Once these packages are installed, Poky will be able to build standard images; however, there
may be a problem with QEMU segfaulting. You can either disable the generation of binary
locales by setting <glossterm><link linkend='var-ENABLE_BINARY_LOCALE_GENERATION'>ENABLE_BINARY_LOCALE_GENERATION</link>
</glossterm> to "0", or remove the linux-2.6-execshield.patch from the kernel and rebuild
it, since it is that patch which causes the problems with QEMU.
</para>
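<para>
For the first option, the following line can be added to local.conf:
</para>
<programlisting>
ENABLE_BINARY_LOCALE_GENERATION = "0"
</programlisting>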
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
I see lots of 404 responses for files on http://folks.o-hand.com/~richard/poky/sources/*. Is something wrong?
</para>
</question>
<answer>
<para>
Nothing is wrong. Poky will check any configured source mirrors before downloading
from the upstream sources. It does this by searching for both source archives and
pre-checked-out versions of SCM-managed software, so that in large installations
it can reduce load on the SCM servers themselves. The address above is one of the
default mirrors configured into standard Poky, so if an upstream source disappears
we can place sources there and builds continue to work.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
I have machine-specific data in a package for one machine only, but the package is
being marked as machine-specific in all cases. How do I stop this?
</para>
</question>
<answer>
<para>
Set <glossterm><link linkend='var-SRC_URI_OVERRIDES_PACKAGE_ARCH'>SRC_URI_OVERRIDES_PACKAGE_ARCH</link>
</glossterm> = "0" in the .bb file, but make sure the package is manually marked as
machine-specific in the case that needs it. The code which handles <glossterm><link
linkend='var-SRC_URI_OVERRIDES_PACKAGE_ARCH'>SRC_URI_OVERRIDES_PACKAGE_ARCH</link></glossterm>
is in base.bbclass.
</para>
</answer>
</qandaentry>
</qandaset>
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->

324 handbook/introduction.xml (Normal file)
@@ -0,0 +1,324 @@

<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<chapter id='intro'>
<title>Introduction</title>

<section id='intro-what-is'>
<title>What is Poky?</title>

<para>
Poky is an open source platform build tool. It is a complete
software development environment for the creation of Linux
devices. It aids the design, development, building, debugging,
simulation and testing of complete modern software stacks
using Linux, the X Window System and GNOME Mobile
based application frameworks. It is based on <ulink
url='http://openembedded.org/'>OpenEmbedded</ulink> but has
been customised with a particular focus.
</para>

<para>Poky was set up to:</para>

<itemizedlist>
<listitem>
<para>Provide an open source, full platform build and development tool based on Linux, X11, Matchbox, GTK+, Pimlico, Clutter, and other <ulink url='http://gnome.org/mobile'>GNOME Mobile</ulink> technologies.</para>
</listitem>
<listitem>
<para>Create a focused, stable subset of OpenEmbedded that can be easily and reliably built and developed upon.</para>
</listitem>
<listitem>
<para>Fully support a wide range of x86 and ARM hardware and device virtualisation.</para>
</listitem>
</itemizedlist>

<para>
Poky is primarily a platform builder which generates filesystem images
based on open source software such as the Kdrive X server, the Matchbox
window manager, the GTK+ toolkit and the D-Bus message bus system. Images
for many kinds of devices can be generated; however, the standard example
machines target QEMU full system emulation (both x86 and ARM) and the ARM based
Sharp Zaurus series of devices. Poky's ability to boot inside a QEMU
emulator makes it particularly suitable as a test platform for development
of embedded software.
</para>

<para>
An important component integrated within Poky is Sato, a GNOME Mobile
based user interface environment.
It is designed to work well with screens at very high DPI and restricted
|
||||
size, such as those often found on smartphones and PDAs. It is coded with
|
||||
focus on efficiency and speed so that it works smoothly on hand-held and
|
||||
other embedded hardware. It will sit neatly on top of any device
|
||||
using the GNOME Mobile stack, providing a well defined user experience.
|
||||
</para>
|
||||
|
||||
<screenshot>
|
||||
<mediaobject>
|
||||
<imageobject>
|
||||
<imagedata fileref="screenshots/ss-sato.png" format="PNG"/>
|
||||
</imageobject>
|
||||
<caption>
|
||||
<para>The Sato Desktop - A screenshot from a machine running a Poky built image</para>
|
||||
</caption>
|
||||
</mediaobject>
|
||||
</screenshot>
|
||||
|
||||
|
||||
<para>
|
||||
|
||||
Poky has a growing open source community backed up by commercial support provided by the principle developer and maintainer of Poky, <ulink url="http://o-hand.com/">OpenedHand Ltd</ulink>.
|
||||
|
||||
</para>
|
||||
</section>

<section id='intro-manualoverview'>
<title>Documentation Overview</title>

<para>
The handbook is split into sections covering different aspects of Poky.
The <link linkend='usingpoky'>'Using Poky' section</link> gives an overview
of the components that make up Poky, followed by information about using and
debugging the Poky build system. The <link linkend='extendpoky'>'Extending Poky' section</link>
gives information about how to extend and customise Poky, along with advice
on how to manage these changes. The <link linkend='platdev'>'Platform Development with Poky'
section</link> gives information about interaction between Poky and target
hardware for common platform development tasks such as software development,
debugging and profiling. The rest of the manual
consists of several reference sections, each giving details on a specific
area of Poky functionality.
</para>

<para>
This manual applies to Poky Release 3.1 (Pinky).
</para>

</section>

<section id='intro-requirements'>
<title>System Requirements</title>

<para>
We recommend Debian-based distributions, in particular a recent Ubuntu
release (7.04 or newer), as the host system for Poky. Nothing in Poky is
distribution specific, and
other distributions will most likely work as long as the appropriate
prerequisites are installed - we know of Poky being used successfully on Red Hat,
SUSE, Gentoo and Slackware host systems.
</para>

<para>On a Debian-based system, you need the following packages installed:</para>

<itemizedlist>
<listitem>
<para>build-essential</para>
</listitem>
<listitem>
<para>python</para>
</listitem>
<listitem>
<para>diffstat</para>
</listitem>
<listitem>
<para>texinfo</para>
</listitem>
<listitem>
<para>texi2html</para>
</listitem>
<listitem>
<para>cvs</para>
</listitem>
<listitem>
<para>subversion</para>
</listitem>
<listitem>
<para>wget</para>
</listitem>
<listitem>
<para>gawk</para>
</listitem>
<listitem>
<para>help2man</para>
</listitem>
<listitem>
<para>bochsbios (only to run qemux86 images)</para>
</listitem>
</itemizedlist>
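
<para>
On a Debian-based system, the packages above can be installed in a single step;
the command below is a sketch using the package names exactly as listed (include
bochsbios only if you intend to run qemux86 images):
</para>
<para>
<literallayout class='monospaced'>
$ sudo apt-get install build-essential python diffstat texinfo texi2html \
      cvs subversion wget gawk help2man bochsbios
</literallayout>
</para>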

<para>
Debian users can add debian.o-hand.com to their APT sources (see
<ulink url='http://debian.o-hand.com'/>
for instructions on doing this) and then run <command>
"apt-get install qemu poky-depends poky-scripts"</command>, which will
automatically install all these dependencies. OpenedHand can also provide
VMware images with Poky and all dependencies pre-installed if required.
</para>

<para>
Poky can use a system provided QEMU or build its own depending on how it's
configured. See the options in <filename>local.conf</filename> for more details.
</para>
</section>

<section id='intro-quickstart'>
<title>Quick Start</title>

<section id='intro-quickstart-build'>
<title>Building and Running an Image</title>

<para>
If you want to try Poky, you can do so in a few commands. The example below
checks out the Poky source code, sets up a build environment, builds an
image and then runs that image under the QEMU emulator in ARM system emulation mode:
</para>

<para>
<literallayout class='monospaced'>
$ wget http://pokylinux.org/releases/pinky-3.1.tar.gz
$ tar zxvf pinky-3.1.tar.gz
$ cd pinky-3.1/
$ source poky-init-build-env
$ bitbake poky-image-sato
$ runqemu qemuarm
</literallayout>
</para>

<note>
<para>
This process needs Internet access and about 3 GB of free disk space,
and you should expect the build to take about 4-5 hours since
it is building an entire Linux system from source, including the toolchain!
</para>
</note>

<para>
To build for other machines see the <glossterm><link
linkend='var-MACHINE'>MACHINE</link></glossterm> variable in build/conf/local.conf,
which also contains other configuration information. The images/kernels built
by Poky are placed in the <filename class="directory">tmp/deploy/images</filename>
directory.
</para>
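
<para>
For example, to target the emulated x86 machine rather than ARM, you could set
the following in <filename>build/conf/local.conf</filename> before running
bitbake (the machine name is shown as an illustration):
</para>
<para>
<literallayout class='monospaced'>
MACHINE = "qemux86"
</literallayout>
</para>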

<para>
You could also run <command>"poky-qemu zImage-qemuarm.bin poky-image-sato-qemuarm.ext2"
</command> within the images directory if you have the poky-scripts Debian package
installed from debian.o-hand.com. This allows the QEMU images to be used standalone,
outside the Poky build environment.
</para>
<para>
To set up networking within QEMU see the <link linkend='usingpoky-install-qemu-networking'>
QEMU/USB networking with IP masquerading</link> section.
</para>

</section>
<section id='intro-quickstart-qemu'>
<title>Downloading and Using Prebuilt Images</title>

<para>
Prebuilt images from Poky are also available if you just want to run the system
under QEMU. To use these you need to:
</para>

<itemizedlist>
<listitem>
<para>
Add debian.o-hand.com to your APT sources (see
<ulink url='http://debian.o-hand.com'/> for instructions on doing this)
</para>
</listitem>
<listitem>
<para>Install patched QEMU and poky-scripts:</para>
<para>
<literallayout class='monospaced'>
$ apt-get install qemu poky-scripts
</literallayout>
</para>
</listitem>

<listitem>
<para>
Download a Poky QEMU release kernel (*zImage*qemu*.bin) and compressed
filesystem image (poky-image-*-qemu*.ext2.bz2), which
you'll need to decompress with 'bzip2 -d'. These are available from the
<ulink url='http://pokylinux.org/releases/blinky-3.0/'>last release</ulink>
or from the <ulink url='http://pokylinux.org/autobuild/poky/'>autobuilder</ulink>.
</para>
</listitem>
<listitem>
<para>Start the image:</para>
<para>
<literallayout class='monospaced'>
$ poky-qemu <kernel> <image>
</literallayout>
</para>
</listitem>
</itemizedlist>

<note><para>
A patched version of QEMU is required at present. A suitable version is available from
<ulink url='http://debian.o-hand.com'/>; it can also be built
by Poky (bitbake qemu-native) or downloaded/built as part of the toolchain/SDK tarballs.
</para></note>

</section>
</section>

<section id='intro-getit'>
<title>Obtaining Poky</title>

<section id='intro-getit-releases'>
<title>Releases</title>

<para>Periodically, we make releases of Poky and these are available
at <ulink url='http://pokylinux.org/releases/'/>.
These are more stable and tested than the nightly development images.</para>
</section>

<section id='intro-getit-nightly'>
<title>Nightly Builds</title>

<para>
We make nightly builds of Poky for testing purposes and to make the
latest developments available. The output from these builds is available
at <ulink url='http://pokylinux.org/autobuild/'/>,
where the numbers represent the svn revision the builds were made from.
</para>

<para>
Automated builds are available for "standard" Poky and for Poky SDKs and toolchains, as well
as for any testing versions we might have such as poky-bleeding. The toolchains can
be used either as external standalone toolchains or can be combined with Poky as a
prebuilt toolchain to reduce build time. Using the external toolchains is simply a
case of untarring the tarball into the root of your system (it only creates files in
<filename class="directory">/usr/local/poky</filename>) and then enabling the option
in <filename>local.conf</filename>.
</para>
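
<para>
For example, installing an external toolchain might look like the following;
the tarball name here is purely illustrative - substitute the name of the
toolchain build you actually downloaded:
</para>
<para>
<literallayout class='monospaced'>
$ cd /
$ sudo tar xjf ~/downloads/poky-toolchain.tar.bz2
</literallayout>
</para>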

</section>

<section id='intro-getit-dev'>
<title>Development Checkouts</title>

<para>
Poky is available from our SVN repository located at
http://svn.o-hand.com/repos/poky/trunk; a web interface to the repository
can be accessed at <ulink url='http://svn.o-hand.com/view/poky/'/>.
</para>
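
<para>
A checkout of the current development code can be obtained with, for example:
</para>
<para>
<literallayout class='monospaced'>
$ svn co http://svn.o-hand.com/repos/poky/trunk poky
</literallayout>
</para>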

<para>
'trunk' is where the development work takes place and you should use this if you
want to work with the latest cutting-edge developments. Trunk
can suffer temporary periods of instability while new features are developed, and
if this is undesirable we recommend using one of the release branches.
</para>
</section>

</section>

</chapter>
<!--
vim: expandtab tw=80 ts=4
-->
BIN
handbook/poky-beaver.png
Normal file
After Width: | Height: | Size: 26 KiB |
0
handbook/poky-doc-tools/AUTHORS
Normal file
0
handbook/poky-doc-tools/COPYING
Normal file
30
handbook/poky-doc-tools/ChangeLog
Normal file
@@ -0,0 +1,30 @@

2008-02-15  Matthew Allum  <mallum@openedhand.com>

        * common/Makefile.am:
        * common/poky-handbook.png:
        Add a PNG image for the manual. Seems our logo SVG
        is too complex/transparent for PDF

2008-02-14  Matthew Allum  <mallum@openedhand.com>

        * common/Makefile.am:
        * common/fop-config.xml.in:
        * common/poky-db-pdf.xsl:
        * poky-docbook-to-pdf.in:
        Font tweakage.

2008-01-27  Matthew Allum  <mallum@openedhand.com>

        * INSTALL:
        * Makefile.am:
        * README:
        * autogen.sh:
        * common/Makefile.am:
        * common/fop-config.xml.in:
        * common/ohand-color.svg:
        * common/poky-db-pdf.xsl:
        * common/poky.svg:
        * common/titlepage.templates.xml:
        * configure.ac:
        * poky-docbook-to-pdf.in:
        Initial import.
236
handbook/poky-doc-tools/INSTALL
Normal file
@@ -0,0 +1,236 @@

Installation Instructions
*************************

Copyright (C) 1994, 1995, 1996, 1999, 2000, 2001, 2002, 2004, 2005 Free
Software Foundation, Inc.

This file is free documentation; the Free Software Foundation gives
unlimited permission to copy, distribute and modify it.

Basic Installation
==================

These are generic installation instructions.

The `configure' shell script attempts to guess correct values for
various system-dependent variables used during compilation.  It uses
those values to create a `Makefile' in each directory of the package.
It may also create one or more `.h' files containing system-dependent
definitions.  Finally, it creates a shell script `config.status' that
you can run in the future to recreate the current configuration, and a
file `config.log' containing compiler output (useful mainly for
debugging `configure').

It can also use an optional file (typically called `config.cache'
and enabled with `--cache-file=config.cache' or simply `-C') that saves
the results of its tests to speed up reconfiguring.  (Caching is
disabled by default to prevent problems with accidental use of stale
cache files.)

If you need to do unusual things to compile the package, please try
to figure out how `configure' could check whether to do them, and mail
diffs or instructions to the address given in the `README' so they can
be considered for the next release.  If you are using the cache, and at
some point `config.cache' contains results you don't want to keep, you
may remove or edit it.

The file `configure.ac' (or `configure.in') is used to create
`configure' by a program called `autoconf'.  You only need
`configure.ac' if you want to change it or regenerate `configure' using
a newer version of `autoconf'.

The simplest way to compile this package is:

  1. `cd' to the directory containing the package's source code and type
     `./configure' to configure the package for your system.  If you're
     using `csh' on an old version of System V, you might need to type
     `sh ./configure' instead to prevent `csh' from trying to execute
     `configure' itself.

     Running `configure' takes awhile.  While running, it prints some
     messages telling which features it is checking for.

  2. Type `make' to compile the package.

  3. Optionally, type `make check' to run any self-tests that come with
     the package.

  4. Type `make install' to install the programs and any data files and
     documentation.

  5. You can remove the program binaries and object files from the
     source code directory by typing `make clean'.  To also remove the
     files that `configure' created (so you can compile the package for
     a different kind of computer), type `make distclean'.  There is
     also a `make maintainer-clean' target, but that is intended mainly
     for the package's developers.  If you use it, you may have to get
     all sorts of other programs in order to regenerate files that came
     with the distribution.

Compilers and Options
=====================

Some systems require unusual options for compilation or linking that the
`configure' script does not know about.  Run `./configure --help' for
details on some of the pertinent environment variables.

You can give `configure' initial values for configuration parameters
by setting variables in the command line or in the environment.  Here
is an example:

     ./configure CC=c89 CFLAGS=-O2 LIBS=-lposix

*Note Defining Variables::, for more details.

Compiling For Multiple Architectures
====================================

You can compile the package for more than one kind of computer at the
same time, by placing the object files for each architecture in their
own directory.  To do this, you must use a version of `make' that
supports the `VPATH' variable, such as GNU `make'.  `cd' to the
directory where you want the object files and executables to go and run
the `configure' script.  `configure' automatically checks for the
source code in the directory that `configure' is in and in `..'.

If you have to use a `make' that does not support the `VPATH'
variable, you have to compile the package for one architecture at a
time in the source code directory.  After you have installed the
package for one architecture, use `make distclean' before reconfiguring
for another architecture.

Installation Names
==================

By default, `make install' installs the package's commands under
`/usr/local/bin', include files under `/usr/local/include', etc.  You
can specify an installation prefix other than `/usr/local' by giving
`configure' the option `--prefix=PREFIX'.

You can specify separate installation prefixes for
architecture-specific files and architecture-independent files.  If you
pass the option `--exec-prefix=PREFIX' to `configure', the package uses
PREFIX as the prefix for installing programs and libraries.
Documentation and other data files still use the regular prefix.

In addition, if you use an unusual directory layout you can give
options like `--bindir=DIR' to specify different values for particular
kinds of files.  Run `configure --help' for a list of the directories
you can set and what kinds of files go in them.

If the package supports it, you can cause programs to be installed
with an extra prefix or suffix on their names by giving `configure' the
option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'.

Optional Features
=================

Some packages pay attention to `--enable-FEATURE' options to
`configure', where FEATURE indicates an optional part of the package.
They may also pay attention to `--with-PACKAGE' options, where PACKAGE
is something like `gnu-as' or `x' (for the X Window System).  The
`README' should mention any `--enable-' and `--with-' options that the
package recognizes.

For packages that use the X Window System, `configure' can usually
find the X include and library files automatically, but if it doesn't,
you can use the `configure' options `--x-includes=DIR' and
`--x-libraries=DIR' to specify their locations.

Specifying the System Type
==========================

There may be some features `configure' cannot figure out automatically,
but needs to determine by the type of machine the package will run on.
Usually, assuming the package is built to be run on the _same_
architectures, `configure' can figure that out, but if it prints a
message saying it cannot guess the machine type, give it the
`--build=TYPE' option.  TYPE can either be a short name for the system
type, such as `sun4', or a canonical name which has the form:

     CPU-COMPANY-SYSTEM

where SYSTEM can have one of these forms:

     OS KERNEL-OS

See the file `config.sub' for the possible values of each field.  If
`config.sub' isn't included in this package, then this package doesn't
need to know the machine type.

If you are _building_ compiler tools for cross-compiling, you should
use the option `--target=TYPE' to select the type of system they will
produce code for.

If you want to _use_ a cross compiler, that generates code for a
platform different from the build platform, you should specify the
"host" platform (i.e., that on which the generated programs will
eventually be run) with `--host=TYPE'.

Sharing Defaults
================

If you want to set default values for `configure' scripts to share, you
can create a site shell script called `config.site' that gives default
values for variables like `CC', `cache_file', and `prefix'.
`configure' looks for `PREFIX/share/config.site' if it exists, then
`PREFIX/etc/config.site' if it exists.  Or, you can set the
`CONFIG_SITE' environment variable to the location of the site script.
A warning: not all `configure' scripts look for a site script.

Defining Variables
==================

Variables not defined in a site shell script can be set in the
environment passed to `configure'.  However, some packages may run
configure again during the build, and the customized values of these
variables may be lost.  In order to avoid this problem, you should set
them in the `configure' command line, using `VAR=value'.  For example:

     ./configure CC=/usr/local2/bin/gcc

causes the specified `gcc' to be used as the C compiler (unless it is
overridden in the site shell script).  Here is another example:

     /bin/bash ./configure CONFIG_SHELL=/bin/bash

Here the `CONFIG_SHELL=/bin/bash' operand causes subsequent
configuration-related scripts to be executed by `/bin/bash'.

`configure' Invocation
======================

`configure' recognizes the following options to control how it operates.

`--help'
`-h'
     Print a summary of the options to `configure', and exit.

`--version'
`-V'
     Print the version of Autoconf used to generate the `configure'
     script, and exit.

`--cache-file=FILE'
     Enable the cache: use and save the results of the tests in FILE,
     traditionally `config.cache'.  FILE defaults to `/dev/null' to
     disable caching.

`--config-cache'
`-C'
     Alias for `--cache-file=config.cache'.

`--quiet'
`--silent'
`-q'
     Do not print messages saying which checks are being made.  To
     suppress all normal output, redirect it to `/dev/null' (any error
     messages will still be shown).

`--srcdir=DIR'
     Look for the package's source code in directory DIR.  Usually
     `configure' can determine that directory automatically.

`configure' also accepts some other, not widely useful, options.  Run
`configure --help' for more details.

18
handbook/poky-doc-tools/Makefile.am
Normal file
@@ -0,0 +1,18 @@

SUBDIRS = common

EXTRA_DIST = poky-docbook-to-pdf.in

bin_SCRIPTS = poky-docbook-to-pdf

edit = sed \
	-e 's,@datadir\@,$(pkgdatadir),g' \
	-e 's,@prefix\@,$(prefix),g' \
	-e 's,@version\@,@VERSION@,g'

poky-docbook-to-pdf: poky-docbook-to-pdf.in
	rm -f poky-docbook-to-pdf
	$(edit) poky-docbook-to-pdf.in > poky-docbook-to-pdf

clean-local:
	rm -fr poky-docbook-to-pdf
	rm -fr poky-pr-docbook-to-pdf
0
handbook/poky-doc-tools/NEWS
Normal file
24
handbook/poky-doc-tools/README
Normal file
@@ -0,0 +1,24 @@

poky-doc-tools
==============

Simple tools wrapping fop to create OH-branded PDFs from docbook sources.
(based on OH doc tools)

Dependencies
============

Sun Java - make sure the java in your path is the *Sun* java.

xsltproc, nwalsh style sheets.

FOP, installed - see http://www.sagehill.net/docbookxsl/InstallingAnFO.html.
Also a 'fop' binary, e.g. I have:

% cat ~/bin/fop
#!/bin/sh
java org.apache.fop.apps.Fop "$@"
3
handbook/poky-doc-tools/autogen.sh
Executable file
@@ -0,0 +1,3 @@

#! /bin/sh
autoreconf -v --install || exit 1
./configure --enable-maintainer-mode --enable-debug "$@"
21
handbook/poky-doc-tools/common/Makefile.am
Normal file
@@ -0,0 +1,21 @@

SUPPORT_FILES = VeraMoBd.ttf VeraMoBd.xml \
                VeraMono.ttf VeraMono.xml \
                Vera.ttf Vera.xml \
                draft.png titlepage.templates.xml \
                poky-db-pdf.xsl poky.svg \
                ohand-color.svg poky-handbook.png

commondir = $(pkgdatadir)/common
common_DATA = $(SUPPORT_FILES) fop-config.xml

EXTRA_DIST = $(SUPPORT_FILES) fop-config.xml.in

edit = sed -e 's,@datadir\@,$(pkgdatadir),g'

fop-config.xml: fop-config.xml.in
	rm -f fop-config.xml
	$(edit) fop-config.xml.in > fop-config.xml

clean-local:
	rm -fr fop-config.xml
BIN
handbook/poky-doc-tools/common/Vera.ttf
Normal file
1
handbook/poky-doc-tools/common/Vera.xml
Normal file
BIN
handbook/poky-doc-tools/common/VeraMoBd.ttf
Normal file
1
handbook/poky-doc-tools/common/VeraMoBd.xml
Normal file
BIN
handbook/poky-doc-tools/common/VeraMono.ttf
Normal file
1
handbook/poky-doc-tools/common/VeraMono.xml
Normal file
BIN
handbook/poky-doc-tools/common/draft.png
Normal file
After Width: | Height: | Size: 24 KiB |
33
handbook/poky-doc-tools/common/fop-config.xml.in
Normal file
@@ -0,0 +1,33 @@

<configuration>
<entry>
<!--
Set the baseDir so common/openedhand.svg references in plans still
work ok. Note, relative file references to current dir should still work.
-->
<key>baseDir</key>
<value>@datadir@</value>
</entry>
<fonts>
<font metrics-file="@datadir@/common/VeraMono.xml"
kerning="yes"
embed-file="@datadir@/common/VeraMono.ttf">
<font-triplet name="veramono" style="normal" weight="normal"/>
</font>

<font metrics-file="@datadir@/common/VeraMoBd.xml"
kerning="yes"
embed-file="@datadir@/common/VeraMoBd.ttf">
<font-triplet name="veramono" style="normal" weight="bold"/>
</font>

<font metrics-file="@datadir@/common/Vera.xml"
kerning="yes"
embed-file="@datadir@/common/Vera.ttf">
<font-triplet name="verasans" style="normal" weight="normal"/>
<font-triplet name="verasans" style="normal" weight="bold"/>
<font-triplet name="verasans" style="italic" weight="normal"/>
<font-triplet name="verasans" style="italic" weight="bold"/>
</font>

</fonts>
</configuration>
150
handbook/poky-doc-tools/common/ohand-color.svg
Normal file
@@ -0,0 +1,150 @@
|
||||
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
|
||||
<!-- Created with Inkscape (http://www.inkscape.org/) -->
|
||||
<svg
|
||||
xmlns:dc="http://purl.org/dc/elements/1.1/"
|
||||
xmlns:cc="http://web.resource.org/cc/"
|
||||
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
|
||||
xmlns:svg="http://www.w3.org/2000/svg"
|
||||
xmlns="http://www.w3.org/2000/svg"
|
||||
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
|
||||
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
|
||||
width="141.17999"
|
||||
height="55.34"
|
||||
id="svg2207"
|
||||
sodipodi:version="0.32"
|
||||
inkscape:version="0.45"
|
||||
version="1.0"
|
||||
sodipodi:docname="ohand-color.svg"
|
||||
inkscape:output_extension="org.inkscape.output.svg.inkscape"
|
||||
sodipodi:docbase="/home/mallum/Projects/admin/oh-doc-tools/common"
|
||||
sodipodi:modified="true">
|
||||
<defs
|
||||
id="defs3" />
|
||||
<sodipodi:namedview
|
||||
id="base"
|
||||
pagecolor="#ffffff"
|
||||
bordercolor="#666666"
|
||||
borderopacity="1.0"
|
||||
inkscape:pageopacity="0.0"
|
||||
inkscape:pageshadow="2"
|
||||
inkscape:zoom="1.2"
|
||||
inkscape:cx="160"
|
||||
inkscape:cy="146.21189"
|
||||
inkscape:document-units="mm"
|
||||
inkscape:current-layer="layer1"
|
||||
height="55.34px"
|
||||
width="141.18px"
|
||||
inkscape:window-width="772"
|
||||
inkscape:window-height="581"
|
||||
inkscape:window-x="5"
|
||||
inkscape:window-y="48" />
|
||||
<metadata
|
||||
id="metadata2211">
|
||||
<rdf:RDF>
|
||||
<cc:Work
|
||||
rdf:about="">
|
||||
<dc:format>image/svg+xml</dc:format>
|
||||
<dc:type
|
||||
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
|
||||
</cc:Work>
|
||||
</rdf:RDF>
|
||||
</metadata>
|
||||
<g
|
||||
inkscape:label="Layer 1"
|
||||
inkscape:groupmode="layer"
|
||||
id="layer1">
|
||||
<g
|
||||
id="g2094"
|
||||
style="fill:#6d6d70;fill-opacity:1"
|
||||
inkscape:export-filename="/home/mallum/Desktop/g2126.png"
|
||||
inkscape:export-xdpi="312.71841"
|
||||
inkscape:export-ydpi="312.71841"
|
||||
transform="matrix(0.5954767,0,0,0.5954767,31.793058,-18.471052)">
|
||||
<g
|
||||
id="g19"
|
||||
style="fill:#6d6d70;fill-opacity:1">
|
||||
<path
|
||||
style="fill:#6d6d70;fill-opacity:1"
|
||||
id="path21"
|
||||
d="M 48.693,50.633 C 40.282,50.633 33.439,57.477 33.439,65.888 L 33.439,81.142 L 41.066,81.142 L 41.066,65.888 C 41.066,61.684 44.486,58.261 48.692,58.261 C 52.897,58.261 56.32,61.684 56.32,65.888 C 56.32,70.093 52.897,73.516 48.692,73.516 C 45.677,73.516 43.065,71.756 41.828,69.211 L 41.828,79.504 C 43.892,80.549 46.224,81.142 48.692,81.142 C 57.103,81.142 63.947,74.3 63.947,65.888 C 63.948,57.477 57.104,50.633 48.693,50.633 z " />
|
||||
</g>
|
||||
<path
|
||||
style="fill:#6d6d70;fill-opacity:1"
|
||||
id="path23"
|
||||
d="M 18.486,50.557 C 26.942,50.557 33.819,57.435 33.819,65.888 C 33.819,74.344 26.942,81.223 18.486,81.223 C 10.032,81.223 3.152,74.344 3.152,65.888 C 3.152,57.435 10.032,50.557 18.486,50.557 z M 18.486,73.556 C 22.713,73.556 26.153,70.118 26.153,65.888 C 26.153,61.661 22.713,58.222 18.486,58.222 C 14.258,58.222 10.819,61.661 10.819,65.888 C 10.82,70.117 14.259,73.556 18.486,73.556 z " />
|
||||
<path
|
||||
style="fill:#6d6d70;fill-opacity:1"
|
||||
id="path25"
|
||||
d="M 94.074,107.465 L 94.074,96.016 C 94.074,87.605 87.233,80.763 78.822,80.763 C 70.41,80.763 63.567,87.605 63.567,96.016 C 63.567,104.427 70.41,111.269 78.822,111.269 C 81.289,111.269 83.621,110.676 85.685,109.631 L 85.685,99.339 C 84.448,101.885 81.836,103.644 78.822,103.644 C 74.615,103.644 71.194,100.221 71.194,96.016 C 71.194,91.81 74.615,88.388 78.822,88.388 C 83.026,88.388 86.448,91.81 86.448,96.016 L 86.448,107.456 C 86.448,109.562 88.156,111.268 90.262,111.268 C 92.364,111.269 94.068,109.566 94.074,107.465 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path27"
d="M 124.197,95.814 C 124.088,87.496 117.293,80.762 108.949,80.762 C 100.59,80.762 93.783,87.52 93.697,95.856 L 93.693,95.856 L 93.695,107.456 C 93.695,109.562 95.402,111.268 97.509,111.268 C 99.611,111.268 101.316,109.566 101.321,107.464 L 101.321,95.994 L 101.321,95.994 C 101.333,91.798 104.747,88.388 108.948,88.388 C 113.147,88.388 116.563,91.798 116.575,95.994 L 116.575,107.456 C 116.575,109.562 118.282,111.268 120.387,111.268 C 122.492,111.268 124.201,109.562 124.201,107.456 L 124.201,95.814 L 124.197,95.814 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path29"
d="M 63.946,96.005 L 63.946,95.854 L 63.943,95.854 L 63.943,95.815 L 63.942,95.815 C 63.833,87.497 57.037,80.761 48.693,80.761 C 48.682,80.761 48.671,80.763 48.658,80.763 C 48.382,80.763 48.107,80.772 47.833,80.786 C 47.75,80.791 47.668,80.799 47.586,80.806 C 47.378,80.822 47.172,80.838 46.968,80.862 C 46.884,80.873 46.801,80.882 46.719,80.893 C 46.508,80.92 46.298,80.952 46.091,80.987 C 46.024,80.999 45.958,81.01 45.891,81.024 C 45.649,81.068 45.406,81.119 45.168,81.175 C 45.14,81.183 45.112,81.189 45.085,81.195 C 43.656,81.542 42.306,82.092 41.065,82.812 L 41.065,80.761 L 33.438,80.761 L 33.438,95.857 L 33.435,95.857 L 33.435,107.457 C 33.435,109.563 35.142,111.269 37.248,111.269 C 39.093,111.269 40.632,109.958 40.984,108.217 C 41.036,107.963 41.065,107.702 41.065,107.435 L 41.065,95.873 C 41.086,94.732 41.357,93.65 41.828,92.685 L 41.828,92.693 C 42.598,91.106 43.905,89.824 45.511,89.085 C 45.519,89.08 45.529,89.076 45.536,89.073 C 45.849,88.928 46.174,88.807 46.508,88.707 C 46.523,88.704 46.536,88.699 46.55,88.696 C 46.699,88.651 46.85,88.614 47.004,88.576 C 47.025,88.575 47.046,88.567 47.069,88.562 C 47.234,88.527 47.402,88.495 47.572,88.469 C 47.586,88.468 47.6,88.466 47.615,88.463 C 47.763,88.443 47.916,88.427 48.067,88.415 C 48.106,88.41 48.145,88.407 48.186,88.404 C 48.352,88.393 48.52,88.386 48.691,88.386 C 52.888,88.387 56.304,91.797 56.316,95.992 L 56.316,107.454 C 56.316,109.56 58.023,111.266 60.13,111.266 C 61.976,111.266 63.516,109.954 63.867,108.211 C 63.919,107.963 63.946,107.706 63.946,107.442 L 63.946,96.024 C 63.946,96.021 63.947,96.018 63.947,96.015 C 63.948,96.011 63.946,96.008 63.946,96.005 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path31"
d="M 180.644,50.633 C 178.539,50.633 176.832,52.341 176.832,54.447 L 176.832,65.887 C 176.832,70.092 173.41,73.513 169.203,73.513 C 164.998,73.513 161.576,70.092 161.576,65.887 C 161.576,61.683 164.998,58.26 169.203,58.26 C 172.219,58.26 174.83,60.019 176.068,62.565 L 176.068,52.271 C 174.004,51.225 171.673,50.632 169.203,50.632 C 160.793,50.632 153.951,57.476 153.951,65.887 C 153.951,74.298 160.793,81.141 169.203,81.141 C 177.615,81.141 184.459,74.298 184.459,65.887 L 184.459,54.447 C 184.458,52.341 182.751,50.633 180.644,50.633 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path33"
d="M 124.203,77.339 L 124.203,65.687 L 124.197,65.687 C 124.088,57.371 117.293,50.633 108.949,50.633 C 100.592,50.633 93.783,57.393 93.697,65.731 L 93.695,65.731 L 93.695,65.877 C 93.695,65.882 93.693,65.885 93.693,65.888 C 93.693,65.891 93.695,65.896 93.695,65.899 L 93.695,77.33 C 93.695,79.435 95.402,81.142 97.509,81.142 C 99.614,81.142 101.321,79.435 101.321,77.33 L 101.321,65.868 C 101.333,61.672 104.747,58.261 108.948,58.261 C 113.147,58.261 116.563,61.672 116.575,65.868 L 116.575,65.868 L 116.575,77.329 C 116.575,79.434 118.282,81.141 120.389,81.141 C 122.492,81.142 124.197,79.44 124.203,77.339 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path35"
d="M 150.517,80.761 C 148.41,80.761 146.703,82.469 146.703,84.575 L 146.703,96.015 C 146.703,100.22 143.283,103.643 139.076,103.643 C 134.871,103.643 131.449,100.22 131.449,96.015 C 131.449,91.808 134.871,88.387 139.076,88.387 C 142.092,88.387 144.703,90.145 145.941,92.692 L 145.941,82.397 C 143.875,81.353 141.545,80.76 139.076,80.76 C 130.666,80.76 123.822,87.604 123.822,96.015 C 123.822,104.426 130.666,111.268 139.076,111.268 C 147.486,111.268 154.33,104.426 154.33,96.015 L 154.33,84.575 C 154.33,82.469 152.623,80.761 150.517,80.761 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path37"
d="M 82.625,77.345 C 82.625,75.247 80.923,73.547 78.826,73.547 L 78.826,81.142 C 80.922,81.142 82.625,79.442 82.625,77.345 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path39"
d="M 90.252,69.685 C 92.35,69.685 94.048,67.987 94.048,65.888 L 86.453,65.888 C 86.453,67.986 88.154,69.685 90.252,69.685 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path41"
d="M 93.832,77.329 C 93.832,75.223 92.125,73.516 90.018,73.516 L 78.825,73.516 C 74.619,73.516 71.199,70.093 71.199,65.888 C 71.199,61.684 74.619,58.261 78.825,58.261 C 83.032,58.261 86.453,61.684 86.453,65.888 C 86.453,68.904 84.694,71.514 82.149,72.752 L 92.442,72.752 C 93.488,70.689 94.08,68.356 94.08,65.888 C 94.08,57.477 87.237,50.633 78.826,50.633 C 70.415,50.633 63.571,57.477 63.571,65.888 C 63.571,74.3 70.415,81.142 78.826,81.142 L 90.018,81.142 C 92.125,81.142 93.832,79.435 93.832,77.329 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path43"
d="M 142.869,77.345 C 142.869,75.247 141.168,73.547 139.07,73.547 L 139.07,81.142 C 141.167,81.142 142.869,79.442 142.869,77.345 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path45"
d="M 150.496,69.685 C 152.594,69.685 154.293,67.987 154.293,65.888 L 146.699,65.888 C 146.699,67.986 148.398,69.685 150.496,69.685 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path47"
d="M 154.076,77.329 C 154.076,75.223 152.367,73.516 150.262,73.516 L 139.07,73.516 C 134.865,73.516 131.443,70.093 131.443,65.888 C 131.443,61.684 134.865,58.261 139.07,58.261 C 143.275,58.261 146.699,61.684 146.699,65.888 C 146.699,68.904 144.939,71.514 142.392,72.752 L 152.687,72.752 C 153.73,70.689 154.324,68.356 154.324,65.888 C 154.324,57.477 147.48,50.633 139.07,50.633 C 130.66,50.633 123.816,57.477 123.816,65.888 C 123.816,74.3 130.66,81.142 139.07,81.142 L 150.261,81.142 C 152.367,81.142 154.076,79.435 154.076,77.329 z " />
</g>
<g
id="g2126"
transform="matrix(0.7679564,0,0,0.7679564,-66.520631,11.42903)"
inkscape:export-xdpi="312.71841"
inkscape:export-ydpi="312.71841"
style="fill:#35992a;fill-opacity:1">
<g
transform="translate(86.33975,4.23985e-2)"
style="fill:#35992a;fill-opacity:1"
id="g2114">
<g
style="fill:#35992a;fill-opacity:1"
id="g2116">
<path
id="path2118"
transform="translate(-86.33975,-4.239934e-2)"
d="M 89.96875,0.03125 C 87.962748,0.031250001 86.34375,1.6815001 86.34375,3.6875 L 86.34375,17.71875 L 86.34375,19.6875 L 86.34375,28.90625 C 86.343752,39.06825 94.61925,47.34375 104.78125,47.34375 L 113.375,47.34375 L 123.1875,47.34375 L 127.15625,47.34375 C 129.16325,47.343749 130.8125,45.72475 130.8125,43.71875 C 130.8125,41.71275 129.16325,40.09375 127.15625,40.09375 L 123.1875,40.09375 L 123.1875,19.6875 L 123.1875,14.65625 L 123.1875,3.6875 C 123.1875,1.6815 121.5675,0.03125 119.5625,0.03125 C 117.5555,0.031250001 115.9375,1.6815001 115.9375,3.6875 L 115.9375,14.28125 C 115.1185,13.65425 114.26275,13.109 113.34375,12.625 L 113.34375,3.6875 C 113.34475,1.6815 111.6925,0.03125 109.6875,0.03125 C 107.6825,0.031250001 106.0625,1.6815001 106.0625,3.6875 L 106.0625,10.5625 C 105.6305,10.5325 105.22025,10.5 104.78125,10.5 C 104.34125,10.5 103.90075,10.5325 103.46875,10.5625 L 103.46875,3.6875 C 103.46975,1.6815 101.84975,0.03125 99.84375,0.03125 C 97.837749,0.031250001 96.21875,1.6815001 96.21875,3.6875 L 96.21875,12.625 C 95.299754,13.109 94.41375,13.65425 93.59375,14.28125 L 93.59375,3.6875 C 93.59475,1.6815 91.97475,0.03125 89.96875,0.03125 z M 104.78125,14.34375 C 112.80825,14.34375 119.3125,20.87925 119.3125,28.90625 C 119.3125,36.93325 112.80825,43.46875 104.78125,43.46875 C 96.754254,43.46875 90.21875,36.93425 90.21875,28.90625 C 90.218752,20.87825 96.753253,14.34375 104.78125,14.34375 z "
style="fill:#35992a;fill-opacity:1" />
</g>
</g>
<path
style="fill:#35992a;fill-opacity:1"
id="path2122"
d="M 112.04875,28.913399 C 112.04875,24.899399 108.78275,21.634399 104.76975,21.634399 C 100.75675,21.634399 97.490753,24.900399 97.490753,28.913399 C 97.490753,32.926399 100.75675,36.192399 104.76975,36.192399 C 108.78275,36.192399 112.04875,32.927399 112.04875,28.913399 z " />
</g>
</g>
</svg>
64
handbook/poky-doc-tools/common/poky-db-pdf.xsl
Normal file
@@ -0,0 +1,64 @@
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">

<xsl:import href="file:///usr/share/xml/docbook/stylesheet/nwalsh/fo/docbook.xsl"/>

<!-- check project-plan.sh for how this is generated; needed to tweak
the cover page
-->
<xsl:include href="/tmp/titlepage.xsl"/>

<!-- To force a page break in the document, i.e. per section, add a
<?hard-pagebreak?> tag.
-->
<xsl:template match="processing-instruction('hard-pagebreak')">
<fo:block break-before='page' />
</xsl:template>

<!-- Fix for the default indent getting the TOC all weird.
See http://sources.redhat.com/ml/docbook-apps/2005-q1/msg00455.html
FIXME: there must be a better fix
-->
<xsl:param name="body.start.indent" select="'0'"/>
<!--<xsl:param name="title.margin.left" select="'0'"/>-->

<!-- stop long-ish header titles getting wrapped -->
<xsl:param name="header.column.widths">1 10 1</xsl:param>

<!-- customise headers and footers a little -->

<xsl:template name="head.sep.rule">
<xsl:if test="$header.rule != 0">
<xsl:attribute name="border-bottom-width">0.5pt</xsl:attribute>
<xsl:attribute name="border-bottom-style">solid</xsl:attribute>
<xsl:attribute name="border-bottom-color">#cccccc</xsl:attribute>
</xsl:if>
</xsl:template>

<xsl:template name="foot.sep.rule">
<xsl:if test="$footer.rule != 0">
<xsl:attribute name="border-top-width">0.5pt</xsl:attribute>
<xsl:attribute name="border-top-style">solid</xsl:attribute>
<xsl:attribute name="border-top-color">#cccccc</xsl:attribute>
</xsl:if>
</xsl:template>

<xsl:attribute-set name="header.content.properties">
<xsl:attribute name="color">#cccccc</xsl:attribute>
</xsl:attribute-set>

<xsl:attribute-set name="footer.content.properties">
<xsl:attribute name="color">#cccccc</xsl:attribute>
</xsl:attribute-set>


<!-- general settings -->

<xsl:param name="fop.extensions" select="1"></xsl:param>
<xsl:param name="paper.type" select="'A4'"></xsl:param>
<xsl:param name="section.autolabel" select="1"></xsl:param>
<xsl:param name="body.font.family" select="'verasans'"></xsl:param>
<xsl:param name="title.font.family" select="'verasans'"></xsl:param>
<xsl:param name="monospace.font.family" select="'veramono'"></xsl:param>

</xsl:stylesheet>
BIN
handbook/poky-doc-tools/common/poky-handbook.png
Normal file
163
handbook/poky-doc-tools/common/poky.svg
Normal file
@@ -0,0 +1,163 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
version="1.0"
width="158.56076"
height="79.284424"
viewBox="-40.981 -92.592 300 300"
id="svg2"
xml:space="preserve">
<defs
id="defs4">
</defs>
<path
d="M -36.585379,54.412576 L -36.585379,54.421305 L -36.582469,54.421305 L -36.582469,54.243829 C -36.57956,54.302018 -36.585379,54.357297 -36.585379,54.412576 z "
style="fill:#6ac7bd"
id="path6" />
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g8">
<g
id="g10">
<path
d="M 24.482,23.998 L 24.482,23.995 C 10.961,23.994 0,34.955 0,48.476 L 0.001,48.479 L 0.001,48.482 C 0.003,62.001 10.962,72.96 24.482,72.96 L 24.482,72.96 L 0,72.96 L 0,97.442 L 0.003,97.442 C 13.523,97.44 24.482,86.48 24.482,72.961 L 24.485,72.961 C 38.005,72.959 48.963,62 48.963,48.479 L 48.963,48.476 C 48.962,34.957 38.001,23.998 24.482,23.998 z M 24.482,50.928 C 23.13,50.928 22.034,49.832 22.034,48.48 C 22.034,47.128 23.13,46.032 24.482,46.032 C 25.834,46.032 26.93,47.128 26.93,48.48 C 26.93,49.832 25.834,50.928 24.482,50.928 z "
style="fill:#ef412a"
id="path12" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g14">
<g
id="g16">
<path
d="M 119.96,48.842 C 120.024,47.548 121.086,46.516 122.397,46.516 C 123.707,46.516 124.768,47.548 124.833,48.843 C 137.211,47.62 146.879,37.181 146.879,24.483 L 122.397,24.483 C 122.396,10.961 111.435,0 97.915,0 L 97.915,24.485 C 97.917,37.183 107.584,47.619 119.96,48.842 z M 124.833,49.084 C 124.769,50.379 123.707,51.411 122.397,51.411 L 122.396,51.411 L 122.396,73.444 L 146.878,73.444 L 146.878,73.441 C 146.876,60.745 137.208,50.308 124.833,49.084 z M 119.949,48.963 L 97.915,48.963 L 97.915,73.442 L 97.915,73.442 C 110.613,73.442 121.052,63.774 122.275,51.399 C 120.981,51.334 119.949,50.274 119.949,48.963 z "
style="fill:#a9c542"
id="path18" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g20">
<g
id="g22">
<path
d="M 168.912,48.967 C 168.912,47.656 169.945,46.596 171.24,46.531 C 170.018,34.152 159.579,24.482 146.879,24.482 L 146.879,48.963 C 146.879,62.484 157.84,73.444 171.361,73.444 L 171.361,51.414 C 170.007,51.415 168.912,50.319 168.912,48.967 z M 195.841,48.978 C 195.841,48.973 195.842,48.969 195.842,48.964 L 195.842,24.482 L 195.838,24.482 C 183.14,24.484 172.702,34.154 171.482,46.531 C 172.776,46.595 173.808,47.656 173.808,48.967 C 173.808,50.278 172.776,51.339 171.481,51.403 C 172.679,63.59 182.814,73.146 195.244,73.445 L 171.361,73.445 L 171.361,97.927 L 171.364,97.927 C 184.879,97.925 195.834,86.973 195.842,73.46 L 195.844,73.46 L 195.844,48.979 L 195.841,48.978 z M 195.832,48.964 L 195.842,48.964 L 195.842,48.978 L 195.832,48.964 z "
style="fill:#f9c759"
id="path24" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g26">
<g
id="g28">
<path
d="M 70.994,48.479 L 48.962,48.479 L 48.962,48.481 L 70.995,48.481 C 70.995,48.481 70.994,48.48 70.994,48.479 z M 73.44,24.001 L 73.437,24.001 L 73.437,46.032 C 73.439,46.032 73.44,46.032 73.442,46.032 C 74.794,46.032 75.89,47.128 75.89,48.48 C 75.89,49.832 74.794,50.928 73.442,50.928 C 72.091,50.928 70.996,49.834 70.994,48.483 L 48.958,48.483 L 48.958,48.486 C 48.96,62.005 59.919,72.964 73.437,72.964 C 86.955,72.964 97.914,62.005 97.916,48.486 L 97.916,48.483 C 97.916,34.963 86.958,24.003 73.44,24.001 z "
style="fill:#6ac7bd"
id="path30" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g32">
<g
id="g34">
<path
d="M 24.482,23.998 L 24.482,23.995 C 10.961,23.994 0,34.955 0,48.476 L 22.034,48.476 C 22.036,47.125 23.131,46.031 24.482,46.031 C 25.834,46.031 26.93,47.127 26.93,48.479 C 26.93,49.831 25.834,50.927 24.482,50.927 L 24.482,72.937 C 24.469,59.427 13.514,48.479 0,48.479 L 0,72.96 L 24.481,72.96 L 24.481,72.96 L 0,72.96 L 0,97.442 L 0.003,97.442 C 13.523,97.44 24.482,86.48 24.482,72.961 L 24.485,72.961 C 38.005,72.959 48.963,62 48.963,48.479 L 48.963,48.476 C 48.962,34.957 38.001,23.998 24.482,23.998 z "
style="fill:#ef412a"
id="path36" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g38">
<g
id="g40">
<path
d="M 122.397,46.516 C 123.707,46.516 124.768,47.548 124.833,48.843 C 137.211,47.62 146.879,37.181 146.879,24.483 L 122.397,24.483 L 122.397,46.516 L 122.397,46.516 z M 97.915,0 L 97.915,24.482 L 122.396,24.482 C 122.396,10.961 111.435,0 97.915,0 z M 122.275,46.528 C 121.052,34.151 110.613,24.482 97.914,24.482 L 97.914,48.964 L 97.914,48.964 L 97.914,73.443 L 97.914,73.443 C 110.612,73.443 121.051,63.775 122.274,51.4 C 120.98,51.335 119.948,50.275 119.948,48.964 C 119.949,47.653 120.98,46.593 122.275,46.528 z M 124.833,49.084 C 124.769,50.379 123.707,51.411 122.397,51.411 L 122.396,51.411 L 122.396,73.444 L 146.878,73.444 L 146.878,73.441 C 146.876,60.745 137.208,50.308 124.833,49.084 z "
style="fill:#a9c542"
id="path42" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g44">
<g
id="g46">
<path
d="M 173.795,49.1 C 173.724,50.389 172.666,51.415 171.36,51.415 C 170.006,51.415 168.911,50.319 168.911,48.967 C 168.911,47.656 169.944,46.596 171.239,46.531 C 170.017,34.152 159.578,24.482 146.878,24.482 L 146.878,48.963 C 146.878,62.484 157.839,73.444 171.36,73.444 L 171.36,97.926 L 171.363,97.926 C 184.878,97.924 195.833,86.972 195.841,73.459 L 195.842,73.459 L 195.842,73.443 L 195.841,73.443 C 195.833,60.753 186.167,50.322 173.795,49.1 z M 195.838,24.482 C 183.14,24.484 172.702,34.154 171.482,46.531 C 172.775,46.595 173.806,47.655 173.808,48.964 L 195.841,48.964 L 195.841,48.979 C 195.841,48.974 195.842,48.969 195.842,48.964 L 195.842,24.482 L 195.838,24.482 z "
style="fill:#f9c759"
id="path48" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g50">
<g
id="g52">
<path
d="M 71.007,48.347 C 71.075,47.105 72.062,46.117 73.304,46.046 C 72.509,38.02 67.85,31.133 61.201,27.284 C 57.601,25.2 53.424,24 48.965,24 L 48.962,24 C 48.962,28.46 50.161,32.638 52.245,36.24 C 56.093,42.891 62.98,47.552 71.007,48.347 z M 48.962,48.418 C 48.962,48.438 48.961,48.456 48.961,48.476 L 48.961,48.479 L 48.962,48.479 L 48.962,48.418 z M 70.995,48.482 C 70.995,48.481 70.995,48.481 70.995,48.48 L 48.962,48.48 L 48.962,48.482 L 70.995,48.482 z M 73.44,24.001 L 73.437,24.001 L 73.437,46.032 C 73.439,46.032 73.44,46.032 73.442,46.032 C 74.794,46.032 75.89,47.128 75.89,48.48 C 75.89,49.832 74.794,50.928 73.442,50.928 C 72.091,50.928 70.996,49.834 70.994,48.483 L 48.958,48.483 L 48.958,48.486 C 48.96,62.005 59.919,72.964 73.437,72.964 C 86.955,72.964 97.914,62.005 97.916,48.486 L 97.916,48.483 C 97.916,34.963 86.958,24.003 73.44,24.001 z "
style="fill:#6ac7bd"
id="path54" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g56">
<g
id="g58">
<path
d="M 24.482,23.998 L 24.482,23.995 C 10.961,23.994 0,34.955 0,48.476 L 22.034,48.476 C 22.036,47.125 23.131,46.031 24.482,46.031 C 25.834,46.031 26.93,47.127 26.93,48.479 C 26.93,49.831 25.834,50.927 24.482,50.927 C 23.171,50.927 22.11,49.894 22.046,48.6 C 9.669,49.824 0.001,60.262 0.001,72.96 L 0,72.96 L 0,97.442 L 0.003,97.442 C 13.523,97.44 24.482,86.48 24.482,72.961 L 24.485,72.961 C 38.005,72.959 48.963,62 48.963,48.479 L 48.963,48.476 C 48.962,34.957 38.001,23.998 24.482,23.998 z "
style="fill:#ef412a"
id="path60" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g62">
<g
id="g64">
<path
d="M 119.949,48.963 C 119.949,47.611 121.045,46.515 122.397,46.515 C 123.707,46.515 124.768,47.547 124.833,48.842 C 137.211,47.619 146.879,37.18 146.879,24.482 L 122.397,24.482 C 122.396,10.961 111.435,0 97.915,0 L 97.915,24.482 L 122.394,24.482 C 108.874,24.484 97.916,35.444 97.916,48.963 L 97.916,48.963 L 97.916,73.442 L 97.916,73.442 C 110.614,73.442 121.053,63.774 122.276,51.399 C 120.981,51.334 119.949,50.274 119.949,48.963 z M 124.833,49.084 C 124.769,50.379 123.707,51.411 122.397,51.411 L 122.396,51.411 L 122.396,73.444 L 146.878,73.444 L 146.878,73.441 C 146.876,60.745 137.208,50.308 124.833,49.084 z "
style="fill:#a9c542"
id="path66" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g68">
<g
id="g70">
<path
d="M 195.841,48.979 L 195.835,48.964 L 195.841,48.964 L 195.841,48.979 C 195.841,48.974 195.842,48.969 195.842,48.964 L 195.842,24.482 L 195.838,24.482 C 183.14,24.484 172.702,34.154 171.482,46.531 C 172.776,46.595 173.808,47.656 173.808,48.967 C 173.808,50.319 172.712,51.415 171.361,51.415 C 170.007,51.415 168.912,50.319 168.912,48.967 C 168.912,47.656 169.945,46.596 171.24,46.531 C 170.018,34.152 159.579,24.482 146.879,24.482 L 146.879,48.963 C 146.879,62.484 157.84,73.444 171.361,73.444 L 171.361,97.926 L 171.364,97.926 C 184.883,97.924 195.843,86.963 195.843,73.444 L 171.959,73.444 C 185.203,73.126 195.841,62.299 195.841,48.979 z "
style="fill:#f9c759"
id="path72" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g74">
<g
id="g76">
<path
d="M 73.44,24.001 L 73.437,24.001 C 59.919,24.003 48.96,34.959 48.958,48.476 L 48.958,48.479 L 48.961,48.479 L 48.961,48.481 L 48.957,48.482 L 48.957,48.485 C 48.959,62.004 59.918,72.963 73.436,72.963 C 86.954,72.963 97.913,62.004 97.915,48.485 L 97.915,48.482 C 97.916,34.963 86.958,24.003 73.44,24.001 z M 73.442,50.928 C 72.09,50.928 70.994,49.832 70.994,48.48 C 70.994,47.128 72.09,46.032 73.442,46.032 C 74.794,46.032 75.89,47.128 75.89,48.48 C 75.89,49.832 74.794,50.928 73.442,50.928 z "
style="fill:#6ac7bd"
id="path78" />
</g>
</g>
</svg>
1240
handbook/poky-doc-tools/common/titlepage.templates.xml
Normal file
27
handbook/poky-doc-tools/configure.ac
Normal file
@@ -0,0 +1,27 @@
AC_PREREQ(2.53)
AC_INIT(poky-doc-tools, 0.1, http://o-hand.com)
AM_INIT_AUTOMAKE()

AC_PATH_PROG(HAVE_XSLTPROC, xsltproc, no)
if test x$HAVE_XSLTPROC = xno; then
AC_MSG_ERROR([Required xsltproc program not found])
fi

AC_PATH_PROG(HAVE_FOP, fop, no)
if test x$HAVE_FOP = xno; then
AC_MSG_ERROR([Required fop program not found])
fi

AC_CHECK_FILE([/usr/share/xml/docbook/stylesheet/nwalsh/template/titlepage.xsl],HAVE_NWALSH="yes", HAVE_NWALSH="no")
if test x$HAVE_NWALSH = xno; then
AC_MSG_ERROR([Required 'nwalsh' docbook stylesheets not found])
fi

AC_OUTPUT([
Makefile
common/Makefile
])

echo "
== poky-doc-tools $VERSION configured successfully. ==
"
44
handbook/poky-doc-tools/poky-docbook-to-pdf.in
Normal file
@@ -0,0 +1,44 @@
#!/bin/sh

if [ -z "$1" ]; then
echo "usage: $0 [-v] <docbook file>"
echo
echo "*NOTE* you need xsltproc, fop and nwalsh docbook stylesheets"
echo " installed for this to work!"
echo
exit 1
fi

if [ "$1" = "-v" ]; then
echo "Version @version@"
exit 0
fi

BASENAME=`basename $1 .xml` || exit 1
FO="$BASENAME.fo"
PDF="$BASENAME.pdf"

xsltproc -o /tmp/titlepage.xsl \
--xinclude \
/usr/share/xml/docbook/stylesheet/nwalsh/template/titlepage.xsl \
@datadir@/common/titlepage.templates.xml || exit 1

xsltproc --xinclude \
--stringparam hyphenate false \
--stringparam formal.title.placement "figure after" \
--stringparam ulink.show 1 \
--stringparam body.font.master 9 \
--stringparam title.font.master 11 \
--stringparam draft.watermark.image "@datadir@/common/draft.png" \
--output $FO \
@datadir@/common/poky-db-pdf.xsl \
$1 || exit 1

fop -c @datadir@/common/fop-config.xml -fo $FO -pdf $PDF || exit 1

rm -f $FO
rm -f /tmp/titlepage.xsl

echo
echo " #### Success! $PDF ready. ####"
echo
BIN
handbook/poky-handbook.png
Normal file
111
handbook/poky-handbook.xml
Normal file
@@ -0,0 +1,111 @@
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<book id='poky-handbook' lang='en'
xmlns:xi="http://www.w3.org/2003/XInclude"
xmlns="http://docbook.org/ns/docbook"
>
<bookinfo>

<mediaobject>
<imageobject>
<imagedata fileref='common/poky-handbook.png'
format='SVG'
align='center'/>
</imageobject>
</mediaobject>

<title>Poky Handbook</title>
<subtitle>Hitchhiker's Guide to Poky</subtitle>

<authorgroup>
<author>
<firstname>Richard</firstname> <surname>Purdie</surname>
<affiliation>
<orgname>OpenedHand Ltd</orgname>
</affiliation>
<email>richard@openedhand.com</email>
</author>

<author>
<firstname>Tomas</firstname> <surname>Frydrych</surname>
<affiliation>
<orgname>OpenedHand Ltd</orgname>
</affiliation>
<email>tf@openedhand.com</email>
</author>

<author>
<firstname>Marcin</firstname> <surname>Juszkiewicz</surname>
<affiliation>
<orgname>OpenedHand Ltd</orgname>
</affiliation>
<email>hrw@openedhand.com</email>
</author>
<author>
<firstname>Dodji</firstname> <surname>Seketeli</surname>
<affiliation>
<orgname>OpenedHand Ltd</orgname>
</affiliation>
<email>dodji@openedhand.com</email>
</author>
</authorgroup>

<revhistory>
<revision>
<revnumber>3.1</revnumber>
<date>15 February 2008</date>
<revremark>Poky 3.1 (Pinky) Documentation Release</revremark>
</revision>
</revhistory>

<copyright>
<year>2007</year>
<holder>OpenedHand Limited</holder>
</copyright>

<legalnotice>
<para>
Permission is granted to copy, distribute and/or modify this document under
the terms of the <ulink type="http" url="http://creativecommons.org/licenses/by-nc-sa/2.0/uk/">Creative Commons Attribution-Non-Commercial-Share Alike 2.0 UK: England &amp; Wales</ulink> as published by Creative Commons.
</para>
</legalnotice>

</bookinfo>

<xi:include href="introduction.xml"/>

<xi:include href="usingpoky.xml"/>

<xi:include href="extendpoky.xml"/>

<xi:include href="development.xml"/>

<xi:include href="ref-structure.xml"/>

<xi:include href="ref-bitbake.xml"/>

<xi:include href="ref-classes.xml"/>

<xi:include href="ref-images.xml"/>

<xi:include href="ref-features.xml"/>

<xi:include href="ref-variables.xml"/>

<xi:include href="ref-varlocality.xml"/>

<xi:include href="faq.xml"/>

<xi:include href="resources.xml"/>

<xi:include href="contactus.xml"/>

<index id='index'>
<title>Index</title>
</index>

</book>
<!--
vim: expandtab tw=80 ts=4
-->
117
handbook/poky-logo.svg
Normal file
@@ -0,0 +1,117 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 13.0.0, SVG Export Plug-In -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd" [
<!ENTITY ns_flows "http://ns.adobe.com/Flows/1.0/">
]>
<svg version="1.1"
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:a="http://ns.adobe.com/AdobeSVGViewerExtensions/3.0/"
x="0px" y="0px" width="300px" height="300px" viewBox="-40.981 -92.592 300 300" enable-background="new -40.981 -92.592 300 300"
xml:space="preserve">
<defs>
</defs>
<path fill="#6AC7BD" d="M48.96,48.476v0.003h0.001v-0.061C48.962,48.438,48.96,48.457,48.96,48.476z"/>
<g opacity="0.65">
<g>
<path fill="#EF412A" d="M24.482,23.998v-0.003C10.961,23.994,0,34.955,0,48.476l0.001,0.003v0.003
C0.003,62.001,10.962,72.96,24.482,72.96l0,0H0v24.482h0.003c13.52-0.002,24.479-10.962,24.479-24.481h0.003
C38.005,72.959,48.963,62,48.963,48.479v-0.003C48.962,34.957,38.001,23.998,24.482,23.998z M24.482,50.928
c-1.352,0-2.448-1.096-2.448-2.448s1.096-2.448,2.448-2.448s2.448,1.096,2.448,2.448S25.834,50.928,24.482,50.928z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#A9C542" d="M119.96,48.842c0.064-1.294,1.126-2.326,2.437-2.326c1.31,0,2.371,1.032,2.436,2.327
c12.378-1.223,22.046-11.662,22.046-24.36h-24.482C122.396,10.961,111.435,0,97.915,0v24.485
C97.917,37.183,107.584,47.619,119.96,48.842z M124.833,49.084c-0.064,1.295-1.126,2.327-2.436,2.327h-0.001v22.033h24.482v-0.003
C146.876,60.745,137.208,50.308,124.833,49.084z M119.949,48.963H97.915v24.479h0c12.698,0,23.137-9.668,24.36-22.043
C120.981,51.334,119.949,50.274,119.949,48.963z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#F9C759" d="M168.912,48.967c0-1.311,1.033-2.371,2.328-2.436c-1.222-12.379-11.661-22.049-24.361-22.049v24.481
c0,13.521,10.961,24.481,24.482,24.481v-22.03C170.007,51.415,168.912,50.319,168.912,48.967z M195.841,48.978
c0-0.005,0.001-0.009,0.001-0.014V24.482h-0.004c-12.698,0.002-23.136,9.672-24.356,22.049c1.294,0.064,2.326,1.125,2.326,2.436
s-1.032,2.372-2.327,2.436c1.198,12.187,11.333,21.743,23.763,22.042h-23.883v24.482h0.003
c13.515-0.002,24.47-10.954,24.478-24.467h0.002V48.979L195.841,48.978z M195.832,48.964h0.01v0.014L195.832,48.964z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#6AC7BD" d="M70.994,48.479H48.962v0.002h22.033C70.995,48.481,70.994,48.48,70.994,48.479z M73.44,24.001h-0.003
v22.031c0.002,0,0.003,0,0.005,0c1.352,0,2.448,1.096,2.448,2.448s-1.096,2.448-2.448,2.448c-1.351,0-2.446-1.094-2.448-2.445
H48.958v0.003c0.002,13.519,10.961,24.478,24.479,24.478s24.477-10.959,24.479-24.478v-0.003
C97.916,34.963,86.958,24.003,73.44,24.001z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#EF412A" d="M24.482,23.998v-0.003C10.961,23.994,0,34.955,0,48.476h22.034c0.002-1.351,1.097-2.445,2.448-2.445
c1.352,0,2.448,1.096,2.448,2.448s-1.096,2.448-2.448,2.448v22.01C24.469,59.427,13.514,48.479,0,48.479V72.96h24.481l0,0H0
v24.482h0.003c13.52-0.002,24.479-10.962,24.479-24.481h0.003C38.005,72.959,48.963,62,48.963,48.479v-0.003
C48.962,34.957,38.001,23.998,24.482,23.998z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#A9C542" d="M122.397,46.516c1.31,0,2.371,1.032,2.436,2.327c12.378-1.223,22.046-11.662,22.046-24.36h-24.482
L122.397,46.516L122.397,46.516z M97.915,0v24.482h24.481C122.396,10.961,111.435,0,97.915,0z M122.275,46.528
c-1.223-12.377-11.662-22.046-24.361-22.046v24.482h0v24.479h0c12.698,0,23.137-9.668,24.36-22.043
c-1.294-0.065-2.326-1.125-2.326-2.436C119.949,47.653,120.98,46.593,122.275,46.528z M124.833,49.084
c-0.064,1.295-1.126,2.327-2.436,2.327h-0.001v22.033h24.482v-0.003C146.876,60.745,137.208,50.308,124.833,49.084z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#F9C759" d="M173.795,49.1c-0.071,1.289-1.129,2.315-2.435,2.315c-1.354,0-2.449-1.096-2.449-2.448
c0-1.311,1.033-2.371,2.328-2.436c-1.222-12.379-11.661-22.049-24.361-22.049v24.481c0,13.521,10.961,24.481,24.482,24.481v24.482
h0.003c13.515-0.002,24.47-10.954,24.478-24.467h0.001v-0.016h-0.001C195.833,60.753,186.167,50.322,173.795,49.1z
M195.838,24.482c-12.698,0.002-23.136,9.672-24.356,22.049c1.293,0.064,2.324,1.124,2.326,2.433h22.033v0.015
c0-0.005,0.001-0.01,0.001-0.015V24.482H195.838z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#6AC7BD" d="M71.007,48.347c0.068-1.242,1.055-2.23,2.297-2.301c-0.795-8.026-5.454-14.913-12.103-18.762
C57.601,25.2,53.424,24,48.965,24h-0.003c0,4.46,1.199,8.638,3.283,12.24C56.093,42.891,62.98,47.552,71.007,48.347z
M48.962,48.418c0,0.02-0.001,0.038-0.001,0.058v0.003h0.001V48.418z M70.995,48.482c0-0.001,0-0.001,0-0.002H48.962v0.002H70.995
z M73.44,24.001h-0.003v22.031c0.002,0,0.003,0,0.005,0c1.352,0,2.448,1.096,2.448,2.448s-1.096,2.448-2.448,2.448
c-1.351,0-2.446-1.094-2.448-2.445H48.958v0.003c0.002,13.519,10.961,24.478,24.479,24.478s24.477-10.959,24.479-24.478v-0.003
C97.916,34.963,86.958,24.003,73.44,24.001z"/>
|
||||
</g>
|
||||
</g>
|
||||
<g opacity="0.65">
|
||||
<g>
|
||||
<path fill="#EF412A" d="M24.482,23.998v-0.003C10.961,23.994,0,34.955,0,48.476h22.034c0.002-1.351,1.097-2.445,2.448-2.445
|
||||
c1.352,0,2.448,1.096,2.448,2.448s-1.096,2.448-2.448,2.448c-1.311,0-2.372-1.033-2.436-2.327
|
||||
C9.669,49.824,0.001,60.262,0.001,72.96H0v24.482h0.003c13.52-0.002,24.479-10.962,24.479-24.481h0.003
|
||||
C38.005,72.959,48.963,62,48.963,48.479v-0.003C48.962,34.957,38.001,23.998,24.482,23.998z"/>
|
||||
</g>
|
||||
</g>
|
||||
<g opacity="0.65">
|
||||
<g>
|
||||
<path fill="#A9C542" d="M119.949,48.963c0-1.352,1.096-2.448,2.448-2.448c1.31,0,2.371,1.032,2.436,2.327
|
||||
c12.378-1.223,22.046-11.662,22.046-24.36h-24.482C122.396,10.961,111.435,0,97.915,0v24.482h24.479
|
||||
c-13.52,0.002-24.478,10.962-24.478,24.481h0v24.479h0c12.698,0,23.137-9.668,24.36-22.043
|
||||
C120.981,51.334,119.949,50.274,119.949,48.963z M124.833,49.084c-0.064,1.295-1.126,2.327-2.436,2.327h-0.001v22.033h24.482
|
||||
v-0.003C146.876,60.745,137.208,50.308,124.833,49.084z"/>
|
||||
</g>
|
||||
</g>
|
||||
<g opacity="0.65">
|
||||
<g>
|
||||
<path fill="#F9C759" d="M195.841,48.979l-0.006-0.015h0.006V48.979c0-0.005,0.001-0.01,0.001-0.015V24.482h-0.004
|
||||
c-12.698,0.002-23.136,9.672-24.356,22.049c1.294,0.064,2.326,1.125,2.326,2.436c0,1.352-1.096,2.448-2.447,2.448
|
||||
c-1.354,0-2.449-1.096-2.449-2.448c0-1.311,1.033-2.371,2.328-2.436c-1.222-12.379-11.661-22.049-24.361-22.049v24.481
|
||||
c0,13.521,10.961,24.481,24.482,24.481v24.482h0.003c13.519-0.002,24.479-10.963,24.479-24.482h-23.884
|
||||
C185.203,73.126,195.841,62.299,195.841,48.979z"/>
|
||||
</g>
|
||||
</g>
|
||||
<g opacity="0.65">
|
||||
<g>
|
||||
<path fill="#6AC7BD" d="M73.44,24.001h-0.003C59.919,24.003,48.96,34.959,48.958,48.476v0.003h0.003v0.002l-0.004,0.001v0.003
|
||||
c0.002,13.519,10.961,24.478,24.479,24.478s24.477-10.959,24.479-24.478v-0.003C97.916,34.963,86.958,24.003,73.44,24.001z
|
||||
M73.442,50.928c-1.352,0-2.448-1.096-2.448-2.448s1.096-2.448,2.448-2.448s2.448,1.096,2.448,2.448S74.794,50.928,73.442,50.928z
|
||||
"/>
|
||||
</g>
|
||||
</g>
|
||||
</svg>
|
||||
|
After Width: | Height: | Size: 6.9 KiB |
340
handbook/ref-bitbake.xml
Normal file
@@ -0,0 +1,340 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<appendix id='ref-bitbake'>

<title>Reference: Bitbake</title>

<para>
BitBake is a program written in Python which interprets the metadata
that makes up Poky. At some point, people wonder what actually happens
when you type <command>bitbake poky-image-sato</command>. This section
aims to give an overview of what happens behind the scenes from a
BitBake perspective.
</para>

<para>
It is worth noting that BitBake aims to be a generic "task" executor
capable of handling complex dependency relationships. As such, it has no
real knowledge of what the tasks it is executing actually do. It simply
considers a list of tasks with dependencies and handles metadata
consisting of variables in a certain format, which it passes on to the
tasks.
</para>

<section id='ref-bitbake-parsing'>
<title>Parsing</title>

<para>
The first thing BitBake does is work out its configuration by
looking for a file called <filename>bitbake.conf</filename>.
BitBake searches the directories listed in the <varname>BBPATH</varname>
environment variable for a <filename class="directory">conf/</filename>
directory containing a <filename>bitbake.conf</filename> file, and uses
the first <filename>bitbake.conf</filename> file found
(<varname>BBPATH</varname> is searched in a similar way to the PATH
environment variable). For Poky, <filename>bitbake.conf</filename> is
found in <filename class="directory">meta/conf/</filename>.
</para>

<para>
In Poky, <filename>bitbake.conf</filename> lists other configuration
files to include from a <filename class="directory">conf/</filename>
directory below the directories listed in <varname>BBPATH</varname>.
In general, the most important configuration file from a user's perspective
is <filename>local.conf</filename>, which contains a user's customized
settings for Poky. Other notable configuration files are the distribution
configuration file (set by the <glossterm><link linkend='var-DISTRO'>
DISTRO</link></glossterm> variable) and the machine configuration file
(set by the <glossterm><link linkend='var-MACHINE'>MACHINE</link>
</glossterm> variable). The <glossterm><link linkend='var-DISTRO'>
DISTRO</link></glossterm> and <glossterm><link linkend='var-MACHINE'>
MACHINE</link></glossterm> variables are both usually set in
the <filename>local.conf</filename> file. Valid distribution
configuration files are available in the <filename class="directory">
meta/conf/distro/</filename> directory and valid machine configuration
files in the <filename class="directory">meta/conf/machine/</filename>
directory. Within the <filename class="directory">
meta/conf/machine/include/</filename> directory are various <filename>
tune-*.inc</filename> configuration files which provide common
"tuning" settings specific to and shared between particular
architectures and machines.
</para>
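
<para>
As an illustration of the above, a minimal <filename>local.conf</filename>
might select a machine and distribution as follows; the particular values
shown here are only examples, not defaults:
</para>
<programlisting>
# Illustrative local.conf fragment
MACHINE = "qemux86"
DISTRO = "poky"
</programlisting>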

<para>
After the parsing of the configuration files, some standard classes
are included. In particular, <filename>base.bbclass</filename> is
always included, as are any other classes
specified in the configuration using the <glossterm><link
linkend='var-INHERIT'>INHERIT</link></glossterm>
variable. Class files are searched for in a <filename
class="directory">classes/</filename> subdirectory
under the paths in <varname>BBPATH</varname> in the same way as
configuration files.
</para>

<para>
After the parsing of the configuration files is complete, the
variable <glossterm><link linkend='var-BBFILES'>BBFILES</link></glossterm>
is set, usually in
<filename>local.conf</filename>, and defines the list of places to search for
<filename class="extension">.bb</filename> files. By
default this specifies the <filename class="directory">meta/packages/
</filename> directory within Poky, but other directories such as
<filename class="directory">meta-extras/</filename> can be included
too. If multiple directories are specified, a system referred to as
<link linkend='usingpoky-changes-collections'>"collections"</link> is used to
determine which files take priority.
</para>
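
<para>
As a sketch, a <glossterm><link linkend='var-BBFILES'>BBFILES</link></glossterm>
setting in <filename>local.conf</filename> covering two such directories might
look like the following; the exact paths are illustrative:
</para>
<programlisting>
BBFILES := "${OEROOT}/meta/packages/*/*.bb ${OEROOT}/meta-extras/packages/*/*.bb"
</programlisting>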

<para>
BitBake parses each <filename class="extension">.bb</filename> file in
<glossterm><link linkend='var-BBFILES'>BBFILES</link></glossterm> and
stores the values of various variables. In summary, for each
<filename class="extension">.bb</filename>
file the configuration and base class variables are set first, followed
by the data in the <filename class="extension">.bb</filename> file
itself, followed by the data from any inherit commands that
<filename class="extension">.bb</filename> file might contain.
</para>

<para>
Parsing <filename class="extension">.bb</filename> files is a time
consuming process, so a cache is kept to speed up subsequent parsing.
This cache is invalidated if the timestamp of the <filename class="extension">.bb</filename>
file itself has changed, or if the timestamps of any of the include,
configuration or class files the <filename class="extension">.bb</filename>
file depends on have changed.
</para>
</section>

<section id='ref-bitbake-providers'>
<title>Preferences and Providers</title>

<para>
Once all the <filename class="extension">.bb</filename> files have been
parsed, BitBake proceeds to build "poky-image-sato" (or whatever was
specified on the command line) by looking for providers of that target.
Once a provider is selected, BitBake resolves all the dependencies for
the target. In the case of "poky-image-sato", this leads to
<filename>task-oh.bb</filename> and <filename>task-base.bb</filename>,
which in turn lead to packages such as <application>Contacts</application>,
<application>Dates</application> and <application>BusyBox</application>,
and these in turn depend on glibc and the toolchain.
</para>

<para>
Sometimes a target might have multiple providers; a common example
is "virtual/kernel", which is provided by each kernel package. Each machine
often selects the best provider for its kernel with a line like the
following in the machine configuration file:
</para>
<programlisting><glossterm><link linkend='var-PREFERRED_PROVIDER'>PREFERRED_PROVIDER</link></glossterm>_virtual/kernel = "linux-rp"</programlisting>
<para>
The default <glossterm><link linkend='var-PREFERRED_PROVIDER'>
PREFERRED_PROVIDER</link></glossterm> is the provider with the same name as
the target.
</para>

<para>
Understanding how providers are chosen is complicated by the fact that
multiple versions might be present. BitBake defaults to the highest
version of a provider. Version comparisons are made using
the same method as Debian. The <glossterm><link
linkend='var-PREFERRED_VERSION'>PREFERRED_VERSION</link></glossterm>
variable can be used to specify a particular version
(usually in the distro configuration), but the order can
also be influenced by the <glossterm><link
linkend='var-DEFAULT_PREFERENCE'>DEFAULT_PREFERENCE</link></glossterm>
variable. By default, files
have a preference of "0". Setting
<glossterm><link
linkend='var-DEFAULT_PREFERENCE'>DEFAULT_PREFERENCE</link></glossterm> to "-1" makes
a package unlikely to be used unless it is explicitly referenced, while
"1" makes it likely the package will be used.
<glossterm><link
linkend='var-PREFERRED_VERSION'>PREFERRED_VERSION</link></glossterm> overrides
any default preference. <glossterm><link
linkend='var-DEFAULT_PREFERENCE'>DEFAULT_PREFERENCE</link></glossterm>
is often used to mark more
experimental new versions of packages until they've undergone sufficient
testing to be considered stable.
</para>
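
<para>
For example, a distribution configuration might pin a particular version of a
package with a line such as the following; the package name and version here
are purely illustrative:
</para>
<programlisting>
<glossterm><link linkend='var-PREFERRED_VERSION'>PREFERRED_VERSION</link></glossterm>_glib-2.0 = "2.12.3"
</programlisting>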

<para>
The end result is that BitBake has now internally built a list of
providers, in order of priority, for each target it needs.
</para>
</section>

<section id='ref-bitbake-dependencies'>
<title>Dependencies</title>

<para>
Each target BitBake builds consists of multiple tasks (e.g. fetch,
unpack, patch, configure, compile etc.). For best performance on
multi-core systems, BitBake considers each task as an independent
entity with a set of dependencies. There are many variables
used to signify these dependencies, and more information about them can
be found in the <ulink url='http://bitbake.berlios.de/manual/'>
BitBake manual</ulink>. At a basic level it is sufficient to know
that BitBake uses the <glossterm><link
linkend='var-DEPENDS'>DEPENDS</link></glossterm> and
<glossterm><link linkend='var-RDEPENDS'>RDEPENDS</link></glossterm> variables when
calculating dependencies; descriptions of these variables are
available through the links.
</para>

</section>

<section id='ref-bitbake-tasklist'>
<title>The Task List</title>

<para>
Based on the generated list of providers and the dependency information,
BitBake can now calculate exactly which tasks it needs to run and in what
order. The build then starts with BitBake forking off threads, up to
the limit set in the <glossterm><link
linkend='var-BB_NUMBER_THREADS'>BB_NUMBER_THREADS</link></glossterm> variable,
as long as there are tasks ready to run, i.e. tasks with all their
dependencies met.
</para>

<para>
As each task completes, a timestamp is written to the directory
specified by the <glossterm><link
linkend='var-STAMPS'>STAMPS</link></glossterm> variable (usually
<filename class="directory">build/tmp/stamps/*/</filename>). On
subsequent runs, BitBake looks at the <glossterm><link
linkend='var-STAMPS'>STAMPS</link></glossterm>
directory and will not rerun
tasks it has already completed unless a timestamp is found to be invalid.
Currently, invalid timestamps are only considered on a per <filename
class="extension">.bb</filename> file basis, so if, for example, the configure stamp has a timestamp greater than the
compile timestamp for a given target, the compile task would rerun, but this
has no effect on other providers depending on that target. This could
change or become configurable in future versions of BitBake. Some tasks
are marked as "nostamp" tasks, which means no timestamp file is written
and the task always reruns.
</para>
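
<para>
To illustrate how the stamps interact with the command line, the existing
stamp for a single task can be ignored and the task rerun by combining the
<option>-b</option>, <option>-c</option> and <option>-f</option> options
(the recipe name here is only an example):
</para>
<screen>$ bitbake -b somepackage.bb -c compile -f</screen>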

<para>Once all the tasks have been completed, BitBake exits.</para>

</section>

<section id='ref-bitbake-runtask'>
<title>Running a Task</title>

<para>
It's worth noting what BitBake does to run a task. A task can either
be a shell task or a python task. For shell tasks, BitBake writes a
shell script to <filename>${WORKDIR}/temp/run.do_taskname.pid</filename>
and then executes the script. The generated
shell script contains all the exported variables, and the shell functions
with all variables expanded. Output from the shell script is
sent to the file <filename>${WORKDIR}/temp/log.do_taskname.pid</filename>.
Looking at the
expanded shell functions in the run file and the output in the log files
is a useful debugging technique.
</para>

<para>
Python functions are executed internally to BitBake itself, and
logging goes to the controlling terminal. Future versions of BitBake will
write the functions to files in a similar way to shell functions, and
logging will also go to the log files in a similar way.
</para>
</section>


<section id='ref-bitbake-commandline'>
<title>Commandline</title>

<para>
To quote from "bitbake --help":
</para>

<screen>Usage: bitbake [options] [package ...]

Executes the specified task (default is 'build') for a given set of BitBake files.
It expects that BBFILES is defined, which is a space separated list of files to
be executed. BBFILES does support wildcards.
Default BBFILES are the .bb files in the current directory.

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -b BUILDFILE, --buildfile=BUILDFILE
                        execute the task against this .bb file, rather than a
                        package from BBFILES.
  -k, --continue        continue as much as possible after an error. While the
                        target that failed, and those that depend on it,
                        cannot be remade, the other dependencies of these
                        targets can be processed all the same.
  -f, --force           force run of specified cmd, regardless of stamp status
  -i, --interactive     drop into the interactive mode also called the BitBake
                        shell.
  -c CMD, --cmd=CMD     Specify task to execute. Note that this only executes
                        the specified task for the providee and the packages
                        it depends on, i.e. 'compile' does not implicitly call
                        stage for the dependencies (IOW: use only if you know
                        what you are doing). Depending on the base.bbclass a
                        listtasks tasks is defined and will show available
                        tasks
  -r FILE, --read=FILE  read the specified file before bitbake.conf
  -v, --verbose         output more chit-chat to the terminal
  -D, --debug           Increase the debug level. You can specify this more
                        than once.
  -n, --dry-run         don't execute, just go through the motions
  -p, --parse-only      quit after parsing the BB files (developers only)
  -d, --disable-psyco   disable using the psyco just-in-time compiler (not
                        recommended)
  -s, --show-versions   show current and preferred versions of all packages
  -e, --environment     show the global or per-package environment (this is
                        what used to be bbread)
  -g, --graphviz        emit the dependency trees of the specified packages in
                        the dot syntax
  -I IGNORED_DOT_DEPS, --ignore-deps=IGNORED_DOT_DEPS
                        Stop processing at the given list of dependencies when
                        generating dependency graphs. This can help to make
                        the graph more appealing
  -l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
                        Show debug logging for the specified logging domains
  -P, --profile         profile the command and print a report</screen>

</section>

<section id='ref-bitbake-fetchers'>
<title>Fetchers</title>

<para>
As well as containing the parsing and task/dependency handling
code, BitBake also contains a set of "fetcher" modules which allow
source code to be fetched from various types of sources, for
example from disk with the metadata, from websites, from
remote shell accounts or from SCM systems like cvs/subversion/git.
</para>

<para>
The fetchers are usually triggered by entries in
<glossterm><link linkend='var-SRC_URI'>SRC_URI</link></glossterm>. Information about the
options and formats of entries for specific fetchers can be found in the
<ulink url='http://bitbake.berlios.de/manual/'>BitBake manual</ulink>.
</para>
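
<para>
As a sketch, a <glossterm><link linkend='var-SRC_URI'>SRC_URI</link></glossterm>
combining two fetchers (a web download plus a local patch file shipped with
the metadata) might look like this; the URL and file names are invented for
illustration:
</para>
<programlisting>
SRC_URI = "http://www.example.com/releases/somepackage-1.0.tar.gz \
           file://fix-build.patch;patch=1"
</programlisting>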

<para>
One useful feature of certain SCM fetchers is the ability to
"auto-update" when the upstream SCM changes version. Since this
requires certain functionality from the SCM, only some systems
support it: currently Subversion, Bazaar and, to a limited extent, Git. It
works using the <glossterm><link linkend='var-SRCREV'>SRCREV</link>
</glossterm> variable. See the <link linkend='platdev-appdev-srcrev'>
developing with an external SCM based project</link> section for more
information.
</para>

</section>

</appendix>
<!--
vim: expandtab tw=80 ts=4 spell spelllang=en_gb
-->
460
handbook/ref-classes.xml
Normal file
@@ -0,0 +1,460 @@

<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<appendix id='ref-classes'>
<title>Reference: Classes</title>

<para>
Class files are used to abstract common functionality and share it amongst multiple
<filename class="extension">.bb</filename> files. Any metadata usually found in a
<filename class="extension">.bb</filename> file can also be placed in a class
file. Class files are identified by the extension
<filename class="extension">.bbclass</filename> and are usually placed
in a <filename class="directory">classes/</filename> directory beneath the
<filename class="directory">meta/</filename> directory or the <filename
class="directory">build/</filename> directory, in the same way as <filename
class="extension">.conf</filename> files in the <filename
class="directory">conf</filename> directory. Class files are searched for
in BBPATH in the same way as <filename class="extension">.conf</filename> files.
</para>

<para>
In most cases inheriting the class is enough to enable its features, although
for some classes you may need to set variables and/or override some of the
default behaviour.
</para>

<section id='ref-classes-base'>
<title>The base class - <filename>base.bbclass</filename></title>

<para>
The base class is special in that every <filename class="extension">.bb</filename>
file inherits it automatically. It contains definitions of standard basic
tasks such as fetching, unpacking, configuring (empty by default), compiling
(runs any Makefile present), installing (empty by default) and packaging
(empty by default). These are often overridden or extended by other classes
such as <filename>autotools.bbclass</filename> or
<filename>package.bbclass</filename>. The class also contains
some commonly used functions such as <function>oe_libinstall</function>
and <function>oe_runmake</function>. The end of the class file has a
list of standard mirrors for software projects for use by the fetcher code.
</para>
</section>

<section id='ref-classes-autotools'>
<title>Autotooled Packages - <filename>autotools.bbclass</filename></title>

<para>
Autotools (autoconf, automake, libtool) brings standardisation, and this
class aims to define a set of tasks (configure, compile etc.) that will
work for all autotooled packages. It should usually be enough to define
a few standard variables as documented in the <link
linkend='usingpoky-extend-addpkg-autotools'>simple autotools
example</link> section and then simply "inherit autotools". This class
can also work with software that emulates autotools.
</para>

<para>
It's useful to have some idea of how the tasks this class defines work
and what they do behind the scenes.
</para>

<itemizedlist>
<listitem>
<para>
'do_configure' regenerates the configure script and
then launches it with a standard set of arguments used during
cross-compilation. Additional parameters can be passed to
<command>configure</command> through the <glossterm><link
linkend='var-EXTRA_OECONF'>EXTRA_OECONF</link></glossterm> variable.
</para>
</listitem>
<listitem>
<para>
'do_compile' runs <command>make</command> with arguments specifying
the compiler and linker. Additional arguments can be passed through
the <glossterm><link linkend='var-EXTRA_OEMAKE'>EXTRA_OEMAKE</link>
</glossterm> variable.
</para>
</listitem>
<listitem>
<para>
'do_install' runs <command>make install</command> passing a DESTDIR
option taking its value from the standard <glossterm><link
linkend='var-DESTDIR'>DESTDIR</link></glossterm> variable.
</para>
</listitem>
</itemizedlist>
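
<para>
Putting the tasks above together, a recipe for an autotooled package
typically only needs to inherit the class and, where required, pass extra
configure arguments; the particular option shown below is illustrative:
</para>
<programlisting>
inherit autotools

EXTRA_OECONF = "--disable-gtk-doc"
</programlisting>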

<para>
By default the class does not stage headers and libraries, so
the recipe author needs to add their own <function>do_stage()</function>
task. For typical recipes the following example code will usually be
enough:
<programlisting>
do_stage() {
	autotools_stage_all
}</programlisting>
</para>
</section>

<section id='ref-classes-update-alternatives'>
<title>Alternatives - <filename>update-alternatives.bbclass</filename></title>

<para>
Several programs can fulfill the same or a similar function and
be installed with the same name. For example, the <command>ar</command>
command is available from the "busybox", "binutils" and "elfutils" packages.
This class handles renaming the binaries so that multiple packages
which would otherwise conflict can be installed, while the
<command>ar</command> command still works regardless of which of them are installed
or subsequently removed. It renames the conflicting binary in each package
and symlinks the highest priority binary during installation or removal
of packages.

Four variables control this class:
</para>


<variablelist>
<varlistentry>
<term>ALTERNATIVE_NAME</term>
<listitem>
<para>
Name of the binary which will be replaced (<command>ar</command> in this example)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>ALTERNATIVE_LINK</term>
<listitem>
<para>
Path to the resulting binary ("/bin/ar" in this example)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>ALTERNATIVE_PATH</term>
<listitem>
<para>
Path to the real binary ("/usr/bin/ar.binutils" in this example)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>ALTERNATIVE_PRIORITY</term>
<listitem>
<para>
Priority of the binary; the version with the most features should have the highest priority
</para>
</listitem>
</varlistentry>
</variablelist>
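
<para>
Continuing the <command>ar</command> example, a recipe using this class
could set the four variables as follows; the priority value here is only
illustrative:
</para>
<programlisting>
inherit update-alternatives

ALTERNATIVE_NAME = "ar"
ALTERNATIVE_LINK = "/bin/ar"
ALTERNATIVE_PATH = "/usr/bin/ar.binutils"
ALTERNATIVE_PRIORITY = "100"
</programlisting>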
</section>

<section id='ref-classes-update-rc.d'>
|
||||
<title>Initscripts - <filename>update-rc.d.bbclass</filename></title>
|
||||
|
||||
<para>
|
||||
This class uses update-rc.d to safely install an initscript on behalf of
|
||||
the package. Details such as making sure the initscript is stopped before
|
||||
a package is removed and started when the package is installed are taken
|
||||
care of. Three variables control this class,
|
||||
<link linkend='var-INITSCRIPT_PACKAGES'>INITSCRIPT_PACKAGES</link>,
|
||||
<link linkend='var-INITSCRIPT_NAME'>INITSCRIPT_NAME</link> and
|
||||
<link linkend='var-INITSCRIPT_PARAMS'>INITSCRIPT_PARAMS</link>. See the
|
||||
links for details.
|
||||
</para>
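
<para>
A hypothetical recipe installing an initscript for a daemon might use the
class like this; the script name and parameters are invented for
illustration:
</para>
<programlisting>
inherit update-rc.d

INITSCRIPT_PACKAGES = "${PN}"
INITSCRIPT_NAME = "mydaemon"
INITSCRIPT_PARAMS = "defaults 65"
</programlisting>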
</section>

<section id='ref-classes-binconfig'>
<title>Binary config scripts - <filename>binconfig.bbclass</filename></title>

<para>
Before pkg-config became widespread, libraries shipped shell
scripts to give information about the libraries and include paths needed
to build software (usually named 'LIBNAME-config'). This class assists
any recipe using such scripts.
</para>

<para>
During staging, BitBake installs such scripts into the <filename
class="directory">staging/</filename> directory. It also changes all
paths to point into the <filename class="directory">staging/</filename>
directory so that all builds which use the script use the correct
directories for the cross compiling layout.
</para>
</section>

<section id='ref-classes-debian'>
<title>Debian renaming - <filename>debian.bbclass</filename></title>

<para>
This class renames packages so that they follow the Debian naming
policy, i.e. 'glibc' becomes 'libc6' and 'glibc-devel' becomes
'libc6-dev'.
</para>
</section>

<section id='ref-classes-pkgconfig'>
<title>Pkg-config - <filename>pkgconfig.bbclass</filename></title>

<para>
Pkg-config brought standardisation, and this class aims to make its
integration smooth for all libraries which make use of it.
</para>

<para>
During staging, BitBake installs pkg-config data into the <filename
class="directory">staging/</filename> directory. By making use of
sysroot functionality within pkg-config, this class no longer has to
manipulate the files.
</para>
</section>

<section id='ref-classes-src-distribute'>
<title>Distribution of sources - <filename>src_distribute_local.bbclass</filename></title>

<para>
Many software licenses require providing the sources for compiled
binaries. To simplify this process two classes were created:
<filename>src_distribute.bbclass</filename> and
<filename>src_distribute_local.bbclass</filename>.
</para>

<para>
The result of their work is a set of <filename class="directory">tmp/deploy/source/</filename>
subdirectories with the sources sorted by the <glossterm><link linkend='var-LICENSE'>LICENSE</link>
</glossterm> field. If a recipe lists several licenses (or has entries like "Bitstream Vera"), the source archive is placed in each
license directory.
</para>

<para>
The src_distribute_local class has three modes of operation:
</para>

<itemizedlist>
<listitem><para>copy - copies the files to the distribute directory</para></listitem>
<listitem><para>symlink - symlinks the files to the distribute directory</para></listitem>
<listitem><para>move+symlink - moves the files into the distribute directory, and symlinks them back</para></listitem>
</itemizedlist>
</section>
|
||||
|
||||
        <section id='ref-classes-perl'>
            <title>Perl modules - <filename>cpan.bbclass</filename></title>

            <para>
                Recipes for Perl modules are simple - usually they only need to
                point to the source archive and inherit the proper class.
                Building is split into two methods depending on which build
                system the module authors used.
            </para>

            <para>
                Modules which use the old Makefile.PL based build system require
                the use of <filename>cpan.bbclass</filename> in their recipes.
            </para>

            <para>
                Modules which use the Build.PL based build system require
                the use of <filename>cpan_build.bbclass</filename> in their recipes.
            </para>
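            <para>
                As an illustration, a minimal recipe for a hypothetical Makefile.PL
                based module might look like the following (the module name, source
                location and metadata are placeholders, not a real recipe shipped
                with Poky):
            </para>

```
DESCRIPTION = "Example Perl module"
LICENSE = "Artistic-1.0 | GPL-1.0+"

# Placeholder source location for the hypothetical module
SRC_URI = "http://www.cpan.org/modules/by-module/Example/Example-Module-1.0.tar.gz"

S = "${WORKDIR}/Example-Module-1.0"

# Pulls in the Makefile.PL based build and install steps
inherit cpan
```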
        </section>
        <section id='ref-classes-distutils'>
            <title>Python extensions - <filename>distutils.bbclass</filename></title>

            <para>
                Recipes for Python extensions are simple - usually they only need
                to point to the source archive and inherit the proper class.
                Building is split into two methods depending on which build
                system the extension authors used.
            </para>

            <para>
                Extensions which use an autotools based build system require the
                use of the autotools and distutils-base classes in their recipes.
            </para>

            <para>
                Extensions which use the distutils build system require the use
                of <filename>distutils.bbclass</filename> in their recipes.
            </para>
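            <para>
                As a sketch, a recipe for a hypothetical distutils based extension
                might look like this (the name and source location are placeholders):
            </para>

```
DESCRIPTION = "Example Python extension"
LICENSE = "MIT"

# Placeholder source location for the hypothetical extension
SRC_URI = "http://example.org/downloads/example-ext-1.0.tar.gz"

S = "${WORKDIR}/example-ext-1.0"

# Pulls in the setup.py based build and install steps
inherit distutils
```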
        </section>
        <section id='ref-classes-devshell'>
            <title>Developer Shell - <filename>devshell.bbclass</filename></title>

            <para>
                This class adds the devshell task. It is usually up to distribution policy
                whether to include this class (Poky does). See the <link
                linkend='platdev-appdev-devshell'>developing with 'devshell' section</link>
                for more information about using devshell.
            </para>
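            <para>
                Assuming the class is included by the distribution, the task can be
                invoked against any recipe; for example (the recipe name here is
                only an example):
            </para>

```
bitbake matchbox-desktop -c devshell
```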
        </section>
        <section id='ref-classes-package'>
            <title>Packaging - <filename>package*.bbclass</filename></title>

            <para>
                The packaging classes add support for generating packages from the
                output of builds. The core generic functionality is in
                <filename>package.bbclass</filename>; code specific to particular package
                types is contained in various subclasses such as
                <filename>package_deb.bbclass</filename> and <filename>package_ipk.bbclass</filename>.
                Most users will want one or more of these classes, and this is
                controlled by the <glossterm>
                <link linkend='var-PACKAGE_CLASSES'>PACKAGE_CLASSES</link></glossterm>
                variable. The first class listed in this variable is used for image
                generation. Since images are generated from packages, a packaging class is
                needed to enable image generation.
            </para>
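            <para>
                For example, to generate both ipk and deb packages, with the ipk
                packages used for image generation, local.conf could contain:
            </para>

```
PACKAGE_CLASSES = "package_ipk package_deb"
```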
        </section>
        <section id='ref-classes-kernel'>
            <title>Building kernels - <filename>kernel.bbclass</filename></title>

            <para>
                This class handles building Linux kernels, and contains code to
                build both 2.4 and 2.6 kernel trees. All needed headers are
                staged into the <glossterm><link
                linkend='var-STAGING_KERNEL_DIR'>STAGING_KERNEL_DIR</link></glossterm>
                directory to allow building out-of-tree modules using <filename>module.bbclass</filename>.
            </para>
            <para>
                This means that each kernel module built is packaged separately, and
                inter-module dependencies are created by parsing the
                <command>modinfo</command> output. If all modules are
                required, installing the "kernel-modules" package installs all
                packages with modules; various other kernel packages such as
                "kernel-vmlinux" are also generated.
            </para>
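            <para>
                As a sketch, a recipe for a hypothetical out-of-tree kernel module
                built with <filename>module.bbclass</filename> might look like this
                (the name and source location are placeholders):
            </para>

```
DESCRIPTION = "Example out-of-tree kernel module"
LICENSE = "GPLv2"

# Placeholder source location for the hypothetical module
SRC_URI = "http://example.org/downloads/example-module-1.0.tar.gz"

S = "${WORKDIR}/example-module-1.0"

# Builds against the kernel headers staged in STAGING_KERNEL_DIR
inherit module
```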
            <para>
                Various other classes are used by the kernel and module classes internally, including
                <filename>kernel-arch.bbclass</filename>, <filename>module_strip.bbclass</filename>,
                <filename>module-base.bbclass</filename> and <filename>linux-kernel-base.bbclass</filename>.
            </para>
        </section>
        <section id='ref-classes-image'>
            <title>Creating images - <filename>image.bbclass</filename> and <filename>rootfs*.bbclass</filename></title>

            <para>
                These classes add support for creating images in many formats. First the
                root filesystem is created from packages by one of the <filename>rootfs_*.bbclass</filename>
                files (depending on the package format used), and then the image is created.

                The <glossterm><link
                linkend='var-IMAGE_FSTYPES'>IMAGE_FSTYPES</link></glossterm>
                variable controls which types of image to generate.

                The list of packages to install into the image is controlled by the
                <glossterm><link
                linkend='var-IMAGE_INSTALL'>IMAGE_INSTALL</link></glossterm>
                variable.
            </para>
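            <para>
                For example, local.conf might select the image formats to generate
                and an extra package to install like this (the values here are
                illustrative):
            </para>

```
IMAGE_FSTYPES = "tar.gz ext2 jffs2"
IMAGE_INSTALL += "dropbear"
```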
        </section>
        <section id='ref-classes-sanity'>
            <title>Host System sanity checks - <filename>sanity.bbclass</filename></title>

            <para>
                This class checks that prerequisite software is present, in order
                to identify and notify the user of problems which would affect the
                build. It also performs basic checks of the user's configuration
                from local.conf to prevent common mistakes and the resulting build
                failures. It is usually up to distribution policy whether to
                include this class (Poky does).
            </para>
        </section>
        <section id='ref-classes-insane'>
            <title>Generated output quality assurance checks - <filename>insane.bbclass</filename></title>

            <para>
                This class adds a step to package generation which sanity checks the
                packages generated by Poky. It performs an ever-increasing range of
                checks, looking for common problems which break builds, packages or
                images; see the class file for more information. It is usually up to
                distribution policy whether to include this class (Poky doesn't at
                the time of writing, but plans to soon).
            </para>
        </section>
        <section id='ref-classes-siteinfo'>
            <title>Autotools configuration data cache - <filename>siteinfo.bbclass</filename></title>

            <para>
                Autotools can require tests which have to execute on the target hardware.
                Since this isn't possible in general when cross-compiling, siteinfo is
                used to provide cached test results so these tests can be skipped while
                the correct values are still used. The <link linkend='structure-meta-site'>meta/site directory</link>
                contains test results sorted into different categories such as architecture, endianness and
                the libc used. Siteinfo provides a list of files containing data relevant to
                the current build in the <glossterm><link linkend='var-CONFIG_SITE'>CONFIG_SITE
                </link></glossterm> variable, which autotools automatically picks up.
            </para>
            <para>
                The class also provides variables like <glossterm><link
                linkend='var-SITEINFO_ENDIANESS'>SITEINFO_ENDIANESS</link></glossterm>
                and <glossterm><link linkend='var-SITEINFO_BITS'>SITEINFO_BITS</link>
                </glossterm> which can be used elsewhere in the metadata.
            </para>
            <para>
                This class is included from <filename>base.bbclass</filename> and is therefore always active.
            </para>
        </section>
        <section id='ref-classes-others'>
            <title>Other Classes</title>

            <para>
                Only the most useful and important classes are covered here; for
                the others, see the <filename class="directory">meta/classes</filename> directory.
            </para>
        </section>
<!-- Undocumented classes are:
    base_srpm.bbclass
    bootimg.bbclass
    ccache.inc
    ccdv.bbclass
    cml1.bbclass
    cross.bbclass
    flow-lossage.bbclass
    gconf.bbclass
    gettext.bbclass
    gnome.bbclass
    gtk-icon-cache.bbclass
    icecc.bbclass
    lib_package.bbclass
    mozilla.bbclass
    multimachine.bbclass
    native.bbclass
    oelint.bbclass
    patch.bbclass
    patcher.bbclass
    pkg_distribute.bbclass
    pkg_metainfo.bbclass
    poky.bbclass
    rm_work.bbclass
    rpm_core.bbclass
    scons.bbclass
    sdk.bbclass
    sdl.bbclass
    sip.bbclass
    sourcepkg.bbclass
    srec.bbclass
    syslinux.bbclass
    tinderclient.bbclass
    tmake.bbclass
    xfce.bbclass
    xlibs.bbclass
-->
</appendix>

<!--
vim: expandtab tw=80 ts=4
-->
<!-- handbook/ref-features.xml -->
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<appendix id='ref-features'>
    <title>Reference: Features</title>

    <para>'Features' provide a mechanism for working out which packages
        should be included in the generated images. Distributions can
        select which features they want to support through the
        <glossterm linkend='var-DISTRO_FEATURES'><link
        linkend='var-DISTRO_FEATURES'>DISTRO_FEATURES</link></glossterm>
        variable, which is set in the distribution configuration file
        (poky.conf for Poky). Machine features are set in the
        <glossterm linkend='var-MACHINE_FEATURES'><link
        linkend='var-MACHINE_FEATURES'>MACHINE_FEATURES</link></glossterm>
        variable, which is set in the machine configuration file and
        specifies which hardware features a given machine has.
    </para>
    <para>These two variables are combined to work out which kernel modules,
        utilities and other packages to include. A given distribution can
        support a selected subset of features, so some machine features might
        not be included if the distribution itself doesn't support them.
    </para>
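    <para>
        As an illustration, the two variables might be set as follows (the
        feature lists here are examples, not the actual Poky or machine
        defaults):
    </para>

```
# In the distribution configuration file (e.g. poky.conf):
DISTRO_FEATURES = "alsa bluetooth ext2 irda keyboard pci usbhost wifi ppp"

# In the machine configuration file:
MACHINE_FEATURES = "alsa bluetooth keyboard screen touchscreen usbhost wifi"
```

    <para>
        With these settings, Bluetooth support would be included because both
        the distribution and the machine list the feature.
    </para>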
    <section id='ref-features-distro'>
        <title>Distro</title>

        <para>The items below are valid options for <glossterm linkend='var-DISTRO_FEATURES'><link
            linkend='var-DISTRO_FEATURES'>DISTRO_FEATURES</link></glossterm>.
        </para>

        <itemizedlist>
            <listitem>
                <para>
                    alsa - ALSA support will be included (OSS compatibility
                    kernel modules will be installed if available)
                </para>
            </listitem>
            <listitem>
                <para>
                    bluetooth - Include Bluetooth support (integrated BT only)
                </para>
            </listitem>
            <listitem>
                <para>
                    ext2 - Include tools for supporting devices with an internal
                    HDD/Microdrive for storing files (instead of Flash-only devices)
                </para>
            </listitem>
            <listitem>
                <para>
                    irda - Include IrDA support
                </para>
            </listitem>
            <listitem>
                <para>
                    keyboard - Include keyboard support (e.g. keymaps will be
                    loaded during boot)
                </para>
            </listitem>
            <listitem>
                <para>
                    pci - Include PCI bus support
                </para>
            </listitem>
            <listitem>
                <para>
                    pcmcia - Include PCMCIA/CompactFlash support
                </para>
            </listitem>
            <listitem>
                <para>
                    usbgadget - USB gadget device support (for USB
                    networking/serial/storage)
                </para>
            </listitem>
            <listitem>
                <para>
                    usbhost - USB host support (allows connecting an external
                    keyboard, mouse, storage, network, etc.)
                </para>
            </listitem>
            <listitem>
                <para>
                    wifi - WiFi support (integrated only)
                </para>
            </listitem>
            <listitem>
                <para>
                    cramfs - CramFS support
                </para>
            </listitem>
            <listitem>
                <para>
                    ipsec - IPSec support
                </para>
            </listitem>
            <listitem>
                <para>
                    ipv6 - IPv6 support
                </para>
            </listitem>
            <listitem>
                <para>
                    nfs - NFS client support (for mounting NFS exports on the
                    device)
                </para>
            </listitem>
            <listitem>
                <para>
                    ppp - PPP dialup support
                </para>
            </listitem>
            <listitem>
                <para>
                    smbfs - SMB network client support (for mounting
                    Samba/Microsoft Windows shares on the device)
                </para>
            </listitem>
        </itemizedlist>
    </section>
    <section id='ref-features-machine'>
        <title>Machine</title>

        <para>The items below are valid options for <glossterm linkend='var-MACHINE_FEATURES'><link
            linkend='var-MACHINE_FEATURES'>MACHINE_FEATURES</link></glossterm>.
        </para>

        <itemizedlist>
            <listitem>
                <para>
                    acpi - Hardware has ACPI (x86/x86_64 only)
                </para>
            </listitem>
            <listitem>
                <para>
                    alsa - Hardware has ALSA audio drivers
                </para>
            </listitem>
            <listitem>
                <para>
                    apm - Hardware uses APM (or APM emulation)
                </para>
            </listitem>
            <listitem>
                <para>
                    bluetooth - Hardware has integrated Bluetooth
                </para>
            </listitem>
            <listitem>
                <para>
                    ext2 - Hardware has an HDD or Microdrive
                </para>
            </listitem>
            <listitem>
                <para>
                    irda - Hardware has IrDA support
                </para>
            </listitem>
            <listitem>
                <para>
                    keyboard - Hardware has a keyboard
                </para>
            </listitem>
            <listitem>
                <para>
                    pci - Hardware has a PCI bus
                </para>
            </listitem>
            <listitem>
                <para>
                    pcmcia - Hardware has PCMCIA or CompactFlash sockets
                </para>
            </listitem>
            <listitem>
                <para>
                    screen - Hardware has a screen
                </para>
            </listitem>
            <listitem>
                <para>
                    serial - Hardware has serial support (usually RS232)
                </para>
            </listitem>
            <listitem>
                <para>
                    touchscreen - Hardware has a touchscreen
                </para>
            </listitem>
            <listitem>
                <para>
                    usbgadget - Hardware is USB gadget device capable
                </para>
            </listitem>
            <listitem>
                <para>
                    usbhost - Hardware is USB host capable
                </para>
            </listitem>
            <listitem>
                <para>
                    wifi - Hardware has integrated WiFi
                </para>
            </listitem>
        </itemizedlist>
    </section>
    <section id='ref-features-image'>
        <title>Images</title>

        <para>
            The contents of images generated by Poky can be controlled by the <glossterm
            linkend='var-IMAGE_FEATURES'><link
            linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm>
            variable in local.conf. Through this you can add several different
            predefined packages, such as development utilities or packages with
            the debug information needed to investigate application problems or
            profile applications.
        </para>
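        <para>
            For example, to add debugging tools and development packages for
            everything installed in the image, local.conf could contain:
        </para>

```
IMAGE_FEATURES += "tools-debug dev-pkgs"
```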
        <para>
            The current list of <glossterm
            linkend='var-IMAGE_FEATURES'><link
            linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm> options contains:
        </para>

        <itemizedlist>
            <listitem>
                <para>
                    apps-console-core - Core console applications such as the SSH
                    daemon, the Avahi daemon and portmap (for mounting NFS shares)
                </para>
            </listitem>
            <listitem>
                <para>
                    x11-base - X11 server + minimal desktop
                </para>
            </listitem>
            <listitem>
                <para>
                    x11-sato - OpenedHand Sato environment
                </para>
            </listitem>
            <listitem>
                <para>
                    apps-x11-core - Core X11 applications such as an X terminal, file manager and file editor
                </para>
            </listitem>
            <listitem>
                <para>
                    apps-x11-games - A set of X11 games
                </para>
            </listitem>
            <listitem>
                <para>
                    apps-x11-pimlico - OpenedHand Pimlico application suite
                </para>
            </listitem>
            <listitem>
                <para>
                    tools-sdk - A full SDK which runs on the device
                </para>
            </listitem>
            <listitem>
                <para>
                    tools-debug - Debugging tools such as strace and gdb
                </para>
            </listitem>
            <listitem>
                <para>
                    tools-profile - Profiling tools such as oprofile, exmap and LTTng
                </para>
            </listitem>
            <listitem>
                <para>
                    tools-testapps - Device testing tools (e.g. touchscreen debugging)
                </para>
            </listitem>
            <listitem>
                <para>
                    nfs-server - NFS server (exports / over NFS to everybody)
                </para>
            </listitem>
            <listitem>
                <para>
                    dev-pkgs - Development packages (headers and extra library links) for all
                    packages installed in a given image
                </para>
            </listitem>
            <listitem>
                <para>
                    dbg-pkgs - Debug packages for all packages installed in a given image
                </para>
            </listitem>
        </itemizedlist>
    </section>
</appendix>

<!--
vim: expandtab tw=80 ts=4 spell spelllang=en_gb
-->
<!-- handbook/ref-images.xml -->
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<appendix id='ref-images'>
    <title>Reference: Images</title>

    <para>
        Poky has several standard images covering most people's needs. A full
        list of image targets can be found by looking in the <filename class="directory">
        meta/packages/images/</filename> directory. The standard images are listed below,
        along with details of what they contain:
    </para>

    <itemizedlist>
        <listitem>
            <para>
                <emphasis>poky-image-minimal</emphasis> - A small image, just enough
                to allow a device to boot
            </para>
        </listitem>
        <listitem>
            <para>
                <emphasis>poky-image-base</emphasis> - A console-only image with full
                support for the target device hardware
            </para>
        </listitem>
        <listitem>
            <para>
                <emphasis>poky-image-core</emphasis> - An X11 image with simple
                applications such as a terminal, editor and file manager
            </para>
        </listitem>
        <listitem>
            <para>
                <emphasis>poky-image-sato</emphasis> - An X11 image with the Sato theme and
                Pimlico applications. Also contains a terminal, editor and file manager.
            </para>
        </listitem>
        <listitem>
            <para>
                <emphasis>poky-image-sdk</emphasis> - An X11 image like poky-image-sato, but
                which also includes the native toolchain and libraries needed to build
                applications on the device itself. Also includes testing and profiling
                tools and debug symbols.
            </para>
        </listitem>
        <listitem>
            <para>
                <emphasis>meta-toolchain</emphasis> - Generates a tarball containing
                a standalone toolchain which can be used externally to Poky. It is
                self-contained and unpacks to the <filename class="directory">/usr/local/poky</filename>
                directory. It also contains a copy of QEMU and the scripts necessary to run
                Poky QEMU images.
            </para>
        </listitem>
        <listitem>
            <para>
                <emphasis>meta-toolchain-sdk</emphasis> - Includes everything in
                meta-toolchain, plus the development headers and libraries that
                form a complete standalone SDK. See the <link linkend='platdev-appdev-external-sdk'>
                Developing using the Poky SDK</link> and <link linkend='platdev-appdev-external-anjuta'>
                Developing using the Anjuta Plugin</link> sections for more information.
            </para>
        </listitem>
    </itemizedlist>
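    <para>
        Any of these targets can be built with BitBake in the usual way, for
        example:
    </para>

```
bitbake poky-image-sato
```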
</appendix>

<!--
vim: expandtab tw=80 ts=4
-->