Compare commits

3 Commits: 1.1_M2.rc3 ... bernard-1.

| Author | SHA1 | Date |
|---|---|---|
|  | e08dc5aaae |  |
|  | 9ae2e2ef95 |  |
|  | 55b58a5d4c |  |
.gitignore (vendored): 17 changes
@@ -7,16 +7,31 @@ build/tmp/
build/sstate-cache
build/pyshtables.py
pstage/
scripts/oe-git-proxy-socks
scripts/poky-git-proxy-socks
sources/
meta-darwin
meta-maemo
meta-extras
meta-m2
meta-prvt*
poky-autobuilder*
*.swp
*.orig
*.rej
*~
documentation/poky-ref-manual/poky-ref-manual.html
documentation/poky-ref-manual/poky-ref-manual.pdf
documentation/poky-ref-manual/poky-ref-manual.tgz
documentation/poky-ref-manual/bsp-guide.html
documentation/poky-ref-manual/bsp-guide.pdf
documentation/bsp-guide/bsp-guide.html
documentation/bsp-guide/bsp-guide.pdf
documentation/bsp-guide/bsp-guide.tgz
documentation/yocto-project-qs/yocto-project-qs.html
documentation/yocto-project-qs/yocto-project-qs.tgz
documentation/kernel-manual/kernel-manual.html
documentation/kernel-manual/kernel-manual.tgz
documentation/kernel-manual/kernel-manual.pdf
README: 30 changes
@@ -1,25 +1,15 @@
Poky
====

Poky is an integration of various components to form a complete prepackaged
build system and development environment. It features support for building
customised embedded device style images. There are reference demo images
featuring a X11/Matchbox/GTK themed UI called Sato. The system supports
cross-architecture application development using QEMU emulation and a
standalone toolchain and SDK with IDE integration.
Poky platform builder is a combined cross build system and development
environment. It features support for building X11/Matchbox/GTK based
filesystem images for various embedded devices and boards. It also
supports cross-architecture application development using QEMU emulation
and a standalone toolchain and SDK with IDE integration.

Poky has an extensive handbook, the source of which is contained in
the handbook directory. For compiled HTML or pdf versions of this,
see the Poky website http://pokylinux.org.

Additional information on the specifics of hardware that Poky supports
is available in README.hardware. Further hardware support can easily be added
in the form of layers which extend the systems capabilities in a modular way.

As an integration layer Poky consists of several upstream projects such as
BitBake, OpenEmbedded-Core, Yocto documentation and various sources of information
e.g. for the hardware support. Poky is in turn a component of the Yocto Project.

The Yocto Project has extensive documentation about the system including a
reference manual which can be found at:
http://yoctoproject.org/community/documentation

For information about OpenEmbedded see their website:
http://www.openembedded.org/

is available in README.hardware.
README.hardware: 698 changes
@@ -1,66 +1,429 @@
Poky Hardware README
====================
Poky Hardware Reference Guide
=============================

This file gives details about using Poky with different hardware reference
boards and consumer devices. A full list of target machines can be found by
looking in the meta/conf/machine/ directory. If in doubt about using Poky with
your hardware, consult the documentation for your board/device.
boards and consumer devices. A full list of target machines can be found by
looking in the meta/conf/machine/ directory. If in doubt about using Poky with
your hardware, consult the documentation for your board/device. To discuss
support for further hardware reference boards/devices please contact OpenedHand.

Support for additional devices is normally added by creating BSP layers - for
more information please see the Yocto Board Support Package (BSP) Developer's
Guide - documentation source is in documentation/bspguide or download the PDF
from:

http://yoctoproject.org/community/documentation

Support for machines other than QEMU may be moved out to separate BSP layers in
future versions.


QEMU Emulation Targets
======================
QEMU Emulation Images (qemuarm and qemux86)
===========================================

To simplify development Poky supports building images to work with the QEMU
emulator in system emulation mode. Several architectures are currently
supported:

  * ARM (qemuarm)
  * x86 (qemux86)
  * x86-64 (qemux86-64)
  * PowerPC (qemuppc)
  * MIPS (qemumips)

Use of the QEMU images is covered in the Poky Reference Manual. The Poky
MACHINE setting corresponding to the target is given in brackets.

emulator in system emulation mode. Two architectures are currently supported,
ARM (via qemuarm) and x86 (via qemux86). Use of the QEMU images is covered
in the Poky Handbook.

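For example, a QEMU image can be built and booted with something like the
following (a sketch only; it assumes the Sato image recipe and the runqemu
helper script shipped with Poky, plus one of the MACHINE names listed above):

    $ MACHINE=qemuarm bitbake poky-image-sato
    $ runqemu qemuarm
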
Hardware Reference Boards
=========================

The following boards are supported by Poky's core layer:
The following boards are supported by Poky:

  * Compulab CM-X270 (cm-x270)
  * Compulab EM-X270 (em-x270)
  * FreeScale iMX31ADS (mx31ads)
  * Marvell PXA3xx Zylonite (zylonite)
  * Logic iMX31 Lite Kit (mx31litekit)
  * Phytec phyCORE-iMX31 (mx31phy)
  * Texas Instruments Beagleboard (beagleboard)
  * Freescale MPC8315E-RDB (mpc8315e-rdb)
  * Ubiquiti Networks RouterStation Pro (routerstationpro)

For more information see the board's section below. The Poky MACHINE setting
For more information see board's section below. The Poky MACHINE setting
corresponding to the board is given in brackets.


Consumer Devices
================

The following consumer devices are supported by Poky's core layer:
The following consumer devices are supported by Poky:

  * Intel Atom based PCs and devices (atom-pc)
  * FIC Neo1973 GTA01 smartphone (fic-gta01)
  * HTC Universal (htcuniversal)
  * Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)
  * Sharp Zaurus SL-C7x0 series (c7x0)
  * Sharp Zaurus SL-C1000 (akita)
  * Sharp Zaurus SL-C3x00 series (spitz)

For more information see the device's section below. The Poky MACHINE setting
corresponding to the device is given in brackets.
For more information see board's section below. The Poky MACHINE setting
corresponding to the board is given in brackets.


Hardware Reference Boards
=========================

Specific Hardware Documentation
===============================
Compulab CM-X270 (cm-x270)
==========================

The bootloader on this board doesn't support writing jffs2 images directly to
NAND and normally uses a proprietary kernel flash driver. To allow the use of
jffs2 images, a two stage updating procedure is needed. Firstly, an initramfs
is booted which contains mtd utilities and this is then used to write the main
filesystem.

It is assumed the board is connected to a network where a TFTP server is
available and that a serial terminal is available to communicate with the
bootloader (38400, 8N1). If a DHCP server is available the device will use it
to obtain an IP address. If not, run:

    ARMmon > setip dhcp off
    ARMmon > setip ip 192.168.1.203
    ARMmon > setip mask 255.255.255.0

To reflash the kernel:

    ARMmon > download kernel tftp zimage 192.168.1.202
    ARMmon > flash kernel

where zimage is the name of the kernel on the TFTP server and its IP address is
192.168.1.202. The names of the files must be all lowercase.

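On the server side this just means the kernel image has to be present under
that lowercase name in the TFTP root. A sketch only, assuming a server with
its TFTP root at /tftpboot and a deployed kernel named zImage-cm-x270.bin:

    $ cp tmp/deploy/images/zImage-cm-x270.bin /tftpboot/zimage
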
To reflash the initrd/initramfs:

    ARMmon > download ramdisk tftp diskimage 192.168.1.202
    ARMmon > flash ramdisk

where diskimage is the name of the initramfs image (a cpio.gz file).

To boot the initramfs:

    ARMmon > ramdisk on
    ARMmon > bootos "console=ttyS0,38400 rdinit=/sbin/init"

To reflash the main image, log in to the system as user "root", then run:

    # ifconfig eth0 192.168.1.203
    # tftp -g -r mainimage 192.168.1.202
    # flash_eraseall /dev/mtd1
    # nandwrite /dev/mtd1 mainimage

which configures the network interface with the IP address 192.168.1.203,
downloads the "mainimage" file from the TFTP server at 192.168.1.202, erases
the flash and then writes the new image to the flash.

The main image can then be booted with:

    ARMmon > bootos "console=ttyS0,38400 root=/dev/mtdblock1 rootfstype=jffs2"

Note that the initramfs image is built by Poky in a slightly different mode to
normal since it uses uclibc. To generate this use a command like:

    IMAGE_FSTYPES=cpio.gz MACHINE=cm-x270 POKYLIBC=uclibc bitbake poky-image-minimal-mtdutils


Compulab EM-X270 (em-x270)
==========================

Fetch the "Linux - kernel and run-time image (Angstrom)" ZIP file from the
Compulab website. Inside the images directory of this ZIP file is another ZIP
file called 'LiveDisk.zip'. Extract this over a cleanly formatted vfat USB flash
drive. Replace the 'em_x270.img' file with the 'updater-em-x270.ext2' file.

Insert this USB disk into the supplied adapter and connect this to the
board. Whilst holding down the suspend button press the reset button. The
board will now boot off the USB key and into a version of Angstrom. On the
desktop is an icon labelled "Updater". Run this program to launch the updater
that will flash the Poky kernel and rootfs to the board.


FreeScale iMX31ADS (mx31ads)
===========================

The correct serial port is the top-most female connector to the right of the
ethernet socket.

For uploading data to RedBoot we are going to use tftp. In this example we
assume that the TFTP server is on 192.168.9.1 and the board is on 192.168.9.2.

To set the IP address, run:

    ip_address -l 192.168.9.2/24 -h 192.168.9.1

To download a kernel called "zimage" from the TFTP server, run:

    load -r -b 0x100000 zimage

To write the kernel to flash run:

    fis create kernel

To download a rootfs jffs2 image "rootfs" from the TFTP server, run:

    load -r -b 0x100000 rootfs

To write the root filesystem to flash run:

    fis create root

To load and boot a kernel and rootfs from flash:

    fis load kernel
    exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/mtdblock2 rootfstype=jffs2 init=linuxrc ip=none"

To load and boot a kernel from a TFTP server with the rootfs over NFS:

    load -r -b 0x100000 zimage
    exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/nfs nfsroot=192.168.9.1:/mnt/nfsmx31 rw ip=192.168.9.2::192.168.9.1:255.255.255.0"

The instructions above are for using the (default) NOR flash on the board;
there is also 128M of NAND flash. It is possible to install Poky to the NAND
flash, which gives more space for the rootfs, and instructions for using this
are given below. To switch to the NAND flash:

    factive NAND

This will then restart RedBoot using the NAND rather than the NOR. If you
have not used the NAND before then it is unlikely that there will be a
partition table yet. You can get the list of partitions with 'fis list'.

If this shows no partitions then you can create them with:

    fis init

The output of 'fis list' should now show:

    Name              FLASH addr  Mem addr    Length      Entry point
    RedBoot           0xE0000000  0xE0000000  0x00040000  0x00000000
    FIS directory     0xE7FF4000  0xE7FF4000  0x00003000  0x00000000
    RedBoot config    0xE7FF7000  0xE7FF7000  0x00001000  0x00000000

Partitions for the kernel and rootfs need to be created:

    fis create -l 0x1A0000 -e 0x00100000 kernel
    fis create -l 0x5000000 -e 0x00100000 root

You may now use the instructions above for flashing. However it is important
to note that the erase block size for the NAND is different to the NOR so the
JFFS erase size will need to be changed to 0x4000. Standard images are built
for NOR and you will need to build custom images for NAND.

You will also need to update the kernel command line to use the correct root
filesystem. This should be '/dev/mtdblock7' if you adhere to the partitioning
scheme shown above. If this fails then you can double-check against the output
from the kernel when it evaluates the available mtd partitions.

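One way to change the JFFS2 erase size is at image generation time. A sketch
only, assuming your Poky version builds jffs2 images via the usual
EXTRA_IMAGECMD mechanism (check image_types in your tree before relying on
this); added to local.conf and followed by an image rebuild:

    EXTRA_IMAGECMD_jffs2 = "--pad --little-endian --eraseblock=0x4000"
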
Marvell PXA3xx Zylonite (zylonite)
==================================

These instructions assume the Zylonite is connected to a machine running a TFTP
server at address 192.168.123.5 and that a serial link (38400 8N1) is available
to access the blob bootloader. The kernel is on the TFTP server as
"zylonite-kernel" and the root filesystem jffs2 file is "zylonite-rootfs" and
the images are to be saved in NAND flash.

The following commands set up blob:

    blob> setip client 192.168.123.4
    blob> setip server 192.168.123.5

To flash the kernel:

    blob> tftp zylonite-kernel
    blob> nandwrite -j 0x80800000 0x60000 0x200000

To flash the rootfs:

    blob> tftp zylonite-rootfs
    blob> nanderase -j 0x260000 0x5000000
    blob> nandwrite -j 0x80800000 0x260000 <length>

(where <length> is the rootfs size which will be printed by the tftp step)

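For example, if the tftp step reported a size of 0x1f40000 (a hypothetical
value; use whatever your transfer actually prints), the final command would be:

    blob> nandwrite -j 0x80800000 0x260000 0x1f40000
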
To boot the board:

    blob> nkernel
    blob> boot


Logic iMX31 Lite Kit (mx31litekit)
===============================

The easiest method to boot this board is to take an MMC/SD card and format
the first partition as ext2, then extract the poky image onto this as root.
Assuming the board is network connected, a TFTP server is available at
192.168.1.33 and a serial terminal is available (115200 8N1), the following
commands will boot a kernel called "mx31kern" from the TFTP server:

    losh> ifconfig sm0 192.168.1.203 255.255.255.0 192.168.1.33
    losh> load raw 0x80100000 0x200000 /tftp/192.168.1.33:mx31kern
    losh> exec 0x80100000 -


Phytec phyCORE-iMX31 (mx31phy)
==============================

Support for this board is currently being developed. Experimental jffs2
images and a suitable kernel are available and are known to work with the
board.


Consumer Devices
================

FIC Neo1973 GTA01 smartphone (fic-gta01)
========================================

To install Poky on a GTA01 smartphone you will need the "dfu-util" tool,
which you can build with the "bitbake dfu-util-native" command.

Flashing requires these steps:

1. Power down the device.
2. Connect the device to the host machine via USB.
3. Hold AUX key and press Power key. There should be a bootmenu
   on screen.
4. Run "dfu-util -l" to check if the phone is visible on the USB bus.
   The output should look like this:

    dfu-util - (C) 2007 by OpenMoko Inc.
    This program is Free Software and has ABSOLUTELY NO WARRANTY

    Found Runtime: [0x1457:0x5119] devnum=19, cfg=0, intf=2, alt=0, name="USB Device Firmware Upgrade"

5. Flash the kernel with "dfu-util -a kernel -D uImage-2.6.21.6-moko11-r2-fic-gta01.bin"
6. Flash rootfs with "dfu-util -a rootfs -D <image>", where <image> is the
   jffs2 image file to use as the root filesystem
   (e.g. ./tmp/deploy/images/poky-image-sato-fic-gta01.jffs2)


HTC Universal (htcuniversal)
============================

Note: HTC Universal support is highly experimental.

On the HTC Universal, entirely replacing the Windows installation is not
supported; instead Poky is booted from an MMC/SD card from Windows. Once Poky
has booted, Windows is no longer in memory or active, but when power is removed
the user will be returned to Windows and will need to return to Linux from
there.

Once an MMC/SD card is available it is suggested it be split into two partitions,
one for a program called HaRET which lets you boot Linux from within Windows
and the second for the rootfs. The HaRET partition should be the first partition
on the card and be vfat formatted. It doesn't need to be large, just enough for
HaRET and a kernel (say 5MB max). The rootfs should be ext2 and is usually the
second partition. The first partition should be vfat so Windows recognises it;
if it doesn't, it has been known to reformat cards.

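A minimal partitioning sketch from a Linux host, assuming the card shows up
as /dev/sdX (substitute your actual card device, and double-check it first):

    # fdisk /dev/sdX          (create a small partition 1, type W95 FAT,
                               and partition 2, type Linux, from the rest)
    # mkfs.vfat /dev/sdX1
    # mke2fs /dev/sdX2
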
On the first partition you need three files:

  * a HaRET binary (version 0.5.1 works well and a working version
    should be part of the last Poky release)
  * a kernel renamed to "zImage"
  * a default.txt which contains:

    set kernel "zImage"
    set mtype "855"
    set cmdline "root=/dev/mmcblk0p2 rw console=ttyS0,115200n8 console=tty0 rootdelay=5 fbcon=rotate:1"
    boot2

On the second partition the root file system is extracted as root. A different
partition layout or other kernel options can be changed in the default.txt file.

When inserted into the device, Windows should see the card and let you browse
its contents using File Explorer. Running the HaRET binary will present a dialog
box (maybe after messages warning about running unsigned binaries) where you
select OK and you should then see Poky boot. Kernel messages can be seen by
adding psplash=false to the kernel commandline.


Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)
============================================================

Note: Nokia tablet support is highly experimental.

The Nokia internet tablet devices are OMAP based tablet form-factor devices
with large screens (800x480), wifi and touchscreen.

To flash images to these devices you need the "flasher" utility which can be
downloaded from http://tablets-dev.nokia.com/d3.php?f=flasher-3.0. This
utility needs to be run as root and the usb filesystem needs to be mounted,
although most distributions will have done this for you. Once you have this,
follow these steps:

1. Power down the device.
2. Connect the device to the host machine via USB
   (connecting power to the device doesn't hurt either).
3. Run "flasher -i"
4. Power on the device.
5. The program should give an indication it's found
   a tablet device. If not, recheck the cables, make sure you're
   root and usbfs/usbdevfs is mounted.
6. Run "flasher -r <image> -k <kernel> -f", where <image> is the
   jffs2 image file to use as the root filesystem
   (e.g. ./tmp/deploy/images/poky-image-sato-nokia800.jffs2)
   and <kernel> is the kernel to use
   (e.g. ./tmp/deploy/images/zImage-nokia800.bin).
7. Run "flasher -R" to reboot the device.
8. The device should boot into Poky.

The nokia800 images and kernel will run on both the N800 and N810.


Sharp Zaurus SL-C7x0 series (c7x0)
==================================

The Sharp Zaurus c7x0 series (SL-C700, SL-C750, SL-C760, SL-C860, SL-7500)
are PXA25x based handheld PDAs with VGA screens. To install Poky images on
these devices follow these steps:

1. Obtain an SD/MMC or CF card with a vfat or ext2 filesystem.
2. Copy a jffs2 image file (e.g. poky-image-sato-c7x0.jffs2) onto the
   card as "initrd.bin":

    $ cp ./tmp/deploy/images/poky-image-sato-c7x0.jffs2 /path/to/my-cf-card/initrd.bin

3. Copy a Linux kernel file (zImage-c7x0.bin) onto the card as
   "zImage.bin":

    $ cp ./tmp/deploy/images/zImage-c7x0.bin /path/to/my-cf-card/zImage.bin

4. Copy an updater script (updater.sh.c7x0) onto the card
   as "updater.sh":

    $ cp ./tmp/deploy/images/updater.sh.c7x0 /path/to/my-cf-card/updater.sh

5. Power down the Zaurus.
6. Hold "OK" key and power on the device. An update menu should appear
   (in Japanese).
7. Choose "Update" (item 4).
8. The next screen will ask for the source, choose the appropriate
   card (CF or SD).
9. Make sure AC power is connected.
10. The next screen asks for confirmation, choose "Yes" (the left button).
11. The update process will start, flashing the files on the card onto
    the device; the device will then reboot into Poky.


Sharp Zaurus SL-C1000 (akita)
=============================

The Sharp Zaurus SL-C1000 is a PXA270 based device otherwise similar to the
c7x0. To install Poky images on this device follow the instructions for
the c7x0 but replace "c7x0" with "akita" where appropriate.


Sharp Zaurus SL-C3x00 series (spitz)
====================================

The Sharp Zaurus SL-C3x00 devices are PXA270 based devices similar
to akita but with an internal microdrive. The installation procedure
assumes a standard microdrive based device where the root (first)
partition has been enlarged to fit the image (at least 100MB,
400MB for the SDK).

The procedure is the same as for the c7x0 and akita models with the
following differences:

1. Instead of a jffs2 image you need to copy a compressed tarball of the
   root filesystem (e.g. poky-image-sato-spitz.tar.gz) onto the
   card as "hdimage1.tgz":

    $ cp ./tmp/deploy/images/poky-image-sato-spitz.tar.gz /path/to/my-cf-card/hdimage1.tgz

2. You additionally need to copy a special tar utility (gnu-tar) onto
   the card as "gnu-tar":

    $ cp ./tmp/deploy/images/gnu-tar /path/to/my-cf-card/gnu-tar

Intel Atom based PCs and devices (atom-pc)
@@ -87,22 +450,22 @@ Hard Disk:
1. Build a directdisk image format. This will generate proper partition tables
   that will in turn be written to the physical media. For example:

    $ bitbake core-image-minimal-directdisk
    $ bitbake poky-image-minimal-directdisk

2. Use the "dd" utility to write the image to the raw block device. For example:

    # dd if=core-image-minimal-directdisk-atom-pc.hdddirect of=/dev/sdb
    # dd if=poky-image-minimal-directdisk-atom-pc.hdddirect of=/dev/sdb

USB Device:
1. Build an hddimg image format. This is a simple filesystem without partition
   tables and is suitable for USB keys. For example:

    $ bitbake core-image-minimal-live
    $ bitbake poky-image-minimal-live

2. Use the "dd" utility to write the image to the raw block device. For
   example:

    # dd if=core-image-minimal-live-atom-pc.hddimg of=/dev/sdb
    # dd if=poky-image-minimal-live-atom-pc.hddimg of=/dev/sdb

If the device fails to boot with "Boot error" displayed, it is likely the BIOS
cannot understand the physical layout of the disk (or rather it expects a
@@ -126,7 +489,7 @@ USB Device:

   b. Copy the contents of the poky image to the USB-ZIP mode device:

    # mount -o loop core-image-minimal-live-atom-pc.hddimg /tmp/image
    # mount -o loop poky-image-minimal-live-atom-pc.hddimg /tmp/image
    # mount /dev/sdb4 /tmp/usbkey
    # cp -rf /tmp/image/* /tmp/usbkey

@@ -149,26 +512,15 @@ The Beagleboard is an ARM Cortex-A8 development board with USB, DVI-D, S-Video,
faster CPU, more RAM, an ethernet port, more USB ports, microSD, and removes
the NAND flash. The beagleboard MACHINE is tested on the following platforms:

  o Beagleboard C4
  o Beagleboard xM Rev A
  o Beagleboard xM

The Beagleboard C4 has NAND, while the xM does not. For the sake of simplicity,
these instructions assume you have erased the NAND on the C4 so its boot
behavior matches that of the xM. To do this, issue the following commands from
the u-boot prompt (note that the unlock may be unnecessary depending on the
version of u-boot installed on your board and only one of the erase commands
will succeed):
TODO: need someone with a Beagleboard C4 to verify these instructions.

    # nand unlock
    # nand erase
    # nand erase.chip

To further tailor these instructions for your board, please refer to the
documentation at http://www.beagleboard.org.

From a Linux system with access to the image files perform the following steps
as root, replacing mmcblk0* with the SD card device on your machine (such as sdc
if used via a usb card reader):
Due to the lack of NAND on the xM, the install and boot process varies a bit
between boards. The C4 can run the x-loader and u-boot binaries from NAND or
the SD, while the xM can only run them from the SD. The following instructions
apply to both the C4 and the xM, but the C4 can skip step 2 (as noted below),
and may require modification of the NAND environment.

1. Partition and format an SD card:
    # fdisk -lu /dev/mmcblk0
@@ -184,19 +536,19 @@ if used via a usb card reader):
    # mkfs.vfat -F 16 -n "boot" /dev/mmcblk0p1
    # mke2fs -j -L "root" /dev/mmcblk0p2

   The following assumes the SD card partition 1 and 2 are mounted at
   /media/boot and /media/root respectively. Removing the card and reinserting
   it will do just that on most modern Linux desktop environments.

   The files referenced below are made available after the build in
   build/tmp/deploy/images.
   The following assumes the SD card partition 1 and 2 are mounted at
   /media/boot and /media/root respectively. The files referenced here
   are made available after the build in build/tmp/deploy/images.

2. Install the boot loaders
   This step can be omitted for the C4 as it can have the x-loader and
   u-boot installed in NAND.

    # cp MLO-beagleboard /media/boot/MLO
    # cp u-boot-beagleboard.bin /media/boot/u-boot.bin

3. Install the root filesystem
    # tar x -C /media/root -f core-image-$IMAGE_TYPE-beagleboard.tar.bz2
    # tar x -C /media/root -f poky-image-$IMAGE_TYPE-beagleboard.tar.bz2
    # tar x -C /media/root -f modules-$KERNEL_VERSION-beagleboard.tgz

4. Install the kernel uImage
@@ -216,213 +568,15 @@ if used via a usb card reader):
    boot
    EOF
    ) > serial-boot.cmd
    # mkimage -A arm -O linux -T script -C none -a 0 -e 0 -n "Core Minimal" -d ./serial-boot.cmd ./boot.scr
    # mkimage -A arm -O linux -T script -C none -a 0 -e 0 -n "Poky Minimal" -d ./serial-boot.cmd ./boot.scr
    # cp boot.scr /media/boot

6. Unmount the SD partitions, insert the SD card into the Beagleboard, and
   boot the Beagleboard
6. Unmount the SD partitions and boot the Beagleboard

Note: As of the 2.6.37 linux-yocto kernel recipe, the Beagleboard uses the
OMAP_SERIAL device (ttyO2). If you are using an older kernel, such as the
2.6.34 linux-yocto-stable, be sure to replace ttyO2 with ttyS2 above. You
2.6.35 linux-yocto-stable, be sure replace ttyO2 with ttyS2 above. You
should also override the machine SERIAL_CONSOLE in your local.conf in
order to set up the getty on the serial line:

    SERIAL_CONSOLE_beagleboard = "115200 ttyS2"


Freescale MPC8315E-RDB (mpc8315e-rdb)
=====================================

The MPC8315 PowerPC reference platform (MPC8315E-RDB) is aimed at hardware and
software development of network attached storage (NAS) and digital media server
applications. The MPC8315E-RDB features the PowerQUICC II Pro processor, which
includes a built-in security accelerator.

Setup instructions
------------------

You will need the following:
  * nfs root setup on your workstation
  * tftp server installed on your workstation

Load the kernel and boot it as follows:

1. Get the kernel (uImage.mpc8315erdb) and dtb (mpc8315erdb.dtb) files from
   the Poky build tmp/deploy directory, and make them available on your tftp
   server.

2. Set up the environment in U-Boot:

    =>setenv ipaddr <board ip>
    =>setenv serverip <tftp server ip>
    =>setenv bootargs root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:255.255.255.0:mpc8315e:eth0:off console=ttyS0,115200

3. Download the kernel and dtb, then boot the kernel:

    =>tftp 800000 uImage.mpc8315erdb
    =>tftp 780000 mpc8315erdb.dtb
    =>bootm 800000 - 780000

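As a concrete illustration only (all addresses and paths here are made-up
placeholders; substitute your own), the environment set up in step 2 might
look like:

    =>setenv ipaddr 192.168.9.2
    =>setenv serverip 192.168.9.1
    =>setenv bootargs root=/dev/nfs rw nfsroot=192.168.9.1:/srv/nfs/mpc8315e ip=192.168.9.2:192.168.9.1:192.168.9.1:255.255.255.0:mpc8315e:eth0:off console=ttyS0,115200
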
Ubiquiti Networks RouterStation Pro (routerstationpro)
======================================================

The RouterStation Pro is an Atheros AR7161 MIPS-based board. Geared towards
networking applications, it has all of the usual features as well as three
type IIIA mini-PCI slots and an on-board 3-port 10/100/1000 Ethernet switch,
in addition to the 10/100/1000 Ethernet WAN port which supports
Power-over-Ethernet.

Setup instructions
------------------

You will need the following:
  * A serial cable - female to female (or female to male + gender changer)
    NOTE: cable must be straight through, *not* a null modem cable.
  * USB flash drive or hard disk that is able to be powered from the
    board's USB port.
  * tftp server installed on your workstation

NOTE: in the following instructions it is assumed that /dev/sdb corresponds
to the USB disk when it is plugged into your workstation. If this is not the
case in your setup then please be careful to substitute the correct device
name in all commands where appropriate.

--- Preparation ---

1) Build an image (e.g. core-image-minimal) using "routerstationpro" as the
   MACHINE

2) Partition the USB drive so that primary partition 1 is type Linux (83).
   Minimum size depends on your root image size - core-image-minimal probably
   only needs 8-16MB, other images will need more.

    # fdisk /dev/sdb
    Command (m for help): p

    Disk /dev/sdb: 4011 MB, 4011491328 bytes
    124 heads, 62 sectors/track, 1019 cylinders, total 7834944 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0009e87d

    Device Boot      Start       End    Blocks  Id System
    /dev/sdb1           62   1952751    976345  83 Linux

3) Format partition 1 on the USB as ext3

    # mke2fs -j /dev/sdb1

4) Mount partition 1 and then extract the contents of
   tmp/deploy/images/core-image-XXXX.tar.bz2 into it (preserving permissions).

    # mount /dev/sdb1 /media/sdb1
    # cd /media/sdb1
    # tar -xvjpf tmp/deploy/images/core-image-XXXX.tar.bz2

5) Unmount the USB drive and then plug it into the board's USB port

6) Connect the board's serial port to your workstation and then start up
   your favourite serial terminal so that you will be able to interact with
   the serial console. If you don't have a favourite, picocom is suggested:

    $ picocom /dev/ttyUSB0 -b 115200

7) Connect the network into eth0 (the one that is NOT the 3 port switch). If
   you are using power-over-ethernet then the board will power up at this point.

8) Start up the board and watch the serial console. Hit Ctrl+C to abort the
   autostart if the board is configured that way (it is by default). The
   bootloader's fconfig command can be used to disable autostart and configure
   the IP settings if you need to change them (default IP is 192.168.1.20).

9) Make the kernel (tmp/deploy/images/vmlinux-routerstationpro.bin) available
   on the tftp server.

10) If you are going to write the kernel to flash (optional - see "Booting a
    kernel directly" below for the alternative), remove the current kernel and
    rootfs flash partitions. You can list the partitions using the following
    bootloader command:

    RedBoot> fis list

    You can delete the existing kernel and rootfs with these commands:

    RedBoot> fis delete kernel
    RedBoot> fis delete rootfs

--- Booting a kernel directly ---

1) Load the kernel using the following bootloader command:

    RedBoot> load -m tftp -h <ip of tftp server> vmlinux-routerstationpro.bin

   You should see a message on it being successfully loaded.

2) Execute the kernel:

    RedBoot> exec -c "console=ttyS0,115200 root=/dev/sda1 rw rootdelay=2 board=UBNT-RSPRO"

   Note that specifying the command line with -c is important as linux-yocto does
   not provide a default command line.

--- Writing a kernel to flash ---

1) Go to your tftp server and gzip the kernel you want in flash. It should
   halve the size.

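   For example (assuming the kernel image name used above):

    $ gzip -9 vmlinux-routerstationpro.bin
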
2) Load the kernel using the following bootloader command:

    RedBoot> load -r -b 0x80600000 -m tftp -h <ip of tftp server> vmlinux-routerstationpro.bin.gz

   This should output something similar to the following:

    Raw file loaded 0x80600000-0x8087c537, assumed entry at 0x80600000

   Calculate the length by subtracting the first number from the second number
   and then rounding the result up to the nearest 0x1000.

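   With the sample output above, that is 0x8087c537 - 0x80600000 = 0x27c537,
   which rounds up to 0x27d000. One way to check the arithmetic from a shell:

    $ printf '0x%x\n' $(( (0x8087c537 - 0x80600000 + 0xfff) & ~0xfff ))
    0x27d000
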
3) Using the length calculated above, create a flash partition for the kernel:

    RedBoot> fis create -b 0x80600000 -l 0x240000 kernel

   (change 0x240000 to your rounded length -- change "kernel" to whatever
   you want to name your kernel)

--- Booting a kernel from flash ---

To boot the flashed kernel perform the following steps.

1) At the bootloader prompt, load the kernel:

    RedBoot> fis load -d -e kernel

   (Change the name "kernel" above if you chose something different earlier)

   (-e means 'elf', -d 'decompress')

2) Execute the kernel using the exec command as above.

--- Automating the boot process ---

After writing the kernel to flash and testing the load and exec commands
manually, you can automate the boot process with a boot script.

1) RedBoot> fconfig
   (Answer the questions not specified here as they pertain to your environment)
2) Run script at boot: true
   Boot script:
   .. fis load -d -e kernel
   .. exec
   Enter script, terminate with empty line
   >> fis load -d -e kernel
   >> exec -c "console=ttyS0,115200 root=/dev/sda1 rw rootdelay=2 board=UBNT-RSPRO"
   >>
3) Answer the remaining questions and write the changes to flash:
   Update RedBoot non-volatile configuration - continue (y/n)? y
   ... Erase from 0xbfff0000-0xc0000000: .
   ... Program from 0x87ff0000-0x88000000 at 0xbfff0000: .
4) Power cycle the board.

@@ -32,15 +32,17 @@ import warnings
from traceback import format_exception
try:
    import bb
except RuntimeError as exc:
except RuntimeError, exc:
    sys.exit(str(exc))
from bb import event
import bb.msg
from bb import cooker
from bb import ui
from bb import server
from bb.server import none
#from bb.server import xmlrpc

__version__ = "1.13.2"
__version__ = "1.11.0"
logger = logging.getLogger("BitBake")


@@ -118,10 +120,7 @@ Default BBFILES are the .bb files in the current directory.""")
                      action = "store", dest = "cmd")

    parser.add_option("-r", "--read", help = "read the specified file before bitbake.conf",
                      action = "append", dest = "prefile", default = [])

    parser.add_option("-R", "--postread", help = "read the specified file after bitbake.conf",
                      action = "append", dest = "postfile", default = [])
                      action = "append", dest = "file", default = [])

    parser.add_option("-v", "--verbose", help = "output more chit-chat to the terminal",
                      action = "store_true", dest = "verbose", default = False)
@@ -138,6 +137,9 @@ Default BBFILES are the .bb files in the current directory.""")
    parser.add_option("-p", "--parse-only", help = "quit after parsing the BB files (developers only)",
                      action = "store_true", dest = "parse_only", default = False)

    parser.add_option("-d", "--disable-psyco", help = "disable using the psyco just-in-time compiler (not recommended)",
                      action = "store_true", dest = "disable_psyco", default = False)

    parser.add_option("-s", "--show-versions", help = "show current and preferred versions of all packages",
                      action = "store_true", dest = "show_versions", default = False)

@@ -159,9 +161,6 @@ Default BBFILES are the .bb files in the current directory.""")
    parser.add_option("-u", "--ui", help = "userinterface to use",
                      action = "store", dest = "ui")

    parser.add_option("-t", "--servertype", help = "Choose which server to use, none, process or xmlrpc",
                      action = "store", dest = "servertype")

    parser.add_option("", "--revisions-changed", help = "Set the exit code depending on whether upstream floating revisions have changed or not",
                      action = "store_true", dest = "revisions_changed", default = False)

@@ -169,22 +168,15 @@ Default BBFILES are the .bb files in the current directory.""")

    configuration = BBConfiguration(options)
    configuration.pkgs_to_build.extend(args[1:])
    configuration.initial_path = os.environ['PATH']

    ui_main = get_ui(configuration)

    # Server type could be xmlrpc or none currently, if nothing is specified,
    # default server would be none
    if configuration.servertype:
        server_type = configuration.servertype
    else:
        server_type = 'process'
    loghandler = event.LogHandler()
    logger.addHandler(loghandler)

    try:
        module = __import__("bb.server", fromlist = [server_type])
        server = getattr(module, server_type)
    except AttributeError:
        sys.exit("FATAL: Invalid server type '%s' specified.\n"
                 "Valid interfaces: xmlrpc, process, none [default]." % servertype)
    #server = bb.server.xmlrpc
    server = bb.server.none

    # Save a logfile for cooker into the current working directory. When the
    # server is daemonized this logfile will be truncated.
@@ -193,42 +185,35 @@ Default BBFILES are the .bb files in the current directory.""")
    bb.utils.init_logger(bb.msg, configuration.verbose, configuration.debug,
                         configuration.debug_domains)

    # Ensure logging messages get sent to the UI as events
    handler = bb.event.LogHandler()
    logger.addHandler(handler)

    # Clear away any spurious environment variables. But don't wipe the
    # environment totally. This is necessary to ensure the correct operation
    # of the UIs (e.g. for DISPLAY, etc.)
    bb.utils.clean_environment()

    server = server.BitBakeServer()

    server.initServer()
    idle = server.getServerIdleCB()

    cooker = bb.cooker.BBCooker(configuration, idle)
    cooker = bb.cooker.BBCooker(configuration, server)
    cooker.parseCommandLine()

    server.addcooker(cooker)
    server.saveConnectionDetails()
    server.detach(cooker_logfile)
    serverinfo = server.BitbakeServerInfo(cooker.server)

    # Should no longer need to ever reference cooker
    server.BitBakeServerFork(cooker, cooker.server, serverinfo, cooker_logfile)
    del cooker

    logger.removeHandler(handler)
    logger.removeHandler(loghandler)

    # Setup a connection to the server (cooker)
    server_connection = server.establishConnection()
    server_connection = server.BitBakeServerConnection(serverinfo)

    # Launch the UI
    if configuration.ui:
        ui = configuration.ui
    else:
        ui = "knotty"

    try:
        return server.launchUI(ui_main, server_connection.connection, server_connection.events)
        return server.BitbakeUILauch().launch(serverinfo, ui_main, server_connection.connection, server_connection.events)
    finally:
        server_connection.terminate()

    return 1

if __name__ == "__main__":
    try:
        ret = main()
@@ -237,4 +222,3 @@ if __name__ == "__main__":
        import traceback
        traceback.print_exc(5)
    sys.exit(ret)
bitbake/bin/bitbake-layers (Executable file → Normal file): 129 changes
@@ -1,10 +1,4 @@
#!/usr/bin/env python

# This script has subcommands which operate against your bitbake layers, either
# displaying useful information, or acting against them.
# Currently, it only provides a show_appends command, which shows you what
# bbappends are in effect, and warns you if you have appends which are not being
# utilized.
#!/usr/bin/env python2.6

import cmd
import logging
@@ -18,7 +12,6 @@ sys.path[0:0] = [os.path.join(topdir, 'lib')]
import bb.cache
import bb.cooker
import bb.providers
import bb.utils
from bb.cooker import state


@@ -55,7 +48,7 @@ class Commands(cmd.Cmd):

    def prepare_cooker(self):
        sys.stderr.write("Parsing recipes..")
        logger.setLevel(logging.WARNING)
        logger.setLevel(logging.ERROR)

        try:
            while self.cooker.state in (state.initial, state.parsing):
@@ -74,74 +67,6 @@ class Commands(cmd.Cmd):
    def do_show_layers(self, args):
        logger.info(str(self.config_data.getVar('BBLAYERS', True)))

    def do_show_overlayed(self, args):
        if self.cooker.overlayed:
            logger.info('Overlayed recipes:')
            for f in self.cooker.overlayed.iterkeys():
                logger.info('%s' % f)
                for of in self.cooker.overlayed[f]:
                    logger.info('  %s' % of)
        else:
            logger.info('No overlayed recipes found')

    def do_flatten(self, args):
        arglist = args.split()
        if len(arglist) != 1:
            logger.error('syntax: flatten <outputdir>')
            return

        if os.path.exists(arglist[0]) and os.listdir(arglist[0]):
            logger.error('Directory %s exists and is non-empty, please clear it out first' % arglist[0])
            return

        layers = (self.config_data.getVar('BBLAYERS', True) or "").split()
        for layer in layers:
            overlayed = []
            for f in self.cooker.overlayed.iterkeys():
                for of in self.cooker.overlayed[f]:
                    if of.startswith(layer):
                        overlayed.append(of)

            logger.info('Copying files from %s...' % layer )
            for root, dirs, files in os.walk(layer):
                for f1 in files:
                    f1full = os.sep.join([root, f1])
                    if f1full in overlayed:
                        logger.info('  Skipping overlayed file %s' % f1full )
                    else:
                        ext = os.path.splitext(f1)[1]
                        if ext != '.bbappend':
                            fdest = f1full[len(layer):]
                            fdest = os.path.normpath(os.sep.join([arglist[0],fdest]))
                            bb.utils.mkdirhier(os.path.dirname(fdest))
                            if os.path.exists(fdest):
                                if f1 == 'layer.conf' and root.endswith('/conf'):
                                    logger.info('  Skipping layer config file %s' % f1full )
                                    continue
                                else:
                                    logger.warn('Overwriting file %s', fdest)
                            bb.utils.copyfile(f1full, fdest)
                            if ext == '.bb':
                                if f1 in self.cooker_data.appends:
                                    appends = self.cooker_data.appends[f1]
                                    if appends:
                                        logger.info('  Applying appends to %s' % fdest )
                                        for appendname in appends:
                                            self.apply_append(appendname, fdest)

    def get_append_layer(self, appendname):
        for layer, _, regex, _ in self.cooker.status.bbfile_config_priorities:
            if regex.match(appendname):
                return layer
        return "?"

    def apply_append(self, appendname, recipename):
        appendfile = open(appendname, 'r')
        recipefile = open(recipename, 'a')
        recipefile.write('\n')
        recipefile.write('##### bbappended from %s #####\n' % self.get_append_layer(appendname))
        recipefile.writelines(appendfile.readlines())

    def do_show_appends(self, args):
        if not self.cooker_data.appends:
            logger.info('No append files found')
@@ -149,12 +74,10 @@ class Commands(cmd.Cmd):

        logger.info('State of append files:')

        pnlist = list(self.cooker_data.pkg_pn.keys())
        pnlist.sort()
        for pn in pnlist:
        for pn in self.cooker_data.pkg_pn:
            self.show_appends_for_pn(pn)

        self.show_appends_for_skipped()
        self.show_appends_with_no_recipes()

    def show_appends_for_pn(self, pn):
        filenames = self.cooker_data.pkg_pn[pn]
@@ -165,30 +88,20 @@ class Commands(cmd.Cmd):
                                             self.cooker_data.pkg_pn)
        best_filename = os.path.basename(best[3])

        self.show_appends_output(filenames, best_filename)

    def show_appends_for_skipped(self):
        filenames = [os.path.basename(f)
                     for f in self.cooker.skiplist.iterkeys()]
        self.show_appends_output(filenames, None, " (skipped)")

    def show_appends_output(self, filenames, best_filename, name_suffix = ''):
        appended, missing = self.get_appends_for_files(filenames)
        if appended:
            for basename, appends in appended:
                logger.info('%s%s:', basename, name_suffix)
                logger.info('%s:', basename)
                for append in appends:
                    logger.info('  %s', append)

        if best_filename:
            if best_filename in missing:
                logger.warn('%s: missing append for preferred version',
                            best_filename)
                self.returncode |= 1

        if best_filename in missing:
            logger.warn('%s: missing append for preferred version',
                        best_filename)
            self.returncode |= 1

    def get_appends_for_files(self, filenames):
        appended, notappended = [], []
        appended, notappended = set(), set()
        for filename in filenames:
            _, cls = bb.cache.Cache.virtualfn2realfn(filename)
            if cls:
@@ -197,11 +110,26 @@ class Commands(cmd.Cmd):
            basename = os.path.basename(filename)
            appends = self.cooker_data.appends.get(basename)
            if appends:
                appended.append((basename, list(appends)))
                appended.add((basename, frozenset(appends)))
            else:
                notappended.append(basename)
                notappended.add(basename)
        return appended, notappended

    def show_appends_with_no_recipes(self):
        recipes = set(os.path.basename(f)
                      for f in self.cooker_data.pkg_fn.iterkeys())
        appended_recipes = self.cooker_data.appends.iterkeys()
        appends_without_recipes = [self.cooker_data.appends[recipe]
                                   for recipe in appended_recipes
                                   if recipe not in recipes]
        if appends_without_recipes:
            appendlines = ('  %s' % append
                           for appends in appends_without_recipes
                           for append in appends)
            logger.warn('No recipes available for:\n%s',
                        '\n'.join(appendlines))
            self.returncode |= 4

    def do_EOF(self, line):
        return True

@@ -211,8 +139,7 @@ class Config(object):
        self.pkgs_to_build = []
        self.debug_domains = []
        self.extra_assume_provided = []
        self.prefile = []
        self.postfile = []
        self.file = []
        self.debug = 0
        self.__dict__.update(options)
@@ -1,53 +0,0 @@
#!/usr/bin/env python
import os
import sys,logging
import optparse

sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),'lib'))

import prserv
import prserv.serv

__version__="1.0.0"

PRHOST_DEFAULT=''
PRPORT_DEFAULT=8585

def main():
    parser = optparse.OptionParser(
        version="Bitbake PR Service Core version %s, %%prog version %s" % (prserv.__version__, __version__),
        usage = "%prog [options]")

    parser.add_option("-f", "--file", help="database filename(default prserv.db)", action="store",
                      dest="dbfile", type="string", default="prserv.db")
    parser.add_option("-l", "--log", help="log filename(default prserv.log)", action="store",
                      dest="logfile", type="string", default="prserv.log")
    parser.add_option("--loglevel", help="logging level, i.e. CRITICAL, ERROR, WARNING, INFO, DEBUG",
                      action = "store", type="string", dest="loglevel", default = "WARNING")
    parser.add_option("--start", help="start daemon",
                      action="store_true", dest="start", default="True")
    parser.add_option("--stop", help="stop daemon",
                      action="store_false", dest="start")
    parser.add_option("--host", help="ip address to bind", action="store",
                      dest="host", type="string", default=PRHOST_DEFAULT)
    parser.add_option("--port", help="port number(default 8585)", action="store",
                      dest="port", type="int", default=PRPORT_DEFAULT)

    options, args = parser.parse_args(sys.argv)

    prserv.init_logger(os.path.abspath(options.logfile),options.loglevel)

    if options.start:
        prserv.serv.start_daemon(options)
    else:
        prserv.serv.stop_daemon()

if __name__ == "__main__":
    try:
        ret = main()
    except Exception:
        ret = 1
        import traceback
        traceback.print_exc(5)
    sys.exit(ret)
@@ -497,7 +497,7 @@ def main():
        doc.insert_doc_item(doc_ins)

    # let us create the HTML now
    bb.utils.mkdirhier(output_dir)
    bb.mkdirhier(output_dir)
    os.chdir(output_dir)

    # Let us create the sites now. We do it in the following order
@@ -85,6 +85,9 @@ don't execute, just go through the motions
.B \-p, \-\-parse-only
quit after parsing the BB files (developers only)
.TP
.B \-d, \-\-disable-psyco
disable using the psyco just-in-time compiler (not recommended)
.TP
.B \-s, \-\-show-versions
show current and preferred versions of all packages
.TP
@@ -45,7 +45,7 @@ endif
	$(call command,xsltproc --stringparam base.dir $@/ $(if $(htmlcssfile),--stringparam html.stylesheet $(htmlcssfile)) $(htmlxsl) $(manual),XSLTPROC $@ $(manual))

$(xmltotypes): $(manual)
	$(call command,xmlto --with-dblatex --extensions -o $(topdir)/$@ $@ $(manual),XMLTO $@ $(manual))
	$(call command,xmlto --extensions -o $(topdir)/$@ $@ $(manual),XMLTO $@ $(manual))

clean:
	rm -rf $(cleanfiles)
@@ -29,7 +29,7 @@ tasks and managing metadata. As such, its similarities to GNU make and other
|
||||
build tools are readily apparent. It was inspired by Portage, the package management system used by the Gentoo Linux distribution. BitBake is the basis of the <ulink url="http://www.openembedded.org/">OpenEmbedded</ulink> project, which is being used to build and maintain a number of embedded Linux distributions, including OpenZaurus and Familiar.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Background and goals</title>
|
||||
<title>Background and Goals</title>
|
||||
<para>Prior to BitBake, no other build tool adequately met
|
||||
the needs of an aspiring embedded Linux distribution. All of the
|
||||
buildsystems used by traditional desktop Linux distributions lacked
|
||||
@@ -42,9 +42,9 @@ embedded space, were scalable or maintainable.</para>
|
||||
<listitem><para>Handle crosscompilation.</para></listitem>
|
||||
<listitem><para>Handle interpackage dependencies (build time on target architecture, build time on native architecture, and runtime).</para></listitem>
|
||||
<listitem><para>Support running any number of tasks within a given package, including, but not limited to, fetching upstream sources, unpacking them, patching them, configuring them, et cetera.</para></listitem>
|
||||
<listitem><para>Must be Linux distribution agnostic (both build and target).</para></listitem>
|
||||
<listitem><para>Must be linux distribution agnostic (both build and target).</para></listitem>
|
||||
<listitem><para>Must be architecture agnostic</para></listitem>
|
||||
<listitem><para>Must support multiple build and target operating systems (including Cygwin, the BSDs, etc).</para></listitem>
|
||||
<listitem><para>Must support multiple build and target operating systems (including cygwin, the BSDs, etc).</para></listitem>
|
||||
<listitem><para>Must be able to be self contained, rather than tightly integrated into the build machine's root filesystem.</para></listitem>
|
||||
<listitem><para>There must be a way to handle conditional metadata (on target architecture, operating system, distribution, machine).</para></listitem>
|
||||
<listitem><para>It must be easy for the person using the tools to supply their own local metadata and packages to operate against.</para></listitem>
|
||||
@@ -91,7 +91,7 @@ share common metadata between many packages.</para></listitem>
|
||||
<section>
|
||||
<title>Setting a default value (?=)</title>
|
||||
<para><screen><varname>A</varname> ?= "aval"</screen></para>
|
||||
<para>If <varname>A</varname> is set before the above is called, it will retain its previous value. If <varname>A</varname> is unset prior to the above call, <varname>A</varname> will be set to <literal>aval</literal>. Note that this assignment is immediate, so if there are multiple ?= assignments to a single variable, the first of those will be used.</para>
|
||||
<para>If <varname>A</varname> is set before the above is called, it will retain it's previous value. If <varname>A</varname> is unset prior to the above call, <varname>A</varname> will be set to <literal>aval</literal>. Note that this assignment is immediate, so if there are multiple ?= assignments to a single variable, the first of those will be used.</para>
|
||||
</section>
<section>
<title>Setting a default value (??=)</title>
@@ -125,7 +125,7 @@ share common metadata between many packages.</para></listitem>
<varname>B</varname> .= "additionaldata"
<varname>C</varname> = "cval"
<varname>C</varname> =. "test"</screen></para>
<para>In this example, <varname>B</varname> is now <literal>bvaladditionaldata</literal> and <varname>C</varname> is <literal>testcval</literal>. In contrast to the above appending and prepending operators, no additional space
<para>In this example, <varname>B</varname> is now <literal>bvaladditionaldata</literal> and <varname>C</varname> is <literal>testcval</literal>. In contrast to the above Appending and Prepending operators no additional space
will be introduced.</para>
</section>
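<para>A short sketch contrasting these spaceless operators with the += operator, which does insert a space (values illustrative only):</para>
<para><screen><varname>B</varname> = "bval"
<varname>B</varname> += "more"
# B is now "bval more" (a space is inserted)
<varname>C</varname> = "cval"
<varname>C</varname> .= "more"
# C is now "cvalmore" (no space is inserted)</screen></para>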
<section>
@@ -147,12 +147,12 @@ will be introduced.</para>
</section>
<section>
<title>Inclusion</title>
<para>Next, there is the <literal>include</literal> directive, which causes BitBake to parse whatever file you specify, and insert it at that location, which is not unlike <command>make</command>. However, if the path specified on the <literal>include</literal> line is a relative path, BitBake will locate the first one it can find within <envar>BBPATH</envar>.</para>
<para>Next, there is the <literal>include</literal> directive, which causes BitBake to parse in whatever file you specify, and insert it at that location, which is not unlike <command>make</command>. However, if the path specified on the <literal>include</literal> line is a relative path, BitBake will locate the first one it can find within <envar>BBPATH</envar>.</para>
</section>
<section>
<title>Requiring inclusion</title>
<title>Requiring Inclusion</title>
<para>In contrast to the <literal>include</literal> directive, <literal>require</literal> will
raise a ParseError if the file to be included cannot be found. Otherwise it will behave just like the <literal>
raise an ParseError if the to be included file can not be found. Otherwise it will behave just like the <literal>
include</literal> directive.</para>
</section>
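<para>For illustration, both directives side by side with a hypothetical file name; only <literal>require</literal> will abort parsing if foo.inc cannot be found anywhere in <envar>BBPATH</envar>:</para>
<para><screen>include foo.inc
require foo.inc</screen></para>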
<section>
@@ -171,10 +171,10 @@ include</literal> directive.</para>
    import time
    print time.strftime('%Y%m%d', time.gmtime())
}</screen></para>
<para>This is similar to the previous example, but flags it as Python so that BitBake knows it is Python code.</para>
<para>This is the similar to the previous, but flags it as python so that BitBake knows it is python code.</para>
</section>
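<para>For comparison, a sketch of the same illustrative function written once as plain shell executable metadata and once flagged as Python (a recipe would define only one of the two):</para>
<para><screen>do_printdate () {
    date +%Y%m%d
}

python do_printdate () {
    import time
    print time.strftime('%Y%m%d', time.gmtime())
}</screen></para>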
<section>
<title>Defining Python functions into the global Python namespace</title>
<title>Defining python functions into the global python namespace</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para><screen>def get_depends(bb, d):
    if bb.data.getVar('SOMECONDITION', d, True):
@@ -187,8 +187,8 @@ include</literal> directive.</para>
<para>This would result in <varname>DEPENDS</varname> containing <literal>dependencywithcond</literal>.</para>
</section>
<section>
<title>Variable flags</title>
<para>Variables can have associated flags which provide a way of tagging extra information onto a variable. Several flags are used internally by BitBake but they can be used externally too if needed. The standard operations mentioned above also work on flags.</para>
<title>Variable Flags</title>
<para>Variables can have associated flags which provide a way of tagging extra information onto a variable. Several flags are used internally by bitbake but they can be used externally too if needed. The standard operations mentioned above also work on flags.</para>
<para><screen><varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] = "value"</screen></para>
<para>In this example, <varname>VARIABLE</varname> has a flag, <varname>SOMEFLAG</varname>, which is set to <literal>value</literal>.</para>
</section>
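<para>Since the standard operations also work on flags, a flag can, for example, be extended in place (names illustrative only):</para>
<para><screen><varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] = "value"
<varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] += "another value"</screen></para>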
@@ -200,19 +200,19 @@ include</literal> directive.</para>
<section>
<title>Tasks</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>In BitBake, each step that needs to be run for a given .bb is known as a task. There is a command <literal>addtask</literal> to add new tasks (the task must be defined as Python executable metadata and its name must start with <quote>do_</quote>) and to describe intertask dependencies.</para>
<para>In BitBake, each step that needs to be run for a given .bb is known as a task. There is a command <literal>addtask</literal> to add new tasks (must be a defined python executable metadata and must start with <quote>do_</quote>) and describe intertask dependencies.</para>
<para><screen>python do_printdate () {
    import time
    print time.strftime('%Y%m%d', time.gmtime())
}

addtask printdate before do_build</screen></para>
<para>This defines the necessary Python function and adds it as a task which is now a dependency of do_build, the default task. If anyone executes the do_build task, that will result in do_printdate being run first.</para>
<para>This defines the necessary python function and adds it as a task which is now a dependency of do_build (the default task). If anyone executes the do_build task, that will result in do_printdate being run first.</para>
</section>
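<para><literal>addtask</literal> can also place a task between existing tasks; a sketch with a hypothetical do_deploy task that has to run after do_build, and a variant of the example above constrained on both sides:</para>
<para><screen>addtask deploy after do_build
addtask printdate after do_fetch before do_build</screen></para>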
<section>
<title>Events</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>BitBake allows installation of event handlers. Events are triggered at certain points during operation, such as the beginning of operation against a given .bb, the start of a given task, task failure, task success, et cetera. The intent is to make it easy to do things like email notification on build failure.</para>
<para>BitBake allows to install event handlers. Events are triggered at certain points during operation, such as, the beginning of operation against a given .bb, the start of a given task, task failure, task success, et cetera. The intent was to make it easy to do things like email notifications on build failure.</para>
<para><screen>addhandler myclass_eventhandler
python myclass_eventhandler() {
    from bb.event import getName
@@ -228,20 +228,20 @@ of the event and the content of the <varname>FILE</varname> variable.</para>
</section>
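<para>The handler body above is cut off by the diff; a minimal sketch of a complete handler in the same style, consistent with the hunk context's mention of printing the event name and the <varname>FILE</varname> variable, might look like:</para>
<para><screen>addhandler myclass_eventhandler
python myclass_eventhandler() {
    from bb.event import getName
    from bb import data
    print("The name of the Event is %s" % getName(e))
    print("The file we run for is %s" % data.getVar('FILE', e.data, True))
}</screen></para>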
<section>
<title>Variants</title>
<para>Two BitBake features exist to facilitate the creation of multiple buildable incarnations from a single recipe file.</para>
<para>The first is <varname>BBCLASSEXTEND</varname>. This variable is a space separated list of classes used to "extend" the recipe for each variant. As an example, setting <screen>BBCLASSEXTEND = "native"</screen> results in a second incarnation of the current recipe being available. This second incarnation will have the "native" class inherited.</para>
<para>The second feature is <varname>BBVERSIONS</varname>. This variable allows a single recipe to build multiple versions of a project from a single recipe file, and allows you to specify conditional metadata (using the <varname>OVERRIDES</varname> mechanism) for a single version, or an optionally named range of versions:</para>
<para>Two Bitbake features exist to facilitate the creation of multiple buildable incarnations from a single recipe file.</para>
<para>The first is <varname>BBCLASSEXTEND</varname>. This variable is a space separated list of classes to utilize to "extend" the recipe for each variant. As an example, setting <screen>BBCLASSEXTEND = "native"</screen> results in a second incarnation of the current recipe being available. This second incarantion will have the "native" class inherited.</para>
<para>The second feature is <varname>BBVERSIONS</varname>. This variable allows a single recipe to be able to build multiple versions of a project from a single recipe file, and allows you to specify conditional metadata (using the <varname>OVERRIDES</varname> mechanism) for a single version, or an optionally named range of versions:</para>
<para><screen>BBVERSIONS = "1.0 2.0 git"
SRC_URI_git = "git://someurl/somepath.git"</screen></para>
<para><screen>BBVERSIONS = "1.0.[0-6]:1.0.0+ \
              1.0.[7-9]:1.0.7+"
SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;patch=1"</screen></para>
<para>Note that the name of the range will default to the original version of the recipe, so given OE, a recipe file of foo_1.0.0+.bb will default the name of its versions to 1.0.0+. This is useful, as the range name is not only placed into overrides; it's also made available for the metadata to use in the form of the <varname>BPV</varname> variable, for use in file:// search paths (<varname>FILESPATH</varname>).</para>
<para>Note that the name of the range will default to the original version of the recipe, so given OE, a recipe file of foo_1.0.0+.bb will default the name of its versions to 1.0.0+. This is useful, as the range name is not only placed into overrides, it's also made available for the metadata to use in the form of the <varname>BPV</varname> variable, for use in file:// search paths (<varname>FILESPATH</varname>).</para>
</section>
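<para>With the native extension in place, another recipe can depend on the second incarnation under its extended name; a sketch with hypothetical recipe names:</para>
<para><screen># in foo_1.0.bb
BBCLASSEXTEND = "native"

# in another recipe: depend on the native incarnation of foo
DEPENDS = "foo-native"</screen></para>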
</section>
<section>
<title>Dependency handling</title>
<para>BitBake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifying task dependencies is therefore needed.</para>
<title>Dependency Handling</title>
<para>Bitbake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifing task dependencies is therefore needed.</para>
<section>
<title>Dependencies internal to the .bb file</title>
<para>Where the dependencies are internal to a given .bb file, the dependencies are handled by the previously detailed addtask directive.</para>
@@ -249,26 +249,26 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat

<section>
<title>DEPENDS</title>
<para>DEPENDS lists build time dependencies. The 'deptask' flag for tasks is used to signify the task of each item listed in DEPENDS which must have completed before that task can be executed.</para>
<para>DEPENDS is taken to specify build time dependencies. The 'deptask' flag for tasks is used to signify the task of each DEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_configure[deptask] = "do_populate_staging"</screen></para>
<para>means the do_populate_staging task of each item in DEPENDS must have completed before do_configure can execute.</para>
</section>
<section>
<title>RDEPENDS</title>
<para>RDEPENDS lists runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each item listed in RDEPENDS which must have completed before that task can be executed.</para>
<para>RDEPENDS is taken to specify runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each RDEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_package_write[rdeptask] = "do_package"</screen></para>
<para>means the do_package task of each item in RDEPENDS must have completed before do_package_write can execute.</para>
</section>
<section>
<title>Recursive DEPENDS</title>
<para>These are specified with the 'recdeptask' flag, which is used to signify the task(s) of each DEPENDS which must have completed before that task can be executed. It applies recursively, so the DEPENDS of each item in the original DEPENDS must be met, and so on.</para>
<para>These are specified with the 'recdeptask' flag and is used signify the task(s) of each DEPENDS which must have completed before that task can be executed. It applies recursively so also, the DEPENDS of each item in the original DEPENDS must be met and so on.</para>
</section>
<section>
<title>Recursive RDEPENDS</title>
<para>These are specified with the 'recrdeptask' flag, which is used to signify the task(s) of each RDEPENDS which must have completed before that task can be executed. It applies recursively, so the RDEPENDS of each item in the original RDEPENDS must be met, and so on. It also runs all DEPENDS first.</para>
<para>These are specified with the 'recrdeptask' flag and is used signify the task(s) of each RDEPENDS which must have completed before that task can be executed. It applies recursively so also, the RDEPENDS of each item in the original RDEPENDS must be met and so on. It also runs all DEPENDS first too.</para>
</section>
<section>
<title>Inter task</title>
<title>Inter Task</title>
<para>The 'depends' flag for tasks is a more generic form which allows an interdependency on specific tasks rather than specifying the data in DEPENDS or RDEPENDS.</para>
<para><screen>do_patch[depends] = "quilt-native:do_populate_staging"</screen></para>
<para>means the do_populate_staging task of the target quilt-native must have completed before the do_patch task can execute.</para>
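<para>Pulling the flags above together, a hypothetical recipe fragment could declare its dependency tasks like this (package names illustrative only):</para>
<para><screen>DEPENDS = "zlib"
RDEPENDS = "busybox"

do_configure[deptask] = "do_populate_staging"
do_package_write[rdeptask] = "do_package"
do_patch[depends] = "quilt-native:do_populate_staging"</screen></para>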
@@ -278,34 +278,35 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
<section>
<title>Parsing</title>
<section>
<title>Configuration files</title>
<para>The first kind of metadata in BitBake is configuration metadata. This metadata is global, and therefore affects <emphasis>all</emphasis> packages and tasks which are executed.</para>
<para>BitBake will first search the current working directory for an optional "conf/bblayers.conf" configuration file. This file is expected to contain a BBLAYERS variable which is a space delimited list of 'layer' directories. For each directory in this list, a "conf/layer.conf" file will be searched for and parsed with the LAYERDIR variable being set to the directory where the layer was found. The idea is these files will setup BBPATH and other variables correctly for a given build directory automatically for the user.</para>
<para>BitBake will then expect to find 'conf/bitbake.conf' somewhere in the user specified <envar>BBPATH</envar>. That configuration file generally has include directives to pull in any other metadata (generally files specific to architecture, machine, <emphasis>local</emphasis> and so on).</para>
<title>Configuration Files</title>
<para>The first of the classifications of metadata in BitBake is configuration metadata. This metadata is global, and therefore affects <emphasis>all</emphasis> packages and tasks which are executed.</para>
<para>Bitbake will first search the current working directory for an optional "conf/bblayers.conf" configuration file. This file is expected to contain a BBLAYERS variable which is a space delimited list of 'layer' directories. For each directory in this list a "conf/layer.conf" file will be searched for and parsed with the LAYERDIR variable being set to the directory where the layer was found. The idea is these files will setup BBPATH and other variables correctly for a given build directory automatically for the user.</para>
<para>Bitbake will then expect to find 'conf/bitbake.conf' somewhere in the user specified <envar>BBPATH</envar>. That configuration file generally has include directives to pull in any other metadata (generally files specific to architecture, machine, <emphasis>local</emphasis> and so on.</para>
<para>Only variable definitions and include directives are allowed in .conf files.</para>
</section>
<section>
<title>Classes</title>
<para>BitBake classes are our rudimentary inheritance mechanism. As briefly mentioned in the metadata introduction, they're parsed when an <literal>inherit</literal> directive is encountered, and they are located in classes/ relative to the directories in <envar>BBPATH</envar>.</para>
<para>BitBake classes are our rudimentary inheritance mechanism. As briefly mentioned in the metadata introduction, they're parsed when an <literal>inherit</literal> directive is encountered, and they are located in classes/ relative to the dirs in <envar>BBPATH</envar>.</para>
</section>
<section>
<title>.bb files</title>
<title>.bb Files</title>
<para>A BitBake (.bb) file is a logical unit of tasks to be executed. Normally this is a package to be built. Inter-.bb dependencies are obeyed. The files themselves are located via the <varname>BBFILES</varname> variable, which is set to a space separated list of .bb files, and does handle wildcards.</para>
</section>
</section>
</chapter>

<chapter>
<title>File download support</title>
<title>File Download support</title>
<section>
<title>Overview</title>
<para>BitBake provides support for downloading files; this procedure is called fetching. The SRC_URI variable is normally used to tell BitBake which files to fetch. The next sections will describe the available fetchers and their options. Each fetcher honors a set of variables and per-URI parameters, separated by a <quote>;</quote>, each consisting of a key and a value. The semantics of the variables and parameters are defined by the fetcher. BitBake tries to have consistent semantics between the different fetchers.
<para>BitBake provides support to download files this procedure is called fetching. The SRC_URI is normally used to indicate BitBake which files to fetch. The next sections will describe th available fetchers and the options they have. Each Fetcher honors a set of Variables and a per URI parameters separated by a <quote>;</quote> consisting of a key and a value. The semantic of the Variables and Parameters are defined by the Fetcher. BitBakes tries to have a consistent semantic between the different Fetchers.
</para>
</section>

<section>
<title>Local file fetcher</title>
<para>The URN for the local file fetcher is <emphasis>file</emphasis>. The filename can be either absolute or relative. If the filename is relative, <varname>FILESPATH</varname> and <varname>FILESDIR</varname> will be used to find the appropriate relative file, depending on the <varname>OVERRIDES</varname>. Single files and complete directories can be specified.
<title>Local File Fetcher</title>
<para>The URN for the Local File Fetcher is <emphasis>file</emphasis>. The filename can be either absolute or relative. If the filename is relative <varname>FILESPATH</varname> and <varname>FILESDIR</varname> will be used to find the appropriate relative file depending on the <varname>OVERRIDES</varname>. Single files and complete directories can be specified.
<screen><varname>SRC_URI</varname>= "file://relativefile.patch"
<varname>SRC_URI</varname>= "file://relativefile.patch;this=ignored"
<varname>SRC_URI</varname>= "file:///Users/ich/very_important_software"
@@ -314,11 +315,10 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
</section>

<section>
<title>CVS file fetcher</title>
<para>The URN for the CVS fetcher is <emphasis>cvs</emphasis>. This fetcher honors the variables <varname>DL_DIR</varname>, <varname>SRCDATE</varname>, <varname>FETCHCOMMAND_cvs</varname>, <varname>UPDATECOMMAND_cvs</varname>. <varname>DL_DIR</varname> specifies where a temporary checkout is saved. <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build). <varname>FETCHCOMMAND</varname> and <varname>UPDATECOMMAND</varname> specify which executables to use for the CVS checkout or update.
<title>CVS File Fetcher</title>
<para>The URN for the CVS Fetcher is <emphasis>cvs</emphasis>. This Fetcher honors the variables <varname>DL_DIR</varname>, <varname>SRCDATE</varname>, <varname>FETCHCOMMAND_cvs</varname>, <varname>UPDATECOMMAND_cvs</varname>. <varname>DL_DIR</varname> specifies where a temporary checkout is saved, <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build), <varname>FETCHCOMMAND</varname> and <varname>UPDATECOMMAND</varname> specify which executables should be used when doing the CVS checkout or update.
</para>
<para>The supported parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname> and <varname>scmdata</varname>. The <varname>module</varname> specifies which module to check out; the <varname>tag</varname> describes which CVS TAG should be used for the checkout. By default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration to check out a specific date. The special value of "now" will cause the checkout to be updated on every build. <varname>method</varname> is by default <emphasis>pserver</emphasis>. If <emphasis>ext</emphasis> is used, the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally, <varname>localdir</varname> is used to check out into a special directory relative to <varname>CVSDIR</varname>.

<para>The supported Parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname> and <varname>scmdata</varname>. The <varname>module</varname> specifies which module to check out, the <varname>tag</varname> describes which CVS TAG should be used for the checkout. By default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration to checkout a specific date. The special value of "now" will cause the checkout to be updated on every build.<varname>method</varname> is by default <emphasis>pserver</emphasis>, if <emphasis>ext</emphasis> is used the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally <varname>localdir</varname> is used to checkout into a special directory relative to <varname>CVSDIR</varname>. If <varname>scmdata</varname> is set to <quote>keep</quote>
<screen><varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
<varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
</screen>
@@ -326,10 +326,11 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
</section>

<section>
<title>HTTP/FTP fetcher</title>
<para>The URNs for the HTTP/FTP fetcher are <emphasis>http</emphasis>, <emphasis>https</emphasis> and <emphasis>ftp</emphasis>. This fetcher honors the variables <varname>DL_DIR</varname>, <varname>FETCHCOMMAND_wget</varname>, <varname>PREMIRRORS</varname>, <varname>MIRRORS</varname>. The <varname>DL_DIR</varname> defines where to store the fetched file. <varname>FETCHCOMMAND</varname> contains the command used for fetching. <quote>${URI}</quote> and <quote>${FILES}</quote> will be replaced by the URI and basename of the file to be fetched. <varname>PREMIRRORS</varname> will be tried first when fetching a file. If that fails, the actual file will be tried and finally all <varname>MIRRORS</varname> will be tried.
<title>HTTP/FTP Fetcher</title>
<para>The URNs for the HTTP/FTP are <emphasis>http</emphasis>, <emphasis>https</emphasis> and <emphasis>ftp</emphasis>. This Fetcher honors the variables <varname>DL_DIR</varname>, <varname>FETCHCOMMAND_wget</varname>, <varname>PREMIRRORS</varname>, <varname>MIRRORS</varname>. The <varname>DL_DIR</varname> defines where to store the fetched file, <varname>FETCHCOMMAND</varname> contains the command used for fetching. <quote>${URI}</quote> and <quote>${FILES}</quote> will be replaced by the uri and basename of the to be fetched file. <varname>PREMIRRORS</varname> will be tried first when fetching a file if that fails the actual file will be tried and finally all <varname>MIRRORS</varname> will be tried.
</para>
<para>The only supported parameter is <varname>md5sum</varname>. After a fetch the <varname>md5sum</varname> of the file will be calculated and the two sums will be compared.
<para>The only supported Parameter is <varname>md5sum</varname>. After a fetch the <varname>md5sum</varname> of the file will be calculated and the two sums will be compared.
</para>
<para><screen><varname>SRC_URI</varname> = "http://oe.handhelds.org/not_there.aac;md5sum=12343"
<varname>SRC_URI</varname> = "ftp://oe.handhelds.org/not_there_as_well.aac;md5sum=1234"
@@ -338,19 +339,19 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
</section>

<section>
<title>SVK fetcher</title>
<title>SVK Fetcher</title>
<para>
<emphasis>Currently NOT supported</emphasis>
</para>
</section>

<section>
<title>SVN fetcher</title>
<para>The URN for the SVN fetcher is <emphasis>svn</emphasis>.
<title>SVN Fetcher</title>
<para>The URN for the SVN Fetcher is <emphasis>svn</emphasis>.
</para>
<para>This fetcher honors the variables <varname>FETCHCOMMAND_svn</varname>, <varname>DL_DIR</varname>, <varname>SRCDATE</varname>. <varname>FETCHCOMMAND</varname> contains the subversion command. <varname>DL_DIR</varname> is the directory where tarballs will be saved. <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build).
<para>This Fetcher honors the variables <varname>FETCHCOMMAND_svn</varname>, <varname>DL_DIR</varname>, <varname>SRCDATE</varname>. <varname>FETCHCOMMAND</varname> contains the subversion command, <varname>DL_DIR</varname> is the directory where tarballs will be saved, <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build).
</para>
<para>The supported parameters are <varname>proto</varname>, <varname>rev</varname> and <varname>scmdata</varname>. <varname>proto</varname> is the Subversion protocol, <varname>rev</varname> is the Subversion revision. If <varname>scmdata</varname> is set to <quote>keep</quote>, the <quote>.svn</quote> directories will be available during compile-time.
<para>The supported Parameters are <varname>proto</varname>, <varname>rev</varname> and <varname>scmdata</varname>. <varname>proto</varname> is the subversion protocol, <varname>rev</varname> is the subversion revision. If <varname>scmdata</varname> is set to <quote>keep</quote>, the <quote>.svn</quote> directories will be available during compile-time.
</para>
<para><screen><varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn;module=vip;proto=http;rev=667"
<varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn/;module=opie;proto=svn+ssh;date=20060126"
@@ -358,12 +359,12 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
</section>

<section>
<title>GIT fetcher</title>
<title>GIT Fetcher</title>
<para>The URN for the GIT Fetcher is <emphasis>git</emphasis>.
</para>
<para>The variables <varname>DL_DIR</varname> and <varname>GITDIR</varname> are used. <varname>DL_DIR</varname> will be used to store the checked-out version. <varname>GITDIR</varname> will be used as the base directory where the git tree is cloned to.
</para>
<para>The parameters are <emphasis>tag</emphasis>, <emphasis>protocol</emphasis> and <emphasis>scmdata</emphasis>. <emphasis>tag</emphasis> is a Git tag, the default is <quote>master</quote>. <emphasis>protocol</emphasis> is the Git protocol to use and defaults to <quote>rsync</quote>. If <emphasis>scmdata</emphasis> is set to <quote>keep</quote>, the <quote>.git</quote> directory will be available during compile-time.
<para>The Parameters are <emphasis>tag</emphasis>, <emphasis>protocol</emphasis> and <emphasis>scmdata</emphasis>. <emphasis>tag</emphasis> is a git tag, the default is <quote>master</quote>. <emphasis>protocol</emphasis> is the git protocol to use and defaults to <quote>rsync</quote>. If <emphasis>scmdata</emphasis> is set to <quote>keep</quote>, the <quote>.git</quote> directory will be available during compile-time.
</para>
<para><screen><varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
<varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
@@ -374,13 +375,13 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat


<chapter>
<title>The BitBake command</title>
<title>The bitbake command</title>
<section>
<title>Introduction</title>
<para>bitbake is the primary command in the system. It facilitates executing tasks in a single .bb file, or executing a given task on a set of multiple .bb files, accounting for interdependencies amongst them.</para>
</section>
<section>
<title>Usage and syntax</title>
<title>Usage and Syntax</title>
<para>
<screen><prompt>$ </prompt>bitbake --help
usage: bitbake [options] [package ...]
@@ -416,6 +417,8 @@ options:
                        than once.
  -n, --dry-run         don't execute, just go through the motions
  -p, --parse-only      quit after parsing the BB files (developers only)
  -d, --disable-psyco   disable using the psyco just-in-time compiler (not
                        recommended)
  -s, --show-versions   show current and preferred versions of all packages
  -e, --environment     show the global or per-package environment (this is
                        what used to be bbread)
@@ -435,7 +438,7 @@ options:
<para>
<example>
<title>Executing a task against a single .bb</title>
<para>Executing tasks for a single file is relatively simple. You specify the file in question, and BitBake parses it and executes the specified task (or <quote>build</quote> by default). It obeys intertask dependencies when doing so.</para>
<para>Executing tasks for a single file is relatively simple. You specify the file in question, and bitbake parses it and executes the specified task (or <quote>build</quote> by default). It obeys intertask dependencies when doing so.</para>
<para><quote>clean</quote> task:</para>
<para><screen><prompt>$ </prompt>bitbake -b blah_1.0.bb -c clean</screen></para>
<para><quote>build</quote> task:</para>
@@ -445,8 +448,8 @@ options:
<para>
<example>
<title>Executing tasks against a set of .bb files</title>
<para>There are a number of additional complexities introduced when one wants to manage multiple .bb files. Clearly there needs to be a way to tell BitBake what files are available, and of those, which we want to execute at this time. There also needs to be a way for each .bb to express its dependencies, both for build time and runtime. There must be a way for the user to express their preferences when multiple .bb's provide the same functionality, or when there are multiple versions of a .bb.</para>
<para>The next section, Metadata, outlines how to specify such things.</para>
<para>There are a number of additional complexities introduced when one wants to manage multiple .bb files. Clearly there needs to be a way to tell bitbake what files are available, and of those, which we want to execute at this time. There also needs to be a way for each .bb to express its dependencies, both for build time and runtime. There must be a way for the user to express their preferences when multiple .bb's provide the same functionality, or when there are multiple versions of a .bb.</para>
<para>The next section, Metadata, outlines how one goes about specifying such things.</para>
<para>Note that the bitbake command, when not using --buildfile, accepts a <varname>PROVIDER</varname>, not a filename or anything else. By default, a .bb generally PROVIDES its packagename, packagename-version, and packagename-version-revision.</para>
<screen><prompt>$ </prompt>bitbake blah</screen>
<screen><prompt>$ </prompt>bitbake blah-1.0</screen>
@@ -458,8 +461,8 @@ options:
<example>
<title>Generating dependency graphs</title>
<para>BitBake is able to generate dependency graphs using the dot syntax. These graphs can be converted
to images using the <application>dot</application> application from <ulink url="http://www.graphviz.org">Graphviz</ulink>. Two files will be written into the current working directory: <emphasis>depends.dot</emphasis>, containing dependency information at the package level, and <emphasis>task-depends.dot</emphasis>, containing a breakdown of the dependencies at the task level. To stop depending on common depends, one can use <prompt>-I depend</prompt> to omit these from the graph; this can lead to more readable graphs. This way, <varname>DEPENDS</varname> from inherited classes such as base.bbclass can be removed from the graph.</para>
to images using the <application>dot</application> application from <ulink url="http://www.graphviz.org">graphviz</ulink>. Two files will be written into the current working directory, <emphasis>depends.dot</emphasis> containing dependency information at the package level and <emphasis>task-depends.dot</emphasis> containing a breakdown of the dependencies at the task level. To stop depending on common depends one can use the <prompt>-I depend</prompt> to omit these from the graph. This can lead to more readable graphs. E.g. this way <varname>DEPENDS</varname> from inherited classes, e.g. base.bbclass, can be removed from the graph.</para>
<screen><prompt>$ </prompt>bitbake -g blah</screen>
<screen><prompt>$ </prompt>bitbake -g -I virtual/whatever -I bloom blah</screen>
</example>
@@ -467,20 +470,20 @@ Two files will be written into the current working directory, <emphasis>depends.
</section>
<section>
<title>Special variables</title>
<para>Certain variables affect BitBake operation:</para>
<para>Certain variables affect bitbake operation:</para>
<section>
<title><varname>BB_NUMBER_THREADS</varname></title>
<para>The number of threads BitBake should run at once (default: 1).</para>
<para>The number of threads bitbake should run at once (default: 1).</para>
</section>
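<para>For example, a configuration file entry asking BitBake to run four tasks at once:</para>
<para><screen>BB_NUMBER_THREADS = "4"</screen></para>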
</section>
<section>
<title>Metadata</title>
<para>As you may have seen in the usage information, or in the information about .bb files, the <varname>BBFILES</varname> variable is how the BitBake tool locates its files. This variable is a space separated list of files that are available, and supports wildcards.
<para>As you may have seen in the usage information, or in the information about .bb files, the BBFILES variable is how the bitbake tool locates its files. This variable is a space separated list of files that are available, and supports wildcards.
<example>
<title>Setting BBFILES</title>
<programlisting><varname>BBFILES</varname> = "/path/to/bbfiles/*.bb"</programlisting>
</example></para>
<para>With regard to dependencies, it expects the .bb to define a <varname>DEPENDS</varname> variable, which contains a space separated list of <quote>package names</quote>, which themselves are the <varname>PN</varname> variable. The <varname>PN</varname> variable is, in general, set to a component of the .bb filename by default.</para>
<para>With regard to dependencies, it expects the .bb to define a <varname>DEPENDS</varname> variable, which contains a space separated list of <quote>package names</quote>, which themselves are the <varname>PN</varname> variable. The <varname>PN</varname> variable is, in general, by default, set to a component of the .bb filename.</para>
<example>
<title>Depending on another .bb</title>
<para>a.bb:
@@ -493,7 +496,7 @@ DEPENDS += "package-b"</screen>
</example>
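<para>The hunk above elides the body of this example; a minimal sketch of the intent, using hypothetical recipe file names, might read:</para>
<para><screen># package-a_1.0.bb (PN defaults to "package-a" from the filename)
DEPENDS += "package-b"

# package-b_1.0.bb needs nothing special; it PROVIDES "package-b" by default</screen></para>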
<example>
<title>Using PROVIDES</title>
<para>This example shows the usage of the <varname>PROVIDES</varname> variable, which allows a given .bb to specify what functionality it provides.</para>
<para>This example shows the usage of the PROVIDES variable, which allows a given .bb to specify what functionality it provides.</para>
<para>package1.bb:
<screen>PROVIDES += "virtual/package"</screen>
</para>
@@ -503,16 +506,16 @@ DEPENDS += "package-b"</screen>
<para>package3.bb:
<screen>PROVIDES += "virtual/package"</screen>
</para>
<para>As you can see, we have two different .bb's that provide the same functionality (virtual/package). Clearly, there needs to be a way for the person running BitBake to control which of those providers gets used. There is, indeed, such a way.</para>
<para>As you can see, here there are two different .bb's that provide the same functionality (virtual/package). Clearly, there needs to be a way for the person running bitbake to control which of those providers gets used. There is, indeed, such a way.</para>
<para>The following would go into a .conf file, to select package1:
<screen>PREFERRED_PROVIDER_virtual/package = "package1"</screen>
</para>
</example>
<example>
<title>Specifying version preference</title>
<para>When there are multiple <quote>versions</quote> of a given package, BitBake defaults to selecting the most recent version, unless otherwise specified. If the .bb in question has a <varname>DEFAULT_PREFERENCE</varname> set lower than the other .bb's (default is 0), then it will not be selected. This allows the person or persons maintaining the repository of .bb files to specify their preference for the default selected version. In addition, the user can specify their preferred version.</para>
<para>When there are multiple <quote>versions</quote> of a given package, bitbake defaults to selecting the most recent version, unless otherwise specified. If the .bb in question has a <varname>DEFAULT_PREFERENCE</varname> set lower than the other .bb's (default is 0), then it will not be selected. This allows the person or persons maintaining the repository of .bb files to specify their preferences for the default selected version. In addition, the user can specify their preferences with regard to version.</para>
<para>If the first .bb is named <filename>a_1.1.bb</filename>, then the <varname>PN</varname> variable will be set to <quote>a</quote>, and the <varname>PV</varname> variable will be set to 1.1.</para>
<para>If we then have an <filename>a_1.2.bb</filename>, BitBake will choose 1.2 by default. However, if we define the following variable in a .conf that BitBake parses, we can change that.
<para>If we then have an <filename>a_1.2.bb</filename>, bitbake will choose 1.2 by default. However, if we define the following variable in a .conf that bitbake parses, we can change that.
<screen>PREFERRED_VERSION_a = "1.1"</screen>
</para>
</example>
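<para>Relatedly, a repository maintainer can keep a newer recipe from being selected by default via <varname>DEFAULT_PREFERENCE</varname>; a sketch for a hypothetical development version:</para>
<para><screen># a_1.2.bb: do not select this version by default, even though 1.2 is newer
DEFAULT_PREFERENCE = "-1"</screen></para>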
@@ -21,7 +21,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

__version__ = "1.13.2"
__version__ = "1.11.0"

import sys
if sys.version_info < (2, 6, 0):
@@ -29,7 +29,7 @@ if sys.version_info < (2, 6, 0):

import os
import logging

import traceback

class NullHandler(logging.Handler):
    def emit(self, record):
@@ -51,6 +51,9 @@ class BBLogger(Logger):
    def verbose(self, msg, *args, **kwargs):
        return self.log(logging.INFO - 1, msg, *args, **kwargs)

    def exception(self, msg, *args, **kwargs):
        return self.critical("%s\n%s" % (msg, traceback.format_exc()), *args, **kwargs)

logging.raiseExceptions = False
logging.setLoggerClass(BBLogger)

@@ -76,10 +79,6 @@ def plain(*args):
    logger.plain(''.join(args))

def debug(lvl, *args):
    if isinstance(lvl, basestring):
        logger.warn("Passed invalid debug level '%s' to bb.debug", lvl)
        args = (lvl,) + args
        lvl = 1
    logger.debug(lvl, ''.join(args))

def note(*args):
@@ -96,7 +95,7 @@ def fatal(*args):
    sys.exit(1)


def deprecated(func, name=None, advice=""):
def deprecated(func, name = None, advice = ""):
    """This is a decorator which can be used to mark functions
    as deprecated. It will result in a warning being emitted
    when the function is used."""
@@ -110,8 +109,8 @@ def deprecated(func, name=None, advice=""):
    def newFunc(*args, **kwargs):
        warnings.warn("Call to deprecated function %s%s." % (name,
                                                             advice),
                      category=DeprecationWarning,
                      stacklevel=2)
                      category = PendingDeprecationWarning,
                      stacklevel = 2)
        return func(*args, **kwargs)
    newFunc.__name__ = func.__name__
    newFunc.__doc__ = func.__doc__

@@ -28,12 +28,11 @@
import os
import sys
import logging
import shlex
import bb
import bb.msg
import bb.process
from contextlib import nested
from bb import data, event, utils
from bb import data, event, mkdirhier, utils

bblogger = logging.getLogger('BitBake')
logger = logging.getLogger('BitBake.Build')
@@ -163,7 +162,6 @@ def exec_func(func, d, dirs = None):
        lockfiles = None

    tempdir = data.getVar('T', d, 1)
    bb.utils.mkdirhier(tempdir)
    runfile = os.path.join(tempdir, 'run.{0}.{1}'.format(func, os.getpid()))

    with bb.utils.fileslocked(lockfiles):
@@ -183,16 +181,16 @@ def exec_func_python(func, d, runfile, cwd=None):
    """Execute a python BB 'function'"""

    bbfile = d.getVar('FILE', True)
    try:
        olddir = os.getcwd()
    except OSError:
        olddir = None
    code = _functionfmt.format(function=func, body=d.getVar(func, True))
    bb.utils.mkdirhier(os.path.dirname(runfile))
    with open(runfile, 'w') as script:
        script.write(code)

    if cwd:
        try:
            olddir = os.getcwd()
        except OSError:
            olddir = None
        os.chdir(cwd)

    try:
@@ -204,11 +202,8 @@ def exec_func_python(func, d, runfile, cwd=None):

        raise FuncFailed(func, None)
    finally:
        if cwd and olddir:
            try:
                os.chdir(olddir)
            except OSError:
                pass
        if olddir:
            os.chdir(olddir)

def exec_func_shell(function, d, runfile, cwd=None):
    """Execute a shell function from the metadata
@@ -226,11 +221,14 @@ def exec_func_shell(function, d, runfile, cwd=None):
        if logger.isEnabledFor(logging.DEBUG):
            script.write("set -x\n")
        data.emit_func(function, script, d)
        if cwd:
            script.write("cd %s\n" % cwd)
        script.write("%s\n" % function)

    os.chmod(runfile, 0775)
        script.write("%s\n" % function)
        os.fchmod(script.fileno(), 0775)

    env = {
        'PATH': d.getVar('PATH', True),
        'LC_ALL': 'C',
    }

    cmd = runfile

@@ -240,7 +238,8 @@ def exec_func_shell(function, d, runfile, cwd=None):
        logfile = sys.stdout

    try:
        bb.process.run(cmd, shell=False, stdin=NULL, log=logfile)
        bb.process.run(cmd, env=env, cwd=cwd, shell=False, stdin=NULL,
                       log=logfile)
    except bb.process.CmdError:
        logfn = d.getVar('BB_LOGFILE', True)
        raise FuncFailed(function, logfn)
@@ -383,10 +382,10 @@ def stamp_internal(taskname, d, file_name):
    taskflagname = taskname.replace("_setscene", "")

    if file_name:
        stamp = d.stamp_base[file_name].get(taskflagname) or d.stamp[file_name]
        stamp = d.stamp[file_name]
        extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
    else:
        stamp = d.getVarFlag(taskflagname, 'stamp-base', True) or d.getVar('STAMP', True)
        stamp = d.getVar('STAMP', True)
        file_name = d.getVar('BB_FILENAME', True)
        extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""

@@ -412,12 +411,6 @@ def make_stamp(task, d, file_name = None):
    f = open(stamp, "w")
    f.close()

    # If we're in task context, write out a signature file for each task
    # as it completes
    if not task.endswith("_setscene") and task != "do_setscene" and not file_name:
        file_name = d.getVar('BB_FILENAME', True)
        bb.parse.siggen.dump_sigtask(file_name, task, d.getVar('STAMP', True), True)

def del_stamp(task, d, file_name = None):
    """
    Removes a stamp for a given task
@@ -463,7 +456,6 @@ def add_tasks(tasklist, d):
        getTask('nostamp')
        getTask('fakeroot')
        getTask('noexec')
        getTask('umask')
        task_deps['parents'][task] = []
        for dep in flags['deps']:
            dep = data.expand(dep, d)

@@ -30,7 +30,7 @@

import os
import logging
from collections import defaultdict
from collections import defaultdict, namedtuple
import bb.data
import bb.utils

@@ -43,15 +43,46 @@ except ImportError:
    logger.info("Importing cPickle failed. "
                "Falling back to a very slow implementation.")

__cache_version__ = "141"
__cache_version__ = "137"

def getCacheFile(path, filename):
    return os.path.join(path, filename)
recipe_fields = (
    'pn',
    'pv',
    'pr',
    'pe',
    'defaultpref',
    'depends',
    'provides',
    'task_deps',
    'stamp',
    'stamp_extrainfo',
    'broken',
    'not_world',
    'skipped',
    'timestamp',
    'packages',
    'packages_dynamic',
    'rdepends',
    'rdepends_pkg',
    'rprovides',
    'rprovides_pkg',
    'rrecommends',
    'rrecommends_pkg',
    'nocache',
    'variants',
    'file_depends',
    'tasks',
    'basetaskhashes',
    'hashfilename',
    'inherits',
    'summary',
    'license',
    'section',
)

# RecipeInfoCommon defines common data retrieving methods
# from meta data for caches. CoreRecipeInfo as well as other
# Extra RecipeInfo needs to inherit this class
class RecipeInfoCommon(object):

class RecipeInfo(namedtuple('RecipeInfo', recipe_fields)):
    __slots__ = ()

    @classmethod
    def listvar(cls, var, metadata):
@@ -84,166 +115,64 @@ class RecipeInfoCommon(object):
    def getvar(cls, var, metadata):
        return metadata.getVar(var, True) or ''


class CoreRecipeInfo(RecipeInfoCommon):
    __slots__ = ()

    cachefile = "bb_cache.dat"

    def __init__(self, filename, metadata):
        self.file_depends = metadata.getVar('__depends', False)
        self.timestamp = bb.parse.cached_mtime(filename)
        self.variants = self.listvar('__VARIANTS', metadata) + ['']
        self.appends = self.listvar('__BBAPPEND', metadata)
        self.nocache = self.getvar('__BB_DONT_CACHE', metadata)

        self.skipreason = self.getvar('__SKIPPED', metadata)
        if self.skipreason:
            self.skipped = True
            self.provides = self.depvar('PROVIDES', metadata)
            self.rprovides = self.depvar('RPROVIDES', metadata)
            return

        self.tasks = metadata.getVar('__BBTASKS', False)

        self.pn = self.getvar('PN', metadata)
        self.packages = self.listvar('PACKAGES', metadata)
        if not self.pn in self.packages:
            self.packages.append(self.pn)

        self.basetaskhashes = self.taskvar('BB_BASEHASH', self.tasks, metadata)
        self.hashfilename = self.getvar('BB_HASHFILENAME', metadata)

        self.file_depends = metadata.getVar('__depends', False)
        self.task_deps = metadata.getVar('_task_deps', False) or {'tasks': [], 'parents': {}}

        self.skipped = False
        self.pe = self.getvar('PE', metadata)
        self.pv = self.getvar('PV', metadata)
        self.pr = self.getvar('PR', metadata)
        self.defaultpref = self.intvar('DEFAULT_PREFERENCE', metadata)
        self.broken = self.getvar('BROKEN', metadata)
        self.not_world = self.getvar('EXCLUDE_FROM_WORLD', metadata)
        self.stamp = self.getvar('STAMP', metadata)
        self.stamp_base = self.flaglist('stamp-base', self.tasks, metadata)
        self.stamp_extrainfo = self.flaglist('stamp-extra-info', self.tasks, metadata)
        self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
        self.depends = self.depvar('DEPENDS', metadata)
        self.provides = self.depvar('PROVIDES', metadata)
        self.rdepends = self.depvar('RDEPENDS', metadata)
        self.rprovides = self.depvar('RPROVIDES', metadata)
        self.rrecommends = self.depvar('RRECOMMENDS', metadata)
        self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)
        self.rdepends_pkg = self.pkgvar('RDEPENDS', self.packages, metadata)
        self.rrecommends_pkg = self.pkgvar('RRECOMMENDS', self.packages, metadata)
        self.inherits = self.getvar('__inherit_cache', metadata)
        self.summary = self.getvar('SUMMARY', metadata)
        self.license = self.getvar('LICENSE', metadata)
        self.section = self.getvar('SECTION', metadata)
        self.fakerootenv = self.getvar('FAKEROOTENV', metadata)
        self.fakerootdirs = self.getvar('FAKEROOTDIRS', metadata)
    @classmethod
    def make_optional(cls, default=None, **kwargs):
        """Construct the namedtuple from the specified keyword arguments,
        with every value considered optional, using the default value if
        it was not specified."""
        for field in cls._fields:
            kwargs[field] = kwargs.get(field, default)
        return cls(**kwargs)

    @classmethod
    def init_cacheData(cls, cachedata):
        # CacheData in Core RecipeInfo Class
        cachedata.task_deps = {}
        cachedata.pkg_fn = {}
        cachedata.pkg_pn = defaultdict(list)
        cachedata.pkg_pepvpr = {}
        cachedata.pkg_dp = {}
    def from_metadata(cls, filename, metadata):
        if cls.getvar('__SKIPPED', metadata):
            return cls.make_optional(skipped=True)

        cachedata.stamp = {}
        cachedata.stamp_base = {}
        cachedata.stamp_extrainfo = {}
        cachedata.fn_provides = {}
        cachedata.pn_provides = defaultdict(list)
        cachedata.all_depends = []
        tasks = metadata.getVar('__BBTASKS', False)

        cachedata.deps = defaultdict(list)
        cachedata.packages = defaultdict(list)
        cachedata.providers = defaultdict(list)
        cachedata.rproviders = defaultdict(list)
        cachedata.packages_dynamic = defaultdict(list)
        pn = cls.getvar('PN', metadata)
        packages = cls.listvar('PACKAGES', metadata)
        if not pn in packages:
            packages.append(pn)

        cachedata.rundeps = defaultdict(lambda: defaultdict(list))
        cachedata.runrecs = defaultdict(lambda: defaultdict(list))
        cachedata.possible_world = []
        cachedata.universe_target = []
        cachedata.hashfn = {}
        return RecipeInfo(
            tasks = tasks,
            basetaskhashes = cls.taskvar('BB_BASEHASH', tasks, metadata),
            hashfilename = cls.getvar('BB_HASHFILENAME', metadata),

        cachedata.basetaskhash = {}
        cachedata.inherits = {}
        cachedata.summary = {}
        cachedata.license = {}
        cachedata.section = {}
        cachedata.fakerootenv = {}
        cachedata.fakerootdirs = {}

    def add_cacheData(self, cachedata, fn):
        cachedata.task_deps[fn] = self.task_deps
        cachedata.pkg_fn[fn] = self.pn
        cachedata.pkg_pn[self.pn].append(fn)
        cachedata.pkg_pepvpr[fn] = (self.pe, self.pv, self.pr)
        cachedata.pkg_dp[fn] = self.defaultpref
        cachedata.stamp[fn] = self.stamp
        cachedata.stamp_base[fn] = self.stamp_base
        cachedata.stamp_extrainfo[fn] = self.stamp_extrainfo

        provides = [self.pn]
        for provide in self.provides:
            if provide not in provides:
                provides.append(provide)
        cachedata.fn_provides[fn] = provides

        for provide in provides:
            cachedata.providers[provide].append(fn)
            if provide not in cachedata.pn_provides[self.pn]:
                cachedata.pn_provides[self.pn].append(provide)

        for dep in self.depends:
            if dep not in cachedata.deps[fn]:
                cachedata.deps[fn].append(dep)
            if dep not in cachedata.all_depends:
                cachedata.all_depends.append(dep)

        rprovides = self.rprovides
        for package in self.packages:
            cachedata.packages[package].append(fn)
            rprovides += self.rprovides_pkg[package]

        for rprovide in rprovides:
            cachedata.rproviders[rprovide].append(fn)

        for package in self.packages_dynamic:
            cachedata.packages_dynamic[package].append(fn)

        # Build hash of runtime depends and recommends
        for package in self.packages + [self.pn]:
            cachedata.rundeps[fn][package] = list(self.rdepends) + self.rdepends_pkg[package]
            cachedata.runrecs[fn][package] = list(self.rrecommends) + self.rrecommends_pkg[package]

        # Collect files we may need for possible world-dep
        # calculations
        if not self.broken and not self.not_world:
            cachedata.possible_world.append(fn)

        # create a collection of all targets for sanity checking
        # tasks, such as upstream versions, license, and tools for
        # task and image creation.
        cachedata.universe_target.append(self.pn)

        cachedata.hashfn[fn] = self.hashfilename
        for task, taskhash in self.basetaskhashes.iteritems():
            identifier = '%s.%s' % (fn, task)
            cachedata.basetaskhash[identifier] = taskhash

        cachedata.inherits[fn] = self.inherits
        cachedata.summary[fn] = self.summary
        cachedata.license[fn] = self.license
        cachedata.section[fn] = self.section
        cachedata.fakerootenv[fn] = self.fakerootenv
        cachedata.fakerootdirs[fn] = self.fakerootdirs
            file_depends = metadata.getVar('__depends', False),
            task_deps = metadata.getVar('_task_deps', False) or
                        {'tasks': [], 'parents': {}},
            variants = cls.listvar('__VARIANTS', metadata) + [''],

            skipped = False,
            timestamp = bb.parse.cached_mtime(filename),
            packages = cls.listvar('PACKAGES', metadata),
            pn = pn,
            pe = cls.getvar('PE', metadata),
            pv = cls.getvar('PV', metadata),
            pr = cls.getvar('PR', metadata),
            nocache = cls.getvar('__BB_DONT_CACHE', metadata),
            defaultpref = cls.intvar('DEFAULT_PREFERENCE', metadata),
            broken = cls.getvar('BROKEN', metadata),
            not_world = cls.getvar('EXCLUDE_FROM_WORLD', metadata),
            stamp = cls.getvar('STAMP', metadata),
            stamp_extrainfo = cls.flaglist('stamp-extra-info', tasks, metadata),
            packages_dynamic = cls.listvar('PACKAGES_DYNAMIC', metadata),
            depends = cls.depvar('DEPENDS', metadata),
            provides = cls.depvar('PROVIDES', metadata),
            rdepends = cls.depvar('RDEPENDS', metadata),
            rprovides = cls.depvar('RPROVIDES', metadata),
            rrecommends = cls.depvar('RRECOMMENDS', metadata),
            rprovides_pkg = cls.pkgvar('RPROVIDES', packages, metadata),
            rdepends_pkg = cls.pkgvar('RDEPENDS', packages, metadata),
            rrecommends_pkg = cls.pkgvar('RRECOMMENDS', packages, metadata),
            inherits = cls.getvar('__inherit_cache', metadata),
            summary = cls.getvar('SUMMARY', metadata),
            license = cls.getvar('LICENSE', metadata),
            section = cls.getvar('SECTION', metadata),
        )


class Cache(object):
@@ -251,11 +180,7 @@ class Cache(object):
    BitBake Cache implementation
    """

    def __init__(self, data, caches_array):
        # Pass caches_array information into Cache Constructor
        # It will be used later for deciding whether we
        # need extra cache file dump/load support
        self.caches_array = caches_array
    def __init__(self, data):
        self.cachedir = bb.data.getVar("CACHE", data, True)
        self.clean = set()
        self.checked = set()
@@ -271,7 +196,7 @@ class Cache(object):
            return

        self.has_cache = True
        self.cachefile = getCacheFile(self.cachedir, "bb_cache.dat")
        self.cachefile = os.path.join(self.cachedir, "bb_cache.dat")

        logger.debug(1, "Using cache in '%s'", self.cachedir)
        bb.utils.mkdirhier(self.cachedir)
@@ -285,21 +210,12 @@ class Cache(object):
                old_mtimes.append(newest_mtime)
            newest_mtime = max(old_mtimes)

        bNeedUpdate = True
        if self.caches_array:
            for cache_class in self.caches_array:
                if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
                    cachefile = getCacheFile(self.cachedir, cache_class.cachefile)
                    bNeedUpdate = bNeedUpdate and (bb.parse.cached_mtime_noerror(cachefile) >= newest_mtime)
                    cache_class.init_cacheData(self)
        if bNeedUpdate:
        if bb.parse.cached_mtime_noerror(self.cachefile) >= newest_mtime:
            self.load_cachefile()
        elif os.path.isfile(self.cachefile):
            logger.info("Out of date cache found, rebuilding...")

    def load_cachefile(self):
        # Firstly, using core cache file information for
        # valid checking
        with open(self.cachefile, "rb") as cachefile:
            pickled = pickle.Unpickler(cachefile)
            try:
@@ -316,52 +232,31 @@ class Cache(object):
                logger.info('Bitbake version mismatch, rebuilding...')
                return

            cachesize = os.fstat(cachefile.fileno()).st_size
            bb.event.fire(bb.event.CacheLoadStarted(cachesize), self.data)

        cachesize = 0
            previous_progress = 0
            previous_percent = 0
        previous_percent = 0
            while cachefile:
                try:
                    key = pickled.load()
                    value = pickled.load()
                except Exception:
                    break

        # Calculate the correct cachesize of all those cache files
        for cache_class in self.caches_array:
            if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
                cachefile = getCacheFile(self.cachedir, cache_class.cachefile)
                with open(cachefile, "rb") as cachefile:
                    cachesize += os.fstat(cachefile.fileno()).st_size
                self.depends_cache[key] = value

        bb.event.fire(bb.event.CacheLoadStarted(cachesize), self.data)

        for cache_class in self.caches_array:
            if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
                cachefile = getCacheFile(self.cachedir, cache_class.cachefile)
                with open(cachefile, "rb") as cachefile:
                    pickled = pickle.Unpickler(cachefile)
                    while cachefile:
                        try:
                            key = pickled.load()
                            value = pickled.load()
                        except Exception:
                            break
                        if self.depends_cache.has_key(key):
                            self.depends_cache[key].append(value)
                        else:
                            self.depends_cache[key] = [value]
                        # only fire events on even percentage boundaries
                        current_progress = cachefile.tell() + previous_progress
                        current_percent = 100 * current_progress / cachesize
                        if current_percent > previous_percent:
                            previous_percent = current_percent
                            bb.event.fire(bb.event.CacheLoadProgress(current_progress),
|
||||
self.data)
|
||||
# only fire events on even percentage boundaries
|
||||
current_progress = cachefile.tell()
|
||||
current_percent = 100 * current_progress / cachesize
|
||||
if current_percent > previous_percent:
|
||||
previous_percent = current_percent
|
||||
bb.event.fire(bb.event.CacheLoadProgress(current_progress),
|
||||
self.data)
|
||||
|
||||
previous_progress += current_progress
|
||||
bb.event.fire(bb.event.CacheLoadCompleted(cachesize,
|
||||
len(self.depends_cache)),
|
||||
self.data)
|
||||
|
||||
# Note: depends cache number is corresponding to the parsing file numbers.
|
||||
# The same file has several caches, still regarded as one item in the cache
|
||||
bb.event.fire(bb.event.CacheLoadCompleted(cachesize,
|
||||
len(self.depends_cache)),
|
||||
self.data)
|
||||
|
||||
|
||||
@staticmethod
|
||||
def virtualfn2realfn(virtualfn):
|
||||
"""
|
||||
@@ -395,12 +290,11 @@ class Cache(object):
|
||||
|
||||
logger.debug(1, "Parsing %s (full)", fn)
|
||||
|
||||
cfgData.setVar("__ONLYFINALISE", virtual or "default")
|
||||
bb_data = cls.load_bbfile(fn, appends, cfgData)
|
||||
return bb_data[virtual]
|
||||
|
||||
@classmethod
|
||||
def parse(cls, filename, appends, configdata, caches_array):
|
||||
def parse(cls, filename, appends, configdata):
|
||||
"""Parse the specified filename, returning the recipe information"""
|
||||
infos = []
|
||||
datastores = cls.load_bbfile(filename, appends, configdata)
|
||||
@@ -412,14 +306,8 @@ class Cache(object):
|
||||
depends |= (data.getVar("__depends", False) or set())
|
||||
if depends and not variant:
|
||||
data.setVar("__depends", depends)
|
||||
|
||||
info_array = []
|
||||
for cache_class in caches_array:
|
||||
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
|
||||
info = cache_class(filename, data)
|
||||
info_array.append(info)
|
||||
infos.append((virtualfn, info_array))
|
||||
|
||||
info = RecipeInfo.from_metadata(filename, data)
|
||||
infos.append((virtualfn, info))
|
||||
return infos
|
||||
|
||||
def load(self, filename, appends, configdata):
|
||||
@@ -430,17 +318,16 @@ class Cache(object):
|
||||
automatically add the information to the cache or to your
|
||||
CacheData. Use the add or add_info method to do so after
|
||||
running this, or use loadData instead."""
|
||||
cached = self.cacheValid(filename, appends)
|
||||
cached = self.cacheValid(filename)
|
||||
if cached:
|
||||
infos = []
|
||||
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
|
||||
info_array = self.depends_cache[filename]
|
||||
for variant in info_array[0].variants:
|
||||
info = self.depends_cache[filename]
|
||||
for variant in info.variants:
|
||||
virtualfn = self.realfn2virtual(filename, variant)
|
||||
infos.append((virtualfn, self.depends_cache[virtualfn]))
|
||||
else:
|
||||
logger.debug(1, "Parsing %s", filename)
|
||||
return self.parse(filename, appends, configdata, self.caches_array)
|
||||
return self.parse(filename, appends, configdata)
|
||||
|
||||
return cached, infos
|
||||
|
||||
@@ -451,23 +338,23 @@ class Cache(object):
|
||||
skipped, virtuals = 0, 0
|
||||
|
||||
cached, infos = self.load(fn, appends, cfgData)
|
||||
for virtualfn, info_array in infos:
|
||||
if info_array[0].skipped:
|
||||
logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
|
||||
for virtualfn, info in infos:
|
||||
if info.skipped:
|
||||
logger.debug(1, "Skipping %s", virtualfn)
|
||||
skipped += 1
|
||||
else:
|
||||
self.add_info(virtualfn, info_array, cacheData, not cached)
|
||||
self.add_info(virtualfn, info, cacheData, not cached)
|
||||
virtuals += 1
|
||||
|
||||
return cached, skipped, virtuals
|
||||
|
||||
def cacheValid(self, fn, appends):
|
||||
def cacheValid(self, fn):
|
||||
"""
|
||||
Is the cache valid for fn?
|
||||
Fast version, no timestamps checked.
|
||||
"""
|
||||
if fn not in self.checked:
|
||||
self.cacheValidUpdate(fn, appends)
|
||||
self.cacheValidUpdate(fn)
|
||||
|
||||
# Is cache enabled?
|
||||
if not self.has_cache:
|
||||
@@ -476,7 +363,7 @@ class Cache(object):
|
||||
return True
|
||||
return False
|
||||
|
||||
def cacheValidUpdate(self, fn, appends):
|
||||
def cacheValidUpdate(self, fn):
|
||||
"""
|
||||
Is the cache valid for fn?
|
||||
Make thorough (slower) checks including timestamps.
|
||||
@@ -500,15 +387,15 @@ class Cache(object):
|
||||
self.remove(fn)
|
||||
return False
|
||||
|
||||
info_array = self.depends_cache[fn]
|
||||
info = self.depends_cache[fn]
|
||||
# Check the file's timestamp
|
||||
if mtime != info_array[0].timestamp:
|
||||
if mtime != info.timestamp:
|
||||
logger.debug(2, "Cache: %s changed", fn)
|
||||
self.remove(fn)
|
||||
return False
|
||||
|
||||
# Check dependencies are still valid
|
||||
depends = info_array[0].file_depends
|
||||
depends = info.file_depends
|
||||
if depends:
|
||||
for f, old_mtime in depends:
|
||||
fmtime = bb.parse.cached_mtime_noerror(f)
|
||||
@@ -525,14 +412,8 @@ class Cache(object):
|
||||
self.remove(fn)
|
||||
return False
|
||||
|
||||
if appends != info_array[0].appends:
|
||||
logger.debug(2, "Cache: appends for %s changed", fn)
|
||||
bb.note("%s to %s" % (str(appends), str(info_array[0].appends)))
|
||||
self.remove(fn)
|
||||
return False
|
||||
|
||||
invalid = False
|
||||
for cls in info_array[0].variants:
|
||||
for cls in info.variants:
|
||||
virtualfn = self.realfn2virtual(fn, cls)
|
||||
self.clean.add(virtualfn)
|
||||
if virtualfn not in self.depends_cache:
|
||||
@@ -579,30 +460,13 @@ class Cache(object):
|
||||
logger.debug(2, "Cache is clean, not saving.")
|
||||
return
|
||||
|
||||
file_dict = {}
|
||||
pickler_dict = {}
|
||||
for cache_class in self.caches_array:
|
||||
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
|
||||
cache_class_name = cache_class.__name__
|
||||
cachefile = getCacheFile(self.cachedir, cache_class.cachefile)
|
||||
file_dict[cache_class_name] = open(cachefile, "wb")
|
||||
pickler_dict[cache_class_name] = pickle.Pickler(file_dict[cache_class_name], pickle.HIGHEST_PROTOCOL)
|
||||
|
||||
pickler_dict['CoreRecipeInfo'].dump(__cache_version__)
|
||||
pickler_dict['CoreRecipeInfo'].dump(bb.__version__)
|
||||
|
||||
try:
|
||||
for key, info_array in self.depends_cache.iteritems():
|
||||
for info in info_array:
|
||||
if isinstance(info, RecipeInfoCommon):
|
||||
cache_class_name = info.__class__.__name__
|
||||
pickler_dict[cache_class_name].dump(key)
|
||||
pickler_dict[cache_class_name].dump(info)
|
||||
finally:
|
||||
for cache_class in self.caches_array:
|
||||
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
|
||||
cache_class_name = cache_class.__name__
|
||||
file_dict[cache_class_name].close()
|
||||
with open(self.cachefile, "wb") as cachefile:
|
||||
pickler = pickle.Pickler(cachefile, pickle.HIGHEST_PROTOCOL)
|
||||
pickler.dump(__cache_version__)
|
||||
pickler.dump(bb.__version__)
|
||||
for key, value in self.depends_cache.iteritems():
|
||||
pickler.dump(key)
|
||||
pickler.dump(value)
|
||||
|
||||
del self.depends_cache
|
||||
|
||||
@@ -610,17 +474,15 @@ class Cache(object):
|
||||
def mtime(cachefile):
|
||||
return bb.parse.cached_mtime_noerror(cachefile)
|
||||
|
||||
def add_info(self, filename, info_array, cacheData, parsed=None):
|
||||
if isinstance(info_array[0], CoreRecipeInfo) and (not info_array[0].skipped):
|
||||
cacheData.add_from_recipeinfo(filename, info_array)
|
||||
|
||||
def add_info(self, filename, info, cacheData, parsed=None):
|
||||
cacheData.add_from_recipeinfo(filename, info)
|
||||
if not self.has_cache:
|
||||
return
|
||||
|
||||
if (info_array[0].skipped or 'SRCREVINACTION' not in info_array[0].pv) and not info_array[0].nocache:
|
||||
if 'SRCREVINACTION' not in info.pv and not info.nocache:
|
||||
if parsed:
|
||||
self.cacheclean = False
|
||||
self.depends_cache[filename] = info_array
|
||||
self.depends_cache[filename] = info
|
||||
|
||||
def add(self, file_name, data, cacheData, parsed=None):
|
||||
"""
|
||||
@@ -628,12 +490,8 @@ class Cache(object):
|
||||
"""
|
||||
|
||||
realfn = self.virtualfn2realfn(file_name)[0]
|
||||
|
||||
info_array = []
|
||||
for cache_class in self.caches_array:
|
||||
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
|
||||
info_array.append(cache_class(realfn, data))
|
||||
self.add_info(file_name, info_array, cacheData, parsed)
|
||||
info = RecipeInfo.from_metadata(realfn, data)
|
||||
self.add_info(file_name, info, cacheData, parsed)
|
||||
|
||||
@staticmethod
|
||||
def load_bbfile(bbfile, appends, config):
|
||||
@@ -697,23 +555,95 @@ class CacheData(object):
|
||||
The data structures we compile from the cached data
|
||||
"""
|
||||
|
||||
def __init__(self, caches_array):
|
||||
self.caches_array = caches_array
|
||||
for cache_class in self.caches_array:
|
||||
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
|
||||
cache_class.init_cacheData(self)
|
||||
|
||||
def __init__(self):
|
||||
# Direct cache variables
|
||||
self.providers = defaultdict(list)
|
||||
self.rproviders = defaultdict(list)
|
||||
self.packages = defaultdict(list)
|
||||
self.packages_dynamic = defaultdict(list)
|
||||
self.possible_world = []
|
||||
self.pkg_pn = defaultdict(list)
|
||||
self.pkg_fn = {}
|
||||
self.pkg_pepvpr = {}
|
||||
self.pkg_dp = {}
|
||||
self.pn_provides = defaultdict(list)
|
||||
self.fn_provides = {}
|
||||
self.all_depends = []
|
||||
self.deps = defaultdict(list)
|
||||
self.rundeps = defaultdict(lambda: defaultdict(list))
|
||||
self.runrecs = defaultdict(lambda: defaultdict(list))
|
||||
self.task_queues = {}
|
||||
self.task_deps = {}
|
||||
self.stamp = {}
|
||||
self.stamp_extrainfo = {}
|
||||
self.preferred = {}
|
||||
self.tasks = {}
|
||||
self.basetaskhash = {}
|
||||
self.hashfn = {}
|
||||
self.inherits = {}
|
||||
self.summary = {}
|
||||
self.license = {}
|
||||
self.section = {}
|
||||
|
||||
# Indirect Cache variables (set elsewhere)
|
||||
self.ignored_dependencies = []
|
||||
self.world_target = set()
|
||||
self.bbfile_priority = {}
|
||||
self.bbfile_config_priorities = []
|
||||
|
||||
def add_from_recipeinfo(self, fn, info_array):
|
||||
for info in info_array:
|
||||
info.add_cacheData(self, fn)
|
||||
def add_from_recipeinfo(self, fn, info):
|
||||
self.task_deps[fn] = info.task_deps
|
||||
self.pkg_fn[fn] = info.pn
|
||||
self.pkg_pn[info.pn].append(fn)
|
||||
self.pkg_pepvpr[fn] = (info.pe, info.pv, info.pr)
|
||||
self.pkg_dp[fn] = info.defaultpref
|
||||
self.stamp[fn] = info.stamp
|
||||
self.stamp_extrainfo[fn] = info.stamp_extrainfo
|
||||
|
||||
|
||||
provides = [info.pn]
|
||||
for provide in info.provides:
|
||||
if provide not in provides:
|
||||
provides.append(provide)
|
||||
self.fn_provides[fn] = provides
|
||||
|
||||
for provide in provides:
|
||||
self.providers[provide].append(fn)
|
||||
if provide not in self.pn_provides[info.pn]:
|
||||
self.pn_provides[info.pn].append(provide)
|
||||
|
||||
for dep in info.depends:
|
||||
if dep not in self.deps[fn]:
|
||||
self.deps[fn].append(dep)
|
||||
if dep not in self.all_depends:
|
||||
self.all_depends.append(dep)
|
||||
|
||||
rprovides = info.rprovides
|
||||
for package in info.packages:
|
||||
self.packages[package].append(fn)
|
||||
rprovides += info.rprovides_pkg[package]
|
||||
|
||||
for rprovide in rprovides:
|
||||
self.rproviders[rprovide].append(fn)
|
||||
|
||||
for package in info.packages_dynamic:
|
||||
self.packages_dynamic[package].append(fn)
|
||||
|
||||
# Build hash of runtime depends and rececommends
|
||||
for package in info.packages + [info.pn]:
|
||||
self.rundeps[fn][package] = list(info.rdepends) + info.rdepends_pkg[package]
|
||||
self.runrecs[fn][package] = list(info.rrecommends) + info.rrecommends_pkg[package]
|
||||
|
||||
# Collect files we may need for possible world-dep
|
||||
# calculations
|
||||
if not info.broken and not info.not_world:
|
||||
self.possible_world.append(fn)
|
||||
|
||||
self.hashfn[fn] = info.hashfilename
|
||||
for task, taskhash in info.basetaskhashes.iteritems():
|
||||
identifier = '%s.%s' % (fn, task)
|
||||
self.basetaskhash[identifier] = taskhash
|
||||
|
||||
self.inherits[fn] = info.inherits
|
||||
self.summary[fn] = info.summary
|
||||
self.license[fn] = info.license
|
||||
self.section[fn] = info.section
|
||||
|
||||
@@ -1,54 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Extra RecipeInfo will be all defined in this file. Currently,
# Only Hob (Image Creator) Requests some extra fields. So
# HobRecipeInfo is defined. It's named HobRecipeInfo because it
# is introduced by 'hob'. Users could also introduce other
# RecipeInfo or simply use those already defined RecipeInfo.
# In the following patch, this newly defined new extra RecipeInfo
# will be dynamically loaded and used for loading/saving the extra
# cache fields

# Copyright (C) 2011, Intel Corporation. All rights reserved.

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

from bb.cache import RecipeInfoCommon

class HobRecipeInfo(RecipeInfoCommon):
    __slots__ = ()

    classname = "HobRecipeInfo"
    # please override this member with the correct data cache file
    # such as (bb_cache.dat, bb_extracache_hob.dat)
    cachefile = "bb_extracache_" + classname +".dat"

    def __init__(self, filename, metadata):

        self.summary = self.getvar('SUMMARY', metadata)
        self.license = self.getvar('LICENSE', metadata)
        self.section = self.getvar('SECTION', metadata)

    @classmethod
    def init_cacheData(cls, cachedata):
        # CacheData in Hob RecipeInfo Class
        cachedata.summary = {}
        cachedata.license = {}
        cachedata.section = {}

    def add_cacheData(self, cachedata, fn):
        cachedata.summary[fn] = self.summary
        cachedata.license[fn] = self.license
        cachedata.section[fn] = self.section
@@ -21,13 +21,13 @@ def check_indent(codestr):
    """If the code is indented, add a top level piece of code to 'remove' the indentation"""

    i = 0
    while codestr[i] in ["\n", "\t", " "]:
    while codestr[i] in ["\n", "	", " "]:
        i = i + 1

    if i == 0:
        return codestr

    if codestr[i-1] == "\t" or codestr[i-1] == " ":
    if codestr[i-1] is "	" or codestr[i-1] is " ":
        return "if 1:\n" + codestr

    return codestr
@@ -70,85 +70,8 @@ def parser_cache_save(d):
    if not cachefile:
        return

    glf = bb.utils.lockfile(cachefile + ".lock", shared=True)

    i = os.getpid()
    lf = None
    while not lf:
        shellcache = {}
        pythoncache = {}

        lf = bb.utils.lockfile(cachefile + ".lock." + str(i), retry=False)
        if not lf or os.path.exists(cachefile + "-" + str(i)):
            if lf:
                bb.utils.unlockfile(lf)
                lf = None
            i = i + 1
            continue

        try:
            p = pickle.Unpickler(file(cachefile, "rb"))
            data, version = p.load()
        except (IOError, EOFError, ValueError):
            data, version = None, None

        if version != PARSERCACHE_VERSION:
            shellcache = shellparsecache
            pythoncache = pythonparsecache
        else:
            for h in pythonparsecache:
                if h not in data[0]:
                    pythoncache[h] = pythonparsecache[h]
            for h in shellparsecache:
                if h not in data[1]:
                    shellcache[h] = shellparsecache[h]

        p = pickle.Pickler(file(cachefile + "-" + str(i), "wb"), -1)
        p.dump([[pythoncache, shellcache], PARSERCACHE_VERSION])

    bb.utils.unlockfile(lf)
    bb.utils.unlockfile(glf)

def parser_cache_savemerge(d):
    cachefile = parser_cachefile(d)
    if not cachefile:
        return

    glf = bb.utils.lockfile(cachefile + ".lock")

    try:
        p = pickle.Unpickler(file(cachefile, "rb"))
        data, version = p.load()
    except (IOError, EOFError):
        data, version = None, None

    if version != PARSERCACHE_VERSION:
        data = [{}, {}]

    for f in [y for y in os.listdir(os.path.dirname(cachefile)) if y.startswith(os.path.basename(cachefile) + '-')]:
        f = os.path.join(os.path.dirname(cachefile), f)
        try:
            p = pickle.Unpickler(file(f, "rb"))
            extradata, version = p.load()
        except (IOError, EOFError):
            extradata, version = [{}, {}], None

        if version != PARSERCACHE_VERSION:
            continue

        for h in extradata[0]:
            if h not in data[0]:
                data[0][h] = extradata[0][h]
        for h in extradata[1]:
            if h not in data[1]:
                data[1][h] = extradata[1][h]
        os.unlink(f)

    p = pickle.Pickler(file(cachefile, "wb"), -1)
    p.dump([data, PARSERCACHE_VERSION])

    bb.utils.unlockfile(glf)

    p.dump([[pythonparsecache, shellparsecache], PARSERCACHE_VERSION])

class PythonParser():
class ValueVisitor():

@@ -82,7 +82,7 @@ class Command:
            if command not in CommandsAsync.__dict__:
                return "No such command"
            self.currentAsyncCommand = (command, commandline)
            self.cooker.server_registration_cb(self.cooker.runCommands, self.cooker)
            self.cooker.server.register_idle_function(self.cooker.runCommands, self.cooker)
            return True
        except:
            import traceback
@@ -224,19 +224,11 @@ class CommandsAsync:

    def generateTargetsTree(self, command, params):
        """
        Generate a tree of buildable targets.
        If klass is provided ensure all recipes that inherit the class are
        included in the package list.
        If pkg_list provided use that list (plus any extras brought in by
        klass) rather than generating a tree for all packages.
        Generate a tree of all buildable targets.
        """
        klass = params[0]
        if len(params) > 1:
            pkg_list = params[1]
        else:
            pkg_list = []

        command.cooker.generateTargetsTree(klass, pkg_list)
        command.cooker.generateTargetsTree(klass)
        command.finishAsyncCommand()
    generateTargetsTree.needcache = True

@@ -251,28 +243,6 @@ class CommandsAsync:
        command.finishAsyncCommand()
    findConfigFiles.needcache = True

    def findFilesMatchingInDir(self, command, params):
        """
        Find implementation files matching the specified pattern
        in the requested subdirectory of a BBPATH
        """
        pattern = params[0]
        directory = params[1]

        command.cooker.findFilesMatchingInDir(pattern, directory)
        command.finishAsyncCommand()
    findFilesMatchingInDir.needcache = True

    def findConfigFilePath(self, command, params):
        """
        Find the path of the requested configuration file
        """
        configfile = params[0]

        command.cooker.findConfigFilePath(configfile)
        command.finishAsyncCommand()
    findConfigFilePath.needcache = False

    def showVersions(self, command, params):
        """
        Show the currently selected versions

@@ -1,28 +0,0 @@
"""Code pulled from future python versions, here for compatibility"""

def total_ordering(cls):
    """Class decorator that fills in missing ordering methods"""
    convert = {
        '__lt__': [('__gt__', lambda self, other: other < self),
                   ('__le__', lambda self, other: not other < self),
                   ('__ge__', lambda self, other: not self < other)],
        '__le__': [('__ge__', lambda self, other: other <= self),
                   ('__lt__', lambda self, other: not other <= self),
                   ('__gt__', lambda self, other: not self <= other)],
        '__gt__': [('__lt__', lambda self, other: other > self),
                   ('__ge__', lambda self, other: not other > self),
                   ('__le__', lambda self, other: not self > other)],
        '__ge__': [('__le__', lambda self, other: other >= self),
                   ('__gt__', lambda self, other: not other >= self),
                   ('__lt__', lambda self, other: not self >= other)]
    }
    roots = set(dir(cls)) & set(convert)
    if not roots:
        raise ValueError('must define at least one ordering operation: < > <= >=')
    root = max(roots) # prefer __lt__ to __le__ to __gt__ to __ge__
    for opname, opfunc in convert[root]:
        if opname not in roots:
            opfunc.__name__ = opname
            opfunc.__doc__ = getattr(int, opname).__doc__
            setattr(cls, opname, opfunc)
    return cls
@@ -28,13 +28,12 @@ import atexit
import itertools
import logging
import multiprocessing
import signal
import sre_constants
import threading
from cStringIO import StringIO
from contextlib import closing
from functools import wraps
from collections import defaultdict
import bb, bb.exceptions
import bb
from bb import utils, data, parse, event, cache, providers, taskdata, command, runqueue

logger = logging.getLogger("BitBake")
@@ -56,20 +55,6 @@ class NothingToBuild(Exception):
class state:
    initial, parsing, running, shutdown, stop = range(5)


class SkippedPackage:
    def __init__(self, info = None, reason = None):
        self.skipreason = None
        self.provides = None
        self.rprovides = None

        if info:
            self.skipreason = info.skipreason
            self.provides = info.provides
            self.rprovides = info.rprovides
        elif reason:
            self.skipreason = reason

#============================================================================#
# BBCooker
#============================================================================#
@@ -78,65 +63,23 @@ class BBCooker:
    Manages one bitbake build run
    """

    def __init__(self, configuration, server_registration_cb):
    def __init__(self, configuration, server):
        self.status = None
        self.appendlist = {}
        self.skiplist = {}

        self.server_registration_cb = server_registration_cb
        if server:
            self.server = server.BitBakeServer(self)

        self.configuration = configuration

        self.caches_array = []
        # Currently, only Image Creator hob ui needs extra cache.
        # So, we save Extra Cache class name and container file
        # information into a extraCaches field in hob UI.
        # TODO: In future, bin/bitbake should pass information into cooker,
        # instead of getting information from configuration.ui. Also, some
        # UI start up issues need to be addressed at the same time.
        caches_name_array = ['bb.cache:CoreRecipeInfo']
        if configuration.ui:
            try:
                module = __import__('bb.ui', fromlist=[configuration.ui])
                name_array = (getattr(module, configuration.ui)).extraCaches
                for recipeInfoName in name_array:
                    caches_name_array.append(recipeInfoName)
            except ImportError as exc:
                # bb.ui.XXX is not defined and imported. It's an error!
                logger.critical("Unable to import '%s' interface from bb.ui: %s" % (configuration.ui, exc))
                sys.exit("FATAL: Failed to import '%s' interface." % configuration.ui)
            except AttributeError:
                # This is not an error. If the field is not defined in the ui,
                # this interface might need no extra cache fields, so
                # just skip this error!
                logger.debug(2, "UI '%s' does not require extra cache!" % (configuration.ui))

        # At least CoreRecipeInfo will be loaded, so caches_array will never be empty!
        # This is the entry point, no further check needed!
        for var in caches_name_array:
            try:
                module_name, cache_name = var.split(':')
                module = __import__(module_name, fromlist=(cache_name,))
                self.caches_array.append(getattr(module, cache_name))
            except ImportError as exc:
                logger.critical("Unable to import extra RecipeInfo '%s' from '%s': %s" % (cache_name, module_name, exc))
                sys.exit("FATAL: Failed to import extra cache class '%s'." % cache_name)

        self.configuration.data = bb.data.init()

        if not self.server_registration_cb:
        if not server:
            bb.data.setVar("BB_WORKERCONTEXT", "1", self.configuration.data)

        bb.data.inheritFromOS(self.configuration.data)

        try:
            self.parseConfigurationFiles(self.configuration.prefile,
                                         self.configuration.postfile)
        except SyntaxError:
            sys.exit(1)
        except Exception:
            logger.exception("Error parsing configuration files")
            sys.exit(1)
        self.parseConfigurationFiles(self.configuration.file)

        if not self.configuration.cmd:
            self.configuration.cmd = bb.data.getVar("BB_DEFAULT_TASK", self.configuration.data, True) or "build"
@@ -164,8 +107,6 @@ class BBCooker:
        self.command = bb.command.Command(self)
        self.state = state.initial

        self.parser = None

    def parseConfiguration(self):


@@ -178,39 +119,39 @@ class BBCooker:

    def parseCommandLine(self):
        # Parse any commandline into actions
        self.commandlineAction = {'action':None, 'msg':None}
        if self.configuration.show_environment:
            self.commandlineAction = None

            if 'world' in self.configuration.pkgs_to_build:
                self.commandlineAction['msg'] = "'world' is not a valid target for --environment."
            elif 'universe' in self.configuration.pkgs_to_build:
                self.commandlineAction['msg'] = "'universe' is not a valid target for --environment."
                buildlog.error("'world' is not a valid target for --environment.")
            elif len(self.configuration.pkgs_to_build) > 1:
                self.commandlineAction['msg'] = "Only one target can be used with the --environment option."
                buildlog.error("Only one target can be used with the --environment option.")
            elif self.configuration.buildfile and len(self.configuration.pkgs_to_build) > 0:
                self.commandlineAction['msg'] = "No target should be used with the --environment and --buildfile options."
                buildlog.error("No target should be used with the --environment and --buildfile options.")
            elif len(self.configuration.pkgs_to_build) > 0:
                self.commandlineAction['action'] = ["showEnvironmentTarget", self.configuration.pkgs_to_build]
                self.commandlineAction = ["showEnvironmentTarget", self.configuration.pkgs_to_build]
            else:
                self.commandlineAction['action'] = ["showEnvironment", self.configuration.buildfile]
                self.commandlineAction = ["showEnvironment", self.configuration.buildfile]
        elif self.configuration.buildfile is not None:
            self.commandlineAction['action'] = ["buildFile", self.configuration.buildfile, self.configuration.cmd]
            self.commandlineAction = ["buildFile", self.configuration.buildfile, self.configuration.cmd]
        elif self.configuration.revisions_changed:
            self.commandlineAction['action'] = ["compareRevisions"]
            self.commandlineAction = ["compareRevisions"]
        elif self.configuration.show_versions:
            self.commandlineAction['action'] = ["showVersions"]
            self.commandlineAction = ["showVersions"]
        elif self.configuration.parse_only:
            self.commandlineAction['action'] = ["parseFiles"]
            self.commandlineAction = ["parseFiles"]
        elif self.configuration.dot_graph:
            if self.configuration.pkgs_to_build:
                self.commandlineAction['action'] = ["generateDotGraph", self.configuration.pkgs_to_build, self.configuration.cmd]
                self.commandlineAction = ["generateDotGraph", self.configuration.pkgs_to_build, self.configuration.cmd]
            else:
                self.commandlineAction['msg'] = "Please specify a package name for dependency graph generation."
                self.commandlineAction = None
                buildlog.error("Please specify a package name for dependency graph generation.")
        else:
            if self.configuration.pkgs_to_build:
                self.commandlineAction['action'] = ["buildTargets", self.configuration.pkgs_to_build, self.configuration.cmd]
                self.commandlineAction = ["buildTargets", self.configuration.pkgs_to_build, self.configuration.cmd]
            else:
                #self.commandlineAction['msg'] = "Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information."
                self.commandlineAction = None
                buildlog.error("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")

    def runCommands(self, server, data, abort):
        """
@@ -280,7 +221,7 @@ class BBCooker:
        if fn:
            try:
                envdata = bb.cache.Cache.loadDataFull(fn, self.get_file_appends(fn), self.configuration.data)
            except Exception as e:
            except Exception, e:
                parselog.exception("Unable to read %s", fn)
                raise

@@ -312,9 +253,7 @@ class BBCooker:
        localdata = data.createCopy(self.configuration.data)
        bb.data.update_data(localdata)
        bb.data.expandKeys(localdata)
        # We set abort to False here to prevent unbuildable targets raising
        # an exception when we're just generating data
        taskdata = bb.taskdata.TaskData(False)
        taskdata = bb.taskdata.TaskData(self.configuration.abort)

        runlist = []
        for k in pkgs_to_build:
@@ -327,11 +266,9 @@ class BBCooker:

        return taskdata, rq

    def generateDepTreeData(self, pkgs_to_build, task, more_meta=False):
    def generateDepTreeData(self, pkgs_to_build, task):
        """
        Create a dependency tree of pkgs_to_build, returning the data.
        When more_meta is set to True include summary, license and group
        information in the returned tree.
        """
        taskdata, rq = self.prepareTreeData(pkgs_to_build, task)

@@ -351,18 +288,10 @@ class BBCooker:
            fn = taskdata.fn_index[fnid]
            pn = self.status.pkg_fn[fn]
            version = "%s:%s-%s" % self.status.pkg_pepvpr[fn]
            if more_meta:
                summary = self.status.summary[fn]
                lic = self.status.license[fn]
                section = self.status.section[fn]
            if pn not in depend_tree["pn"]:
                depend_tree["pn"][pn] = {}
                depend_tree["pn"][pn]["filename"] = fn
                depend_tree["pn"][pn]["version"] = version
                if more_meta:
                    depend_tree["pn"][pn]["summary"] = summary
                    depend_tree["pn"][pn]["license"] = lic
                    depend_tree["pn"][pn]["section"] = section
            for dep in rq.rqdata.runq_depends[task]:
                depfn = taskdata.fn_index[rq.rqdata.runq_fnid[dep]]
                deppn = self.status.pkg_fn[depfn]
@@ -472,36 +401,6 @@ class BBCooker:
        print("}", file=tdepends_file)
        logger.info("Task dependencies saved to 'task-depends.dot'")

    def calc_bbfile_priority( self, filename, matched = None ):
        for _, _, regex, pri in self.status.bbfile_config_priorities:
            if regex.match(filename):
                if matched != None:
                    if not regex in matched:
                        matched.add(regex)
                return pri
        return 0

    def show_appends_with_no_recipes( self ):
        recipes = set(os.path.basename(f)
                      for f in self.status.pkg_fn.iterkeys())
        recipes |= set(os.path.basename(f)
                       for f in self.skiplist.iterkeys())
        appended_recipes = self.appendlist.iterkeys()
        appends_without_recipes = [self.appendlist[recipe]
                                   for recipe in appended_recipes
                                   if recipe not in recipes]
        if appends_without_recipes:
            appendlines = (' %s' % append
                           for appends in appends_without_recipes
                           for append in appends)
            msg = 'No recipes available for:\n%s' % '\n'.join(appendlines)
            warn_only = data.getVar("BB_DANGLINGAPPENDS_WARNONLY", \
                self.configuration.data, False) or "no"
            if warn_only.lower() in ("1", "yes", "true"):
                bb.warn(msg)
            else:
                bb.fatal(msg)

    def buildDepgraph( self ):
        all_depends = self.status.all_depends
        pn_provides = self.status.pn_provides
@@ -510,6 +409,15 @@ class BBCooker:
        bb.data.update_data(localdata)
        bb.data.expandKeys(localdata)

        matched = set()
        def calc_bbfile_priority(filename):
            for _, _, regex, pri in self.status.bbfile_config_priorities:
                if regex.match(filename):
                    if not regex in matched:
                        matched.add(regex)
                    return pri
            return 0

        # Handle PREFERRED_PROVIDERS
        for p in (bb.data.getVar('PREFERRED_PROVIDERS', localdata, 1) or "").split():
            try:
@@ -522,60 +430,13 @@ class BBCooker:
                self.status.preferred[providee] = provider

        # Calculate priorities for each file
        matched = set()
        for p in self.status.pkg_fn:
            self.status.bbfile_priority[p] = self.calc_bbfile_priority(p, matched)

        # Don't show the warning if the BBFILE_PATTERN did match .bbappend files
        unmatched = set()
        for _, _, regex, pri in self.status.bbfile_config_priorities:
            if not regex in matched:
                unmatched.add(regex)

        def findmatch(regex):
            for bbfile in self.appendlist:
                for append in self.appendlist[bbfile]:
                    if regex.match(append):
                        return True
            return False

        for unmatch in unmatched.copy():
            if findmatch(unmatch):
                unmatched.remove(unmatch)
            self.status.bbfile_priority[p] = calc_bbfile_priority(p)

        for collection, pattern, regex, _ in self.status.bbfile_config_priorities:
            if regex in unmatched:
            if not regex in matched:
                collectlog.warn("No bb files matched BBFILE_PATTERN_%s '%s'" % (collection, pattern))

    def findConfigFilePath(self, configfile):
        path = self._findConfigFile(configfile)
        if path:
            bb.event.fire(bb.event.ConfigFilePathFound(path), self.configuration.data)

    def findFilesMatchingInDir(self, filepattern, directory):
        """
        Searches for files matching the regex 'pattern' which are children of
        'directory' in each BBPATH. i.e. to find all rootfs package classes available
        to BitBake one could call findFilesMatchingInDir(self, 'rootfs_', 'classes')
        or to find all machine configuration files one could call:
        findFilesMatchingInDir(self, 'conf/machines', 'conf')
        """
        import re

        matches = []
        p = re.compile(re.escape(filepattern))
        bbpaths = bb.data.getVar('BBPATH', self.configuration.data, True).split(':')
        for path in bbpaths:
            dirpath = os.path.join(path, directory)
            if os.path.exists(dirpath):
                for root, dirs, files in os.walk(dirpath):
                    for f in files:
                        if p.search(f):
                            matches.append(f)

        if matches:
            bb.event.fire(bb.event.FilesMatchingFound(filepattern, matches), self.configuration.data)

    def findConfigFiles(self, varname):
        """
        Find config files which are appropriate values for varname.
@@ -597,8 +458,7 @@ class BBCooker:
                if end == 'conf':
                    possible.append(val)

        if possible:
            bb.event.fire(bb.event.ConfigFilesFound(var, possible), self.configuration.data)
        bb.event.fire(bb.event.ConfigFilesFound(var, possible), self.configuration.data)

    def findInheritsClass(self, klass):
        """
@@ -613,14 +473,54 @@ class BBCooker:

        return pkg_list

    def generateTargetsTree(self, klass=None, pkgs=[]):
    def generateTargetsTreeData(self, pkgs_to_build, task):
        """
        Create a tree of pkgs_to_build metadata, returning the data.
        """
        taskdata, rq = self.prepareTreeData(pkgs_to_build, task)

        seen_fnids = []
        target_tree = {}
        target_tree["depends"] = {}
        target_tree["pn"] = {}
        target_tree["rdepends-pn"] = {}

        for task in xrange(len(rq.rqdata.runq_fnid)):
            taskname = rq.rqdata.runq_task[task]
            fnid = rq.rqdata.runq_fnid[task]
            fn = taskdata.fn_index[fnid]
            pn = self.status.pkg_fn[fn]
            version = "%s:%s-%s" % self.status.pkg_pepvpr[fn]
            summary = self.status.summary[fn]
            license = self.status.license[fn]
            section = self.status.section[fn]
            if pn not in target_tree["pn"]:
                target_tree["pn"][pn] = {}
                target_tree["pn"][pn]["filename"] = fn
                target_tree["pn"][pn]["version"] = version
                target_tree["pn"][pn]["summary"] = summary
                target_tree["pn"][pn]["license"] = license
                target_tree["pn"][pn]["section"] = section
            if fnid not in seen_fnids:
                seen_fnids.append(fnid)
                packages = []

                target_tree["depends"][pn] = []
                for dep in taskdata.depids[fnid]:
                    target_tree["depends"][pn].append(taskdata.build_names_index[dep])

                target_tree["rdepends-pn"][pn] = []
                for rdep in taskdata.rdepids[fnid]:
                    target_tree["rdepends-pn"][pn].append(taskdata.run_names_index[rdep])

        return target_tree

    def generateTargetsTree(self, klass):
        """
        Generate a dependency tree of buildable targets
        Generate an event with the result
        """
        # if the caller hasn't specified a pkgs list default to universe
        if not len(pkgs):
            pkgs = ['universe']
        pkgs = ['world']
        # if inherited_class passed ensure all recipes which inherit the
        # specified class are included in pkgs
        if klass:
@@ -628,7 +528,7 @@ class BBCooker:
            pkgs = pkgs + extra_pkgs

        # generate a dependency tree for all our packages
        tree = self.generateDepTreeData(pkgs, 'build', more_meta=True)
        tree = self.generateTargetsTreeData(pkgs, 'build')
        bb.event.fire(bb.event.TargetsTreeGenerated(tree), self.configuration.data)

    def buildWorldTargetList(self):
@@ -665,25 +565,26 @@ class BBCooker:
        else:
            shell.start( self )

    def _findConfigFile(self, configfile):
    def _findLayerConf(self):
        path = os.getcwd()
        while path != "/":
            confpath = os.path.join(path, "conf", configfile)
            if os.path.exists(confpath):
                return confpath
            bblayers = os.path.join(path, "conf", "bblayers.conf")
            if os.path.exists(bblayers):
                return bblayers

            path, _ = os.path.split(path)
        return None

    def _findLayerConf(self):
        return self._findConfigFile("bblayers.conf")
    def parseConfigurationFiles(self, files):
        def _parse(f, data, include=False):
            try:
                return bb.parse.handle(f, data, include)
            except (IOError, bb.parse.ParseError) as exc:
                parselog.critical("Unable to parse %s: %s" % (f, exc))
                sys.exit(1)

    def parseConfigurationFiles(self, prefiles, postfiles):
        data = self.configuration.data
        bb.parse.init_parser(data)

        # Parse files for loading *before* bitbake.conf and any includes
        for f in prefiles:
        for f in files:
            data = _parse(f, data)

        layerconf = self._findLayerConf()
@@ -707,112 +608,47 @@ class BBCooker:
|
||||
|
||||
data = _parse(os.path.join("conf", "bitbake.conf"), data)
|
||||
|
||||
# Parse files for loading *after* bitbake.conf and any includes
|
||||
for p in postfiles:
|
||||
data = _parse(p, data)
|
||||
self.configuration.data = data
|
||||
|
||||
# Handle any INHERITs and inherit the base class
|
||||
bbclasses = ["base"] + (data.getVar('INHERIT', True) or "").split()
|
||||
for bbclass in bbclasses:
|
||||
data = _inherit(bbclass, data)
|
||||
inherits = ["base"] + (bb.data.getVar('INHERIT', self.configuration.data, True ) or "").split()
|
||||
for inherit in inherits:
|
||||
self.configuration.data = _parse(os.path.join('classes', '%s.bbclass' % inherit), self.configuration.data, True )
|
||||
|
||||
# Nomally we only register event handlers at the end of parsing .bb files
|
||||
# We register any handlers we've found so far here...
|
||||
for var in bb.data.getVar('__BBHANDLERS', data) or []:
|
||||
bb.event.register(var, bb.data.getVar(var, data))
|
||||
for var in bb.data.getVar('__BBHANDLERS', self.configuration.data) or []:
|
||||
bb.event.register(var, bb.data.getVar(var, self.configuration.data))
|
||||
|
||||
if data.getVar("BB_WORKERCONTEXT", False) is None:
|
||||
bb.fetch.fetcher_init(data)
|
||||
bb.codeparser.parser_cache_init(data)
|
||||
if bb.data.getVar("BB_WORKERCONTEXT", self.configuration.data) is None:
|
||||
bb.fetch.fetcher_init(self.configuration.data)
|
||||
bb.codeparser.parser_cache_init(self.configuration.data)
|
||||
bb.parse.init_parser(data)
|
||||
bb.event.fire(bb.event.ConfigParsed(), data)
|
||||
self.configuration.data = data
|
||||
bb.event.fire(bb.event.ConfigParsed(), self.configuration.data)
|
||||
|
||||
def handleCollections( self, collections ):
|
||||
"""Handle collections"""
|
||||
self.status.bbfile_config_priorities = []
|
||||
if collections:
|
||||
collection_priorities = {}
|
||||
collection_depends = {}
|
||||
collection_list = collections.split()
|
||||
min_prio = 0
|
||||
for c in collection_list:
|
||||
# Get collection priority if defined explicitly
|
||||
priority = bb.data.getVar("BBFILE_PRIORITY_%s" % c, self.configuration.data, 1)
|
||||
if priority:
|
||||
try:
|
||||
prio = int(priority)
|
||||
except ValueError:
|
||||
parselog.error("invalid value for BBFILE_PRIORITY_%s: \"%s\"", c, priority)
|
||||
if min_prio == 0 or prio < min_prio:
|
||||
min_prio = prio
|
||||
collection_priorities[c] = prio
|
||||
else:
|
||||
collection_priorities[c] = None
|
||||
|
||||
# Check dependencies and store information for priority calculation
|
||||
deps = bb.data.getVar("LAYERDEPENDS_%s" % c, self.configuration.data, 1)
|
||||
if deps:
|
||||
depnamelist = []
|
||||
deplist = deps.split()
|
||||
for dep in deplist:
|
||||
depsplit = dep.split(':')
|
||||
if len(depsplit) > 1:
|
||||
try:
|
||||
depver = int(depsplit[1])
|
||||
except ValueError:
|
||||
parselog.error("invalid version value in LAYERDEPENDS_%s: \"%s\"", c, dep)
|
||||
continue
|
||||
else:
|
||||
depver = None
|
||||
dep = depsplit[0]
|
||||
depnamelist.append(dep)
|
||||
|
||||
if dep in collection_list:
|
||||
if depver:
|
||||
layerver = bb.data.getVar("LAYERVERSION_%s" % dep, self.configuration.data, 1)
|
||||
if layerver:
|
||||
try:
|
||||
lver = int(layerver)
|
||||
except ValueError:
|
||||
parselog.error("invalid value for LAYERVERSION_%s: \"%s\"", c, layerver)
|
||||
continue
|
||||
if lver <> depver:
|
||||
parselog.error("Layer dependency %s of layer %s is at version %d, expected %d", dep, c, lver, depver)
|
||||
else:
|
||||
parselog.error("Layer dependency %s of layer %s has no version, expected %d", dep, c, depver)
|
||||
else:
|
||||
parselog.error("Layer dependency %s of layer %s not found", dep, c)
|
||||
collection_depends[c] = depnamelist
|
||||
else:
|
||||
collection_depends[c] = []
|
||||
|
||||
# Recursively work out collection priorities based on dependencies
|
||||
def calc_layer_priority(collection):
|
||||
if not collection_priorities[collection]:
|
||||
max_depprio = min_prio
|
||||
for dep in collection_depends[collection]:
|
||||
calc_layer_priority(dep)
|
||||
depprio = collection_priorities[dep]
|
||||
if depprio > max_depprio:
|
||||
max_depprio = depprio
|
||||
max_depprio += 1
|
||||
parselog.debug(1, "Calculated priority of layer %s as %d", collection, max_depprio)
|
||||
collection_priorities[collection] = max_depprio
|
||||
|
||||
# Calculate all layer priorities using calc_layer_priority and store in bbfile_config_priorities
|
||||
for c in collection_list:
|
||||
calc_layer_priority(c)
|
||||
regex = bb.data.getVar("BBFILE_PATTERN_%s" % c, self.configuration.data, 1)
|
||||
if regex == None:
|
||||
parselog.error("BBFILE_PATTERN_%s not defined" % c)
|
||||
continue
|
||||
priority = bb.data.getVar("BBFILE_PRIORITY_%s" % c, self.configuration.data, 1)
|
||||
if priority == None:
|
||||
parselog.error("BBFILE_PRIORITY_%s not defined" % c)
|
||||
continue
|
||||
try:
|
||||
cre = re.compile(regex)
|
||||
except re.error:
|
||||
parselog.error("BBFILE_PATTERN_%s \"%s\" is not a valid regular expression", c, regex)
|
||||
continue
|
||||
self.status.bbfile_config_priorities.append((c, regex, cre, collection_priorities[c]))
|
||||
try:
|
||||
pri = int(priority)
|
||||
self.status.bbfile_config_priorities.append((c, regex, cre, pri))
|
||||
except ValueError:
|
||||
parselog.error("invalid value for BBFILE_PRIORITY_%s: \"%s\"", c, priority)
|
||||
|
||||
def buildSetVars(self):
|
||||
"""
|
||||
@@ -822,22 +658,22 @@ class BBCooker:
|
||||
bb.data.setVar("BUILDNAME", time.strftime('%Y%m%d%H%M'), self.configuration.data)
|
||||
bb.data.setVar("BUILDSTART", time.strftime('%m/%d/%Y %H:%M:%S', time.gmtime()), self.configuration.data)
|
||||
|
||||
def matchFiles(self, bf):
|
||||
def matchFiles(self, buildfile):
|
||||
"""
|
||||
Find the .bb files which match the expression in 'buildfile'.
|
||||
"""
|
||||
|
||||
if bf.startswith("/") or bf.startswith("../"):
|
||||
bf = os.path.abspath(bf)
|
||||
bf = os.path.abspath(buildfile)
|
||||
filelist, masked = self.collect_bbfiles()
|
||||
try:
|
||||
os.stat(bf)
|
||||
return [bf]
|
||||
except OSError:
|
||||
regexp = re.compile(bf)
|
||||
regexp = re.compile(buildfile)
|
||||
matches = []
|
||||
for f in filelist:
|
||||
if regexp.search(f) and os.path.isfile(f):
|
||||
bf = f
|
||||
matches.append(f)
|
||||
return matches
|
||||
|
||||
@@ -862,33 +698,28 @@ class BBCooker:
|
||||
# Parse the configuration here. We need to do it explicitly here since
|
||||
# buildFile() doesn't use the cache
|
||||
self.parseConfiguration()
|
||||
self.status = bb.cache.CacheData(self.caches_array)
|
||||
self.handleCollections( bb.data.getVar("BBFILE_COLLECTIONS", self.configuration.data, 1) )
|
||||
|
||||
# If we are told to do the None task then query the default task
|
||||
if (task == None):
|
||||
task = self.configuration.cmd
|
||||
|
||||
fn, cls = bb.cache.Cache.virtualfn2realfn(buildfile)
|
||||
fn = self.matchFile(fn)
|
||||
(fn, cls) = bb.cache.Cache.virtualfn2realfn(buildfile)
|
||||
buildfile = self.matchFile(fn)
|
||||
fn = bb.cache.Cache.realfn2virtual(buildfile, cls)
|
||||
|
||||
self.buildSetVars()
|
||||
|
||||
self.status = bb.cache.CacheData(self.caches_array)
|
||||
self.status = bb.cache.CacheData()
|
||||
infos = bb.cache.Cache.parse(fn, self.get_file_appends(fn), \
|
||||
self.configuration.data,
|
||||
self.caches_array)
|
||||
infos = dict(infos)
|
||||
|
||||
fn = bb.cache.Cache.realfn2virtual(fn, cls)
|
||||
try:
|
||||
info_array = infos[fn]
|
||||
except KeyError:
|
||||
bb.fatal("%s does not exist" % fn)
|
||||
self.status.add_from_recipeinfo(fn, info_array)
|
||||
self.configuration.data)
|
||||
maininfo = None
|
||||
for vfn, info in infos:
|
||||
self.status.add_from_recipeinfo(vfn, info)
|
||||
if vfn == fn:
|
||||
maininfo = info
|
||||
|
||||
# Tweak some variables
|
||||
item = info_array[0].pn
|
||||
item = maininfo.pn
|
||||
self.status.ignored_dependencies = set()
|
||||
self.status.bbfile_priority[fn] = 1
|
||||
|
||||
@@ -910,6 +741,9 @@ class BBCooker:
|
||||
buildname = bb.data.getVar("BUILDNAME", self.configuration.data)
|
||||
bb.event.fire(bb.event.BuildStarted(buildname, [item]), self.configuration.event_data)
|
||||
|
||||
# Clear locks
|
||||
bb.fetch.persistent_database_connection = {}
|
||||
|
||||
# Execute the runqueue
|
||||
runlist = [[item, "do_%s" % task]]
|
||||
|
||||
@@ -929,10 +763,6 @@ class BBCooker:
|
||||
buildlog.error("'%s' failed" % taskdata.fn_index[fnid])
|
||||
failures += len(exc.args)
|
||||
retval = False
|
||||
except SystemExit as exc:
|
||||
self.command.finishAsyncCommand()
|
||||
return False
|
||||
|
||||
if not retval:
|
||||
bb.event.fire(bb.event.BuildCompleted(buildname, item, failures), self.configuration.event_data)
|
||||
self.command.finishAsyncCommand()
|
||||
@@ -941,7 +771,7 @@ class BBCooker:
|
||||
return True
|
||||
return retval
|
||||
|
||||
self.server_registration_cb(buildFileIdle, rq)
|
||||
self.server.register_idle_function(buildFileIdle, rq)
|
||||
|
||||
def buildTargets(self, targets, task):
|
||||
"""
|
||||
@@ -970,10 +800,6 @@ class BBCooker:
|
||||
buildlog.error("'%s' failed" % taskdata.fn_index[fnid])
|
||||
failures += len(exc.args)
|
||||
retval = False
|
||||
except SystemExit as exc:
|
||||
self.command.finishAsyncCommand()
|
||||
return False
|
||||
|
||||
if not retval:
|
||||
bb.event.fire(bb.event.BuildCompleted(buildname, targets, failures), self.configuration.event_data)
|
||||
self.command.finishAsyncCommand()
|
||||
@@ -999,22 +825,34 @@ class BBCooker:
|
||||
runlist.append([k, "do_%s" % task])
|
||||
taskdata.add_unresolved(localdata, self.status)
|
||||
|
||||
# Clear locks
|
||||
bb.fetch.persistent_database_connection = {}
|
||||
|
||||
rq = bb.runqueue.RunQueue(self, self.configuration.data, self.status, taskdata, runlist)
|
||||
|
||||
self.server_registration_cb(buildTargetsIdle, rq)
|
||||
self.server.register_idle_function(buildTargetsIdle, rq)
|
||||
|
||||
def updateCache(self):
|
||||
if self.state == state.running:
|
||||
return
|
||||
|
||||
if self.state in (state.shutdown, state.stop):
|
||||
self.parser.shutdown(clean=False)
|
||||
sys.exit(1)
|
||||
|
||||
if self.state != state.parsing:
|
||||
self.parseConfiguration ()
|
||||
|
||||
self.status = bb.cache.CacheData(self.caches_array)
|
||||
# Import Psyco if available and not disabled
|
||||
import platform
|
||||
if platform.machine() in ['i386', 'i486', 'i586', 'i686']:
|
||||
if not self.configuration.disable_psyco:
|
||||
try:
|
||||
import psyco
|
||||
except ImportError:
|
||||
collectlog.info("Psyco JIT Compiler (http://psyco.sf.net) not available. Install it to increase performance.")
|
||||
else:
|
||||
psyco.bind( CookerParser.parse_next )
|
||||
else:
|
||||
collectlog.info("You have disabled Psyco. This decreases performance.")
|
||||
|
||||
self.status = bb.cache.CacheData()
|
||||
|
||||
ignore = bb.data.getVar("ASSUME_PROVIDED", self.configuration.data, 1) or ""
|
||||
self.status.ignored_dependencies = set(ignore.split())
|
||||
@@ -1032,7 +870,6 @@ class BBCooker:
|
||||
|
||||
if not self.parser.parse_next():
|
||||
collectlog.debug(1, "parsing complete")
|
||||
self.show_appends_with_no_recipes()
|
||||
self.buildDepgraph()
|
||||
self.state = state.running
|
||||
return None
|
||||
@@ -1050,12 +887,6 @@ class BBCooker:
|
||||
for t in self.status.world_target:
|
||||
pkgs_to_build.append(t)
|
||||
|
||||
if 'universe' in pkgs_to_build:
|
||||
parselog.debug(1, "collating packages for \"universe\"")
|
||||
pkgs_to_build.remove('universe')
|
||||
for t in self.status.universe_target:
|
||||
pkgs_to_build.append(t)
|
||||
|
||||
return pkgs_to_build
|
||||
|
||||
def get_bbfiles( self, path = os.getcwd() ):
|
||||
@@ -1090,9 +921,6 @@ class BBCooker:
|
||||
files = (data.getVar( "BBFILES", self.configuration.data, 1 ) or "").split()
|
||||
data.setVar("BBFILES", " ".join(files), self.configuration.data)
|
||||
|
||||
# Sort files by priority
|
||||
files.sort( key=lambda fileitem: self.calc_bbfile_priority(fileitem) )
|
||||
|
||||
if not len(files):
|
||||
files = self.get_bbfiles()
|
||||
|
||||
@@ -1100,21 +928,16 @@ class BBCooker:
|
||||
collectlog.error("no recipe files to build, check your BBPATH and BBFILES?")
|
||||
bb.event.fire(CookerExit(), self.configuration.event_data)
|
||||
|
||||
# Can't use set here as order is important
|
||||
newfiles = []
|
||||
newfiles = set()
|
||||
for f in files:
|
||||
if os.path.isdir(f):
|
||||
dirfiles = self.find_bbfiles(f)
|
||||
for g in dirfiles:
|
||||
if g not in newfiles:
|
||||
newfiles.append(g)
|
||||
newfiles.update(dirfiles)
|
||||
else:
|
||||
globbed = glob.glob(f)
|
||||
if not globbed and os.path.exists(f):
|
||||
globbed = [f]
|
||||
for g in globbed:
|
||||
if g not in newfiles:
|
||||
newfiles.append(g)
|
||||
newfiles.update(globbed)
|
||||
|
||||
bbmask = bb.data.getVar('BBMASK', self.configuration.data, 1)
|
||||
|
||||
@@ -1146,18 +969,6 @@ class BBCooker:
self.appendlist[base] = []
self.appendlist[base].append(f)

# Find overlayed recipes
# bbfiles will be in priority order which makes this easy
bbfile_seen = dict()
self.overlayed = defaultdict(list)
for f in reversed(bbfiles):
base = os.path.basename(f)
if base not in bbfile_seen:
bbfile_seen[base] = f
else:
topfile = bbfile_seen[base]
self.overlayed[topfile].append(f)

return (bbfiles, masked)

def get_file_appends(self, fn):
@@ -1234,45 +1045,24 @@ class CookerExit(bb.event.Event):
def __init__(self):
bb.event.Event.__init__(self)

def catch_parse_error(func):
"""Exception handling bits for our parsing"""
@wraps(func)
def wrapped(fn, *args):
try:
return func(fn, *args)
except (IOError, bb.parse.ParseError, bb.data_smart.ExpansionError) as exc:
parselog.critical("Unable to parse %s: %s" % (fn, exc))
sys.exit(1)
return wrapped

@catch_parse_error
def _parse(fn, data, include=False):
return bb.parse.handle(fn, data, include)

@catch_parse_error
def _inherit(bbclass, data):
bb.parse.BBHandler.inherit([bbclass], data)
return data

class ParsingFailure(Exception):
def __init__(self, realexception, recipe):
self.realexception = realexception
self.recipe = recipe
Exception.__init__(self, realexception, recipe)
Exception.__init__(self, "Failure when parsing %s" % recipe)
self.args = (realexception, recipe)

def parse_file(task):
filename, appends, caches_array = task
filename, appends = task
try:
return True, bb.cache.Cache.parse(filename, appends, parse_file.cfg, caches_array)
except Exception as exc:
tb = sys.exc_info()[2]
return True, bb.cache.Cache.parse(filename, appends, parse_file.cfg)
except Exception, exc:
exc.recipe = filename
exc.traceback = list(bb.exceptions.extract_traceback(tb, context=3))
raise exc
# Need to turn BaseExceptions into Exceptions here so we gracefully shutdown
# and for example a worker thread doesn't just exit on its own in response to
# a SystemExit event for example.
except BaseException as exc:
except BaseException, exc:
raise ParsingFailure(exc, filename)

class CookerParser(object):
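The ParsingFailure and parse_file changes above are driven by one constraint of pool-based parsing: whatever a worker raises must pickle cleanly back to the parent, and BaseExceptions such as SystemExit must be converted so they cannot silently kill a worker. A hedged sketch of that wrapping pattern; the names here are illustrative, not BitBake's:

    class WorkerFailure(Exception):
        """Picklable wrapper carrying the real exception and the input."""
        def __init__(self, realexception, filename):
            self.realexception = realexception
            self.filename = filename
            Exception.__init__(self, realexception, filename)

    def run_task(func, filename):
        try:
            return func(filename)
        except Exception as exc:
            exc.recipe = filename       # annotate for better error reports
            raise
        except BaseException as exc:
            # SystemExit/KeyboardInterrupt become plain Exceptions so the
            # pool can shut down gracefully instead of losing the worker.
            raise WorkerFailure(exc, filename)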
@@ -1295,13 +1085,13 @@ class CookerParser(object):
self.num_processes = int(self.cfgdata.getVar("BB_NUMBER_PARSE_THREADS", True) or
multiprocessing.cpu_count())

self.bb_cache = bb.cache.Cache(self.cfgdata, cooker.caches_array)
self.bb_cache = bb.cache.Cache(self.cfgdata)
self.fromcache = []
self.willparse = []
for filename in self.filelist:
appends = self.cooker.get_file_appends(filename)
if not self.bb_cache.cacheValid(filename, appends):
self.willparse.append((filename, appends, cooker.caches_array))
if not self.bb_cache.cacheValid(filename):
self.willparse.append((filename, appends))
else:
self.fromcache.append((filename, appends))
self.toparse = self.total - len(self.fromcache)
@@ -1311,24 +1101,18 @@ class CookerParser(object):

def start(self):
def init(cfg):
signal.signal(signal.SIGINT, signal.SIG_IGN)
parse_file.cfg = cfg
multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, args=(self.cooker.configuration.data, ), exitpriority=1)

self.results = self.load_cached()
bb.event.fire(bb.event.ParseStarted(self.toparse), self.cfgdata)

if self.toparse:
bb.event.fire(bb.event.ParseStarted(self.toparse), self.cfgdata)
self.pool = multiprocessing.Pool(self.num_processes, init, [self.cfgdata])
parsed = self.pool.imap(parse_file, self.willparse)
self.pool.close()

self.pool = multiprocessing.Pool(self.num_processes, init, [self.cfgdata])
parsed = self.pool.imap(parse_file, self.willparse)
self.pool.close()

self.results = itertools.chain(self.results, parsed)
self.results = itertools.chain(self.load_cached(), parsed)

def shutdown(self, clean=True):
if not self.toparse:
return

if clean:
event = bb.event.ParseCompleted(self.cached, self.parsed,
self.skipped, self.masked,
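The start() hunk shows the shape of the parallel parse on the newer side: a Pool whose initializer ignores SIGINT (the parent handles Ctrl-C) and stashes shared configuration on the worker function, with imap streaming results back and chained behind the cache hits. A self-contained sketch of the same shape, standard library only:

    import itertools
    import multiprocessing
    import signal

    def parse_one(filename):
        # Reads the shared state installed by the initializer below.
        return (filename, len(parse_one.cfg))

    def init_worker(cfg):
        signal.signal(signal.SIGINT, signal.SIG_IGN)   # parent owns Ctrl-C
        parse_one.cfg = cfg                            # per-process config

    if __name__ == '__main__':
        cfg = {"BBPATH": "/tmp"}
        pool = multiprocessing.Pool(4, init_worker, [cfg])
        parsed = pool.imap(parse_one, ["a.bb", "b.bb", "c.bb"])  # lazy stream
        pool.close()
        # Cached results and freshly parsed ones form a single stream.
        for name, n in itertools.chain([("cached.bb", 0)], parsed):
            print(name, n)
        pool.join()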
@@ -1341,8 +1125,11 @@ class CookerParser(object):

sync = threading.Thread(target=self.bb_cache.sync)
sync.start()
multiprocessing.util.Finalize(None, sync.join, exitpriority=-100)
bb.codeparser.parser_cache_savemerge(self.cooker.configuration.data)
atexit.register(lambda: sync.join())

codesync = threading.Thread(target=bb.codeparser.parser_cache_save(self.cooker.configuration.data))
codesync.start()
atexit.register(lambda: codesync.join())

def load_cached(self):
for filename, appends in self.fromcache:
@@ -1355,21 +1142,12 @@ class CookerParser(object):
except StopIteration:
self.shutdown()
return False
except ParsingFailure as exc:
except KeyboardInterrupt:
self.shutdown(clean=False)
bb.fatal('Unable to parse %s: %s' %
(exc.recipe, bb.exceptions.to_string(exc.realexception)))
except (bb.parse.ParseError, bb.data_smart.ExpansionError) as exc:
bb.fatal(str(exc))
except SyntaxError as exc:
logger.error('Unable to parse %s', exc.recipe)
sys.exit(1)
raise
except Exception as exc:
etype, value, tb = sys.exc_info()
logger.error('Unable to parse %s', value.recipe,
exc_info=(etype, value, exc.traceback))
self.shutdown(clean=False)
sys.exit(1)
bb.fatal('Error parsing %s: %s' % (exc.recipe, exc))

self.current += 1
self.virtuals += len(result)
@@ -1381,17 +1159,17 @@ class CookerParser(object):
else:
self.cached += 1

for virtualfn, info_array in result:
if info_array[0].skipped:
for virtualfn, info in result:
if info.skipped:
self.skipped += 1
self.cooker.skiplist[virtualfn] = SkippedPackage(info_array[0])
self.bb_cache.add_info(virtualfn, info_array, self.cooker.status,
else:
self.bb_cache.add_info(virtualfn, info, self.cooker.status,
parsed=parsed)
return True

def reparse(self, filename):
infos = self.bb_cache.parse(filename,
self.cooker.get_file_appends(filename),
self.cfgdata, self.cooker.caches_array)
for vfn, info_array in infos:
self.cooker.status.add_from_recipeinfo(vfn, info_array)
self.cfgdata)
for vfn, info in infos:
self.cooker.status.add_from_recipeinfo(vfn, info)

@@ -187,7 +187,7 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
val = getVar(var, d, 1)
except (KeyboardInterrupt, bb.build.FuncFailed):
raise
except Exception as exc:
except Exception, exc:
o.write('# expansion of %s threw %s: %s\n' % (var, exc.__class__.__name__, str(exc)))
return 0

@@ -234,20 +234,25 @@ def emit_env(o=sys.__stdout__, d = init(), all=False):
for key in keys:
emit_var(key, o, d, all and not isfunc) and o.write('\n')

def exported_keys(d):
return (key for key in d.keys() if not key.startswith('__') and
d.getVarFlag(key, 'export') and
not d.getVarFlag(key, 'unexport'))

def exported_vars(d):
for key in exported_keys(d):
def export_vars(d):
keys = (key for key in d.keys() if d.getVarFlag(key, "export"))
ret = {}
for k in keys:
try:
value = d.getVar(key, True)
except Exception:
v = d.getVar(k, True)
if v:
ret[k] = v
except (KeyboardInterrupt, bb.build.FuncFailed):
raise
except Exception, exc:
pass
return ret

if value is not None:
yield key, str(value)
def export_envvars(v, d):
for s in os.environ.keys():
if s not in v:
v[s] = os.environ[s]
return v

def emit_func(func, o=sys.__stdout__, d = init()):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""

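The emit_env hunk swaps an eager export_vars() dict, which silently swallowed failures, for exported_keys()/exported_vars() generators that also honour an 'unexport' flag. The generator style over a toy datastore (the flag names follow the hunk; the datastore class is a stand-in, not DataSmart):

    class ToyData(object):
        """Stand-in datastore: {name: (value, flags_dict)}."""
        def __init__(self, entries):
            self.entries = entries
        def keys(self):
            return self.entries.keys()
        def getVarFlag(self, key, flag):
            return self.entries[key][1].get(flag)
        def getVar(self, key, expand=True):
            return self.entries[key][0]

    def exported_keys(d):
        return (key for key in d.keys()
                if not key.startswith('__')
                and d.getVarFlag(key, 'export')
                and not d.getVarFlag(key, 'unexport'))

    def exported_vars(d):
        for key in exported_keys(d):
            value = d.getVar(key, True)
            if value is not None:
                yield key, str(value)

    d = ToyData({'PATH': ('/usr/bin', {'export': 1}),
                 'SECRET': ('x', {'export': 1, 'unexport': 1})})
    print(dict(exported_vars(d)))    # {'PATH': '/usr/bin'}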
@@ -172,12 +172,11 @@ class DataSmart(MutableMapping):
if o not in self._seen_overrides:
continue

vars = self._seen_overrides[o].copy()
vars = self._seen_overrides[o]
for var in vars:
name = var[:-l]
try:
self.setVar(name, self.getVar(var, False))
self.delVar(var)
except Exception:
logger.info("Untracked delVar")

@@ -192,11 +191,11 @@ class DataSmart(MutableMapping):
keep.append((a ,o))
continue

if op == "_append":
if op is "_append":
sval = self.getVar(append, False) or ""
sval += a
self.setVar(append, sval)
elif op == "_prepend":
elif op is "_prepend":
sval = a + (self.getVar(append, False) or "")
self.setVar(append, sval)

@@ -259,16 +258,19 @@ class DataSmart(MutableMapping):
# more cookies for the cookie monster
if '_' in var:
override = var[var.rfind('_')+1:]
if len(override) > 0:
if override not in self._seen_overrides:
self._seen_overrides[override] = set()
self._seen_overrides[override].add( var )
if override not in self._seen_overrides:
self._seen_overrides[override] = set()
self._seen_overrides[override].add( var )

# setting var
self.dict[var]["content"] = value

def getVar(self, var, expand=False, noweakdefault=False):
return self.getVarFlag(var, "content", expand, noweakdefault)
def getVar(self, var, exp):
value = self.getVarFlag(var, "content")

if exp and value:
return self.expand(value, var)
return value

def renameVar(self, key, newkey):
"""
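The getVar/getVarFlag hunks add a noweakdefault switch: a weak assignment stores its value under a 'defaultval' flag, and a 'content' lookup falls back to it unless the caller opts out. A toy illustration of the lookup rule, independent of the real DataSmart internals:

    def get_var_flag(store, var, flag, noweakdefault=False):
        """store maps var -> flags dict; 'content' may fall back to 'defaultval'."""
        local_var = store.get(var)
        if not local_var:
            return None
        if flag in local_var:
            return local_var[flag]
        # Weak default: consulted only for 'content', and only when allowed.
        if flag == "content" and "defaultval" in local_var and not noweakdefault:
            return local_var["defaultval"]
        return None

    store = {"FOO": {"defaultval": "weak"}}      # as if set with FOO ??= "weak"
    print(get_var_flag(store, "FOO", "content"))                      # weak
    print(get_var_flag(store, "FOO", "content", noweakdefault=True))  # None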
@@ -296,23 +298,19 @@ class DataSmart(MutableMapping):
def delVar(self, var):
self.expand_cache = {}
self.dict[var] = {}
if '_' in var:
override = var[var.rfind('_')+1:]
if override and override in self._seen_overrides and var in self._seen_overrides[override]:
self._seen_overrides[override].remove(var)

def setVarFlag(self, var, flag, flagvalue):
if not var in self.dict:
self._makeShadowCopy(var)
self.dict[var][flag] = flagvalue

def getVarFlag(self, var, flag, expand=False, noweakdefault=False):
def getVarFlag(self, var, flag, expand=False):
local_var = self._findVar(var)
value = None
if local_var:
if flag in local_var:
value = copy.copy(local_var[flag])
elif flag == "content" and "defaultval" in local_var and not noweakdefault:
elif flag == "content" and "defaultval" in local_var:
value = copy.copy(local_var["defaultval"])
if expand and value:
value = self.expand(value, None)
@@ -400,22 +398,18 @@ class DataSmart(MutableMapping):
yield key

def __iter__(self):
def keylist(d):
klist = set()
for key in d:
if key == "_data":
continue
if not d[key]:
continue
klist.add(key)

seen = set()
def _keys(d):
if "_data" in d:
klist |= keylist(d["_data"])
for key in _keys(d["_data"]):
yield key

return klist

for k in keylist(self.dict):
yield k
for key in d:
if key != "_data":
if not key in seen:
seen.add(key)
yield key
return _keys(self.dict)

def __len__(self):
return len(frozenset(self))

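The __iter__ hunk contrasts two traversals of a copy-on-write dictionary chain where each level keeps its parent under "_data": one side collects every key into a set before yielding, the other yields lazily while tracking what has been seen. Both shapes, sketched over a plain nested dict (the falsy-entry filtering of the original is omitted):

    def keys_eager(d):
        """Collect the full key set up the '_data' parent chain."""
        klist = set()
        for key in d:
            if key != "_data":
                klist.add(key)
        if "_data" in d:
            klist |= keys_eager(d["_data"])
        return klist

    def keys_lazy(d, seen=None):
        """Yield keys as found, deduplicating against levels already seen."""
        if seen is None:
            seen = set()
        for key in d:
            if key != "_data" and key not in seen:
                seen.add(key)
                yield key
        if "_data" in d:
            for key in keys_lazy(d["_data"], seen):
                yield key

    layered = {"A": 1, "_data": {"A": 0, "B": 2}}
    print(sorted(keys_eager(layered)), sorted(keys_lazy(layered)))  # same keys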
@@ -30,7 +30,6 @@ except ImportError:
import pickle
import logging
import atexit
import traceback
import bb.utils

# This is the pid for which we should generate the event. This is set when
@@ -38,8 +37,6 @@ import bb.utils
worker_pid = 0
worker_pipe = None

logger = logging.getLogger('BitBake.Event')

class Event(object):
"""Base class for events"""

@@ -61,35 +58,23 @@ _ui_handler_seq = 0
bb.utils._context["NotHandled"] = NotHandled
bb.utils._context["Handled"] = Handled

def execute_handler(name, handler, event, d):
event.data = d
try:
ret = handler(event)
except Exception:
etype, value, tb = sys.exc_info()
logger.error("Execution of event handler '%s' failed" % name,
exc_info=(etype, value, tb.tb_next))
raise
except SystemExit as exc:
if exc.code != 0:
logger.error("Execution of event handler '%s' failed" % name)
raise
finally:
del event.data

if ret is not None:
warnings.warn("Using Handled/NotHandled in event handlers is deprecated",
DeprecationWarning, stacklevel = 2)

def fire_class_handlers(event, d):
if isinstance(event, logging.LogRecord):
return

for name, handler in _handlers.iteritems():
try:
execute_handler(name, handler, event, d)
except Exception:
continue
for handler in _handlers:
h = _handlers[handler]
event.data = d
if type(h).__name__ == "code":
locals = {"e": event}
bb.utils.simple_exec(h, locals)
ret = bb.utils.better_eval("tmpHandler(e)", locals)
if ret is not None:
warnings.warn("Using Handled/NotHandled in event handlers is deprecated",
DeprecationWarning, stacklevel = 2)
else:
h(event)
del event.data

ui_queue = []
@atexit.register
@@ -120,10 +105,7 @@ def fire_ui_handlers(event, d):
# We use pickle here since it better handles object instances
# which xmlrpc's marshaller does not. Events *must* be serializable
# by pickle.
if hasattr(_ui_handlers[h].event, "sendpickle"):
_ui_handlers[h].event.sendpickle((pickle.dumps(event)))
else:
_ui_handlers[h].event.send(event)
_ui_handlers[h].event.send((pickle.dumps(event)))
except:
errors.append(h)
for h in errors:
@@ -154,7 +136,6 @@ def fire_from_worker(event, d):
event = pickle.loads(event[7:-8])
fire_ui_handlers(event, d)

noop = lambda _: None
def register(name, handler):
"""Register an Event handler"""

@@ -165,18 +146,9 @@ def register(name, handler):
if handler is not None:
# handle string containing python code
if isinstance(handler, basestring):
tmp = "def %s(e):\n%s" % (name, handler)
try:
code = compile(tmp, "%s(e)" % name, "exec")
except SyntaxError:
logger.error("Unable to register event handler '%s':\n%s", name,
''.join(traceback.format_exc(limit=0)))
_handlers[name] = noop
return
env = {}
bb.utils.simple_exec(code, env)
func = bb.utils.better_eval(name, env)
_handlers[name] = func
tmp = "def tmpHandler(e):\n%s" % handler
comp = bb.utils.better_compile(tmp, "tmpHandler(e)", "bb.event._registerCode")
_handlers[name] = comp
else:
_handlers[name] = handler

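The register() hunk shows one side compiling a string handler into a real named function once, at registration time, downgrading it to a no-op on bad syntax; the other compiled to a raw code object and evaluated a tmpHandler at every fire. A self-contained sketch of the compile-then-bind approach:

    import logging

    logger = logging.getLogger("events")
    _handlers = {}
    noop = lambda _: None

    def register(name, body):
        """body is Python source for the handler's function body."""
        src = "def %s(e):\n%s" % (name, body)
        try:
            code = compile(src, "%s(e)" % name, "exec")
        except SyntaxError:
            logger.error("Unable to register event handler '%s'", name)
            _handlers[name] = noop      # keep firing cheap and safe
            return
        env = {}
        exec(code, env)                 # materialise the function once
        _handlers[name] = env[name]

    register("on_build", "    print('event seen:', e)")
    _handlers["on_build"]("BuildStarted")   # event seen: BuildStarted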
@@ -206,17 +178,13 @@ def getName(e):
class ConfigParsed(Event):
"""Configuration Parsing Complete"""

class RecipeEvent(Event):
class RecipeParsed(Event):
""" Recipe Parsing Complete """

def __init__(self, fn):
self.fn = fn
Event.__init__(self)

class RecipePreFinalise(RecipeEvent):
""" Recipe Parsing Complete but not yet finialised"""

class RecipeParsed(RecipeEvent):
""" Recipe Parsing Complete """

class StampUpdate(Event):
"""Trigger for any adjustment of the stamp files to happen"""

@@ -390,16 +358,6 @@ class TargetsTreeGenerated(Event):
Event.__init__(self)
self._model = model

class FilesMatchingFound(Event):
"""
Event when a list of files matching the supplied pattern has
been generated
"""
def __init__(self, pattern, matches):
Event.__init__(self)
self._pattern = pattern
self._matches = matches

class ConfigFilesFound(Event):
"""
Event when a list of appropriate config files has been generated
@@ -409,14 +367,6 @@ class ConfigFilesFound(Event):
self._variable = variable
self._values = values

class ConfigFilePathFound(Event):
"""
Event when a path for a config file has been found
"""
def __init__(self, path):
Event.__init__(self)
self._path = path

class MsgBase(Event):
"""Base class for messages"""

@@ -446,12 +396,6 @@ class LogHandler(logging.Handler):
"""Dispatch logging messages as bitbake events"""

def emit(self, record):
if record.exc_info:
etype, value, tb = record.exc_info
if hasattr(tb, 'tb_next'):
tb = list(bb.exceptions.extract_traceback(tb, context=3))
record.bb_exc_info = (etype, value, tb)
record.exc_info = None
fire(record, None)

def filter(self, record):

@@ -1,84 +0,0 @@
from __future__ import absolute_import
import inspect
import traceback
import bb.namedtuple_with_abc
from collections import namedtuple


class TracebackEntry(namedtuple.abc):
"""Pickleable representation of a traceback entry"""
_fields = 'filename lineno function args code_context index'
_header = ' File "{0.filename}", line {0.lineno}, in {0.function}{0.args}'

def format(self, formatter=None):
if not self.code_context:
return self._header.format(self) + '\n'

formatted = [self._header.format(self) + ':\n']

for lineindex, line in enumerate(self.code_context):
if formatter:
line = formatter(line)

if lineindex == self.index:
formatted.append(' >%s' % line)
else:
formatted.append(' %s' % line)
return formatted

def __str__(self):
return ''.join(self.format())

def _get_frame_args(frame):
"""Get the formatted arguments and class (if available) for a frame"""
arginfo = inspect.getargvalues(frame)
if not arginfo.args:
return '', None

firstarg = arginfo.args[0]
if firstarg == 'self':
self = arginfo.locals['self']
cls = self.__class__.__name__

arginfo.args.pop(0)
del arginfo.locals['self']
else:
cls = None

formatted = inspect.formatargvalues(*arginfo)
return formatted, cls

def extract_traceback(tb, context=1):
frames = inspect.getinnerframes(tb, context)
for frame, filename, lineno, function, code_context, index in frames:
formatted_args, cls = _get_frame_args(frame)
if cls:
function = '%s.%s' % (cls, function)
yield TracebackEntry(filename, lineno, function, formatted_args,
code_context, index)

def format_extracted(extracted, formatter=None, limit=None):
if limit:
extracted = extracted[-limit:]

formatted = []
for tracebackinfo in extracted:
formatted.extend(tracebackinfo.format(formatter))
return formatted


def format_exception(etype, value, tb, context=1, limit=None, formatter=None):
formatted = ['Traceback (most recent call last):\n']

if hasattr(tb, 'tb_next'):
tb = extract_traceback(tb, context)

formatted.extend(format_extracted(tb, formatter, limit))
formatted.extend(traceback.format_exception_only(etype, value))
return formatted

def to_string(exc):
if isinstance(exc, SystemExit):
if not isinstance(exc.code, basestring):
return 'Exited with "%d"' % exc.code
return str(exc)
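The file removed in the hunk above converts live tracebacks into picklable TracebackEntry tuples so worker processes can ship rich error context to the parent. The standard library offers the same idea in plainer form; a small usage sketch:

    import sys
    import traceback

    def extract_picklable(tb):
        """Plain-data traceback entries, safe to pickle across processes."""
        return [(f.filename, f.lineno, f.name, f.line)
                for f in traceback.extract_tb(tb)]

    try:
        1 / 0
    except ZeroDivisionError:
        entries = extract_picklable(sys.exc_info()[2])

    for filename, lineno, func, line in entries:
        print('File "%s", line %d, in %s: %s' % (filename, lineno, func, line))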
@@ -153,18 +153,18 @@ def fetcher_init(d):
Called to initialize the fetchers once the configuration data is known.
Calls before this must not hit the cache.
"""
pd = persist_data.persist(d)
# When to drop SCM head revisions controlled by user policy
srcrev_policy = bb.data.getVar('BB_SRCREV_POLICY', d, 1) or "clear"
if srcrev_policy == "cache":
logger.debug(1, "Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
elif srcrev_policy == "clear":
logger.debug(1, "Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
revs = persist_data.persist('BB_URI_HEADREVS', d)
try:
bb.fetch.saved_headrevs = revs.items()
bb.fetch.saved_headrevs = pd['BB_URI_HEADREVS'].items()
except:
pass
revs.clear()
del pd['BB_URI_HEADREVS']
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)

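Both fetcher_init variants keep SCM head revisions in a persistent dict-like store; the hunk only changes whether the domain name is passed to persist() directly or used to index the returned handle. A minimal sqlite-backed sketch of such a store; the schema and class name are illustrative, not BitBake's persist_data:

    import sqlite3

    class PersistDict(object):
        """Tiny dict-like store: one key/value table per domain."""
        def __init__(self, path, domain):
            self.db = sqlite3.connect(path)
            self.domain = domain     # trusted identifier, not user input
            self.db.execute("CREATE TABLE IF NOT EXISTS %s "
                            "(key TEXT PRIMARY KEY, value TEXT)" % domain)
        def __getitem__(self, key):
            row = self.db.execute("SELECT value FROM %s WHERE key=?"
                                  % self.domain, (key,)).fetchone()
            if row is None:
                raise KeyError(key)
            return row[0]
        def __setitem__(self, key, value):
            self.db.execute("INSERT OR REPLACE INTO %s VALUES (?, ?)"
                            % self.domain, (key, value))
            self.db.commit()
        def items(self):
            return self.db.execute("SELECT key, value FROM %s"
                                   % self.domain).fetchall()

    revs = PersistDict(":memory:", "BB_URI_HEADREVS")
    revs["git://host/repo"] = "deadbeef"
    print(revs.items())    # [('git://host/repo', 'deadbeef')]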
@@ -178,7 +178,8 @@ def fetcher_compare_revisions(d):
return true/false on whether they've changed.
"""

data = persist_data.persist('BB_URI_HEADREVS', d).items()
pd = persist_data.persist(d)
data = pd['BB_URI_HEADREVS'].items()
data2 = bb.fetch.saved_headrevs

changed = False
@@ -755,13 +756,15 @@ class Fetch(object):
if not hasattr(self, "_latest_revision"):
raise ParameterError

revs = persist_data.persist('BB_URI_HEADREVS', d)
pd = persist_data.persist(d)
revs = pd['BB_URI_HEADREVS']
key = self.generate_revision_key(url, ud, d)
try:
return revs[key]
except KeyError:
revs[key] = rev = self._latest_revision(url, ud, d)
return rev
rev = revs[key]
if rev != None:
return str(rev)

revs[key] = rev = self._latest_revision(url, ud, d)
return rev

def sortable_revision(self, url, ud, d):
"""
@@ -770,17 +773,18 @@ class Fetch(object):
if hasattr(self, "_sortable_revision"):
return self._sortable_revision(url, ud, d)

localcounts = persist_data.persist('BB_URI_LOCALCOUNT', d)
pd = persist_data.persist(d)
localcounts = pd['BB_URI_LOCALCOUNT']
key = self.generate_revision_key(url, ud, d)

latest_rev = self._build_revision(url, ud, d)
last_rev = localcounts.get(key + '_rev')
last_rev = localcounts[key + '_rev']
uselocalcount = bb.data.getVar("BB_LOCALCOUNT_OVERRIDE", d, True) or False
count = None
if uselocalcount:
count = Fetch.localcount_internal_helper(ud, d)
if count is None:
count = localcounts.get(key + '_count')
count = localcounts[key + '_count']

if last_rev == latest_rev:
return str(count + "+" + latest_rev)

@@ -67,15 +67,15 @@ class Bzr(Fetch):

options = []

if command == "revno":
if command is "revno":
bzrcmd = "%s revno %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
else:
if ud.revision:
options.append("-r %s" % ud.revision)

if command == "fetch":
if command is "fetch":
bzrcmd = "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
elif command == "update":
elif command is "update":
bzrcmd = "%s pull %s --overwrite" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid bzr command %s" % command)
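Worth flagging while reading these fetcher hunks: one side compares command names with "is", which tests object identity, not equality. It often appears to work because CPython interns short string literals, but that is an implementation detail and breaks for equal strings constructed at runtime (newer Pythons emit a SyntaxWarning for it). A two-line demonstration:

    command = "".join(["fe", "tch"])   # equal to "fetch", but a distinct object
    print(command == "fetch")          # True  - value comparison, always correct
    print(command is "fetch")          # False - identity comparison, unreliable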
@@ -94,7 +94,7 @@ class Bzr(Fetch):
bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
logger.debug(1, "BZR Checkout %s", loc)
bb.utils.mkdirhier(ud.pkgdir)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", bzrcmd)
runfetchcmd(bzrcmd, d)

@@ -137,7 +137,7 @@ class Cvs(Fetch):
else:
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(pkgdir)
bb.mkdirhier(pkgdir)
os.chdir(pkgdir)
logger.debug(1, "Running %s", cvscmd)
myret = os.system(cvscmd)

@@ -131,7 +131,7 @@ class Git(Fetch):

# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.clonedir) and os.path.exists(repofile):
bb.utils.mkdirhier(ud.clonedir)
bb.mkdirhier(ud.clonedir)
os.chdir(ud.clonedir)
runfetchcmd("tar -xzf %s" % (repofile), d)

@@ -188,7 +188,7 @@ class Git(Fetch):
os.chdir(coprefix)
runfetchcmd("%s checkout -q -f %s%s" % (ud.basecmd, ud.tag, readpathspec), d)
else:
bb.utils.mkdirhier(codir)
bb.mkdirhier(codir)
os.chdir(ud.clonedir)
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.tag, readpathspec), d)
runfetchcmd("%s checkout-index -q -f --prefix=%s -a" % (ud.basecmd, coprefix), d)
@@ -242,36 +242,36 @@ class Git(Fetch):
"""
Look in the cache for the latest revision, if not present ask the SCM.
"""
revs = bb.persist_data.persist('BB_URI_HEADREVS', d)
persisted = bb.persist_data.persist(d)
revs = persisted['BB_URI_HEADREVS']

key = self.generate_revision_key(url, ud, d, branch=True)

try:
return revs[key]
except KeyError:
rev = revs[key]
if rev is None:
# Compatibility with old key format, no branch included
oldkey = self.generate_revision_key(url, ud, d, branch=False)
try:
rev = revs[oldkey]
except KeyError:
rev = self._latest_revision(url, ud, d)
else:
rev = revs[oldkey]
if rev is not None:
del revs[oldkey]
else:
rev = self._latest_revision(url, ud, d)
revs[key] = rev
return rev

return str(rev)

def sortable_revision(self, url, ud, d):
"""

"""
localcounts = bb.persist_data.persist('BB_URI_LOCALCOUNT', d)
pd = bb.persist_data.persist(d)
localcounts = pd['BB_URI_LOCALCOUNT']
key = self.generate_revision_key(url, ud, d, branch=True)
oldkey = self.generate_revision_key(url, ud, d, branch=False)

latest_rev = self._build_revision(url, ud, d)
last_rev = localcounts.get(key + '_rev')
last_rev = localcounts[key + '_rev']
if last_rev is None:
last_rev = localcounts.get(oldkey + '_rev')
last_rev = localcounts[oldkey + '_rev']
if last_rev is not None:
del localcounts[oldkey + '_rev']
localcounts[key + '_rev'] = last_rev
@@ -281,9 +281,9 @@ class Git(Fetch):
if uselocalcount:
count = Fetch.localcount_internal_helper(ud, d)
if count is None:
count = localcounts.get(key + '_count')
count = localcounts[key + '_count']
if count is None:
count = localcounts.get(oldkey + '_count')
count = localcounts[oldkey + '_count']
if count is not None:
del localcounts[oldkey + '_count']
localcounts[key + '_count'] = count

@@ -131,7 +131,7 @@ class Hg(Fetch):
fetchcmd = self._buildhgcommand(ud, d, "fetch")
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", fetchcmd)
runfetchcmd(fetchcmd, d)

@@ -98,7 +98,7 @@ class Osc(Fetch):
oscfetchcmd = self._buildosccommand(ud, d, "fetch")
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", oscfetchcmd)
runfetchcmd(oscfetchcmd, d)

@@ -154,7 +154,7 @@ class Perforce(Fetch):

# create temp directory
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
bb.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oep4.XXXXXX', localdata), localdata)
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, 1) or "false")
tmpfile = tmppipe.readline().strip()

@@ -71,7 +71,7 @@ class Repo(Fetch):
else:
username = ""

bb.utils.mkdirhier(os.path.join(codir, "repo"))
bb.mkdirhier(os.path.join(codir, "repo"))
os.chdir(os.path.join(codir, "repo"))
if not os.path.exists(os.path.join(codir, "repo", ".repo")):
runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d)

@@ -71,7 +71,7 @@ class Svk(Fetch):
localdata = data.createCopy(d)
data.update_data(localdata)
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
bb.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oesvk.XXXXXX', localdata), localdata)
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, 1) or "false")
tmpfile = tmppipe.readline().strip()

@@ -146,7 +146,7 @@ class Svn(Fetch):
svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", svnfetchcmd)
runfetchcmd(svnfetchcmd, d)

@@ -28,8 +28,10 @@ from __future__ import absolute_import
from __future__ import print_function
import os, re
import logging
import bb.data, bb.persist_data, bb.utils
from bb import data
import bb
from bb import data
from bb import persist_data
from bb import utils

__version__ = "2"

@@ -203,10 +205,7 @@ def uri_replace(ud, uri_find, uri_replace, d):
result_decoded[loc] = uri_decoded[loc]
if isinstance(i, basestring):
if (re.match(i, uri_decoded[loc])):
if not uri_replace_decoded[loc]:
result_decoded[loc] = ""
else:
result_decoded[loc] = re.sub(i, uri_replace_decoded[loc], uri_decoded[loc])
result_decoded[loc] = re.sub(i, uri_replace_decoded[loc], uri_decoded[loc])
if uri_find_decoded.index(i) == 2:
if ud.mirrortarball:
result_decoded[loc] = os.path.join(os.path.dirname(result_decoded[loc]), os.path.basename(ud.mirrortarball))
@@ -225,18 +224,18 @@ def fetcher_init(d):
Called to initialize the fetchers once the configuration data is known.
Calls before this must not hit the cache.
"""
pd = persist_data.persist(d)
# When to drop SCM head revisions controlled by user policy
srcrev_policy = bb.data.getVar('BB_SRCREV_POLICY', d, True) or "clear"
if srcrev_policy == "cache":
logger.debug(1, "Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
elif srcrev_policy == "clear":
logger.debug(1, "Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
revs = bb.persist_data.persist('BB_URI_HEADREVS', d)
try:
bb.fetch2.saved_headrevs = revs.items()
bb.fetch2.saved_headrevs = pd['BB_URI_HEADREVS'].items()
except:
pass
revs.clear()
del pd['BB_URI_HEADREVS']
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)

@@ -250,7 +249,8 @@ def fetcher_compare_revisions(d):
return true/false on whether they've changed.
"""

data = bb.persist_data.persist('BB_URI_HEADREVS', d).items()
pd = persist_data.persist(d)
data = pd['BB_URI_HEADREVS'].items()
data2 = bb.fetch2.saved_headrevs

changed = False
@@ -300,22 +300,6 @@ def verify_checksum(u, ud, d):
if ud.sha256_expected != sha256data:
raise SHA256SumError(ud.localpath, ud.sha256_expected, sha256data, u)

def update_stamp(u, ud, d):
"""
donestamp is file stamp indicating the whole fetching is done
this function update the stamp after verifying the checksum
"""
if os.path.exists(ud.donestamp):
# Touch the done stamp file to show active use of the download
try:
os.utime(ud.donestamp, None)
except:
# Errors aren't fatal here
pass
else:
verify_checksum(u, ud, d)
open(ud.donestamp, 'w').close()

def subprocess_setup():
import signal
# Python installs a SIGPIPE handler by default. This is usually not what
@@ -368,7 +352,7 @@ def get_srcrev(d):

def localpath(url, d):
fetcher = bb.fetch2.Fetch([url], d)
return fetcher.localpath(url)
return fetcher.localpath(url)

def runfetchcmd(cmd, d, quiet = False, cleanup = []):
"""
@@ -388,7 +372,7 @@ def runfetchcmd(cmd, d, quiet = False, cleanup = []):
'SSH_AUTH_SOCK', 'SSH_AGENT_PID', 'HOME']

for var in exportvars:
val = bb.data.getVar(var, d, True)
val = data.getVar(var, d, True)
if val:
cmd = 'export ' + var + '=\"%s\"; %s' % (val, cmd)

@@ -514,15 +498,15 @@ def srcrev_internal_helper(ud, d, name):
return ud.parm['tag']

rev = None
pn = bb.data.getVar("PN", d, True)
pn = data.getVar("PN", d, True)
if name != '':
rev = bb.data.getVar("SRCREV_%s_pn-%s" % (name, pn), d, True)
rev = data.getVar("SRCREV_%s_pn-%s" % (name, pn), d, True)
if not rev:
rev = bb.data.getVar("SRCREV_%s" % name, d, True)
rev = data.getVar("SRCREV_%s" % name, d, True)
if not rev:
rev = bb.data.getVar("SRCREV_pn-%s" % pn, d, True)
rev = data.getVar("SRCREV_pn-%s" % pn, d, True)
if not rev:
rev = bb.data.getVar("SRCREV", d, True)
rev = data.getVar("SRCREV", d, True)
if rev == "INVALID":
raise FetchError("Please set SRCREV to a valid value", ud.url)
if rev == "AUTOINC":
@@ -541,7 +525,6 @@ class FetchData(object):
self.localpath = None
self.lockfile = None
self.mirrortarball = None
self.basename = None
(self.type, self.host, self.path, self.user, self.pswd, self.parm) = decodeurl(data.expand(url, d))
self.date = self.getSRCDate(d)
self.url = url
@@ -571,6 +554,15 @@ class FetchData(object):
if not self.method:
raise NoMethodError(url)

if self.method.supports_srcrev():
self.revisions = {}
for name in self.names:
self.revisions[name] = srcrev_internal_helper(self, d, name)

# add compatibility code for non name specified case
if len(self.names) == 1:
self.revision = self.revisions[self.names[0]]

if hasattr(self.method, "urldata_init"):
self.method.urldata_init(self, d)

@@ -581,19 +573,11 @@ class FetchData(object):
elif self.localfile:
self.localpath = self.method.localpath(self.url, self, d)

# Note: These files should always be in DL_DIR whereas localpath may not be.
basepath = bb.data.expand("${DL_DIR}/%s" % os.path.basename(self.localpath or self.basename), d)
self.donestamp = basepath + '.done'
self.lockfile = basepath + '.lock'

def setup_revisons(self, d):
self.revisions = {}
for name in self.names:
self.revisions[name] = srcrev_internal_helper(self, d, name)

# add compatibility code for non name specified case
if len(self.names) == 1:
self.revision = self.revisions[self.names[0]]
if self.localfile and self.localpath:
# Note: These files should always be in DL_DIR whereas localpath may not be.
basepath = bb.data.expand("${DL_DIR}/%s" % os.path.basename(self.localpath), d)
self.donestamp = basepath + '.done'
self.lockfile = basepath + '.lock'

def setup_localpath(self, d):
if not self.localpath:
@@ -608,12 +592,12 @@ class FetchData(object):
if "srcdate" in self.parm:
return self.parm['srcdate']

pn = bb.data.getVar("PN", d, True)
pn = data.getVar("PN", d, True)

if pn:
return bb.data.getVar("SRCDATE_%s" % pn, d, True) or bb.data.getVar("SRCDATE", d, True) or bb.data.getVar("DATE", d, True)
return data.getVar("SRCDATE_%s" % pn, d, True) or data.getVar("SRCDATE", d, True) or data.getVar("DATE", d, True)

return bb.data.getVar("SRCDATE", d, True) or bb.data.getVar("DATE", d, True)
return data.getVar("SRCDATE", d, True) or data.getVar("DATE", d, True)

class FetchMethod(object):
"""Base class for 'fetch'ing data"""
@@ -679,7 +663,7 @@ class FetchMethod(object):

try:
unpack = bb.utils.to_boolean(urldata.parm.get('unpack'), True)
except ValueError as exc:
except ValueError, exc:
bb.fatal("Invalid value for 'unpack' parameter for %s: %s" %
(file, urldata.parm.get('unpack')))

@@ -708,7 +692,7 @@ class FetchMethod(object):
elif file.endswith('.zip') or file.endswith('.jar'):
try:
dos = bb.utils.to_boolean(urldata.parm.get('dos'), False)
except ValueError as exc:
except ValueError, exc:
bb.fatal("Invalid value for 'dos' parameter for %s: %s" %
(file, urldata.parm.get('dos')))
cmd = 'unzip -q -o'
@@ -747,7 +731,7 @@ class FetchMethod(object):
destdir = urldata.path.rsplit("/", 1)[0]
else:
destdir = "."
bb.utils.mkdirhier("%s/%s" % (rootdir, destdir))
bb.mkdirhier("%s/%s" % (rootdir, destdir))
cmd = 'cp %s %s/%s/' % (file, rootdir, destdir)

if not cmd:
@@ -758,7 +742,7 @@ class FetchMethod(object):
os.chdir(rootdir)
if 'subdir' in urldata.parm:
newdir = ("%s/%s" % (rootdir, urldata.parm.get('subdir')))
bb.utils.mkdirhier(newdir)
bb.mkdirhier(newdir)
os.chdir(newdir)

cmd = "PATH=\"%s\" %s" % (bb.data.getVar('PATH', data, True), cmd)
@@ -806,10 +790,10 @@ class FetchMethod(object):

localcount = None
if name != '':
pn = bb.data.getVar("PN", d, True)
localcount = bb.data.getVar("LOCALCOUNT_" + name, d, True)
pn = data.getVar("PN", d, True)
localcount = data.getVar("LOCALCOUNT_" + name, d, True)
if not localcount:
localcount = bb.data.getVar("LOCALCOUNT", d, True)
localcount = data.getVar("LOCALCOUNT", d, True)
return localcount

localcount_internal_helper = staticmethod(localcount_internal_helper)
@@ -821,13 +805,15 @@ class FetchMethod(object):
if not hasattr(self, "_latest_revision"):
raise ParameterError("The fetcher for this URL does not support _latest_revision", url)

revs = bb.persist_data.persist('BB_URI_HEADREVS', d)
pd = persist_data.persist(d)
revs = pd['BB_URI_HEADREVS']
key = self.generate_revision_key(url, ud, d, name)
try:
return revs[key]
except KeyError:
revs[key] = rev = self._latest_revision(url, ud, d, name)
return rev
rev = revs[key]
if rev != None:
return str(rev)

revs[key] = rev = self._latest_revision(url, ud, d, name)
return rev

def sortable_revision(self, url, ud, d, name):
"""
@@ -836,17 +822,18 @@ class FetchMethod(object):
if hasattr(self, "_sortable_revision"):
return self._sortable_revision(url, ud, d)

localcounts = bb.persist_data.persist('BB_URI_LOCALCOUNT', d)
pd = persist_data.persist(d)
localcounts = pd['BB_URI_LOCALCOUNT']
key = self.generate_revision_key(url, ud, d, name)

latest_rev = self._build_revision(url, ud, d, name)
last_rev = localcounts.get(key + '_rev')
last_rev = localcounts[key + '_rev']
uselocalcount = bb.data.getVar("BB_LOCALCOUNT_OVERRIDE", d, True) or False
count = None
if uselocalcount:
count = FetchMethod.localcount_internal_helper(ud, d, name)
if count is None:
count = localcounts.get(key + '_count') or "0"
count = localcounts[key + '_count'] or "0"

if last_rev == latest_rev:
return str(count + "+" + latest_rev)
@@ -926,6 +913,9 @@ class Fetch(object):
m = ud.method
localpath = ""

if not ud.localfile:
continue

lf = bb.utils.lockfile(ud.lockfile)

try:
@@ -948,9 +938,6 @@ class Fetch(object):
if hasattr(m, "build_mirror_data"):
m.build_mirror_data(u, ud, self.d)
localpath = ud.localpath
# early checksum verify, so that if checksum mismatched,
# fetcher still have chance to fetch from mirror
update_stamp(u, ud, self.d)

except bb.fetch2.NetworkAccess:
raise
@@ -964,10 +951,20 @@ class Fetch(object):
mirrors = mirror_from_string(bb.data.getVar('MIRRORS', self.d, True))
localpath = try_mirrors (self.d, ud, mirrors)

if not localpath or ((not os.path.exists(localpath)) and localpath.find("*") == -1):
if not localpath or not os.path.exists(localpath):
raise FetchError("Unable to fetch URL %s from any source." % u, u)

update_stamp(u, ud, self.d)
if os.path.exists(ud.donestamp):
# Touch the done stamp file to show active use of the download
try:
os.utime(ud.donestamp, None)
except:
# Errors aren't fatal here
pass
else:
# Only check the checksums if we've not seen this item before, then create the stamp
verify_checksum(u, ud, self.d)
open(ud.donestamp, 'w').close()

finally:
bb.utils.unlockfile(lf)

@@ -45,8 +45,6 @@ class Bzr(FetchMethod):
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(data.expand('${BZRDIR}', d), ud.host, relpath)

ud.setup_revisons(d)

if not ud.revision:
ud.revision = self.latest_revision(ud.url, ud, d)

@@ -66,15 +64,15 @@ class Bzr(FetchMethod):

options = []

if command == "revno":
if command is "revno":
bzrcmd = "%s revno %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
else:
if ud.revision:
options.append("-r %s" % ud.revision)

if command == "fetch":
if command is "fetch":
bzrcmd = "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
elif command == "update":
elif command is "update":
bzrcmd = "%s pull %s --overwrite" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid bzr command %s" % command, ud.url)
@@ -95,7 +93,7 @@ class Bzr(FetchMethod):
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
logger.debug(1, "BZR Checkout %s", loc)
bb.utils.mkdirhier(ud.pkgdir)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", bzrcmd)
runfetchcmd(bzrcmd, d)

@@ -139,7 +139,7 @@ class Cvs(FetchMethod):
else:
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(pkgdir)
bb.mkdirhier(pkgdir)
os.chdir(pkgdir)
logger.debug(1, "Running %s", cvscmd)
bb.fetch2.check_network_access(d, cvscmd, ud.url)

@@ -3,41 +3,6 @@
"""
BitBake 'Fetch' git implementation

git fetcher support the SRC_URI with format of:
SRC_URI = "git://some.host/somepath;OptionA=xxx;OptionB=xxx;..."

Supported SRC_URI options are:

- branch
The git branch to retrieve from. The default is "master"

this option also support multiple branches fetching, branches
are seperated by comma. in multiple branches case, the name option
must have the same number of names to match the branches, which is
used to specify the SRC_REV for the branch
e.g:
SRC_URI="git://some.host/somepath;branch=branchX,branchY;name=nameX,nameY"
SRCREV_nameX = "xxxxxxxxxxxxxxxxxxxx"
SRCREV_nameY = "YYYYYYYYYYYYYYYYYYYY"

- tag
The git tag to retrieve. The default is "master"

- protocol
The method to use to access the repository. Common options are "git",
"http", "file" and "rsync". The default is "git"

- rebaseable
rebaseable indicates that the upstream git repo may rebase in the future,
and current revision may disappear from upstream repo. This option will
reminder fetcher to preserve local cache carefully for future use.
The default value is "0", set rebaseable=1 for rebaseable git repo

- nocheckout
Don't checkout source code when unpacking. set this option for the recipe
who has its own routine to checkout code.
The default is "0", set nocheckout=1 if needed.

"""

#Copyright (C) 2005 Richard Purdie
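The docstring removed above documents the git fetcher's SRC_URI options, including the rule that a multi-branch URI must carry a name list of the same length so each branch gets its own SRCREV. That pairing rule, sketched on its own:

    def pair_branches(parm, names):
        """Map each SRC_URI name to its branch; the lists must align."""
        branches = parm.get("branch", "master").split(',')
        if len(branches) != len(names):
            raise ValueError("branch and name options must match in number")
        return dict(zip(names, branches))

    print(pair_branches({"branch": "branchX,branchY"}, ["nameX", "nameY"]))
    # {'nameX': 'branchX', 'nameY': 'branchY'}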
@@ -86,14 +51,11 @@ class Git(FetchMethod):
elif not ud.host:
ud.proto = 'file'
else:
ud.proto = "git"
ud.proto = "rsync"

if not ud.proto in ('git', 'file', 'ssh', 'http', 'https'):
raise bb.fetch2.ParameterError("Invalid protocol type", ud.url)

ud.nocheckout = ud.parm.get("nocheckout","0") == "1"

ud.rebaseable = ud.parm.get("rebaseable","0") == "1"
ud.nocheckout = False
if 'nocheckout' in ud.parm:
ud.nocheckout = True

branches = ud.parm.get("branch", "master").split(',')
if len(branches) != len(ud.names):
@@ -103,29 +65,19 @@ class Git(FetchMethod):
branch = branches[ud.names.index(name)]
ud.branches[name] = branch

gitsrcname = '%s%s' % (ud.host, ud.path.replace('/', '.'))
ud.mirrortarball = 'git2_%s.tar.gz' % (gitsrcname)
ud.fullmirror = os.path.join(data.getVar("DL_DIR", d, True), ud.mirrortarball)
ud.clonedir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)

ud.basecmd = data.getVar("FETCHCMD_git", d, True) or "git"

ud.write_tarballs = ((data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True) or "0") != "0") or ud.rebaseable

ud.setup_revisons(d)

for name in ud.names:
# Ensure anything that doesn't look like a sha256 checksum/revision is translated into one
if not ud.revisions[name] or len(ud.revisions[name]) != 40 or (False in [c in "abcdef0123456789" for c in ud.revisions[name]]):
ud.branches[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud.url, ud, d, name)

gitsrcname = '%s%s' % (ud.host, ud.path.replace('/', '.'))
# for rebaseable git repo, it is necessary to keep mirror tar ball
# per revision, so that even the revision disappears from the
# upstream repo in the future, the mirror will remain intact and still
# contains the revision
if ud.rebaseable:
for name in ud.names:
gitsrcname = gitsrcname + '_' + ud.revisions[name]
ud.mirrortarball = 'git2_%s.tar.gz' % (gitsrcname)
ud.fullmirror = os.path.join(data.getVar("DL_DIR", d, True), ud.mirrortarball)
ud.clonedir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)
ud.write_tarballs = (data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True) or "0") != "0"

ud.localfile = ud.clonedir

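In the urldata_init hunk, the newer side derives the mirror tarball name from host and path and, for rebaseable repositories, appends every pinned revision, so an upstream rebase can never orphan the cached objects. The naming scheme in isolation:

    import os

    def mirror_tarball_name(host, path, revisions, rebaseable):
        gitsrcname = '%s%s' % (host, path.replace('/', '.'))
        if rebaseable:
            # One tarball per pinned revision set: survives upstream rebases.
            for rev in revisions:
                gitsrcname += '_' + rev
        return 'git2_%s.tar.gz' % gitsrcname

    print(mirror_tarball_name("some.host", "/somepath", ["deadbeef"], True))
    # git2_some.host.somepath_deadbeef.tar.gz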
@@ -164,16 +116,14 @@ class Git(FetchMethod):

# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.clonedir) and os.path.exists(ud.fullmirror):
bb.utils.mkdirhier(ud.clonedir)
bb.mkdirhier(ud.clonedir)
os.chdir(ud.clonedir)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d)

# If the repo still doesn't exist, fallback to cloning it
if not os.path.exists(ud.clonedir):
clone_cmd = "%s clone --bare --mirror %s://%s%s%s %s" % \
(ud.basecmd, ud.proto, username, ud.host, ud.path, ud.clonedir)
bb.fetch2.check_network_access(d, clone_cmd)
runfetchcmd(clone_cmd, d)
bb.fetch2.check_network_access(d, "git clone --bare %s%s" % (ud.host, ud.path))
runfetchcmd("%s clone --bare %s://%s%s%s %s" % (ud.basecmd, ud.proto, username, ud.host, ud.path, ud.clonedir), d)

os.chdir(ud.clonedir)
# Update the checkout if needed
@@ -182,16 +132,15 @@ class Git(FetchMethod):
if not self._contains_ref(ud.revisions[name], d):
needupdate = True
if needupdate:
bb.fetch2.check_network_access(d, "git fetch %s%s" % (ud.host, ud.path), ud.url)
try:
runfetchcmd("%s remote prune origin" % ud.basecmd, d)
runfetchcmd("%s remote rm origin" % ud.basecmd, d)
except bb.fetch2.FetchError:
logger.debug(1, "No Origin")

runfetchcmd("%s remote add --mirror origin %s://%s%s%s" % (ud.basecmd, ud.proto, username, ud.host, ud.path), d)
fetch_cmd = "%s fetch --all -t" % ud.basecmd
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
runfetchcmd(fetch_cmd, d)
runfetchcmd("%s remote add origin %s://%s%s%s" % (ud.basecmd, ud.proto, username, ud.host, ud.path), d)
runfetchcmd("%s fetch --all -t" % ud.basecmd, d)
runfetchcmd("%s prune-packed" % ud.basecmd, d)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)
ud.repochanged = True
@@ -219,11 +168,8 @@ class Git(FetchMethod):
runfetchcmd("git clone -s -n %s %s" % (ud.clonedir, destdir), d)
if not ud.nocheckout:
os.chdir(destdir)
if subdir != "":
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d)
else:
runfetchcmd("%s checkout %s" % (ud.basecmd, ud.revisions[ud.names[0]]), d)
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d)
return True

def clean(self, ud, d):
@@ -255,10 +201,9 @@ class Git(FetchMethod):
else:
username = ""

bb.fetch2.check_network_access(d, "git ls-remote %s%s %s" % (ud.host, ud.path, ud.branches[name]))
basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
cmd = "%s ls-remote %s://%s%s%s %s" % \
(basecmd, ud.proto, username, ud.host, ud.path, ud.branches[name])
bb.fetch2.check_network_access(d, cmd)
cmd = "%s ls-remote %s://%s%s%s %s" % (basecmd, ud.proto, username, ud.host, ud.path, ud.branches[name])
output = runfetchcmd(cmd, d, True)
if not output:
raise bb.fetch2.FetchError("The command %s gave empty output unexpectedly" % cmd, url)
@@ -278,13 +223,10 @@ class Git(FetchMethod):
# Check if we have the rev already

if not os.path.exists(ud.clonedir):
logging.debug("GIT repository for %s does not exist in %s. \
Downloading.", url, ud.clonedir)
print("no repo")
self.download(None, ud, d)
if not os.path.exists(ud.clonedir):
logger.error("GIT repository for %s does not exist in %s after \
download. Cannot get sortable buildnumber, using \
old value", url, ud.clonedir)
logger.error("GIT repository for %s doesn't exist in %s, cannot get sortable buildnumber, using old value", url, ud.clonedir)
return None


@@ -57,8 +57,6 @@ class Hg(FetchMethod):
ud.pkgdir = os.path.join(data.expand('${HGDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)

ud.setup_revisons(d)

if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
elif not ud.revision:
@@ -94,21 +92,21 @@ class Hg(FetchMethod):
else:
hgroot = ud.user + "@" + host + ud.path

if command == "info":
if command is "info":
return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, ud.module)

options = [];
if ud.revision:
options.append("-r %s" % ud.revision)

if command == "fetch":
if command is "fetch":
cmd = "%s clone %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
elif command == "pull":
elif command is "pull":
# do not pass options list; limiting pull to rev causes the local
# repo not to contain it and immediately following "update" command
# will crash
cmd = "%s pull" % (basecmd)
elif command == "update":
elif command is "update":
cmd = "%s update -C %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid hg command %s" % command, ud.url)
@@ -133,7 +131,7 @@ class Hg(FetchMethod):
fetchcmd = self._buildhgcommand(ud, d, "fetch")
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd, ud.url)

@@ -40,7 +40,6 @@ class Local(FetchMethod):

def urldata_init(self, ud, d):
# We don't set localfile as for this fetcher the file is already local!
ud.basename = os.path.basename(ud.url.split("://")[1].split(";")[0])
return

def localpath(self, url, urldata, d):
@@ -50,9 +49,6 @@ class Local(FetchMethod):
path = url.split("://")[1]
path = path.split(";")[0]
newpath = path
dldirfile = os.path.join(data.getVar("DL_DIR", d, True), os.path.basename(path))
if os.path.exists(dldirfile):
return dldirfile
if path[0] != "/":
filespath = data.getVar('FILESPATH', d, True)
if filespath:
@@ -61,17 +57,8 @@ class Local(FetchMethod):
filesdir = data.getVar('FILESDIR', d, True)
if filesdir:
newpath = os.path.join(filesdir, path)
if not os.path.exists(newpath) and path.find("*") == -1:
return dldirfile
return newpath

def need_update(self, url, ud, d):
if url.find("*") != -1:
return False
if os.path.exists(ud.localpath):
return False
return True

def download(self, url, urldata, d):
"""Fetch urls (no-op for Local method)"""
# no need to fetch local files, we'll deal with them in place.

@@ -68,9 +68,9 @@ class Osc(FetchMethod):

coroot = self._strip_leading_slashes(ud.path)

if command == "fetch":
if command is "fetch":
osccmd = "%s %s co %s/%s %s" % (basecmd, config, coroot, ud.module, " ".join(options))
elif command == "update":
elif command is "update":
osccmd = "%s %s up %s" % (basecmd, config, " ".join(options))
else:
raise FetchError("Invalid osc command %s" % command, ud.url)
@@ -96,7 +96,7 @@ class Osc(FetchMethod):
oscfetchcmd = self._buildosccommand(ud, d, "fetch")
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", oscfetchcmd)
bb.fetch2.check_network_access(d, oscfetchcmd, ud.url)

@@ -152,7 +152,7 @@ class Perforce(FetchMethod):

# create temp directory
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
bb.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oep4.XXXXXX', localdata), localdata)
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmppipe.readline().strip()

@@ -69,7 +69,7 @@ class Repo(FetchMethod):
else:
username = ""

bb.utils.mkdirhier(os.path.join(codir, "repo"))
bb.mkdirhier(os.path.join(codir, "repo"))
os.chdir(os.path.join(codir, "repo"))
if not os.path.exists(os.path.join(codir, "repo", ".repo")):
bb.fetch2.check_network_access(d, "repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), ud.url)

@@ -75,7 +75,7 @@ class Svk(FetchMethod):
localdata = data.createCopy(d)
data.update_data(localdata)
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
bb.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oesvk.XXXXXX', localdata), localdata)
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmppipe.readline().strip()

@@ -56,8 +56,6 @@ class Svn(FetchMethod):
ud.pkgdir = os.path.join(data.expand('${SVNDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)

ud.setup_revisons(d)

if 'rev' in ud.parm:
ud.revision = ud.parm['rev']

@@ -87,7 +85,7 @@ class Svn(FetchMethod):
if ud.pswd:
options.append("--password %s" % ud.pswd)

if command == "info":
if command is "info":
svncmd = "%s info %s %s://%s/%s/" % (basecmd, " ".join(options), proto, svnroot, ud.module)
else:
suffix = ""
@@ -95,9 +93,9 @@ class Svn(FetchMethod):
options.append("-r %s" % ud.revision)
suffix = "@%s" % (ud.revision)

if command == "fetch":
if command is "fetch":
svncmd = "%s co %s %s://%s/%s%s %s" % (basecmd, " ".join(options), proto, svnroot, ud.module, suffix, ud.module)
elif command == "update":
elif command is "update":
svncmd = "%s update %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid svn command %s" % command, ud.url)
@@ -124,7 +122,7 @@ class Svn(FetchMethod):
svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", svnfetchcmd)
bb.fetch2.check_network_access(d, svnfetchcmd, ud.url)

@@ -65,15 +65,9 @@ class BBLogFormatter(logging.Formatter):
|
||||
def format(self, record):
|
||||
record.levelname = self.getLevelName(record.levelno)
|
||||
if record.levelno == self.PLAIN:
|
||||
msg = record.getMessage()
|
||||
return record.getMessage()
|
||||
else:
|
||||
msg = logging.Formatter.format(self, record)
|
||||
|
||||
if hasattr(record, 'bb_exc_info'):
|
||||
etype, value, tb = record.bb_exc_info
|
||||
formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
|
||||
msg += '\n' + ''.join(formatted)
|
||||
return msg
|
||||
return logging.Formatter.format(self, record)
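The behavioural point of this hunk is that PLAIN records bypass normal formatting. A toy analogue — not BitBake's actual class, and the PLAIN level number here is made up:

    import logging

    PLAIN = logging.INFO + 5  # illustrative level number only

    class PlainAwareFormatter(logging.Formatter):
        # Mirrors the logic above: PLAIN records are emitted verbatim,
        # everything else goes through the stock formatter.
        def format(self, record):
            if record.levelno == PLAIN:
                return record.getMessage()
            return logging.Formatter.format(self, record)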

class Loggers(dict):
def __getitem__(self, key):
@@ -153,8 +147,8 @@ def set_debug_domains(domainargs):
#

def debug(level, msgdomain, msg):
warnings.warn("bb.msg.debug is deprecated in favor of the python 'logging' module",
DeprecationWarning, stacklevel=2)
warnings.warn("bb.msg.debug will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
level = logging.DEBUG - (level - 1)
if not msgdomain:
logger.debug(level, msg)
@@ -162,13 +156,13 @@ def debug(level, msgdomain, msg):
loggers[msgdomain].debug(level, msg)

def plain(msg):
warnings.warn("bb.msg.plain is deprecated in favor of the python 'logging' module",
DeprecationWarning, stacklevel=2)
warnings.warn("bb.msg.plain will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
logger.plain(msg)

def note(level, msgdomain, msg):
warnings.warn("bb.msg.note is deprecated in favor of the python 'logging' module",
DeprecationWarning, stacklevel=2)
warnings.warn("bb.msg.note will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
if level > 1:
if msgdomain:
logger.verbose(msg)
@@ -181,22 +175,24 @@ def note(level, msgdomain, msg):
loggers[msgdomain].info(msg)

def warn(msgdomain, msg):
warnings.warn("bb.msg.warn is deprecated in favor of the python 'logging' module",
DeprecationWarning, stacklevel=2)
warnings.warn("bb.msg.warn will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
if not msgdomain:
logger.warn(msg)
else:
loggers[msgdomain].warn(msg)

def error(msgdomain, msg):
warnings.warn("bb.msg.error is deprecated in favor of the python 'logging' module",
DeprecationWarning, stacklevel=2)
warnings.warn("bb.msg.error will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
if not msgdomain:
logger.error(msg)
else:
loggers[msgdomain].error(msg)

def fatal(msgdomain, msg):
warnings.warn("bb.msg.fatal will soon be deprecated in favor of raising appropriate exceptions",
PendingDeprecationWarning, stacklevel=2)
if not msgdomain:
logger.critical(msg)
else:

@@ -1,255 +0,0 @@
# http://code.activestate.com/recipes/577629-namedtupleabc-abstract-base-class-mix-in-for-named/
#!/usr/bin/env python
# Copyright (c) 2011 Jan Kaliszewski (zuo). Available under the MIT License.

"""
namedtuple_with_abc.py:
* named tuple mix-in + ABC (abstract base class) recipe,
* works under Python 2.6, 2.7 as well as 3.x.

Import this module to patch collections.namedtuple() factory function
-- enriching it with the 'abc' attribute (an abstract base class + mix-in
for named tuples) and decorating it with a wrapper that registers each
newly created named tuple as a subclass of namedtuple.abc.

How to import:
import collections, namedtuple_with_abc
or:
import namedtuple_with_abc
from collections import namedtuple
# ^ in this variant you must import namedtuple function
# *after* importing namedtuple_with_abc module
or simply:
from namedtuple_with_abc import namedtuple

Simple usage example:
class Credentials(namedtuple.abc):
_fields = 'username password'
def __str__(self):
return ('{0.__class__.__name__}'
'(username={0.username}, password=...)'.format(self))
print(Credentials("alice", "Alice's password"))

For more advanced examples -- see below the "if __name__ == '__main__':".
"""

import collections
from abc import ABCMeta, abstractproperty
from functools import wraps
from sys import version_info

__all__ = ('namedtuple',)
_namedtuple = collections.namedtuple


class _NamedTupleABCMeta(ABCMeta):
'''The metaclass for the abstract base class + mix-in for named tuples.'''
def __new__(mcls, name, bases, namespace):
fields = namespace.get('_fields')
for base in bases:
if fields is not None:
break
fields = getattr(base, '_fields', None)
if not isinstance(fields, abstractproperty):
basetuple = _namedtuple(name, fields)
bases = (basetuple,) + bases
namespace.pop('_fields', None)
namespace.setdefault('__doc__', basetuple.__doc__)
namespace.setdefault('__slots__', ())
return ABCMeta.__new__(mcls, name, bases, namespace)


exec(
# Python 2.x metaclass declaration syntax
"""class _NamedTupleABC(object):
'''The abstract base class + mix-in for named tuples.'''
__metaclass__ = _NamedTupleABCMeta
_fields = abstractproperty()""" if version_info[0] < 3 else
# Python 3.x metaclass declaration syntax
"""class _NamedTupleABC(metaclass=_NamedTupleABCMeta):
'''The abstract base class + mix-in for named tuples.'''
_fields = abstractproperty()"""
)


_namedtuple.abc = _NamedTupleABC
#_NamedTupleABC.register(type(version_info)) # (and similar, in the future...)

@wraps(_namedtuple)
def namedtuple(*args, **kwargs):
'''Named tuple factory with namedtuple.abc subclass registration.'''
cls = _namedtuple(*args, **kwargs)
_NamedTupleABC.register(cls)
return cls

collections.namedtuple = namedtuple




if __name__ == '__main__':

'''Examples and explanations'''

# Simple usage

class MyRecord(namedtuple.abc):
_fields = 'x y z' # such form will be transformed into ('x', 'y', 'z')
def _my_custom_method(self):
return list(self._asdict().items())
# (the '_fields' attribute belongs to the named tuple public API anyway)

rec = MyRecord(1, 2, 3)
print(rec)
print(rec._my_custom_method())
print(rec._replace(y=222))
print(rec._replace(y=222)._my_custom_method())

# Custom abstract classes...

class MyAbstractRecord(namedtuple.abc):
def _my_custom_method(self):
return list(self._asdict().items())

try:
MyAbstractRecord() # (abstract classes cannot be instantiated)
except TypeError as exc:
print(exc)

class AnotherAbstractRecord(MyAbstractRecord):
def __str__(self):
return '<<<{0}>>>'.format(super(AnotherAbstractRecord,
self).__str__())

# ...and their non-abstract subclasses

class MyRecord2(MyAbstractRecord):
_fields = 'a, b'

class MyRecord3(AnotherAbstractRecord):
_fields = 'p', 'q', 'r'

rec2 = MyRecord2('foo', 'bar')
print(rec2)
print(rec2._my_custom_method())
print(rec2._replace(b=222))
print(rec2._replace(b=222)._my_custom_method())

rec3 = MyRecord3('foo', 'bar', 'baz')
print(rec3)
print(rec3._my_custom_method())
print(rec3._replace(q=222))
print(rec3._replace(q=222)._my_custom_method())

# You can also subclass non-abstract ones...

class MyRecord33(MyRecord3):
def __str__(self):
return '< {0!r}, ..., {0!r} >'.format(self.p, self.r)

rec33 = MyRecord33('foo', 'bar', 'baz')
print(rec33)
print(rec33._my_custom_method())
print(rec33._replace(q=222))
print(rec33._replace(q=222)._my_custom_method())

# ...and even override the magic '_fields' attribute again

class MyRecord345(MyRecord3):
_fields = 'e f g h i j k'

rec345 = MyRecord345(1, 2, 3, 4, 3, 2, 1)
print(rec345)
print(rec345._my_custom_method())
print(rec345._replace(f=222))
print(rec345._replace(f=222)._my_custom_method())

# Mixing-in some other classes is also possible:

class MyMixIn(object):
def method(self):
return "MyMixIn.method() called"
def _my_custom_method(self):
return "MyMixIn._my_custom_method() called"
def count(self, item):
return "MyMixIn.count({0}) called".format(item)
def _asdict(self): # (cannot override a namedtuple method, see below)
return "MyMixIn._asdict() called"

class MyRecord4(MyRecord33, MyMixIn): # mix-in on the right
_fields = 'j k l x'

class MyRecord5(MyMixIn, MyRecord33): # mix-in on the left
_fields = 'j k l x y'

rec4 = MyRecord4(1, 2, 3, 2)
print(rec4)
print(rec4.method())
print(rec4._my_custom_method()) # MyRecord33's
print(rec4.count(2)) # tuple's
print(rec4._replace(k=222))
print(rec4._replace(k=222).method())
print(rec4._replace(k=222)._my_custom_method()) # MyRecord33's
print(rec4._replace(k=222).count(8)) # tuple's

rec5 = MyRecord5(1, 2, 3, 2, 1)
print(rec5)
print(rec5.method())
print(rec5._my_custom_method()) # MyMixIn's
print(rec5.count(2)) # MyMixIn's
print(rec5._replace(k=222))
print(rec5._replace(k=222).method())
print(rec5._replace(k=222)._my_custom_method()) # MyMixIn's
print(rec5._replace(k=222).count(2)) # MyMixIn's

# Note the behavior: the standard namedtuple methods cannot be
# overridden by a foreign mix-in -- even if the mix-in is declared
# as the leftmost base class (but, obviously, you can override them
# in the defined class or its subclasses):

print(rec4._asdict()) # (returns a dict, not "MyMixIn._asdict() called")
print(rec5._asdict()) # (returns a dict, not "MyMixIn._asdict() called")

class MyRecord6(MyRecord33):
_fields = 'j k l x y z'
def _asdict(self):
return "MyRecord6._asdict() called"
rec6 = MyRecord6(1, 2, 3, 1, 2, 3)
print(rec6._asdict()) # (this returns "MyRecord6._asdict() called")

# All those record classes are real subclasses of namedtuple.abc:

assert issubclass(MyRecord, namedtuple.abc)
assert issubclass(MyAbstractRecord, namedtuple.abc)
assert issubclass(AnotherAbstractRecord, namedtuple.abc)
assert issubclass(MyRecord2, namedtuple.abc)
assert issubclass(MyRecord3, namedtuple.abc)
assert issubclass(MyRecord33, namedtuple.abc)
assert issubclass(MyRecord345, namedtuple.abc)
assert issubclass(MyRecord4, namedtuple.abc)
assert issubclass(MyRecord5, namedtuple.abc)
assert issubclass(MyRecord6, namedtuple.abc)

# ...but abstract ones are not subclasses of tuple
# (and this is what you probably want):

assert not issubclass(MyAbstractRecord, tuple)
assert not issubclass(AnotherAbstractRecord, tuple)

assert issubclass(MyRecord, tuple)
assert issubclass(MyRecord2, tuple)
assert issubclass(MyRecord3, tuple)
assert issubclass(MyRecord33, tuple)
assert issubclass(MyRecord345, tuple)
assert issubclass(MyRecord4, tuple)
assert issubclass(MyRecord5, tuple)
assert issubclass(MyRecord6, tuple)

# Named tuple classes created with namedtuple() factory function
# (in the "traditional" way) are registered as "virtual" subclasses
# of namedtuple.abc:

MyTuple = namedtuple('MyTuple', 'a b c')
mt = MyTuple(1, 2, 3)
assert issubclass(MyTuple, namedtuple.abc)
assert isinstance(mt, namedtuple.abc)
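Condensing the recipe's own docstring and self-tests into the shortest possible usage sketch (Point is a made-up class name):

    from namedtuple_with_abc import namedtuple

    class Point(namedtuple.abc):
        _fields = 'x y'   # expands to ('x', 'y'), exactly as documented above

    p = Point(1, 2)
    assert p._replace(y=5) == Point(1, 5)
    assert issubclass(Point, namedtuple.abc) and issubclass(Point, tuple)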
@@ -84,9 +84,9 @@ class DataNode(AstNode):

def getFunc(self, key, data):
if 'flag' in self.groupd and self.groupd['flag'] != None:
return data.getVarFlag(key, self.groupd['flag'], noweakdefault=True)
return bb.data.getVarFlag(key, self.groupd['flag'], data)
else:
return data.getVar(key, noweakdefault=True)
return bb.data.getVar(key, data)

def eval(self, data):
groupd = self.groupd
@@ -100,7 +100,7 @@ class DataNode(AstNode):
elif "colon" in groupd and groupd["colon"] != None:
e = data.createCopy()
bb.data.update_data(e)
val = bb.data.expand(groupd["value"], e, key + "[:=]")
val = bb.data.expand(groupd["value"], e)
elif "append" in groupd and groupd["append"] != None:
val = "%s %s" % ((self.getFunc(key, data) or ""), groupd["value"])
elif "prepend" in groupd and groupd["prepend"] != None:
@@ -307,14 +307,6 @@ def handleInherit(statements, filename, lineno, m):
statements.append(InheritNode(filename, lineno, classes.split()))

def finalize(fn, d, variant = None):
all_handlers = {}
for var in bb.data.getVar('__BBHANDLERS', d) or []:
# try to add the handler
handler = bb.data.getVar(var, d)
bb.event.register(var, handler)

bb.event.fire(bb.event.RecipePreFinalise(fn), d)

bb.data.expandKeys(d)
bb.data.update_data(d)
code = []
@@ -323,6 +315,12 @@ def finalize(fn, d, variant = None):
bb.utils.simple_exec("\n".join(code), {"d": d})
bb.data.update_data(d)

all_handlers = {}
for var in bb.data.getVar('__BBHANDLERS', d) or []:
# try to add the handler
handler = bb.data.getVar(var, d)
bb.event.register(var, handler)

tasklist = bb.data.getVar('__BBTASKS', d) or []
bb.build.add_tasks(tasklist, d)

@@ -371,14 +369,12 @@ def multi_finalize(fn, d):
logger.debug(2, "Appending .bbappend file %s to %s", append, fn)
bb.parse.BBHandler.handle(append, d, True)

onlyfinalise = d.getVar("__ONLYFINALISE", False)

safe_d = d
d = bb.data.createCopy(safe_d)
try:
finalize(fn, d)
except bb.parse.SkipPackage as e:
bb.data.setVar("__SKIPPED", e.args[0], d)
except bb.parse.SkipPackage:
bb.data.setVar("__SKIPPED", True, d)
datastores = {"": safe_d}

versions = (d.getVar("BBVERSIONS", True) or "").split()
@@ -420,46 +416,27 @@ def multi_finalize(fn, d):
verfunc(pv, d, safe_d)
try:
finalize(fn, d)
except bb.parse.SkipPackage as e:
bb.data.setVar("__SKIPPED", e.args[0], d)
except bb.parse.SkipPackage:
bb.data.setVar("__SKIPPED", True, d)

_create_variants(datastores, versions, verfunc)

extended = d.getVar("BBCLASSEXTEND", True) or ""
if extended:
# the following is to support bbextends with argument, for e.g. multilib
# an example is as follows:
# BBCLASSEXTEND = "multilib:lib32"
# it will create foo-lib32, inheriting multilib.bbclass and set
# CURRENTEXTEND to "lib32"
extendedmap = {}

for ext in extended.split():
eext = ext.split(':')
if len(eext) > 1:
extendedmap[eext[1]] = eext[0]
else:
extendedmap[ext] = ext

pn = d.getVar("PN", True)
def extendfunc(name, d):
if name != extendedmap[name]:
d.setVar("BBEXTENDCURR", extendedmap[name])
d.setVar("BBEXTENDVARIANT", name)
else:
d.setVar("PN", "%s-%s" % (pn, name))
bb.parse.BBHandler.inherit([extendedmap[name]], d)
d.setVar("PN", "%s-%s" % (pn, name))
bb.parse.BBHandler.inherit([name], d)

safe_d.setVar("BBCLASSEXTEND", extended)
_create_variants(datastores, extendedmap.keys(), extendfunc)
_create_variants(datastores, extended.split(), extendfunc)

for variant, variant_d in datastores.iteritems():
if variant:
try:
if not onlyfinalise or variant in onlyfinalise:
finalize(fn, variant_d, variant)
except bb.parse.SkipPackage as e:
bb.data.setVar("__SKIPPED", e.args[0], variant_d)
finalize(fn, variant_d, variant)
except bb.parse.SkipPackage:
bb.data.setVar("__SKIPPED", True, variant_d)

if len(datastores) > 1:
variants = filter(None, datastores.iterkeys())
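To make the BBCLASSEXTEND comment in this hunk concrete, here is the mapping the loop above builds for a hypothetical recipe setting (the value "native multilib:lib32" is illustrative):

    extended = "native multilib:lib32"  # hypothetical BBCLASSEXTEND value
    extendedmap = {}
    for ext in extended.split():
        eext = ext.split(':')
        if len(eext) > 1:
            extendedmap[eext[1]] = eext[0]  # variant name -> class to inherit
        else:
            extendedmap[ext] = ext
    print(extendedmap)  # {'native': 'native', 'lib32': 'multilib'}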

@@ -26,8 +26,7 @@ import logging
import os.path
import sys
import warnings
from bb.compat import total_ordering
from collections import Mapping
import bb.msg, bb.data, bb.utils

try:
import sqlite3
@@ -40,11 +39,8 @@ if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):


logger = logging.getLogger("BitBake.PersistData")
if hasattr(sqlite3, 'enable_shared_cache'):
sqlite3.enable_shared_cache(True)


@total_ordering
class SQLTable(collections.MutableMapping):
"""Object representing a table/domain in the database"""
def __init__(self, cursor, table):
@@ -66,31 +62,16 @@ class SQLTable(collections.MutableMapping):
continue
raise

def __enter__(self):
self.cursor.__enter__()
return self

def __exit__(self, *excinfo):
self.cursor.__exit__(*excinfo)

def __getitem__(self, key):
data = self._execute("SELECT * from %s where key=?;" %
self.table, [key])
for row in data:
return row[1]
raise KeyError(key)

def __delitem__(self, key):
if key not in self:
raise KeyError(key)
self._execute("DELETE from %s where key=?;" % self.table, [key])

def __setitem__(self, key, value):
if not isinstance(key, basestring):
raise TypeError('Only string keys are supported')
elif not isinstance(value, basestring):
raise TypeError('Only string values are supported')

data = self._execute("SELECT * from %s where key=?;" %
self.table, [key])
exists = len(list(data))
@@ -111,40 +92,53 @@ class SQLTable(collections.MutableMapping):

def __iter__(self):
data = self._execute("SELECT key FROM %s;" % self.table)
return (row[0] for row in data)
for row in data:
yield row[0]

def __lt__(self, other):
if not isinstance(other, Mapping):
raise NotImplemented

return len(self) < len(other)

def values(self):
return list(self.itervalues())
def iteritems(self):
data = self._execute("SELECT * FROM %s;" % self.table)
for row in data:
yield row[0], row[1]

def itervalues(self):
data = self._execute("SELECT value FROM %s;" % self.table)
return (row[0] for row in data)
for row in data:
yield row[0]

def items(self):
return list(self.iteritems())

def iteritems(self):
return self._execute("SELECT * FROM %s;" % self.table)
class SQLData(object):
"""Object representing the persistent data"""
def __init__(self, filename):
bb.utils.mkdirhier(os.path.dirname(filename))

def clear(self):
self._execute("DELETE FROM %s;" % self.table)
self.filename = filename
self.connection = sqlite3.connect(filename, timeout=5,
isolation_level=None)
self.cursor = self.connection.cursor()
self._tables = {}

def has_key(self, key):
return key in self
def __getitem__(self, table):
if not isinstance(table, basestring):
raise TypeError("table argument must be a string, not '%s'" %
type(table))

if table in self._tables:
return self._tables[table]
else:
tableobj = self._tables[table] = SQLTable(self.cursor, table)
return tableobj

def __delitem__(self, table):
if table in self._tables:
del self._tables[table]
self.cursor.execute("DROP TABLE IF EXISTS %s;" % table)


class PersistData(object):
"""Deprecated representation of the bitbake persistent data store"""
def __init__(self, d):
warnings.warn("Use of PersistData is deprecated. Please use "
"persist(domain, d) instead.",
category=DeprecationWarning,
warnings.warn("Use of PersistData will be deprecated in the future",
category=PendingDeprecationWarning,
stacklevel=2)

self.data = persist(d)
@@ -187,19 +181,14 @@ class PersistData(object):
"""
del self.data[domain][key]

def connect(database):
return sqlite3.connect(database, timeout=30, isolation_level=None)

def persist(domain, d):
"""Convenience factory for SQLTable objects based upon metadata"""
import bb.data, bb.utils
def persist(d):
"""Convenience factory for construction of SQLData based upon metadata"""
cachedir = (bb.data.getVar("PERSISTENT_DIR", d, True) or
bb.data.getVar("CACHE", d, True))
if not cachedir:
logger.critical("Please set the 'PERSISTENT_DIR' or 'CACHE' variable")
sys.exit(1)

bb.utils.mkdirhier(cachedir)
cachefile = os.path.join(cachedir, "bb_persist_data.sqlite3")
connection = connect(cachefile)
return SQLTable(connection, domain)
return SQLData(cachefile)
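A hedged usage sketch contrasting the two persist() signatures shown in this hunk; the domain name is illustrative, and d stands for a configured datastore:

    # One style in the hunk: one SQLTable per domain.
    headrevs = persist("MY_DOMAIN", d)    # hypothetical domain name
    headrevs["some-key"] = "some-value"   # keys and values must be strings

    # The other style: one SQLData object, indexed by domain.
    pd = persist(d)
    pd["MY_DOMAIN"]["some-key"] = "some-value"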

@@ -93,7 +93,7 @@ def run(cmd, input=None, log=None, **options):

try:
pipe = Popen(cmd, **options)
except OSError as exc:
except OSError, exc:
if exc.errno == 2:
raise NotFoundError(cmd)
else:

@@ -84,10 +84,10 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
preferred_ver = None

localdata = data.createCopy(cfgData)
bb.data.setVar('OVERRIDES', "%s:pn-%s:%s" % (data.getVar('OVERRIDES', localdata), pn, pn), localdata)
bb.data.setVar('OVERRIDES', "pn-%s:%s:%s" % (pn, pn, data.getVar('OVERRIDES', localdata)), localdata)
bb.data.update_data(localdata)

preferred_v = bb.data.getVar('PREFERRED_VERSION', localdata, True)
preferred_v = bb.data.getVar('PREFERRED_VERSION_%s' % pn, localdata, True)
if preferred_v:
m = re.match('(\d+:)*(.*)(_.*)*', preferred_v)
if m:

@@ -151,7 +151,7 @@ def builtin_trap(name, args, interp, env, stdin, stdout, stderr, debugflags):
for sig in args[1:]:
try:
env.traps[sig] = action
except Exception as e:
except Exception, e:
stderr.write('trap: %s\n' % str(e))
return 0

@@ -214,7 +214,7 @@ def utility_cat(name, args, interp, env, stdin, stdout, stderr, debugflags):
data = f.read()
finally:
f.close()
except IOError as e:
except IOError, e:
if e.errno != errno.ENOENT:
raise
status = 1
@@ -433,7 +433,7 @@ def utility_mkdir(name, args, interp, env, stdin, stdout, stderr, debugflags):
if option.has_p:
try:
os.makedirs(path)
except IOError as e:
except IOError, e:
if e.errno != errno.EEXIST:
raise
else:
@@ -561,7 +561,7 @@ def utility_sort(name, args, interp, env, stdin, stdout, stderr, debugflags):
lines = f.readlines()
finally:
f.close()
except IOError as e:
except IOError, e:
stderr.write(str(e) + '\n')
return 1

@@ -679,7 +679,7 @@ def run_command(name, args, interp, env, stdin, stdout,
p = subprocess.Popen([name] + args, cwd=env['PWD'], env=exec_env,
stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = p.communicate()
except WindowsError as e:
except WindowsError, e:
raise UtilityError(str(e))

if not unixoutput:

@@ -248,7 +248,7 @@ class Redirections:
raise NotImplementedError('cannot open absolute path %s' % repr(filename))
else:
f = file(filename, mode+'b')
except IOError as e:
except IOError, e:
raise RedirectionError(str(e))

wrapper = None
@@ -368,7 +368,7 @@ def resolve_shebang(path, ignoreshell=False):
if arg is None:
return [cmd, win32_to_unix_path(path)]
return [cmd, arg, win32_to_unix_path(path)]
except IOError as e:
except IOError, e:
if e.errno!=errno.ENOENT and \
(e.errno!=errno.EPERM and not os.path.isdir(path)): # Opening a directory raises EPERM
raise
@@ -747,7 +747,7 @@ class Interpreter:
for cmd in cmds:
try:
status = self.execute(cmd)
except ExitSignal as e:
except ExitSignal, e:
if sourced:
raise
status = int(e.args[0])
@@ -758,13 +758,13 @@ class Interpreter:
if 'debug-utility' in self._debugflags or 'debug-cmd' in self._debugflags:
self.log('returncode ' + str(status)+ '\n')
return status
except CommandNotFound as e:
except CommandNotFound, e:
print >>self._redirs.stderr, str(e)
self._redirs.stderr.flush()
# Command not found by non-interactive shell
# return 127
raise
except RedirectionError as e:
except RedirectionError, e:
# TODO: should be handled depending on the utility status
print >>self._redirs.stderr, str(e)
self._redirs.stderr.flush()
@@ -948,7 +948,7 @@ class Interpreter:
status = self.execute(func, redirs)
finally:
redirs.close()
except ReturnSignal as e:
except ReturnSignal, e:
status = int(e.args[0])
env['?'] = status
return status
@@ -1044,7 +1044,7 @@ class Interpreter:

except ReturnSignal:
raise
except ShellError as e:
except ShellError, e:
if is_special or isinstance(e, (ExitSignal,
ShellSyntaxError, ExpansionError)):
raise e

@@ -105,11 +105,6 @@ class RunQueueScheduler(object):
if self.rq.runq_running[taskid] == 1:
continue
if self.rq.runq_buildable[taskid] == 1:
fn = self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[taskid]]
taskname = self.rqdata.runq_task[taskid]
stamp = bb.build.stampfile(taskname, self.rqdata.dataCache, fn)
if stamp in self.rq.build_stamps.values():
continue
return taskid

def next(self):
@@ -209,9 +204,9 @@ class RunQueueData:
ret.extend([nam])
return ret

def get_user_idstring(self, task, task_name_suffix = ""):
def get_user_idstring(self, task):
fn = self.taskData.fn_index[self.runq_fnid[task]]
taskname = self.runq_task[task] + task_name_suffix
taskname = self.runq_task[task]
return "%s, %s" % (fn, taskname)

def get_task_id(self, fnid, taskname):
@@ -758,6 +753,7 @@ class RunQueueData:
self.rqdata.runq_depends[task],
self.rqdata.runq_revdeps[task])


class RunQueue:
def __init__(self, cooker, cfgData, dataCache, taskData, targets):

@@ -933,7 +929,7 @@ class RunQueue:

if self.state is runQueuePrepare:
self.rqexe = RunQueueExecuteDummy(self)
if self.rqdata.prepare() == 0:
if self.rqdata.prepare() is 0:
self.state = runQueueComplete
else:
self.state = runQueueSceneInit
@@ -1014,7 +1010,6 @@ class RunQueueExecute:
self.runq_complete = []
self.build_pids = {}
self.build_pipes = {}
self.build_stamps = {}
self.failed_fnids = []

def runqueue_process_waitpid(self):
@@ -1023,15 +1018,12 @@ class RunQueueExecute:
collect the process exit codes and close the information pipe.
"""
result = os.waitpid(-1, os.WNOHANG)
if result[0] == 0 and result[1] == 0:
if result[0] is 0 and result[1] is 0:
return None
task = self.build_pids[result[0]]
del self.build_pids[result[0]]
self.build_pipes[result[0]].close()
del self.build_pipes[result[0]]
# self.build_stamps[result[0]] may not exist when use shared work directory.
if result[0] in self.build_stamps.keys():
del self.build_stamps[result[0]]
if result[1] != 0:
self.task_fail(task, result[1]>>8)
else:
@@ -1068,32 +1060,27 @@ class RunQueueExecute:
return

def fork_off_task(self, fn, task, taskname, quieterrors=False):
# We need to setup the environment BEFORE the fork, since
# a fork() or exec*() activates PSEUDO...
the_data = bb.cache.Cache.loadDataFull(fn, self.cooker.get_file_appends(fn), self.cooker.configuration.data)

envbackup = {}
umask = None
env = bb.data.export_vars(the_data)
env = bb.data.export_envvars(env, the_data)

taskdep = self.rqdata.dataCache.task_deps[fn]
if 'umask' in taskdep and taskname in taskdep['umask']:
# umask might come in as a number or text string..
try:
umask = int(taskdep['umask'][taskname],8)
except TypeError:
umask = taskdep['umask'][taskname]

if 'fakeroot' in taskdep and taskname in taskdep['fakeroot']:
envvars = (self.rqdata.dataCache.fakerootenv[fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
os.environ[key] = value

fakedirs = (self.rqdata.dataCache.fakerootdirs[fn] or "").split()
envvars = the_data.getVar("FAKEROOTENV", True).split()
for var in envvars:
comps = var.split("=")
env[comps[0]] = comps[1]
fakedirs = (the_data.getVar("FAKEROOTDIRS", True) or "").split()
for p in fakedirs:
bb.utils.mkdirhier(p)
bb.mkdirhier(p)
logger.debug(2, "Running %s:%s under fakeroot, state dir is %s" % (fn, taskname, fakedirs))

logger.debug(2, 'Running %s:%s under fakeroot, fakedirs: %s' %
(fn, taskname, ', '.join(fakedirs)))
envbackup = os.environ.copy()
for e in envbackup:
os.unsetenv(e)
for e in env:
os.putenv(e, env[e])

sys.stdout.flush()
sys.stderr.flush()
@@ -1104,7 +1091,6 @@ class RunQueueExecute:
pid = os.fork()
except OSError as e:
bb.msg.fatal(bb.msg.domain.RunQueue, "fork failed: %d (%s)" % (e.errno, e.strerror))

if pid == 0:
pipein.close()

@@ -1112,6 +1098,12 @@ class RunQueueExecute:
# events
bb.event.worker_pid = os.getpid()
bb.event.worker_pipe = pipeout
bb.event.useStdout = False

# Child processes should send their messages to the UI
# process via the server process, not print them
# themselves
bblogger.handlers = [bb.event.LogHandler()]

self.rq.state = runQueueChildProcess
# Make the child the process group leader
@@ -1119,44 +1111,33 @@ class RunQueueExecute:
# No stdin
newsi = os.open(os.devnull, os.O_RDWR)
os.dup2(newsi, sys.stdin.fileno())
if quieterrors:
the_data.setVarFlag(taskname, "quieterrors", "1")

if umask:
os.umask(umask)

bb.data.setVar("BB_WORKERCONTEXT", "1", self.cooker.configuration.data)
bb.data.setVar("__RUNQUEUE_DO_NOT_USE_EXTERNALLY", self, self.cooker.configuration.data)
bb.data.setVar("__RUNQUEUE_DO_NOT_USE_EXTERNALLY2", fn, self.cooker.configuration.data)
bb.data.setVar("BB_WORKERCONTEXT", "1", the_data)
bb.parse.siggen.set_taskdata(self.rqdata.hashes, self.rqdata.hash_deps)

for h in self.rqdata.hashes:
bb.data.setVar("BBHASH_%s" % h, self.rqdata.hashes[h], the_data)
for h in self.rqdata.hash_deps:
bb.data.setVar("BBHASHDEPS_%s" % h, self.rqdata.hash_deps[h], the_data)

bb.data.setVar("BB_TASKHASH", self.rqdata.runq_hash[task], the_data)

ret = 0
try:
the_data = bb.cache.Cache.loadDataFull(fn, self.cooker.get_file_appends(fn), self.cooker.configuration.data)
the_data.setVar('BB_TASKHASH', self.rqdata.runq_hash[task])
for h in self.rqdata.hashes:
the_data.setVar("BBHASH_%s" % h, self.rqdata.hashes[h])
for h in self.rqdata.hash_deps:
the_data.setVar("BBHASHDEPS_%s" % h, self.rqdata.hash_deps[h])

os.environ.update(bb.data.exported_vars(the_data))

if quieterrors:
the_data.setVarFlag(taskname, "quieterrors", "1")

except Exception as exc:
if not quieterrors:
logger.critical(str(exc))
os._exit(1)
try:
if not self.cooker.configuration.dry_run:
ret = bb.build.exec_task(fn, taskname, the_data)
os._exit(ret)
except:
os._exit(1)
else:
for key, value in envbackup.iteritems():
if value is None:
del os.environ[key]
else:
os.environ[key] = value

for e in env:
os.unsetenv(e)
for e in envbackup:
os.putenv(e, envbackup[e])

return pid, pipein, pipeout

@@ -1196,25 +1177,6 @@ class RunQueueExecuteTasks(RunQueueExecute):
self.rq.scenequeue_covered.add(task)
found = True

# Detect when the real task needs to be run anyway by looking to see
# if any of its dependencies within the same package are scheduled
# to be run.
covered_remove = set()
for task in self.rq.scenequeue_covered:
task_fnid = self.rqdata.runq_fnid[task]
for dep in self.rqdata.runq_depends[task]:
if self.rqdata.runq_fnid[dep] == task_fnid:
if dep not in self.rq.scenequeue_covered:
covered_remove.add(task)
break

for task in covered_remove:
fn = self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[task]]
taskname = self.rqdata.runq_task[task] + '_setscene'
bb.build.del_stamp(taskname, self.rqdata.dataCache, fn)
logger.debug(1, 'Not skipping task %s because it will have to be run anyway', task)
self.rq.scenequeue_covered.remove(task)

logger.debug(1, 'Full skip list %s', self.rq.scenequeue_covered)

for task in self.rq.scenequeue_covered:
@@ -1248,7 +1210,7 @@ class RunQueueExecuteTasks(RunQueueExecute):
modname, name = sched.rsplit(".", 1)
try:
module = __import__(modname, fromlist=(name,))
except ImportError as exc:
except ImportError, exc:
logger.critical("Unable to import scheduler '%s' from '%s': %s" % (name, modname, exc))
raise SystemExit(1)
else:
@@ -1339,7 +1301,6 @@ class RunQueueExecuteTasks(RunQueueExecute):

self.build_pids[pid] = task
self.build_pipes[pid] = runQueuePipe(pipein, pipeout, self.cfgData)
self.build_stamps[pid] = bb.build.stampfile(taskname, self.rqdata.dataCache, fn)
self.runq_running[task] = 1
self.stats.taskActive()
if self.stats.active < self.number_tasks:
@@ -1462,25 +1423,16 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
sq_taskname = []
sq_task = []
noexec = []
stamppresent = []
for task in xrange(len(self.sq_revdeps)):
realtask = self.rqdata.runq_setscene[task]
fn = self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[realtask]]
taskname = self.rqdata.runq_task[realtask]
taskdep = self.rqdata.dataCache.task_deps[fn]

if 'noexec' in taskdep and taskname in taskdep['noexec']:
noexec.append(task)
self.task_skip(task)
bb.build.make_stamp(taskname + "_setscene", self.rqdata.dataCache, fn)
continue

if self.rq.check_stamp_task(realtask, taskname + "_setscene"):
logger.debug(2, 'Setscene stamp current for task %s(%s)', task, self.rqdata.get_user_idstring(realtask))
stamppresent.append(task)
self.task_skip(task)
continue

sq_fn.append(fn)
sq_hashfn.append(self.rqdata.dataCache.hashfn[fn])
sq_hash.append(self.rqdata.runq_hash[realtask])
@@ -1490,7 +1442,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
locs = { "sq_fn" : sq_fn, "sq_task" : sq_taskname, "sq_hash" : sq_hash, "sq_hashfn" : sq_hashfn, "d" : self.cooker.configuration.data }
valid = bb.utils.better_eval(call, locs)

valid_new = stamppresent
valid_new = []
for v in valid:
valid_new.append(sq_task[v])

@@ -1531,7 +1483,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
def task_fail(self, task, result):
self.stats.taskFailed()
index = self.rqdata.runq_setscene[task]
bb.event.fire(sceneQueueTaskFailed(index, self.stats, result, self), self.cfgData)
bb.event.fire(runQueueTaskFailed(task, self.stats, result, self), self.cfgData)
self.scenequeue_notcovered.add(task)
self.scenequeue_updatecounters(task)

@@ -1664,14 +1616,6 @@ class runQueueTaskFailed(runQueueEvent):
runQueueEvent.__init__(self, task, stats, rq)
self.exitcode = exitcode

class sceneQueueTaskFailed(runQueueTaskFailed):
"""
Event notifying that a setscene task failed
"""
def __init__(self, task, stats, exitcode, rq):
runQueueTaskFailed.__init__(self, task, stats, exitcode, rq)
self.taskstring = rq.rqdata.get_user_idstring(task, "_setscene")

class runQueueTaskCompleted(runQueueEvent):
"""
Event notifying that a task completed

|
||||
|
||||
import time
|
||||
import bb
|
||||
import pickle
|
||||
import signal
|
||||
|
||||
DEBUG = False
|
||||
@@ -35,7 +36,8 @@ DEBUG = False
|
||||
import inspect, select
|
||||
|
||||
class BitBakeServerCommands():
|
||||
def __init__(self, server):
|
||||
def __init__(self, server, cooker):
|
||||
self.cooker = cooker
|
||||
self.server = server
|
||||
|
||||
def runCommand(self, command):
|
||||
@@ -67,7 +69,7 @@ class BBUIEventQueue:
|
||||
self.parent = parent
|
||||
@staticmethod
|
||||
def send(event):
|
||||
bb.server.none.eventQueue.append(event)
|
||||
bb.server.none.eventQueue.append(pickle.loads(event))
|
||||
@staticmethod
|
||||
def quit():
|
||||
return
|
||||
@@ -104,17 +106,13 @@ class BBUIEventQueue:
|
||||
def chldhandler(signum, stackframe):
|
||||
pass
|
||||
|
||||
class BitBakeNoneServer():
|
||||
class BitBakeServer():
|
||||
# remove this when you're done with debugging
|
||||
# allow_reuse_address = True
|
||||
|
||||
def __init__(self):
|
||||
def __init__(self, cooker):
|
||||
self._idlefuns = {}
|
||||
self.commands = BitBakeServerCommands(self)
|
||||
|
||||
def addcooker(self, cooker):
|
||||
self.cooker = cooker
|
||||
self.commands.cooker = cooker
|
||||
self.commands = BitBakeServerCommands(self, cooker)
|
||||
|
||||
def register_idle_function(self, function, data):
|
||||
"""Register a function to be called while the server is idle"""
|
||||
@@ -159,10 +157,25 @@ class BitBakeNoneServer():
|
||||
except:
|
||||
pass
|
||||
|
||||
class BitBakeServerConnection():
|
||||
class BitbakeServerInfo():
|
||||
def __init__(self, server):
|
||||
self.server = server.server
|
||||
self.connection = self.server.commands
|
||||
self.server = server
|
||||
self.commands = server.commands
|
||||
|
||||
class BitBakeServerFork():
|
||||
def __init__(self, cooker, server, serverinfo, logfile):
|
||||
serverinfo.logfile = logfile
|
||||
serverinfo.cooker = cooker
|
||||
serverinfo.server = server
|
||||
|
||||
class BitbakeUILauch():
|
||||
def launch(self, serverinfo, uifunc, *args):
|
||||
return bb.cooker.server_main(serverinfo.cooker, uifunc, *args)
|
||||
|
||||
class BitBakeServerConnection():
|
||||
def __init__(self, serverinfo):
|
||||
self.server = serverinfo.server
|
||||
self.connection = serverinfo.commands
|
||||
self.events = bb.server.none.BBUIEventQueue(self.server)
|
||||
for event in bb.event.ui_queue:
|
||||
self.events.queue_event(event)
|
||||
@@ -176,28 +189,3 @@ class BitBakeServerConnection():
|
||||
self.connection.terminateServer()
|
||||
except:
|
||||
pass
|
||||
|
||||
class BitBakeServer(object):
|
||||
def initServer(self):
|
||||
self.server = BitBakeNoneServer()
|
||||
|
||||
def addcooker(self, cooker):
|
||||
self.cooker = cooker
|
||||
self.server.addcooker(cooker)
|
||||
|
||||
def getServerIdleCB(self):
|
||||
return self.server.register_idle_function
|
||||
|
||||
def saveConnectionDetails(self):
|
||||
return
|
||||
|
||||
def detach(self, cooker_logfile):
|
||||
self.logfile = cooker_logfile
|
||||
|
||||
def establishConnection(self):
|
||||
self.connection = BitBakeServerConnection(self)
|
||||
return self.connection
|
||||
|
||||
def launchUI(self, uifunc, *args):
|
||||
return bb.cooker.server_main(self.cooker, uifunc, *args)
|
||||
|
||||
|
||||
@@ -1,270 +0,0 @@
|
||||
#
|
||||
# BitBake Process based server.
|
||||
#
|
||||
# Copyright (C) 2010 Bob Foerster <robert@erafx.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License version 2 as
|
||||
# published by the Free Software Foundation.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License along
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
|
||||
"""
|
||||
This module implements a multiprocessing.Process based server for bitbake.
|
||||
"""
|
||||
|
||||
import bb
|
||||
import bb.event
|
||||
import itertools
|
||||
import logging
|
||||
import multiprocessing
|
||||
import os
|
||||
import signal
|
||||
import sys
|
||||
import time
|
||||
from Queue import Empty
|
||||
from multiprocessing import Event, Process, util, Queue, Pipe, queues
|
||||
|
||||
logger = logging.getLogger('BitBake')
|
||||
|
||||
class ServerCommunicator():
|
||||
def __init__(self, connection):
|
||||
self.connection = connection
|
||||
|
||||
def runCommand(self, command):
|
||||
# @todo try/except
|
||||
self.connection.send(command)
|
||||
|
||||
while True:
|
||||
# don't let the user ctrl-c while we're waiting for a response
|
||||
try:
|
||||
if self.connection.poll(.5):
|
||||
return self.connection.recv()
|
||||
else:
|
||||
return None
|
||||
except KeyboardInterrupt:
|
||||
pass
|
||||
|
||||
|
||||
class EventAdapter():
|
||||
"""
|
||||
Adapter to wrap our event queue since the caller (bb.event) expects to
|
||||
call a send() method, but our actual queue only has put()
|
||||
"""
|
||||
def __init__(self, queue):
|
||||
self.queue = queue
|
||||
|
||||
def send(self, event):
|
||||
try:
|
||||
self.queue.put(event)
|
||||
except Exception as err:
|
||||
print("EventAdapter puked: %s" % str(err))


class ProcessServer(Process):
profile_filename = "profile.log"
profile_processed_filename = "profile.log.processed"

def __init__(self, command_channel, event_queue):
Process.__init__(self)
self.command_channel = command_channel
self.event_queue = event_queue
self.event = EventAdapter(event_queue)
self._idlefunctions = {}
self.quit = False

self.keep_running = Event()
self.keep_running.set()

def register_idle_function(self, function, data):
"""Register a function to be called while the server is idle"""
assert hasattr(function, '__call__')
self._idlefunctions[function] = data

def run(self):
for event in bb.event.ui_queue:
self.event_queue.put(event)
self.event_handle = bb.event.register_UIHhandler(self)
bb.cooker.server_main(self.cooker, self.main)

def main(self):
# Ignore SIGINT within the server, as all SIGINT handling is done by
# the UI and communicated to us
signal.signal(signal.SIGINT, signal.SIG_IGN)
while self.keep_running.is_set():
try:
if self.command_channel.poll():
command = self.command_channel.recv()
self.runCommand(command)

self.idle_commands(.1)
except Exception:
logger.exception('Running command %s', command)

self.event_queue.cancel_join_thread()
bb.event.unregister_UIHhandler(self.event_handle)
self.command_channel.close()
self.cooker.stop()
self.idle_commands(.1)

def idle_commands(self, delay):
nextsleep = delay

for function, data in self._idlefunctions.items():
try:
retval = function(self, data, False)
if retval is False:
del self._idlefunctions[function]
elif retval is True:
nextsleep = None
elif nextsleep is None:
continue
elif retval < nextsleep:
nextsleep = retval
except SystemExit:
raise
except Exception:
logger.exception('Running idle function')

if nextsleep is not None:
time.sleep(nextsleep)

def runCommand(self, command):
"""
Run a cooker command on the server
"""
self.command_channel.send(self.cooker.command.runCommand(command))

def stop(self):
self.keep_running.clear()

def bootstrap_2_6_6(self):
"""Pulled from python 2.6.6. Needed to ensure we have the fix from
http://bugs.python.org/issue5313 when running on python version 2.6.2
or lower."""

try:
self._children = set()
self._counter = itertools.count(1)
try:
sys.stdin.close()
sys.stdin = open(os.devnull)
except (OSError, ValueError):
pass
multiprocessing._current_process = self
util._finalizer_registry.clear()
util._run_after_forkers()
util.info('child process calling self.run()')
try:
self.run()
exitcode = 0
finally:
util._exit_function()
except SystemExit as e:
if not e.args:
exitcode = 1
elif type(e.args[0]) is int:
exitcode = e.args[0]
else:
sys.stderr.write(e.args[0] + '\n')
sys.stderr.flush()
exitcode = 1
except:
exitcode = 1
import traceback
sys.stderr.write('Process %s:\n' % self.name)
sys.stderr.flush()
traceback.print_exc()

util.info('process exiting with exitcode %d' % exitcode)
return exitcode

# Python versions 2.6.0 through 2.6.2 suffer from a multiprocessing bug
# which can result in a bitbake server hang during the parsing process
if (2, 6, 0) <= sys.version_info < (2, 6, 3):
_bootstrap = bootstrap_2_6_6

class BitBakeServerConnection():
def __init__(self, server):
self.server = server
self.procserver = server.server
self.connection = ServerCommunicator(server.ui_channel)
self.events = server.event_queue

def terminate(self, force = False):
signal.signal(signal.SIGINT, signal.SIG_IGN)
self.procserver.stop()
if force:
self.procserver.join(0.5)
if self.procserver.is_alive():
self.procserver.terminate()
self.procserver.join()
else:
self.procserver.join()
while True:
try:
event = self.server.event_queue.get(block=False)
except (Empty, IOError):
break
if isinstance(event, logging.LogRecord):
logger.handle(event)
self.server.ui_channel.close()
self.server.event_queue.close()
if force:
sys.exit(1)

# Wrap Queue to provide API which isn't server implementation specific
class ProcessEventQueue(multiprocessing.queues.Queue):
def waitEvent(self, timeout):
try:
return self.get(True, timeout)
except Empty:
return None

def getEvent(self):
try:
return self.get(False)
except Empty:
return None
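A hedged sketch of how a frontend would drain this queue (constructing the subclass with a bare maxsize works on the Python 2 multiprocessing this file targets):

    event_queue = ProcessEventQueue(0)
    event_queue.put("hello")
    print(event_queue.waitEvent(0.5))  # -> "hello" (blocks up to 0.5s)
    print(event_queue.getEvent())      # -> None; Empty is swallowed, not raised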
|
||||
|
||||
|
||||
class BitBakeServer(object):
|
||||
def initServer(self):
|
||||
# establish communication channels. We use bidirectional pipes for
|
||||
# ui <--> server command/response pairs
|
||||
# and a queue for server -> ui event notifications
|
||||
#
|
||||
self.ui_channel, self.server_channel = Pipe()
|
||||
self.event_queue = ProcessEventQueue(0)
|
||||
|
||||
self.server = ProcessServer(self.server_channel, self.event_queue)
|
||||
|
||||
def addcooker(self, cooker):
|
||||
self.cooker = cooker
|
||||
self.server.cooker = cooker
|
||||
|
||||
def getServerIdleCB(self):
|
||||
return self.server.register_idle_function
|
||||
|
||||
def saveConnectionDetails(self):
|
||||
return
|
||||
|
||||
def detach(self, cooker_logfile):
|
||||
self.server.start()
|
||||
return
|
||||
|
||||
def establishConnection(self):
|
||||
self.connection = BitBakeServerConnection(self)
|
||||
signal.signal(signal.SIGTERM, lambda i, s: self.connection.terminate(force=True))
|
||||
return self.connection
|
||||
|
||||
def launchUI(self, uifunc, *args):
|
||||
return bb.cooker.server_main(self.cooker, uifunc, *args)
|
||||
|
||||
@@ -52,8 +52,6 @@ if sys.hexversion < 0x020600F0:
|
||||
# implementations from Python 2.6.6's xmlrpclib.
|
||||
#
|
||||
# Upstream Python bug is #8194 (http://bugs.python.org/issue8194)
|
||||
# This bug is relevant for Python 2.7.0 and 2.7.1 but was fixed for
|
||||
# Python > 2.7.2
|
||||
##
|
||||
|
||||
class BBTransport(xmlrpclib.Transport):
|
||||
@@ -109,28 +107,17 @@ class BBTransport(xmlrpclib.Transport):
|
||||
|
||||
return u.close()
|
||||
|
||||
def _create_server(host, port):
|
||||
# Python 2.7.0 and 2.7.1 have a buggy Transport implementation
|
||||
# For those versions of Python, and only those versions, use our
|
||||
# own copy/paste BBTransport class.
|
||||
if (2, 7, 0) <= sys.version_info < (2, 7, 2):
|
||||
t = BBTransport()
|
||||
s = xmlrpclib.Server("http://%s:%d/" % (host, port), transport=t, allow_none=True)
|
||||
else:
|
||||
s = xmlrpclib.Server("http://%s:%d/" % (host, port), allow_none=True)
|
||||
|
||||
return s

class BitBakeServerCommands():
    def __init__(self, server):
    def __init__(self, server, cooker):
        self.cooker = cooker
        self.server = server

    def registerEventHandler(self, host, port):
        """
        Register a remote UI Event Handler
        """
        s = _create_server(host, port)

        t = BBTransport()
        s = xmlrpclib.Server("http://%s:%d/" % (host, port), transport=t, allow_none=True)
        return bb.event.register_UIHhandler(s)

    def unregisterEventHandler(self, handlerNum):
@@ -150,7 +137,7 @@ class BitBakeServerCommands():
        Trigger the server to quit
        """
        self.server.quit = True
        print("Server (cooker) exiting")
        print("Server (cooker) exitting")
        return

    def ping(self):
@@ -159,11 +146,11 @@ class BitBakeServerCommands():
        """
        return True

class BitBakeXMLRPCServer(SimpleXMLRPCServer):
class BitBakeServer(SimpleXMLRPCServer):
    # remove this when you're done with debugging
    # allow_reuse_address = True

    def __init__(self, interface = ("localhost", 0)):
    def __init__(self, cooker, interface = ("localhost", 0)):
        """
        Constructor
        """
@@ -173,12 +160,9 @@ class BitBakeXMLRPCServer(SimpleXMLRPCServer):
        self._idlefuns = {}
        self.host, self.port = self.socket.getsockname()
        #self.register_introspection_functions()
        self.commands = BitBakeServerCommands(self)
        self.autoregister_all_functions(self.commands, "")

    def addcooker(self, cooker):
        commands = BitBakeServerCommands(self, cooker)
        self.autoregister_all_functions(commands, "")
        self.cooker = cooker
        self.commands.cooker = cooker

    def autoregister_all_functions(self, context, prefix):
        """
@@ -246,9 +230,18 @@ class BitbakeServerInfo():
        self.host = server.host
        self.port = server.port

class BitBakeServerFork():
    def __init__(self, cooker, server, serverinfo, logfile):
        daemonize.createDaemon(server.serve_forever, logfile)

class BitbakeUILauch():
    def launch(self, serverinfo, uifunc, *args):
        return uifunc(*args)

class BitBakeServerConnection():
    def __init__(self, serverinfo):
        self.connection = _create_server(serverinfo.host, serverinfo.port)
        t = BBTransport()
        self.connection = xmlrpclib.Server("http://%s:%s" % (serverinfo.host, serverinfo.port), transport=t, allow_none=True)
        self.events = uievent.BBUIEventQueue(self.connection)
        for event in bb.event.ui_queue:
            self.events.queue_event(event)
@@ -265,31 +258,3 @@ class BitBakeServerConnection():
            self.connection.terminateServer()
        except:
            pass

class BitBakeServer(object):
    def initServer(self):
        self.server = BitBakeXMLRPCServer()

    def addcooker(self, cooker):
        self.cooker = cooker
        self.server.addcooker(cooker)

    def getServerIdleCB(self):
        return self.server.register_idle_function

    def saveConnectionDetails(self):
        self.serverinfo = BitbakeServerInfo(self.server)

    def detach(self, cooker_logfile):
        daemonize.createDaemon(self.server.serve_forever, cooker_logfile)
        del self.cooker
        del self.server

    def establishConnection(self):
        self.connection = BitBakeServerConnection(self.serverinfo)
        return self.connection

    def launchUI(self, uifunc, *args):
        return uifunc(*args)
@@ -407,7 +407,7 @@ SRC_URI = ""

    def parse( self, params ):
        """(Re-)parse .bb files and calculate the dependency graph"""
        cooker.status = cache.CacheData(cooker.caches_array)
        cooker.status = cache.CacheData()
        ignore = data.getVar("ASSUME_PROVIDED", cooker.configuration.data, 1) or ""
        cooker.status.ignored_dependencies = set( ignore.split() )
        cooker.handleCollections( data.getVar("BBFILE_COLLECTIONS", cooker.configuration.data, 1) )
@@ -1,6 +1,5 @@
import hashlib
import logging
import os
import re
import bb.data

@@ -47,9 +46,6 @@ class SignatureGenerator(object):
    def stampfile(self, stampbase, file_name, taskname, extrainfo):
        return ("%s.%s.%s" % (stampbase, taskname, extrainfo)).rstrip('.')

    def dump_sigtask(self, fn, task, stampbase, runtime):
        return

class SignatureGeneratorBasic(SignatureGenerator):
    """
    """
@@ -82,10 +78,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
        data = d.getVar(task, False)
        lookupcache[task] = data

        if data is None:
            bb.error("Task %s from %s seems to be empty?!" % (task, fn))
            data = ''

        newdeps = gendeps[task]
        seen = set()
        while newdeps:
@@ -107,7 +99,9 @@ class SignatureGeneratorBasic(SignatureGenerator):
            var = d.getVar(dep, False)
            lookupcache[dep] = var
            if var:
                data = data + str(var)
                data = data + var
        if data is None:
            bb.error("Task %s from %s seems to be empty?!" % (task, fn))
        self.basehash[fn + "." + task] = hashlib.md5(data).hexdigest()
        taskdeps[task] = sorted(alldeps)
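
The basic signature generator derives a task's base hash by concatenating the task body with the values of the variables it depends on and md5-summing the result. A toy illustration of that scheme, not BitBake's actual code:

import hashlib

def base_hash(task_body, dep_values):
    # Unset variables contribute nothing, mirroring the 'if var:' guard above
    data = task_body
    for value in dep_values:
        if value:
            data = data + str(value)
    return hashlib.md5(data.encode("utf-8")).hexdigest()

# Any change to the task script or a dependency's value changes the hash
print(base_hash("do_compile() { make; }", ["gcc", None, "-O2"]))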
@@ -1,278 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import gobject
import copy
import re, os
from bb import data

class Configurator(gobject.GObject):

    """
    A GObject to handle writing modified configuration values back
    to conf files.
    """
    __gsignals__ = {
        "layers-loaded" : (gobject.SIGNAL_RUN_LAST,
                           gobject.TYPE_NONE,
                           ()),
        "layers-changed" : (gobject.SIGNAL_RUN_LAST,
                            gobject.TYPE_NONE,
                            ())
    }

    def __init__(self):
        gobject.GObject.__init__(self)
        self.local = None
        self.bblayers = None
        self.enabled_layers = {}
        self.loaded_layers = {}
        self.config = {}
        self.orig_config = {}

    # NOTE: cribbed from the cooker...
    def _parse(self, f, data, include=False):
        try:
            return bb.parse.handle(f, data, include)
        except (IOError, bb.parse.ParseError) as exc:
            parselog.critical("Unable to parse %s: %s" % (f, exc))
            sys.exit(1)

    def _loadLocalConf(self, path):
        def getString(var):
            return bb.data.getVar(var, data, True) or ""

        self.local = path

        if self.orig_config:
            del self.orig_config
            self.orig_config = {}

        data = bb.data.init()
        data = self._parse(self.local, data)

        # We only need to care about certain variables
        mach = getString('MACHINE')
        if mach and mach != self.config.get('MACHINE', ''):
            self.config['MACHINE'] = mach
        sdkmach = getString('SDKMACHINE')
        if sdkmach and sdkmach != self.config.get('SDKMACHINE', ''):
            self.config['SDKMACHINE'] = sdkmach
        distro = getString('DISTRO')
        if distro and distro != self.config.get('DISTRO', ''):
            self.config['DISTRO'] = distro
        bbnum = getString('BB_NUMBER_THREADS')
        if bbnum and bbnum != self.config.get('BB_NUMBER_THREADS', ''):
            self.config['BB_NUMBER_THREADS'] = bbnum
        pmake = getString('PARALLEL_MAKE')
        if pmake and pmake != self.config.get('PARALLEL_MAKE', ''):
            self.config['PARALLEL_MAKE'] = pmake
        incompat = getString('INCOMPATIBLE_LICENSE')
        if incompat and incompat != self.config.get('INCOMPATIBLE_LICENSE', ''):
            self.config['INCOMPATIBLE_LICENSE'] = incompat
        pclass = getString('PACKAGE_CLASSES')
        if pclass and pclass != self.config.get('PACKAGE_CLASSES', ''):
            self.config['PACKAGE_CLASSES'] = pclass

        self.orig_config = copy.deepcopy(self.config)

    def setLocalConfVar(self, var, val):
        if var in self.config:
            self.config[var] = val

    def _loadLayerConf(self, path):
        self.bblayers = path
        self.enabled_layers = {}
        self.loaded_layers = {}
        data = bb.data.init()
        data = self._parse(self.bblayers, data)
        layers = (bb.data.getVar('BBLAYERS', data, True) or "").split()
        for layer in layers:
            # TODO: we may be better off calling the layer by its
            # BBFILE_COLLECTIONS value?
            name = self._getLayerName(layer)
            self.loaded_layers[name] = layer

        self.enabled_layers = copy.deepcopy(self.loaded_layers)
        self.emit("layers-loaded")

    def _addConfigFile(self, path):
        pref, sep, filename = path.rpartition("/")
        if filename == "local.conf" or filename == "hob.local.conf":
            self._loadLocalConf(path)
        elif filename == "bblayers.conf":
            self._loadLayerConf(path)

    def _splitLayer(self, path):
        # we only care about the path up to /conf/layer.conf
        layerpath, conf, end = path.rpartition("/conf/")
        return layerpath

    def _getLayerName(self, path):
        # Should this be the collection name?
        layerpath, sep, name = path.rpartition("/")
        return name

    def disableLayer(self, layer):
        if layer in self.enabled_layers:
            del self.enabled_layers[layer]

    def addLayerConf(self, confpath):
        layerpath = self._splitLayer(confpath)
        name = self._getLayerName(layerpath)
        if name not in self.enabled_layers:
            self.addLayer(name, layerpath)
        return name, layerpath

    def addLayer(self, name, path):
        self.enabled_layers[name] = path

    def _isLayerConfDirty(self):
        # if a different number of layers enabled to what was
        # loaded, definitely different
        if len(self.enabled_layers) != len(self.loaded_layers):
            return True

        for layer in self.loaded_layers:
            # if layer loaded but no longer present, definitely dirty
            if layer not in self.enabled_layers:
                return True

        for layer in self.enabled_layers:
            # if this layer wasn't present at load, definitely dirty
            if layer not in self.loaded_layers:
                return True
            # if this layers path has changed, definitely dirty
            if self.enabled_layers[layer] != self.loaded_layers[layer]:
                return True

        return False

    def _constructLayerEntry(self):
        """
        Returns a string representing the new layer selection
        """
        layers = self.enabled_layers.copy()
        # Construct BBLAYERS entry
        layer_entry = "BBLAYERS = \" \\\n"
        if 'meta' in layers:
            layer_entry = layer_entry + " %s \\\n" % layers['meta']
            del layers['meta']
        for layer in layers:
            layer_entry = layer_entry + " %s \\\n" % layers[layer]
        layer_entry = layer_entry + " \""

        return "".join(layer_entry)

    def writeLocalConf(self):
        # Dictionary containing only new or modified variables
        changed_values = {}
        for var in self.config:
            val = self.config[var]
            if self.orig_config.get(var, None) != val:
                changed_values[var] = val

        if not len(changed_values):
            return

        # Create a backup of the local.conf
        bkup = "%s~" % self.local
        os.rename(self.local, bkup)

        # read the original conf into a list
        with open(bkup, 'r') as config:
            config_lines = config.readlines()

        new_config_lines = ["\n"]
        for var in changed_values:
            # Convenience function for re.subn(). If the pattern matches
            # return a string which contains an assignment using the same
            # assignment operator as the old assignment.
            def replace_val(matchobj):
                var = matchobj.group(1) # config variable
                op = matchobj.group(2) # assignment operator
                val = changed_values[var] # new config value
                return "%s %s \"%s\"" % (var, op, val)

            pattern = '^\s*(%s)\s*([+=?.]+)(.*)' % re.escape(var)
            p = re.compile(pattern)
            cnt = 0
            replaced = False

            # Iterate over the local.conf lines and if they are a match
            # for the pattern comment out the line and append a new line
            # with the new VAR op "value" entry
            for line in config_lines:
                new_line, replacements = p.subn(replace_val, line)
                if replacements:
                    config_lines[cnt] = "#%s" % line
                    new_config_lines.append(new_line)
                    replaced = True
                cnt = cnt + 1

            if not replaced:
                new_config_lines.append("%s = \"%s\"" % (var, changed_values[var]))

        # Add the modified variables
        config_lines.extend(new_config_lines)

        # Write the updated lines list object to the local.conf
        with open(self.local, "w") as n:
            n.write("".join(config_lines))

        del self.orig_config
        self.orig_config = copy.deepcopy(self.config)

    def writeLayerConf(self):
        # If we've not added/removed new layers don't write
        if not self._isLayerConfDirty():
            return

        # This pattern should find the existing BBLAYERS
        pattern = 'BBLAYERS\s=\s\".*\"'

        # Backup the users bblayers.conf
        bkup = "%s~" % self.bblayers
        os.rename(self.bblayers, bkup)

        replacement = self._constructLayerEntry()

        with open(bkup, "r") as f:
            contents = f.read()
            p = re.compile(pattern, re.DOTALL)
            new = p.sub(replacement, contents)

        with open(self.bblayers, "w") as n:
            n.write(new)

        # At some stage we should remove the backup we've created
        # though we should probably verify it first
        #os.remove(bkup)

        # set loaded_layers for dirtiness tracking
        self.loaded_layers = copy.deepcopy(self.enabled_layers)

        self.emit("layers-changed")

    def configFound(self, handler, path):
        self._addConfigFile(path)

    def loadConfig(self, path):
        self._addConfigFile(path)
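
The interesting part of writeLocalConf() is the re.subn() callback: it rewrites a VAR op "value" line while preserving whichever assignment operator (=, ?=, +=, .=) the user originally wrote. A self-contained sketch of that pattern, with illustrative names:

import re

def rewrite_assignment(lines, var, new_val):
    # Group 2 captures the original operator so it survives the rewrite
    pattern = re.compile(r'^\s*(%s)\s*([+=?.]+)(.*)' % re.escape(var))

    def replace_val(m):
        return '%s %s "%s"' % (m.group(1), m.group(2), new_val)

    out = []
    for line in lines:
        new_line, n = pattern.subn(replace_val, line)
        # As above: comment out the matched line, append the rewritten one
        out.append("#" + line if n else line)
        if n:
            out.append(new_line)
    return out

print(rewrite_assignment(['MACHINE ?= "qemux86"'], "MACHINE", "beagleboard"))
# ['#MACHINE ?= "qemux86"', 'MACHINE ?= "beagleboard"']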
@@ -1,61 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import gobject
import gtk
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUI's
In summary: spacing = 12px, border-width = 6px
"""

class CrumbsDialog(gtk.Dialog):
    """
    A GNOME HIG compliant dialog widget.
    Add buttons with gtk.Dialog.add_button or gtk.Dialog.add_buttons
    """
    def __init__(self, parent=None, label="", icon=gtk.STOCK_INFO):
        gtk.Dialog.__init__(self, "", parent, gtk.DIALOG_DESTROY_WITH_PARENT)

        #self.set_property("has-separator", False) # note: deprecated in 2.22

        self.set_border_width(6)
        self.vbox.set_property("spacing", 12)
        self.action_area.set_property("spacing", 12)
        self.action_area.set_property("border-width", 6)

        first_row = gtk.HBox(spacing=12)
        first_row.set_property("border-width", 6)
        first_row.show()
        self.vbox.add(first_row)

        self.icon = gtk.Image()
        self.icon.set_from_stock(icon, gtk.ICON_SIZE_DIALOG)
        self.icon.set_property("yalign", 0.00)
        self.icon.show()
        first_row.add(self.icon)

        self.label = gtk.Label()
        self.label.set_use_markup(True)
        self.label.set_line_wrap(True)
        self.label.set_markup(label)
        self.label.set_property("yalign", 0.00)
        self.label.show()
        first_row.add(self.label)
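
CrumbsDialog bakes the HIG metrics (12px spacing, 6px borders) into a reusable base class so callers only supply content and buttons. Hypothetical usage, assuming a PyGTK environment with the class above importable:

import gtk

dialog = CrumbsDialog(label="<b>Remove this layer?</b>", icon=gtk.STOCK_DIALOG_WARNING)
dialog.add_buttons(gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL,
                   gtk.STOCK_OK, gtk.RESPONSE_OK)
response = dialog.run()   # runs a nested main loop until a button is pressed
dialog.destroy()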
@@ -19,6 +19,7 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import gobject
from bb.ui.crumbs.progress import ProgressBar

progress_total = 0

@@ -28,78 +29,46 @@ class HobHandler(gobject.GObject):
    This object does BitBake event handling for the hob gui.
    """
    __gsignals__ = {
        "machines-updated" : (gobject.SIGNAL_RUN_LAST,
                              gobject.TYPE_NONE,
                              (gobject.TYPE_PYOBJECT,)),
        "sdk-machines-updated": (gobject.SIGNAL_RUN_LAST,
                                 gobject.TYPE_NONE,
                                 (gobject.TYPE_PYOBJECT,)),
        "distros-updated" : (gobject.SIGNAL_RUN_LAST,
                             gobject.TYPE_NONE,
                             (gobject.TYPE_PYOBJECT,)),
        "package-formats-found" : (gobject.SIGNAL_RUN_LAST,
                                   gobject.TYPE_NONE,
                                   (gobject.TYPE_PYOBJECT,)),
        "config-found" : (gobject.SIGNAL_RUN_LAST,
                          gobject.TYPE_NONE,
                          (gobject.TYPE_STRING,)),
        "generating-data" : (gobject.SIGNAL_RUN_LAST,
                             gobject.TYPE_NONE,
                             ()),
        "data-generated" : (gobject.SIGNAL_RUN_LAST,
                            gobject.TYPE_NONE,
                            ()),
        "error" : (gobject.SIGNAL_RUN_LAST,
                   gobject.TYPE_NONE,
                   (gobject.TYPE_STRING,)),
        "build-complete" : (gobject.SIGNAL_RUN_LAST,
                            gobject.TYPE_NONE,
                            ()),
        "reload-triggered" : (gobject.SIGNAL_RUN_LAST,
                              gobject.TYPE_NONE,
                              (gobject.TYPE_STRING,
                               gobject.TYPE_STRING)),
        "machines-updated" : (gobject.SIGNAL_RUN_LAST,
                              gobject.TYPE_NONE,
                              (gobject.TYPE_PYOBJECT,)),
        "distros-updated" : (gobject.SIGNAL_RUN_LAST,
                             gobject.TYPE_NONE,
                             (gobject.TYPE_PYOBJECT,)),
        "generating-data" : (gobject.SIGNAL_RUN_LAST,
                             gobject.TYPE_NONE,
                             ()),
        "data-generated" : (gobject.SIGNAL_RUN_LAST,
                            gobject.TYPE_NONE,
                            ())
    }

    def __init__(self, taskmodel, server):
        gobject.GObject.__init__(self)
        self.current_command = None
        self.building = None
        self.gplv3_excluded = False
        self.build_toolchain = False
        self.build_toolchain_headers = False
        self.generating = False
        self.build_queue = []

        self.model = taskmodel
        self.server = server
        self.current_command = None
        self.building = False

        self.command_map = {
            "findConfigFilePathLocal" : ("findConfigFilePath", ["hob.local.conf"], "findConfigFilePathHobLocal"),
            "findConfigFilePathHobLocal" : ("findConfigFilePath", ["bblayers.conf"], "findConfigFilePathLayers"),
            "findConfigFilePathLayers" : ("findConfigFiles", ["DISTRO"], "findConfigFilesDistro"),
            "findConfigFilesDistro" : ("findConfigFiles", ["MACHINE"], "findConfigFilesMachine"),
            "findConfigFilesMachine" : ("findConfigFiles", ["MACHINE-SDK"], "findConfigFilesSdkMachine"),
            "findConfigFilesSdkMachine" : ("findFilesMatchingInDir", ["rootfs_", "classes"], "findFilesMatchingPackage"),
            "findFilesMatchingPackage" : ("generateTargetsTree", ["classes/image.bbclass"], None),
            "generateTargetsTree" : (None, [], None),
            "findConfigFilesDistro" : ("findConfigFiles", "MACHINE", "findConfigFilesMachine"),
            "findConfigFilesMachine" : ("generateTargetsTree", "classes/image.bbclass", None),
            "generateTargetsTree" : (None, None, None),
        }

    def run_next_command(self):
        # FIXME: this is ugly and I *will* replace it
        if self.current_command:
            if not self.generating:
                self.emit("generating-data")
                self.generating = True
            next_cmd = self.command_map[self.current_command]
            command = next_cmd[0]
            argument = next_cmd[1]
            self.current_command = next_cmd[2]
            args = [command]
            args.extend(argument)
            self.server.runCommand(args)
            if command == "generateTargetsTree":
                self.emit("generating-data")
            self.server.runCommand([command, argument])

    def handle_event(self, event, running_build, pbar):
    def handle_event(self, event, running_build, pbar=None):
        if not event:
            return

@@ -108,9 +77,9 @@ class HobHandler(gobject.GObject):
            running_build.handle_event(event)
        elif isinstance(event, bb.event.TargetsTreeGenerated):
            self.emit("data-generated")
            self.generating = False
            if event._model:
                self.model.populate(event._model)

        elif isinstance(event, bb.event.ConfigFilesFound):
            var = event._variable
            if var == "distro":
@@ -121,44 +90,26 @@ class HobHandler(gobject.GObject):
                machines = event._values
                machines.sort()
                self.emit("machines-updated", machines)
            elif var == "machine-sdk":
                sdk_machines = event._values
                sdk_machines.sort()
                self.emit("sdk-machines-updated", sdk_machines)
        elif isinstance(event, bb.event.ConfigFilePathFound):
            path = event._path
            self.emit("config-found", path)
        elif isinstance(event, bb.event.FilesMatchingFound):
            # FIXME: hard coding, should at least be a variable shared between
            # here and the caller
            if event._pattern == "rootfs_":
                formats = []
                for match in event._matches:
                    classname, sep, cls = match.rpartition(".")
                    fs, sep, format = classname.rpartition("_")
                    formats.append(format)
                formats.sort()
                self.emit("package-formats-found", formats)

        elif isinstance(event, bb.command.CommandCompleted):
            self.run_next_command()
        elif isinstance(event, bb.command.CommandFailed):
            self.emit("error", event.error)
        elif isinstance(event, bb.event.CacheLoadStarted):
        elif isinstance(event, bb.event.CacheLoadStarted) and pbar:
            pbar.set_title("Loading cache")
            bb.ui.crumbs.hobeventhandler.progress_total = event.total
            pbar.set_text("Loading cache: %s/%s" % (0, bb.ui.crumbs.hobeventhandler.progress_total))
        elif isinstance(event, bb.event.CacheLoadProgress):
            pbar.set_text("Loading cache: %s/%s" % (event.current, bb.ui.crumbs.hobeventhandler.progress_total))
        elif isinstance(event, bb.event.CacheLoadCompleted):
            pbar.set_text("Loading cache: %s/%s" % (bb.ui.crumbs.hobeventhandler.progress_total, bb.ui.crumbs.hobeventhandler.progress_total))
        elif isinstance(event, bb.event.ParseStarted):
            if event.total == 0:
                return
            pbar.update(0, bb.ui.crumbs.hobeventhandler.progress_total)
        elif isinstance(event, bb.event.CacheLoadProgress) and pbar:
            pbar.update(event.current, bb.ui.crumbs.hobeventhandler.progress_total)
        elif isinstance(event, bb.event.CacheLoadCompleted) and pbar:
            pbar.update(bb.ui.crumbs.hobeventhandler.progress_total, bb.ui.crumbs.hobeventhandler.progress_total)
        elif isinstance(event, bb.event.ParseStarted) and pbar:
            pbar.set_title("Processing recipes")
            bb.ui.crumbs.hobeventhandler.progress_total = event.total
            pbar.set_text("Processing recipes: %s/%s" % (0, bb.ui.crumbs.hobeventhandler.progress_total))
        elif isinstance(event, bb.event.ParseProgress):
            pbar.set_text("Processing recipes: %s/%s" % (event.current, bb.ui.crumbs.hobeventhandler.progress_total))
        elif isinstance(event, bb.event.ParseCompleted):
            pbar.set_fraction(1.0)
            pbar.update(0, bb.ui.crumbs.hobeventhandler.progress_total)
        elif isinstance(event, bb.event.ParseProgress) and pbar:
            pbar.update(event.current, bb.ui.crumbs.hobeventhandler.progress_total)
        elif isinstance(event, bb.event.ParseCompleted) and pbar:
            pbar.hide()

        return

    def event_handle_idle_func (self, eventHandler, running_build, pbar):
@@ -171,95 +122,16 @@ class HobHandler(gobject.GObject):

    def set_machine(self, machine):
        self.server.runCommand(["setVariable", "MACHINE", machine])

    def set_sdk_machine(self, sdk_machine):
        self.server.runCommand(["setVariable", "SDKMACHINE", sdk_machine])
        self.current_command = "findConfigFilesMachine"
        self.run_next_command()

    def set_distro(self, distro):
        self.server.runCommand(["setVariable", "DISTRO", distro])

    def set_package_format(self, format):
        self.server.runCommand(["setVariable", "PACKAGE_CLASSES", "package_%s" % format])

    def reload_data(self, config=None):
        img = self.model.selected_image
        selected_packages, _ = self.model.get_selected_packages()
        self.emit("reload-triggered", img, " ".join(selected_packages))
        self.server.runCommand(["reparseFiles"])
        self.current_command = "findConfigFilePathLayers"
        self.run_next_command()

    def set_bbthreads(self, threads):
        self.server.runCommand(["setVariable", "BB_NUMBER_THREADS", threads])

    def set_pmake(self, threads):
        pmake = "-j %s" % threads
        self.server.runCommand(["setVariable", "BB_NUMBER_THREADS", pmake])

    def run_build(self, tgts):
        self.building = "image"
        targets = []
        targets.append(tgts)
        if self.build_toolchain and self.build_toolchain_headers:
            targets = ["meta-toolchain-sdk"] + targets
        elif self.build_toolchain:
            targets = ["meta-toolchain"] + targets
    def run_build(self, targets):
        self.building = True
        self.server.runCommand(["buildTargets", targets, "build"])

    def build_packages(self, pkgs):
        self.building = "packages"
        if 'meta-toolchain' in self.build_queue:
            self.build_queue.remove('meta-toolchain')
            pkgs.extend('meta-toolchain')
        self.server.runCommand(["buildTargets", pkgs, "build"])

    def build_file(self, image):
        self.building = "image"
        self.server.runCommand(["buildFile", image, "build"])

    def cancel_build(self, force=False):
        if force:
            # Force the cooker to stop as quickly as possible
            self.server.runCommand(["stateStop"])
        else:
            # Wait for tasks to complete before shutting down, this helps
            # leave the workdir in a usable state
            self.server.runCommand(["stateShutdown"])

    def toggle_gplv3(self, excluded):
        if self.gplv3_excluded != excluded:
            self.gplv3_excluded = excluded
            if excluded:
                self.server.runCommand(["setVariable", "INCOMPATIBLE_LICENSE", "GPLv3"])
            else:
                self.server.runCommand(["setVariable", "INCOMPATIBLE_LICENSE", ""])

    def toggle_toolchain(self, enabled):
        if self.build_toolchain != enabled:
            self.build_toolchain = enabled

    def toggle_toolchain_headers(self, enabled):
        if self.build_toolchain_headers != enabled:
            self.build_toolchain_headers = enabled

    def queue_image_recipe_path(self, path):
        self.build_queue.append(path)

    def build_complete_cb(self, running_build):
        if len(self.build_queue) > 0:
            next = self.build_queue.pop(0)
            if next.endswith('.bb'):
                self.build_file(next)
                self.building = 'image'
                self.build_file(next)
            else:
                self.build_packages(next.split(" "))
        else:
            self.building = None
            self.emit("build-complete")

    def set_image_output_type(self, output_type):
        self.server.runCommand(["setVariable", "IMAGE_FSTYPES", output_type])

    def get_image_deploy_dir(self):
        return self.server.runCommand(["getVariable", "DEPLOY_DIR_IMAGE"])
    def cancel_build(self):
        # Note: this may not be the right way to stop an in-progress build
        self.server.runCommand(["stateStop"])
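
The command_map above drives a simple state machine: each entry maps the command just finished to (command to run, its arguments, the key to consult next), and every bb.command.CommandCompleted event advances the chain. A stripped-down sketch of the same mechanism; the states and server object are illustrative:

class CommandChain(object):
    # state -> (command, args, next state); None ends the chain
    command_map = {
        "findLayers" : ("findConfigFilePath", ["bblayers.conf"], "findDistros"),
        "findDistros" : ("findConfigFiles", ["DISTRO"], "buildTree"),
        "buildTree" : ("generateTargetsTree", [], None),
    }

    def __init__(self, server):
        self.server = server        # any object exposing runCommand()
        self.current = "findLayers"

    def run_next_command(self):
        if not self.current:
            return
        command, args, self.current = self.command_map[self.current]
        self.server.runCommand([command] + args)

    def on_command_completed(self, event):
        # Wired to bb.command.CommandCompleted, as in handle_event() above
        self.run_next_command()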
@@ -1,293 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import gtk
from bb.ui.crumbs.configurator import Configurator

class HobPrefs(gtk.Dialog):
    """
    """
    def empty_combo_text(self, combo_text):
        model = combo_text.get_model()
        if model:
            model.clear()

    def output_type_changed_cb(self, combo, handler):
        ot = combo.get_active_text()
        if ot != self.curr_output_type:
            self.curr_output_type = ot
            handler.set_image_output_type(ot)

    def sdk_machine_combo_changed_cb(self, combo, handler):
        sdk_mach = combo.get_active_text()
        if sdk_mach != self.curr_sdk_mach:
            self.curr_sdk_mach = sdk_mach
            self.configurator.setLocalConfVar('SDKMACHINE', sdk_mach)
            handler.set_sdk_machine(sdk_mach)

    def update_sdk_machines(self, handler, sdk_machines):
        active = 0
        # disconnect the signal handler before updating the combo model
        if self.sdk_machine_handler_id:
            self.sdk_machine_combo.disconnect(self.sdk_machine_handler_id)
            self.sdk_machine_handler_id = None

        self.empty_combo_text(self.sdk_machine_combo)
        for sdk_machine in sdk_machines:
            self.sdk_machine_combo.append_text(sdk_machine)
            if sdk_machine == self.curr_sdk_mach:
                self.sdk_machine_combo.set_active(active)
            active = active + 1

        self.sdk_machine_handler_id = self.sdk_machine_combo.connect("changed", self.sdk_machine_combo_changed_cb, handler)

    def distro_combo_changed_cb(self, combo, handler):
        distro = combo.get_active_text()
        if distro != self.curr_distro:
            self.curr_distro = distro
            self.configurator.setLocalConfVar('DISTRO', distro)
            handler.set_distro(distro)
            self.reload_required = True

    def update_distros(self, handler, distros):
        active = 0
        # disconnect the signal handler before updating combo model
        if self.distro_handler_id:
            self.distro_combo.disconnect(self.distro_handler_id)
            self.distro_handler_id = None

        self.empty_combo_text(self.distro_combo)
        for distro in distros:
            self.distro_combo.append_text(distro)
            if distro == self.curr_distro:
                self.distro_combo.set_active(active)
            active = active + 1

        self.distro_handler_id = self.distro_combo.connect("changed", self.distro_combo_changed_cb, handler)

    def package_format_combo_changed_cb(self, combo, handler):
        package_format = combo.get_active_text()
        if package_format != self.curr_package_format:
            self.curr_package_format = package_format
            self.configurator.setLocalConfVar('PACKAGE_CLASSES', 'package_%s' % package_format)
            handler.set_package_format(package_format)

    def update_package_formats(self, handler, formats):
        active = 0
        # disconnect the signal handler before updating the model
        if self.package_handler_id:
            self.package_combo.disconnect(self.package_handler_id)
            self.package_handler_id = None

        self.empty_combo_text(self.package_combo)
        for format in formats:
            self.package_combo.append_text(format)
            if format == self.curr_package_format:
                self.package_combo.set_active(active)
            active = active + 1

        self.package_handler_id = self.package_combo.connect("changed", self.package_format_combo_changed_cb, handler)

    def include_gplv3_cb(self, toggle):
        excluded = toggle.get_active()
        self.handler.toggle_gplv3(excluded)
        if excluded:
            self.configurator.setLocalConfVar('INCOMPATIBLE_LICENSE', 'GPLv3')
        else:
            self.configurator.setLocalConfVar('INCOMPATIBLE_LICENSE', '')
        self.reload_required = True

    def change_bb_threads_cb(self, spinner):
        val = spinner.get_value_as_int()
        self.handler.set_bbthreads(val)
        self.configurator.setLocalConfVar('BB_NUMBER_THREADS', val)

    def change_make_threads_cb(self, spinner):
        val = spinner.get_value_as_int()
        self.handler.set_pmake(val)
        self.configurator.setLocalConfVar('PARALLEL_MAKE', "-j %s" % val)

    def toggle_toolchain_cb(self, check):
        enabled = check.get_active()
        self.handler.toggle_toolchain(enabled)

    def toggle_headers_cb(self, check):
        enabled = check.get_active()
        self.handler.toggle_toolchain_headers(enabled)

    def set_parent_window(self, parent):
        self.set_transient_for(parent)

    def write_changes(self):
        self.configurator.writeLocalConf()

    def prefs_response_cb(self, dialog, response):
        if self.reload_required:
            glib.idle_add(self.handler.reload_data)

    def __init__(self, configurator, handler, curr_sdk_mach, curr_distro, pclass,
                 cpu_cnt, pmake, bbthread, image_types):
        """
        """
        gtk.Dialog.__init__(self, "Preferences", None,
                            gtk.DIALOG_DESTROY_WITH_PARENT,
                            (gtk.STOCK_CLOSE, gtk.RESPONSE_OK))

        self.set_border_width(6)
        self.vbox.set_property("spacing", 12)
        self.action_area.set_property("spacing", 12)
        self.action_area.set_property("border-width", 6)

        self.handler = handler
        self.configurator = configurator

        self.curr_sdk_mach = curr_sdk_mach
        self.curr_distro = curr_distro
        self.curr_package_format = pclass
        self.curr_output_type = None
        self.cpu_cnt = cpu_cnt
        self.pmake = pmake
        self.bbthread = bbthread
        self.reload_required = False
        self.distro_handler_id = None
        self.sdk_machine_handler_id = None
        self.package_handler_id = None

        left = gtk.SizeGroup(gtk.SIZE_GROUP_HORIZONTAL)
        right = gtk.SizeGroup(gtk.SIZE_GROUP_HORIZONTAL)

        label = gtk.Label()
        label.set_markup("<b>Policy</b>")
        label.show()
        frame = gtk.Frame()
        frame.set_label_widget(label)
        frame.set_shadow_type(gtk.SHADOW_NONE)
        frame.show()
        self.vbox.pack_start(frame)
        pbox = gtk.VBox(False, 12)
        pbox.show()
        frame.add(pbox)
        hbox = gtk.HBox(False, 12)
        hbox.show()
        pbox.pack_start(hbox, expand=False, fill=False, padding=6)
        # Distro selector
        label = gtk.Label("Distribution:")
        label.show()
        hbox.pack_start(label, expand=False, fill=False, padding=6)
        self.distro_combo = gtk.combo_box_new_text()
        self.distro_combo.set_tooltip_text("Select the Yocto distribution you would like to use")
        self.distro_combo.show()
        hbox.pack_start(self.distro_combo, expand=False, fill=False, padding=6)
        # Exclude GPLv3
        check = gtk.CheckButton("Exclude GPLv3 packages")
        check.set_tooltip_text("Check this box to prevent GPLv3 packages from being included in your image")
        check.show()
        check.connect("toggled", self.include_gplv3_cb)
        hbox.pack_start(check, expand=False, fill=False, padding=6)
        hbox = gtk.HBox(False, 12)
        hbox.show()
        pbox.pack_start(hbox, expand=False, fill=False, padding=6)
        # Package format selector
        label = gtk.Label("Package format:")
        label.show()
        hbox.pack_start(label, expand=False, fill=False, padding=6)
        self.package_combo = gtk.combo_box_new_text()
        self.package_combo.set_tooltip_text("Select the package format you would like to use in your image")
        self.package_combo.show()
        hbox.pack_start(self.package_combo, expand=False, fill=False, padding=6)
        # Image output type selector
        label = gtk.Label("Image output type:")
        label.show()
        hbox.pack_start(label, expand=False, fill=False, padding=6)
        output_combo = gtk.combo_box_new_text()
        if image_types:
            for it in image_types.split(" "):
                output_combo.append_text(it)
            output_combo.connect("changed", self.output_type_changed_cb, handler)
        else:
            output_combo.set_sensitive(False)
        output_combo.show()
        hbox.pack_start(output_combo)
        # BitBake
        label = gtk.Label()
        label.set_markup("<b>BitBake</b>")
        label.show()
        frame = gtk.Frame()
        frame.set_label_widget(label)
        frame.set_shadow_type(gtk.SHADOW_NONE)
        frame.show()
        self.vbox.pack_start(frame)
        pbox = gtk.VBox(False, 12)
        pbox.show()
        frame.add(pbox)
        hbox = gtk.HBox(False, 12)
        hbox.show()
        pbox.pack_start(hbox, expand=False, fill=False, padding=6)
        label = gtk.Label("BitBake threads:")
        label.show()
        spin_max = 9 #self.cpu_cnt * 3
        hbox.pack_start(label, expand=False, fill=False, padding=6)
        bbadj = gtk.Adjustment(value=self.bbthread, lower=1, upper=spin_max, step_incr=1)
        bbspinner = gtk.SpinButton(adjustment=bbadj, climb_rate=1, digits=0)
        bbspinner.show()
        bbspinner.connect("value-changed", self.change_bb_threads_cb)
        hbox.pack_start(bbspinner, expand=False, fill=False, padding=6)
        label = gtk.Label("Make threads:")
        label.show()
        hbox.pack_start(label, expand=False, fill=False, padding=6)
        madj = gtk.Adjustment(value=self.pmake, lower=1, upper=spin_max, step_incr=1)
        makespinner = gtk.SpinButton(adjustment=madj, climb_rate=1, digits=0)
        makespinner.connect("value-changed", self.change_make_threads_cb)
        makespinner.show()
        hbox.pack_start(makespinner, expand=False, fill=False, padding=6)
        # Toolchain
        label = gtk.Label()
        label.set_markup("<b>External Toolchain</b>")
        label.show()
        frame = gtk.Frame()
        frame.set_label_widget(label)
        frame.set_shadow_type(gtk.SHADOW_NONE)
        frame.show()
        self.vbox.pack_start(frame)
        pbox = gtk.VBox(False, 12)
        pbox.show()
        frame.add(pbox)
        hbox = gtk.HBox(False, 12)
        hbox.show()
        pbox.pack_start(hbox, expand=False, fill=False, padding=6)
        toolcheck = gtk.CheckButton("Build external development toolchain with image")
        toolcheck.show()
        toolcheck.connect("toggled", self.toggle_toolchain_cb)
        hbox.pack_start(toolcheck, expand=False, fill=False, padding=6)
        hbox = gtk.HBox(False, 12)
        hbox.show()
        pbox.pack_start(hbox, expand=False, fill=False, padding=6)
        label = gtk.Label("Toolchain host:")
        label.show()
        hbox.pack_start(label, expand=False, fill=False, padding=6)
        self.sdk_machine_combo = gtk.combo_box_new_text()
        self.sdk_machine_combo.set_tooltip_text("Select the host architecture of the external machine")
        self.sdk_machine_combo.show()
        hbox.pack_start(self.sdk_machine_combo, expand=False, fill=False, padding=6)
        headerscheck = gtk.CheckButton("Include development headers with toolchain")
        headerscheck.show()
        headerscheck.connect("toggled", self.toggle_headers_cb)
        hbox.pack_start(headerscheck, expand=False, fill=False, padding=6)
        self.connect("response", self.prefs_response_cb)
@@ -1,136 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import gobject
import gtk
from bb.ui.crumbs.configurator import Configurator

class LayerEditor(gtk.Dialog):
    """
    Gtk+ Widget for enabling and disabling layers.
    Layers are added through using an open dialog to find the layer.conf
    Disabled layers are deleted from conf/bblayers.conf
    """
    def __init__(self, configurator, parent=None):
        gtk.Dialog.__init__(self, "Layers", None,
                            gtk.DIALOG_DESTROY_WITH_PARENT,
                            (gtk.STOCK_CLOSE, gtk.RESPONSE_OK))

        # We want to show a little more of the treeview in the default,
        # emptier, case
        self.set_size_request(-1, 300)
        self.set_border_width(6)
        self.vbox.set_property("spacing", 0)
        self.action_area.set_property("border-width", 6)

        self.configurator = configurator
        self.newly_added = {}

        # Label to inform users that meta is enabled but that you can't
        # disable it as it'd be a *bad* idea
        msg = "As the core of the build system the <i>meta</i> layer must always be included and therefore can't be viewed or edited here."
        lbl = gtk.Label()
        lbl.show()
        lbl.set_use_markup(True)
        lbl.set_markup(msg)
        lbl.set_line_wrap(True)
        lbl.set_justify(gtk.JUSTIFY_FILL)
        self.vbox.pack_start(lbl, expand=False, fill=False, padding=6)

        # Create a treeview in which to list layers
        # ListStore of Name, Path, Enabled
        self.layer_store = gtk.ListStore(gobject.TYPE_STRING, gobject.TYPE_STRING, gobject.TYPE_BOOLEAN)
        self.tv = gtk.TreeView(self.layer_store)
        self.tv.set_headers_visible(True)

        col0 = gtk.TreeViewColumn('Name')
        self.tv.append_column(col0)
        col1 = gtk.TreeViewColumn('Path')
        self.tv.append_column(col1)
        col2 = gtk.TreeViewColumn('Enabled')
        self.tv.append_column(col2)

        cell0 = gtk.CellRendererText()
        col0.pack_start(cell0, True)
        col0.set_attributes(cell0, text=0)
        cell1 = gtk.CellRendererText()
        col1.pack_start(cell1, True)
        col1.set_attributes(cell1, text=1)
        cell2 = gtk.CellRendererToggle()
        cell2.connect("toggled", self._toggle_layer_cb)
        col2.pack_start(cell2, True)
        col2.set_attributes(cell2, active=2)

        self.tv.show()
        self.vbox.pack_start(self.tv, expand=True, fill=True, padding=0)

        tb = gtk.Toolbar()
        tb.set_icon_size(gtk.ICON_SIZE_SMALL_TOOLBAR)
        tb.set_style(gtk.TOOLBAR_BOTH)
        tb.set_tooltips(True)
        tb.show()
        icon = gtk.Image()
        icon.set_from_stock(gtk.STOCK_ADD, gtk.ICON_SIZE_SMALL_TOOLBAR)
        icon.show()
        tb.insert_item("Add Layer", "Add new layer", None, icon,
                       self._find_layer_cb, None, -1)
        self.vbox.pack_start(tb, expand=False, fill=False, padding=0)

    def set_parent_window(self, parent):
        self.set_transient_for(parent)

    def load_current_layers(self, data):
        for layer, path in self.configurator.enabled_layers.items():
            if layer != 'meta':
                self.layer_store.append([layer, path, True])

    def save_current_layers(self):
        self.configurator.writeLayerConf()

    def _toggle_layer_cb(self, cell, path):
        name = self.layer_store[path][0]
        toggle = not self.layer_store[path][2]
        if toggle:
            self.configurator.addLayer(name, path)
        else:
            self.configurator.disableLayer(name)
        self.layer_store[path][2] = toggle

    def _find_layer_cb(self, button):
        self.find_layer(self)

    def find_layer(self, parent):
        dialog = gtk.FileChooserDialog("Add new layer", parent,
                                       gtk.FILE_CHOOSER_ACTION_OPEN,
                                       (gtk.STOCK_CANCEL, gtk.RESPONSE_NO,
                                        gtk.STOCK_OPEN, gtk.RESPONSE_YES))
        label = gtk.Label("Select the layer.conf of the layer you wish to add")
        label.show()
        dialog.set_extra_widget(label)
        response = dialog.run()
        path = dialog.get_filename()
        dialog.destroy()

        if response == gtk.RESPONSE_YES:
            # FIXME: verify we've actually got a layer conf?
            if path.endswith(".conf"):
                name, layerpath = self.configurator.addLayerConf(path)
                self.newly_added[name] = layerpath
                self.layer_store.append([name, layerpath, True])
@@ -47,18 +47,12 @@ class RunningBuildModel (gtk.TreeStore):

class RunningBuild (gobject.GObject):
    __gsignals__ = {
        'build-started' : (gobject.SIGNAL_RUN_LAST,
                           gobject.TYPE_NONE,
                           ()),
        'build-succeeded' : (gobject.SIGNAL_RUN_LAST,
                             gobject.TYPE_NONE,
                             ()),
        'build-failed' : (gobject.SIGNAL_RUN_LAST,
                          gobject.TYPE_NONE,
                          ()),
        'build-complete' : (gobject.SIGNAL_RUN_LAST,
                            gobject.TYPE_NONE,
                            ())
                            ())
    }
    pids_to_task = {}
    tasks_to_iter = {}
@@ -207,7 +201,6 @@ class RunningBuild (gobject.GObject):

        elif isinstance(event, bb.event.BuildStarted):

            self.emit("build-started")
            self.model.prepend(None, (None,
                                      None,
                                      None,
@@ -225,9 +218,6 @@ class RunningBuild (gobject.GObject):
                                      Colors.OK,
                                      0))

            # Emit a generic "build-complete" signal for things wishing to
            # handle when the build is finished
            self.emit("build-complete")
            # Emit the appropriate signal depending on the number of failures
            if (failures >= 1):
                self.emit ("build-failed")
@@ -244,8 +234,6 @@ class RunningBuild (gobject.GObject):
            pbar.update(self.progress_total, self.progress_total)

        elif isinstance(event, bb.event.ParseStarted) and pbar:
            if event.total == 0:
                return
            pbar.set_title("Processing recipes")
            self.progress_total = event.total
            pbar.update(0, self.progress_total)
@@ -320,4 +308,4 @@ class RunningBuildTreeView (gtk.TreeView):

        clipboard = gtk.clipboard_get()
        clipboard.set_text(paste_url)
        clipboard.store()
        clipboard.store()
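
RunningBuild decouples BitBake's event stream from the widgets by re-broadcasting outcomes as GObject signals; any number of listeners can connect() without the build code knowing about them. A compact sketch of that publish/subscribe shape, assuming PyGTK's gobject:

import gobject

class BuildNotifier(gobject.GObject):
    __gsignals__ = {
        "build-complete" : (gobject.SIGNAL_RUN_LAST, gobject.TYPE_NONE, ()),
        "build-failed" : (gobject.SIGNAL_RUN_LAST, gobject.TYPE_NONE, ()),
    }

    def build_finished(self, failures):
        # Mirror the ordering above: generic completion first,
        # then the more specific outcome
        self.emit("build-complete")
        if failures >= 1:
            self.emit("build-failed")

gobject.type_register(BuildNotifier)

notifier = BuildNotifier()
notifier.connect("build-complete", lambda n: None)  # e.g. re-enable UI controls
notifier.build_finished(failures=0)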
@@ -20,57 +20,6 @@

import gtk
import gobject
import re

class BuildRep(gobject.GObject):

    def __init__(self, userpkgs, allpkgs, base_image=None):
        gobject.GObject.__init__(self)
        self.base_image = base_image
        self.allpkgs = allpkgs
        self.userpkgs = userpkgs

    def loadRecipe(self, pathname):
        contents = []
        packages = ""
        base_image = ""

        with open(pathname, 'r') as f:
            contents = f.readlines()

        pkg_pattern = "^\s*(IMAGE_INSTALL)\s*([+=.?]+)\s*(\"\S*\")"
        img_pattern = "^\s*(require)\s+(\S+.bb)"

        for line in contents:
            matchpkg = re.search(pkg_pattern, line)
            matchimg = re.search(img_pattern, line)
            if matchpkg:
                packages = packages + matchpkg.group(3).strip('"')
            if matchimg:
                base_image = os.path.basename(matchimg.group(2)).split(".")[0]

        self.base_image = base_image
        self.userpkgs = packages

    def writeRecipe(self, writepath, model):
        template = """
# Recipe generated by the HOB

require %s

IMAGE_INSTALL += "%s"
"""
        meta_path = model.find_image_path(self.base_image)

        recipe = template % (meta_path, self.userpkgs)

        if os.path.exists(writepath):
            os.rename(writepath, "%s~" % writepath)

        with open(writepath, 'w') as r:
            r.write(recipe)

        return writepath
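
BuildRep round-trips an image recipe with two regular expressions: one captures single-valued IMAGE_INSTALL additions, the other the required base image. A self-contained sketch of the parsing half (illustrative, and like the original it only matches a quoted value with no spaces):

import os
import re

PKG_PATTERN = r'^\s*(IMAGE_INSTALL)\s*([+=.?]+)\s*("\S*")'
IMG_PATTERN = r'^\s*(require)\s+(\S+\.bb)'

def parse_image_recipe(lines):
    packages, base_image = "", ""
    for line in lines:
        matchpkg = re.search(PKG_PATTERN, line)
        matchimg = re.search(IMG_PATTERN, line)
        if matchpkg:
            packages += matchpkg.group(3).strip('"')
        if matchimg:
            base_image = os.path.basename(matchimg.group(2)).split(".")[0]
    return base_image, packages

print(parse_image_recipe(['require poky-image-sato.bb',
                          'IMAGE_INSTALL += "dropbear"']))
# ('poky-image-sato', 'dropbear')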
class TaskListModel(gtk.ListStore):
|
||||
"""
|
||||
@@ -79,18 +28,12 @@ class TaskListModel(gtk.ListStore):
|
||||
providing convenience functions to access gtk.TreeModel subclasses which
|
||||
provide filtered views of the data.
|
||||
"""
|
||||
(COL_NAME, COL_DESC, COL_LIC, COL_GROUP, COL_DEPS, COL_BINB, COL_TYPE, COL_INC, COL_IMG, COL_PATH) = range(10)
|
||||
(COL_NAME, COL_DESC, COL_LIC, COL_GROUP, COL_DEPS, COL_BINB, COL_TYPE, COL_INC) = range(8)
|
||||
|
||||
__gsignals__ = {
|
||||
"tasklist-populated" : (gobject.SIGNAL_RUN_LAST,
|
||||
gobject.TYPE_NONE,
|
||||
()),
|
||||
"contents-changed" : (gobject.SIGNAL_RUN_LAST,
|
||||
gobject.TYPE_NONE,
|
||||
(gobject.TYPE_INT,)),
|
||||
"image-changed" : (gobject.SIGNAL_RUN_LAST,
|
||||
gobject.TYPE_NONE,
|
||||
(gobject.TYPE_STRING,)),
|
||||
())
|
||||
}
|
||||
|
||||
"""
|
||||
@@ -100,7 +43,6 @@ class TaskListModel(gtk.ListStore):
|
||||
self.tasks = None
|
||||
self.packages = None
|
||||
self.images = None
|
||||
self.selected_image = None
|
||||
|
||||
gtk.ListStore.__init__ (self,
|
||||
gobject.TYPE_STRING,
|
||||
@@ -110,22 +52,7 @@ class TaskListModel(gtk.ListStore):
|
||||
gobject.TYPE_STRING,
|
||||
gobject.TYPE_STRING,
|
||||
gobject.TYPE_STRING,
|
||||
gobject.TYPE_BOOLEAN,
|
||||
gobject.TYPE_BOOLEAN,
|
||||
gobject.TYPE_STRING)
|
||||
|
||||
def contents_changed_cb(self, tree_model, path, it=None):
|
||||
pkg_cnt = self.contents.iter_n_children(None)
|
||||
self.emit("contents-changed", pkg_cnt)
|
||||
|
||||
def contents_model_filter(self, model, it):
|
||||
if not model.get_value(it, self.COL_INC) or model.get_value(it, self.COL_TYPE) == 'image':
|
||||
return False
|
||||
name = model.get_value(it, self.COL_NAME)
|
||||
if name.endswith('-native') or name.endswith('-cross'):
|
||||
return False
|
||||
else:
|
||||
return True
|
||||
gobject.TYPE_BOOLEAN)
|
||||
|
||||
"""
|
||||
Create, if required, and return a filtered gtk.TreeModel
|
||||
@@ -135,9 +62,7 @@ class TaskListModel(gtk.ListStore):
|
||||
def contents_model(self):
|
||||
if not self.contents:
|
||||
self.contents = self.filter_new()
|
||||
self.contents.set_visible_func(self.contents_model_filter)
|
||||
self.contents.connect("row-inserted", self.contents_changed_cb)
|
||||
self.contents.connect("row-deleted", self.contents_changed_cb)
|
||||
self.contents.set_visible_column(self.COL_INC)
|
||||
return self.contents
|
||||
|
||||
"""
|
||||
@@ -182,10 +107,10 @@ class TaskListModel(gtk.ListStore):
|
||||
Helper function to determine whether an item is a package
|
||||
"""
|
||||
def package_model_filter(self, model, it):
|
||||
if model.get_value(it, self.COL_TYPE) != 'package':
|
||||
return False
|
||||
else:
|
||||
if model.get_value(it, self.COL_TYPE) == 'package':
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
|
||||
"""
|
||||
Create, if required, and return a filtered gtk.TreeModel
|
||||
@@ -204,78 +129,33 @@ class TaskListModel(gtk.ListStore):
|
||||
to notify any listeners that the model is ready
|
||||
"""
|
||||
def populate(self, event_model):
|
||||
# First clear the model, in case repopulating
|
||||
self.clear()
|
||||
for item in event_model["pn"]:
|
||||
atype = 'package'
|
||||
name = item
|
||||
summary = event_model["pn"][item]["summary"]
|
||||
lic = event_model["pn"][item]["license"]
|
||||
license = event_model["pn"][item]["license"]
|
||||
group = event_model["pn"][item]["section"]
|
||||
filename = event_model["pn"][item]["filename"]
|
||||
depends = event_model["depends"].get(item, "")
|
||||
|
||||
depends = event_model["depends"].get(item, "")
|
||||
rdepends = event_model["rdepends-pn"].get(item, "")
|
||||
if rdepends:
|
||||
for rdep in rdepends:
|
||||
if event_model["packages"].get(rdep, ""):
|
||||
pn = event_model["packages"][rdep].get("pn", "")
|
||||
if pn:
|
||||
depends.append(pn)
|
||||
|
||||
depends = depends + rdepends
|
||||
self.squish(depends)
|
||||
deps = " ".join(depends)
|
||||
|
||||
|
||||
if name.count('task-') > 0:
|
||||
atype = 'task'
|
||||
elif name.count('-image-') > 0:
|
||||
atype = 'image'
|
||||
|
||||
self.set(self.append(), self.COL_NAME, name, self.COL_DESC, summary,
|
||||
self.COL_LIC, lic, self.COL_GROUP, group,
|
||||
self.COL_DEPS, deps, self.COL_BINB, "",
|
||||
self.COL_TYPE, atype, self.COL_INC, False,
|
||||
self.COL_IMG, False, self.COL_PATH, filename)
|
||||
|
||||
self.COL_LIC, license, self.COL_GROUP, group,
|
||||
self.COL_DEPS, deps, self.COL_BINB, "",
|
||||
self.COL_TYPE, atype, self.COL_INC, False)
|
||||
|
||||
self.emit("tasklist-populated")
|
||||
|
||||
"""
|
||||
Load a BuildRep into the model
|
||||
"""
|
||||
def load_image_rep(self, rep):
|
||||
# Unset everything
|
||||
it = self.get_iter_first()
|
||||
while it:
|
||||
path = self.get_path(it)
|
||||
self[path][self.COL_INC] = False
|
||||
self[path][self.COL_IMG] = False
|
||||
it = self.iter_next(it)
|
||||
|
||||
# Iterate the images and disable them all
|
||||
it = self.images.get_iter_first()
|
||||
while it:
|
||||
path = self.images.convert_path_to_child_path(self.images.get_path(it))
|
||||
name = self[path][self.COL_NAME]
|
||||
if name == rep.base_image:
|
||||
self.include_item(path, image_contents=True)
|
||||
else:
|
||||
self[path][self.COL_INC] = False
|
||||
it = self.images.iter_next(it)
|
||||
|
||||
# Mark all of the additional packages for inclusion
|
||||
packages = rep.packages.split(" ")
|
||||
it = self.get_iter_first()
|
||||
while it:
|
||||
path = self.get_path(it)
|
||||
name = self[path][self.COL_NAME]
|
||||
if name in packages:
|
||||
self.include_item(path)
|
||||
packages.remove(name)
|
||||
it = self.iter_next(it)
|
||||
|
||||
self.emit("image-changed", rep.base_image)
"""
squish lst so that it doesn't contain any duplicate entries
squish lst so that it doesn't contain any duplicates
"""
def squish(self, lst):
seen = {}
@@ -293,61 +173,54 @@ class TaskListModel(gtk.ListStore):
self[path][self.COL_INC] = False

"""
recursively called to mark the item at opath and any package which
depends on it for removal
"""
def mark(self, opath):
removals = []
def mark(self, path):
name = self[path][self.COL_NAME]
it = self.get_iter_first()
name = self[opath][self.COL_NAME]
removals = []
#print("Removing %s" % name)

self.remove_item_path(opath)
self.remove_item_path(path)

# Remove all dependent packages, update binb
while it:
path = self.get_path(it)
inc = self[path][self.COL_INC]
deps = self[path][self.COL_DEPS]
binb = self[path][self.COL_BINB]

# FIXME: need to ensure partial name matching doesn't happen
if inc and deps.count(name):
# FIXME: need to ensure partial name matching doesn't happen, regexp?
if self[path][self.COL_INC] and self[path][self.COL_DEPS].count(name):
#print("%s depended on %s, marking for removal" % (self[path][self.COL_NAME], name))
# found a dependency, remove it
self.mark(path)
if inc and binb.count(name):
bib = self.find_alt_dependency(name)
self[path][self.COL_BINB] = bib

if self[path][self.COL_INC] and self[path][self.COL_BINB].count(name):
binb = self.find_alt_dependency(self[path][self.COL_NAME])
#print("%s was brought in by %s, binb set to %s" % (self[path][self.COL_NAME], name, binb))
self[path][self.COL_BINB] = binb
it = self.iter_next(it)

"""
Remove items from contents if they have an empty COL_BINB (brought in by)
caused by all packages they are a dependency of being removed.
If the item isn't a package we leave it included.
"""
def sweep_up(self):
it = self.contents.get_iter_first()
while it:
binb = self.contents.get_value(it, self.COL_BINB)
itype = self.contents.get_value(it, self.COL_TYPE)
remove = False
removals = []
it = self.get_iter_first()

if itype == 'package' and not binb:
oit = self.contents.convert_iter_to_child_iter(it)
opath = self.get_path(oit)
self.mark(opath)
remove = True
while it:
path = self.get_path(it)
binb = self[path][self.COL_BINB]
if binb == "" or binb is None:
#print("Sweeping up %s" % self[path][self.COL_NAME])
if not path in removals:
removals.extend(path)
it = self.iter_next(it)

# When we remove a package from the contents model we alter the
# model, so continuing to iterate is bad. *Furthermore* it's
# likely that the removal has affected an already iterated item
# so we should start from the beginning anyway.
# Only when we've managed to iterate the entire contents model
# without removing any items do we allow the loop to exit.
if remove:
it = self.contents.get_iter_first()
else:
it = self.contents.iter_next(it)
while removals:
path = removals.pop()
self.mark(path)

"""
Remove an item from the contents
"""
def remove_item(self, path):
self.mark(path)
self.sweep_up()
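# Taken together, mark() and sweep_up() implement a mark-and-sweep pass over
# the package list: mark() recursively drops an item plus everything that
# depends on it, then sweep_up() drops packages whose "brought in by"
# (COL_BINB) value has emptied out. A minimal standalone sketch of the same
# idea, using illustrative names rather than the hob TreeModel API:
def mark_and_sweep(name, deps, binb, included):
    # deps: package -> set of dependencies
    # binb: package -> set of packages that brought it in
    # included: set of currently selected packages (mutated in place)
    included.discard(name)
    for pkg in list(included):
        if name in deps.get(pkg, ()):
            mark_and_sweep(pkg, deps, binb, included)   # mark dependents too
        binb.get(pkg, set()).discard(name)              # forget stale bringers
    for pkg in list(included):
        if pkg in included and not binb.get(pkg):       # sweep up orphans
            mark_and_sweep(pkg, deps, binb, included)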

"""
Find the name of an item in the image contents which depends on the item
@@ -365,10 +238,17 @@ class TaskListModel(gtk.ListStore):
inc = self[path][self.COL_INC]
if itname != name and inc and deps.count(name) > 0:
# if this item depends on the item, return this item's name
#print("%s depends on %s" % (itname, name))
return itname
it = self.iter_next(it)
return ""

"""
Convert a path in self to a path in the filtered contents model
"""
def contents_path_for_path(self, path):
return self.contents.convert_child_path_to_path(path)

"""
Check the self.contents gtk.TreeModel for an item
where COL_NAME matches item_name
@@ -386,38 +266,27 @@ class TaskListModel(gtk.ListStore):
"""
Add this item, and any of its dependencies, to the image contents
"""
def include_item(self, item_path, binb="", image_contents=False):
def include_item(self, item_path, binb=""):
name = self[item_path][self.COL_NAME]
deps = self[item_path][self.COL_DEPS]
cur_inc = self[item_path][self.COL_INC]
#print("Adding %s for %s dependency" % (name, binb))
if not cur_inc:
self[item_path][self.COL_INC] = True
self[item_path][self.COL_BINB] = binb

# We want to do some magic with things which are brought in by the
# base image so tag them as such
if image_contents:
self[item_path][self.COL_IMG] = True
if self[item_path][self.COL_TYPE] == 'image':
self.selected_image = name

if deps:
#print("Dependencies of %s are %s" % (name, deps))
# add all of the deps and set their binb to this item
for dep in deps.split(" "):
# FIXME: this skipping virtuals can't be right? Unless we choose only to show target
# packages? In which case we should handle this server side...
# If the contents model doesn't already contain dep, add it
# We only care to show things which will end up in the
# resultant image, so filter cross and native recipes
dep_included = self.contents_includes_name(dep)
path = self.find_path_for_item(dep)
if not dep_included and not dep.endswith("-native") and not dep.endswith("-cross"):
if not dep.startswith("virtual") and not self.contents_includes_name(dep):
path = self.find_path_for_item(dep)
if path:
self.include_item(path, name, image_contents)
self.include_item(path, name)
else:
pass
# Set brought in by for any no longer orphan packages
elif dep_included and path:
if not self[path][self.COL_BINB]:
self[path][self.COL_BINB] = name

"""
Find the model path for the item_name
@@ -438,100 +307,40 @@ class TaskListModel(gtk.ListStore):
Empty self.contents by setting the include of each entry to None
"""
def reset(self):
# Deselect images - slightly more complex logic so that we don't
# have to iterate all of the contents of the main model, instead
# just iterate the images model.
if self.selected_image:
iit = self.images.get_iter_first()
while iit:
pit = self.images.convert_iter_to_child_iter(iit)
self.set(pit, self.COL_INC, False)
iit = self.images.iter_next(iit)
self.selected_image = None

it = self.contents.get_iter_first()
while it:
oit = self.contents.convert_iter_to_child_iter(it)
self.set(oit,
self.COL_INC, False,
self.COL_BINB, "",
self.COL_IMG, False)
path = self.contents.get_path(it)
opath = self.contents.convert_path_to_child_path(path)
self[opath][self.COL_INC] = False
self[opath][self.COL_BINB] = ""
# As we've just removed the first item...
it = self.contents.get_iter_first()

"""
Returns two lists. One of user selected packages and the other containing
all selected packages
Returns True if one of the selected tasks is an image, False otherwise
"""
def get_selected_packages(self):
allpkgs = []
userpkgs = []

it = self.contents.get_iter_first()
while it:
sel = self.contents.get_value(it, self.COL_BINB) == "User Selected"
name = self.contents.get_value(it, self.COL_NAME)
allpkgs.append(name)
if sel:
userpkgs.append(name)
it = self.contents.iter_next(it)
return userpkgs, allpkgs

def get_build_rep(self):
userpkgs, allpkgs = self.get_selected_packages()
image = self.selected_image

return BuildRep(" ".join(userpkgs), " ".join(allpkgs), image)

def find_reverse_depends(self, pn):
revdeps = []
it = self.contents.get_iter_first()

while it:
if self.contents.get_value(it, self.COL_DEPS).count(pn) != 0:
revdeps.append(self.contents.get_value(it, self.COL_NAME))
it = self.contents.iter_next(it)

if pn in revdeps:
revdeps.remove(pn)
return revdeps

def set_selected_image(self, img):
self.selected_image = img
path = self.find_path_for_item(img)
self.include_item(item_path=path,
binb="User Selected",
image_contents=True)

self.emit("image-changed", self.selected_image)

def set_selected_packages(self, pkglist):
selected = pkglist
it = self.get_iter_first()

while it:
name = self.get_value(it, self.COL_NAME)
if name in pkglist:
pkglist.remove(name)
path = self.get_path(it)
self.include_item(item_path=path,
binb="User Selected")
if len(pkglist) == 0:
return
it = self.iter_next(it)

def find_image_path(self, image):
def targets_contains_image(self):
it = self.images.get_iter_first()

while it:
image_name = self.images.get_value(it, self.COL_NAME)
if image_name == image:
path = self.images.get_value(it, self.COL_PATH)
meta_pattern = "(\S*)/(meta*/)(\S*)"
meta_match = re.search(meta_pattern, path)
if meta_match:
_, lyr, bbrel = path.partition(meta_match.group(2))
if bbrel:
path = bbrel
return path
path = self.images.get_path(it)
inc = self.images[path][self.COL_INC]
if inc:
return True
it = self.images.iter_next(it)
return False

"""
Return a list of all selected items which are not -native or -cross
"""
def get_targets(self):
tasks = []

it = self.contents.get_iter_first()
while it:
path = self.contents.get_path(it)
name = self.contents[path][self.COL_NAME]
stype = self.contents[path][self.COL_TYPE]
if not name.count('-native') and not name.count('-cross'):
tasks.append(name)
it = self.contents.iter_next(it)
return tasks

@@ -199,13 +199,10 @@ class gtkthread(threading.Thread):
def main(server, eventHandler):
try:
cmdline = server.runCommand(["getCmdLineAction"])
if cmdline and not cmdline['action']:
print(cmdline['msg'])
return
elif not cmdline or (cmdline['action'] and cmdline['action'][0] != "generateDotGraph"):
if not cmdline or cmdline[0] != "generateDotGraph":
print("This UI is only compatible with the -g option")
return
ret = server.runCommand(["generateDepTreeEvent", cmdline['action'][1], cmdline['action'][2]])
ret = server.runCommand(["generateDepTreeEvent", cmdline[1], cmdline[2]])
if ret != True:
print("Couldn't run command! %s" % ret)
return
@@ -250,13 +247,13 @@ def main(server, eventHandler):
continue

if isinstance(event, bb.event.CacheLoadCompleted):
pbar.hide()
gtk.gdk.threads_enter()
pbar.update(progress_total, progress_total)
gtk.gdk.threads_leave()
continue

if isinstance(event, bb.event.ParseStarted):
progress_total = event.total
if progress_total == 0:
continue
gtk.gdk.threads_enter()
pbar.set_title("Processing recipes")
pbar.update(0, progress_total)

@@ -82,12 +82,8 @@ def main (server, eventHandler):
try:
cmdline = server.runCommand(["getCmdLineAction"])
if not cmdline:
print("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
return 1
elif not cmdline['action']:
print(cmdline['msg'])
return 1
ret = server.runCommand(cmdline['action'])
ret = server.runCommand(cmdline)
if ret != True:
print("Couldn't get default commandline! %s" % ret)
return 1
@@ -109,8 +105,6 @@ def main (server, eventHandler):
# ignore interrupted io
if ioerror.args[0] == 4:
pass
except KeyboardInterrupt:
pass
finally:
server.runCommand(["stateStop"])

@@ -80,12 +80,8 @@ def main(server, eventHandler):
try:
cmdline = server.runCommand(["getCmdLineAction"])
if not cmdline:
print("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
return 1
elif not cmdline['action']:
print(cmdline['msg'])
return 1
ret = server.runCommand(cmdline['action'])
ret = server.runCommand(cmdline)
if ret != True:
print("Couldn't get default commandline! %s" % ret)
return 1
@@ -154,17 +150,12 @@ def main(server, eventHandler):
logger.info(event._message)
continue
if isinstance(event, bb.event.ParseStarted):
if event.total == 0:
continue
parseprogress = new_progress("Parsing recipes", event.total).start()
continue
if isinstance(event, bb.event.ParseProgress):
parseprogress.update(event.current)
continue
if isinstance(event, bb.event.ParseCompleted):
if not parseprogress:
continue

parseprogress.finish()
print(("Parsing of %d .bb files complete (%d cached, %d parsed). %d targets, %d skipped, %d masked, %d errors."
% ( event.total, event.cached, event.parsed, event.virtuals, event.skipped, event.masked, event.errors)))
@@ -232,7 +223,6 @@ def main(server, eventHandler):
bb.event.StampUpdate,
bb.event.ConfigParsed,
bb.event.RecipeParsed,
bb.event.RecipePreFinalise,
bb.runqueue.runQueueEvent,
bb.runqueue.runQueueExitWait)):
continue

@@ -232,12 +232,8 @@ class NCursesUI:
try:
cmdline = server.runCommand(["getCmdLineAction"])
if not cmdline:
print("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
return
elif not cmdline['action']:
print(cmdline['msg'])
return
ret = server.runCommand(cmdline['action'])
ret = server.runCommand(cmdline)
if ret != True:
print("Couldn't get default commandline! %s" % ret)
return

@@ -76,7 +76,7 @@ class BBUIEventQueue:
self.host, self.port = server.socket.getsockname()

server.register_function( self.system_quit, "event.quit" )
server.register_function( self.send_event, "event.sendpickle" )
server.register_function( self.send_event, "event.send" )
server.socket.settimeout(1)

self.EventHandle = self.BBServer.registerEventHandler(self.host, self.port)

@@ -402,24 +402,23 @@ def fileslocked(files):
for lock in locks:
bb.utils.unlockfile(lock)

def lockfile(name, shared=False, retry=True):
def lockfile(name, shared=False):
"""
Use the file fn as a lock file, return when the lock has been acquired.
Returns a variable to pass to unlockfile().
"""
dirname = os.path.dirname(name)
mkdirhier(dirname)
path = os.path.dirname(name)
if not os.path.isdir(path):
logger.error("Lockfile destination directory '%s' does not exist", path)
sys.exit(1)

if not os.access(dirname, os.W_OK):
logger.error("Unable to acquire lock '%s', directory is not writable",
name)
if not os.access(path, os.W_OK):
logger.error("Error, lockfile path is not writable!: %s" % path)
sys.exit(1)

op = fcntl.LOCK_EX
if shared:
op = fcntl.LOCK_SH
if not retry:
op = op | fcntl.LOCK_NB

while True:
# If we leave the lockfiles lying around there is no problem
@@ -444,15 +443,13 @@ def lockfile(name, shared=False, retry=True):
lf.close()
except Exception:
continue
if not retry:
return None

def unlockfile(lf):
"""
Unlock a file locked using lockfile()
"""
try:
# If we had a shared lock, we need to promote to exclusive before
# removing the lockfile. Attempt this, ignore failures.
fcntl.flock(lf.fileno(), fcntl.LOCK_EX|fcntl.LOCK_NB)
os.unlink(lf.name)
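# A typical use of the helpers above is to serialise access to a shared
# resource between bitbake processes. A small usage sketch (the lock path
# here is illustrative):
import bb.utils

lock = bb.utils.lockfile("/tmp/example.lock")   # blocks until the lock is held
try:
    pass  # ... work on the shared resource ...
finally:
    bb.utils.unlockfile(lock)
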
@@ -1,11 +0,0 @@
__version__ = "1.0.0"

import os, time
import sys,logging

def init_logger(logfile, loglevel):
numeric_level = getattr(logging, loglevel.upper(), None)
if not isinstance(numeric_level, int):
raise ValueError('Invalid log level: %s' % loglevel)
logging.basicConfig(level=numeric_level, filename=logfile)

@@ -1,100 +0,0 @@
import logging
import os.path
import errno
import sys
import warnings
import sqlite3

try:
import sqlite3
except ImportError:
from pysqlite2 import dbapi2 as sqlite3

sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
raise Exception("sqlite3 version 3.3.0 or later is required.")

class NotFoundError(StandardError):
pass

class PRTable():
def __init__(self,cursor,table):
self.cursor = cursor
self.table = table

#create the table
self._execute("CREATE TABLE IF NOT EXISTS %s \
(version TEXT NOT NULL, \
checksum TEXT NOT NULL, \
value INTEGER, \
PRIMARY KEY (version,checksum));"
% table)

def _execute(self, *query):
"""Execute a query, waiting to acquire a lock if necessary"""
count = 0
while True:
try:
return self.cursor.execute(*query)
except sqlite3.OperationalError as exc:
if 'database is locked' in str(exc) and count < 500:
count = count + 1
continue
raise
except sqlite3.IntegrityError as exc:
print "Integrity error %s" % str(exc)
break

def getValue(self, version, checksum):
data=self._execute("SELECT value FROM %s WHERE version=? AND checksum=?;" % self.table,
(version,checksum))
row=data.fetchone()
if row != None:
return row[0]
else:
#no value found, try to insert
self._execute("INSERT INTO %s VALUES (?, ?, (select ifnull(max(value)+1,0) from %s where version=?));"
% (self.table,self.table),
(version,checksum,version))
data=self._execute("SELECT value FROM %s WHERE version=? AND checksum=?;" % self.table,
(version,checksum))
row=data.fetchone()
if row != None:
return row[0]
else:
raise NotFoundError

class PRData(object):
"""Object representing the PR database"""
def __init__(self, filename):
self.filename=os.path.abspath(filename)
#build directory hierarchy
try:
os.makedirs(os.path.dirname(self.filename))
except OSError as e:
if e.errno != errno.EEXIST:
raise e
self.connection=sqlite3.connect(self.filename, timeout=5,
isolation_level=None)
self.cursor=self.connection.cursor()
self._tables={}

def __del__(self):
print "PRData: closing DB %s" % self.filename
self.connection.close()

def __getitem__(self,tblname):
if not isinstance(tblname, basestring):
raise TypeError("tblname argument must be a string, not '%s'" %
type(tblname))
if tblname in self._tables:
return self._tables[tblname]
else:
tableobj = self._tables[tblname] = PRTable(self.cursor, tblname)
return tableobj

def __delitem__(self, tblname):
if tblname in self._tables:
del self._tables[tblname]
logging.info("drop table %s" % (tblname))
self.cursor.execute("DROP TABLE IF EXISTS %s;" % tblname)
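# The intended access pattern for the classes above: PRData opens (or
# creates) the database file and hands out PRTable objects by name, and
# getValue() allocates and then stably returns a PR value per
# (version, checksum) pair. A sketch; the file name and keys are
# illustrative:
db = PRData("/tmp/example-prserv.sqlite3")
table = db["PRMAIN"]
pr = table.getValue("busybox-1.18.4-r0", "0123abcd")            # 0 on first use
assert pr == table.getValue("busybox-1.18.4-r0", "0123abcd")    # stable afterwards
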
@@ -1,198 +0,0 @@
import os,sys,logging
import signal,time, atexit
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import xmlrpclib,sqlite3

import bb.server.xmlrpc
import prserv
import prserv.db

if sys.hexversion < 0x020600F0:
print("Sorry, python 2.6 or later is required.")
sys.exit(1)

class Handler(SimpleXMLRPCRequestHandler):
def _dispatch(self,method,params):
try:
value=self.server.funcs[method](*params)
except:
import traceback
traceback.print_exc()
raise
return value

class PRServer(SimpleXMLRPCServer):
pidfile="/tmp/PRServer.pid"
def __init__(self, dbfile, logfile, interface, daemon=True):
''' constructor '''
SimpleXMLRPCServer.__init__(self, interface,
requestHandler=SimpleXMLRPCRequestHandler,
logRequests=False, allow_none=True)
self.dbfile=dbfile
self.daemon=daemon
self.logfile=logfile
self.host, self.port = self.socket.getsockname()
self.db=prserv.db.PRData(dbfile)
self.table=self.db["PRMAIN"]

self.register_function(self.getPR, "getPR")
self.register_function(self.quit, "quit")
self.register_function(self.ping, "ping")
self.register_introspection_functions()

def ping(self):
return not self.quit

def getPR(self, version, checksum):
try:
return self.table.getValue(version,checksum)
except prserv.NotFoundError:
logging.error("can not find value for (%s, %s)",version,checksum)
return None
except sqlite3.Error as exc:
logging.error(str(exc))
return None

def quit(self):
self.quit=True
return

def _serve_forever(self):
self.quit = False
self.timeout = 0.5
while not self.quit:
self.handle_request()

logging.info("PRServer: stopping...")
self.server_close()
return

def start(self):
if self.daemon is True:
logging.info("PRServer: starting daemon...")
self.daemonize()
else:
logging.info("PRServer: starting...")
self._serve_forever()

def delpid(self):
os.remove(PRServer.pidfile)

def daemonize(self):
"""
See Advanced Programming in the UNIX Environment, Section 13.3
"""
os.umask(0)

try:
pid = os.fork()
if pid > 0:
sys.exit(0)
except OSError as e:
sys.stderr.write("1st fork failed: %d %s\n" % (e.errno, e.strerror))
sys.exit(1)

os.setsid()
"""
fork again to make sure the daemon is not session leader,
which prevents it from acquiring a controlling terminal
"""
try:
pid = os.fork()
if pid > 0: #parent
sys.exit(0)
except OSError as e:
sys.stderr.write("2nd fork failed: %d %s\n" % (e.errno, e.strerror))
sys.exit(1)

os.chdir("/")

sys.stdout.flush()
sys.stderr.flush()
si = file('/dev/null', 'r')
so = file(self.logfile, 'a+')
se = so
os.dup2(si.fileno(),sys.stdin.fileno())
os.dup2(so.fileno(),sys.stdout.fileno())
os.dup2(se.fileno(),sys.stderr.fileno())

# write pidfile
atexit.register(self.delpid)
pid = str(os.getpid())
pf = file(PRServer.pidfile, 'w+')
pf.write("%s\n" % pid)
pf.write("%s\n" % self.host)
pf.write("%s\n" % self.port)
pf.close()

self._serve_forever()

class PRServerConnection():
def __init__(self, host, port):
self.connection = bb.server.xmlrpc._create_server(host, port)
self.host = host
self.port = port

def terminate(self):
# Don't wait for server indefinitely
import socket
socket.setdefaulttimeout(2)
try:
self.connection.quit()
except:
pass

def getPR(self, version, checksum):
return self.connection.getPR(version, checksum)

def ping(self):
return self.connection.ping()

def start_daemon(options):
try:
pf = file(PRServer.pidfile,'r')
pid = int(pf.readline().strip())
pf.close()
except IOError:
pid = None

if pid:
sys.stderr.write("pidfile %s already exists. Daemon already running?\n"
% PRServer.pidfile)
sys.exit(1)

server = PRServer(options.dbfile, interface=(options.host, options.port),
logfile=os.path.abspath(options.logfile))
server.start()

def stop_daemon():
try:
pf = file(PRServer.pidfile,'r')
pid = int(pf.readline().strip())
host = pf.readline().strip()
port = int(pf.readline().strip())
pf.close()
except IOError:
pid = None

if not pid:
sys.stderr.write("pidfile %s does not exist. Daemon not running?\n"
% PRServer.pidfile)
sys.exit(1)

PRServerConnection(host,port).terminate()
time.sleep(0.5)

try:
while 1:
os.kill(pid,signal.SIGTERM)
time.sleep(0.1)
except OSError as err:
err = str(err)
if err.find("No such process") > 0:
if os.path.exists(PRServer.pidfile):
os.remove(PRServer.pidfile)
else:
print err
sys.exit(1)

@@ -1,142 +0,0 @@
# This is a single Makefile to handle all generated Yocto Project documents.
# The Makefile needs to live in the documents directory and all figures used
# in any manuals must be PNG files and live in the individual book's figures
# directory.
#
# The Makefile has these targets:
#
# pdf: generates a PDF version of a manual. Not valid for the Quick Start.
# html: generates an HTML version of a manual.
# tarball: creates a tarball for the doc files.
# validate: validates the manual's XML source.
# publish: pushes generated files to the Yocto Project website.
# clean: removes generated files.
#
# The Makefile generates an HTML and PDF version of every document except the
# Yocto Project Quick Start. The Quick Start is in HTML form only.
# The command-line argument DOC represents the folder name in which a particular
# document is stored. The command-line argument VER represents the distro
# version of the Yocto Release for which the manuals are being generated.
# You must invoke the Makefile with the DOC and VER arguments.
# Examples:
#
# make DOC=bsp-guide VER=1.1
# make DOC=yocto-project-qs VER=1.1
# make pdf DOC=yocto-project-qs VER=1.1
#
# The first example generates the HTML and PDF versions of the BSP Guide for
# the Yocto Project 1.1 Release. The second example generates the HTML version
# of the Quick Start. The third example generates an error because you cannot
# generate a PDF version of the Quick Start.
#
# Use the publish target to push the generated manuals to the Yocto Project
# website. All files needed for the manual's HTML form are pushed as well as the
# PDF version (if applicable).
# Examples:
#
# make publish DOC=bsp-guide VER=1.1
# make publish DOC=adt-manual VER=1.1
#

ifeq ($(DOC),bsp-guide)
XSLTOPTS = --stringparam html.stylesheet style.css \
--stringparam chapter.autolabel 1 \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
ALLPREQ = html pdf tarball
TARFILES = style.css bsp-guide.html bsp-guide.pdf figures/bsp-title.png
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
FIGURES = figures
STYLESHEET = $(DOC)/*.css

endif

ifeq ($(DOC),yocto-project-qs)
XSLTOPTS = --stringparam html.stylesheet style.css \
--xinclude
ALLPREQ = html tarball
TARFILES = yocto-project-qs.html style.css figures/yocto-environment.png figures/building-an-image.png figures/using-a-pre-built-image.png figures/yocto-project-transp.png
MANUALS = $(DOC)/$(DOC).html
FIGURES = figures
STYLESHEET = $(DOC)/*.css
endif

ifeq ($(DOC),poky-ref-manual)
XSLTOPTS = --stringparam html.stylesheet style.css \
--stringparam chapter.autolabel 1 \
--stringparam appendix.autolabel A \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
ALLPREQ = html pdf tarball
TARFILES = poky-ref-manual.html style.css figures/poky-title.png figures/ss-sato.png
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
FIGURES = figures
STYLESHEET = $(DOC)/*.css
endif


ifeq ($(DOC),adt-manual)
XSLTOPTS = --stringparam html.stylesheet style.css \
--stringparam chapter.autolabel 1 \
--stringparam appendix.autolabel A \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
ALLPREQ = html pdf tarball
TARFILES = adt-manual.html adt-manual.pdf style.css figures/adt-title.png
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
FIGURES = figures
STYLESHEET = $(DOC)/*.css
endif

ifeq ($(DOC),kernel-manual)
XSLTOPTS = --stringparam html.stylesheet style.css \
--stringparam chapter.autolabel 1 \
--stringparam appendix.autolabel A \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
ALLPREQ = html pdf tarball
TARFILES = kernel-manual.html kernel-manual.pdf style.css figures/kernel-title.png figures/kernel-architecture-overview.png
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
FIGURES = figures
STYLESHEET = $(DOC)/*.css
endif


##
# These URIs should be rewritten by your distribution's XML catalog to
# match your locally installed XSL stylesheets.
XSL_BASE_URI = http://docbook.sourceforge.net/release/xsl/current
XSL_XHTML_URI = $(XSL_BASE_URI)/xhtml/docbook.xsl

all: $(ALLPREQ)

pdf:
ifeq ($(DOC),yocto-project-qs)
@echo " "
@echo "ERROR: You cannot generate a PDF file for the Yocto Project Quick Start"
@echo " "
else
cd $(DOC); ../tools/poky-docbook-to-pdf $(DOC).xml ../template; cd ..
endif

html:
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
cd $(DOC); xsltproc $(XSLTOPTS) -o $(DOC).html $(DOC)-customization.xsl $(DOC).xml; cd ..

tarball: html
cd $(DOC); tar -cvzf $(DOC).tgz $(TARFILES); cd ..

validate:
cd $(DOC); xmllint --postvalid --xinclude --noout $(DOC).xml; cd ..


publish:
scp -r $(MANUALS) $(STYLESHEET) www.yoctoproject.org:/srv/www/www.yoctoproject.org-docs/$(VER)/$(DOC)
cd $(DOC); scp -r $(FIGURES) www.yoctoproject.org:/srv/www/www.yoctoproject.org-docs/$(VER)/$(DOC)/figures

clean:
rm -f $(MANUALS)
@@ -1,67 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<chapter id='using-the-command-line'>
<title>Using the Command Line</title>
<para>
Recall that earlier we talked about how to use an existing toolchain
tarball that had been installed into <filename>/opt/poky</filename>,
which is outside of the Yocto Project build tree
(see <xref linkend='using-an-existing-toolchain-tarball'>
“Using an Existing Toolchain Tarball”</xref>).
Recall also that sourcing your architecture-specific environment setup script
initializes a suitable cross-toolchain development environment.
This setup occurs by adding the compiler, QEMU scripts, QEMU binary,
a special version of <filename>pkgconfig</filename> and other useful
utilities to the <filename>PATH</filename> variable.
Variables to assist <filename>pkgconfig</filename> and <filename>autotools</filename>
are also defined so that,
for example, <filename>configure.sh</filename> can find pre-generated
test results for tests that need target hardware on which to run.
These conditions allow you to easily use the toolchain outside of the
Yocto Project build environment on both autotools-based projects and
makefile-based projects.
</para>
</para>
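<para>
For example, sourcing the setup script for the ARM toolchain discussed
below, assuming it was installed to the default <filename>/opt/poky</filename>
location, might look like the following sketch:
<literallayout class='monospaced'>
$ source /opt/poky/environment-setup-armv5te-poky-linux-gnueabi
</literallayout>
</para>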

<section id='autotools-based-projects'>
<title>Autotools-Based Projects</title>
<para>
For an autotools-based project you can use the cross-toolchain by just
passing the appropriate host option to <filename>configure.sh</filename>.
The host option you use is derived from the name of the environment setup
script in <filename>/opt/poky</filename> resulting from unpacking the
cross-toolchain tarball.
For example, the host option for an ARM-based target that uses the GNU EABI
is <filename>armv5te-poky-linux-gnueabi</filename>.
Note that the name of the script is
<filename>environment-setup-armv5te-poky-linux-gnueabi</filename>.
Thus, the following command works:
<literallayout class='monospaced'>
$ configure --host=armv5te-poky-linux-gnueabi --with-libtool-sysroot=<sysroot-dir>
</literallayout>
</para>
<para>
This single command updates your project and rebuilds it using the appropriate
cross-toolchain tools.
</para>
</section>

<section id='makefile-based-projects'>
<title>Makefile-Based Projects</title>
<para>
For a makefile-based project you use the cross-toolchain by making sure
the tools are used.
You can do this as follows:
<literallayout class='monospaced'>
CC=arm-poky-linux-gnueabi-gcc
LD=arm-poky-linux-gnueabi-ld
CFLAGS="${CFLAGS} --sysroot=<sysroot-dir>"
CXXFLAGS="${CXXFLAGS} --sysroot=<sysroot-dir>"
</literallayout>
</para>
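<para>
With those settings in place, a build might look like the following
(a sketch, assuming the variables above are passed on the
<filename>make</filename> command line):
<literallayout class='monospaced'>
$ make CC=arm-poky-linux-gnueabi-gcc LD=arm-poky-linux-gnueabi-ld \
       CFLAGS="${CFLAGS} --sysroot=<sysroot-dir>" \
       CXXFLAGS="${CXXFLAGS} --sysroot=<sysroot-dir>"
</literallayout>
</para>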
</section>

</chapter>
<!--
vim: expandtab tw=80 ts=4
-->
@@ -1,436 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<chapter id='adt-eclipse'>
<title>Working Within Eclipse</title>
<para>
The Eclipse IDE is a popular development environment and it fully supports
development using the Yocto Project.
When you install and configure the Eclipse Yocto Project Plug-in into
the Eclipse IDE you maximize your Yocto Project design experience.
Installing and configuring the Plug-in results in an environment that
has extensions specifically designed to let you more easily develop software.
These extensions allow for cross-compilation and deployment and execution of
your output into a QEMU emulation session.
You can also perform cross-debugging and profiling.
The environment also has a suite of tools that allows you to perform
remote profiling, tracing, collection of power data, collection of
latency data, and collection of performance data.
</para>
<para>
This section describes how to install and configure the Eclipse IDE
Yocto Plug-in and how to use it to develop your Yocto Project applications.
</para>

<section id='setting-up-the-eclipse-ide'>
<title>Setting Up the Eclipse IDE</title>
<para>
To develop within the Eclipse IDE you need to do the following:
<orderedlist>
<listitem><para>Be sure the optimal version of the Eclipse IDE
is installed.</para></listitem>
<listitem><para>Install Eclipse plug-in requirements prior to installing
the Eclipse Yocto Plug-in.</para></listitem>
<listitem><para>Configure the Eclipse Yocto Plug-in.</para></listitem>
</orderedlist>
</para>

<section id='installing-eclipse-ide'>
<title>Installing Eclipse IDE</title>
<para>
It is recommended that you have the Indigo 3.7 version of the
Eclipse IDE installed on your development system.
If you don’t have this version you can find it at
<ulink url='http://www.eclipse.org/downloads'></ulink>.
From that site, choose the Eclipse Classic version.
This version contains the Eclipse Platform, the Java Development
Tools (JDT), and the Plug-in Development Environment.
</para>
<para>
Once you have downloaded the tarball, extract it into a clean
directory and complete the installation.
</para>
<para>
One issue exists that you need to be aware of regarding the Java
Virtual Machine’s garbage collection (GC) process.
The GC process does not clean up the permanent generation
space (PermGen).
This space stores meta-data descriptions of classes.
The default value is set too small and it could trigger an
out-of-memory error such as the following:
<literallayout class='monospaced'>
java.lang.OutOfMemoryError: PermGen space
</literallayout>
</para>
<para>
This error causes the application to hang.
</para>
<para>
To fix this issue you can use the -vmargs option when you start
Eclipse to increase the size of the permanent generation space:
<literallayout class='monospaced'>
eclipse -vmargs -XX:PermSize=256M
</literallayout>
</para>
</section>

<section id='installing-required-plug-ins-and-the-eclipse-yocto-plug-in'>
<title>Installing Required Plug-ins and the Eclipse Yocto Plug-in</title>
<para>
Before installing the Yocto Plug-in you need to be sure that the
CDT 8.0, RSE 3.2, and Autotools plug-ins are all installed in the
following order.
After installing these three plug-ins, you can install the
Eclipse Yocto Plug-in.
Use the following URLs for the plug-ins:
<orderedlist>
<listitem><para><emphasis>CDT 8.0</emphasis> –
<ulink url='http://download.eclipse.org/tools/cdt/releases/indigo/'></ulink>:
For CDT main features select the checkbox so you get all items.
For CDT optional features expand the selections and check
“C/C++ Remote Launch”.</para></listitem>
<listitem><para><emphasis>RSE 3.2</emphasis> –
<ulink url='http://download.eclipse.org/tm/updates/3.2'></ulink>:
Check the box next to “TM and RSE Main Features” so you select all
those items.
Note that all items in the main features depend on the 3.2.1 version.
Expand the items under “TM and RSE Uncategorized 3.2.1” and
select the following: “Remote System Explorer End-User Runtime”,
“Remote System Explorer Extended SDK”, “Remote System Explorer User Actions”,
“RSE Core”, “RSE Terminals UI”, and “Target Management Terminal”.</para></listitem>
<listitem><para><emphasis>Autotools</emphasis> –
<ulink url='http://download.eclipse.org/technology/linuxtools/update/'></ulink>:
Expand the items under “Linux Tools” and select “Autotools support for
CDT (Incubation)”.</para></listitem>
<listitem><para><emphasis>Yocto Plug-in</emphasis> –
<ulink url='http://www.yoctoproject.org/downloads/eclipse-plugin/1.0'></ulink>:
Check the box next to “Development tools &amp; SDKs for Yocto Linux”
to select all the items.</para></listitem>
</orderedlist>
</para>
<para>
Follow these general steps to install a plug-in:
<orderedlist>
<listitem><para>From within the Eclipse IDE select the
“Install New Software” item from the “Help” menu.</para></listitem>
<listitem><para>Click “Add…” in the “Work with:” area.</para></listitem>
<listitem><para>Enter the URL for the repository and leave the “Name”
field blank.</para></listitem>
<listitem><para>Check the boxes next to the software you need to
install and then complete the installation.
For information on the specific software packages you need to include,
see the previous list.</para></listitem>
</orderedlist>
</para>
</section>

<section id='configuring-the-plug-in'>
<title>Configuring the Plug-in</title>
<para>
Configuring the Eclipse Yocto Plug-in involves choosing the Cross
Compiler Options, selecting the Target Architecture, and choosing
the Target Options.
These settings are the default settings for all projects.
You can change them later for a given project when you configure that
project.
See the “Configuring the Cross-Toolchains” section later in the manual.
</para>
<para>
To start, you need to do the following from within the Eclipse IDE:
<itemizedlist>
<listitem><para>Choose Window -> Preferences to display
the Preferences Dialog.</para></listitem>
<listitem><para>Click “Yocto SDK”.</para></listitem>
</itemizedlist>
</para>

<section id='configuring-the-cross-compiler-options'>
<title>Configuring the Cross-Compiler Options</title>
<para>
Choose between “Stand-alone Prebuilt Toolchain” and “Build System Derived Toolchain” for Cross
Compiler Options.
<itemizedlist>
<listitem><para><emphasis>Stand-alone Prebuilt Toolchain</emphasis> – Select this mode
when you are not concerned with building a target image or you do not have
a Yocto Project build tree on your development system.
For example, suppose you are an application developer and do not
need to build a target image.
Instead, you just want to use an architecture-specific toolchain on an
existing kernel and target root filesystem.
When you use Stand-alone Prebuilt Toolchain you are using the toolchain installed
in the <filename>/opt/poky</filename> directory.</para></listitem>
<listitem><para><emphasis>Build System Derived Toolchain</emphasis> – Select this mode
if you are building images for target hardware or your
development environment already has a Yocto Project build tree.
In this case you likely already have a Yocto Project build tree installed on
your system or you (or someone else) will be building one.
When you select Build System Derived Toolchain you are using the toolchain bundled
inside the Yocto Project build tree.
If you use this mode you must also supply the Yocto Project build directory
in the Preferences Dialog.</para></listitem>
</itemizedlist>
</para>
</section>

<section id='configuring-the-sysroot'>
<title>Configuring the Sysroot</title>
<para>
Specify the sysroot location, which is where the root filesystem for the
target hardware is created on the development system by the ADT Installer.
The QEMU user-space tools, the
NFS boot process and the cross-toolchain all use the sysroot location,
regardless of which mode you select (Stand-alone Prebuilt Toolchain or Build System Derived Toolchain).
</para>
</section>

<section id='selecting-the-target-architecture'>
<title>Selecting the Target Architecture</title>
<para>
Use the pull-down Target Architecture menu and select the
target architecture.
</para>
<para>
The Target Architecture is the type of hardware you are
going to use or emulate.
This pull-down menu should have the supported architectures.
If the architecture you need is not listed in the menu then you
will need to revisit the
<xref linkend='adt-prepare'>
“Preparing to Use the Application Development Toolkit (ADT)”</xref>
section earlier in this document.
</para>
</section>

<section id='choosing-the-target-options'>
<title>Choosing the Target Options</title>
<para>
You can choose to emulate hardware using the QEMU emulator, or you
can choose to use actual hardware.
<itemizedlist>
<listitem><para><emphasis>External HW</emphasis> – Select this option
if you will be using actual hardware.</para></listitem>
<listitem><para><emphasis>QEMU</emphasis> – Select this option if
you will be using the QEMU emulator.
If you are using the emulator you also need to locate the kernel
and specify any custom options.</para>
<para>If you select Build System Derived Toolchain the target kernel you built
will be located in the
Yocto Project build tree in the <filename>tmp/deploy/images</filename> directory.
If you select Stand-alone Prebuilt Toolchain the pre-built kernel you downloaded is located
in the directory you specified when you downloaded the image.</para>
<para>Most custom options are for advanced QEMU users to further
customize their QEMU instance.
These options are specified between paired angled brackets.
Some options must be specified outside the brackets.
In particular, the options <filename>serial</filename>,
<filename>nographic</filename>, and <filename>kvm</filename> must all
be outside the brackets.
Use the <filename>man qemu</filename> command to get help on all the options
and their use.
The following is an example:
<literallayout class='monospaced'>
serial '<-m 256 -full-screen>'
</literallayout>
</para>
<para>
Regardless of the mode, the Sysroot is already defined in the “Sysroot”
field.</para></listitem>
</itemizedlist>
</para>
<para>
Click the “OK” button to save your plug-in configurations.
</para>
</section>
</section>
</section>

<section id='creating-the-project'>
<title>Creating the Project</title>
<para>
You can create two types of projects: Autotools-based or Makefile-based.
This section describes how to create autotools-based projects from within
the Eclipse IDE.
For information on creating projects in a terminal window, see the
<xref linkend='using-the-command-line'> “Using the Command Line”</xref>
section.
</para>
<para>
To create a project based on a Yocto template and then display the source code,
follow these steps:
<orderedlist>
<listitem><para>Select File -> New -> Project.</para></listitem>
<listitem><para>Double click “C/C++”.</para></listitem>
<listitem><para>Double click “C Project” to create the project.</para></listitem>
<listitem><para>Double click “Yocto SDK Project”.</para></listitem>
<listitem><para>Select “Hello World ANSI C Autotools Project”.
This is an Autotools-based project based on a Yocto Project template.</para></listitem>
<listitem><para>Put a name in the “Project name:” field.</para></listitem>
<listitem><para>Click “Next”.</para></listitem>
<listitem><para>Add information in the “Author” field.</para></listitem>
<listitem><para>Use “GNU General Public License v2.0” for the License.</para></listitem>
<listitem><para>Click “Finish”.</para></listitem>
<listitem><para>Answer “Yes” to the open perspective prompt.</para></listitem>
<listitem><para>In the Project Explorer expand your project.</para></listitem>
<listitem><para>Expand “src”.</para></listitem>
<listitem><para>Double click on your source file and the code appears
in the window.
This is the template.</para></listitem>
</orderedlist>
</para>
</section>

<section id='configuring-the-cross-toolchains'>
<title>Configuring the Cross-Toolchains</title>
<para>
The previous section, <xref linkend='configuring-the-cross-compiler-options'>
“Configuring the Cross-Compiler Options”</xref>, set up the default project
configurations.
You can change these settings for a given project by following these steps:
<orderedlist>
<listitem><para>Select Project -> Invoke Yocto Tools -> Reconfigure Yocto.
This brings up the project's Yocto Settings Dialog.
The information in this dialogue is identical to that chosen earlier
for the Cross Compiler Option (Stand-alone Prebuilt Toolchain or Build System Derived Toolchain),
the Target Architecture, and the Target Options.
The settings are inherited from the Yocto Plug-in configuration performed
after installing the plug-in.</para></listitem>
<listitem><para>Select Project -> Reconfigure Project.
This runs the <filename>autogen.sh</filename> in the workspace for your project.
The script runs <filename>libtoolize</filename>, <filename>aclocal</filename>,
<filename>autoconf</filename>, <filename>autoheader</filename>,
<filename>automake -a</filename>, and
<filename>./configure</filename>.</para></listitem>
</orderedlist>
</para>
</section>

<section id='building-the-project'>
<title>Building the Project</title>
<para>
To build the project, select Project -> Build Project.
The console should update and you can note the cross-compiler you are using.
</para>
</section>

<section id='starting-qemu-in-user-space-nfs-mode'>
<title>Starting QEMU in User Space NFS Mode</title>
<para>
To start the QEMU emulator from within Eclipse, follow these steps:
<orderedlist>
<listitem><para>Select Run -> External Tools -> External Tools Configurations...
This selection brings up the External Tools Configurations Dialogue.</para></listitem>
<listitem><para>Go to the left navigation area and expand “Program”.
You should find the image listed.
For example, qemu-x86_64-poky-linux.</para></listitem>
<listitem><para>Click on the image.
This brings up a new environment in the main area of the External
Tools Configurations Dialogue.
The Main tab is selected.</para></listitem>
<listitem><para>Click “Run” next.
This brings up a shell window.</para></listitem>
<listitem><para>Enter your host root password in the shell window at the prompt.
This sets up a Tap 0 connection needed for running in user-space NFS mode.</para></listitem>
<listitem><para>Wait for QEMU to launch.</para></listitem>
<listitem><para>Once QEMU launches you need to determine the IP Address
for the user-space NFS.
You can do that by going to a terminal in the QEMU and entering the
<filename>ifconfig</filename> command, as in the example following this list.</para></listitem>
</orderedlist>
</para>
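<para>
For example, from a terminal prompt inside the QEMU session (a sketch; the
interface name can vary by image):
<literallayout class='monospaced'>
# ifconfig eth0
</literallayout>
</para>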
</section>

<section id='deploying-and-debugging-the-application'>
<title>Deploying and Debugging the Application</title>
<para>
Once QEMU is running you can deploy your application and use the emulator
to perform debugging.
Follow these steps to deploy the application:
<orderedlist>
<listitem><para>Select Run -> Debug Configurations...</para></listitem>
<listitem><para>In the left area expand “C/C++ Remote Application”.</para></listitem>
<listitem><para>Locate your project and select it to bring up a new
tabbed view in the Debug Configurations dialogue.</para></listitem>
<listitem><para>Enter the absolute path into which you want to deploy
the application.
Use the “Remote Absolute File Path for C/C++ Application:” field.
For example, enter <filename>/usr/bin/<programname></filename>.</para></listitem>
<listitem><para>Click on the Debugger tab to see the cross-tool debugger
you are using.</para></listitem>
<listitem><para>Create a new connection to the QEMU instance
by clicking on “new”.</para></listitem>
<listitem><para>Select “TCF”, which means Target Communication Framework.</para></listitem>
<listitem><para>Click “Next”.</para></listitem>
<listitem><para>Clear out the “host name” field and enter the IP Address
determined earlier.</para></listitem>
<listitem><para>Click “Finish” to close the new connections dialogue.</para></listitem>
<listitem><para>Use the drop-down menu now in the “Connection” field and pick
the IP Address you entered.</para></listitem>
<listitem><para>Click “Debug” to bring up a login screen and log in.</para></listitem>
<listitem><para>Accept the debug perspective.</para></listitem>
</orderedlist>
</para>
</section>

<section id='running-user-space-tools'>
<title>Running User-Space Tools</title>
<para>
As mentioned earlier in the manual, several tools exist that enhance
your development experience.
These tools are aids in developing and debugging applications and images.
You can run these user-space tools from within the Yocto Eclipse
Plug-in through the Window -> YoctoTools menu.
</para>
<para>
Once you pick a tool you need to configure it for the remote target.
Every tool needs to have the connection configured.
You must select an existing TCF-based RSE connection to the remote target.
If one does not exist, click "New" to create one.
</para>
<para>
Here are some specifics about the remote tools:
<itemizedlist>
<listitem><para><emphasis>OProfile:</emphasis> Selecting this tool launches
the oprofile-viewer on the local host machine, which connects to the
oprofile-server running on the remote target.
The oprofile-viewer must be installed on the local host machine and the
oprofile-server must be installed on the remote target in order
to use this tool.
You can locate both the viewer and server from
<ulink url='http://git.yoctoproject.org/cgit/cgit.cgi/oprofileui/'></ulink>.
You need to compile and install the oprofile-viewer from the source code
on your local host machine.
The oprofile-server is installed by default in the image.</para></listitem>
<listitem><para><emphasis>Lttng-ust:</emphasis> Selecting this tool runs
<filename>usttrace</filename> on the remote target, transfers the output data back to the
local host machine and uses <filename>lttv-gui</filename> to graphically display the output.
The <filename>lttv-gui</filename> must be installed on the local host machine to use this tool.
For information on how to use <filename>lttng</filename> to trace an application, see
<ulink url='http://lttng.org/files/ust/manual/ust.html'></ulink>.</para>
<para>For "Application" you must supply the absolute path name of the
application to be traced by user mode lttng.
For example, typing <filename>/path/to/foo</filename> triggers
<filename>usttrace /path/to/foo</filename> on the remote target to trace the
program <filename>/path/to/foo</filename>.</para>
<para>"Argument" is passed to <filename>usttrace</filename>
running on the remote target.</para></listitem>
<listitem><para><emphasis>PowerTOP:</emphasis> Selecting this tool runs
"PowerTOP" on the remote target machine and displays the results in a
new view called "powertop".</para>
<para>"Time to gather data(sec):" is the time passed in seconds before data
is gathered from the remote target for analysis.</para>
<para>"show pids in wakeups list:" corresponds to the <filename>-p</filename> argument
passed to <filename>powertop</filename>.</para></listitem>
<listitem><para><emphasis>LatencyTOP and Perf:</emphasis> "LatencyTOP"
identifies system latency, while <filename>perf</filename> monitors the system's
performance counter registers.
Selecting either of these tools causes an RSE terminal view to appear
from which you can run the tools.
Both tools refresh the entire screen to display results while they run.</para></listitem>
</itemizedlist>
</para>
</section>

</chapter>
<!--
vim: expandtab tw=80 ts=4
-->
@@ -1,129 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<chapter id='adt-intro'>

<title>Application Development Toolkit (ADT) User's Guide</title>

<para>
Welcome to the Application Development Toolkit User’s Guide. This manual provides
information that lets you get going with the ADT to develop projects using the Yocto
Project.
</para>

<section id='book-intro'>
<title>Introducing the Application Development Toolkit (ADT)</title>
<para>
Fundamentally, the ADT consists of an architecture-specific cross-toolchain and
a matching sysroot that are both built by the Poky build system.
The toolchain and sysroot are based on a metadata configuration and extensions,
which allows you to cross develop for the target on the host machine.
</para>
<para>
Additionally, to provide an effective development platform, the Yocto Project
makes available and suggests other tools you can use with the ADT.
These other tools include the Eclipse IDE Yocto Plug-in, an emulator (QEMU),
and various user-space tools that greatly enhance your development experience.
</para>
<para>
The resulting combination of the architecture-specific cross-toolchain and sysroot
along with these additional tools yields a custom-built, cross-development platform
for a user-targeted product.
</para>

<section id='the-cross-toolchain'>
<title>The Cross-Toolchain</title>
<para>
The cross-toolchain consists of a cross-compiler, cross-linker, and cross-debugger
that are used to develop for targeted hardware.
This toolchain is created either by running the ADT Installer script or
through a Yocto Project build tree that is based on your metadata
configuration or extension for your targeted device.
The cross-toolchain works with a matching target sysroot.
</para>
</section>

<section id='sysroot'>
<title>Sysroot</title>
<para>
The matching target sysroot contains the headers and libraries needed for generating
binaries that run on the target architecture.
The sysroot is based on the target root filesystem image that is built by
Poky and uses the same metadata configuration used to build the cross-toolchain.
</para>
</section>

<section id='the-qemu-emulator'>
<title>The QEMU Emulator</title>
<para>
The QEMU emulator allows you to simulate your hardware while running your
application or image.
QEMU is made available in a number of ways:
<itemizedlist>
<listitem><para>If you use the ADT Installer script to install the ADT, you can
specify whether or not to install QEMU.</para></listitem>
<listitem><para>If you have downloaded a Yocto Project release, unpacked
it to create a Yocto Project source directory, and then sourced
the Yocto Project environment setup script, QEMU is installed and automatically
available.</para></listitem>
<listitem><para>If you have installed the cross-toolchain
tarball and then sourced the toolchain's setup environment script, QEMU
is installed and automatically available.</para></listitem>
</itemizedlist>
</para>
</section>

<section id='user-space-tools'>
<title>User-Space Tools</title>
<para>
User-space tools are included as part of the distribution.
You will find these tools helpful during development.
The tools include LatencyTOP, PowerTOP, OProfile, Perf, SystemTap, and Lttng-ust.
These tools are common development tools for the Linux platform.
<itemizedlist>
<listitem><para><emphasis>LatencyTOP</emphasis> – LatencyTOP focuses on latency
that causes skips in audio,
stutters in your desktop experience, or situations that overload your server
even when you have plenty of CPU power left.
You can find out more about LatencyTOP at
<ulink url='http://www.latencytop.org/'></ulink>.
</para></listitem>
<listitem><para><emphasis>PowerTOP</emphasis> – Helps you determine what
software is using the most power.
You can find out more about PowerTOP at
<ulink url='http://www.linuxpowertop.org/'></ulink>.
</para></listitem>
<listitem><para><emphasis>OProfile</emphasis> – A system-wide profiler for Linux
systems that is capable
of profiling all running code at low overhead.
You can find out more about OProfile at
<ulink url='http://oprofile.sourceforge.net/about/'></ulink>.
</para></listitem>
<listitem><para><emphasis>Perf</emphasis> – Performance counters for Linux used
to keep track of certain
types of hardware and software events.
For more information on these types of counters see
<ulink url='https://perf.wiki.kernel.org/index.php'></ulink> and click
on “Perf tools.”
</para></listitem>
<listitem><para><emphasis>SystemTap</emphasis> – A free software infrastructure
that simplifies
information gathering about a running Linux system.
This information helps you diagnose performance or functional problems.
SystemTap is not available as a user-space tool through the Yocto Eclipse IDE Plug-in.
See <ulink url='http://sourceware.org/systemtap'></ulink> for more information
on SystemTap.
</para></listitem>
<listitem><para><emphasis>Lttng-ust</emphasis> – A User-space Tracer designed to
provide detailed information on user-space activity.
See <ulink url='http://lttng.org/ust'></ulink> for more information on Lttng-ust.
</para></listitem>
</itemizedlist>
</para>
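<para>
As a quick illustration of the kind of workflow these tools support, the
following sketch profiles a hypothetical application binary with
<filename>perf</filename> (the application name is a placeholder):
<literallayout class='monospaced'>
# record samples while the workload runs, then browse the summary
$ perf record ./my-app
$ perf report
</literallayout>
</para>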
</section>
</section>

</chapter>
<!--
vim: expandtab tw=80 ts=4
-->
@@ -1,75 +0,0 @@
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<book id='adt-manual' lang='en'
xmlns:xi="http://www.w3.org/2003/XInclude"
xmlns="http://docbook.org/ns/docbook"
>
<bookinfo>

<mediaobject>
<imageobject>
<imagedata fileref='figures/adt-title.png'
format='SVG'
align='left' scalefit='1' width='100%'/>
</imageobject>
</mediaobject>

<title></title>

<authorgroup>
<author>
<firstname>Jessica</firstname> <surname>Zhang</surname>
<affiliation>
<orgname>Intel Corporation</orgname>
</affiliation>
<email>jessica.zhang@intel.com</email>
</author>
</authorgroup>

<revhistory>
<revision>
<revnumber>1.0</revnumber>
<date>6 April 2011</date>
<revremark>Initial Document released with Yocto Project 1.0 on 6 April 2011.</revremark>
</revision>
<revision>
<revnumber>1.0.1</revnumber>
<date>23 May 2011</date>
<revremark>Released with Yocto Project 1.0.1 on 23 May 2011.</revremark>
</revision>
</revhistory>

<copyright>
<year>2010-2011</year>
<holder>Linux Foundation</holder>
</copyright>

<legalnotice>
<para>
Permission is granted to copy, distribute and/or modify this document under
the terms of the <ulink type="http" url="http://creativecommons.org/licenses/by-sa/2.0/uk/">Creative Commons Attribution-Share Alike 2.0 UK: England & Wales</ulink> as published by Creative Commons.
</para>
</legalnotice>

</bookinfo>

<xi:include href="adt-intro.xml"/>

<xi:include href="adt-prepare.xml"/>

<xi:include href="adt-package.xml"/>

<xi:include href="adt-eclipse.xml"/>

<xi:include href="adt-command.xml"/>

<!-- <index id='index'>
<title>Index</title>
</index>
-->

</book>
<!--
vim: expandtab tw=80 ts=4
-->
@@ -1,82 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<chapter id='adt-package'>
<title>Optionally Customizing the Development Packages Installation</title>
<para>
Because the Yocto Project is suited for embedded Linux development, it is
likely that you will need to customize your development packages installation.
For example, if you are developing a minimal image, then you might not need
certain packages (e.g. graphics support packages).
Thus, you would like to be able to remove those packages from your target sysroot.
</para>

<section id='package-management-systems'>
<title>Package Management Systems</title>
<para>
The Yocto Project supports the generation of sysroot files using
three different Package Management Systems (PMS):
<itemizedlist>
<listitem><para><emphasis>OPKG</emphasis> – A less well known PMS whose use
originated in the OpenEmbedded and OpenWrt embedded Linux projects.
This PMS works with files packaged in an <filename>.ipk</filename> format.
See <ulink url='http://en.wikipedia.org/wiki/Opkg'></ulink> for more
information about OPKG.</para></listitem>
<listitem><para><emphasis>RPM</emphasis> – A more widely known PMS intended for GNU/Linux
distributions.
This PMS works with files packaged in an <filename>.rpm</filename> format.
The Yocto Project currently installs through this PMS by default.
See <ulink url='http://en.wikipedia.org/wiki/RPM_Package_Manager'></ulink>
for more information about RPM.</para></listitem>
<listitem><para><emphasis>Debian</emphasis> – The PMS for Debian-based systems
is built on many PMS tools.
The lower-level PMS tool <filename>dpkg</filename> forms the base of the Debian PMS.
For information on dpkg see
<ulink url='http://en.wikipedia.org/wiki/Dpkg'></ulink>.</para></listitem>
</itemizedlist>
</para>
</section>

<section id='configuring-the-pms'>
<title>Configuring the PMS</title>
<para>
Whichever PMS you are using, you need to be sure that the
<filename>PACKAGE_CLASSES</filename> variable in the <filename>conf/local.conf</filename>
file is set to reflect that system.
The first value you choose for the variable specifies the package file format for the root
filesystem at sysroot.
Additional values specify additional formats for convenience or testing.
See the configuration file for details.
</para>
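<para>
As a sketch, a <filename>conf/local.conf</filename> entry that selects OPKG
as the primary format while also producing RPM packages for testing might
look like the following (the exact set of values is up to you):
<literallayout class='monospaced'>
# the first class listed determines the root filesystem package format
PACKAGE_CLASSES ?= "package_ipk package_rpm"
</literallayout>
</para>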
<para>
As an example, consider a scenario where you are using OPKG and you want to add
the <filename>libglade</filename> package to the target sysroot.
</para>
<para>
First, you should generate the ipk file for the <filename>libglade</filename> package and add it
into a working opkg repository.
Use these commands:
<literallayout class='monospaced'>
$ bitbake libglade
$ bitbake package-index
</literallayout>
</para>
<para>
Next, source the environment setup script found in the Yocto Project source directory.
Follow that by setting up the installation destination to point to your
sysroot as <filename><sysroot_dir></filename>.
Finally, have an opkg configuration file <filename><conf_file></filename>
that corresponds to the opkg repository you have just created.
The following command forms should now work:
<literallayout class='monospaced'>
$ opkg-cl -f <conf_file> -o <sysroot_dir> update
$ opkg-cl -f <conf_file> -o <sysroot_dir> --force-overwrite install libglade
$ opkg-cl -f <conf_file> -o <sysroot_dir> --force-overwrite install libglade-dbg
$ opkg-cl -f <conf_file> -o <sysroot_dir> --force-overwrite install libglade-dev
</literallayout>
</para>
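<para>
The contents of <filename><conf_file></filename> are not spelled out in this
manual; as a minimal sketch, an opkg configuration pointing at the repository
generated above might contain entries of the following form (the feed name
and URL are placeholders for your own setup):
<literallayout class='monospaced'>
# package feed created by "bitbake package-index"
src/gz local-feed http://my.server/deploy/ipk
# install relative to the offline root given with -o
dest root /
</literallayout>
</para>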
</section>
</chapter>
<!--
vim: expandtab tw=80 ts=4
-->
@@ -1,356 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<chapter id='adt-prepare'>

<title>Preparing to Use the Application Development Toolkit (ADT)</title>

<para>
In order to use the ADT you must install it, source a script to set up the
environment, and be sure that the kernel and filesystem image specific to the
target architecture exist.
</para>

<para>
This section describes how to be sure you meet these requirements.
Throughout this section two important terms are used:
<itemizedlist>
<listitem><para><emphasis>Yocto Project Source Tree:</emphasis>
This term refers to the directory structure created as a result of downloading
and unpacking a Yocto Project release tarball.
The Yocto Project source tree contains BitBake, documentation, metadata and
other files.
The name of the top-level directory of the Yocto Project source tree
is derived from the Yocto Project release tarball.
For example, downloading and unpacking <filename>poky-bernard-5.0.1.tar.bz2</filename>
results in a Yocto Project source tree whose Yocto Project source directory is named
<filename>poky-bernard-5.0.1</filename>.</para></listitem>
<listitem><para><emphasis>Yocto Project Build Tree:</emphasis>
This term refers to the area where you run your builds.
The area is created when you source the Yocto Project setup environment script
that is found in the Yocto Project source directory
(e.g. <filename>poky-init-build-env</filename>).
You can create the Yocto Project build tree anywhere you want on your
development system.
Here is an example that creates the tree in <filename>mybuilds</filename>
and names the Yocto Project build directory <filename>YP-5.0.1</filename>:
<literallayout class='monospaced'>
$ source poky-bernard-5.0.1/poky-init-build-env $HOME/mybuilds/YP-5.0.1
</literallayout>
If you don't specifically name the build directory, then BitBake creates it
in the current directory and uses the name <filename>build</filename>.
Also, if you supply an existing directory, then BitBake uses that
directory as the Yocto Project build directory and populates the build tree
beneath it.</para></listitem>
</itemizedlist>
</para>

<section id='installing-the-adt'>
<title>Installing the ADT</title>

<para>
The following list describes how you can install the ADT, which includes the cross-toolchain.
Regardless of the installation you choose, however, you must source the cross-toolchain
environment setup script before you use the toolchain.
See the <xref linkend='setting-up-the-environment'>“Setting Up the Environment”</xref>
section for more information.
<itemizedlist>
<listitem><para><emphasis>Use the ADT Installer Script:</emphasis>
This method is the recommended way to install the ADT because it
automates much of the process for you.
For example, you can configure the installation to install the QEMU emulator
and the user-space NFS, specify which root filesystem profiles to download,
and define the target sysroot location.
</para></listitem>
<listitem><para><emphasis>Use an Existing Toolchain Tarball:</emphasis>
Using this method you select and download an architecture-specific
toolchain tarball and then hand-install the toolchain.
If you use this method, you get just the cross-toolchain and QEMU - you do not
get any of the other benefits that running the ADT Installer script provides.</para></listitem>
<listitem><para><emphasis>Use the Toolchain from Within a Yocto Project Build Tree:</emphasis>
If you already have a Yocto Project build tree, you can install the cross-toolchain
using that tree.
However, like the previous method, you get only the cross-toolchain and QEMU - you
do not get any of the other benefits without taking separate steps.</para></listitem>
</itemizedlist>
</para>

<section id='using-the-adt-installer'>
<title>Using the ADT Installer</title>

<para>
To run the ADT Installer you need to first get the ADT Installer tarball and then run the ADT
Installer script.
</para>

<section id='getting-the-adt-installer-tarball'>
<title>Getting the ADT Installer Tarball</title>

<para>
The ADT Installer is contained in the ADT Installer tarball.
You can download the tarball into any directory from
<ulink url='http://autobuilder.yoctoproject.org/downloads/yocto-1.0/adt-installer/'></ulink>.
Or, you can use BitBake to generate the tarball inside an existing Yocto Project build tree.
</para>

<para>
If you use BitBake to generate the ADT Installer tarball, you must
source the Yocto Project environment setup script located in the Yocto Project
source directory before running the BitBake command that creates the tarball.
</para>

<para>
The following example commands download the Yocto Project release tarball, create the Yocto
Project source tree, set up the environment while also creating the Yocto Project build tree,
and finally run the BitBake command that results in the tarball
<filename>~/yocto-project/build/tmp/deploy/sdk/adt_installer.tar.bz2</filename>:
<literallayout class='monospaced'>
$ cd ~
$ mkdir yocto-project
$ cd yocto-project
$ wget http://www.yoctoproject.org/downloads/poky/poky-bernard-5.0.1.tar.bz2
$ tar xjf poky-bernard-5.0.1.tar.bz2
$ source poky-bernard-5.0.1/poky-init-build-env poky-5.0.1-build
$ bitbake adt-installer
</literallayout>
</para>

</section>

<section id='configuring-and-running-the-adt-installer-script'>
<title>Configuring and Running the ADT Installer Script</title>

<para>
Before running the ADT Installer script you need to unpack the tarball.
You can unpack the tarball in any directory you wish.
Unpacking it creates the directory <filename>adt-installer</filename>,
which contains the ADT Installer script and its configuration file.
</para>

<para>
Before you run the script, however, you should examine the ADT Installer configuration
file (<filename>adt_installer.conf</filename>) and be sure you are going to get what you want.
Your configurations determine which kernel and filesystem image are downloaded.
</para>

<para>
The following list describes the configurations you can define for the ADT Installer.
For configuration values and restrictions, see the comments in
the <filename>adt_installer.conf</filename> file:

<itemizedlist>
<listitem><para><filename>YOCTOADT_IPKG_REPO</filename> – This area
includes the IPKG-based packages and the root filesystem upon which
the installation is based.
If you want to set up your own IPKG repository pointed to by
<filename>YOCTOADT_IPKG_REPO</filename>, you need to be sure that the
directory structure follows the same layout as the reference directory
set up at <ulink url='http://adtrepo.yoctoproject.org'></ulink>.
Also, your repository needs to be accessible through HTTP.
</para></listitem>
<listitem><para><filename>YOCTOADT_TARGETS</filename> – The machine
target architectures for which you want to set up cross-development
environments.
</para></listitem>
<listitem><para><filename>YOCTOADT_QEMU</filename> – Indicates whether
or not to install the emulator QEMU.
</para></listitem>
<listitem><para><filename>YOCTOADT_NFS_UTIL</filename> – Indicates whether
or not to install user-mode NFS.
If you plan to use the Yocto Eclipse IDE plug-in against QEMU,
you should install NFS.
<note>
To boot QEMU images using our userspace NFS server, you need
to be running portmap or rpcbind.
If you are running rpcbind, you will also need to add the -i
option when rpcbind starts up.
Please make sure you understand the security implications of doing this.
Your firewall settings may also have to be modified to allow
NFS booting to work.
</note>
</para></listitem>
<listitem><para><filename>YOCTOADT_ROOTFS_<arch></filename> - The root
filesystem images you want to download from the <filename>YOCTOADT_IPKG_REPO</filename>
repository.
</para></listitem>
<listitem><para><filename>YOCTOADT_TARGET_SYSROOT_IMAGE_<arch></filename> - The
particular root filesystem used to extract and create the target sysroot.
The value of this variable must have been specified with
<filename>YOCTOADT_ROOTFS_<arch></filename>.
For example, if you downloaded both <filename>minimal</filename> and
<filename>sato-sdk</filename> images by setting <filename>YOCTOADT_ROOTFS_<arch></filename>
to "minimal sato-sdk", then <filename>YOCTOADT_TARGET_SYSROOT_IMAGE_<arch></filename>
must be set to either "minimal" or "sato-sdk".
</para></listitem>
<listitem><para><filename>YOCTOADT_TARGET_SYSROOT_LOC_<arch></filename> - The
location on the development host where the target sysroot will be created.
</para></listitem>
</itemizedlist>
</para>
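<para>
To make the variables concrete, here is a sketch of what the relevant lines
of an <filename>adt_installer.conf</filename> might look like for a single
ARM target (the repository URL and sysroot location are illustrative
placeholders, not values taken from this manual):
<literallayout class='monospaced'>
YOCTOADT_IPKG_REPO="http://adtrepo.yoctoproject.org/1.0"
YOCTOADT_TARGETS="arm"
YOCTOADT_QEMU="Y"
YOCTOADT_NFS_UTIL="Y"
YOCTOADT_ROOTFS_arm="minimal sato-sdk"
YOCTOADT_TARGET_SYSROOT_IMAGE_arm="sato-sdk"
YOCTOADT_TARGET_SYSROOT_LOC_arm="$HOME/adt-sysroots/sysroot-arm"
</literallayout>
</para>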
<para>
After you have configured the <filename>adt_installer.conf</filename> file,
run the installer using the following command:
<literallayout class='monospaced'>
$ adt_installer
</literallayout>
</para>

<note>
The ADT Installer requires the <filename>libtool</filename> package to complete.
If you install the recommended packages as described in the
<ulink url='http://www.yoctoproject.org/docs/yocto-project-qs/yocto-project-qs.html'>
Yocto Project Quick Start</ulink>, then you will have libtool installed.
</note>

<para>
Once the installer begins to run, you are asked whether you want to run in
interactive or silent mode.
If you want to closely monitor the installation, then choose “I” for interactive
mode rather than “S” for silent mode.
Follow the prompts from the script to complete the installation.
</para>

<para>
Once the installation completes, the ADT, which includes the cross-toolchain, is installed.
You will notice environment setup files for the cross-toolchain in
<filename>/opt/poky/$SDKVERSION</filename>,
image tarballs in the <filename>adt-installer</filename>
directory according to your installer configurations, and the target sysroot in the
location given by the <filename>YOCTOADT_TARGET_SYSROOT_LOC_<arch></filename> variable,
also in your configuration file.
</para>

</section>
</section>

<section id='using-an-existing-toolchain-tarball'>
<title>Using a Cross-Toolchain Tarball</title>
<para>
If you want to simply install the cross-toolchain by hand, you can do so by using an existing
cross-toolchain tarball.
If you install the cross-toolchain by hand, you will have to set up the target sysroot separately.
</para>

<para>
Follow these steps:
<orderedlist>
<listitem><para>Go to
<ulink url='http://autobuilder.yoctoproject.org/downloads/yocto-1.0/toolchain'></ulink>
and find the folder that matches your host development system
(i.e. 'i686' for 32-bit machines or 'x86_64' for 64-bit machines).</para>
</listitem>
<listitem><para>Go into that folder and download the toolchain tarball whose name
includes the appropriate target architecture.
For example, if your host development system is an Intel-based 64-bit system and
you are going to use your cross-toolchain for an arm target, go into the
<filename>x86_64</filename> folder and download the following tarball:
<literallayout class='monospaced'>
yocto-eglibc-x86_64-arm-toolchain-gmae-1.0.tar.bz2
</literallayout>
<note>
Alternatively, you can build the toolchain tarball if you have a Yocto Project build tree.
Use the <filename>bitbake meta-toolchain</filename> command after you have
sourced the <filename>poky-init-build-env</filename> script located in the Yocto Project
source directory.
When the <filename>bitbake</filename> command completes, the toolchain tarball will
be in <filename>tmp/deploy/sdk</filename> in the Yocto Project build tree.
</note></para></listitem>
<listitem><para>Make sure you are in the root directory and then expand
the tarball (see the sketch that follows this list).
The tarball expands into <filename>/opt/poky/$SDKVERSION</filename>.
Once the tarball is unpacked, the cross-toolchain is installed.
You will notice environment setup files for the cross-toolchain in the directory.
</para></listitem>
</orderedlist>
</para>
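<para>
As a sketch of the final step, the following hypothetical commands unpack the
64-bit host/ARM target tarball named above from the filesystem root (the
download location is a placeholder, and superuser privileges are assumed for
writing to <filename>/opt</filename>):
<literallayout class='monospaced'>
$ cd /
$ sudo tar -xjf ~/Downloads/yocto-eglibc-x86_64-arm-toolchain-gmae-1.0.tar.bz2
</literallayout>
</para>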
</section>

<section id='using-the-toolchain-from-within-the-build-tree'>
<title>Using BitBake and the Yocto Project Build Tree</title>
<para>
A final way of installing just the cross-toolchain is to use BitBake within an existing
Yocto Project build tree.
Follow these steps:
<orderedlist>
<listitem><para>Source the environment setup script located in the Yocto Project
source directory.
The script has the string <filename>init-build-env</filename>
as part of its name.</para></listitem>
<listitem><para>At this point you should be sure that the
<filename>MACHINE</filename> variable
in the <filename>local.conf</filename> file is set for the target architecture.
You can find the <filename>local.conf</filename> file in the Yocto Project source
directory.
Comments within the <filename>local.conf</filename> file list the values you
can use for the <filename>MACHINE</filename> variable.
<note>You can populate the build tree with the cross-toolchains for more
than a single architecture.
You just need to edit the <filename>MACHINE</filename> variable in the
<filename>local.conf</filename> file and re-run the BitBake command.</note></para></listitem>
<listitem><para>Run <filename>bitbake meta-ide-support</filename> to complete the
cross-toolchain installation (see the sketch that follows this list).
<note>If you change your working directory after you source the environment
setup script and before you run the BitBake command, the command will not work.
Be sure to run the BitBake command immediately after checking or editing the
<filename>local.conf</filename> file and without changing your working directory.</note>
Once BitBake finishes, the cross-toolchain is installed.
You will notice environment setup files for the cross-toolchain in the
Yocto Project build tree in the <filename>tmp</filename> directory.
Setup script filenames contain the string <filename>environment-setup</filename>.
</para></listitem>
</orderedlist>
</para>
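<para>
Putting the steps together, a minimal session might look like the following
sketch (the source directory name assumes the release used earlier in this
chapter):
<literallayout class='monospaced'>
$ source poky-bernard-5.0.1/poky-init-build-env
$ bitbake meta-ide-support
</literallayout>
</para>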
</section>
</section>

<section id='setting-up-the-environment'>
<title>Setting Up the Environment</title>
<para>
Before you can use the cross-toolchain, you need to set up the toolchain environment by
sourcing the environment setup script.
If you used the ADT Installer or an existing ADT tarball to install the ADT,
then you can find this script in the <filename>/opt/poky/$SDKVERSION</filename>
directory.
If you used BitBake and the Yocto Project build tree to install the cross-toolchain,
then you can find the environment setup scripts in the Yocto Project build tree
in the <filename>tmp</filename> directory.
</para>

<para>
Be sure to run the environment setup script that matches the architecture for
which you are developing.
Environment setup scripts begin with the string “environment-setup” and include as
part of their name the architecture.
For example, the environment setup script for a 64-bit IA-based architecture would
be the following:
<literallayout class='monospaced'>
/opt/poky/1.0/environment-setup-x86_64-poky-linux
</literallayout>
</para>
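<para>
As a sketch, sourcing the script and confirming that the environment now
points at the cross-compiler might look like this (the exact compiler value
depends on your target architecture):
<literallayout class='monospaced'>
$ source /opt/poky/1.0/environment-setup-x86_64-poky-linux
$ echo $CC
</literallayout>
</para>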
</section>

<section id='kernels-and-filesystem-images'>
<title>Kernels and Filesystem Images</title>
<para>
You will need to have a kernel and filesystem image to boot using your
hardware or the QEMU emulator.
That means you either have to build them or know where to get them.
You can find lots of details on how to get or build images and kernels for your
architecture in the "Yocto Project Quick Start" found at
<ulink url='http://www.yoctoproject.org/docs/yocto-project-qs/yocto-project-qs.html'></ulink>.
<note>
The Yocto Project provides basic kernels and filesystem images for several
architectures (x86, x86-64, mips, powerpc, and arm) that you can use
unaltered in the QEMU emulator.
These kernels and filesystem images reside in the Yocto Project release
area - <ulink url='http://autobuilder.yoctoproject.org/downloads/yocto-1.0/machines/'></ulink> -
and are ideal for experimentation within the Yocto Project.
</note>
</para>
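<para>
As a hypothetical illustration (assuming the <filename>poky-qemu</filename>
script from the source tree's <filename>scripts</filename> directory and
placeholder image names), booting a downloaded kernel and filesystem image in
the emulator might look like this:
<literallayout class='monospaced'>
$ poky-qemu <kernel-image> <filesystem-image>
</literallayout>
</para>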
</section>

</chapter>
<!--
vim: expandtab tw=80 ts=4
-->
@@ -1,968 +0,0 @@
|
||||
/*
|
||||
Generic XHTML / DocBook XHTML CSS Stylesheet.
|
||||
|
||||
Browser wrangling and typographic design by
|
||||
Oyvind Kolas / pippin@gimp.org
|
||||
|
||||
Customised for Poky by
|
||||
Matthew Allum / mallum@o-hand.com
|
||||
|
||||
Thanks to:
|
||||
Liam R. E. Quin
|
||||
William Skaggs
|
||||
Jakub Steiner
|
||||
|
||||
Structure
|
||||
---------
|
||||
|
||||
The stylesheet is divided into the following sections:
|
||||
|
||||
Positioning
|
||||
Margins, paddings, width, font-size, clearing.
|
||||
Decorations
|
||||
Borders, style
|
||||
Colors
|
||||
Colors
|
||||
Graphics
|
||||
Graphical backgrounds
|
||||
Nasty IE tweaks
|
||||
Workarounds needed to make it work in internet explorer,
|
||||
currently makes the stylesheet non validating, but up until
|
||||
this point it is validating.
|
||||
Mozilla extensions
|
||||
Transparency for footer
|
||||
Rounded corners on boxes
|
||||
|
||||
*/
|
||||
|
||||
|
||||
/*************** /
|
||||
/ Positioning /
|
||||
/ ***************/
|
||||
|
||||
body {
|
||||
font-family: Verdana, Sans, sans-serif;
|
||||
|
||||
min-width: 640px;
|
||||
width: 80%;
|
||||
margin: 0em auto;
|
||||
padding: 2em 5em 5em 5em;
|
||||
color: #333;
|
||||
}
|
||||
|
||||
.reviewer {
|
||||
color: red;
|
||||
}
|
||||
|
||||
h1,h2,h3,h4,h5,h6,h7 {
|
||||
font-family: Arial, Sans;
|
||||
color: #00557D;
|
||||
clear: both;
|
||||
}
|
||||
|
||||
h1 {
|
||||
font-size: 2em;
|
||||
text-align: left;
|
||||
padding: 0em 0em 0em 0em;
|
||||
margin: 2em 0em 0em 0em;
|
||||
}
|
||||
|
||||
h2.subtitle {
|
||||
margin: 0.10em 0em 3.0em 0em;
|
||||
padding: 0em 0em 0em 0em;
|
||||
font-size: 1.8em;
|
||||
padding-left: 20%;
|
||||
font-weight: normal;
|
||||
font-style: italic;
|
||||
}
|
||||
|
||||
h2 {
|
||||
margin: 2em 0em 0.66em 0em;
|
||||
padding: 0.5em 0em 0em 0em;
|
||||
font-size: 1.5em;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
h3.subtitle {
|
||||
margin: 0em 0em 1em 0em;
|
||||
padding: 0em 0em 0em 0em;
|
||||
font-size: 142.14%;
|
||||
text-align: right;
|
||||
}
|
||||
|
||||
h3 {
|
||||
margin: 1em 0em 0.5em 0em;
|
||||
padding: 1em 0em 0em 0em;
|
||||
font-size: 140%;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
h4 {
|
||||
margin: 1em 0em 0.5em 0em;
|
||||
padding: 1em 0em 0em 0em;
|
||||
font-size: 120%;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
h5 {
|
||||
margin: 1em 0em 0.5em 0em;
|
||||
padding: 1em 0em 0em 0em;
|
||||
font-size: 110%;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
h6 {
|
||||
margin: 1em 0em 0em 0em;
|
||||
padding: 1em 0em 0em 0em;
|
||||
font-size: 80%;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
.authorgroup {
|
||||
background-color: transparent;
|
||||
background-repeat: no-repeat;
|
||||
padding-top: 256px;
|
||||
background-image: url("figures/adt-title.png");
|
||||
background-position: left top;
|
||||
margin-top: -256px;
|
||||
padding-right: 50px;
|
||||
margin-left: 0px;
|
||||
text-align: right;
|
||||
width: 740px;
|
||||
}
|
||||
|
||||
h3.author {
|
||||
margin: 0em 0em 0em 0em;
|
||||
padding: 0em 0em 0em 0em;
|
||||
font-weight: normal;
|
||||
font-size: 100%;
|
||||
color: #333;
|
||||
clear: both;
|
||||
}
|
||||
|
||||
.author tt.email {
|
||||
font-size: 66%;
|
||||
}
|
||||
|
||||
.titlepage hr {
|
||||
width: 0em;
|
||||
clear: both;
|
||||
}
|
||||
|
||||
.revhistory {
|
||||
padding-top: 2em;
|
||||
clear: both;
|
||||
}
|
||||
|
||||
.toc,
|
||||
.list-of-tables,
|
||||
.list-of-examples,
|
||||
.list-of-figures {
|
||||
padding: 1.33em 0em 2.5em 0em;
|
||||
color: #00557D;
|
||||
}
|
||||
|
||||
.toc p,
|
||||
.list-of-tables p,
|
||||
.list-of-figures p,
|
||||
.list-of-examples p {
|
||||
padding: 0em 0em 0em 0em;
|
||||
padding: 0em 0em 0.3em;
|
||||
margin: 1.5em 0em 0em 0em;
|
||||
}
|
||||
|
||||
.toc p b,
|
||||
.list-of-tables p b,
|
||||
.list-of-figures p b,
|
||||
.list-of-examples p b{
|
||||
font-size: 100.0%;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
.toc dl,
|
||||
.list-of-tables dl,
|
||||
.list-of-figures dl,
|
||||
.list-of-examples dl {
|
||||
margin: 0em 0em 0.5em 0em;
|
||||
padding: 0em 0em 0em 0em;
|
||||
}
|
||||
|
||||
.toc dt {
|
||||
margin: 0em 0em 0em 0em;
|
||||
padding: 0em 0em 0em 0em;
|
||||
}
|
||||
|
||||
.toc dd {
|
||||
margin: 0em 0em 0em 2.6em;
|
||||
padding: 0em 0em 0em 0em;
|
||||
}
|
||||
|
||||
div.glossary dl,
|
||||
div.variablelist dl {
|
||||
}
|
||||
|
||||
.glossary dl dt,
|
||||
.variablelist dl dt,
|
||||
.variablelist dl dt span.term {
|
||||
font-weight: normal;
|
||||
width: 20em;
|
||||
text-align: right;
|
||||
}
|
||||
|
||||
.variablelist dl dt {
|
||||
margin-top: 0.5em;
|
||||
}
|
||||
|
||||
.glossary dl dd,
|
||||
.variablelist dl dd {
|
||||
margin-top: -1em;
|
||||
margin-left: 25.5em;
|
||||
}
|
||||
|
||||
.glossary dd p,
|
||||
.variablelist dd p {
|
||||
margin-top: 0em;
|
||||
margin-bottom: 1em;
|
||||
}
|
||||
|
||||
|
||||
div.calloutlist table td {
|
||||
padding: 0em 0em 0em 0em;
|
||||
margin: 0em 0em 0em 0em;
|
||||
}
|
||||
|
||||
div.calloutlist table td p {
|
||||
margin-top: 0em;
|
||||
margin-bottom: 1em;
|
||||
}
|
||||
|
||||
div p.copyright {
|
||||
text-align: left;
|
||||
}
|
||||
|
||||
div.legalnotice p.legalnotice-title {
|
||||
margin-bottom: 0em;
|
||||
}
|
||||
|
||||
p {
|
||||
line-height: 1.5em;
|
||||
margin-top: 0em;
|
||||
|
||||
}
|
||||
|
||||
dl {
|
||||
padding-top: 0em;
|
||||
}
|
||||
|
||||
hr {
|
||||
border: solid 1px;
|
||||
}
|
||||
|
||||
|
||||
.mediaobject,
|
||||
.mediaobjectco {
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
img {
|
||||
border: none;
|
||||
}
|
||||
|
||||
ul {
|
||||
padding: 0em 0em 0em 1.5em;
|
||||
}
|
||||
|
||||
ul li {
|
||||
padding: 0em 0em 0em 0em;
|
||||
}
|
||||
|
||||
ul li p {
|
||||
text-align: left;
|
||||
}
|
||||
|
||||
table {
|
||||
width :100%;
|
||||
}
|
||||
|
||||
th {
|
||||
padding: 0.25em;
|
||||
text-align: left;
|
||||
font-weight: normal;
|
||||
vertical-align: top;
|
||||
}
|
||||
|
||||
td {
|
||||
padding: 0.25em;
|
||||
vertical-align: top;
|
||||
}
|
||||
|
||||
p a[id] {
|
||||
margin: 0px;
|
||||
padding: 0px;
|
||||
display: inline;
|
||||
background-image: none;
|
||||
}
|
||||
|
||||
a {
|
||||
text-decoration: underline;
|
||||
color: #444;
|
||||
}
|
||||
|
||||
pre {
|
||||
overflow: auto;
|
||||
}
|
||||
|
||||
a:hover {
|
||||
text-decoration: underline;
|
||||
/*font-weight: bold;*/
|
||||
}
|
||||
|
||||
|
||||
div.informalfigure,
|
||||
div.informalexample,
|
||||
div.informaltable,
|
||||
div.figure,
|
||||
div.table,
|
||||
div.example {
|
||||
margin: 1em 0em;
|
||||
padding: 1em;
|
||||
page-break-inside: avoid;
|
||||
}
|
||||
|
||||
|
||||
div.informalfigure p.title b,
|
||||
div.informalexample p.title b,
|
||||
div.informaltable p.title b,
|
||||
div.figure p.title b,
|
||||
div.example p.title b,
|
||||
div.table p.title b{
|
||||
padding-top: 0em;
|
||||
margin-top: 0em;
|
||||
font-size: 100%;
|
||||
font-weight: normal;
|
||||
}
|
||||
|
||||
.mediaobject .caption,
|
||||
.mediaobject .caption p {
|
||||
text-align: center;
|
||||
font-size: 80%;
|
||||
padding-top: 0.5em;
|
||||
padding-bottom: 0.5em;
|
||||
}
|
||||
|
||||
.epigraph {
|
||||
padding-left: 55%;
|
||||
margin-bottom: 1em;
|
||||
}
|
||||
|
||||
.epigraph p {
|
||||
text-align: left;
|
||||
}
|
||||
|
||||
.epigraph .quote {
|
||||
font-style: italic;
|
||||
}
|
||||
.epigraph .attribution {
|
||||
font-style: normal;
|
||||
text-align: right;
|
||||
}
|
||||
|
||||
span.application {
|
||||
font-style: italic;
|
||||
}
|
||||
|
||||
.programlisting {
|
||||
font-family: monospace;
|
||||
font-size: 80%;
|
||||
white-space: pre;
|
||||
margin: 1.33em 0em;
|
||||
padding: 1.33em;
|
||||
}
|
||||
|
||||
.tip,
|
||||
.warning,
|
||||
.caution,
|
||||
.note {
|
||||
margin-top: 1em;
|
||||
margin-bottom: 1em;
|
||||
|
||||
}
|
||||
|
||||
/* force full width of table within div */
|
||||
.tip table,
|
||||
.warning table,
|
||||
.caution table,
|
||||
.note table {
|
||||
border: none;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
|
||||
.tip table th,
|
||||
.warning table th,
|
||||
.caution table th,
|
||||
.note table th {
|
||||
padding: 0.8em 0.0em 0.0em 0.0em;
|
||||
margin : 0em 0em 0em 0em;
|
||||
}
|
||||
|
||||
.tip p,
|
||||
.warning p,
|
||||
.caution p,
|
||||
.note p {
|
||||
margin-top: 0.5em;
|
||||
margin-bottom: 0.5em;
|
||||
padding-right: 1em;
|
||||
text-align: left;
|
||||
}
|
||||
|
||||
.acronym {
|
||||
text-transform: uppercase;
|
||||
}
|
||||
|
||||
b.keycap,
|
||||
.keycap {
|
||||
padding: 0.09em 0.3em;
|
||||
margin: 0em;
|
||||
}
|
||||
|
||||
.itemizedlist li {
|
||||
clear: none;
|
||||
}
|
||||
|
||||
.filename {
|
||||
font-size: medium;
|
||||
font-family: Courier, monospace;
|
||||
}
|
||||
|
||||
|
||||
div.navheader, div.heading{
|
||||
position: absolute;
|
||||
left: 0em;
|
||||
top: 0em;
|
||||
width: 100%;
|
||||
background-color: #cdf;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
div.navfooter, div.footing{
|
||||
position: fixed;
|
||||
left: 0em;
|
||||
bottom: 0em;
|
||||
background-color: #eee;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
|
||||
div.navheader td,
|
||||
div.navfooter td {
|
||||
font-size: 66%;
|
||||
}
|
||||
|
||||
div.navheader table th {
|
||||
/*font-family: Georgia, Times, serif;*/
|
||||
/*font-size: x-large;*/
|
||||
font-size: 80%;
|
||||
}
|
||||
|
||||
div.navheader table {
|
||||
border-left: 0em;
|
||||
border-right: 0em;
|
||||
border-top: 0em;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
div.navfooter table {
|
||||
border-left: 0em;
|
||||
border-right: 0em;
|
||||
border-bottom: 0em;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
div.navheader table td a,
|
||||
div.navfooter table td a {
|
||||
color: #777;
|
||||
text-decoration: none;
|
||||
}
|
||||
|
||||
/* normal text in the footer */
|
||||
div.navfooter table td {
|
||||
color: black;
|
||||
}
|
||||
|
||||
div.navheader table td a:visited,
|
||||
div.navfooter table td a:visited {
|
||||
color: #444;
|
||||
}
|
||||
|
||||
|
||||
/* links in header and footer */
|
||||
div.navheader table td a:hover,
|
||||
div.navfooter table td a:hover {
|
||||
text-decoration: underline;
|
||||
background-color: transparent;
|
||||
color: #33a;
|
||||
}
|
||||
|
||||
div.navheader hr,
|
||||
div.navfooter hr {
|
||||
display: none;
|
||||
}
|
||||
|
||||
|
||||
.qandaset tr.question td p {
|
||||
margin: 0em 0em 1em 0em;
|
||||
padding: 0em 0em 0em 0em;
|
||||
}
|
||||
|
||||
.qandaset tr.answer td p {
|
||||
margin: 0em 0em 1em 0em;
|
||||
padding: 0em 0em 0em 0em;
|
||||
}
|
||||
.answer td {
|
||||
padding-bottom: 1.5em;
|
||||
}
|
||||
|
||||
.emphasis {
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
|
||||
/************* /
|
||||
/ decorations /
|
||||
/ *************/
|
||||
|
||||
.titlepage {
|
||||
}
|
||||
|
||||
.part .title {
|
||||
}
|
||||
|
||||
.subtitle {
|
||||
border: none;
|
||||
}
|
||||
|
||||
/*
|
||||
h1 {
|
||||
border: none;
|
||||
}
|
||||
|
||||
h2 {
|
||||
border-top: solid 0.2em;
|
||||
border-bottom: solid 0.06em;
|
||||
}
|
||||
|
||||
h3 {
|
||||
border-top: 0em;
|
||||
border-bottom: solid 0.06em;
|
||||
}
|
||||
|
||||
h4 {
|
||||
border: 0em;
|
||||
border-bottom: solid 0.06em;
|
||||
}
|
||||
|
||||
h5 {
|
||||
border: 0em;
|
||||
}
|
||||
*/
|
||||
|
||||
.programlisting {
|
||||
border: solid 1px;
|
||||
}
|
||||
|
||||
div.figure,
|
||||
div.table,
|
||||
div.informalfigure,
|
||||
div.informaltable,
|
||||
div.informalexample,
|
||||
div.example {
|
||||
border: 1px solid;
|
||||
}
|
||||
|
||||
|
||||
|
||||
.tip,
|
||||
.warning,
|
||||
.caution,
|
||||
.note {
|
||||
border: 1px solid;
|
||||
}
|
||||
|
||||
.tip table th,
|
||||
.warning table th,
|
||||
.caution table th,
|
||||
.note table th {
|
||||
border-bottom: 1px solid;
|
||||
}
|
||||
|
||||
.question td {
|
||||
border-top: 1px solid black;
|
||||
}
|
||||
|
||||
.answer {
|
||||
}
|
||||
|
||||
|
||||
b.keycap,
|
||||
.keycap {
|
||||
border: 1px solid;
|
||||
}
|
||||
|
||||
|
||||
div.navheader, div.heading{
|
||||
border-bottom: 1px solid;
|
||||
}
|
||||
|
||||
|
||||
div.navfooter, div.footing{
|
||||
border-top: 1px solid;
|
||||
}
|
||||
|
||||
/********* /
|
||||
/ colors /
|
||||
/ *********/
|
||||
|
||||
body {
|
||||
color: #333;
|
||||
background: white;
|
||||
}
|
||||
|
||||
a {
|
||||
background: transparent;
|
||||
}
|
||||
|
||||
a:hover {
|
||||
background-color: #dedede;
|
||||
}
|
||||
|
||||
|
||||
h1,
|
||||
h2,
|
||||
h3,
|
||||
h4,
|
||||
h5,
|
||||
h6,
|
||||
h7,
|
||||
h8 {
|
||||
background-color: transparent;
|
||||
}
|
||||
|
||||
hr {
|
||||
border-color: #aaa;
|
||||
}
|
||||
|
||||
|
||||
.tip, .warning, .caution, .note {
|
||||
border-color: #aaa;
|
||||
}
|
||||
|
||||
|
||||
.tip table th,
|
||||
.warning table th,
|
||||
.caution table th,
|
||||
.note table th {
|
||||
border-bottom-color: #aaa;
|
||||
}
|
||||
|
||||
|
||||
.warning {
|
||||
background-color: #fea;
|
||||
}
|
||||
|
||||
.caution {
|
||||
background-color: #fea;
|
||||
}
|
||||
|
||||
.tip {
|
||||
background-color: #eff;
|
||||
}
|
||||
|
||||
.note {
|
||||
background-color: #dfc;
|
||||
}
|
||||
|
||||
.glossary dl dt,
|
||||
.variablelist dl dt,
|
||||
.variablelist dl dt span.term {
|
||||
color: #044;
|
||||
}
|
||||
|
||||
div.figure,
|
||||
div.table,
|
||||
div.example,
|
||||
div.informalfigure,
|
||||
div.informaltable,
|
||||
div.informalexample {
|
||||
border-color: #aaa;
|
||||
}
|
||||
|
||||
pre.programlisting {
|
||||
color: black;
|
||||
background-color: #fff;
|
||||
border-color: #aaa;
|
||||
border-width: 2px;
|
||||
}
|
||||
|
||||
.guimenu,
|
||||
.guilabel,
|
||||
.guimenuitem {
|
||||
background-color: #eee;
|
||||
}
|
||||
|
||||
|
||||
b.keycap,
|
||||
.keycap {
|
||||
background-color: #eee;
|
||||
border-color: #999;
|
||||
}
|
||||
|
||||
|
||||
div.navheader {
|
||||
border-color: black;
|
||||
}
|
||||
|
||||
|
||||
div.navfooter {
|
||||
border-color: black;
|
||||
}
|
||||
|
||||
|
||||
/*********** /
|
||||
/ graphics /
|
||||
/ ***********/
|
||||
|
||||
/*
|
||||
body {
|
||||
background-image: url("images/body_bg.jpg");
|
||||
background-attachment: fixed;
|
||||
}
|
||||
|
||||
.navheader,
|
||||
.note,
|
||||
.tip {
|
||||
background-image: url("images/note_bg.jpg");
|
||||
background-attachment: fixed;
|
||||
}
|
||||
|
||||
.warning,
|
||||
.caution {
|
||||
background-image: url("images/warning_bg.jpg");
|
||||
background-attachment: fixed;
|
||||
}
|
||||
|
||||
.figure,
|
||||
.informalfigure,
|
||||
.example,
|
||||
.informalexample,
|
||||
.table,
|
||||
.informaltable {
|
||||
background-image: url("images/figure_bg.jpg");
|
||||
background-attachment: fixed;
|
||||
}
|
||||
|
||||
*/
|
||||
h1,
|
||||
h2,
|
||||
h3,
|
||||
h4,
|
||||
h5,
|
||||
h6,
|
||||
h7{
|
||||
}
|
||||
|
||||
/*
|
||||
Example of how to stick an image as part of the title.
|
||||
|
||||
div.article .titlepage .title
|
||||
{
|
||||
background-image: url("figures/white-on-black.png");
|
||||
background-position: center;
|
||||
background-repeat: repeat-x;
|
||||
}
|
||||
*/
|
||||
|
||||
div.preface .titlepage .title,
|
||||
div.colophon .title,
|
||||
div.chapter .titlepage .title,
|
||||
div.article .titlepage .title
|
||||
{
|
||||
}
|
||||
|
||||
div.section div.section .titlepage .title,
|
||||
div.sect2 .titlepage .title {
|
||||
background: none;
|
||||
}
|
||||
|
||||
|
||||
h1.title {
|
||||
background-color: transparent;
|
||||
background-image: url("figures/yocto-project-bw.png");
|
||||
background-repeat: no-repeat;
|
||||
height: 256px;
|
||||
text-indent: -9000px;
|
||||
overflow:hidden;
|
||||
}
|
||||
|
||||
h2.subtitle {
|
||||
background-color: transparent;
|
||||
text-indent: -9000px;
|
||||
overflow:hidden;
|
||||
width: 0px;
|
||||
display: none;
|
||||
}
|
||||
|
||||
/*************************************** /
|
||||
/ pippin.gimp.org specific alterations /
|
||||
/ ***************************************/
|
||||
|
||||
/*
|
||||
div.heading, div.navheader {
|
||||
color: #777;
|
||||
font-size: 80%;
|
||||
padding: 0;
|
||||
margin: 0;
|
||||
text-align: left;
|
||||
position: absolute;
|
||||
top: 0px;
|
||||
left: 0px;
|
||||
width: 100%;
|
||||
height: 50px;
|
||||
background: url('/gfx/heading_bg.png') transparent;
|
||||
background-repeat: repeat-x;
|
||||
background-attachment: fixed;
|
||||
border: none;
|
||||
}
|
||||
|
||||
div.heading a {
|
||||
color: #444;
|
||||
}
|
||||
|
||||
div.footing, div.navfooter {
|
||||
border: none;
|
||||
color: #ddd;
|
||||
font-size: 80%;
|
||||
text-align:right;
|
||||
|
||||
width: 100%;
|
||||
padding-top: 10px;
|
||||
position: absolute;
|
||||
bottom: 0px;
|
||||
left: 0px;
|
||||
|
||||
background: url('/gfx/footing_bg.png') transparent;
|
||||
}
|
||||
*/
|
||||
|
||||
|
||||
|
||||
/****************** /
|
||||
/ nasty ie tweaks /
|
||||
/ ******************/
|
||||
|
||||
/*
|
||||
div.heading, div.navheader {
|
||||
width:expression(document.body.clientWidth + "px");
|
||||
}
|
||||
|
||||
div.footing, div.navfooter {
|
||||
width:expression(document.body.clientWidth + "px");
|
||||
margin-left:expression("-5em");
|
||||
}
|
||||
body {
|
||||
padding:expression("4em 5em 0em 5em");
|
||||
}
|
||||
*/
|
||||
|
||||
/**************************************** /
|
||||
/ mozilla vendor specific css extensions /
|
||||
/ ****************************************/
|
||||
/*
|
||||
div.navfooter, div.footing{
|
||||
-moz-opacity: 0.8em;
|
||||
}
|
||||
|
||||
div.figure,
|
||||
div.table,
|
||||
div.informalfigure,
|
||||
div.informaltable,
|
||||
div.informalexample,
|
||||
div.example,
|
||||
.tip,
|
||||
.warning,
|
||||
.caution,
|
||||
.note {
|
||||
-moz-border-radius: 0.5em;
|
||||
}
|
||||
|
||||
b.keycap,
|
||||
.keycap {
|
||||
-moz-border-radius: 0.3em;
|
||||
}
|
||||
*/
|
||||
|
||||
table tr td table tr td {
|
||||
display: none;
|
||||
}
|
||||
|
||||
|
||||
hr {
|
||||
display: none;
|
||||
}
|
||||
|
||||
table {
|
||||
border: 0em;
|
||||
}
|
||||
|
||||
.photo {
|
||||
float: right;
|
||||
margin-left: 1.5em;
|
||||
margin-bottom: 1.5em;
|
||||
margin-top: 0em;
|
||||
max-width: 17em;
|
||||
border: 1px solid gray;
|
||||
padding: 3px;
|
||||
background: white;
|
||||
}
|
||||
.seperator {
|
||||
padding-top: 2em;
|
||||
clear: both;
|
||||
}
|
||||
|
||||
#validators {
|
||||
margin-top: 5em;
|
||||
text-align: right;
|
||||
color: #777;
|
||||
}
|
||||
@media print {
|
||||
body {
|
||||
font-size: 8pt;
|
||||
}
|
||||
.noprint {
|
||||
display: none;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
.tip,
|
||||
.note {
|
||||
background: #666666;
|
||||
color: #fff;
|
||||
padding: 20px;
|
||||
margin: 20px;
|
||||
}
|
||||
|
||||
.tip h3,
|
||||
.note h3 {
|
||||
padding: 0em;
|
||||
margin: 0em;
|
||||
font-size: 2em;
|
||||
font-weight: bold;
|
||||
color: #fff;
|
||||
}
|
||||
|
||||
.tip a,
|
||||
.note a {
|
||||
color: #fff;
|
||||
text-decoration: underline;
|
||||
}
|
||||
35
documentation/bsp-guide/Makefile
Normal file
@@ -0,0 +1,35 @@
|
||||
XSLTOPTS = --stringparam html.stylesheet style.css \
|
||||
--stringparam chapter.autolabel 1 \
|
||||
--stringparam section.autolabel 1 \
|
||||
--stringparam section.label.includes.component.label 1 \
|
||||
--xinclude
|
||||
|
||||
##
|
||||
# These URIs should be rewritten by your distribution's xml catalog to
# match your locally installed XSL stylesheets.
|
||||
XSL_BASE_URI = http://docbook.sourceforge.net/release/xsl/current
|
||||
XSL_XHTML_URI = $(XSL_BASE_URI)/xhtml/docbook.xsl
|
||||
|
||||
all: html pdf tarball
|
||||
|
||||
pdf:
|
||||
../tools/poky-docbook-to-pdf bsp-guide.xml ../template
|
||||
|
||||
html:
|
||||
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
|
||||
xsltproc $(XSLTOPTS) -o bsp-guide.html bsp-guide-customization.xsl bsp-guide.xml
|
||||
|
||||
tarball: html
|
||||
tar -cvzf bsp-guide.tgz style.css bsp-guide.html figures/bsp-title.png
|
||||
|
||||
validate:
|
||||
xmllint --postvalid --xinclude --noout bsp-guide.xml
|
||||
|
||||
OUTPUTS = bsp-guide.pdf bsp-guide.html
|
||||
SOURCES = *.png *.xml *.css *.svg
|
||||
|
||||
publish:
|
||||
scp -r $(OUTPUTS) $(SOURCES) o-hand.com:/srv/www/pokylinux.org/doc/
|
||||
|
||||
clean:
|
||||
rm -f $(OUTPUTS)
|
||||
@@ -23,7 +23,7 @@
|
||||
<affiliation>
|
||||
<orgname>Intel Corporation</orgname>
|
||||
</affiliation>
|
||||
<email>richard.purdie@linuxfoundation.org</email>
|
||||
<email>richard@linux.intel.com</email>
|
||||
</author>
|
||||
</authorgroup>
|
||||
|
||||
@@ -31,18 +31,7 @@
|
||||
<revision>
|
||||
<revnumber>0.9</revnumber>
|
||||
<date>27 October 2010</date>
|
||||
<revremark>This manual revision is the initial manual and corresponds to the
|
||||
Yocto Project 0.9 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.0</revnumber>
|
||||
<date>6 April 2011</date>
|
||||
<revremark>This manual revision corresponds to the Yocto Project 1.0 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.0.1</revnumber>
|
||||
<date>23 May 2011</date>
|
||||
<revremark>Released with Yocto Project 1.0.1 on 23 May 2011.</revremark>
|
||||
<revremark>Beta Draft</revremark>
|
||||
</revision>
|
||||
</revhistory>
|
||||
|
||||
|
||||
@@ -60,35 +60,15 @@
|
||||
<literallayout class='monospaced'>
|
||||
meta-<bsp_name>
|
||||
</literallayout>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
"bsp_name" is a placeholder for the machine or platform name.
|
||||
Here are some example base directory names:
|
||||
<literallayout class='monospaced'>
|
||||
meta-emenlow
|
||||
meta-n450
|
||||
meta-intel_n450
|
||||
meta-beagleboard
|
||||
</literallayout>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
The base directory (<filename>meta-<bsp_name></filename>) is the root of the BSP layer.
|
||||
This root is what you add to the BBLAYERS variable in <filename>build/conf/bblayers.conf</filename>
|
||||
so that the build system recognizes the BSP definition and from it can build an image.
|
||||
Here is an example:
|
||||
<literallayout class='monospaced'>
|
||||
BBLAYERS = " \
|
||||
/usr/local/src/yocto/meta \
|
||||
/usr/local/src/yocto/meta-yocto \
|
||||
/usr/local/src/yocto/meta-<bsp_name> \
|
||||
"
|
||||
</literallayout>
|
||||
For more detailed information on layers, see the
|
||||
<ulink url='http://www.yoctoproject.org/docs/poky-ref-manual/poky-ref-manual.html#usingpoky-changes-layers'>
|
||||
BitBake Layers</ulink> section of the Poky Reference Manual.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Below is the common form for the file structure inside a base directory.
|
||||
While you can use this basic form for the standard, realize that the actual structures
|
||||
@@ -103,7 +83,7 @@ meta-<bsp_name>/conf/layer.conf
|
||||
meta-<bsp_name>/conf/machine/*.conf
|
||||
meta-<bsp_name>/recipes-bsp/*
|
||||
meta-<bsp_name>/recipes-graphics/*
|
||||
meta-<bsp_name>/recipes-kernel/linux/linux-yocto_git.bbappend
|
||||
meta-<bsp_name>/recipes-kernel/linux/linux-yocto-stable.bbappend
|
||||
</programlisting>
|
||||
</para>
|
||||
|
||||
@@ -127,7 +107,7 @@ meta-crownbay/recipes-graphics/xorg-xserver/xserver-xf86-emgd/fix_open_max_prepr
|
||||
meta-crownbay/recipes-graphics/xorg-xserver/xserver-xf86-emgd/macro_tweak.patch
|
||||
meta-crownbay/recipes-graphics/xorg-xserver/xserver-xf86-emgd/nodolt.patch
|
||||
meta-crownbay/recipes-graphics/xorg-xserver/xserver-xf86-emgd_1.7.99.2.bb
|
||||
meta-crownbay/recipes-kernel/linux/linux-yocto_git.bbappend
|
||||
meta-crownbay/recipes-kernel/linux/linux-wrs_git.bbappend
|
||||
</programlisting>
|
||||
</para>
|
||||
|
||||
@@ -180,10 +160,10 @@ meta-<bsp_name>/binary/<bootable_images>
|
||||
</programlisting>
|
||||
|
||||
<para>
|
||||
This optional area contains useful pre-built kernels and user-space filesystem
|
||||
This optional area contains useful pre-built kernels and userspace filesystem
|
||||
images appropriate to the target system.
|
||||
This directory typically contains graphical (e.g. sato) and minimal live images
|
||||
when the BSP tarball has been created and made available in the Yocto Project website.
|
||||
This directory contains the Application Development Toolkit (ADT) and minimal
|
||||
live images when the BSP has been "tar-balled" and placed on the Yocto Project website.
|
||||
You can use these kernels and images to get a system running and quickly get started
|
||||
on development tasks.
|
||||
</para>
|
||||
@@ -217,8 +197,7 @@ meta-<bsp_name>/conf/layer.conf
|
||||
BBPATH := "${BBPATH}:${LAYERDIR}"
|
||||
|
||||
# We have a recipes directory containing .bb and .bbappend files, add to BBFILES
|
||||
BBFILES := "${BBFILES} ${LAYERDIR}/recipes/*/*.bb \
|
||||
${LAYERDIR}/recipes/*/*.bbappend"
|
||||
BBFILES := "${BBFILES} ${LAYERDIR}/recipes/*/*.bb \ ${LAYERDIR}/recipes/*/*.bbappend"
|
||||
|
||||
BBFILE_COLLECTIONS += "bsp"
|
||||
BBFILE_PATTERN_bsp := "^${LAYERDIR}/"
|
||||
@@ -336,7 +315,7 @@ meta-crownbay/recipes-graphics/xorg-xserver/xserver-xf86-emgd_1.7.99.2.bb
|
||||
<section id='bsp-filelayout-kernel'>
|
||||
<title>Linux Kernel Configuration</title>
|
||||
<programlisting>
|
||||
meta-<bsp_name>/recipes-kernel/linux/linux-yocto_git.bbappend
|
||||
meta-<bsp_name>/recipes-kernel/linux/linux-yocto-stable.bbappend
|
||||
</programlisting>
|
||||
|
||||
<para>
|
||||
@@ -351,27 +330,27 @@ meta-<bsp_name>/recipes-kernel/linux/linux-yocto_git.bbappend
|
||||
directory.
|
||||
</para>
|
||||
<para>
|
||||
Suppose you use a BSP that uses the <filename>linux-yocto_git.bb</filename> kernel,
|
||||
Suppose you use a BSP that uses the <filename>linux-yocto-stable_git.bb</filename> kernel,
|
||||
which is the preferred kernel to use for developing a new BSP using the Yocto Project.
|
||||
In other words, you have selected the kernel in your
|
||||
<filename><bsp_name>.conf</filename> file by adding the following statement:
|
||||
<programlisting>
|
||||
PREFERRED_PROVIDER_virtual/kernel ?= "linux-yocto"
|
||||
PREFERRED_PROVIDER_virtual/kernel ?= "linux-yocto-stable"
|
||||
</programlisting>
|
||||
You would use the <filename>linux-yocto_git.bbappend</filename> file to append
|
||||
You would use the <filename>linux-yocto-stable_git.bbappend</filename> file to append
|
||||
specific BSP settings to the kernel, thus configuring the kernel for your particular BSP.
|
||||
</para>
|
||||
<para>
|
||||
Now take a look at the existing "crownbay" BSP.
|
||||
The append file used is:
|
||||
<programlisting>
|
||||
meta-crownbay/recipes-kernel/linux/linux-yocto_git.bbappend
|
||||
meta-crownbay/recipes-kernel/linux/linux-yocto-stable_git.bbappend
|
||||
</programlisting>
|
||||
The file contains the following:
|
||||
<programlisting>
|
||||
FILESEXTRAPATHS := "${THISDIR}/${PN}"
|
||||
COMPATIBLE_MACHINE_crownbay = "crownbay"
|
||||
KMACHINE_crownbay = "yocto/standard/crownbay"
|
||||
KMACHINE_crownbay = "crownbay"
|
||||
</programlisting>
|
||||
This append file adds "crownbay" as a compatible machine,
|
||||
and additionally sets a Yocto Kernel-specific variable that identifies the name of the
|
||||
@@ -392,7 +371,7 @@ KMACHINE_crownbay = "yocto/standard/crownbay"
|
||||
For example, suppose you had a set of configuration options in a file called
|
||||
<filename>defconfig</filename>.
|
||||
If you put that file inside a directory named
|
||||
<filename class='directory'>/linux-yocto</filename> and then added
|
||||
<filename class='directory'>/linux-yocto-stable</filename> and then added
|
||||
a SRC_URI statement such as the following to the append file, those configuration
|
||||
options will be picked up and applied when the kernel is built.
|
||||
<programlisting>
|
||||
@@ -412,14 +391,13 @@ SRC_URI += "file://defconfig \
|
||||
</programlisting>
|
||||
</para>
|
||||
<para>
|
||||
The FILESEXTRAPATHS variable is in boilerplate form here in order to make it easy
|
||||
to do that.
|
||||
The FILESEXTRAPATHS variable is boilerplated here in order to make it easy to do that.
|
||||
It basically allows those configuration files to be found by the build process.
|
||||
</para>
<note><para>
Other methods exist to accomplish grouping and defining configuration options.
For example, you could directly add configuration options to the Yocto kernel
<filename class='directory'>meta</filename> branch for your BSP.
<filename class='directory'>wrs_meta</filename> branch for your BSP.
The configuration options will likely end up in that location anyway if the BSP gets
added to the Yocto Project.
For information on how to add these configurations directly, see the
@@ -429,7 +407,7 @@ SRC_URI += "file://defconfig \
</para>
<para>
In general, however, the Yocto Project maintainers take care of moving the SRC_URI-specified
configuration options to the <filename class='directory'>meta</filename> branch.
configuration options to the <filename class='directory'>wrs_meta</filename> branch.
Not only is it easier for BSP developers to not have to worry about putting those
configurations in the branch, but having the maintainers do it allows them to apply
'global' knowledge about the kinds of common configuration options multiple BSPs in
@@ -555,22 +533,19 @@ FILESEXTRAPATHS := "${THISDIR}/${PN}"
</para>

<para>
For cases where you can substitute something and still maintain functionality,
the Yocto Project website's
<ulink url='http://www.yoctoproject.org/download/all?keys=&download_type=1&download_version='>BSP Download Page</ulink>
makes available 'de-featured' BSPs that are completely free of any IP encumbrances.
For these cases you can use the substitution directly and without any further licensing
requirements.
If present, these fully 'de-featured' BSPs are named appropriately differently
as compared to the names of the respective encumbered BSPs.
If available, these substitutions are the simplest and most preferred options.
For cases where you can substitute something and still maintain functionality, the Poky website will make
available a 'de-featured' BSP completely free of the encumbered IP.
In that case you can use the substitution directly and without any further licensing requirements.
If present, this fully 'de-featured' BSP will be named meta-<bsp_name> (i.e. the
normal default naming convention).
If available, this is the simplest and most preferred option.
This, of course, assumes the resulting functionality meets requirements.
</para>

<para>
If, however, a non-encumbered version is unavailable or the 'free' version
would provide unsuitable functionality or quality, you can use
an encumbered version.
If, however, a non-encumbered version is unavailable or the 'free' version would provide unsuitable
functionality or quality, an encumbered version can be used.
Encumbered versions of a BSP are given names of the form meta-<bsp_name>-nonfree.
</para>

<para>
@@ -584,23 +559,14 @@ FILESEXTRAPATHS := "${THISDIR}/${PN}"

<para>
Get a license key (or keys) for the encumbered BSP by visiting
a website and providing the name of the BSP and your email address
through a web form.
</para>

<!--
<ulink url='https://pokylinux.org/bsp-keys.html'>https://pokylinux.org/bsp-keys.html</ulink>
and give the name of the BSP and your e-mail address in the web form.
</para>

COMMENT: This link is not implemented at this point.

<programlisting>
[screenshot of dialog box]
</programlisting>

-->

<para>
After agreeing to any applicable license terms, the
BSP key(s) will be immediately sent to the address
@@ -609,7 +575,7 @@ FILESEXTRAPATHS := "${THISDIR}/${PN}"
</para>

<programlisting>
$ BSPKEY_<keydomain>=<key> bitbake core-image-sato
$ BSPKEY_<keydomain>=<key> bitbake poky-image-sato
</programlisting>
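For instance, with a hypothetical key domain "examplebsp" and key value "abc123" (placeholders, not real credentials), the invocation would look like:

    $ BSPKEY_examplebsp=abc123 bitbake poky-image-sato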

<para>
@@ -643,8 +609,7 @@ FILESEXTRAPATHS := "${THISDIR}/${PN}"
encumbered BSP.
These prompts usually take the form of instructions
needed to manually fetch the encumbered package(s)
and md5 sums into the required directory
(e.g. the <filename>poky/build/downloads</filename>).
and md5 sums into the required directory (e.g. the poky/build/downloads).
Once the manual package fetch has been
completed, restart the build to continue where
it left off.
@@ -654,21 +619,25 @@ FILESEXTRAPATHS := "${THISDIR}/${PN}"
</listitem>
<listitem>
<para>
Get a full-featured BSP recipe rather than a key.
You can do this by visiting the applicable BSP download page from the Yocto
Project website at
<ulink url='http://yoctoproject.org/download/board-support-package-bsp-downloads'></ulink>.
BSP tarballs that have proprietary information can be downloaded after agreeing
to licensing requirements as part of the download process.
Obtaining the code this way allows you to build an encumbered image with
no changes at all as compared to the normal build.
</para>
Get a full-featured BSP recipe rather than a key, by
visiting
<ulink url='https://pokylinux.org/bsps.html'>https://pokylinux.org/bsps.html</ulink>.
Accepting the license agreement(s) presented will
subsequently allow you to download a tarball
containing a full-featured BSP that is legally cleared for
your use by the just-given license agreement(s).
This method will also allow the encumbered image to
be built with no change at all to the normal build
process.
</para>
</listitem>
</orderedlist>
<para>
Note that the third method is also the only option available
when downloading pre-compiled images generated from non-free BSPs.
Those images are likewise available from the Yocto Project website.
when downloading pre-compiled images generated from
non-free BSPs.
Those images are likewise available at
<ulink url='https://pokylinux.org/bsps.html'>https://pokylinux.org/bsps.html</ulink>.
</para>
</section>


BIN
documentation/bsp-guide/figures/bsp-title.png
Normal file → Executable file
(binary image; 15 KiB before and after)
BIN
documentation/bsp-guide/figures/poky-ref-manual.png
Normal file
(binary image; 17 KiB)
@@ -50,13 +50,9 @@ body {
color: #333;
}

.reviewer {
color: red;
}

h1,h2,h3,h4,h5,h6,h7 {
font-family: Arial, Sans;
color: #00557D;
color:#999999;
clear: both;
}

@@ -80,7 +76,7 @@ h2 {
margin: 2em 0em 0.66em 0em;
padding: 0.5em 0em 0em 0em;
font-size: 1.5em;
font-weight: bold;
font-weight: normal;
}

h3.subtitle {
@@ -94,28 +90,28 @@ h3 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 140%;
font-weight: bold;
font-weight: normal;
}

h4 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 120%;
font-weight: bold;
font-weight: normal;
}

h5 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 110%;
font-weight: bold;
font-size: 110.000%;
border-bottom: 1px solid black;
}

h6 {
margin: 1em 0em 0em 0em;
padding: 1em 0em 0em 0em;
font-size: 80%;
font-weight: bold;
font-weight: normal;
}

.authorgroup {
@@ -128,7 +124,7 @@ h6 {
padding-right: 50px;
margin-left: 0px;
text-align: right;
width: 740px;
width: 700px;
}

h3.author {
@@ -136,7 +132,6 @@ h3.author {
padding: 0em 0em 0em 0em;
font-weight: normal;
font-size: 100%;
color: #333;
clear: both;
}

@@ -159,7 +154,6 @@ h3.author {
.list-of-examples,
.list-of-figures {
padding: 1.33em 0em 2.5em 0em;
color: #00557D;
}

.toc p,
@@ -936,7 +930,7 @@ table {

.tip,
.note {
background: #666666;
background: #91ae35;
color: #fff;
padding: 20px;
margin: 20px;

42
documentation/kernel-manual/Makefile
Normal file
@@ -0,0 +1,42 @@
XSLTOPTS = --stringparam html.stylesheet style.css \
           --stringparam chapter.autolabel 1 \
           --stringparam appendix.autolabel A \
           --stringparam section.autolabel 1 \
           --stringparam section.label.includes.component.label 1 \
           --xinclude

##
# These URI should be rewritten by your distribution's xml catalog to
# match your locally installed XSL stylesheets.
XSL_BASE_URI = http://docbook.sourceforge.net/release/xsl/current
XSL_XHTML_URI = $(XSL_BASE_URI)/xhtml/docbook.xsl

all: html pdf tarball

pdf:
	../tools/poky-docbook-to-pdf kernel-manual.xml ../template

##
# These URI should be rewritten by your distribution's xml catalog to
# match your locally installed XSL stylesheets.

html:
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html

#	xsltproc $(XSLTOPTS) -o yocto-project-qs.html $(XSL_XHTML_URI) yocto-project-qs.xml
	xsltproc $(XSLTOPTS) -o kernel-manual.html yocto-project-kernel-manual-customization.xsl kernel-manual.xml

tarball: html
	tar -cvzf kernel-manual.tgz kernel-manual.html style.css figures/kernel-title.png figures/kernel-big-picture.png figures/kernel-architecture-overview.png

validate:
	xmllint --postvalid --xinclude --noout kernel-manual.xml

OUTPUTS = kernel-manual.tgz kernel-manual.html kernel-manual.pdf
SOURCES = *.png *.xml *.css

publish:
	scp -r $(OUTPUTS) $(SOURCES) o-hand.com:/srv/www/pokylinux.org/doc/

clean:
	rm -f $(OUTPUTS)
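As a usage sketch, the targets defined above would typically be run from the documentation/kernel-manual directory:

    $ make html      # render kernel-manual.html with xsltproc
    $ make pdf       # build the PDF via ../tools/poky-docbook-to-pdf
    $ make validate  # check the DocBook sources with xmllint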
BIN
documentation/kernel-manual/figures/kernel-big-picture.png
Executable file
(binary image; 169 KiB)
BIN
documentation/kernel-manual/figures/kernel-title.png
Normal file → Executable file
(binary image; 14 KiB before and after)
BIN
documentation/kernel-manual/figures/yocto-project-transp.png
Executable file
(binary image; 8.4 KiB)
@@ -425,18 +425,18 @@ repository.

<literallayout class='monospaced'>
# full description of the changes
> git whatchanged <kernel type>..<kernel type>/<bsp>
> eg: git whatchanged yocto/standard/base..yocto/standard/common-pc/base
> git whatchanged <kernel type>..<bsp>-<kernel type>
> eg: git whatchanged standard..common_pc-standard

# summary of the changes
> git log --pretty=oneline --abbrev-commit <kernel type>..<kernel type>/<bsp>
> git log --pretty=oneline --abbrev-commit <kernel type>..<bsp>-<kernel type>

# source code changes (one combined diff)
> git diff <kernel type>..<kernel type>/<bsp>
> git show <kernel type>..<kernel type>/<bsp>
> git diff <kernel type>..<bsp>-<kernel type>
> git show <kernel type>..<bsp>-<kernel type>

# dump individual patches per commit
> git format-patch -o <dir> <kernel type>..<kernel type>/<bsp>
> git format-patch -o <dir> <kernel type>..<bsp>-<kernel type>

# determine the change history of a particular file
> git whatchanged <path to file>
@@ -467,15 +467,15 @@ repository.
# determine which branches contain a feature
> git branch --contains <tag>

# show the changes in a kernel type
> git whatchanged yocto/base..<kernel type>
> eg: git whatchanged yocto/base..yocto/standard/base
# show the changes in a kernel type - (0.9 examples)
> git whatchanged wrs_base..<kernel type>
> eg: git whatchanged wrs_base..standard
</literallayout>

<para>
You can use many other comparisons to isolate BSP changes.
For example, you can compare against kernel.org tags (e.g. v2.6.27.18, etc), or
you can compare against subsystems (e.g. git whatchanged mm).
you can compare agains subsystems (e.g. git whatchanged mm).
</para>
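For instance, a comparison against a kernel.org tag, or one restricted to a subsystem directory, might look like the following (a sketch; substitute the branch names used in your repository):

    # changes relative to a kernel.org release tag
    > git whatchanged v2.6.27.18..yocto/standard/common-pc/base

    # change history limited to the memory-management subsystem
    > git whatchanged mm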
</section>
</section>
@@ -600,7 +600,7 @@ repository.
<para>
Distributed development with git is possible when you use a universally
agreed-upon unique commit identifier (set by the creator of the commit) that maps to a
specific change set with a specific parent.
specific changeset with a specific parent.
This identifier is created for you when
you create a commit, and is re-created when you amend, alter or re-apply
a commit.
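As a minimal illustration (commands only, output omitted):

    # print the unique identifier of the current commit
    > git rev-parse HEAD

    # amending the commit re-creates it under a new identifier
    > git commit --amend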
@@ -733,10 +733,10 @@ repository.

<para>
For example, the following command pushes the changes from your local branch
<filename>yocto/standard/common-pc/base</filename> to the remote branch with the same name
in the master repository <filename>//git.mycompany.com/pub/git/kernel-2.6.37</filename>.
<filename>common_pc-standard</filename> to the remote branch with the same name
in the master repository <filename>//git.mycompany.com/pub/git/kernel-2.6.27</filename>.
<literallayout class='monospaced'>
> git push ssh://git.mycompany.com/pub/git/kernel-2.6.37 yocto/standard/common-pc/base:yocto/standard/common-pc/base
> git push ssh://git.mycompany.com/pub/git/kernel-2.6.27 common_pc-standard:common_pc-standard
</literallayout>
</para>

@@ -866,9 +866,9 @@ repository.

<para>
The following commands illustrate some of the steps you could use to
import the yocto/standard/common-pc/base kernel into a secondary SCM:
import the common_pc-standard kernel into a secondary SCM:
<literallayout class='monospaced'>
> git checkout yocto/standard/common-pc/base
> git checkout common_pc-standard
> cd .. ; echo linux/.git > .cvsignore
> cvs import -m "initial import" linux MY_COMPANY start
</literallayout>
@@ -881,8 +881,8 @@ repository.
<para>
The following commands illustrate how you can condense and merge two BSPs into a second SCM:
<literallayout class='monospaced'>
> git checkout yocto/standard/common-pc/base
> git merge yocto/standard/common-pc-64/base
> git checkout common_pc-standard
> git merge common_pc_64-standard
# resolve any conflicts and commit them
> cd .. ; echo linux/.git > .cvsignore
> cvs import -m "initial import" linux MY_COMPANY start
@@ -1006,12 +1006,9 @@ That's it. Configure and build.
<title>Creating a BSP Based on an Existing Similar BSP</title>

<para>
This section provides an example for creating a BSP
that is based on an existing, and hopefully, similar
one. It assumes you will be using a local kernel
repository and will be pointing the kernel recipe at
that. Follow these steps and keep in mind your
particular situation and differences:
This section provides an example for creating a BSP that is based on an existing, and hopefully,
similar one.
Follow these steps and keep in mind your particular situation and differences:

<orderedlist>
<listitem><para>
@@ -1019,14 +1016,16 @@ That's it. Configure and build.
</para>

<para>
You can start with something in <filename>meta/conf/machine</filename> - <filename>
meta/conf/machine/atom-pc.conf</filename> for example. Or, you can start with a machine
configuration from any of the BSP layers in the meta-intel repository at
<ulink url='http://git.yoctoproject.org/cgit/cgit.cgi/meta-intel/'></ulink>, such as
<filename>meta-intel/meta-emenlow/conf/machine/emenlow.conf</filename>.
You can start with something in <filename>meta/conf/machine</filename>.
Or, <filename>meta-emenlow/conf/machine</filename> has an example in its own layer.
</para>

<para>
The most up-to-date machines that are probably most similar to yours and that you might want
to look at are <filename>meta/conf/machine/atom-pc.conf</filename> and
<filename>meta-emenlow/conf/machine/emenlow.conf</filename>.
Both of these files were either just added or upgraded to use the Yocto Project kernel
at <ulink url='http://git.pokylinux.org/cgit/cgit.cgi/linux-2.6-windriver/'></ulink>.
The main difference between the two is that "emenlow" is in its own layer.
It is in its own layer because it needs extra machine-specific packages such as its
own video driver and other supporting packages.
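A machine file, sketched here with purely illustrative values (consult atom-pc.conf or emenlow.conf for the real set of variables your hardware needs), might contain entries such as:

    # conf/machine/mymachine.conf (illustrative values only)
    TARGET_ARCH = "i586"
    PREFERRED_PROVIDER_virtual/kernel ?= "linux-yocto-stable"
    KERNEL_IMAGETYPE = "bzImage"
    MACHINE_FEATURES = "screen keyboard pci usbhost"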
@@ -1050,21 +1049,19 @@ That's it. Configure and build.
<para>
As an example consider this:
<itemizedlist>
<listitem><para>Copy meta-emenlow to meta-mymachine</para></listitem>
<listitem><para>Copy meta-emenlow</para></listitem>
<listitem><para>Fix or remove anything you do not need.
For this example the only thing left was the kernel directory with a
<filename>linux-yocto_git.bbappend</filename>
file
and <filename>meta-mymachine/conf/machine/mymachine.conf</filename>
(linux-yocto is the kernel listed in
<filename>meta-emenlow/conf/machine/emenlow.conf</filename>).</para></listitem>
<filename>linux-yocto-stable_git.bbappend</filename> file
(linux-yocto-stable is the kernel listed in
<filename>meta-crownbay/conf/machine/crownbay.conf</filename>).</para></listitem>
<listitem><para>Add a new entry in the <filename>build/conf/bblayers.conf</filename>
so the new layer can be found by BitBake.</para></listitem>
so the new layer can be found by Bitbake.</para></listitem>
</itemizedlist>
</para></listitem>
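The bblayers.conf addition amounts to one more path in BBLAYERS. A sketch, assuming the copied layer is named meta-mymachine and the paths are placeholders:

    BBLAYERS ?= " \
      /path/to/poky/meta \
      /path/to/poky/meta-mymachine \
      "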

<listitem><para>
Create a machine branch for your machine.
Get an image with a working kernel built.
</para>

<para>
@@ -1073,52 +1070,58 @@ That's it. Configure and build.
To create this branch first create a bare clone of the Yocto Project git repository.
Next, create a local clone of that:
<literallayout class='monospaced'>
$ git clone --bare git://git.yoctoproject.org/linux-yocto-2.6.37.git
linux-yocto-2.6.37.git
$ git clone linux-yocto-2.6.37.git linux-yocto-2.6.37
$ git clone --bare git://git.pokylinux.org/linux-2.6-windriver.git
linux-2.6-windriver.git
$ git clone linux-2.6-windriver.git linux-2.6-windriver
</literallayout>
</para>

<para>
Now create a branch in the local clone and push it to the bare clone:
<literallayout class='monospaced'>
$ git checkout -b yocto/standard/mymachine origin/yocto/standard/base
$ git push origin yocto/standard/mymachine:yocto/standard/mymachine
$ git checkout -b crownbay-standard origin/standard
$ git push origin crownbay-standard:crownbay-standard
</literallayout>
</para></listitem>
</para>

<para>
At this point, your git tree should compile.
</para></listitem>

<listitem><para>
In a layer, create a <filename>linux-yocto_git.bbappend</filename>
In a layer, create a <filename>linux-yocto-stable_git.bbappend</filename>
file with the following:
</para>

<para>
<literallayout class='monospaced'>
FILESEXTRAPATHS := "${THISDIR}/${PN}"
COMPATIBLE_MACHINE_mymachine = "mymachine"
COMPATIBLE_MACHINE = ${MACHINE}

# It is often nice to have a local clone of the kernel repository, to
# allow patches to be staged, branches created, and so forth. Modify
# KSRC to point to your local clone as appropriate.

KSRC ?= /path/to/your/bare/clone/for/example/linux-yocto-2.6.37.git
# KSRC ?= /path/to/your/bare/clone/yocto-kernel

# KMACHINE is the branch to be built, or alternatively
# KMACHINE is the branch to be built, or alternateively
# KBRANCH can be directly set.
# KBRANCH is set to KMACHINE in the main linux-yocto_git.bb
# KBRANCH ?= "${LINUX_KERNEL_TYPE}/${KMACHINE}"

KMACHINE_mymachine = "yocto/standard/mymachine"
# KBRANCH ?= "${KMACHINE}-${LINUX_KERNEL_TYPE}"

SRC_URI = "git://${KSRC};nocheckout=1;branch=${KBRANCH},meta;name=machine,meta"
# SRC_URI = "git://${KSRC};nocheckout=1;branch=${KBRANCH},meta;name=machine,meta"
</literallayout>
</para>

<para>
In the previous sample file you need to update and remove the comment from
the KSRC assignment and also remove the comment from the SRC_URI line.
</para>
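After those two edits, the working portion of the append file would reduce to something like this (the KSRC path is a placeholder for your own bare clone):

    FILESEXTRAPATHS := "${THISDIR}/${PN}"
    COMPATIBLE_MACHINE = ${MACHINE}
    KSRC ?= /path/to/your/bare/clone/yocto-kernel
    SRC_URI = "git://${KSRC};nocheckout=1;branch=${KBRANCH},meta;name=machine,meta"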

<para>
After doing that, select the machine in <filename>build/conf/local.conf</filename>:
<literallayout class='monospaced'>
#
MACHINE ?= "mymachine"
MACHINE ?= "crownbay"
#
</literallayout>
</para>
@@ -1126,12 +1129,8 @@ That's it. Configure and build.
<para>
You should now be able to build and boot an image with the new kernel:
<literallayout class='monospaced'>
$ bitbake core-image-sato-live
$ bitbake poky-image-sato-live
</literallayout>
</para></listitem>

<listitem><para>
Modify the kernel configuration for your machine.
</para>

<para>
@@ -1150,22 +1149,17 @@ That's it. Configure and build.
And, another <filename>.cfg</filename> file would contain:
<literallayout class='monospaced'>
CONFIG_LOG_BUF_SHIFT=18
</literallayout>

<para>
These config fragments could then be picked up and
applied to the kernel .config by appending them to the kernel SRC_URI:
</para>
http://git.pokylinux.org/cgit/cgit.cgi/linux-2.6-windriver/

<literallayout class='monospaced'>
SRC_URI_append_mymachine = " file://some.cfg \
SRC_URI_append_crownbay = " file://some.cfg \
file://other.cfg \
"
</literallayout>
</para>

<para>
You could also add these directly to the git repository <filename>meta</filename>
You could also add these directly to the git repository <filename>wrs_meta</filename>
branch as well.
However, the former method is a simple starting point.
</para></listitem>
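Pulling the fragment pieces together, a sketch of the layout (file and layer names hypothetical): the .cfg files live next to the append file, and each holds a small group of related options:

    meta-mymachine/recipes-kernel/linux/linux-yocto-stable/some.cfg
    meta-mymachine/recipes-kernel/linux/linux-yocto-stable/other.cfg

    # some.cfg -- keep each fragment small and focused
    CONFIG_SMP=y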
@@ -1179,7 +1173,7 @@ That's it. Configure and build.
<para>
Practically speaking, to generate the patches, you'd go to the source in the build tree:
<literallayout class='monospaced'>
build/tmp/work/mymachine-poky-linux/linux-yocto-2.6.37+git0+d1cd5c80ee97e81e130be8c3de3965b770f320d6_0+
build/tmp/work/crownbay-poky-linux/linux-yocto-2.6.34+git0+d1cd5c80ee97e81e130be8c3de3965b770f320d6_0+
0431115c9d720fee5bb105f6a7411efb4f851d26-r13/linux
</literallayout>
</para>
@@ -1188,7 +1182,7 @@ That's it. Configure and build.
Then, modify the code there, using quilt to save the changes, and recompile until
it works:
<literallayout class='monospaced'>
$ bitbake -c compile -f linux-yocto
$ bitbake -c compile -f linux-yocto-stable
</literallayout>
</para></listitem>
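The quilt steps themselves, sketched for a hypothetical fix to a single file:

    $ quilt new mymachine-fix.patch
    $ quilt add drivers/misc/mydriver.c
    # edit the file, then record the change into the patch
    $ quilt refresh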

@@ -1197,7 +1191,7 @@ That's it. Configure and build.
SRC_URI location.
The patch is applied the next time you do a clean build.
Of course, since you have a branch for the BSP in git, it would be better to put it there instead.
For example, in this case, commit the patch to the "yocto/standard/mymachine" branch, and during the
For example, in this case, commit the patch to the "crownbay-standard" branch, and during the
next build it is applied from there.
</para></listitem>
</orderedlist>

@@ -1,8 +0,0 @@
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">

<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl" />

<!-- <xsl:param name="generate.toc" select="'article nop'"></xsl:param> -->

</xsl:stylesheet>
@@ -31,18 +31,7 @@
<revision>
<revnumber>0.9</revnumber>
<date>24 November 2010</date>
<revremark>This revision is the initial document draft and corresponds with
the Yocto Project 0.9 Release.</revremark>
</revision>
<revision>
<revnumber>1.0</revnumber>
<date>6 April 2011</date>
<revremark>This revision corresponds with the Yocto Project 1.0 Release.</revremark>
</revision>
<revision>
<revnumber>1.0.1</revnumber>
<date>23 May 2011</date>
<revremark>Released with Yocto Project 1.0.1 on 23 May 2011.</revremark>
<revremark>Beta Draft</revremark>
</revision>
</revhistory>


@@ -56,7 +56,7 @@ body {

h1,h2,h3,h4,h5,h6,h7 {
font-family: Arial, Sans;
color: #00557D;
color:#999999;
clear: both;
}

@@ -79,8 +79,9 @@ h2.subtitle {
h2 {
margin: 2em 0em 0.66em 0em;
padding: 0.5em 0em 0em 0em;
font-size: 1.5em;
font-size: 2em;
font-weight: bold;
color: black;
}

h3.subtitle {
@@ -93,29 +94,29 @@ h3.subtitle {
h3 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 140%;
font-size: 150%;
font-weight: bold;
color: black;
border-bottom: 2px solid black;
}

h4 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 120%;
font-weight: bold;
font-size: 130%;
border-bottom: 1px solid black;
}

h5 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 110%;
font-weight: bold;
font-size: 120%;
}

h6 {
margin: 1em 0em 0em 0em;
padding: 1em 0em 0em 0em;
font-size: 80%;
font-weight: bold;
font-size: 100%;
}

.authorgroup {
@@ -123,12 +124,12 @@ h6 {
background-repeat: no-repeat;
padding-top: 256px;
background-image: url("figures/kernel-title.png");
background-position: left top;
background-position: top;
margin-top: -256px;
padding-right: 50px;
margin-left: 0px;
margin-left: 50px;
text-align: right;
width: 740px;
width: 700px;
}

h3.author {
@@ -136,7 +137,6 @@ h3.author {
padding: 0em 0em 0em 0em;
font-weight: normal;
font-size: 100%;
color: #333;
clear: both;
}

@@ -159,7 +159,6 @@ h3.author {
.list-of-examples,
.list-of-figures {
padding: 1.33em 0em 2.5em 0em;
color: #00557D;
}

.toc p,
@@ -247,6 +246,7 @@ div.legalnotice p.legalnotice-title {
p {
line-height: 1.5em;
margin-top: 0em;
color: black; font-size: 100%;

}

@@ -946,7 +946,7 @@ table {

.tip,
.note {
background: #666666;
background: #91ae35;
color: #fff;
padding: 20px;
margin: 20px;

@@ -3,6 +3,6 @@

<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl" />

<!-- <xsl:param name="generate.toc" select="'article nop'"></xsl:param> -->
<xsl:param name="generate.toc" select="'article nop'"></xsl:param>

</xsl:stylesheet>
36
documentation/poky-ref-manual/Makefile
Normal file
@@ -0,0 +1,36 @@
XSLTOPTS = --stringparam html.stylesheet style.css \
           --stringparam chapter.autolabel 1 \
           --stringparam appendix.autolabel A \
           --stringparam section.autolabel 1 \
           --stringparam section.label.includes.component.label 1 \
           --xinclude

##
# These URI should be rewritten by your distribution's xml catalog to
# match your locally installed XSL stylesheets.
XSL_BASE_URI = http://docbook.sourceforge.net/release/xsl/current
XSL_XHTML_URI = $(XSL_BASE_URI)/xhtml/docbook.xsl

all: html pdf tarball

pdf:
	../tools/poky-docbook-to-pdf poky-ref-manual.xml ../template

html:
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
	xsltproc $(XSLTOPTS) -o poky-ref-manual.html poky-ref-manual-customization.xsl poky-ref-manual.xml

tarball: html
	tar -cvzf poky-ref-manual.tgz poky-ref-manual.html style.css figures/yocto-project-transp.png figures/poky-ref-manual.png screenshots/ss-sato.png

validate:
	xmllint --postvalid --xinclude --noout poky-ref-manual.xml

OUTPUTS = poky-ref-manual.tgz poky-ref-manual.html poky-ref-manual.pdf
SOURCES = *.png *.xml *.css *.svg

publish:
	scp -r $(OUTPUTS) $(SOURCES) o-hand.com:/srv/www/pokylinux.org/doc/

clean:
	rm -f $(OUTPUTS)
@@ -12,7 +12,7 @@
</para>

<section id="platdev-appdev-external-sdk">
<title>External Development Using the Application Development Toolkit (ADT)</title>
<title>External Development Using the Poky SDK</title>
<para>
The meta-toolchain and meta-toolchain-sdk targets build tarballs that contain toolchains and
libraries suitable for application development outside of Poky.
@@ -45,41 +45,17 @@
</section>

<section id="using-the-eclipse-and-anjuta-plug-ins">
<title>Using the Eclipse Plug-in</title>
<title>Using the Eclipse and Anjuta Plug-ins</title>
<para>
The current release of the Yocto Project supports the Eclipse IDE plug-in
to make developing software easier for the application developer.
The plug-in provides capability extensions to the graphical IDE to allow
for cross compilation, deployment and execution of the output in a QEMU
emulation session.
Support of the Eclipse plug-in also allows for cross debugging and
profiling.
Additionally, the Eclipse plug-in provides a suite of tools
Yocto Project supports both Anjuta and Eclipse IDE plug-ins to make developing software
easier for the application developer. The plug-ins provide capability
extensions to the graphical IDE allowing for cross compilation,
deployment and execution of the output in a QEMU emulation session.
Support of these plug-ins also allows for cross debugging and
profiling. Additionally, the Eclipse plug-in provides a suite of tools
that allows the developer to perform remote profiling, tracing, collection of
power data, collection of latency data and collection of performance data.
</para>
<note>
The current release of the Yocto Project no longer supports the Anjuta plug-in.
However, the Poky Anjuta Plug-in is available to download directly from the Poky
Git repository located through the web interface at
<ulink url="http://git.yoctoproject.org/"></ulink> under IDE Plugins.
The community is free to continue supporting it beyond the Yocto Project 0.9
Release.
</note>
<para>
To use the Eclipse plug-in you need the Eclipse Framework (Helios 3.6.1) along
with other plug-ins installed into the Eclipse IDE.
Once you have your environment set up, you need to configure the Eclipse plug-in.
For information on how to install and configure the Eclipse plug-in, see the
<ulink url='http://www.yoctoproject.org/docs/adt-manual/adt-manual.html#adt-eclipse'>
"Working Within Eclipse"</ulink> chapter in the
<ulink url='http://www.yoctoproject.org/docs/adt-manual/adt-manual.html'>
"Application Development Toolkit (ADT) User's Guide."</ulink>
</para>



<!--

<section id="the-eclipse-plug-in">
<title>The Eclipse Plug-in</title>
@@ -91,13 +67,12 @@
<literallayout class='monospaced'>
Help -> Install New Software
</literallayout>
Specify the target URL as
<ulink url='http://www.yoctoproject.org/downloads/eclipse-plugin/'></ulink>.
Specify the target URL as <ulink url='http://www.yoctoproject.org/downloads/eclipse-plug-in/'></ulink>.
</para>
<para>
If you want to download the source code for the plug-in you can find it in the Poky
git repository, which has a web interface, and is located at
<ulink url="http://git.yoctoproject.org"></ulink> under IDE Plugins.
<ulink url="http://git.pokylinux.org/cgit.cgi/eclipse-poky"></ulink>.
</para>

<section id="installing-and-setting-up-the-eclipse-ide">
@@ -326,14 +301,15 @@
Plug-in are all required.
The Poky Anjuta Plug-in is available to download as a tarball at the OpenedHand
labs <ulink url="http://labs.o-hand.com/anjuta-poky-sdk-plugin/"></ulink> page or
directly from the Poky Git repository located at git://git.yoctoproject.org/anjuta-poky.
You can access the source code from a web interface to the repository at
<ulink url="http://git.yoctoproject.org/"></ulink> under IDE Plugins.
directly from the Poky Git repository located at
<ulink url="git://git.pokylinux.org/anjuta-poky"></ulink>.
You can also access a web interface to the repository at
<ulink url="http://git.pokylinux.org/?p=anjuta-poky.git;a=summary"></ulink>.
</para>
<para>
See the README file contained in the project for more information on
Anjuta dependencies and building the plug-in.
If you want to disable remote gdb debugging, pass the "‐‐disable-gdb-integration" switch when
If you want to disable remote gdb debugging, pass the "--disable-gdb-integration" switch when
you configure the plug-in.
</para>
<section id="setting-up-the-anjuta-plugin">
@@ -383,8 +359,8 @@
triplet is "i586-poky-linux".</para></listitem>
<listitem><para>Kernel: Use the file chooser to select the kernel used with QEMU.</para></listitem>
<listitem><para>Root filesystem: Use the file chooser to select the root
filesystem directory. This directory is where you use "runqemu-extract-sdk" to extract the
core-image-sdk tarball.</para></listitem>
filesystem directory. This directory is where you use "poky-extract-sdk" to extract the
poky-image-sdk tarball.</para></listitem>
</itemizedlist>
</para>
</section>
@@ -440,10 +416,6 @@
</para>
</section>
</section>


-->

</section>

<section id="platdev-appdev-qemu">
@@ -719,7 +691,7 @@
</literallayout>
Once the binary is built you can find it here:
<programlisting>
tmp/sysroots/<host-arch>/usr/bin/<target-abi>-gdb
tmp/sysroots/<host-arch</usr/bin/<target-abi>-gdb
</programlisting>
</para>

@@ -738,7 +710,7 @@ tmp/sysroots/<host-arch>/usr/bin/<target-abi>-gdb
<para>
Perhaps the easiest is to have an 'sdk' image that corresponds to the plain
image installed on the device.
In the case of 'core-image-sato', 'core-image-sdk' would contain suitable symbols.
In the case of 'poky-image-sato', 'poky-image-sdk' would contain suitable symbols.
Because the sdk images already have the debugging symbols installed it is just a
question of expanding the archive to some location and then informing GDB.
</para>
@@ -764,17 +736,17 @@ tmp/sysroots/<host-arch>/usr/bin/<target-abi>-gdb
<filename>tmp/rootfs</filename>:
<programlisting>
tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
tmp/work/<target-abi>/core-image-sato-1.0-r0/temp/opkg.conf -o \
tmp/work/<target-abi>/poky-image-sato-1.0-r0/temp/opkg.conf -o \
tmp/rootfs/ update
</programlisting></para></listitem>
<listitem><para>Install the debugging information:
<programlisting>
tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
tmp/work/<target-abi>/core-image-sato-1.0-r0/temp/opkg.conf \
tmp/work/<target-abi>/poky-image-sato-1.0-r0/temp/opkg.conf \
-o tmp/rootfs install foo

tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
tmp/work/<target-abi>/core-image-sato-1.0-r0/temp/opkg.conf \
tmp/work/<target-abi>/poky-image-sato-1.0-r0/temp/opkg.conf \
-o tmp/rootfs install foo-dbg
</programlisting></para></listitem>
</orderedlist>
@@ -937,8 +909,8 @@ $ opreport -cl

<para>
A graphical user interface for OProfile is also available.
You can download and build it from the Yocto Project at
<ulink url="http://git.yoctoproject.org/cgit.cgi/oprofileui/"></ulink>.
You can download and build it from svn at
<ulink url="http://svn.o-hand.com/repos/oprofileui/trunk/"></ulink>.
If the "tools-profile" image feature is selected, all necessary binaries
are installed onto the target device for OProfileUI interaction.
</para>
@@ -954,7 +926,7 @@ $ opreport -cl
</caption>
</mediaobject>
</screenshot>

-->
<para>
In order to convert the data in the sample format from the target
to the host you need the <filename>opimport</filename> program.
@@ -963,12 +935,13 @@ $ opreport -cl
<ulink url='http://debian.o-hand.com/'>OpenedHand repository</ulink>.
We recommend using OProfile 0.9.3 or greater.
</para>
-->
<para>
Even though Poky usually includes all needed patches on the target device, you
might find you need other OProfile patches for recent OProfileUI features.
If so, see the <ulink url='http://git.yoctoproject.org/cgit.cgi/oprofileui/tree/README'>
If so, see the <ulink url='http://svn.o-hand.com/repos/oprofileui/trunk/README'>
OProfileUI README</ulink> for the most recent information.
You can also see the <ulink url="http://labs.o-hand.com/oprofileui">OProfileUI website
</ulink> for general information on the OProfileUI project.
</para>

<section id="platdev-oprofile-oprofileui-online">
@@ -1065,7 +1038,7 @@ $ opreport -cl
a "vmlinux" file that matches the running kernel is available.
In Poky, that file is usually located in
<filename>/boot/vmlinux-KERNELVERSION</filename>, where KERNELVERSION is the
version of the kernel.
version of the kernel (e.g. 2.6.23).
Poky generates separate vmlinux packages for each kernel
it builds so it should be a question of just making sure a matching package is
installed - for example: <filename>opkg install kernel-vmlinux</filename>.
||||