Mirror of https://git.yoctoproject.org/poky (synced 2026-01-30 21:38:43 +01:00)

Compare commits (2 commits)
| Author | SHA1 | Date |
|---|---|---|
|  | b9065372f4 |  |
|  | 0e09f04573 |  |

.gitignore (vendored, 35 lines)
@@ -1,35 +0,0 @@
*.pyc
*.pyo
build/conf/local.conf
build/conf/bblayers.conf
build/tmp/
pstage/
scripts/poky-git-proxy-socks
sources/
meta-darwin
meta-maemo
meta-prvt*
poky-autobuilder*
*.swp
*.orig
*.rej
*~
handbook/poky-doc-tools/Makefile
handbook/poky-doc-tools/Makefile.in
handbook/poky-doc-tools/aclocal.m4
handbook/poky-doc-tools/autom4te.cache/
handbook/poky-doc-tools/common/Makefile
handbook/poky-doc-tools/common/Makefile.in
handbook/poky-doc-tools/common/fop-config.xml
handbook/poky-doc-tools/config.log
handbook/poky-doc-tools/config.status
handbook/poky-doc-tools/configure
handbook/poky-doc-tools/install-sh
handbook/poky-doc-tools/missing
handbook/poky-doc-tools/poky-docbook-to-pdf
handbook/poky-handbook.html
handbook/poky-handbook.pdf
handbook/poky-handbook.tgz
handbook/bsp-guide.html
handbook/bsp-guide.pdf

LICENSE (14 lines)
@@ -1,14 +0,0 @@
Different components of Poky are under different licenses (a mix of
MIT and GPLv2). Please see:

bitbake/COPYING (GPLv2)
meta/COPYING.MIT (MIT)
meta-extras/COPYING.MIT (MIT)

which cover the components in those subdirectories. This means all
metadata is MIT licensed unless otherwise stated. Source code included
in tree for individual recipes is under the LICENSE stated in the .bb
file for those software projects unless otherwise stated.

License information for any other files is either explicitly stated
or defaults to GPL version 2.

README (75 lines)
@@ -1,15 +1,66 @@
Poky
====

Introduction
============

Poky platform builder is a combined cross build system and development
environment. It features support for building X11/Matchbox/GTK based
filesystem images for various embedded devices and boards. It also
supports cross-architecture application development using QEMU emulation
and a standalone toolchain and SDK with IDE integration.

'Poky' is a combined cross build system and Linux distribution based
upon OpenEmbedded. It features support for building X11/Matchbox/GTK
based filesystem images for various embedded devices and boards.

Poky has an extensive handbook, the source of which is contained in
the handbook directory. For compiled HTML or PDF versions of this,
see the Poky website http://pokylinux.org.

Additional information on the specifics of hardware that Poky supports
is available in README.hardware.

Required Packages
=================

Running Poky on Debian based distributions requires the following
extra packages to be installed:

build-essential
diffstat
texinfo
texi2html
cvs
subversion
gawk
bochsbios (to run qemux86 images)

You also need to install qemu from http://debian.o-hand.com/. A
poky-depends deb is also available from this source which will install
all the dependencies mentioned above for you.

Alternatively poky can build qemu itself, but for this you need the
following packages installed:

gcc-3.4
libsdl1.2-dev
zlib1g-dev

You will also need to comment out ASSUME_PROVIDED += "qemu-native" in
build/conf/local.conf.
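
For example, the relevant line in build/conf/local.conf would then look
something like this (illustrative sketch only; the surrounding contents
of your local.conf will vary):

  # commented out so Poky builds qemu itself instead of assuming the host provides it
  # ASSUME_PROVIDED += "qemu-native"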

Building under other distros such as Fedora is known to work. Use the above
package names as a guide for dependencies.

Building An Image
=================

Simply run:

% source poky-init-build-env
% bitbake oh-image-pda

This will result in an ext2 image and kernel for qemu arm (see the scripts dir).

To build for other machine types, see the MACHINE setting in
build/conf/local.conf.
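
For instance, to target the emulated x86 machine instead, local.conf could
contain a line along these lines (illustrative; valid MACHINE values
correspond to the files under meta/conf/machine/):

  MACHINE = "qemux86"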

Notes
=====

Useful links:

OpenedHand
http://openedhand.com

Poky Homepage
http://projects.o-hand.com/poky

OE Homepage and wiki
http://openembedded.org

README.hardware (436 lines)
@@ -1,436 +0,0 @@
Poky Hardware Reference Guide
=============================

This file gives details about using Poky with different hardware reference
boards and consumer devices. A full list of target machines can be found by
looking in the meta/conf/machine/ directory. If in doubt about using Poky with
your hardware, consult the documentation for your board/device. To discuss
support for further hardware reference boards/devices please contact OpenedHand.

QEMU Emulation Images (qemuarm and qemux86)
===========================================

To simplify development Poky supports building images to work with the QEMU
emulator in system emulation mode. Two architectures are currently supported,
ARM (via qemuarm) and x86 (via qemux86). Use of the QEMU images is covered
in the Poky Handbook.

Hardware Reference Boards
=========================

The following boards are supported by Poky:

  * Compulab CM-X270 (cm-x270)
  * Compulab EM-X270 (em-x270)
  * FreeScale iMX31ADS (mx31ads)
  * Marvell PXA3xx Zylonite (zylonite)
  * Logic iMX31 Lite Kit (mx31litekit)
  * Phytec phyCORE-iMX31 (mx31phy)

For more information see each board's section below. The Poky MACHINE setting
corresponding to the board is given in brackets.

Consumer Devices
================

The following consumer devices are supported by Poky:

  * FIC Neo1973 GTA01 smartphone (fic-gta01)
  * HTC Universal (htcuniversal)
  * Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)
  * Sharp Zaurus SL-C7x0 series (c7x0)
  * Sharp Zaurus SL-C1000 (akita)
  * Sharp Zaurus SL-C3x00 series (spitz)

For more information see each device's section below. The Poky MACHINE setting
corresponding to the device is given in brackets.

Poky Boot CD (bootcdx86)
========================

The Poky boot CD iso images are designed as a demonstration of the Poky
environment and to show the versatile image formats Poky can generate. They
run on Pentium2 or greater PC style computers. The iso image can be
burnt to a CD and then booted from it.

Hardware Reference Boards
=========================

Compulab CM-X270 (cm-x270)
==========================

The bootloader on this board doesn't support writing jffs2 images directly to
NAND and normally uses a proprietary kernel flash driver. To allow the use of
jffs2 images, a two stage updating procedure is needed. Firstly, an initramfs
is booted which contains mtd utilities and this is then used to write the main
filesystem.

It is assumed the board is connected to a network where a TFTP server is
available and that a serial terminal is available to communicate with the
bootloader (38400, 8N1). If a DHCP server is available the device will use it
to obtain an IP address. If not, run:

  ARMmon > setip dhcp off
  ARMmon > setip ip 192.168.1.203
  ARMmon > setip mask 255.255.255.0

To reflash the kernel:

  ARMmon > download kernel tftp zimage 192.168.1.202
  ARMmon > flash kernel

where zimage is the name of the kernel on the TFTP server and its IP address is
192.168.1.202. The names of the files must be all lowercase.

To reflash the initrd/initramfs:

  ARMmon > download ramdisk tftp diskimage 192.168.1.202
  ARMmon > flash ramdisk

where diskimage is the name of the initramfs image (a cpio.gz file).

To boot the initramfs:

  ARMmon > ramdisk on
  ARMmon > bootos "console=ttyS0,38400 rdinit=/sbin/init"

To reflash the main image, log in to the system as user "root", then run:

  # ifconfig eth0 192.168.1.203
  # tftp -g -r mainimage 192.168.1.202
  # flash_eraseall /dev/mtd1
  # nandwrite /dev/mtd1 mainimage

which configures the network interface with the IP address 192.168.1.203,
downloads the "mainimage" file from the TFTP server at 192.168.1.202, erases
the flash and then writes the new image to the flash.

The main image can then be booted with:

  ARMmon > bootos "console=ttyS0,38400 root=/dev/mtdblock1 rootfstype=jffs2"

Note that the initramfs image is built by poky in a slightly different mode to
normal since it uses uclibc. To generate it, use a command like:

  IMAGE_FSTYPES=cpio.gz MACHINE=cm-x270 POKYLIBC=uclibc bitbake poky-image-minimal-mtdutils
Compulab EM-X270 (em-x270)
==========================

Fetch the "Linux - kernel and run-time image (Angstrom)" ZIP file from the
Compulab website. Inside the images directory of this ZIP file is another ZIP
file called 'LiveDisk.zip'. Extract this over a cleanly formatted vfat USB flash
drive. Replace the 'em_x270.img' file with the 'updater-em-x270.ext2' file.

Insert this USB disk into the supplied adapter and connect this to the
board. Whilst holding down the suspend button, press the reset button. The
board will now boot off the USB key and into a version of Angstrom. On the
desktop is an icon labelled "Updater". Run this program to launch the updater
that will flash the Poky kernel and rootfs to the board.

FreeScale iMX31ADS (mx31ads)
============================

The correct serial port is the top-most female connector to the right of the
ethernet socket.

For uploading data to RedBoot we are going to use TFTP. In this example we
assume that the TFTP server is on 192.168.9.1 and the board is on 192.168.9.2.

To set the IP address, run:

  ip_address -l 192.168.9.2/24 -h 192.168.9.1

To download a kernel called "zimage" from the TFTP server, run:

  load -r -b 0x100000 zimage

To write the kernel to flash, run:

  fis create kernel

To download a rootfs jffs2 image "rootfs" from the TFTP server, run:

  load -r -b 0x100000 rootfs

To write the root filesystem to flash, run:

  fis create root

To load and boot a kernel and rootfs from flash:

  fis load kernel
  exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/mtdblock2 rootfstype=jffs2 init=linuxrc ip=none"

To load and boot a kernel from a TFTP server with the rootfs over NFS:

  load -r -b 0x100000 zimage
  exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/nfs nfsroot=192.168.9.1:/mnt/nfsmx31 rw ip=192.168.9.2::192.168.9.1:255.255.255.0"

The instructions above are for using the (default) NOR flash on the board;
there is also 128M of NAND flash. It is possible to install Poky to the NAND
flash, which gives more space for the rootfs, and instructions for using it are
given below. To switch to the NAND flash:

  factive NAND

This will then restart RedBoot using the NAND rather than the NOR. If you
have not used the NAND before then it is unlikely that there will be a
partition table yet. You can get the list of partitions with 'fis list'.

If this shows no partitions then you can create them with:

  fis init

The output of 'fis list' should now show:

  Name              FLASH addr   Mem addr     Length      Entry point
  RedBoot           0xE0000000   0xE0000000   0x00040000  0x00000000
  FIS directory     0xE7FF4000   0xE7FF4000   0x00003000  0x00000000
  RedBoot config    0xE7FF7000   0xE7FF7000   0x00001000  0x00000000

Partitions for the kernel and rootfs need to be created:

  fis create -l 0x1A0000 -e 0x00100000 kernel
  fis create -l 0x5000000 -e 0x00100000 root

You may now use the instructions above for flashing. However it is important
to note that the erase block size for the NAND is different to the NOR, so the
JFFS erase size will need to be changed to 0x4000. Standard images are built
for NOR and you will need to build custom images for NAND.
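
As an illustrative sketch only (the variable name below is an assumption
based on OpenEmbedded conventions of this era, not taken from this file):
the jffs2 erase size was typically adjusted by overriding the image command
options in local.conf, along these lines:

  EXTRA_IMAGECMD_jffs2 = "--pad --eraseblock=0x4000"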

You will also need to update the kernel command line to use the correct root
filesystem. This should be '/dev/mtdblock7' if you adhere to the partitioning
scheme shown above. If this fails then you can double-check against the output
from the kernel when it evaluates the available mtd partitions.

Marvell PXA3xx Zylonite (zylonite)
==================================

These instructions assume the Zylonite is connected to a machine running a TFTP
server at address 192.168.123.5 and that a serial link (38400 8N1) is available
to access the blob bootloader. The kernel is on the TFTP server as
"zylonite-kernel", the root filesystem jffs2 file is "zylonite-rootfs", and
the images are to be saved in NAND flash.

The following commands set up blob:

  blob> setip client 192.168.123.4
  blob> setip server 192.168.123.5

To flash the kernel:

  blob> tftp zylonite-kernel
  blob> nandwrite -j 0x80800000 0x60000 0x200000

To flash the rootfs:

  blob> tftp zylonite-rootfs
  blob> nanderase -j 0x260000 0x5000000
  blob> nandwrite -j 0x80800000 0x260000 <length>

(where <length> is the rootfs size, which will be printed by the tftp step)

To boot the board:

  blob> nkernel
  blob> boot

Logic iMX31 Lite Kit (mx31litekit)
==================================

The easiest method to boot this board is to take an MMC/SD card and format
the first partition as ext2, then extract the poky image onto this as root.
Assuming the board is network connected, a TFTP server is available at
192.168.1.33 and a serial terminal is available (115200 8N1), the following
commands will boot a kernel called "mx31kern" from the TFTP server:

  losh> ifconfig sm0 192.168.1.203 255.255.255.0 192.168.1.33
  losh> load raw 0x80100000 0x200000 /tftp/192.168.1.33:mx31kern
  losh> exec 0x80100000 -

Phytec phyCORE-iMX31 (mx31phy)
==============================

Support for this board is currently being developed. Experimental jffs2
images and a suitable kernel are available and are known to work with the
board.
Consumer Devices
================

FIC Neo1973 GTA01 smartphone (fic-gta01)
========================================

To install Poky on a GTA01 smartphone you will need the "dfu-util" tool,
which you can build with the "bitbake dfu-util-native" command.

Flashing requires these steps:

1. Power down the device.
2. Connect the device to the host machine via USB.
3. Hold the AUX key and press the Power key. There should be a boot menu
   on screen.
4. Run "dfu-util -l" to check if the phone is visible on the USB bus.
   The output should look like this:

   dfu-util - (C) 2007 by OpenMoko Inc.
   This program is Free Software and has ABSOLUTELY NO WARRANTY

   Found Runtime: [0x1457:0x5119] devnum=19, cfg=0, intf=2, alt=0, name="USB Device Firmware Upgrade"

5. Flash the kernel with "dfu-util -a kernel -D uImage-2.6.21.6-moko11-r2-fic-gta01.bin"
6. Flash the rootfs with "dfu-util -a rootfs -D <image>", where <image> is the
   jffs2 image file to use as the root filesystem
   (e.g. ./tmp/deploy/images/poky-image-sato-fic-gta01.jffs2)

HTC Universal (htcuniversal)
============================

Note: HTC Universal support is highly experimental.

On the HTC Universal, entirely replacing the Windows installation is not
supported; instead Poky is booted from an MMC/SD card from Windows. Once Poky
has booted, Windows is no longer in memory or active, but when power is removed
the user will be returned to Windows and will need to return to Linux from
there.

Once an MMC/SD card is available it is suggested it is split into two
partitions: one for a program called HaRET, which lets you boot Linux from
within Windows, and the second for the rootfs. The HaRET partition should be
the first partition on the card and be vfat formatted. It doesn't need to be
large, just enough for HaRET and a kernel (say 5MB max). The rootfs should be
ext2 and is usually the second partition. The first partition should be vfat
so Windows recognises it; if it doesn't, Windows has been known to reformat
cards.

On the first partition you need three files:

  * a HaRET binary (version 0.5.1 works well and a working version
    should be part of the last Poky release)
  * a kernel renamed to "zImage"
  * a default.txt which contains:

    set kernel "zImage"
    set mtype "855"
    set cmdline "root=/dev/mmcblk0p2 rw console=ttyS0,115200n8 console=tty0 rootdelay=5 fbcon=rotate:1"
    boot2

On the second partition the root file system is extracted as root. A different
partition layout or other kernel options can be set in the default.txt file.

When inserted into the device, Windows should see the card and let you browse
its contents using File Explorer. Running the HaRET binary will present a dialog
box (maybe after messages warning about running unsigned binaries) where you
select OK and you should then see Poky boot. Kernel messages can be seen by
adding psplash=false to the kernel command line.
Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)
============================================================

Note: Nokia tablet support is highly experimental.

The Nokia internet tablet devices are OMAP based tablet form-factor devices
with large screens (800x480), wifi and touchscreen.

To flash images to these devices you need the "flasher" utility, which can be
downloaded from http://tablets-dev.nokia.com/d3.php?f=flasher-3.0. This
utility needs to be run as root and the usb filesystem needs to be mounted,
although most distributions will have done this for you. Once you have this,
follow these steps:

1. Power down the device.
2. Connect the device to the host machine via USB
   (connecting power to the device doesn't hurt either).
3. Run "flasher -i"
4. Power on the device.
5. The program should give an indication it's found
   a tablet device. If not, recheck the cables, make sure you're
   root and usbfs/usbdevfs is mounted.
6. Run "flasher -r <image> -k <kernel> -f", where <image> is the
   jffs2 image file to use as the root filesystem
   (e.g. ./tmp/deploy/images/poky-image-sato-nokia800.jffs2)
   and <kernel> is the kernel to use
   (e.g. ./tmp/deploy/images/zImage-nokia800.bin).
7. Run "flasher -R" to reboot the device.
8. The device should boot into Poky.

The nokia800 images and kernel will run on both the N800 and N810.
Sharp Zaurus SL-C7x0 series (c7x0)
==================================

The Sharp Zaurus c7x0 series (SL-C700, SL-C750, SL-C760, SL-C860, SL-7500)
are PXA25x based handheld PDAs with VGA screens. To install Poky images on
these devices follow these steps:

1. Obtain an SD/MMC or CF card with a vfat or ext2 filesystem.
2. Copy a jffs2 image file (e.g. poky-image-sato-c7x0.jffs2) onto the
   card as "initrd.bin":

   $ cp ./tmp/deploy/images/poky-image-sato-c7x0.jffs2 /path/to/my-cf-card/initrd.bin

3. Copy a Linux kernel file (zImage-c7x0.bin) onto the card as
   "zImage.bin":

   $ cp ./tmp/deploy/images/zImage-c7x0.bin /path/to/my-cf-card/zImage.bin

4. Copy an updater script (updater.sh.c7x0) onto the card
   as "updater.sh":

   $ cp ./tmp/deploy/images/updater.sh.c7x0 /path/to/my-cf-card/updater.sh

5. Power down the Zaurus.
6. Hold the "OK" key and power on the device. An update menu should appear
   (in Japanese).
7. Choose "Update" (item 4).
8. The next screen will ask for the source; choose the appropriate
   card (CF or SD).
9. Make sure AC power is connected.
10. The next screen asks for confirmation; choose "Yes" (the left button).
11. The update process will start, flash the files on the card onto
    the device, and the device will then reboot into Poky.

Sharp Zaurus SL-C1000 (akita)
=============================

The Sharp Zaurus SL-C1000 is a PXA270 based device otherwise similar to the
c7x0. To install Poky images on this device follow the instructions for
the c7x0 but replace "c7x0" with "akita" where appropriate.

Sharp Zaurus SL-C3x00 series (spitz)
====================================

The Sharp Zaurus SL-C3x00 devices are PXA270 based devices similar
to akita but with an internal microdrive. The installation procedure
assumes a standard microdrive based device where the root (first)
partition has been enlarged to fit the image (at least 100MB,
400MB for the SDK).

The procedure is the same as for the c7x0 and akita models with the
following differences:

1. Instead of a jffs2 image you need to copy a compressed tarball of the
   root filesystem (e.g. poky-image-sato-spitz.tar.gz) onto the
   card as "hdimage1.tgz":

   $ cp ./tmp/deploy/images/poky-image-sato-spitz.tar.gz /path/to/my-cf-card/hdimage1.tgz

2. You additionally need to copy a special tar utility (gnu-tar) onto
   the card as "gnu-tar":

   $ cp ./tmp/deploy/images/gnu-tar /path/to/my-cf-card/gnu-tar
AUTHORS

@@ -2,7 +2,7 @@ Tim Ansell <mithro@mithis.net>
Phil Blundell <pb@handhelds.org>
Seb Frankengul <seb@frankengul.org>
Holger Freyther <zecke@handhelds.org>
Marcin Juszkiewicz <marcin@juszkiewicz.com.pl>
Marcin Juszkiewicz <hrw@hrw.one.pl>
Chris Larson <kergoth@handhelds.org>
Ulrich Luckas <luckas@musoft.de>
Mickey Lauer <mickey@Vanille.de>

ChangeLog

@@ -1,227 +1,10 @@
Changes in Bitbake 1.9.x:
- Add PE (Package Epoch) support from Philipp Zabel (pH5)
- Treat python functions the same as shell functions for logging
- Use TMPDIR/anonfunc as a __anonfunc temp directory (T)
- Catch truncated cache file errors
- Allow operations other than assignment on flag variables
- Add code to handle inter-task dependencies
- Fix cache errors when generating dotGraphs
- Make sure __inherit_cache is updated before calling include() (from Michael Krelin)
- Fix bug when target was in ASSUME_PROVIDED (#2236)
- Raise ParseError for filenames with multiple underscores instead of infinitely looping (#2062)
- Fix invalid regexp in BBMASK error handling (missing import) (#1124)
- Promote certain warnings from debug to note 2 level
- Update manual
- Correctly redirect stdin when forking
- If parsing errors are found, exit; too many users miss the errors
- Remove spurious PREFERRED_PROVIDER warnings
- svn fetcher: Add _buildsvncommand function
- Improve certain error messages
- Rewrite svn fetcher to make adding extra operations easier
  as part of future SRCDATE="now" fixes
  (requires new FETCHCMD_svn definition in bitbake.conf)
- Change SVNDIR layout to be more unique (fixes #2644 and #2624)
- Add ConfigParsed Event after configuration parsing is complete
- Add SRCREV support for svn fetcher
- data.emit_var() - only call getVar if we need the variable
- Stop generating the A variable (seems to be legacy code)
- Make sure intertask depends get processed correctly in recursive depends
- Add pn-PN to overrides when evaluating PREFERRED_VERSION
- Improve the progress indicator by skipping tasks that have
  already run before starting the build rather than during it
- Add profiling option (-P)
- Add BB_SRCREV_POLICY variable (clear or cache) to control SRCREV cache
- Add SRCREV_FORMAT support
- Fix local fetcher's localpath return values
- Apply OVERRIDES before performing immediate expansions
- Allow the -b -e option combination to take regular expressions
- Fix handling of variables with expansion in the name using _append/_prepend
  e.g. RRECOMMENDS_${PN}_append_xyz = "abc"
- Add plain message function to bb.msg
- Sort the list of providers before processing so dependency problems are
  reproducible rather than effectively random
- Fix/improve bitbake -s output
- Add locking for fetchers so only one tries to fetch a given file at a given time
- Fix int(0)/None confusion in runqueue.py which causes random gaps in dependency chains
- Expand data in addtasks
- Print the list of missing DEPENDS, RDEPENDS for the "No buildable providers available for required...."
  error message.
- Rework add_task to be more efficient (6% speedup, 7% number of function calls reduction)
- Sort digraph output to make builds more reproducible
- Split expandKeys into two for loops to benefit from the expand_cache (12% speedup)
- runqueue.py: Fix idepends handling to avoid dependency errors
- Clear the terminal TOSTOP flag if set (and warn the user)
- Fix regression from r653 and make SRCDATE/CVSDATE work for packages again
- Fix a bug in bb.decodeurl where http://some.where.com/somefile.tgz decoded to host="" (#1530)
- Warn about malformed PREFERRED_PROVIDERS (#1072)
- Add support for BB_NICE_LEVEL option (#1627)
- Psyco is used only on x86 as there is no support for other architectures.
- Sort initial providers list by default preference (#1145, #2024)
- Improve provider sorting so preferred versions have preference over latest versions (#768)
- Detect builds of tasks with overlapping providers and warn (will become a fatal error) (#1359)
- Add MULTI_PROVIDER_WHITELIST variable to allow known safe multiple providers to be listed
- Handle paths in svn fetcher module parameter
- Support the syntax "export VARIABLE"
- Add bzr fetcher
- Add support for cleaning directories before a task in the form:
  do_taskname[cleandirs] = "dir"
- bzr fetcher tweaks from Robert Schuster (#2913)
- Add mercurial (hg) fetcher from Robert Schuster (#2913)
- Don't add duplicates to BBPATH
- Fix preferred_version return values (providers.py)
- Fix 'depends' flag splitting
- Fix unexport handling (#3135)
- Add bb.copyfile function similar to bb.movefile (and improve movefile error reporting)
- Allow multiple options for deptask flag
- Use git-fetch instead of git-pull removing any need for merges when
  fetching (we don't care about the index). Fixes fetch errors.
- Add BB_GENERATE_MIRROR_TARBALLS option, set to 0 to make git fetches
  faster at the expense of not creating mirror tarballs.
- SRCREV handling updates, improvements and fixes from Poky
- Add bb.utils.lockfile() and bb.utils.unlockfile() from Poky
- Add support for task selfstamp and lockfiles flags
- Disable task number acceleration since it can allow the tasks to run
  out of sequence
- Improve runqueue code comments
- Add task scheduler abstraction and some example schedulers
- Improve circular dependency chain debugging code and user feedback
- Don't give a stacktrace for invalid tasks, have a user friendly message (#3431)
- Add support for "-e target" (#3432)
- Fix shell showdata command (#3259)
- Fix shell data updating problems (#1880)
- Properly raise errors for invalid source URI protocols
- Change the wget fetcher failure handling to avoid lockfile problems
- Add support for branches in git fetcher (Otavio Salvador, Michael Lauer)
- Make taskdata and runqueue errors more user friendly
- Add norecurse and fullpath options to cvs fetcher
- Fix exit code for build failures in --continue mode
- Fix git branch tags fetching
- Change parseConfigurationFile so it works on real data, not a copy
- Handle 'base' inherit and all other INHERITs from parseConfigurationFile
  instead of BBHandler
- Fix getVarFlags bug in data_smart
- Optimise cache handling by more quickly detecting an invalid cache, only
  saving the cache when it's changed, moving the cache validity check into
  the parsing loop and factoring some getVar calls outside a for loop
- Cooker: Remove a debug message from the parsing loop to lower overhead
- Convert build.py exec_task to use getVarFlags
- Update shell to use cooker.buildFile
- Add StampUpdate event
- Convert -b option to use taskdata/runqueue
- Remove digraph and switch to new stamp checking code. exec_task no longer
  honours dependencies
- Make fetcher timestamp updating non-fatal when permissions don't allow
  updates
- Add BB_SCHEDULER variable/option ("completion" or "speed") controlling
  the way bitbake schedules tasks
- Add BB_STAMP_POLICY variable/option ("perfile" or "full") controlling
  how extensively stamps are looked at for validity
- When handling build target failures make sure idepends are checked and
  failed where needed. Fixes --continue mode crashes.
- Fix -f (force) in conjunction with -b
- Fix problems with recrdeptask handling where some idepends weren't handled
  correctly.
- Handle exit codes correctly (from pH5)
- Work around refs/HEAD issues with git over http (#3410)
- Add proxy support to the CVS fetcher (from Cyril Chemparathy)
- Improve runfetchcmd so errors are seen and various GIT variables are exported
- Add ability to fetchers to check URL validity without downloading
- Improve runtime PREFERRED_PROVIDERS warning message
- Add BB_STAMP_WHITELIST option which contains a list of stamps to ignore when
  checking stamp dependencies and using a BB_STAMP_POLICY of "whitelist"
- No longer weight providers on the basis of a package being "already staged". This
  leads to builds being non-deterministic.
- Flush stdout/stderr before forking to fix duplicate console output
- Make sure recrdeps tasks include all inter-task dependencies of a given fn
- Add bb.runqueue.check_stamp_fn() for use by packaged-staging
- Add PERSISTENT_DIR to store the PersistData in a persistent
  directory != the cache dir.
- Add md5 and sha256 checksum generation functions to utils.py
- Correctly handle '-' characters in class names (#2958)
- Make sure expandKeys has been called on the data dictionary before running tasks
- Correctly add a task override in the form task-TASKNAME.
- Revert the '-' character fix in class names since it breaks things
- When a regexp fails to compile for PACKAGES_DYNAMIC, print a more useful error (#4444)
- Allow checkout of CVS by date and time: just add HHmm to the SRCDATE.
- Move prunedir function to utils.py and add explode_dep_versions function
- Raise an exception if SRCREV == 'INVALID'
- Fix hg fetcher username/password handling and fix crash
- Fix PACKAGES_DYNAMIC handling of packages with '++' in the name
- Rename __depends to __base_depends after configuration parsing so we don't
  recheck the validity of the config files time after time
- Add better environment variable handling. By default it will now only pass certain
  whitelisted variables into the data store. If BB_PRESERVE_ENV is set bitbake will use
  all variables from the environment. If BB_ENV_WHITELIST is set, that whitelist will be
  used instead of the internal bitbake one. Alternatively, BB_ENV_EXTRAWHITE can be used
  to extend the internal whitelist.
- Perforce fetcher fix to use commandline options instead of being overridden by the environment
- bb.utils.prunedir can cope with symlinks to directories without exceptions
- Use @rev when doing a svn checkout
- Add osc fetcher (from Joshua Lock in Poky)
- When SRCREV autorevisioning for a recipe is in use, don't cache the recipe
- Add tryaltconfigs option to control whether bitbake tries using alternative providers
  to fulfil failed dependencies. It defaults to off, changing the default since this
  behaviour confuses many users and isn't often useful.
- Improve lock file function error handling
- Add username handling to the git fetcher (Robert Bragg)
- Add support for HTTP_PROXY and HTTP_PROXY_IGNORE variables to the wget fetcher
- Export more variables to the fetcher commands to allow ssh checkouts and checkouts through
  proxies to work better. (from Poky)
- Also allow user and pswd options in SRC_URIs globally (from Poky)
- Improve proxy handling when using mirrors (from Poky)
- Add bb.utils.prune_suffix function
- Fix hg checkouts of specific revisions (from Poky)
- Fix wget fetching of urls with parameters specified (from Poky)
- Add username handling to git fetcher (from Poky)
- Set HOME environment variable when running fetcher commands (from Poky)
- Make sure allowed variables inherited from the environment are exported again (from Poky)
- When running a stage task in bbshell, run populate_staging, not the stage task (from Poky)
- Fix + character escaping from PACKAGES_DYNAMIC (thanks Otavio Salvador)
- Addition of BBCLASSEXTEND support for allowing one recipe to provide multiple targets (from Poky)

Changes in BitBake 1.7.3:

Changes in Bitbake 1.8.0:
- Release 1.7.x as a stable series

Changes in BitBake 1.7.x:
- Major updates of the dependency handling and execution
  of tasks. Code from bin/bitbake replaced with runqueue.py
  and taskdata.py
- New task execution code supports multithreading with a simplistic
  threading algorithm controlled by BB_NUMBER_THREADS
- Change of the SVN Fetcher to keep the checkout around
  courtesy of Paul Sokolovsky (#1367)
- PATH fix to bbimage (#1108)
- Allow debug domains to be specified on the commandline (-l)
- Allow 'interactive' tasks
- Logging message improvements
- Drop now unneeded BUILD_ALL_DEPS variable
- Add support for wildcards to -b option
- Major overhaul of the fetchers making a large amount of code common
  including mirroring code
- Fetchers now touch md5 stamps upon access (to show activity)
- Fix -f force option when used without -b (long standing bug)
- Add expand_cache to data_cache.py, caching expanded data (speedup)
- Allow version field in DEPENDS (ignored for now)
- Add abort flag support to the shell
- Make inherit fail if the class doesn't exist (#1478)
- Fix data.emit_env() to expand keynames as well as values
- Add ssh fetcher
- Add perforce fetcher
- Make PREFERRED_PROVIDER_foobar default to foobar if available
- Share the parser's mtime_cache, reducing the number of stat syscalls
- Compile all anonfuncs at once!
  *** Anonfuncs must now use common spacing format ***
- Memorise the list of handlers in __BBHANDLERS and tasks in __BBTASKS
  This removes 2 million function calls resulting in a 5-10% speedup
- Add manpage
- Update generateDotGraph to use taskData/runQueue improving accuracy
  and also adding a task dependency graph
- Fix/standardise on GPLv2 licence
- Move most functionality from bin/bitbake to cooker.py and split into
  separate functions
- CVS fetcher: Added support for non-default port
- Add BBINCLUDELOGS_LINES, the number of lines to read from any logfile
- Drop shebangs from lib/bb scripts

Changes in BitBake 1.7.1:
- Major updates of the dependency handling and execution
  of tasks
- Change of the SVN Fetcher to keep the checkout around
  courtesy of Paul Sokolovsky (#1367)

Changes in Bitbake 1.6.0:
- Better msg handling

bitbake/MANIFEST (new file, 45 lines)
@@ -0,0 +1,45 @@
AUTHORS
ChangeLog
MANIFEST
setup.py
bin/bitdoc
bin/bbimage
bin/bitbake
lib/bb/COW.py
lib/bb/__init__.py
lib/bb/build.py
lib/bb/cache.py
lib/bb/cooker.py
lib/bb/data.py
lib/bb/data_smart.py
lib/bb/event.py
lib/bb/manifest.py
lib/bb/methodpool.py
lib/bb/msg.py
lib/bb/providers.py
lib/bb/runqueue.py
lib/bb/shell.py
lib/bb/taskdata.py
lib/bb/utils.py
lib/bb/fetch/cvs.py
lib/bb/fetch/git.py
lib/bb/fetch/__init__.py
lib/bb/fetch/local.py
lib/bb/fetch/perforce.py
lib/bb/fetch/ssh.py
lib/bb/fetch/svk.py
lib/bb/fetch/svn.py
lib/bb/fetch/wget.py
lib/bb/parse/__init__.py
lib/bb/parse/parse_py/BBHandler.py
lib/bb/parse/parse_py/ConfHandler.py
lib/bb/parse/parse_py/__init__.py
doc/COPYING.GPL
doc/COPYING.MIT
doc/manual/html.css
doc/manual/Makefile
doc/manual/usermanual.xml
contrib/bbdev.sh
contrib/vim/syntax/bitbake.vim
conf/bitbake.conf
classes/base.bbclass

bitbake/bin/bbimage (new executable file, 155 lines)
@@ -0,0 +1,155 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2003 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import sys, os
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb
from bb import *

__version__ = 1.1
type = "jffs2"
cfg_bb = data.init()
cfg_oespawn = data.init()

bb.msg.set_debug_level(0)

def usage():
    print "Usage: bbimage [options ...]"
    print "Creates an image for a target device from a root filesystem,"
    print "obeying configuration parameters from the BitBake"
    print "configuration files, thereby easing handling of deviceisms."
    print ""
    print " %s\t\t%s" % ("-r [arg], --root [arg]", "root directory (default=${IMAGE_ROOTFS})")
    print " %s\t\t%s" % ("-t [arg], --type [arg]", "image type (jffs2[default], cramfs)")
    print " %s\t\t%s" % ("-n [arg], --name [arg]", "image name (override IMAGE_NAME variable)")
    print " %s\t\t%s" % ("-v, --version", "output version information and exit")
    sys.exit(0)

def version():
    print "BitBake Build Tool Core version %s" % bb.__version__
    print "BBImage version %s" % __version__

def emit_bb(d, base_d = {}):
    for v in d.keys():
        if d[v] != base_d[v]:
            data.emit_var(v, d)

def getopthash(l):
    h = {}
    for (opt, val) in l:
        h[opt] = val
    return h

import getopt
try:
    (opts, args) = getopt.getopt(sys.argv[1:], 'vr:t:e:n:', [ 'version', 'root=', 'type=', 'bbfile=', 'name=' ])
except getopt.GetoptError:
    usage()

# handle opts
opthash = getopthash(opts)

if '--version' in opthash or '-v' in opthash:
    version()
    sys.exit(0)

try:
    cfg_bb = parse.handle(os.path.join('conf', 'bitbake.conf'), cfg_bb)
except IOError:
    fatal("Unable to open bitbake.conf")

# sanity check
if cfg_bb is None:
    fatal("Unable to open/parse %s" % os.path.join('conf', 'bitbake.conf'))
    usage(1)

rootfs = None
extra_files = []

if '--root' in opthash:
    rootfs = opthash['--root']
if '-r' in opthash:
    rootfs = opthash['-r']

if '--type' in opthash:
    type = opthash['--type']
if '-t' in opthash:
    type = opthash['-t']

if '--bbfile' in opthash:
    extra_files.append(opthash['--bbfile'])
if '-e' in opthash:
    extra_files.append(opthash['-e'])

for f in extra_files:
    try:
        cfg_bb = parse.handle(f, cfg_bb)
    except IOError:
        print "unable to open %s" % f

if not rootfs:
    rootfs = data.getVar('IMAGE_ROOTFS', cfg_bb, 1)

if not rootfs:
    bb.fatal("IMAGE_ROOTFS not defined")

data.setVar('IMAGE_ROOTFS', rootfs, cfg_bb)

from copy import copy, deepcopy
localdata = data.createCopy(cfg_bb)

overrides = data.getVar('OVERRIDES', localdata)
if not overrides:
    bb.fatal("OVERRIDES not defined.")
data.setVar('OVERRIDES', '%s:%s' % (overrides, type), localdata)
data.update_data(localdata)
data.setVar('OVERRIDES', overrides, localdata)

if '-n' in opthash:
    data.setVar('IMAGE_NAME', opthash['-n'], localdata)
if '--name' in opthash:
    data.setVar('IMAGE_NAME', opthash['--name'], localdata)

topdir = data.getVar('TOPDIR', localdata, 1) or os.getcwd()

cmd = data.getVar('IMAGE_CMD', localdata, 1)
if not cmd:
    bb.fatal("IMAGE_CMD not defined")

outdir = data.getVar('DEPLOY_DIR_IMAGE', localdata, 1)
if not outdir:
    bb.fatal('DEPLOY_DIR_IMAGE not defined')
mkdirhier(outdir)

#depends = data.getVar('IMAGE_DEPENDS', localdata, 1) or ""
#if depends:
#    bb.note("Spawning bbmake to satisfy dependencies: %s" % depends)
#    ret = os.system('bbmake %s' % depends)
#    if ret != 0:
#        bb.error("executing bbmake to satisfy dependencies")

bb.note("Executing %s" % cmd)
data.setVar('image_cmd', cmd, localdata)
data.setVarFlag('image_cmd', 'func', 1, localdata)
try:
    bb.build.exec_func('image_cmd', localdata)
except bb.build.FuncFailed:
    sys.exit(1)

#ret = os.system(cmd)
#sys.exit(ret)
bitbake/bin/bitbake

@@ -22,20 +22,12 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import sys, os, getopt, re, time, optparse, xmlrpclib
import sys, os, getopt, re, time, optparse
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb
from bb import cooker
from bb import ui
from bb import server
from bb.server import none
#from bb.server import xmlrpc

__version__ = "1.9.0"

if sys.hexversion < 0x020500F0:
    print "Sorry, python 2.5 or later is required for this version of bitbake"
    sys.exit(1)
__version__ = "1.7.4"

#============================================================================#
# BBOptions

@@ -49,33 +41,16 @@ class BBConfiguration( object ):
            setattr( self, key, val )


def print_exception(exc, value, tb):
    """
    Print the exception to stderr, only showing the traceback if bitbake
    debugging is enabled.
    """
    if not bb.msg.debug_level['default']:
        tb = None

    sys.__excepthook__(exc, value, tb)


#============================================================================#
# main
#============================================================================#

def main():
    return_value = 0
    pythonver = sys.version_info
    if pythonver[0] < 2 or (pythonver[0] == 2 and pythonver[1] < 5):
        print "Sorry, bitbake needs python 2.5 or later."
        sys.exit(1)

    parser = optparse.OptionParser( version = "BitBake Build Tool Core version %s, %%prog version %s" % ( bb.__version__, __version__ ),
        usage = """%prog [options] [package ...]

Executes the specified task (default is 'build') for a given set of BitBake files.
It expects that BBFILES is defined, which is a space separated list of files to
It expects that BBFILES is defined, which is a space seperated list of files to
be executed. BBFILES does support wildcards.
Default BBFILES are the .bb files in the current directory.""" )

@@ -85,9 +60,6 @@ Default BBFILES are the .bb files in the current directory.""" )
    parser.add_option( "-k", "--continue", help = "continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other dependencies of these targets can be processed all the same.",
        action = "store_false", dest = "abort", default = True )

    parser.add_option( "-a", "--tryaltconfigs", help = "continue with builds by trying to use alternative providers where possible.",
        action = "store_true", dest = "tryaltconfigs", default = False )

    parser.add_option( "-f", "--force", help = "force run of specified cmd, regardless of stamp status",
        action = "store_true", dest = "force", default = False )

@@ -124,20 +96,12 @@ Default BBFILES are the .bb files in the current directory.""" )
    parser.add_option( "-g", "--graphviz", help = "emit the dependency trees of the specified packages in the dot syntax",
        action = "store_true", dest = "dot_graph", default = False )

    parser.add_option( "-I", "--ignore-deps", help = """Assume these dependencies don't exist and are already provided (equivalent to ASSUME_PROVIDED). Useful to make dependency graphs more appealing""",
        action = "append", dest = "extra_assume_provided", default = [] )
    parser.add_option( "-I", "--ignore-deps", help = """Stop processing at the given list of dependencies when generating dependency graphs. This can help to make the graph more appealing""",
        action = "append", dest = "ignored_dot_deps", default = [] )

    parser.add_option( "-l", "--log-domains", help = """Show debug logging for the specified logging domains""",
        action = "append", dest = "debug_domains", default = [] )

    parser.add_option( "-P", "--profile", help = "profile the command and print a report",
        action = "store_true", dest = "profile", default = False )

    parser.add_option( "-u", "--ui", help = "userinterface to use",
        action = "store", dest = "ui")

    parser.add_option( "", "--revisions-changed", help = "Set the exit code depending on whether upstream floating revisions have changed or not",
        action = "store_true", dest = "revisions_changed", default = False )

    options, args = parser.parse_args(sys.argv)

@@ -145,53 +109,15 @@ Default BBFILES are the .bb files in the current directory.""" )
    configuration.pkgs_to_build = []
    configuration.pkgs_to_build.extend(args[1:])

    #server = bb.server.xmlrpc
    server = bb.server.none

    # Save a logfile for cooker into the current working directory. When the
    # server is daemonized this logfile will be truncated.
    cooker_logfile = os.path.join(os.getcwd(), "cooker.log")

    cooker = bb.cooker.BBCooker(configuration, server)

    # Clear away any spurious environment variables. But don't wipe the
    # environment totally. This is necessary to ensure the correct operation
    # of the UIs (e.g. for DISPLAY, etc.)
    bb.utils.clean_environment()

    cooker.parseCommandLine()

    serverinfo = server.BitbakeServerInfo(cooker.server)

    server.BitBakeServerFork(serverinfo, cooker.serve, cooker_logfile)
    del cooker

    sys.excepthook = print_exception

    # Setup a connection to the server (cooker)
    serverConnection = server.BitBakeServerConnection(serverinfo)

    # Launch the UI
    if configuration.ui:
        ui = configuration.ui
    else:
        ui = "knotty"

    try:
        # Dynamically load the UI based on the ui name. Although we
        # suggest a fixed set this allows you to have flexibility in which
        # ones are available.
        exec "from bb.ui import " + ui
        exec "return_value = " + ui + ".init(serverConnection.connection, serverConnection.events)"
    except ImportError:
        print "FATAL: Invalid user interface '%s' specified. " % ui
        print "Valid interfaces are 'ncurses', 'depexp' or the default, 'knotty'."
    except Exception, e:
        print "FATAL: Unable to start to '%s' UI due to exception: %s." % (configuration.ui, e)
    finally:
        serverConnection.terminate()
    return return_value
    bb.cooker.BBCooker().cook(configuration)

if __name__ == "__main__":
    ret = main()
    sys.exit(ret)
    main()
    sys.exit(0)
    import profile
    profile.run('main()', "profile.log")
    import pstats
    p = pstats.Stats('profile.log')
    p.sort_stats('time')
    p.print_stats()
    p.print_callers()

bitbake/bin/bitdoc

@@ -453,8 +453,6 @@ def main():
    except bb.parse.ParseError:
        bb.fatal( "Unable to parse %s" % config_file )

    if isinstance(documentation, dict):
        documentation = documentation[""]

    # Assuming we've the file loaded now, we will initialize the 'tree'
    doc = Documentation()

bitbake/classes/base.bbclass (new file, 79 lines)
@@ -0,0 +1,79 @@
# Copyright (C) 2003 Chris Larson
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.

die() {
	bbfatal "$*"
}

bbnote() {
	echo "NOTE:" "$*"
}

bbwarn() {
	echo "WARNING:" "$*"
}

bbfatal() {
	echo "FATAL:" "$*"
	exit 1
}

bbdebug() {
	test $# -ge 2 || {
		echo "Usage: bbdebug level \"message\""
		exit 1
	}

	test ${@bb.msg.debug_level} -ge $1 && {
		shift
		echo "DEBUG:" $*
	}
}

addtask showdata
do_showdata[nostamp] = "1"
python do_showdata() {
	import sys
	# emit variables and shell functions
	bb.data.emit_env(sys.__stdout__, d, True)
	# emit the metadata which isnt valid shell
	for e in bb.data.keys(d):
		if bb.data.getVarFlag(e, 'python', d):
			sys.__stdout__.write("\npython %s () {\n%s}\n" % (e, bb.data.getVar(e, d, 1)))
}

addtask listtasks
do_listtasks[nostamp] = "1"
python do_listtasks() {
	import sys
	for e in bb.data.keys(d):
		if bb.data.getVarFlag(e, 'task', d):
			sys.__stdout__.write("%s\n" % e)
}

addtask build
do_build[dirs] = "${TOPDIR}"
do_build[nostamp] = "1"
python base_do_build () {
	bb.note("The included, default BB base.bbclass does not define a useful default task.")
	bb.note("Try running the 'listtasks' task against a .bb to see what tasks are defined.")
}

EXPORT_FUNCTIONS do_clean do_mrproper do_build
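
As an aside, a minimal sketch of how a recipe would use the addtask and
[nostamp] mechanics seen in this class (a hypothetical fragment for
illustration, not part of this commit; the task name is invented):

  addtask printworkdir
  do_printworkdir[nostamp] = "1"
  python do_printworkdir() {
  	# nostamp means the task runs every time rather than being stamped as done
  	bb.note("WORKDIR is %s" % bb.data.getVar('WORKDIR', d, 1))
  }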

bitbake/conf/bitbake.conf (new file, 58 lines)
@@ -0,0 +1,58 @@
# Copyright (C) 2003 Chris Larson
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.

B = "${S}"
CVSDIR = "${DL_DIR}/cvs"
DEPENDS = ""
DEPLOY_DIR = "${TMPDIR}/deploy"
DEPLOY_DIR_IMAGE = "${DEPLOY_DIR}/images"
DL_DIR = "${TMPDIR}/downloads"
FETCHCOMMAND = ""
FETCHCOMMAND_cvs = "/usr/bin/env cvs -d${CVSROOT} co ${CVSCOOPTS} ${CVSMODULE}"
FETCHCOMMAND_svn = "/usr/bin/env svn co ${SVNCOOPTS} ${SVNROOT} ${SVNMODULE}"
FETCHCOMMAND_wget = "/usr/bin/env wget -t 5 --passive-ftp -P ${DL_DIR} ${URI}"
FILESDIR = "${@bb.which(bb.data.getVar('FILESPATH', d, 1), '.')}"
FILESPATH = "${FILE_DIRNAME}/${PF}:${FILE_DIRNAME}/${P}:${FILE_DIRNAME}/${PN}:${FILE_DIRNAME}/files:${FILE_DIRNAME}"
FILE_DIRNAME = "${@os.path.dirname(bb.data.getVar('FILE', d))}"
GITDIR = "${DL_DIR}/git"
IMAGE_CMD = "_NO_DEFINED_IMAGE_TYPES_"
IMAGE_ROOTFS = "${TMPDIR}/rootfs"
MKTEMPCMD = "mktemp -q ${TMPBASE}"
MKTEMPDIRCMD = "mktemp -d -q ${TMPBASE}"
OVERRIDES = "local:${MACHINE}:${TARGET_OS}:${TARGET_ARCH}"
P = "${PN}-${PV}"
PF = "${PN}-${PV}-${PR}"
PN = "${@bb.parse.BBHandler.vars_from_file(bb.data.getVar('FILE',d),d)[0] or 'defaultpkgname'}"
PR = "${@bb.parse.BBHandler.vars_from_file(bb.data.getVar('FILE',d),d)[2] or 'r0'}"
PROVIDES = ""
PV = "${@bb.parse.BBHandler.vars_from_file(bb.data.getVar('FILE',d),d)[1] or '1.0'}"
RESUMECOMMAND = ""
RESUMECOMMAND_wget = "/usr/bin/env wget -c -t 5 --passive-ftp -P ${DL_DIR} ${URI}"
S = "${WORKDIR}/${P}"
SRC_URI = "file://${FILE}"
STAMP = "${TMPDIR}/stamps/${PF}"
SVNDIR = "${DL_DIR}/svn"
T = "${WORKDIR}/temp"
TARGET_ARCH = "${BUILD_ARCH}"
TMPDIR = "${TOPDIR}/tmp"
UPDATECOMMAND = ""
UPDATECOMMAND_cvs = "/usr/bin/env cvs -d${CVSROOT} update ${CVSCOOPTS}"
UPDATECOMMAND_svn = "/usr/bin/env svn update ${SVNCOOPTS}"
WORKDIR = "${TMPDIR}/work/${PF}"
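
# Worked example (illustrative, not part of the file): for a hypothetical
# recipe foo_1.0.bb, the defaults above expand roughly as
#   PN = "foo", PV = "1.0", PR = "r0"        (derived from the file name)
#   P  = "foo-1.0", PF = "foo-1.0-r0"
#   WORKDIR = "${TOPDIR}/tmp/work/foo-1.0-r0"
#   STAMP   = "${TOPDIR}/tmp/stamps/foo-1.0-r0"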

@@ -16,17 +16,12 @@ endif

syn case match

" Catch incorrect syntax (only matches if nothing else does)
"
syn match bbUnmatched "."

syn include @python syntax/python.vim
if exists("b:current_syntax")
  unlet b:current_syntax
endif

" Other

syn match bbComment "^#.*$" display contains=bbTodo

@@ -39,25 +34,21 @@ syn match bbArrayBrackets "[\[\]]" contained

" BitBake strings

syn match bbContinue "\\$"
syn region bbString matchgroup=bbQuote start=/"/ skip=/\\$/ excludenl end=/"/ contained keepend contains=bbTodo,bbContinue,bbVarInlinePy,bbVarDeref
syn region bbString matchgroup=bbQuote start=/'/ skip=/\\$/ excludenl end=/'/ contained keepend contains=bbTodo,bbContinue,bbVarInlinePy,bbVarDeref
syn region bbString matchgroup=bbQuote start=/"/ skip=/\\$/ excludenl end=/"/ contained keepend contains=bbTodo,bbContinue,bbVarDeref
syn region bbString matchgroup=bbQuote start=/'/ skip=/\\$/ excludenl end=/'/ contained keepend contains=bbTodo,bbContinue,bbVarDeref

" BitBake variable metadata

syn match bbVarBraces "[\${}]"
syn region bbVarDeref matchgroup=bbVarBraces start="${" end="}" contained
" syn region bbVarDeref start="${" end="}" contained
" syn region bbVarInlinePy start="${@" end="}" contained contains=@python
syn region bbVarInlinePy matchgroup=bbVarBraces start="${@" end="}" contained contains=@python

syn keyword bbExportFlag export contained nextgroup=bbIdentifier skipwhite
" syn match bbVarDeref "${[a-zA-Z0-9\-_\.]\+}" contained
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.]\+\(_[${}a-zA/-Z0-9\-_\.]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq
syn match bbVarDeref "${[a-zA-Z0-9\-_\.]\+}" contained
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.]\+\(_[${}a-zA-Z0-9\-_\.]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq

syn match bbIdentifier "[a-zA-Z0-9\-_\./]\+" display contained
syn match bbIdentifier "[a-zA-Z0-9\-_\.]\+" display contained
"syn keyword bbVarEq = display contained nextgroup=bbVarValue
syn match bbVarEq "\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)" contained nextgroup=bbVarValue
syn match bbVarValue ".*$" contained contains=bbString
syn match bbVarValue ".*$" contained contains=bbString,bbVarDeref

" BitBake variable metadata flags
syn match bbVarFlagDef "^\([a-zA-Z0-9\-_\.]\+\)\(\[[a-zA-Z0-9\-_\.]\+\]\)\@=" contains=bbIdentifier nextgroup=bbVarFlagFlag

@@ -70,6 +61,10 @@ syn match bbFunction "\h\w*" display contained

" BitBake python metadata
syn include @python syntax/python.vim
if exists("b:current_syntax")
  unlet b:current_syntax
endif

syn keyword bbPythonFlag python contained nextgroup=bbFunction
syn match bbPythonFuncDef "^\(python\s\+\)\(\w\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbPythonFlag,bbFunction,bbDelimiter nextgroup=bbPythonFuncRegion skipwhite

@@ -103,6 +98,7 @@ syn match bbStatementRest ".*$" contained contains=bbString,bbVarDeref
"
hi def link bbArrayBrackets Statement
hi def link bbUnmatched Error
hi def link bbVarDeref String
hi def link bbContinue Special
hi def link bbDef Statement
hi def link bbPythonFlag Type

@@ -120,8 +116,5 @@ hi def link bbIdentifier Identifier
hi def link bbVarEq Operator
hi def link bbQuote String
hi def link bbVarValue String
" hi def link bbVarInlinePy PreProc
hi def link bbVarDeref PreProc
hi def link bbVarBraces PreProc

let b:current_syntax = "bb"

@@ -32,7 +32,7 @@ command.
\fBbitbake\fP is a program that executes the specified task (default is 'build')
for a given set of BitBake files.
.br
It expects that BBFILES is defined, which is a space separated list of files to
It expects that BBFILES is defined, which is a space seperated list of files to
be executed. BBFILES does support wildcards.
.br
Default BBFILES are the .bb files in the current directory.
@@ -54,9 +54,6 @@ continue as much as possible after an error. While the target that failed, and
those that depend on it, cannot be remade, the other dependencies of these
targets can be processed all the same.
.TP
.B \-a, \-\-tryaltconfigs
continue with builds by trying to use alternative providers where possible.
.TP
.B \-f, \-\-force
force run of specified cmd, regardless of stamp status
.TP
@@ -67,7 +64,7 @@ drop into the interactive mode also called the BitBake shell.
Specify task to execute. Note that this only executes the specified task for
the providee and the packages it depends on, i.e. 'compile' does not implicitly
call stage for the dependencies (IOW: use only if you know what you are doing).
Depending on the base.bbclass a listtasks task is defined and will show
Depending on the base.bbclass a listtaks tasks is defined and will show
available tasks.
.TP
.B \-rFILE, \-\-read=FILE
@@ -100,13 +97,12 @@ emit the dependency trees of the specified packages in the dot syntax
.B \-IIGNORED\_DOT\_DEPS, \-\-ignore-deps=IGNORED_DOT_DEPS
Stop processing at the given list of dependencies when generating dependency
graphs. This can help to make the graph more appealing
.TP
.B \-lDEBUG_DOMAINS, \-\-log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
.TP
.B \-P, \-\-profile
profile the command and print a report
.TP
.\"
.\" Next option is only in BitBake 1.7.x (trunk)
.\"
.\".TP
.\".B \-lDEBUG_DOMAINS, \-\-log-domains=DEBUG_DOMAINS
.\"Show debug logging for the specified logging domains

.SH AUTHORS
BitBake was written by

@@ -88,17 +88,6 @@ share common metadata between many packages.</para></listitem>
<varname>B</varname> = "pre${A}post"</screen></para>
<para>This results in <varname>A</varname> containing <literal>aval</literal> and <varname>B</varname> containing <literal>preavalpost</literal>.</para>
</section>
<section>
<title>Setting a default value (?=)</title>
<para><screen><varname>A</varname> ?= "aval"</screen></para>
<para>If <varname>A</varname> is set before the above is called, it will retain its previous value. If <varname>A</varname> is unset prior to the above call, <varname>A</varname> will be set to <literal>aval</literal>. Note that this assignment is immediate, so if there are multiple ?= assignments to a single variable, the first of those will be used.</para>
</section>
<section>
<title>Setting a default value (??=)</title>
<para><screen><varname>A</varname> ??= "somevalue"</screen></para>
<para><screen><varname>A</varname> ??= "someothervalue"</screen></para>
<para>If <varname>A</varname> is set before the above, it will retain that value. If <varname>A</varname> is unset prior to the above, <varname>A</varname> will be set to <literal>someothervalue</literal>. This is a lazy version of ?=, in that the assignment does not occur until the end of the parsing process, so that the last, rather than the first, ??= assignment to a given variable will be used.</para>
</section>
<section>
<title>Immediate variable expansion (:=)</title>
<para>:= results in a variable's contents being expanded immediately, rather than when the variable is actually used.</para>
@@ -130,7 +119,7 @@ will be introduced.</para>
</section>
<section>
<title>Conditional metadata set</title>
<para>OVERRIDES is a <quote>:</quote> separated variable containing each item you want to satisfy conditions. So, if you have a variable which is conditional on <quote>arm</quote>, and <quote>arm</quote> is in OVERRIDES, then the <quote>arm</quote> specific version of the variable is used rather than the non-conditional version. Example:</para>
<para>OVERRIDES is a <quote>:</quote> seperated variable containing each item you want to satisfy conditions. So, if you have a variable which is conditional on <quote>arm</quote>, and <quote>arm</quote> is in OVERRIDES, then the <quote>arm</quote> specific version of the variable is used rather than the non-conditional version. Example:</para>
<para><screen><varname>OVERRIDES</varname> = "architecture:os:machine"
<varname>TEST</varname> = "defaultvalue"
<varname>TEST_os</varname> = "osspecificvalue"
@@ -186,16 +175,10 @@ include</literal> directive.</para>
<varname>DEPENDS</varname> = "${@get_depends(bb, d)}"</screen></para>
<para>This would result in <varname>DEPENDS</varname> containing <literal>dependencywithcond</literal>.</para>
</section>
<section>
<title>Variable Flags</title>
<para>Variables can have associated flags which provide a way of tagging extra information onto a variable. Several flags are used internally by bitbake but they can be used externally too if needed. The standard operations mentioned above also work on flags.</para>
<para><screen><varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] = "value"</screen></para>
<para>In this example, <varname>VARIABLE</varname> has a flag, <varname>SOMEFLAG</varname>, which is set to <literal>value</literal>.</para>
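<para>As an illustrative sketch (the task name here is hypothetical), the same flag syntax is what the task flags described later in this manual use, for example:</para>
<para><screen>do_sometask[nostamp] = "1"</screen></para>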
</section>
<section>
<title>Inheritance</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>The <literal>inherit</literal> directive is a means of specifying what classes of functionality your .bb requires. It is a rudimentary form of inheritance. For example, you can easily abstract out the tasks involved in building a package that uses autoconf and automake, and put that into a bbclass for your packages to make use of. A given bbclass is located by searching for classes/filename.oeclass in <envar>BBPATH</envar>, where filename is what you inherited.</para>
<para>The <literal>inherit</literal> directive is a means of specifying what classes of functionality your .bb requires. It is a rudamentary form of inheritence. For example, you can easily abstract out the tasks involved in building a package that uses autoconf and automake, and put that into a bbclass for your packages to make use of. A given bbclass is located by searching for classes/filename.oeclass in <envar>BBPATH</envar>, where filename is what you inherited.</para>
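<para>A minimal sketch (assuming a classes/autotools.bbclass exists somewhere in <envar>BBPATH</envar>):</para>
<para><screen>inherit autotools</screen></para>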
</section>
<section>
<title>Tasks</title>
@@ -228,71 +211,21 @@ This event handler gets called every time an event is triggered. A global variab
method one can get the name of the triggered event.</para><para>The above event handler prints the name
of the event and the content of the <varname>FILE</varname> variable.</para>
</section>
<section>
<title>Variants</title>
<para>Two Bitbake features exist to facilitate the creation of multiple buildable incarnations from a single recipe file.</para>
<para>The first is <varname>BBCLASSEXTEND</varname>. This variable is a space separated list of classes to utilize to "extend" the recipe for each variant. As an example, setting <screen>BBCLASSEXTEND = "native"</screen> results in a second incarnation of the current recipe being available. This second incarnation will have the "native" class inherited.</para>
<para>The second feature is <varname>BBVERSIONS</varname>. This variable allows a single recipe to be able to build multiple versions of a project from a single recipe file, and allows you to specify conditional metadata (using the <varname>OVERRIDES</varname> mechanism) for a single version, or an optionally named range of versions:</para>
<para><screen>BBVERSIONS = "1.0 2.0 git"
SRC_URI_git = "git://someurl/somepath.git"</screen></para>
<para><screen>BBVERSIONS = "1.0.[0-6]:1.0.0+ \
              1.0.[7-9]:1.0.7+"
SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;patch=1"</screen></para>
<para>Note that the name of the range will default to the original version of the recipe, so given OE, a recipe file of foo_1.0.0+.bb will default the name of its versions to 1.0.0+. This is useful, as the range name is not only placed into overrides, it's also made available for the metadata to use in the form of the <varname>BPV</varname> variable, for use in file:// search paths (<varname>FILESPATH</varname>).</para>
</section>
</section>
<section>
<title>Dependency Handling</title>
<para>Bitbake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifying task dependencies is therefore needed.</para>
<section>
<title>Dependencies internal to the .bb file</title>
<para>Where the dependencies are internal to a given .bb file, the dependencies are handled by the previously detailed addtask directive.</para>
</section>

<section>
<title>DEPENDS</title>
<para>DEPENDS is taken to specify build time dependencies. The 'deptask' flag for tasks is used to signify the task of each DEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_configure[deptask] = "do_populate_staging"</screen></para>
<para>means the do_populate_staging task of each item in DEPENDS must have completed before do_configure can execute.</para>
</section>
<section>
<title>RDEPENDS</title>
<para>RDEPENDS is taken to specify runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each RDEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_package_write[rdeptask] = "do_package"</screen></para>
<para>means the do_package task of each item in RDEPENDS must have completed before do_package_write can execute.</para>
</section>
<section>
<title>Recursive DEPENDS</title>
<para>These are specified with the 'recdeptask' flag, which is used to signify the task(s) of each DEPENDS which must have completed before that task can be executed. It applies recursively, so the DEPENDS of each item in the original DEPENDS must also be met, and so on.</para>
</section>
<section>
<title>Recursive RDEPENDS</title>
<para>These are specified with the 'recrdeptask' flag, which is used to signify the task(s) of each RDEPENDS which must have completed before that task can be executed. It applies recursively, so the RDEPENDS of each item in the original RDEPENDS must also be met, and so on. It also runs all DEPENDS first.</para>
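<para>A hypothetical sketch of both recursive forms (the task names are illustrative only, mirroring the non-recursive examples above):</para>
<para><screen>do_sometask[recdeptask] = "do_populate_staging"
do_someothertask[recrdeptask] = "do_package"</screen></para>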
</section>
<section>
<title>Inter Task</title>
<para>The 'depends' flag for tasks is a more generic form which allows an interdependency on specific tasks rather than specifying the data in DEPENDS or RDEPENDS.</para>
<para><screen>do_patch[depends] = "quilt-native:do_populate_staging"</screen></para>
<para>means the do_populate_staging task of the target quilt-native must have completed before the do_patch task can execute.</para>
</section>
</section>

<section>
<title>Parsing</title>
<section>
<title>Configuration Files</title>
<para>The first of the classifications of metadata in BitBake is configuration metadata. This metadata is global, and therefore affects <emphasis>all</emphasis> packages and tasks which are executed.</para>
<para>Bitbake will first search the current working directory for an optional "conf/bblayers.conf" configuration file. This file is expected to contain a BBLAYERS variable which is a space delimited list of 'layer' directories. For each directory in this list a "conf/layer.conf" file will be searched for and parsed with the LAYERDIR variable being set to the directory where the layer was found. The idea is these files will setup BBPATH and other variables correctly for a given build directory automatically for the user.</para>
<para>Bitbake will then expect to find 'conf/bitbake.conf' somewhere in the user specified <envar>BBPATH</envar>. That configuration file generally has include directives to pull in any other metadata (generally files specific to architecture, machine, <emphasis>local</emphasis> and so on).</para>
<para>The first of the classifications of metadata in BitBake is configuration metadata. This metadata is global, and therefore affects <emphasis>all</emphasis> packages and tasks which are executed. Currently, BitBake has hardcoded knowledge of a single configuration file. It expects to find 'conf/bitbake.conf' somewhere in the user specified <envar>BBPATH</envar>. That configuration file generally has include directives to pull in any other metadata (generally files specific to architecture, machine, <emphasis>local</emphasis> and so on).</para>
<para>Only variable definitions and include directives are allowed in .conf files.</para>
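<para>A minimal sketch of the layer mechanism described above (all paths are placeholders): a build directory's "conf/bblayers.conf" might contain</para>
<para><screen><varname>BBLAYERS</varname> = "/path/to/layer1 /path/to/layer2"</screen></para>
<para>and each layer's "conf/layer.conf" can then extend the search paths via <varname>LAYERDIR</varname>, for example</para>
<para><screen><varname>BBPATH</varname> := "${BBPATH}:${LAYERDIR}"
<varname>BBFILES</varname> := "${BBFILES} ${LAYERDIR}/*.bb"</screen></para>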
</section>
<section>
<title>Classes</title>
<para>BitBake classes are our rudimentary inheritance mechanism. As briefly mentioned in the metadata introduction, they're parsed when an <literal>inherit</literal> directive is encountered, and they are located in classes/ relative to the dirs in <envar>BBPATH</envar>.</para>
<para>BitBake classes are our rudamentary inheritence mechanism. As briefly mentioned in the metadata introduction, they're parsed when an <literal>inherit</literal> directive is encountered, and they are located in classes/ relative to the dirs in <envar>BBPATH</envar>.</para>
</section>
<section>
<title>.bb Files</title>
<para>A BitBake (.bb) file is a logical unit of tasks to be executed. Normally this is a package to be built. Inter-.bb dependencies are obeyed. The files themselves are located via the <varname>BBFILES</varname> variable, which is set to a space separated list of .bb files, and does handle wildcards.</para>
<para>A BitBake (.bb) file is a logical unit of tasks to be executed. Normally this is a package to be built. Inter-.bb dependencies are obeyed. The files themselves are located via the <varname>BBFILES</varname> variable, which is set to a space seperated list of .bb files, and does handle wildcards.</para>
</section>
</section>
</chapter>
@@ -377,7 +310,15 @@ will be tried first when fetching a file if that fails the actual file will be t

<chapter>
<title>The bitbake command</title>
<title>Commands</title>
<section>
<title>bbread</title>
<para>bbread is a command for displaying BitBake metadata. When run with no arguments, it has the core parse 'conf/bitbake.conf', as located in BBPATH, and displays that. If you supply a file on the commandline, such as a .bb, then it parses that afterwards, using the aforementioned configuration metadata.</para>
<para><emphasis>NOTE: the standalone bbread command was removed. Instead of bbread use bitbake -e.
</emphasis></para>
</section>
<section>
<title>bitbake</title>
<section>
<title>Introduction</title>
<para>bitbake is the primary command in the system. It facilitates executing tasks in a single .bb file, or executing a given task on a set of multiple .bb files, accounting for interdependencies amongst them.</para>
@@ -389,7 +330,7 @@ will be tried first when fetching a file if that fails the actual file will be t
usage: bitbake [options] [package ...]

Executes the specified task (default is 'build') for a given set of BitBake files.
It expects that BBFILES is defined, which is a space separated list of files to
It expects that BBFILES is defined, which is a space seperated list of files to
be executed. BBFILES does support wildcards.
Default BBFILES are the .bb files in the current directory.

@@ -411,7 +352,7 @@ options:
                        it depends on, i.e. 'compile' does not implicitly call
                        stage for the dependencies (IOW: use only if you know
                        what you are doing). Depending on the base.bbclass a
                        listtasks task is defined and will show available
                        listtasks tasks is defined and will show available
                        tasks
  -r FILE, --read=FILE  read the specified file before bitbake.conf
  -v, --verbose         output more chit-chat to the terminal
@@ -430,10 +371,6 @@ options:
                        Stop processing at the given list of dependencies when
                        generating dependency graphs. This can help to make
                        the graph more appealing
  -l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
                        Show debug logging for the specified logging domains
  -P, --profile         profile the command and print a report

</screen>
</para>
@@ -464,28 +401,20 @@ options:
<title>Generating dependency graphs</title>
<para>BitBake is able to generate dependency graphs using the dot syntax. These graphs can be converted
to images using the <application>dot</application> application from <ulink url="http://www.graphviz.org">graphviz</ulink>.
Two files will be written into the current working directory, <emphasis>depends.dot</emphasis> containing dependency information at the package level and <emphasis>task-depends.dot</emphasis> containing a breakdown of the dependencies at the task level. To stop depending on common depends one can use the <prompt>-I depend</prompt> to omit these from the graph. This can lead to more readable graphs. E.g. this way <varname>DEPENDS</varname> from inherited classes, e.g. base.bbclass, can be removed from the graph.</para>
Three files will be written into the current working directory, <emphasis>depends.dot</emphasis> containing <varname>DEPENDS</varname> variables, <emphasis>rdepends.dot</emphasis> and <emphasis>alldepends.dot</emphasis> containing both <varname>DEPENDS</varname> and <varname>RDEPENDS</varname>. To stop depending on common depends one can use the <prompt>-I depend</prompt> to omit these from the graph. This can lead to more readable graphs. E.g. this way <varname>DEPENDS</varname> from inherited classes, e.g. base.bbclass, can be removed from the graph.</para>
<screen><prompt>$ </prompt>bitbake -g blah</screen>
<screen><prompt>$ </prompt>bitbake -g -I virtual/whatever -I bloom blah</screen>
</example>
</para>
</section>
<section>
<title>Special variables</title>
<para>Certain variables affect bitbake operation:</para>
<section>
<title><varname>BB_NUMBER_THREADS</varname></title>
<para>The number of threads bitbake should run at once (default: 1).</para>
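<para>For example, to allow up to four tasks to run in parallel one might set (illustrative value):</para>
<para><screen><varname>BB_NUMBER_THREADS</varname> = "4"</screen></para>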
</section>
</section>
<section>
<title>Metadata</title>
<para>As you may have seen in the usage information, or in the information about .bb files, the BBFILES variable is how the bitbake tool locates its files. This variable is a space separated list of files that are available, and supports wildcards.
<para>As you may have seen in the usage information, or in the information about .bb files, the BBFILES variable is how the bitbake tool locates its files. This variable is a space seperated list of files that are available, and supports wildcards.
<example>
<title>Setting BBFILES</title>
<programlisting><varname>BBFILES</varname> = "/path/to/bbfiles/*.bb"</programlisting>
</example></para>
<para>With regard to dependencies, it expects the .bb to define a <varname>DEPENDS</varname> variable, which contains a space separated list of <quote>package names</quote>, which themselves are the <varname>PN</varname> variable. The <varname>PN</varname> variable is, in general, by default, set to a component of the .bb filename.</para>
<para>With regard to dependencies, it expects the .bb to define a <varname>DEPENDS</varname> variable, which contains a space seperated list of <quote>package names</quote>, which themselves are the <varname>PN</varname> variable. The <varname>PN</varname> variable is, in general, by default, set to a component of the .bb filename.</para>
<example>
<title>Depending on another .bb</title>
<para>a.bb:
@@ -532,5 +461,6 @@ BBFILE_PRIORITY_upstream = "5"
BBFILE_PRIORITY_local = "10"</screen>
</example>
</section>
</section>
</chapter>
</book>

@@ -23,8 +23,10 @@
# Assign a file to __warn__ to get warnings about slow operations.
#

from inspect import getmro

import copy
import types
import types, sets
types.ImmutableTypes = tuple([ \
    types.BooleanType, \
    types.ComplexType, \
@@ -33,7 +35,7 @@ types.ImmutableTypes = tuple([ \
    types.LongType, \
    types.NoneType, \
    types.TupleType, \
    frozenset] + \
    sets.ImmutableSet] + \
    list(types.StringTypes))

MUTABLE = "__mutable__"

File diff suppressed because it is too large
@@ -25,23 +25,12 @@
#
#Based on functions from the base bb module, Copyright 2003 Holger Schurig

from bb import data, event, mkdirhier, utils
import bb, os, sys

# When we execute a python function we'd like certain things
# in all namespaces, hence we add them to __builtins__
# If we do not do this and use the exec globals, they will
# not be available to subfunctions.
__builtins__['bb'] = bb
__builtins__['os'] = os
from bb import data, fetch, event, mkdirhier, utils
import bb, os

# events
class FuncFailed(Exception):
    """
    Executed function failed
    First parameter a message
    Second parameter is a logfile (optional)
    """
    """Executed function failed"""

class EventException(Exception):
    """Exception which is associated with an Event."""
@@ -54,9 +43,7 @@ class TaskBase(event.Event):

    def __init__(self, t, d ):
        self._task = t
        self._package = bb.data.getVar("PF", d, 1)
        event.Event.__init__(self)
        self._message = "package %s: task %s: %s" % (bb.data.getVar("PF", d, 1), t, bb.event.getName(self)[4:])
        event.Event.__init__(self, d)

    def getTask(self):
        return self._task
@@ -74,10 +61,6 @@ class TaskSucceeded(TaskBase):

class TaskFailed(TaskBase):
    """Task execution failed"""
    def __init__(self, msg, logfile, t, d ):
        self.logfile = logfile
        self.msg = msg
        TaskBase.__init__(self, t, d)

class InvalidTask(TaskBase):
    """Invalid Task"""
@@ -91,22 +74,10 @@ def exec_func(func, d, dirs = None):
    if not body:
        return

    flags = data.getVarFlags(func, d)
    for item in ['deps', 'check', 'interactive', 'python', 'cleandirs', 'dirs', 'lockfiles', 'fakeroot']:
        if not item in flags:
            flags[item] = None

    ispython = flags['python']

    cleandirs = (data.expand(flags['cleandirs'], d) or "").split()
    for cdir in cleandirs:
        os.system("rm -rf %s" % cdir)

    if dirs:
        dirs = data.expand(dirs, d)
    else:
        dirs = (data.expand(flags['dirs'], d) or "").split()
    if not dirs:
        dirs = (data.getVarFlag(func, 'dirs', d) or "").split()
    for adir in dirs:
        adir = data.expand(adir, d)
        mkdirhier(adir)

    if len(dirs) > 0:
@@ -114,114 +85,40 @@ def exec_func(func, d, dirs = None):
    else:
        adir = data.getVar('B', d, 1)

    # Save current directory
    adir = data.expand(adir, d)

    try:
        prevdir = os.getcwd()
    except OSError:
        prevdir = data.getVar('TOPDIR', d, True)

    # Setup logfiles
    t = data.getVar('T', d, 1)
    if not t:
        bb.msg.fatal(bb.msg.domain.Build, "T not set")
    mkdirhier(t)
    logfile = "%s/log.%s.%s" % (t, func, str(os.getpid()))
    runfile = "%s/run.%s.%s" % (t, func, str(os.getpid()))

    # Change to correct directory (if specified)
    prevdir = data.expand('${TOPDIR}', d)
    if adir and os.access(adir, os.F_OK):
        os.chdir(adir)

    # Handle logfiles
    si = file('/dev/null', 'r')
    try:
        if bb.msg.debug_level['default'] > 0 or ispython:
            so = os.popen("tee \"%s\"" % logfile, "w")
        else:
            so = file(logfile, 'w')
    except OSError, e:
        bb.msg.error(bb.msg.domain.Build, "opening log file: %s" % e)
        pass
    if data.getVarFlag(func, "python", d):
        exec_func_python(func, d)
    else:
        exec_func_shell(func, d)

    se = so
    if os.path.exists(prevdir):
        os.chdir(prevdir)

    # Dup the existing fds so we don't lose them
    osi = [os.dup(sys.stdin.fileno()), sys.stdin.fileno()]
    oso = [os.dup(sys.stdout.fileno()), sys.stdout.fileno()]
    ose = [os.dup(sys.stderr.fileno()), sys.stderr.fileno()]

    # Replace those fds with our own
    os.dup2(si.fileno(), osi[1])
    os.dup2(so.fileno(), oso[1])
    os.dup2(se.fileno(), ose[1])

    locks = []
    lockfiles = (data.expand(flags['lockfiles'], d) or "").split()
    for lock in lockfiles:
        locks.append(bb.utils.lockfile(lock))

    try:
        # Run the function
        if ispython:
            exec_func_python(func, d, runfile, logfile)
        else:
            exec_func_shell(func, d, runfile, logfile, flags)

        # Restore original directory
        try:
            os.chdir(prevdir)
        except:
            pass

    finally:

        # Unlock any lockfiles
        for lock in locks:
            bb.utils.unlockfile(lock)

        # Restore the backup fds
        os.dup2(osi[0], osi[1])
        os.dup2(oso[0], oso[1])
        os.dup2(ose[0], ose[1])

        # Close our logs
        si.close()
        so.close()
        se.close()

        if os.path.exists(logfile) and os.path.getsize(logfile) == 0:
            bb.msg.debug(2, bb.msg.domain.Build, "Zero size logfile %s, removing" % logfile)
            os.remove(logfile)

        # Close the backup fds
        os.close(osi[0])
        os.close(oso[0])
        os.close(ose[0])

def exec_func_python(func, d, runfile, logfile):
def exec_func_python(func, d):
    """Execute a python BB 'function'"""
    import re, os

    bbfile = bb.data.getVar('FILE', d, 1)
    tmp = "def " + func + "():\n%s" % data.getVar(func, d)
    tmp += '\n' + func + '()'

    f = open(runfile, "w")
    f.write(tmp)
    comp = utils.better_compile(tmp, func, bbfile)
    comp = utils.better_compile(tmp, func, bb.data.getVar('FILE', d, 1) )
    prevdir = os.getcwd()
    g = {} # globals
    g['bb'] = bb
    g['os'] = os
    g['d'] = d
    try:
        utils.better_exec(comp, g, tmp, bbfile)
    except:
        (t,value,tb) = sys.exc_info()
    utils.better_exec(comp,g,tmp, bb.data.getVar('FILE',d,1))
    if os.path.exists(prevdir):
        os.chdir(prevdir)

        if t in [bb.parse.SkipPackage, bb.build.FuncFailed]:
            raise
        bb.msg.error(bb.msg.domain.Build, "Function %s failed" % func)
        raise FuncFailed("function %s failed" % func, logfile)

def exec_func_shell(func, d, runfile, logfile, flags):
def exec_func_shell(func, d):
    """Execute a shell BB 'function' Returns true if execution was successful.

    For this, it creates a bash shell script in the tmp directory, writes the local
@@ -231,13 +128,23 @@ def exec_func_shell(func, d, runfile, logfile, flags):
    of the directories you need created prior to execution. The last
    item in the list is where we will chdir/cd to.
    """
    import sys

    deps = flags['deps']
    check = flags['check']
    deps = data.getVarFlag(func, 'deps', d)
    check = data.getVarFlag(func, 'check', d)
    interact = data.getVarFlag(func, 'interactive', d)
    if check in globals():
        if globals()[check](func, deps):
            return

    global logfile
    t = data.getVar('T', d, 1)
    if not t:
        return 0
    mkdirhier(t)
    logfile = "%s/log.%s.%s" % (t, func, str(os.getpid()))
    runfile = "%s/run.%s.%s" % (t, func, str(os.getpid()))

    f = open(runfile, "w")
    f.write("#!/bin/sh -e\n")
    if bb.msg.debug_level['default'] > 0: f.write("set -x\n")
@@ -249,21 +156,84 @@ def exec_func_shell(func, d, runfile, logfile, flags):
    os.chmod(runfile, 0775)
    if not func:
        bb.msg.error(bb.msg.domain.Build, "Function not specified")
        raise FuncFailed("Function not specified for exec_func_shell")
        raise FuncFailed()

    # open logs
    si = file('/dev/null', 'r')
    try:
        if bb.msg.debug_level['default'] > 0:
            so = os.popen("tee \"%s\"" % logfile, "w")
        else:
            so = file(logfile, 'w')
    except OSError, e:
        bb.msg.error(bb.msg.domain.Build, "opening log file: %s" % e)
        pass

    se = so

    if not interact:
        # dup the existing fds so we don't lose them
        osi = [os.dup(sys.stdin.fileno()), sys.stdin.fileno()]
        oso = [os.dup(sys.stdout.fileno()), sys.stdout.fileno()]
        ose = [os.dup(sys.stderr.fileno()), sys.stderr.fileno()]

        # replace those fds with our own
        os.dup2(si.fileno(), osi[1])
        os.dup2(so.fileno(), oso[1])
        os.dup2(se.fileno(), ose[1])

    # execute function
    if flags['fakeroot']:
    prevdir = os.getcwd()
    if data.getVarFlag(func, "fakeroot", d):
        maybe_fakeroot = "PATH=\"%s\" fakeroot " % bb.data.getVar("PATH", d, 1)
    else:
        maybe_fakeroot = ''
    lang_environment = "LC_ALL=C "
    ret = os.system('%s%ssh -e %s' % (lang_environment, maybe_fakeroot, runfile))
    ret = os.system('%ssh -e %s' % (maybe_fakeroot, runfile))
    try:
        os.chdir(prevdir)
    except:
        pass

    if ret == 0:
    if not interact:
        # restore the backups
        os.dup2(osi[0], osi[1])
        os.dup2(oso[0], oso[1])
        os.dup2(ose[0], ose[1])

        # close our logs
        si.close()
        so.close()
        se.close()

        # close the backup fds
        os.close(osi[0])
        os.close(oso[0])
        os.close(ose[0])

    if ret==0:
        if bb.msg.debug_level['default'] > 0:
            os.remove(runfile)
            # os.remove(logfile)
        return

        bb.msg.error(bb.msg.domain.Build, "Function %s failed" % func)
        raise FuncFailed("function %s failed" % func, logfile)
    else:
        bb.msg.error(bb.msg.domain.Build, "function %s failed" % func)
        if data.getVar("BBINCLUDELOGS", d):
            bb.msg.error(bb.msg.domain.Build, "log data follows (%s)" % logfile)
            number_of_lines = data.getVar("BBINCLUDELOGS_LINES", d)
            if number_of_lines:
                os.system('tail -n%s %s' % (number_of_lines, logfile))
            else:
                f = open(logfile, "r")
                while True:
                    l = f.readline()
                    if l == '':
                        break
                    l = l.rstrip()
                    print '| %s' % l
                f.close()
        else:
            bb.msg.error(bb.msg.domain.Build, "see log in %s" % logfile)
        raise FuncFailed( logfile )


def exec_task(task, d):
@@ -273,36 +243,72 @@ def exec_task(task, d):
    a function is that a task exists in the task digraph, and therefore
    has dependencies amongst other tasks."""

    # Check whether this is a valid task
    if not data.getVarFlag(task, 'task', d):
        raise EventException("No such task", InvalidTask(task, d))
    # check if the task is in the graph..
    task_graph = data.getVar('_task_graph', d)
    if not task_graph:
        task_graph = bb.digraph()
        data.setVar('_task_graph', task_graph, d)
    task_cache = data.getVar('_task_cache', d)
    if not task_cache:
        task_cache = []
        data.setVar('_task_cache', task_cache, d)
    if not task_graph.hasnode(task):
        raise EventException("Missing node in task graph", InvalidTask(task, d))

    try:
        bb.msg.debug(1, bb.msg.domain.Build, "Executing task %s" % task)
        old_overrides = data.getVar('OVERRIDES', d, 0)
        localdata = data.createCopy(d)
        data.setVar('OVERRIDES', 'task-%s:%s' % (task[3:], old_overrides), localdata)
        data.update_data(localdata)
        data.expandKeys(localdata)
        event.fire(TaskStarted(task, localdata), localdata)
        exec_func(task, localdata)
        event.fire(TaskSucceeded(task, localdata), localdata)
    except FuncFailed, message:
        # Try to extract the optional logfile
        try:
            (msg, logfile) = message
        except:
            logfile = None
            msg = message
        bb.msg.note(1, bb.msg.domain.Build, "Task failed: %s" % message )
        failedevent = TaskFailed(msg, logfile, task, d)
        event.fire(failedevent, d)
        raise EventException("Function failed in task: %s" % message, failedevent)
    # check whether this task needs executing..
    if stamp_is_current(task, d):
        return 1

    # follow digraph path up, then execute our way back down
    def execute(graph, item):
        if data.getVarFlag(item, 'task', d):
            if item in task_cache:
                return 1

            if task != item:
                # deeper than toplevel, exec w/ deps
                exec_task(item, d)
                return 1

            try:
                bb.msg.debug(1, bb.msg.domain.Build, "Executing task %s" % item)
                old_overrides = data.getVar('OVERRIDES', d, 0)
                localdata = data.createCopy(d)
                data.setVar('OVERRIDES', 'task_%s:%s' % (item, old_overrides), localdata)
                data.update_data(localdata)
                event.fire(TaskStarted(item, localdata))
                exec_func(item, localdata)
                event.fire(TaskSucceeded(item, localdata))
                task_cache.append(item)
                data.setVar('_task_cache', task_cache, d)
            except FuncFailed, reason:
                bb.msg.note(1, bb.msg.domain.Build, "Task failed: %s" % reason )
                failedevent = TaskFailed(item, d)
                event.fire(failedevent)
                raise EventException("Function failed in task: %s" % reason, failedevent)

    if data.getVarFlag(task, 'dontrundeps', d):
        execute(None, task)
    else:
        task_graph.walkdown(task, execute)

    # make stamp, or cause event and raise exception
    if not data.getVarFlag(task, 'nostamp', d) and not data.getVarFlag(task, 'selfstamp', d):
    if not data.getVarFlag(task, 'nostamp', d):
        make_stamp(task, d)

def extract_stamp_data(d, fn):
    """
    Extracts stamp data from d which is either a data dictionary (fn unset)
    or a dataCache entry (fn set).
    """
    if fn:
        return (d.task_queues[fn], d.stamp[fn], d.task_deps[fn])
    task_graph = data.getVar('_task_graph', d)
    if not task_graph:
        task_graph = bb.digraph()
        data.setVar('_task_graph', task_graph, d)
    return (task_graph, data.getVar('STAMP', d, 1), None)

def extract_stamp(d, fn):
    """
    Extracts stamp format which is either a data dictionary (fn unset)
@@ -312,6 +318,49 @@ def extract_stamp(d, fn):
        return d.stamp[fn]
    return data.getVar('STAMP', d, 1)

def stamp_is_current(task, d, file_name = None, checkdeps = 1):
    """
    Check status of a given task's stamp.
    Returns 0 if it is not current and needs updating.
    (d can be a data dict or dataCache)
    """

    (task_graph, stampfn, taskdep) = extract_stamp_data(d, file_name)

    if not stampfn:
        return 0

    stampfile = "%s.%s" % (stampfn, task)
    if not os.access(stampfile, os.F_OK):
        return 0

    if checkdeps == 0:
        return 1

    import stat
    tasktime = os.stat(stampfile)[stat.ST_MTIME]

    _deps = []
    def checkStamp(graph, task):
        # check for existence
        if file_name:
            if 'nostamp' in taskdep and task in taskdep['nostamp']:
                return 1
        else:
            if data.getVarFlag(task, 'nostamp', d):
                return 1

        if not stamp_is_current(task, d, file_name, 0 ):
            return 0

        depfile = "%s.%s" % (stampfn, task)
        deptime = os.stat(depfile)[stat.ST_MTIME]
        if deptime > tasktime:
            return 0
        return 1

    return task_graph.walkdown(task, checkStamp)

def stamp_internal(task, d, file_name):
    """
    Internal stamp helper function
@@ -347,40 +396,33 @@ def del_stamp(task, d, file_name = None):
    """
    stamp_internal(task, d, file_name)

def add_tasks(tasklist, d):
def add_task(task, deps, d):
    task_graph = data.getVar('_task_graph', d)
    if not task_graph:
        task_graph = bb.digraph()
    data.setVarFlag(task, 'task', 1, d)
    task_graph.addnode(task, None)
    for dep in deps:
        if not task_graph.hasnode(dep):
            task_graph.addnode(dep, None)
        task_graph.addnode(task, dep)
    # don't assume holding a reference
    data.setVar('_task_graph', task_graph, d)

    task_deps = data.getVar('_task_deps', d)
    if not task_deps:
        task_deps = {}
    if not 'tasks' in task_deps:
        task_deps['tasks'] = []
    if not 'parents' in task_deps:
        task_deps['parents'] = {}

    for task in tasklist:
        task = data.expand(task, d)
        data.setVarFlag(task, 'task', 1, d)

        if not task in task_deps['tasks']:
            task_deps['tasks'].append(task)

        flags = data.getVarFlags(task, d)
        def getTask(name):
    def getTask(name):
        deptask = data.getVarFlag(task, name, d)
        if deptask:
            if not name in task_deps:
                task_deps[name] = {}
            if name in flags:
                deptask = data.expand(flags[name], d)
                task_deps[name][task] = deptask
        getTask('depends')
        getTask('deptask')
        getTask('rdeptask')
        getTask('recrdeptask')
        getTask('nostamp')
        task_deps['parents'][task] = []
        for dep in flags['deps']:
            dep = data.expand(dep, d)
            task_deps['parents'][task].append(dep)
            task_deps[name][task] = deptask
    getTask('deptask')
    getTask('rdeptask')
    getTask('recrdeptask')
    getTask('nostamp')

    # don't assume holding a reference
    data.setVar('_task_deps', task_deps, d)

def remove_task(task, kill, d):
@@ -388,5 +430,22 @@ def remove_task(task, kill, d):

    If kill is 1, also remove tasks that depend on this task."""

    data.delVarFlag(task, 'task', d)
    task_graph = data.getVar('_task_graph', d)
    if not task_graph:
        task_graph = bb.digraph()
    if not task_graph.hasnode(task):
        return

    data.delVarFlag(task, 'task', d)
    ref = 1
    if kill == 1:
        ref = 2
    task_graph.delnode(task, ref)
    data.setVar('_task_graph', task_graph, d)

def task_exists(task, d):
    task_graph = data.getVar('_task_graph', d)
    if not task_graph:
        task_graph = bb.digraph()
        data.setVar('_task_graph', task_graph, d)
    return task_graph.hasnode(task)

@@ -31,6 +31,7 @@
import os, re
import bb.data
import bb.utils
from sets import Set

try:
    import cPickle as pickle
@@ -38,7 +39,7 @@ except ImportError:
    import pickle
    bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")

__cache_version__ = "131"
__cache_version__ = "125"

class Cache:
    """
@@ -49,54 +50,39 @@ class Cache:

        self.cachedir = bb.data.getVar("CACHE", cooker.configuration.data, True)
        self.clean = {}
        self.checked = {}
        self.depends_cache = {}
        self.data = None
        self.data_fn = None
        self.cacheclean = True

        if self.cachedir in [None, '']:
            self.has_cache = False
            bb.msg.note(1, bb.msg.domain.Cache, "Not using a cache. Set CACHE = <directory> to enable.")
            return

        self.has_cache = True
        self.cachefile = os.path.join(self.cachedir,"bb_cache.dat")

        bb.msg.debug(1, bb.msg.domain.Cache, "Using cache in '%s'" % self.cachedir)
        try:
            os.stat( self.cachedir )
        except OSError:
            bb.mkdirhier( self.cachedir )

        # If any of configuration.data's dependencies are newer than the
        # cache there isn't even any point in loading it...
        newest_mtime = 0
        deps = bb.data.getVar("__depends", cooker.configuration.data, True)
        for f,old_mtime in deps:
            if old_mtime > newest_mtime:
                newest_mtime = old_mtime

        if bb.parse.cached_mtime_noerror(self.cachefile) >= newest_mtime:
        else:
            self.has_cache = True
            self.cachefile = os.path.join(self.cachedir,"bb_cache.dat")

            bb.msg.debug(1, bb.msg.domain.Cache, "Using cache in '%s'" % self.cachedir)
            try:
                p = pickle.Unpickler(file(self.cachefile, "rb"))
                os.stat( self.cachedir )
            except OSError:
                bb.mkdirhier( self.cachedir )

        if self.has_cache and (self.mtime(self.cachefile)):
            try:
                p = pickle.Unpickler( file(self.cachefile,"rb"))
                self.depends_cache, version_data = p.load()
                if version_data['CACHE_VER'] != __cache_version__:
                    raise ValueError, 'Cache Version Mismatch'
                if version_data['BITBAKE_VER'] != bb.__version__:
                    raise ValueError, 'Bitbake Version Mismatch'
            except EOFError:
                bb.msg.note(1, bb.msg.domain.Cache, "Truncated cache found, rebuilding...")
                self.depends_cache = {}
            except:
            except (ValueError, KeyError):
                bb.msg.note(1, bb.msg.domain.Cache, "Invalid cache found, rebuilding...")
                self.depends_cache = {}
        else:
            try:
                os.stat( self.cachefile )
                bb.msg.note(1, bb.msg.domain.Cache, "Out of date cache found, rebuilding...")
            except OSError:
                pass

        if self.depends_cache:
            for fn in self.depends_cache.keys():
                self.clean[fn] = ""
                self.cacheValidUpdate(fn)

    def getVar(self, var, fn, exp = 0):
        """
@@ -108,6 +94,7 @@ class Cache:
        2. We're learning what data to cache - serve from data
           backend but add a copy of the data to the cache.
        """

        if fn in self.clean:
            return self.depends_cache[fn][var]

@@ -119,72 +106,32 @@ class Cache:
        # yet setData hasn't been called to setup the right access. Very bad.
        bb.msg.error(bb.msg.domain.Cache, "Parsing error data_fn %s and fn %s don't match" % (self.data_fn, fn))

        self.cacheclean = False
        result = bb.data.getVar(var, self.data, exp)
        self.depends_cache[fn][var] = result
        return result

    def setData(self, virtualfn, fn, data):
    def setData(self, fn, data):
        """
        Called to prime bb_cache ready to learn which variables to cache.
        Will be followed by calls to self.getVar which aren't cached
        but can be fulfilled from self.data.
        """
        self.data_fn = virtualfn
        self.data_fn = fn
        self.data = data

        # Make sure __depends makes the depends_cache
        # If we're a virtual class we need to make sure all our depends are appended
        # to the depends of fn.
        depends = self.getVar("__depends", virtualfn, True) or []
        if "__depends" not in self.depends_cache[fn] or not self.depends_cache[fn]["__depends"]:
            self.depends_cache[fn]["__depends"] = depends
        for dep in depends:
            if dep not in self.depends_cache[fn]["__depends"]:
                self.depends_cache[fn]["__depends"].append(dep)
        self.getVar("__depends", fn, True)
        self.depends_cache[fn]["CACHETIMESTAMP"] = bb.parse.cached_mtime(fn)

        # Make sure the variants always make it into the cache too
        self.getVar('__VARIANTS', virtualfn, True)

        self.depends_cache[virtualfn]["CACHETIMESTAMP"] = bb.parse.cached_mtime(fn)

    def virtualfn2realfn(self, virtualfn):
        """
        Convert a virtual file name to a real one + the associated subclass keyword
        """

        fn = virtualfn
        cls = ""
        if virtualfn.startswith('virtual:'):
            cls = virtualfn.split(':', 2)[1]
            fn = virtualfn.replace('virtual:' + cls + ':', '')
        #bb.msg.debug(2, bb.msg.domain.Cache, "virtualfn2realfn %s to %s %s" % (virtualfn, fn, cls))
        return (fn, cls)

    def realfn2virtual(self, realfn, cls):
        """
        Convert a real filename + the associated subclass keyword to a virtual filename
        """
        if cls == "":
            #bb.msg.debug(2, bb.msg.domain.Cache, "realfn2virtual %s and '%s' to %s" % (realfn, cls, realfn))
            return realfn
        #bb.msg.debug(2, bb.msg.domain.Cache, "realfn2virtual %s and %s to %s" % (realfn, cls, "virtual:" + cls + ":" + realfn))
        return "virtual:" + cls + ":" + realfn

    def loadDataFull(self, virtualfn, cfgData):
    def loadDataFull(self, fn, cfgData):
        """
        Return a complete set of data for fn.
        To do this, we need to parse the file.
        """
        bb_data, skipped = self.load_bbfile(fn, cfgData)
        return bb_data

        (fn, cls) = self.virtualfn2realfn(virtualfn)

        bb.msg.debug(1, bb.msg.domain.Cache, "Parsing %s (full)" % fn)

        bb_data = self.load_bbfile(fn, cfgData)
        return bb_data[cls]

    def loadData(self, fn, cfgData, cacheData):
    def loadData(self, fn, cfgData):
        """
        Load a subset of data for fn.
        If the cached data is valid we do nothing,
@@ -192,39 +139,14 @@ class Cache:
        to record the variables accessed.
        Return the cache status and whether the file was skipped when parsed
        """
        skipped = 0
        virtuals = 0

        if fn not in self.checked:
            self.cacheValidUpdate(fn)

        if self.cacheValid(fn):
            multi = self.getVar('__VARIANTS', fn, True)
            for cls in (multi or "").split() + [""]:
                virtualfn = self.realfn2virtual(fn, cls)
                if self.depends_cache[virtualfn]["__SKIPPED"]:
                    skipped += 1
                    bb.msg.debug(1, bb.msg.domain.Cache, "Skipping %s" % virtualfn)
                    continue
                self.handle_data(virtualfn, cacheData)
                virtuals += 1
            return True, skipped, virtuals

        bb.msg.debug(1, bb.msg.domain.Cache, "Parsing %s" % fn)

        bb_data = self.load_bbfile(fn, cfgData)

        for data in bb_data:
            virtualfn = self.realfn2virtual(fn, data)
            self.setData(virtualfn, fn, bb_data[data])
            if self.getVar("__SKIPPED", virtualfn, True):
                skipped += 1
                bb.msg.debug(1, bb.msg.domain.Cache, "Skipping %s" % virtualfn)
            else:
                self.handle_data(virtualfn, cacheData)
                virtuals += 1
        return False, skipped, virtuals
            if "SKIPPED" in self.depends_cache[fn]:
                return True, True
            return True, False

        bb_data, skipped = self.load_bbfile(fn, cfgData)
        self.setData(fn, bb_data)
        return False, skipped

    def cacheValid(self, fn):
        """
@@ -247,10 +169,11 @@ class Cache:
        if not self.has_cache:
            return False

        self.checked[fn] = ""

        # Pretend we're clean so getVar works
        self.clean[fn] = ""
        # Check file still exists
        if self.mtime(fn) == 0:
            bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s no longer exists" % fn)
            self.remove(fn)
            return False

        # File isn't in depends_cache
        if not fn in self.depends_cache:
@@ -258,47 +181,40 @@ class Cache:
            self.remove(fn)
            return False

        mtime = bb.parse.cached_mtime_noerror(fn)

        # Check file still exists
        if mtime == 0:
            bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s no longer exists" % fn)
            self.remove(fn)
            return False

        # Check the file's timestamp
        if mtime != self.getVar("CACHETIMESTAMP", fn, True):
        if bb.parse.cached_mtime(fn) > self.getVar("CACHETIMESTAMP", fn, True):
            bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s changed" % fn)
            self.remove(fn)
            return False

        # Check dependencies are still valid
        depends = self.getVar("__depends", fn, True)
        if depends:
            for f,old_mtime in depends:
                fmtime = bb.parse.cached_mtime_noerror(f)
                # Check if file still exists
                if old_mtime != 0 and fmtime == 0:
                    self.remove(fn)
                    return False
        for f,old_mtime in depends:
            # Check if file still exists
            if self.mtime(f) == 0:
                return False

                if (fmtime != old_mtime):
                    bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s's dependency %s changed" % (fn, f))
                    self.remove(fn)
                    return False
            new_mtime = bb.parse.cached_mtime(f)
            if (new_mtime > old_mtime):
                bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s's dependency %s changed" % (fn, f))
                self.remove(fn)
                return False

        #bb.msg.debug(2, bb.msg.domain.Cache, "Depends Cache: %s is clean" % fn)
        bb.msg.debug(2, bb.msg.domain.Cache, "Depends Cache: %s is clean" % fn)
        if not fn in self.clean:
            self.clean[fn] = ""

        # Mark extended class data as clean too
        multi = self.getVar('__VARIANTS', fn, True)
        for cls in (multi or "").split():
            virtualfn = self.realfn2virtual(fn, cls)
            self.clean[virtualfn] = ""

        return True

    def skip(self, fn):
        """
        Mark a fn as skipped
        Called from the parser
        """
        if not fn in self.depends_cache:
            self.depends_cache[fn] = {}
        self.depends_cache[fn]["SKIPPED"] = "1"

    def remove(self, fn):
        """
        Remove a fn from the cache
@@ -315,30 +231,16 @@ class Cache:
        Save the cache
        Called from the parser when complete (or exiting)
        """
        import copy

        if not self.has_cache:
            return

        if self.cacheclean:
            bb.msg.note(1, bb.msg.domain.Cache, "Cache is clean, not saving.")
            return

        version_data = {}
        version_data['CACHE_VER'] = __cache_version__
        version_data['BITBAKE_VER'] = bb.__version__

        cache_data = copy.copy(self.depends_cache)
        for fn in self.depends_cache:
            if '__BB_DONT_CACHE' in self.depends_cache[fn] and self.depends_cache[fn]['__BB_DONT_CACHE']:
                bb.msg.debug(2, bb.msg.domain.Cache, "Not caching %s, marked as not cacheable" % fn)
                del cache_data[fn]
            elif 'PV' in self.depends_cache[fn] and 'SRCREVINACTION' in self.depends_cache[fn]['PV']:
                bb.msg.error(bb.msg.domain.Cache, "Not caching %s as it had SRCREVINACTION in PV. Please report this bug" % fn)
                del cache_data[fn]

        p = pickle.Pickler(file(self.cachefile, "wb" ), -1 )
        p.dump([cache_data, version_data])
        p.dump([self.depends_cache, version_data])

    def mtime(self, cachefile):
        return bb.parse.cached_mtime_noerror(cachefile)
@@ -349,17 +251,16 @@ class Cache:
        """

        pn = self.getVar('PN', file_name, True)
        pe = self.getVar('PE', file_name, True) or "0"
        pv = self.getVar('PV', file_name, True)
        if 'SRCREVINACTION' in pv:
            bb.note("Found SRCREVINACTION in PV (%s) or %s. Please report this bug." % (pv, file_name))
        pr = self.getVar('PR', file_name, True)
        dp = int(self.getVar('DEFAULT_PREFERENCE', file_name, True) or "0")
        provides = Set([pn] + (self.getVar("PROVIDES", file_name, True) or "").split())
        depends = bb.utils.explode_deps(self.getVar("DEPENDS", file_name, True) or "")
        packages = (self.getVar('PACKAGES', file_name, True) or "").split()
        packages_dynamic = (self.getVar('PACKAGES_DYNAMIC', file_name, True) or "").split()
        rprovides = (self.getVar("RPROVIDES", file_name, True) or "").split()

        cacheData.task_queues[file_name] = self.getVar("_task_graph", file_name, True)
        cacheData.task_deps[file_name] = self.getVar("_task_deps", file_name, True)

        # build PackageName to FileName lookup table
@@ -371,34 +272,25 @@ class Cache:
|
||||
|
||||
# build FileName to PackageName lookup table
|
||||
cacheData.pkg_fn[file_name] = pn
|
||||
cacheData.pkg_pepvpr[file_name] = (pe,pv,pr)
|
||||
cacheData.pkg_pvpr[file_name] = (pv,pr)
|
||||
cacheData.pkg_dp[file_name] = dp
|
||||
|
||||
provides = [pn]
|
||||
for provide in (self.getVar("PROVIDES", file_name, True) or "").split():
|
||||
if provide not in provides:
|
||||
provides.append(provide)
|
||||
|
||||
# Build forward and reverse provider hashes
|
||||
# Forward: virtual -> [filenames]
|
||||
# Reverse: PN -> [virtuals]
|
||||
if pn not in cacheData.pn_provides:
|
||||
cacheData.pn_provides[pn] = []
|
||||
cacheData.pn_provides[pn] = Set()
|
||||
cacheData.pn_provides[pn] |= provides
|
||||
|
||||
cacheData.fn_provides[file_name] = provides
|
||||
for provide in provides:
|
||||
if provide not in cacheData.providers:
|
||||
cacheData.providers[provide] = []
|
||||
cacheData.providers[provide].append(file_name)
|
||||
if not provide in cacheData.pn_provides[pn]:
|
||||
cacheData.pn_provides[pn].append(provide)
|
||||
|
||||
cacheData.deps[file_name] = []
|
||||
cacheData.deps[file_name] = Set()
|
||||
for dep in depends:
|
||||
if not dep in cacheData.deps[file_name]:
|
||||
cacheData.deps[file_name].append(dep)
|
||||
if not dep in cacheData.all_depends:
|
||||
cacheData.all_depends.append(dep)
|
||||
cacheData.all_depends.add(dep)
|
||||
cacheData.deps[file_name].add(dep)
|
||||
|
||||
# Build reverse hash for PACKAGES, so runtime dependencies
|
||||
# can be be resolved (RDEPENDS, RRECOMMENDS etc.)
|
||||
@@ -420,30 +312,32 @@ class Cache:
|
||||
|
||||
# Build hash of runtime depends and rececommends
|
||||
|
||||
def add_dep(deplist, deps):
|
||||
for dep in deps:
|
||||
if not dep in deplist:
|
||||
deplist[dep] = ""
|
||||
|
||||
if not file_name in cacheData.rundeps:
|
||||
cacheData.rundeps[file_name] = {}
|
||||
if not file_name in cacheData.runrecs:
|
||||
cacheData.runrecs[file_name] = {}
|
||||
|
||||
rdepends = self.getVar('RDEPENDS', file_name, True) or ""
|
||||
rrecommends = self.getVar('RRECOMMENDS', file_name, True) or ""
|
||||
for package in packages + [pn]:
|
||||
if not package in cacheData.rundeps[file_name]:
|
||||
cacheData.rundeps[file_name][package] = []
|
||||
cacheData.rundeps[file_name][package] = {}
|
||||
if not package in cacheData.runrecs[file_name]:
|
||||
cacheData.runrecs[file_name][package] = []
|
||||
cacheData.runrecs[file_name][package] = {}
|
||||
|
||||
cacheData.rundeps[file_name][package] = rdepends + " " + (self.getVar("RDEPENDS_%s" % package, file_name, True) or "")
|
||||
cacheData.runrecs[file_name][package] = rrecommends + " " + (self.getVar("RRECOMMENDS_%s" % package, file_name, True) or "")
|
||||
add_dep(cacheData.rundeps[file_name][package], bb.utils.explode_deps(self.getVar('RDEPENDS', file_name, True) or ""))
|
||||
add_dep(cacheData.runrecs[file_name][package], bb.utils.explode_deps(self.getVar('RRECOMMENDS', file_name, True) or ""))
|
||||
add_dep(cacheData.rundeps[file_name][package], bb.utils.explode_deps(self.getVar("RDEPENDS_%s" % package, file_name, True) or ""))
|
||||
add_dep(cacheData.runrecs[file_name][package], bb.utils.explode_deps(self.getVar("RRECOMMENDS_%s" % package, file_name, True) or ""))
|
||||
|
||||
# Collect files we may need for possible world-dep
|
||||
# calculations
|
||||
if not self.getVar('BROKEN', file_name, True) and not self.getVar('EXCLUDE_FROM_WORLD', file_name, True):
|
||||
cacheData.possible_world.append(file_name)
|
||||
|
||||
# Touch this to make sure its in the cache
|
||||
self.getVar('__BB_DONT_CACHE', file_name, True)
|
||||
self.getVar('__VARIANTS', file_name, True)
|
||||
|
||||
def load_bbfile( self, bbfile , config):
|
||||
"""
|
||||
@@ -458,13 +352,16 @@ class Cache:
|
||||
data.setVar('TMPDIR', data.getVar('TMPDIR', config, 1) or "", config)
|
||||
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
|
||||
oldpath = os.path.abspath(os.getcwd())
|
||||
if bb.parse.cached_mtime_noerror(bbfile_loc):
|
||||
if self.mtime(bbfile_loc):
|
||||
os.chdir(bbfile_loc)
|
||||
bb_data = data.init_db(config)
|
||||
try:
|
||||
bb_data = parse.handle(bbfile, bb_data) # read .bb data
|
||||
os.chdir(oldpath)
|
||||
return bb_data
|
||||
return bb_data, False
|
||||
except bb.parse.SkipPackage:
|
||||
os.chdir(oldpath)
|
||||
return bb_data, True
|
||||
except:
|
||||
os.chdir(oldpath)
|
||||
raise
|
||||
@@ -510,11 +407,10 @@ class CacheData:
|
||||
self.possible_world = []
|
||||
self.pkg_pn = {}
|
||||
self.pkg_fn = {}
|
||||
self.pkg_pepvpr = {}
|
||||
self.pkg_pvpr = {}
|
||||
self.pkg_dp = {}
|
||||
self.pn_provides = {}
|
||||
self.fn_provides = {}
|
||||
self.all_depends = []
|
||||
self.all_depends = Set()
|
||||
self.deps = {}
|
||||
self.rundeps = {}
|
||||
self.runrecs = {}
|
||||
@@ -528,6 +424,6 @@ class CacheData:
|
||||
(set elsewhere)
|
||||
"""
|
||||
self.ignored_dependencies = []
|
||||
self.world_target = set()
|
||||
self.world_target = Set()
|
||||
self.bbfile_priority = {}
|
||||
self.bbfile_config_priorities = []
|
||||
|
||||
@@ -1,273 +0,0 @@
|
||||
"""
|
||||
BitBake 'Command' module
|
||||
|
||||
Provide an interface to interact with the bitbake server through 'commands'
|
||||
"""
|
||||
|
||||
# Copyright (C) 2006-2007 Richard Purdie
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License version 2 as
|
||||
# published by the Free Software Foundation.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License along
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
|
||||
"""
|
||||
The bitbake server takes 'commands' from its UI/commandline.
|
||||
Commands are either synchronous or asynchronous.
|
||||
Async commands return data to the client in the form of events.
|
||||
Sync commands must only return data through the function return value
|
||||
and must not trigger events, directly or indirectly.
|
||||
Commands are queued in a CommandQueue
|
||||
"""

import bb.event
import bb.cooker
import bb.data

async_cmds = {}
sync_cmds = {}

class Command:
    """
    A queue of asynchronous commands for bitbake
    """
    def __init__(self, cooker):

        self.cooker = cooker
        self.cmds_sync = CommandsSync()
        self.cmds_async = CommandsAsync()

        # FIXME Add lock for this
        self.currentAsyncCommand = None

        for attr in CommandsSync.__dict__:
            command = attr[:].lower()
            method = getattr(CommandsSync, attr)
            sync_cmds[command] = (method)

        for attr in CommandsAsync.__dict__:
            command = attr[:].lower()
            method = getattr(CommandsAsync, attr)
            async_cmds[command] = (method)

    def runCommand(self, commandline):
        try:
            command = commandline.pop(0)
            if command in CommandsSync.__dict__:
                # Can run synchronous commands straight away
                return getattr(CommandsSync, command)(self.cmds_sync, self, commandline)
            if self.currentAsyncCommand is not None:
                return "Busy (%s in progress)" % self.currentAsyncCommand[0]
            if command not in CommandsAsync.__dict__:
                return "No such command"
            self.currentAsyncCommand = (command, commandline)
            self.cooker.server.register_idle_function(self.cooker.runCommands, self.cooker)
            return True
        except:
            import traceback
            return traceback.format_exc()

    def runAsyncCommand(self):
        try:
            if self.currentAsyncCommand is not None:
                (command, options) = self.currentAsyncCommand
                commandmethod = getattr(CommandsAsync, command)
                needcache = getattr( commandmethod, "needcache" )
                if needcache and self.cooker.cookerState != bb.cooker.cookerParsed:
                    self.cooker.updateCache()
                    return True
                else:
                    commandmethod(self.cmds_async, self, options)
                    return False
            else:
                return False
        except:
            import traceback
            self.finishAsyncCommand(traceback.format_exc())
            return False

    def finishAsyncCommand(self, error = None):
        if error:
            bb.event.fire(CookerCommandFailed(error), self.cooker.configuration.event_data)
        else:
            bb.event.fire(CookerCommandCompleted(), self.cooker.configuration.event_data)
        self.currentAsyncCommand = None


class CommandsSync:
    """
    A class of synchronous commands
    These should run quickly so as not to hurt interactive performance.
    These must not influence any running synchronous command.
    """

    def stateShutdown(self, command, params):
        """
        Trigger cooker 'shutdown' mode
        """
        command.cooker.cookerAction = bb.cooker.cookerShutdown

    def stateStop(self, command, params):
        """
        Stop the cooker
        """
        command.cooker.cookerAction = bb.cooker.cookerStop

    def getCmdLineAction(self, command, params):
        """
        Get any command parsed from the commandline
        """
        return command.cooker.commandlineAction

    def getVariable(self, command, params):
        """
        Read the value of a variable from configuration.data
        """
        varname = params[0]
        expand = True
        if len(params) > 1:
            expand = params[1]

        return bb.data.getVar(varname, command.cooker.configuration.data, expand)

    def setVariable(self, command, params):
        """
        Set the value of variable in configuration.data
        """
        varname = params[0]
        value = params[1]
        bb.data.setVar(varname, value, command.cooker.configuration.data)


class CommandsAsync:
    """
    A class of asynchronous commands
    These functions communicate via generated events.
    Any function that requires metadata parsing should be here.
    """

    def buildFile(self, command, params):
        """
        Build a single specified .bb file
        """
        bfile = params[0]
        task = params[1]

        command.cooker.buildFile(bfile, task)
    buildFile.needcache = False

    def buildTargets(self, command, params):
        """
        Build a set of targets
        """
        pkgs_to_build = params[0]
        task = params[1]

        command.cooker.buildTargets(pkgs_to_build, task)
    buildTargets.needcache = True

    def generateDepTreeEvent(self, command, params):
        """
        Generate an event containing the dependency information
        """
        pkgs_to_build = params[0]
        task = params[1]

        command.cooker.generateDepTreeEvent(pkgs_to_build, task)
        command.finishAsyncCommand()
    generateDepTreeEvent.needcache = True

    def generateDotGraph(self, command, params):
        """
        Dump dependency information to disk as .dot files
        """
        pkgs_to_build = params[0]
        task = params[1]

        command.cooker.generateDotGraphFiles(pkgs_to_build, task)
        command.finishAsyncCommand()
    generateDotGraph.needcache = True

    def showVersions(self, command, params):
        """
        Show the currently selected versions
        """
        command.cooker.showVersions()
        command.finishAsyncCommand()
    showVersions.needcache = True

    def showEnvironmentTarget(self, command, params):
        """
        Print the environment of a target recipe
        (needs the cache to work out which recipe to use)
        """
        pkg = params[0]

        command.cooker.showEnvironment(None, pkg)
        command.finishAsyncCommand()
    showEnvironmentTarget.needcache = True

    def showEnvironment(self, command, params):
        """
        Print the standard environment
        or if specified the environment for a specified recipe
        """
        bfile = params[0]

        command.cooker.showEnvironment(bfile)
        command.finishAsyncCommand()
    showEnvironment.needcache = False

    def parseFiles(self, command, params):
        """
        Parse the .bb files
        """
        command.cooker.updateCache()
        command.finishAsyncCommand()
    parseFiles.needcache = True

    def compareRevisions(self, command, params):
        """
        Parse the .bb files
        """
        command.cooker.compareRevisions()
        command.finishAsyncCommand()
    compareRevisions.needcache = True

#
# Events
#
class CookerCommandCompleted(bb.event.Event):
    """
    Cooker command completed
    """
    def __init__(self):
        bb.event.Event.__init__(self)


class CookerCommandFailed(bb.event.Event):
    """
    Cooker command failed
    """
    def __init__(self, error):
        bb.event.Event.__init__(self)
        self.error = error

class CookerCommandSetExitCode(bb.event.Event):
    """
    Set the exit code for a cooker command
    """
    def __init__(self, exitcode):
        bb.event.Event.__init__(self)
        self.exitcode = int(exitcode)


File diff suppressed because it is too large
@@ -1,191 +0,0 @@
"""
Python Daemonizing helper

Configurable daemon behaviors:

    1.) The current working directory set to the "/" directory.
    2.) The current file creation mode mask set to 0.
    3.) Close all open files (1024).
    4.) Redirect standard I/O streams to "/dev/null".

A failed call to fork() now raises an exception.

References:
    1) Advanced Programming in the Unix Environment: W. Richard Stevens
    2) Unix Programming Frequently Asked Questions:
        http://www.erlenstar.demon.co.uk/unix/faq_toc.html

Modified to allow a function to be daemonized and return for
bitbake use by Richard Purdie
"""

__author__ = "Chad J. Schroeder"
__copyright__ = "Copyright (C) 2005 Chad J. Schroeder"
__version__ = "0.2"

# Standard Python modules.
import os    # Miscellaneous OS interfaces.
import sys   # System-specific parameters and functions.

# Default daemon parameters.
# File mode creation mask of the daemon.
# For BitBake's children, we do want to inherit the parent umask.
UMASK = None

# Default maximum for the number of available file descriptors.
MAXFD = 1024

# The standard I/O file descriptors are redirected to /dev/null by default.
if (hasattr(os, "devnull")):
    REDIRECT_TO = os.devnull
else:
    REDIRECT_TO = "/dev/null"

def createDaemon(function, logfile):
    """
    Detach a process from the controlling terminal and run it in the
    background as a daemon, returning control to the caller.
    """

    try:
        # Fork a child process so the parent can exit. This returns control to
        # the command-line or shell. It also guarantees that the child will not
        # be a process group leader, since the child receives a new process ID
        # and inherits the parent's process group ID. This step is required
        # to insure that the next call to os.setsid is successful.
        pid = os.fork()
    except OSError, e:
        raise Exception, "%s [%d]" % (e.strerror, e.errno)

    if (pid == 0):      # The first child.
        # To become the session leader of this new session and the process group
        # leader of the new process group, we call os.setsid(). The process is
        # also guaranteed not to have a controlling terminal.
        os.setsid()

        # Is ignoring SIGHUP necessary?
        #
        # It's often suggested that the SIGHUP signal should be ignored before
        # the second fork to avoid premature termination of the process. The
        # reason is that when the first child terminates, all processes, e.g.
        # the second child, in the orphaned group will be sent a SIGHUP.
        #
        # "However, as part of the session management system, there are exactly
        # two cases where SIGHUP is sent on the death of a process:
        #
        #   1) When the process that dies is the session leader of a session that
        #      is attached to a terminal device, SIGHUP is sent to all processes
        #      in the foreground process group of that terminal device.
        #   2) When the death of a process causes a process group to become
        #      orphaned, and one or more processes in the orphaned group are
        #      stopped, then SIGHUP and SIGCONT are sent to all members of the
        #      orphaned group." [2]
        #
        # The first case can be ignored since the child is guaranteed not to have
        # a controlling terminal. The second case isn't so easy to dismiss.
        # The process group is orphaned when the first child terminates and
        # POSIX.1 requires that every STOPPED process in an orphaned process
        # group be sent a SIGHUP signal followed by a SIGCONT signal. Since the
        # second child is not STOPPED though, we can safely forego ignoring the
        # SIGHUP signal. In any case, there are no ill-effects if it is ignored.
        #
        # import signal       # Set handlers for asynchronous events.
        # signal.signal(signal.SIGHUP, signal.SIG_IGN)

        try:
            # Fork a second child and exit immediately to prevent zombies. This
            # causes the second child process to be orphaned, making the init
            # process responsible for its cleanup. And, since the first child is
            # a session leader without a controlling terminal, it's possible for
            # it to acquire one by opening a terminal in the future (System V-
            # based systems). This second fork guarantees that the child is no
            # longer a session leader, preventing the daemon from ever acquiring
            # a controlling terminal.
            pid = os.fork()     # Fork a second child.
        except OSError, e:
            raise Exception, "%s [%d]" % (e.strerror, e.errno)

        if (pid == 0):  # The second child.
            # We probably don't want the file mode creation mask inherited from
            # the parent, so we give the child complete control over permissions.
            if UMASK is not None:
                os.umask(UMASK)
        else:
            # Parent (the first child) of the second child.
            os._exit(0)
    else:
        # exit() or _exit()?
        # _exit is like exit(), but it doesn't call any functions registered
        # with atexit (and on_exit) or any registered signal handlers. It also
        # closes any open file descriptors. Using exit() may cause all stdio
        # streams to be flushed twice and any temporary files may be unexpectedly
        # removed. It's therefore recommended that child branches of a fork()
        # and the parent branch(es) of a daemon use _exit().
        return

    # Close all open file descriptors. This prevents the child from keeping
    # open any file descriptors inherited from the parent. There is a variety
    # of methods to accomplish this task. Three are listed below.
    #
    # Try the system configuration variable, SC_OPEN_MAX, to obtain the maximum
    # number of open file descriptors to close. If it doesn't exist, use
    # the default value (configurable).
    #
    # try:
    #     maxfd = os.sysconf("SC_OPEN_MAX")
    # except (AttributeError, ValueError):
    #     maxfd = MAXFD
    #
    # OR
    #
    # if (os.sysconf_names.has_key("SC_OPEN_MAX")):
    #     maxfd = os.sysconf("SC_OPEN_MAX")
    # else:
    #     maxfd = MAXFD
    #
    # OR
    #
    # Use the getrlimit method to retrieve the maximum file descriptor number
    # that can be opened by this process. If there is no limit on the
    # resource, use the default value.
    #
    import resource             # Resource usage information.
    maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
    if (maxfd == resource.RLIM_INFINITY):
        maxfd = MAXFD

    # Iterate through and close all file descriptors.
    # for fd in range(0, maxfd):
    #     try:
    #         os.close(fd)
    #     except OSError:       # ERROR, fd wasn't open to begin with (ignored)
    #         pass

    # Redirect the standard I/O file descriptors to the specified file. Since
    # the daemon has no controlling terminal, most daemons redirect stdin,
    # stdout, and stderr to /dev/null. This is done to prevent side-effects
    # from reads and writes to the standard I/O file descriptors.

    # This call to open is guaranteed to return the lowest file descriptor,
    # which will be 0 (stdin), since it was closed above.
    # os.open(REDIRECT_TO, os.O_RDWR)     # standard input (0)

    # Duplicate standard input to standard output and standard error.
    # os.dup2(0, 1)                       # standard output (1)
    # os.dup2(0, 2)                       # standard error (2)

    si = file('/dev/null', 'r')
    so = file(logfile, 'w')
    se = so

    # Replace those fds with our own
    os.dup2(si.fileno(), sys.stdin.fileno())
    os.dup2(so.fileno(), sys.stdout.fileno())
    os.dup2(se.fileno(), sys.stderr.fileno())

    function()

    os._exit(0)

@@ -37,7 +37,7 @@ the speed is more critical here.
#
#Based on functions from the base bb module, Copyright 2003 Holger Schurig

import sys, os, re, types
import sys, os, re, time, types
if sys.argv[0][-5:] == "pydoc":
    path = os.path.dirname(os.path.dirname(sys.argv[1]))
else:
@@ -47,9 +44,6 @@ sys.path.insert(0,path)
from bb import data_smart
import bb

class VarExpandError(Exception):
    pass

_dict_type = data_smart.DataSmart

def init():
@@ -99,19 +96,6 @@ def getVar(var, d, exp = 0):
    """
    return d.getVar(var,exp)


def renameVar(key, newkey, d):
    """Renames a variable from key to newkey

    Example:
        >>> d = init()
        >>> setVar('TEST', 'testcontents', d)
        >>> renameVar('TEST', 'TEST2', d)
        >>> print getVar('TEST2', d)
        testcontents
    """
    d.renameVar(key, newkey)

def delVar(var, d):
    """Removes a variable from the data set

@@ -285,7 +269,6 @@ def expandKeys(alterdata, readdata = None):
    if readdata == None:
        readdata = alterdata

    todolist = {}
    for key in keys(alterdata):
        if not '${' in key:
            continue
@@ -293,14 +276,20 @@ def expandKeys(alterdata, readdata = None):
        ekey = expand(key, readdata)
        if key == ekey:
            continue
        todolist[key] = ekey
        val = getVar(key, alterdata)
        if val is None:
            continue
        # import copy
        # setVarFlags(ekey, copy.copy(getVarFlags(key, readdata)), alterdata)
        setVar(ekey, val, alterdata)

    # These two for loops are split for performance to maximise the
    # usefulness of the expand cache
        for i in ('_append', '_prepend'):
            dest = getVarFlag(ekey, i, alterdata) or []
            src = getVarFlag(key, i, readdata) or []
            dest.extend(src)
            setVarFlag(ekey, i, dest, alterdata)

    for key in todolist:
        ekey = todolist[key]
        renameVar(key, ekey, alterdata)
        delVar(key, alterdata)

def expandData(alterdata, readdata = None):
    """For each variable in alterdata, expand it, and update the var contents.
@@ -327,26 +316,27 @@ def expandData(alterdata, readdata = None):
        if val != expanded:
            setVar(key, expanded, alterdata)

import os

def inheritFromOS(d):
    """Inherit variables from the environment."""
    # fakeroot needs to be able to set these
    non_inherit_vars = [ "LD_LIBRARY_PATH", "LD_PRELOAD" ]
    for s in os.environ.keys():
        try:
            setVar(s, os.environ[s], d)
            setVarFlag(s, "export", True, d)
        except TypeError:
            pass
        if not s in non_inherit_vars:
            try:
                setVar(s, os.environ[s], d)
                setVarFlag(s, 'matchesenv', '1', d)
            except TypeError:
                pass

import sys

def emit_var(var, o=sys.__stdout__, d = init(), all=False):
    """Emit a variable to be sourced by a shell."""
    if getVarFlag(var, "python", d):
        return 0

    export = getVarFlag(var, "export", d)
    unexport = getVarFlag(var, "unexport", d)
    func = getVarFlag(var, "func", d)
    if not all and not export and not unexport and not func:
        return 0

    try:
        if all:
            oval = getVar(var, d, 0)
@@ -366,31 +356,34 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
    if type(val) is not types.StringType:
        return 0

    if (var.find("-") != -1 or var.find(".") != -1 or var.find('{') != -1 or var.find('}') != -1 or var.find('+') != -1) and not all:
    if getVarFlag(var, 'matchesenv', d):
        return 0

    varExpanded = expand(var, d)

    if unexport:
        o.write('unset %s\n' % varExpanded)
        return 1
    if (var.find("-") != -1 or var.find(".") != -1 or var.find('{') != -1 or var.find('}') != -1 or var.find('+') != -1) and not all:
        return 0

    val.rstrip()
    if not val:
        return 0

    varExpanded = expand(var, d)

    if func:
        # NOTE: should probably check for unbalanced {} within the var
    if getVarFlag(var, "func", d):
        # NOTE: should probably check for unbalanced {} within the var
        o.write("%s() {\n%s\n}\n" % (varExpanded, val))
        return 1

    if export:
        o.write('export ')

    # if we're going to output this within doublequotes,
    # to a shell, we need to escape the quotes in the var
    alter = re.sub('"', '\\"', val.strip())
    o.write('%s="%s"\n' % (varExpanded, alter))
    else:
        if getVarFlag(var, "unexport", d):
            o.write('unset %s\n' % varExpanded)
            return 1
        if getVarFlag(var, "export", d):
            o.write('export ')
        else:
            if not all:
                return 0
        # if we're going to output this within doublequotes,
        # to a shell, we need to escape the quotes in the var
        alter = re.sub('"', '\\"', val.strip())
        o.write('%s="%s"\n' % (varExpanded, alter))
    return 1
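
The escaping step above is just a quote-doubling pass before the value is wrapped in double quotes; a small standalone sketch of the same transformation (the variable name and value are examples):

    import re

    val = 'say "hello"'
    # Escape embedded double quotes so the shell sees them literally.
    alter = re.sub('"', '\\"', val.strip())
    print 'MSG="%s"' % alter     # emits: MSG="say \"hello\""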

@@ -556,9 +549,7 @@ def inherits_class(klass, d):
def _test():
    """Start a doctest run on this module"""
    import doctest
    import bb
    from bb import data
    bb.msg.set_debug_level(0)
    doctest.testmod(data)

if __name__ == "__main__":

@@ -32,6 +32,7 @@ import copy, os, re, sys, time, types
import bb
from bb import utils, methodpool
from COW import COWDictBase
from sets import Set
from new import classobj


@@ -39,11 +40,6 @@ __setvar_keyword__ = ["_append","_prepend"]
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend)(_(?P<add>.*))?')
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
_expand_globals = {
    "os": os,
    "bb": bb,
    "time": time,
}


class DataSmart:
@@ -55,7 +51,6 @@ class DataSmart:
        self._seen_overrides = seen

        self.expand_cache = {}
        self.expand_locals = {"d": self}

    def expand(self,s, varname):
        def var_sub(match):
@@ -72,7 +67,8 @@ class DataSmart:
        def python_sub(match):
            import bb
            code = match.group()[3:-1]
            s = eval(code, _expand_globals, self.expand_locals)
            locals()['d'] = self
            s = eval(code)
            if type(s) == types.IntType: s = str(s)
            return s
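
Both variants above splice the result of eval()ing the ${@...} body back into the string. A self-contained sketch of that substitution (names here are invented for illustration):

    import re

    python_expr = re.compile(r"\$\{@.+?\}")

    def expand_python(s, env):
        # Evaluate each ${@...} snippet and substitute its str() result.
        return python_expr.sub(lambda m: str(eval(m.group()[3:-1], env)), s)

    print expand_python("next is ${@1 + 1}", {})   # -> "next is 2"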

@@ -146,19 +142,22 @@ class DataSmart:
                try:
                    self._special_values[keyword].add( base )
                except:
                    self._special_values[keyword] = set()
                    self._special_values[keyword] = Set()
                    self._special_values[keyword].add( base )

                return

        if not var in self.dict:
            self._makeShadowCopy(var)
        if self.getVarFlag(var, 'matchesenv'):
            self.delVarFlag(var, 'matchesenv')
            self.setVarFlag(var, 'export', 1)

        # more cookies for the cookie monster
        if '_' in var:
            override = var[var.rfind('_')+1:]
            if not self._seen_overrides.has_key(override):
                self._seen_overrides[override] = set()
                self._seen_overrides[override] = Set()
            self._seen_overrides[override].add( var )

        # setting var
@@ -171,29 +170,6 @@ class DataSmart:
                return self.expand(value,var)
            return value

    def renameVar(self, key, newkey):
        """
        Rename the variable key to newkey
        """
        val = self.getVar(key, 0)
        if val is not None:
            self.setVar(newkey, val)

        for i in ('_append', '_prepend'):
            src = self.getVarFlag(key, i)
            if src is None:
                continue

            dest = self.getVarFlag(newkey, i) or []
            dest.extend(src)
            self.setVarFlag(newkey, i, dest)

            if self._special_values.has_key(i) and key in self._special_values[i]:
                self._special_values[i].remove(key)
                self._special_values[i].add(newkey)

        self.delVar(key)

    def delVar(self,var):
        self.expand_cache = {}
        self.dict[var] = {}
@@ -224,7 +200,7 @@ class DataSmart:
        if not var in self.dict:
            self._makeShadowCopy(var)

        for i in flags:
        for i in flags.keys():
            if i == "content":
                continue
            self.dict[var][i] = flags[i]
@@ -234,10 +210,10 @@ class DataSmart:
        flags = {}

        if local_var:
            for i in local_var:
            for i in self.dict[var].keys():
                if i == "content":
                    continue
                flags[i] = local_var[i]
                flags[i] = self.dict[var][i]

        if len(flags) == 0:
            return None

@@ -22,20 +22,24 @@ BitBake build tools.
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os, re, sys
import os, re
import bb.data
import bb.utils
import pickle

# This is the pid for which we should generate the event. This is set when
# the runqueue forks off.
worker_pid = 0
worker_pipe = None

class Event:
    """Base class for events"""
    type = "Event"

    def __init__(self):
        self.pid = worker_pid
    def __init__(self, d = bb.data.init()):
        self._data = d

    def getData(self):
        return self._data

    def setData(self, data):
        self._data = data

    data = property(getData, setData, None, "data property")

NotHandled = 0
Handled = 1
@@ -44,95 +48,75 @@ Registered = 10
AlreadyRegistered = 14

# Internal
_handlers = {}
_ui_handlers = {}
_ui_handler_seq = 0
_handlers = []
_handlers_dict = {}

def fire_class_handlers(event, d):
    for handler in _handlers:
        h = _handlers[handler]
        event.data = d
def tmpHandler(event):
    """Default handler for code events"""
    return NotHandled

def defaultTmpHandler():
    tmp = "def tmpHandler(e):\n\t\"\"\"heh\"\"\"\n\treturn NotHandled"
    comp = bb.utils.better_compile(tmp, "tmpHandler(e)", "bb.event.defaultTmpHandler")
    return comp

def fire(event):
    """Fire off an Event"""
    for h in _handlers:
        if type(h).__name__ == "code":
            exec(h)
            tmpHandler(event)
            if tmpHandler(event) == Handled:
                return Handled
        else:
            h(event)
    del event.data

def fire_ui_handlers(event, d):
    errors = []
    for h in _ui_handlers:
        #print "Sending event %s" % event
        try:
            # We use pickle here since it better handles object instances
            # which xmlrpc's marshaller does not. Events *must* be serializable
            # by pickle.
            _ui_handlers[h].event.send((pickle.dumps(event)))
        except:
            errors.append(h)
    for h in errors:
        del _ui_handlers[h]

def fire(event, d):
    """Fire off an Event"""

    # We can fire class handlers in the worker process context and this is
    # desired so they get the task based datastore.
    # UI handlers need to be fired in the server context so we defer this. They
    # don't have a datastore so the datastore context isn't a problem.

    fire_class_handlers(event, d)
    if worker_pid != 0:
        worker_fire(event, d)
    else:
        fire_ui_handlers(event, d)
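
A minimal sketch of the register/fire round trip under the newer API shown above (the handler name and fired event are invented for illustration, and 'd' is an assumed datastore):

    # Hypothetical handler; register() stores it and fire() invokes it.
    def on_event(e):
        print "saw event:", bb.event.getName(e)

    bb.event.register("on_event", on_event)
    bb.event.fire(bb.event.ConfigParsed(), d)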

def worker_fire(event, d):
    data = "<event>" + pickle.dumps(event) + "</event>"
    try:
        if os.write(worker_pipe, data) != len (data):
            print "Error sending event to server (short write)"
    except OSError:
        sys.exit(1)

def fire_from_worker(event, d):
    if not event.startswith("<event>") or not event.endswith("</event>"):
        print "Error, not an event"
        return
    event = pickle.loads(event[7:-8])
    fire_ui_handlers(event, d)
            if h(event) == Handled:
                return Handled
    return NotHandled

def register(name, handler):
    """Register an Event handler"""

    # already registered
    if name in _handlers:
    if name in _handlers_dict:
        return AlreadyRegistered

    if handler is not None:
        # handle string containing python code
        # handle string containing python code
        if type(handler).__name__ == "str":
            tmp = "def tmpHandler(e):\n%s" % handler
            comp = bb.utils.better_compile(tmp, "tmpHandler(e)", "bb.event._registerCode")
            _handlers[name] = comp
            _registerCode(handler)
        else:
            _handlers[name] = handler
            _handlers.append(handler)

        _handlers_dict[name] = 1
        return Registered

def _registerCode(handlerStr):
    """Register a 'code' Event.
    Deprecated interface; call register instead.

    Expects to be passed python code as a string, which will
    be passed in turn to compile() and then exec(). Note that
    the code will be within a function, so should have had
    appropriate tabbing put in place."""
    tmp = "def tmpHandler(e):\n%s" % handlerStr
    comp = bb.utils.better_compile(tmp, "tmpHandler(e)", "bb.event._registerCode")
    # prevent duplicate registration
    _handlers.append(comp)

def remove(name, handler):
    """Remove an Event handler"""
    _handlers.pop(name)

def register_UIHhandler(handler):
    bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
    _ui_handlers[_ui_handler_seq] = handler
    return _ui_handler_seq
    _handlers_dict.pop(name)
    if type(handler).__name__ == "str":
        return _removeCode(handler)
    else:
        _handlers.remove(handler)

def unregister_UIHhandler(handlerNum):
    if handlerNum in _ui_handlers:
        del _ui_handlers[handlerNum]
    return
def _removeCode(handlerStr):
    """Remove a 'code' Event handler
    Deprecated interface; call remove instead."""
    tmp = "def tmpHandler(e):\n%s" % handlerStr
    comp = bb.utils.better_compile(tmp, "tmpHandler(e)", "bb.event._removeCode")
    _handlers.remove(comp)

def getName(e):
    """Returns the name of a class or class instance"""
@@ -141,40 +125,30 @@ def getName(e):
    else:
        return e.__name__

class ConfigParsed(Event):
    """Configuration Parsing Complete"""

class RecipeParsed(Event):
    """ Recipe Parsing Complete """
class PkgBase(Event):
    """Base class for package events"""

    def __init__(self, fn):
        self.fn = fn
        Event.__init__(self)
    def __init__(self, t, d = bb.data.init()):
        self._pkg = t
        Event.__init__(self, d)

class StampUpdate(Event):
    """Trigger for any adjustment of the stamp files to happen"""
    def getPkg(self):
        return self._pkg

    def __init__(self, targets, stampfns):
        self._targets = targets
        self._stampfns = stampfns
        Event.__init__(self)
    def setPkg(self, pkg):
        self._pkg = pkg

    def getStampPrefix(self):
        return self._stampfns
    pkg = property(getPkg, setPkg, None, "pkg property")

    def getTargets(self):
        return self._targets

    stampPrefix = property(getStampPrefix)
    targets = property(getTargets)

class BuildBase(Event):
    """Base class for bbmake run events"""

    def __init__(self, n, p, failures = 0):
    def __init__(self, n, p, c, failures = 0):
        self._name = n
        self._pkgs = p
        Event.__init__(self)
        Event.__init__(self, c)
        self._failures = failures

    def getPkgs(self):
@@ -206,7 +180,32 @@ class BuildBase(Event):
    cfg = property(getCfg, setCfg, None, "cfg property")


class DepBase(PkgBase):
    """Base class for dependency events"""

    def __init__(self, t, data, d):
        self._dep = d
        PkgBase.__init__(self, t, data)

    def getDep(self):
        return self._dep

    def setDep(self, dep):
        self._dep = dep

    dep = property(getDep, setDep, None, "dep property")


class PkgStarted(PkgBase):
    """Package build started"""


class PkgFailed(PkgBase):
    """Package build failed"""


class PkgSucceeded(PkgBase):
    """Package build completed"""


class BuildStarted(BuildBase):
@@ -217,13 +216,18 @@ class BuildCompleted(BuildBase):
    """bbmake build run completed"""


class UnsatisfiedDep(DepBase):
    """Unsatisfied Dependency"""


class RecursiveDep(DepBase):
    """Recursive Dependency"""

class NoProvider(Event):
    """No Provider for an Event"""

    def __init__(self, item, runtime=False):
        Event.__init__(self)
    def __init__(self, item, data,runtime=False):
        Event.__init__(self, data)
        self._item = item
        self._runtime = runtime

@@ -236,8 +240,8 @@ class NoProvider(Event):
class MultipleProviders(Event):
    """Multiple Providers"""

    def __init__(self, item, candidates, runtime = False):
        Event.__init__(self)
    def __init__(self, item, candidates, data, runtime = False):
        Event.__init__(self, data)
        self._item = item
        self._candidates = candidates
        self._is_runtime = runtime
@@ -259,29 +263,3 @@ class MultipleProviders(Event):
        Get the possible Candidates for a PROVIDER.
        """
        return self._candidates

class ParseProgress(Event):
    """
    Parsing Progress Event
    """

    def __init__(self, cached, parsed, skipped, masked, virtuals, errors, total):
        Event.__init__(self)
        self.cached = cached
        self.parsed = parsed
        self.skipped = skipped
        self.virtuals = virtuals
        self.masked = masked
        self.errors = errors
        self.sofar = cached + parsed
        self.total = total

class DepTreeGenerated(Event):
    """
    Event when a dependency tree has been generated
    """

    def __init__(self, depgraph):
        Event.__init__(self)
        self._depgraph = depgraph


@@ -27,10 +27,6 @@ BitBake build tools.
import os, re
import bb
from bb import data
from bb import persist_data

class MalformedUrl(Exception):
    """Exception raised when encountering an invalid url"""

class FetchError(Exception):
    """Exception raised when a download fails"""
@@ -47,106 +43,6 @@ class ParameterError(Exception):
class MD5SumError(Exception):
    """Exception raised when a MD5SUM of a file does not match the expected one"""

class InvalidSRCREV(Exception):
    """Exception raised when an invalid SRCREV is encountered"""

def decodeurl(url):
    """Decodes an URL into the tokens (scheme, network location, path,
    user, password, parameters).

    >>> decodeurl("http://www.google.com/index.html")
    ('http', 'www.google.com', '/index.html', '', '', {})

    >>> decodeurl("file://gas/COPYING")
    ('file', '', 'gas/COPYING', '', '', {})

    CVS url with username, host and cvsroot. The cvs module to check out is in the
    parameters:

    >>> decodeurl("cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg")
    ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', '', {'module': 'familiar/dist/ipkg'})

    Ditto, but this time the username has a password part. And we also request a special tag
    to check out.

    >>> decodeurl("cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;module=familiar/dist/ipkg;tag=V0-99-81")
    ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', 'anonymous', {'tag': 'V0-99-81', 'module': 'familiar/dist/ipkg'})
    """

    m = re.compile('(?P<type>[^:]*)://((?P<user>.+)@)?(?P<location>[^;]+)(;(?P<parm>.*))?').match(url)
    if not m:
        raise MalformedUrl(url)

    type = m.group('type')
    location = m.group('location')
    if not location:
        raise MalformedUrl(url)
    user = m.group('user')
    parm = m.group('parm')

    locidx = location.find('/')
    if locidx != -1 and type.lower() != 'file':
        host = location[:locidx]
        path = location[locidx:]
    else:
        host = ""
        path = location
    if user:
        m = re.compile('(?P<user>[^:]+)(:?(?P<pswd>.*))').match(user)
        if m:
            user = m.group('user')
            pswd = m.group('pswd')
    else:
        user = ''
        pswd = ''

    p = {}
    if parm:
        for s in parm.split(';'):
            s1,s2 = s.split('=')
            p[s1] = s2

    return (type, host, path, user, pswd, p)

def encodeurl(decoded):
    """Encodes a URL from tokens (scheme, network location, path,
    user, password, parameters).

    >>> encodeurl(['http', 'www.google.com', '/index.html', '', '', {}])
    'http://www.google.com/index.html'

    CVS with username, host and cvsroot. The cvs module to check out is in the
    parameters:

    >>> encodeurl(['cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', '', {'module': 'familiar/dist/ipkg'}])
    'cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg'

    Ditto, but this time the username has a password part. And we also request a special tag
    to check out.

    >>> encodeurl(['cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', 'anonymous', {'tag': 'V0-99-81', 'module': 'familiar/dist/ipkg'}])
    'cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg'
    """

    (type, host, path, user, pswd, p) = decoded

    if not type or not path:
        bb.msg.fatal(bb.msg.domain.Fetcher, "invalid or missing parameters for url encoding")
    url = '%s://' % type
    if user:
        url += "%s" % user
        if pswd:
            url += ":%s" % pswd
        url += "@"
    if host:
        url += "%s" % host
    url += "%s" % path
    if p:
        for parm in p:
            url += ";%s=%s" % (parm, p[parm])

    return url

def uri_replace(uri, uri_find, uri_replace, d):
    # bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: operating on %s" % uri)
    if not uri or not uri_find or not uri_replace:
@@ -160,6 +56,7 @@ def uri_replace(uri, uri_find, uri_replace, d):
            result_decoded[loc] = uri_decoded[loc]
            import types
            if type(i) == types.StringType:
                import re
                if (re.match(i, uri_decoded[loc])):
                    result_decoded[loc] = re.sub(i, uri_replace_decoded[loc], uri_decoded[loc])
                    if uri_find_decoded.index(i) == 2:
@@ -172,388 +69,83 @@ def uri_replace(uri, uri_find, uri_replace, d):
                # bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: no match")
                return uri
#            else:
#                for j in i:
#                for j in i.keys():
#        FIXME: apply replacements against options
    return bb.encodeurl(result_decoded)

methods = []
urldata_cache = {}
saved_headrevs = {}
urldata = {}

def fetcher_init(d):
    """
    Called to initialize the fetchers once the configuration data is known
    Calls before this must not hit the cache.
    """
    pd = persist_data.PersistData(d)
    # When to drop SCM head revisions controlled by user policy
    srcrev_policy = bb.data.getVar('BB_SRCREV_POLICY', d, 1) or "clear"
    if srcrev_policy == "cache":
        bb.msg.debug(1, bb.msg.domain.Fetcher, "Keeping SRCREV cache due to cache policy of: %s" % srcrev_policy)
    elif srcrev_policy == "clear":
        bb.msg.debug(1, bb.msg.domain.Fetcher, "Clearing SRCREV cache due to cache policy of: %s" % srcrev_policy)
        try:
            bb.fetch.saved_headrevs = pd.getKeyValues("BB_URI_HEADREVS")
        except:
            pass
        pd.delDomain("BB_URI_HEADREVS")
    else:
        bb.msg.fatal(bb.msg.domain.Fetcher, "Invalid SRCREV cache policy of: %s" % srcrev_policy)
def init(urls = [], d = None):
    if d == None:
        bb.msg.debug(2, bb.msg.domain.Fetcher, "BUG init called with None as data object!!!")
        return

    for m in methods:
        if hasattr(m, "init"):
            m.init(d)

    # Make sure our domains exist
    pd.addDomain("BB_URI_HEADREVS")
    pd.addDomain("BB_URI_LOCALCOUNT")

def fetcher_compare_revisons(d):
    """
    Compare the revisions in the persistent cache with current values and
    return true/false on whether they've changed.
    """

    pd = persist_data.PersistData(d)
    data = pd.getKeyValues("BB_URI_HEADREVS")
    data2 = bb.fetch.saved_headrevs

    changed = False
    for key in data:
        if key not in data2 or data2[key] != data[key]:
            bb.msg.debug(1, bb.msg.domain.Fetcher, "%s changed" % key)
            changed = True
            return True
        else:
            bb.msg.debug(2, bb.msg.domain.Fetcher, "%s did not change" % key)
    return False

# Function call order is usually:
#   1. init
#   2. go
#   3. localpaths
# localpath can be called at any time

def init(urls, d, setup = True):
    urldata = {}
    fn = bb.data.getVar('FILE', d, 1)
    if fn in urldata_cache:
        urldata = urldata_cache[fn]

    for url in urls:
        if url not in urldata:
            urldata[url] = FetchData(url, d)

    if setup:
        for url in urldata:
            if not urldata[url].setup:
                urldata[url].setup_localpath(d)

    urldata_cache[fn] = urldata
    return urldata

def go(d, urls = None):
    """
    Fetch all urls
    init must have previously been called
    """
    if not urls:
        urls = d.getVar("SRC_URI", 1).split()
    urldata = init(urls, d, True)
        m.urls = []

    for u in urls:
        ud = urldata[u]
        m = ud.method
        if ud.localfile:
            if not m.forcefetch(u, ud, d) and os.path.exists(ud.md5):
        ud = initdata(u, d)
        if ud.method:
            ud.method.urls.append(u)

def initdata(url, d):
    fn = bb.data.getVar('FILE', d, 1)
    if fn not in urldata:
        urldata[fn] = {}
    if url not in urldata[fn]:
        ud = FetchData()
        (ud.type, ud.host, ud.path, ud.user, ud.pswd, ud.parm) = bb.decodeurl(data.expand(url, d))
        ud.date = Fetch.getSRCDate(ud, d)
        for m in methods:
            if m.supports(url, ud, d):
                ud.localpath = m.localpath(url, ud, d)
                ud.md5 = ud.localpath + '.md5'
                # if user sets localpath for file, use it instead.
                if "localpath" in ud.parm:
                    ud.localpath = ud.parm["localpath"]
                ud.method = m
                break
        urldata[fn][url] = ud
    return urldata[fn][url]

def go(d):
    """Fetch all urls"""
    fn = bb.data.getVar('FILE', d, 1)
    for m in methods:
        for u in m.urls:
            ud = urldata[fn][u]
            if ud.localfile and not m.forcefetch(u, ud, d) and os.path.exists(urldata[fn][u].md5):
                # File already present along with md5 stamp file
                # Touch md5 file to show activity
                try:
                    os.utime(ud.md5, None)
                except:
                    # Errors aren't fatal here
                    pass
                os.utime(ud.md5, None)
                continue
            lf = bb.utils.lockfile(ud.lockfile)
            if not m.forcefetch(u, ud, d) and os.path.exists(ud.md5):
                # If someone else fetched this before we got the lock,
                # notice and don't try again
                try:
                    os.utime(ud.md5, None)
                except:
                    # Errors aren't fatal here
                    pass
                bb.utils.unlockfile(lf)
                continue

            # First try fetching uri, u, from PREMIRRORS
            mirrors = [ i.split() for i in (bb.data.getVar('PREMIRRORS', d, 1) or "").split('\n') if i ]
            localpath = try_mirrors(d, u, mirrors)
            if not localpath:
                # Next try fetching from the original uri, u
                try:
                    m.go(u, ud, d)
                    localpath = ud.localpath
                except:
                    # Finally, try fetching uri, u, from MIRRORS
                    mirrors = [ i.split() for i in (bb.data.getVar('MIRRORS', d, 1) or "").split('\n') if i ]
                    localpath = try_mirrors (d, u, mirrors)

            if localpath:
                ud.localpath = localpath

            if ud.localfile:
                if not m.forcefetch(u, ud, d):
                    # RP - is olddir needed?
                    # olddir = os.path.abspath(os.getcwd())
                    m.go(u, ud , d)
                    # os.chdir(olddir)
                if ud.localfile and not m.forcefetch(u, ud, d):
                    Fetch.write_md5sum(u, ud, d)
            bb.utils.unlockfile(lf)


def checkstatus(d):
    """
    Check all urls exist upstream
    init must have previously been called
    """
    urldata = init([], d, True)

    for u in urldata:
        ud = urldata[u]
        m = ud.method
        bb.msg.note(1, bb.msg.domain.Fetcher, "Testing URL %s" % u)
        # First try checking uri, u, from PREMIRRORS
        mirrors = [ i.split() for i in (bb.data.getVar('PREMIRRORS', d, 1) or "").split('\n') if i ]
        ret = try_mirrors(d, u, mirrors, True)
        if not ret:
            # Next try checking from the original uri, u
            try:
                ret = m.checkstatus(u, ud, d)
            except:
                # Finally, try checking uri, u, from MIRRORS
                mirrors = [ i.split() for i in (bb.data.getVar('MIRRORS', d, 1) or "").split('\n') if i ]
                ret = try_mirrors (d, u, mirrors, True)

        if not ret:
            bb.msg.error(bb.msg.domain.Fetcher, "URL %s doesn't work" % u)

def localpaths(d):
    """
    Return a list of the local filenames, assuming successful fetch
    """
    """Return a list of the local filenames, assuming successful fetch"""
    local = []
    urldata = init([], d, True)

    for u in urldata:
        ud = urldata[u]
        local.append(ud.localpath)

    fn = bb.data.getVar('FILE', d, 1)
    for m in methods:
        for u in m.urls:
            local.append(urldata[fn][u].localpath)
    return local

srcrev_internal_call = False

def get_srcrev(d):
    """
    Return the version string for the current package
    (usually to be used as PV)
    Most packages usually only have one SCM so we just pass on the call.
    In the multi SCM case, we build a value based on SRCREV_FORMAT which must
    have been set.
    """

    #
    # Ugly code alert. localpath in the fetchers will try to evaluate SRCREV which
    # could translate into a call to here. If it does, we need to catch this
    # and provide some way so it knows get_srcrev is active instead of being
    # some number etc. hence the srcrev_internal_call tracking and the magic
    # "SRCREVINACTION" return value.
    #
    # Neater solutions welcome!
    #
    if bb.fetch.srcrev_internal_call:
        return "SRCREVINACTION"

    scms = []

    # Only call setup_localpath on URIs which suppports_srcrev()
    urldata = init(bb.data.getVar('SRC_URI', d, 1).split(), d, False)
    for u in urldata:
        ud = urldata[u]
        if ud.method.suppports_srcrev():
            if not ud.setup:
                ud.setup_localpath(d)
            scms.append(u)

    if len(scms) == 0:
        bb.msg.error(bb.msg.domain.Fetcher, "SRCREV was used yet no valid SCM was found in SRC_URI")
        raise ParameterError

    bb.data.setVar('__BB_DONT_CACHE','1', d)

    if len(scms) == 1:
        return urldata[scms[0]].method.sortable_revision(scms[0], urldata[scms[0]], d)

    #
    # Multiple SCMs are in SRC_URI so we resort to SRCREV_FORMAT
    #
    format = bb.data.getVar('SRCREV_FORMAT', d, 1)
    if not format:
        bb.msg.error(bb.msg.domain.Fetcher, "The SRCREV_FORMAT variable must be set when multiple SCMs are used.")
        raise ParameterError

    for scm in scms:
        if 'name' in urldata[scm].parm:
            name = urldata[scm].parm["name"]
            rev = urldata[scm].method.sortable_revision(scm, urldata[scm], d)
            format = format.replace(name, rev)

    return format
|
||||
|
||||
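# Worked example for the multi-SCM branch above (recipe values hypothetical):
#   SRC_URI = "git://host/a;name=machine git://host/b;name=meta"
#   SRCREV_FORMAT = "machine_meta"
# Each name token is substituted with that SCM's sortable_revision(), so the
# returned value looks like "1234+<sha1>_56+<sha1>".
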
def localpath(url, d, cache = True):
    """
    Called from the parser with cache=False since the cache isn't ready
    at this point. Also called from classes in OE e.g. patch.bbclass
    """
    ud = init([url], d)
    if ud[url].method:
        return ud[url].localpath

def localpath(url, d):
    ud = initdata(url, d)
    if ud.method:
        return ud.localpath
    return url

def runfetchcmd(cmd, d, quiet = False):
    """
    Run cmd returning the command output
    Raise an error if interrupted or cmd fails
    Optionally echo command output to stdout
    """

    # Need to export PATH as binary could be in metadata paths
    # rather than host provided
    # Also include some other variables.
    # FIXME: Should really include all export variables?
    exportvars = ['PATH', 'GIT_PROXY_COMMAND', 'GIT_PROXY_HOST', 'GIT_PROXY_PORT', 'GIT_CONFIG', 'http_proxy', 'ftp_proxy', 'SSH_AUTH_SOCK', 'SSH_AGENT_PID', 'HOME']

    for var in exportvars:
        val = data.getVar(var, d, True)
        if val:
            cmd = 'export ' + var + '=%s; %s' % (val, cmd)

    bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % cmd)

    # redirect stderr to stdout
    stdout_handle = os.popen(cmd + " 2>&1", "r")
    output = ""

    while 1:
        line = stdout_handle.readline()
        if not line:
            break
        if not quiet:
            print line,
        output += line

    # close() returns a wait()-style status: exit code in the high byte,
    # terminating signal (if any) in the low byte
    status = stdout_handle.close() or 0
    signal = status & 0xff
    exitstatus = status >> 8

    if signal:
        raise FetchError("Fetch command %s failed with signal %s, output:\n%s" % (cmd, signal, output))
    elif status != 0:
        raise FetchError("Fetch command %s failed with exit code %s, output:\n%s" % (cmd, status, output))

    return output

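# Example call (illustrative): run a fetch command quietly and keep its output,
# with PATH and the proxy variables exported as set up above.
#   output = runfetchcmd("git ls-remote git://example.org/repo.git", d, quiet=True)
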
def try_mirrors(d, uri, mirrors, check = False):
    """
    Try to use a mirrored version of the sources.
    This method will be automatically called before the fetchers go.

    d is a bb.data instance
    uri is the original uri we're trying to download
    mirrors is the list of mirrors we're going to try
    """
    fpath = os.path.join(data.getVar("DL_DIR", d, 1), os.path.basename(uri))
    if not check and os.access(fpath, os.R_OK):
        bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists, skipping checkout." % fpath)
        return fpath

    ld = d.createCopy()
    for (find, replace) in mirrors:
        newuri = uri_replace(uri, find, replace, ld)
        if newuri != uri:
            try:
                ud = FetchData(newuri, ld)
            except bb.fetch.NoMethodError:
                bb.msg.debug(1, bb.msg.domain.Fetcher, "No method for %s" % uri)
                continue

            ud.setup_localpath(ld)

            try:
                if check:
                    ud.method.checkstatus(newuri, ud, ld)
                else:
                    ud.method.go(newuri, ud, ld)
                return ud.localpath
            except (bb.fetch.MissingParameterError,
                    bb.fetch.FetchError,
                    bb.fetch.MD5SumError):
                import sys
                (type, value, traceback) = sys.exc_info()
                bb.msg.debug(2, bb.msg.domain.Fetcher, "Mirror fetch failure: %s" % value)
                continue
    return None

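# Illustrative rewrite performed by the loop above (uri_replace() is defined
# elsewhere in this module; values hypothetical): with the mirror pair
#   find    = "git://.*/.*"
#   replace = "http://mirror.example.com/sources/"
# a uri of "git://example.org/proj.git" becomes
# "http://mirror.example.com/sources/proj.git", which is checked or fetched
# in place of the original.
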
class FetchData(object):
    """
    A class which represents the fetcher state for a given URI.
    """
    def __init__(self, url, d):
        """Class for fetcher variable store"""
        self.localfile = ""
        (self.type, self.host, self.path, self.user, self.pswd, self.parm) = bb.decodeurl(data.expand(url, d))
        self.date = Fetch.getSRCDate(self, d)
        self.url = url
        if not self.user and "user" in self.parm:
            self.user = self.parm["user"]
        if not self.pswd and "pswd" in self.parm:
            self.pswd = self.parm["pswd"]
        self.setup = False
        for m in methods:
            if m.supports(url, self, d):
                self.method = m
                return
        raise NoMethodError("Missing implementation for url %s" % url)

    def setup_localpath(self, d):
        self.setup = True
        if "localpath" in self.parm:
            # if user sets localpath for file, use it instead.
            self.localpath = self.parm["localpath"]
        else:
            premirrors = bb.data.getVar('PREMIRRORS', d, True)
            local = ""
            if premirrors and self.url:
                aurl = self.url.split(";")[0]
                mirrors = [ i.split() for i in (premirrors or "").split('\n') if i ]
                for (find, replace) in mirrors:
                    if replace.startswith("file://"):
                        path = aurl.split("://")[1]
                        path = path.split(";")[0]
                        local = replace.split("://")[1] + os.path.basename(path)
                        if local == aurl or not os.path.exists(local) or os.path.isdir(local):
                            local = ""
                self.localpath = local
            if not local:
                try:
                    bb.fetch.srcrev_internal_call = True
                    self.localpath = self.method.localpath(self.url, self, d)
                finally:
                    bb.fetch.srcrev_internal_call = False
                # We have to clear data's internal caches since the cached value of SRCREV is now wrong.
                # Horrible...
                bb.data.delVar("ISHOULDNEVEREXIST", d)

        # Note: These files should always be in DL_DIR whereas localpath may not be.
        basepath = bb.data.expand("${DL_DIR}/%s" % os.path.basename(self.localpath), d)
        self.md5 = basepath + '.md5'
        self.lockfile = basepath + '.lock'

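# Illustrative decodeurl() result feeding FetchData above (url hypothetical):
#   bb.decodeurl("svn://user@example.org/trunk;module=proj;rev=42")
# yields roughly
#   ('svn', 'example.org', '/trunk', 'user', '', {'module': 'proj', 'rev': '42'})
# i.e. (type, host, path, user, pswd, parm) as unpacked in __init__.
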
class Fetch(object):
@@ -590,12 +182,6 @@ class Fetch(object):
        """
        return False

    def suppports_srcrev(self):
        """
        The fetcher supports auto source revisions (SRCREV)
        """
        return False

    def go(self, url, urldata, d):
        """
        Fetch urls
@@ -603,14 +189,6 @@ class Fetch(object):
        """
        raise NoMethodError("Missing implementation for url")

    def checkstatus(self, url, urldata, d):
        """
        Check the status of a URL
        Assumes localpath was called first
        """
        bb.msg.note(1, bb.msg.domain.Fetcher, "URL %s could not be checked for status since no method exists." % url)
        return True

    def getSRCDate(urldata, d):
        """
        Return the SRC Date for the component
@@ -623,57 +201,42 @@ class Fetch(object):
        pn = data.getVar("PN", d, 1)

        if pn:
            return data.getVar("SRCDATE_%s" % pn, d, 1) or data.getVar("CVSDATE_%s" % pn, d, 1) or data.getVar("SRCDATE", d, 1) or data.getVar("CVSDATE", d, 1) or data.getVar("DATE", d, 1)
            return data.getVar("SRCDATE_%s" % pn, d, 1) or data.getVar("CVSDATE_%s" % pn, d, 1) or data.getVar("DATE", d, 1)

        return data.getVar("SRCDATE", d, 1) or data.getVar("CVSDATE", d, 1) or data.getVar("DATE", d, 1)
    getSRCDate = staticmethod(getSRCDate)

    def srcrev_internal_helper(ud, d):
    def try_mirror(d, tarfn):
        """
        Return:
            a) a source revision if specified
            b) True if auto srcrev is in action
            c) False otherwise
        Try to use a mirrored version of the sources. We do this
        to avoid massive loads on foreign cvs and svn servers.
        This method will be used by the different fetcher
        implementations.

        d is a bb.data instance
        tarfn is the name of the tarball
        """

        if 'rev' in ud.parm:
            return ud.parm['rev']

        if 'tag' in ud.parm:
            return ud.parm['tag']

        rev = None
        if 'name' in ud.parm:
            pn = data.getVar("PN", d, 1)
            rev = data.getVar("SRCREV_pn-" + pn + "_" + ud.parm['name'], d, 1)
        if not rev:
            rev = data.getVar("SRCREV", d, 1)
        if rev == "INVALID":
            raise InvalidSRCREV("Please set SRCREV to a valid value")
        if not rev:
            return False
        if rev == "SRCREVINACTION":
            tarpath = os.path.join(data.getVar("DL_DIR", d, 1), tarfn)
            if os.access(tarpath, os.R_OK):
                bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists, skipping checkout." % tarfn)
            return True
        return rev

    srcrev_internal_helper = staticmethod(srcrev_internal_helper)
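    # Resolution order implemented by srcrev_internal_helper() above (PN value
    # hypothetical): the url parameters rev=/tag= win, then SRCREV_pn-<PN>_<name>
    # when a name= parameter is present, then plain SRCREV. A SRCREV that
    # expands through get_srcrev() comes back as the magic "SRCREVINACTION"
    # string, which the helper reports as True ("fetch the latest upstream
    # revision").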
        pn = data.getVar('PN', d, True)
        src_tarball_stash = None
        if pn:
            src_tarball_stash = (data.getVar('SRC_TARBALL_STASH_%s' % pn, d, True) or data.getVar('CVS_TARBALL_STASH_%s' % pn, d, True) or data.getVar('SRC_TARBALL_STASH', d, True) or data.getVar('CVS_TARBALL_STASH', d, True) or "").split()

    def localcount_internal_helper(ud, d):
        """
        Return:
            a) a locked localcount if specified
            b) None otherwise
        """

        localcount = None
        if 'name' in ud.parm:
            pn = data.getVar("PN", d, 1)
            localcount = data.getVar("LOCALCOUNT_" + ud.parm['name'], d, 1)
        if not localcount:
            localcount = data.getVar("LOCALCOUNT", d, 1)
        return localcount

    localcount_internal_helper = staticmethod(localcount_internal_helper)
        for stash in src_tarball_stash:
            fetchcmd = data.getVar("FETCHCOMMAND_mirror", d, True) or data.getVar("FETCHCOMMAND_wget", d, True)
            uri = stash + tarfn
            bb.msg.note(1, bb.msg.domain.Fetcher, "fetch " + uri)
            fetchcmd = fetchcmd.replace("${URI}", uri)
            ret = os.system(fetchcmd)
            if ret == 0:
                bb.msg.note(1, bb.msg.domain.Fetcher, "Fetched %s from tarball stash, skipping checkout" % tarfn)
                return True
        return False
    try_mirror = staticmethod(try_mirror)

    def verify_md5sum(ud, got_sum):
        """
@@ -689,7 +252,14 @@ class Fetch(object):
    verify_md5sum = staticmethod(verify_md5sum)

    def write_md5sum(url, ud, d):
        md5data = bb.utils.md5_file(ud.localpath)
        if bb.which(data.getVar('PATH', d), 'md5sum'):
            try:
                md5pipe = os.popen('md5sum ' + ud.localpath)
                md5data = (md5pipe.readline().split() or [ "" ])[0]
                md5pipe.close()
            except OSError:
                md5data = ""

        # verify the md5sum
        if not Fetch.verify_md5sum(ud, md5data):
            raise MD5SumError(url)
@@ -699,65 +269,6 @@ class Fetch(object):
        md5out.close()
    write_md5sum = staticmethod(write_md5sum)

    def latest_revision(self, url, ud, d):
        """
        Look in the cache for the latest revision, if not present ask the SCM.
        """
        if not hasattr(self, "_latest_revision"):
            raise ParameterError

        pd = persist_data.PersistData(d)
        key = self.generate_revision_key(url, ud, d)
        rev = pd.getValue("BB_URI_HEADREVS", key)
        if rev != None:
            return str(rev)

        rev = self._latest_revision(url, ud, d)
        pd.setValue("BB_URI_HEADREVS", key, rev)
        return rev

    def sortable_revision(self, url, ud, d):
        """
        Return a sortable revision string for the url, built from a locally
        maintained count plus the latest upstream revision.
        """
        if hasattr(self, "_sortable_revision"):
            return self._sortable_revision(url, ud, d)

        pd = persist_data.PersistData(d)
        key = self.generate_revision_key(url, ud, d)

        latest_rev = self._build_revision(url, ud, d)
        last_rev = pd.getValue("BB_URI_LOCALCOUNT", key + "_rev")
        uselocalcount = bb.data.getVar("BB_LOCALCOUNT_OVERRIDE", d, True) or False
        count = None
        if uselocalcount:
            count = Fetch.localcount_internal_helper(ud, d)
        if count is None:
            count = pd.getValue("BB_URI_LOCALCOUNT", key + "_count")

        if last_rev == latest_rev:
            return str(count + "+" + latest_rev)

        buildindex_provided = hasattr(self, "_sortable_buildindex")
        if buildindex_provided:
            count = self._sortable_buildindex(url, ud, d, latest_rev)

        if count is None:
            count = "0"
        elif uselocalcount or buildindex_provided:
            count = str(count)
        else:
            count = str(int(count) + 1)

        pd.setValue("BB_URI_LOCALCOUNT", key + "_rev", latest_rev)
        pd.setValue("BB_URI_LOCALCOUNT", key + "_count", count)

        return str(count + "+" + latest_rev)

    def generate_revision_key(self, url, ud, d):
        key = self._revision_key(url, ud, d)
        return "%s-%s" % (key, bb.data.getVar("PN", d, True) or "")

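# Illustrative sortable_revision() output (values hypothetical): with a cached
# local count of 12 and a newly fetched upstream head "a1b2c3d", the method
# persists count 13 and returns "13+a1b2c3d", so version strings built from it
# keep sorting upwards even if upstream history is rewritten.
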
import cvs
import git
import local
@@ -766,20 +277,12 @@ import wget
import svk
import ssh
import perforce
import bzr
import hg
import osc
import repo

methods.append(local.Local())
methods.append(wget.Wget())
methods.append(svn.Svn())
methods.append(git.Git())
methods.append(cvs.Cvs())
methods.append(git.Git())
methods.append(local.Local())
methods.append(svn.Svn())
methods.append(wget.Wget())
methods.append(svk.Svk())
methods.append(ssh.SSH())
methods.append(perforce.Perforce())
methods.append(bzr.Bzr())
methods.append(hg.Hg())
methods.append(osc.Osc())
methods.append(repo.Repo())

@@ -1,148 +0,0 @@
"""
BitBake 'Fetch' implementation for bzr.

"""

# Copyright (C) 2007 Ross Burton
# Copyright (C) 2007 Richard Purdie
#
# Classes for obtaining upstream sources for the
# BitBake build tools.
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os
import sys
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import runfetchcmd

class Bzr(Fetch):
    def supports(self, url, ud, d):
        return ud.type in ['bzr']

    def localpath(self, url, ud, d):

        # Create paths to bzr checkouts
        relpath = ud.path
        if relpath.startswith('/'):
            # Remove leading slash as os.path.join can't cope
            relpath = relpath[1:]
        ud.pkgdir = os.path.join(data.expand('${BZRDIR}', d), ud.host, relpath)

        revision = Fetch.srcrev_internal_helper(ud, d)
        if revision is True:
            ud.revision = self.latest_revision(url, ud, d)
        elif revision:
            ud.revision = revision

        if not ud.revision:
            ud.revision = self.latest_revision(url, ud, d)

        ud.localfile = data.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def _buildbzrcommand(self, ud, d, command):
        """
        Build up a bzr commandline based on ud
        command is "fetch", "update", "revno"
        """

        basecmd = data.expand('${FETCHCMD_bzr}', d)

        proto = "http"
        if "proto" in ud.parm:
            proto = ud.parm["proto"]

        bzrroot = ud.host + ud.path

        options = []

        if command == "revno":
            bzrcmd = "%s revno %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
        else:
            if ud.revision:
                options.append("-r %s" % ud.revision)

            if command == "fetch":
                bzrcmd = "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
            elif command == "update":
                bzrcmd = "%s pull %s --overwrite" % (basecmd, " ".join(options))
            else:
                raise FetchError("Invalid bzr command %s" % command)

        return bzrcmd

    def go(self, loc, ud, d):
        """Fetch url"""

        if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
            bzrcmd = self._buildbzrcommand(ud, d, "update")
            bb.msg.debug(1, bb.msg.domain.Fetcher, "BZR Update %s" % loc)
            os.chdir(os.path.join(ud.pkgdir, os.path.basename(ud.path)))
            runfetchcmd(bzrcmd, d)
        else:
            os.system("rm -rf %s" % os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)))
            bzrcmd = self._buildbzrcommand(ud, d, "fetch")
            bb.msg.debug(1, bb.msg.domain.Fetcher, "BZR Checkout %s" % loc)
            bb.mkdirhier(ud.pkgdir)
            os.chdir(ud.pkgdir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % bzrcmd)
            runfetchcmd(bzrcmd, d)

        os.chdir(ud.pkgdir)
        # tar them up to a defined filename
        try:
            runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.basename(ud.pkgdir)), d)
        except:
            t, v, tb = sys.exc_info()
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise t, v, tb

    def suppports_srcrev(self):
        return True

    def _revision_key(self, url, ud, d):
        """
        Return a unique key for the url
        """
        return "bzr:" + ud.pkgdir

    def _latest_revision(self, url, ud, d):
        """
        Return the latest upstream revision number
        """
        bb.msg.debug(2, bb.msg.domain.Fetcher, "BZR fetcher hitting network for %s" % url)

        output = runfetchcmd(self._buildbzrcommand(ud, d, "revno"), d, True)

        return output.strip()

    def _sortable_revision(self, url, ud, d):
        """
        Return a sortable revision number which in our case is the revision number
        """

        return self._build_revision(url, ud, d)

    def _build_revision(self, url, ud, d):
        return ud.revision

@@ -26,7 +26,7 @@ BitBake build tools.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
#

import os
import os, re
import bb
from bb import data
from bb.fetch import Fetch
@@ -41,7 +41,7 @@ class Cvs(Fetch):
        """
        Check to see if a given url can be fetched with cvs.
        """
        return ud.type in ['cvs']
        return ud.type in ['cvs', 'pserver']

    def localpath(self, url, ud, d):
        if not "module" in ud.parm:
@@ -58,15 +58,7 @@ class Cvs(Fetch):
        elif ud.tag:
            ud.date = ""

        norecurse = ''
        if 'norecurse' in ud.parm:
            norecurse = '_norecurse'

        fullpath = ''
        if 'fullpath' in ud.parm:
            fullpath = '_fullpath'

        ud.localfile = data.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath), d)
        ud.localfile = data.expand('%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

@@ -77,6 +69,11 @@ class Cvs(Fetch):

    def go(self, loc, ud, d):

        # try to use the tarball stash
        if not self.forcefetch(loc, ud, d) and Fetch.try_mirror(d, ud.localfile):
            bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping cvs checkout." % ud.localpath)
            return

        method = "pserver"
        if "method" in ud.parm:
            method = ud.parm["method"]
@@ -97,27 +94,14 @@ class Cvs(Fetch):
        if method == "dir":
            cvsroot = ud.path
        else:
            cvsroot = ":" + method
            cvsproxyhost = data.getVar('CVS_PROXY_HOST', d, True)
            if cvsproxyhost:
                cvsroot += ";proxy=" + cvsproxyhost
            cvsproxyport = data.getVar('CVS_PROXY_PORT', d, True)
            if cvsproxyport:
                cvsroot += ";proxyport=" + cvsproxyport
            cvsroot += ":" + ud.user
            cvsroot = ":" + method + ":" + ud.user
            if ud.pswd:
                cvsroot += ":" + ud.pswd
            cvsroot += "@" + ud.host + ":" + cvs_port + ud.path

        options = []
        if 'norecurse' in ud.parm:
            options.append("-l")
        if ud.date:
            # treat YYYYMMDDHHMM specially for CVS
            if len(ud.date) == 12:
                options.append("-D \"%s %s:%s UTC\"" % (ud.date[0:8], ud.date[8:10], ud.date[10:12]))
            else:
                options.append("-D \"%s UTC\"" % ud.date)
            options.append("-D %s" % ud.date)
        if ud.tag:
            options.append("-r %s" % ud.tag)

@@ -160,15 +144,10 @@ class Cvs(Fetch):
                pass
            raise FetchError(ud.module)

        os.chdir(moddir)
        os.chdir('..')
        # tar them up to a defined filename
        if 'fullpath' in ud.parm:
            os.chdir(pkgdir)
            myret = os.system("tar -czf %s %s" % (ud.localpath, localdir))
        else:
            os.chdir(moddir)
            os.chdir('..')
            myret = os.system("tar -czf %s %s" % (ud.localpath, os.path.basename(moddir)))

        myret = os.system("tar -czf %s %s" % (ud.localpath, os.path.basename(moddir)))
        if myret != 0:
            try:
                os.unlink(ud.localpath)

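# Worked example of the YYYYMMDDHHMM special case above (date hypothetical):
# ud.date = "200904211030" (12 characters) produces the checkout option
#   -D "20090421 10:30 UTC"
# while a shorter stamp such as "20090421" falls through to -D "20090421 UTC".
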
@@ -20,20 +20,36 @@ BitBake 'Fetch' git implementation
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os
import os, re
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import runfetchcmd
from bb.fetch import FetchError

def prunedir(topdir):
    # Delete everything reachable from the directory named in 'topdir'.
    # CAUTION: This is dangerous!
    for root, dirs, files in os.walk(topdir, topdown=False):
        for name in files:
            os.remove(os.path.join(root, name))
        for name in dirs:
            os.rmdir(os.path.join(root, name))

def rungitcmd(cmd, d):

    bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % cmd)

    # Need to export PATH as git is likely to be in metadata paths
    # rather than host provided
    pathcmd = 'export PATH=%s; %s' % (data.expand('${PATH}', d), cmd)

    myret = os.system(pathcmd)

    if myret != 0:
        raise FetchError("Git: %s failed" % pathcmd)

class Git(Fetch):
    """Class to fetch a module or modules from git repositories"""
    def init(self, d):
        #
        # Only enable _sortable revision if the key is set
        #
        if bb.data.getVar("BB_GIT_CLONE_FOR_SRCREV", d, True):
            self._sortable_buildindex = self._sortable_buildindex_disabled
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with git.
@@ -42,176 +58,70 @@ class Git(Fetch):

    def localpath(self, url, ud, d):

        ud.proto = "rsync"
        if 'protocol' in ud.parm:
            ud.proto = ud.parm['protocol']
        elif not ud.host:
            ud.proto = 'file'
        else:
            ud.proto = "rsync"

        ud.branch = ud.parm.get("branch", "master")
        ud.tag = "master"
        if 'tag' in ud.parm:
            ud.tag = ud.parm['tag']

        gitsrcname = '%s%s' % (ud.host, ud.path.replace('/', '.'))
        ud.mirrortarball = 'git_%s.tar.gz' % (gitsrcname)
        ud.clonedir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)

        tag = Fetch.srcrev_internal_helper(ud, d)
        if tag is True:
            ud.tag = self.latest_revision(url, ud, d)
        elif tag:
            ud.tag = tag

        if not ud.tag or ud.tag == "master":
            ud.tag = self.latest_revision(url, ud, d)

        subdir = ud.parm.get("subpath", "")
        if subdir != "":
            if subdir.endswith("/"):
                subdir = subdir[:-1]
            subdirpath = os.path.join(ud.path, subdir)
        else:
            subdirpath = ud.path

        if 'fullclone' in ud.parm:
            ud.localfile = ud.mirrortarball
        else:
            ud.localfile = data.expand('git_%s%s_%s.tar.gz' % (ud.host, subdirpath.replace('/', '.'), ud.tag), d)

        ud.basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
        ud.localfile = data.expand('git_%s%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.tag), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

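    # Illustrative git SRC_URI for the parameters handled above (values
    # hypothetical): protocol=, tag=, subpath= and fullclone= are all read
    # from ud.parm, e.g.
    #   SRC_URI = "git://example.org/proj.git;protocol=http;tag=v1.0;subpath=src"
    # which yields a localfile of git_example.org.proj.git.src_v1.0.tar.gz.
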
    def forcefetch(self, url, ud, d):
        # tag=="master" must always update
        if (ud.tag == "master"):
            return True
        return False

    def go(self, loc, ud, d):
        """Fetch url"""

        if ud.user:
            username = ud.user + '@'
        else:
            username = ""

        repofile = os.path.join(data.getVar("DL_DIR", d, 1), ud.mirrortarball)

        coname = '%s' % (ud.tag)
        codir = os.path.join(ud.clonedir, coname)

        if not os.path.exists(ud.clonedir):
            try:
                Fetch.try_mirrors(ud.mirrortarball)
                bb.mkdirhier(ud.clonedir)
                os.chdir(ud.clonedir)
                runfetchcmd("tar -xzf %s" % (repofile), d)
            except:
                runfetchcmd("%s clone -n %s://%s%s%s %s" % (ud.basecmd, ud.proto, username, ud.host, ud.path, ud.clonedir), d)

        os.chdir(ud.clonedir)
        # Remove all but the .git directory
        if not self._contains_ref(ud.tag, d):
            runfetchcmd("rm * -Rf", d)
            runfetchcmd("%s fetch %s://%s%s%s %s" % (ud.basecmd, ud.proto, username, ud.host, ud.path, ud.branch), d)
            runfetchcmd("%s fetch --tags %s://%s%s%s" % (ud.basecmd, ud.proto, username, ud.host, ud.path), d)
            runfetchcmd("%s prune-packed" % ud.basecmd, d)
            runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)

        os.chdir(ud.clonedir)
        mirror_tarballs = data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True)
        if mirror_tarballs != "0" or 'fullclone' in ud.parm:
            bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git repository")
            runfetchcmd("tar -czf %s %s" % (repofile, os.path.join(".", ".git", "*") ), d)

        if 'fullclone' in ud.parm:
            if not self.forcefetch(loc, ud, d) and Fetch.try_mirror(d, ud.localfile):
                bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists (or was stashed). Skipping git checkout." % ud.localpath)
                return

        if os.path.exists(codir):
            bb.utils.prunedir(codir)
        gitsrcname = '%s%s' % (ud.host, ud.path.replace('/', '.'))

        subdir = ud.parm.get("subpath", "")
        if subdir != "":
            if subdir.endswith("/"):
                subdirbase = os.path.basename(subdir[:-1])
        repofilename = 'git_%s.tar.gz' % (gitsrcname)
        repofile = os.path.join(data.getVar("DL_DIR", d, 1), repofilename)
        repodir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)

        coname = '%s' % (ud.tag)
        codir = os.path.join(repodir, coname)

        if not os.path.exists(repodir):
            if Fetch.try_mirror(d, repofilename):
                bb.mkdirhier(repodir)
                os.chdir(repodir)
                rungitcmd("tar -xzf %s" % (repofile), d)
            else:
                subdirbase = os.path.basename(subdir)
        else:
            subdirbase = ""
            rungitcmd("git clone -n %s://%s%s %s" % (ud.proto, ud.host, ud.path, repodir), d)

        if subdir != "":
            readpathspec = ":%s" % (subdir)
            codir = os.path.join(codir, "git")
            coprefix = os.path.join(codir, subdirbase, "")
        else:
            readpathspec = ""
            coprefix = os.path.join(codir, "git", "")
        os.chdir(repodir)
        rungitcmd("git pull %s://%s%s" % (ud.proto, ud.host, ud.path), d)
        rungitcmd("git pull --tags %s://%s%s" % (ud.proto, ud.host, ud.path), d)
        rungitcmd("git prune-packed", d)
        rungitcmd("git pack-redundant --all | xargs -r rm", d)
        # Remove all but the .git directory
        rungitcmd("rm * -Rf", d)
        # old method of downloading tags
        #rungitcmd("rsync -a --verbose --stats --progress rsync://%s%s/ %s" % (ud.host, ud.path, os.path.join(repodir, ".git", "")), d)

        os.chdir(repodir)
        bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git repository")
        rungitcmd("tar -czf %s %s" % (repofile, os.path.join(".", ".git", "*") ), d)

        if os.path.exists(codir):
            prunedir(codir)

        bb.mkdirhier(codir)
        os.chdir(ud.clonedir)
        runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.tag, readpathspec), d)
        runfetchcmd("%s checkout-index -q -f --prefix=%s -a" % (ud.basecmd, coprefix), d)
        os.chdir(repodir)
        rungitcmd("git read-tree %s" % (ud.tag), d)
        rungitcmd("git checkout-index -q -f --prefix=%s -a" % (os.path.join(codir, "git", "")), d)

        os.chdir(codir)
        bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git checkout")
        runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.join(".", "*") ), d)

        os.chdir(ud.clonedir)
        bb.utils.prunedir(codir)

    def suppports_srcrev(self):
        return True

    def _contains_ref(self, tag, d):
        basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
        output = runfetchcmd("%s log --pretty=oneline -n 1 %s -- 2> /dev/null | wc -l" % (basecmd, tag), d, quiet=True)
        return output.split()[0] != "0"

    def _revision_key(self, url, ud, d):
        """
        Return a unique key for the url
        """
        return "git:" + ud.host + ud.path.replace('/', '.')

    def _latest_revision(self, url, ud, d):
        """
        Compute the HEAD revision for the url
        """
        if ud.user:
            username = ud.user + '@'
        else:
            username = ""

        basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
        cmd = "%s ls-remote %s://%s%s%s %s" % (basecmd, ud.proto, username, ud.host, ud.path, ud.branch)
        output = runfetchcmd(cmd, d, True)
        if not output:
            raise bb.fetch.FetchError("Fetch command %s gave empty output\n" % (cmd))
        return output.split()[0]

    def _build_revision(self, url, ud, d):
        return ud.tag

    def _sortable_buildindex_disabled(self, url, ud, d, rev):
        """
        Return a suitable buildindex for the revision specified. This is done by counting revisions
        using "git rev-list" which may or may not work in different circumstances.
        """

        cwd = os.getcwd()

        # Check if we have the rev already

        if not os.path.exists(ud.clonedir):
            print "no repo"
            self.go(None, ud, d)
            if not os.path.exists(ud.clonedir):
                bb.msg.error(bb.msg.domain.Fetcher, "GIT repository for %s doesn't exist in %s, cannot get sortable buildnumber, using old value" % (url, ud.clonedir))
                return None

        os.chdir(ud.clonedir)
        if not self._contains_ref(rev, d):
            self.go(None, ud, d)

        output = runfetchcmd("%s rev-list %s -- 2> /dev/null | wc -l" % (ud.basecmd, rev), d, quiet=True)
        os.chdir(cwd)

        buildindex = "%s" % output.split()[0]
        bb.msg.debug(1, bb.msg.domain.Fetcher, "GIT repository for %s in %s is returning %s revisions in rev-list before %s" % (url, ud.clonedir, buildindex, rev))
        return buildindex

        rungitcmd("tar -czf %s %s" % (ud.localpath, os.path.join(".", "*") ), d)

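# The buildindex above is simply the number of commits reachable from rev
# (same pipeline as the code): "git rev-list <rev> -- | wc -l". For example,
# a repository with 1500 commits before <rev> gives "1500", which
# sortable_revision() then turns into "1500+<rev>".
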
@@ -1,173 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for mercurial DRCS (hg).

"""

# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2004 Marcin Juszkiewicz
# Copyright (C) 2007 Robert Schuster
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import sys
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd

class Hg(Fetch):
    """Class to fetch from mercurial repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with mercurial.
        """
        return ud.type in ['hg']

    def localpath(self, url, ud, d):
        if not "module" in ud.parm:
            raise MissingParameterError("hg method needs a 'module' parameter")

        ud.module = ud.parm["module"]

        # Create paths to mercurial checkouts
        relpath = ud.path
        if relpath.startswith('/'):
            # Remove leading slash as os.path.join can't cope
            relpath = relpath[1:]
        ud.pkgdir = os.path.join(data.expand('${HGDIR}', d), ud.host, relpath)
        ud.moddir = os.path.join(ud.pkgdir, ud.module)

        if 'rev' in ud.parm:
            ud.revision = ud.parm['rev']
        else:
            tag = Fetch.srcrev_internal_helper(ud, d)
            if tag is True:
                ud.revision = self.latest_revision(url, ud, d)
            elif tag:
                ud.revision = tag
            else:
                ud.revision = self.latest_revision(url, ud, d)

        ud.localfile = data.expand('%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

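    # Illustrative hg SRC_URI for the localpath() logic above (values
    # hypothetical): the mandatory module= plus an optional rev= parameter, e.g.
    #   SRC_URI = "hg://example.org/repos;module=proj;rev=abc123"
    # gives ud.moddir = ${HGDIR}/example.org/repos/proj and a tarball name of
    # proj_example.org_.repos_abc123.tar.gz.
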
    def _buildhgcommand(self, ud, d, command):
        """
        Build up an hg commandline based on ud
        command is "fetch", "update", "info"
        """

        basecmd = data.expand('${FETCHCMD_hg}', d)

        proto = "http"
        if "proto" in ud.parm:
            proto = ud.parm["proto"]

        host = ud.host
        if proto == "file":
            host = "/"
            ud.host = "localhost"

        if not ud.user:
            hgroot = host + ud.path
        else:
            hgroot = ud.user + "@" + host + ud.path

        if command == "info":
            return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, ud.module)

        options = []
        if ud.revision:
            options.append("-r %s" % ud.revision)

        if command == "fetch":
            cmd = "%s clone %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
        elif command == "pull":
            # do not pass options list; limiting pull to rev causes the local
            # repo not to contain it and immediately following "update" command
            # will crash
            cmd = "%s pull" % (basecmd)
        elif command == "update":
            cmd = "%s update -C %s" % (basecmd, " ".join(options))
        else:
            raise FetchError("Invalid hg command %s" % command)

        return cmd

    def go(self, loc, ud, d):
        """Fetch url"""

        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")

        if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
            updatecmd = self._buildhgcommand(ud, d, "pull")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
            # update sources there
            os.chdir(ud.moddir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % updatecmd)
            runfetchcmd(updatecmd, d)

        else:
            fetchcmd = self._buildhgcommand(ud, d, "fetch")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
            # check out sources there
            bb.mkdirhier(ud.pkgdir)
            os.chdir(ud.pkgdir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % fetchcmd)
            runfetchcmd(fetchcmd, d)

        # Even when we clone (fetch), we still need to update as hg's clone
        # won't checkout the specified revision if it's on a branch
        updatecmd = self._buildhgcommand(ud, d, "update")
        bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % updatecmd)
        runfetchcmd(updatecmd, d)

        os.chdir(ud.pkgdir)
        try:
            runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
        except:
            t, v, tb = sys.exc_info()
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise t, v, tb

    def suppports_srcrev(self):
        return True

    def _latest_revision(self, url, ud, d):
        """
        Compute tip revision for the url
        """
        output = runfetchcmd(self._buildhgcommand(ud, d, "info"), d)
        return output.strip()

    def _build_revision(self, url, ud, d):
        return ud.revision

    def _revision_key(self, url, ud, d):
        """
        Return a unique key for the url
        """
        return "hg:" + ud.moddir

@@ -25,7 +25,7 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import os, re
import bb
from bb import data
from bb.fetch import Fetch
@@ -33,16 +33,14 @@ from bb.fetch import Fetch
class Local(Fetch):
    def supports(self, url, urldata, d):
        """
        Check to see if a given url represents a local fetch.
        Check to see if a given url can be fetched with cvs.
        """
        return urldata.type in ['file']
        return urldata.type in ['file', 'patch']

    def localpath(self, url, urldata, d):
        """
        Return the local filename of a given url assuming a successful fetch.
        """
        path = url.split("://")[1]
        path = path.split(";")[0]
        newpath = path
        if path[0] != "/":
            filespath = data.getVar('FILESPATH', d, 1)
@@ -59,14 +57,3 @@ class Local(Fetch):
        """Fetch urls (no-op for Local method)"""
        # no need to fetch local files, we'll deal with them in place.
        return 1

    def checkstatus(self, url, urldata, d):
        """
        Check the status of the url
        """
        if urldata.localpath.find("*") != -1:
            bb.msg.note(1, bb.msg.domain.Fetcher, "URL %s looks like a glob and was therefore not checked." % url)
            return True
        if os.path.exists(urldata.localpath):
            return True
        return False

@@ -1,150 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
Bitbake "Fetch" implementation for osc (Opensuse build service client).
Based on the svn "Fetch" implementation.

"""

import os
import sys
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd

class Osc(Fetch):
    """Class to fetch a module or modules from Opensuse build server
       repositories."""

    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with osc.
        """
        return ud.type in ['osc']

    def localpath(self, url, ud, d):
        if not "module" in ud.parm:
            raise MissingParameterError("osc method needs a 'module' parameter.")

        ud.module = ud.parm["module"]

        # Create paths to osc checkouts
        relpath = ud.path
        if relpath.startswith('/'):
            # Remove leading slash as os.path.join can't cope
            relpath = relpath[1:]
        ud.pkgdir = os.path.join(data.expand('${OSCDIR}', d), ud.host)
        ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)

        if 'rev' in ud.parm:
            ud.revision = ud.parm['rev']
        else:
            pv = data.getVar("PV", d, 0)
            rev = Fetch.srcrev_internal_helper(ud, d)
            if rev and rev != True:
                ud.revision = rev
            else:
                ud.revision = ""

        ud.localfile = data.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def _buildosccommand(self, ud, d, command):
        """
        Build up an osc commandline based on ud
        command is "fetch", "update", "info"
        """

        basecmd = data.expand('${FETCHCMD_osc}', d)

        proto = "ocs"
        if "proto" in ud.parm:
            proto = ud.parm["proto"]

        options = []

        config = "-c %s" % self.generate_config(ud, d)

        if ud.revision:
            options.append("-r %s" % ud.revision)

        coroot = ud.path
        if coroot.startswith('/'):
            # Remove leading slash as os.path.join can't cope
            coroot = coroot[1:]

        if command == "fetch":
            osccmd = "%s %s co %s/%s %s" % (basecmd, config, coroot, ud.module, " ".join(options))
        elif command == "update":
            osccmd = "%s %s up %s" % (basecmd, config, " ".join(options))
        else:
            raise FetchError("Invalid osc command %s" % command)

        return osccmd

    def go(self, loc, ud, d):
        """
        Fetch url
        """

        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")

        if os.access(os.path.join(data.expand('${OSCDIR}', d), ud.path, ud.module), os.R_OK):
            oscupdatecmd = self._buildosccommand(ud, d, "update")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
            # update sources there
            os.chdir(ud.moddir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % oscupdatecmd)
            runfetchcmd(oscupdatecmd, d)
        else:
            oscfetchcmd = self._buildosccommand(ud, d, "fetch")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
            # check out sources there
            bb.mkdirhier(ud.pkgdir)
            os.chdir(ud.pkgdir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % oscfetchcmd)
            runfetchcmd(oscfetchcmd, d)

        os.chdir(os.path.join(ud.pkgdir + ud.path))
        # tar them up to a defined filename
        try:
            runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
        except:
            t, v, tb = sys.exc_info()
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise t, v, tb

    def suppports_srcrev(self):
        # note: spelling matches the base Fetch class hook, which is
        # (historically) misspelled as suppports_srcrev
        return False

    def generate_config(self, ud, d):
        """
        Generate a .oscrc to be used for this run.
        """

        config_path = "%s/oscrc" % data.expand('${OSCDIR}', d)
        if os.path.exists(config_path):
            os.remove(config_path)

        f = open(config_path, 'w')
        f.write("[general]\n")
        f.write("apisrv = %s\n" % ud.host)
        f.write("scheme = http\n")
        f.write("su-wrapper = su -c\n")
        f.write("build-root = %s\n" % data.expand('${WORKDIR}', d))
        f.write("urllist = http://moblin-obs.jf.intel.com:8888/build/%(project)s/%(repository)s/%(buildarch)s/:full/%(name)s.rpm\n")
        f.write("extra-pkgs = gzip\n")
        f.write("\n")
        f.write("[%s]\n" % ud.host)
        f.write("user = %s\n" % ud.parm["user"])
        f.write("pass = %s\n" % ud.parm["pswd"])
        f.close()

        return config_path

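# Sample of the oscrc that generate_config() writes (host/user values
# hypothetical; the urllist line is the hardcoded one above):
#   [general]
#   apisrv = api.example.org
#   scheme = http
#   su-wrapper = su -c
#   build-root = <WORKDIR>
#   extra-pkgs = gzip
#
#   [api.example.org]
#   user = builder
#   pass = secret
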
@@ -25,18 +25,19 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import os, re
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError

class Perforce(Fetch):
    def supports(self, url, ud, d):
        return ud.type in ['p4']

    def doparse(url, d):
        parm = {}
        parm = []
        path = url.split("://")[1]
        delim = path.find("@")
        if delim != -1:
@@ -66,15 +67,14 @@ class Perforce(Fetch):
    doparse = staticmethod(doparse)

    def getcset(d, depot, host, user, pswd, parm):
        p4opt = ""
        if "cset" in parm:
            return parm["cset"]
        if user:
            p4opt += " -u %s" % (user)
            data.setVar('P4USER', user, d)
        if pswd:
            p4opt += " -P %s" % (pswd)
            data.setVar('P4PASSWD', pswd, d)
        if host:
            p4opt += " -p %s" % (host)
            data.setVar('P4PORT', host, d)

        p4date = data.getVar("P4DATE", d, 1)
        if "revision" in parm:
@@ -85,8 +85,8 @@ class Perforce(Fetch):
            depot += "@%s" % (p4date)

        p4cmd = data.getVar('FETCHCOMMAND_p4', d, 1)
        bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
        p4file = os.popen("%s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
        bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s changes -m 1 %s" % (p4cmd, depot))
        p4file = os.popen("%s changes -m 1 %s" % (p4cmd, depot))
        cset = p4file.readline().strip()
        bb.msg.debug(1, bb.msg.domain.Fetcher, "READ %s" % (cset))
        if not cset:
@@ -124,6 +124,11 @@ class Perforce(Fetch):
        Fetch urls
        """

        # try to use the tarball stash
        if not self.forcefetch(loc, ud, d) and Fetch.try_mirror(d, ud.localfile):
            bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping perforce checkout." % ud.localpath)
            return

        (host, depot, user, pswd, parm) = Perforce.doparse(loc, d)

        if depot.find('/...') != -1:
@@ -141,15 +146,14 @@ class Perforce(Fetch):
        data.update_data(localdata)

        # Get the p4 command
        p4opt = ""
        if user:
            p4opt += " -u %s" % (user)
            data.setVar('P4USER', user, localdata)

        if pswd:
            p4opt += " -P %s" % (pswd)
            data.setVar('P4PASSWD', pswd, localdata)

        if host:
            p4opt += " -p %s" % (host)
            data.setVar('P4PORT', host, localdata)

        p4cmd = data.getVar('FETCHCOMMAND', localdata, 1)

@@ -171,8 +175,8 @@ class Perforce(Fetch):

        os.chdir(tmpfile)
        bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
        bb.msg.note(1, bb.msg.domain.Fetcher, "%s%s files %s" % (p4cmd, p4opt, depot))
        p4file = os.popen("%s%s files %s" % (p4cmd, p4opt, depot))
        bb.msg.note(1, bb.msg.domain.Fetcher, "%s files %s" % (p4cmd, depot))
        p4file = os.popen("%s files %s" % (p4cmd, depot))

        if not p4file:
            bb.error("Fetch: unable to get the P4 files from %s" % (depot))
@@ -189,7 +193,7 @@ class Perforce(Fetch):
            dest = list[0][len(path)+1:]
            where = dest.find("#")

            os.system("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module, dest[:where], list[0]))
            os.system("%s print -o %s/%s %s" % (p4cmd, module, dest[:where], list[0]))
            count = count + 1

        if count == 0:

@@ -1,106 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake "Fetch" repo (git) implementation

"""

# Copyright (C) 2009 Tom Rini <trini@embeddedalley.com>
#
# Based on git.py which is:
# Copyright (C) 2005 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os, re
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import runfetchcmd

class Repo(Fetch):
    """Class to fetch a module or modules from repo (git) repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with repo.
        """
        return ud.type in ["repo"]

    def localpath(self, url, ud, d):
        """
        We don't care about the git rev of the manifests repository, but
        we do care about the manifest to use. The default is "default".
        We also care about the branch or tag to be used. The default is
        "master".
        """

        if "protocol" in ud.parm:
            ud.proto = ud.parm["protocol"]
        else:
            ud.proto = "git"

        if "branch" in ud.parm:
            ud.branch = ud.parm["branch"]
        else:
            ud.branch = "master"

        if "manifest" in ud.parm:
            manifest = ud.parm["manifest"]
            if manifest.endswith(".xml"):
                ud.manifest = manifest
            else:
                ud.manifest = manifest + ".xml"
        else:
            ud.manifest = "default.xml"

        ud.localfile = data.expand("repo_%s%s_%s_%s.tar.gz" % (ud.host, ud.path.replace("/", "."), ud.manifest, ud.branch), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

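    # Illustrative repo SRC_URI for the defaults above (values hypothetical):
    #   SRC_URI = "repo://android.example.org/platform/manifest;protocol=git;branch=froyo"
    # protocol=git, branch=master and manifest=default.xml apply when the
    # parameters are omitted; a bare manifest name gains a ".xml" suffix.
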
    def go(self, loc, ud, d):
        """Fetch url"""

        if os.access(os.path.join(data.getVar("DL_DIR", d, True), ud.localfile), os.R_OK):
            bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists (or was stashed). Skipping repo init / sync." % ud.localpath)
            return

        gitsrcname = "%s%s" % (ud.host, ud.path.replace("/", "."))
        repodir = data.getVar("REPODIR", d, True) or os.path.join(data.getVar("DL_DIR", d, True), "repo")
        codir = os.path.join(repodir, gitsrcname, ud.manifest)

        if ud.user:
            username = ud.user + "@"
        else:
            username = ""

        bb.mkdirhier(os.path.join(codir, "repo"))
        os.chdir(os.path.join(codir, "repo"))
        if not os.path.exists(os.path.join(codir, "repo", ".repo")):
            runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d)

        runfetchcmd("repo sync", d)
        os.chdir(codir)

        # Create a cache
        runfetchcmd("tar --exclude=.repo --exclude=.git -czf %s %s" % (ud.localpath, os.path.join(".", "*") ), d)

    def suppports_srcrev(self):
        return False

    def _build_revision(self, url, ud, d):
        return ud.manifest

    def _want_sortable_revision(self, url, ud, d):
        return False

@@ -37,9 +37,11 @@ IETF secsh internet draft:
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import re, os
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError


__pattern__ = re.compile(r'''

@@ -25,7 +25,7 @@ This implementation is for svk. It is based on the svn implementation
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import os, re
import bb
from bb import data
from bb.fetch import Fetch
@@ -36,7 +36,7 @@ class Svk(Fetch):
    """Class to fetch a module or modules from svk repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with svk.
        Check to see if a given url can be fetched with cvs.
        """
        return ud.type in ['svk']

@@ -62,12 +62,15 @@ class Svk(Fetch):
    def go(self, loc, ud, d):
        """Fetch urls"""

        if not self.forcefetch(loc, ud, d) and Fetch.try_mirror(d, ud.localfile):
            return

        svkroot = ud.host + ud.path

        svkcmd = "svk co -r {%s} %s/%s" % (ud.date, svkroot, ud.module)
        svkcmd = "svk co -r {%s} %s/%s" % (date, svkroot, ud.module)

        if ud.revision:
            svkcmd = "svk co -r %s %s/%s" % (ud.revision, svkroot, ud.module)
            svkcmd = "svk co -r %s/%s" % (ud.revision, svkroot, ud.module)

        # create temp directory
        localdata = data.createCopy(d)

@@ -1,12 +1,17 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for svn.
BitBake 'Fetch' implementations

This implementation is for svn. It is based on the cvs implementation.

"""

# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2004 Marcin Juszkiewicz
#
# Classes for obtaining upstream sources for the
# BitBake build tools.
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -23,14 +28,13 @@ BitBake 'Fetch' implementation for svn.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import os, re
import sys
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd

class Svn(Fetch):
    """Class to fetch a module or modules from svn repositories"""
@@ -43,54 +47,32 @@ class Svn(Fetch):
    def localpath(self, url, ud, d):
        if not "module" in ud.parm:
            raise MissingParameterError("svn method needs a 'module' parameter")

        ud.module = ud.parm["module"]

        # Create paths to svn checkouts
        relpath = ud.path
        if relpath.startswith('/'):
            # Remove leading slash as os.path.join can't cope
            relpath = relpath[1:]
        ud.pkgdir = os.path.join(data.expand('${SVNDIR}', d), ud.host, relpath)
        ud.moddir = os.path.join(ud.pkgdir, ud.module)

        if 'rev' in ud.parm:
            ud.date = ""
            ud.revision = ud.parm['rev']
        elif 'date' in ud.parm:
            ud.date = ud.parm['date']
            ud.revision = ""
        else:
            #
            # ***Nasty hack***
            # If DATE in unexpanded PV, use ud.date (which is set from SRCDATE)
            # Should warn people to switch to SRCREV here
            #
            pv = data.getVar("PV", d, 0)
            if "DATE" in pv:
                ud.revision = ""
            else:
                rev = Fetch.srcrev_internal_helper(ud, d)
                if rev is True:
                    ud.revision = self.latest_revision(url, ud, d)
                    ud.date = ""
                elif rev:
                    ud.revision = rev
                    ud.date = ""
                else:
                    ud.revision = ""
        ud.module = ud.parm["module"]

        ud.revision = ""
        if 'rev' in ud.parm:
            ud.revision = ud.parm['rev']

        if ud.revision:
            ud.date = ""

        ud.localfile = data.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ud.date), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

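    # Illustrative svn SRC_URI for the localpath() logic above (values
    # hypothetical): module= is mandatory, rev=/date= are optional, e.g.
    #   SRC_URI = "svn://example.org/svnroot;module=trunk;rev=1234"
    # which checks out into ${SVNDIR}/example.org/svnroot/trunk and produces
    # trunk_example.org_.svnroot_1234_.tar.gz.
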
    def _buildsvncommand(self, ud, d, command):
        """
        Build up an svn commandline based on ud
        command is "fetch", "update", "info"
        """
    def forcefetch(self, url, ud, d):
        if (ud.date == "now"):
            return True
        return False

        basecmd = data.expand('${FETCHCMD_svn}', d)
    def go(self, loc, ud, d):
        """Fetch url"""

        # try to use the tarball stash
        if not self.forcefetch(loc, ud, d) and Fetch.try_mirror(d, ud.localfile):
            bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping svn checkout." % ud.localpath)
            return

        proto = "svn"
        if "proto" in ud.parm:
@@ -102,100 +84,55 @@ class Svn(Fetch):

        svnroot = ud.host + ud.path

        # either use the revision, or SRCDATE in braces,
        # either use the revision, or SRCDATE in braces, or nothing for SRCDATE = "now"
        options = []
        if ud.revision:
            options.append("-r %s" % ud.revision)
        elif ud.date != "now":
            options.append("-r {%s}" % ud.date)

        if ud.user:
            options.append("--username %s" % ud.user)
        localdata = data.createCopy(d)
        data.setVar('OVERRIDES', "svn:%s" % data.getVar('OVERRIDES', localdata), localdata)
        data.update_data(localdata)

        if ud.pswd:
            options.append("--password %s" % ud.pswd)

        if command == "info":
            svncmd = "%s info %s %s://%s/%s/" % (basecmd, " ".join(options), proto, svnroot, ud.module)
        else:
            suffix = ""
            if ud.revision:
                options.append("-r %s" % ud.revision)
                suffix = "@%s" % (ud.revision)
            elif ud.date:
                options.append("-r {%s}" % ud.date)

            if command == "fetch":
                svncmd = "%s co %s %s://%s/%s%s %s" % (basecmd, " ".join(options), proto, svnroot, ud.module, suffix, ud.module)
            elif command == "update":
                svncmd = "%s update %s" % (basecmd, " ".join(options))
            else:
                raise FetchError("Invalid svn command %s" % command)
        data.setVar('SVNROOT', "%s://%s/%s" % (proto, svnroot, ud.module), localdata)
        data.setVar('SVNCOOPTS', " ".join(options), localdata)
        data.setVar('SVNMODULE', ud.module, localdata)
        svncmd = data.getVar('FETCHCOMMAND', localdata, 1)
        svnupcmd = data.getVar('UPDATECOMMAND', localdata, 1)

        if svn_rsh:
            svncmd = "svn_RSH=\"%s\" %s" % (svn_rsh, svncmd)
            svnupcmd = "svn_RSH=\"%s\" %s" % (svn_rsh, svnupcmd)

        return svncmd
        pkg = data.expand('${PN}', d)
        pkgdir = os.path.join(data.expand('${SVNDIR}', localdata), pkg)
        moddir = os.path.join(pkgdir, ud.module)
        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + moddir + "'")

    def go(self, loc, ud, d):
        """Fetch url"""

        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")

        if os.access(os.path.join(ud.moddir, '.svn'), os.R_OK):
            svnupdatecmd = self._buildsvncommand(ud, d, "update")
        if os.access(os.path.join(moddir, '.svn'), os.R_OK):
            bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
            # update sources there
            os.chdir(ud.moddir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svnupdatecmd)
            runfetchcmd(svnupdatecmd, d)
            os.chdir(moddir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svnupcmd)
            myret = os.system(svnupcmd)
        else:
            svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
            # check out sources there
|
||||
bb.mkdirhier(ud.pkgdir)
|
||||
os.chdir(ud.pkgdir)
|
||||
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svnfetchcmd)
|
||||
runfetchcmd(svnfetchcmd, d)
|
||||
bb.mkdirhier(pkgdir)
|
||||
os.chdir(pkgdir)
|
||||
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svncmd)
|
||||
myret = os.system(svncmd)
|
||||
|
||||
os.chdir(ud.pkgdir)
|
||||
if myret != 0:
|
||||
raise FetchError(ud.module)
|
||||
|
||||
os.chdir(pkgdir)
|
||||
# tar them up to a defined filename
|
||||
try:
|
||||
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
|
||||
except:
|
||||
t, v, tb = sys.exc_info()
|
||||
myret = os.system("tar -czf %s %s" % (ud.localpath, os.path.basename(ud.module)))
|
||||
if myret != 0:
|
||||
try:
|
||||
os.unlink(ud.localpath)
|
||||
except OSError:
|
||||
pass
|
||||
raise t, v, tb
|
||||
|
||||
def suppports_srcrev(self):
|
||||
return True
|
||||
|
||||
def _revision_key(self, url, ud, d):
|
||||
"""
|
||||
Return a unique key for the url
|
||||
"""
|
||||
return "svn:" + ud.moddir
|
||||
|
||||
def _latest_revision(self, url, ud, d):
|
||||
"""
|
||||
Return the latest upstream revision number
|
||||
"""
|
||||
bb.msg.debug(2, bb.msg.domain.Fetcher, "SVN fetcher hitting network for %s" % url)
|
||||
|
||||
output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "info"), d, True)
|
||||
|
||||
revision = None
|
||||
for line in output.splitlines():
|
||||
if "Last Changed Rev" in line:
|
||||
revision = line.split(":")[1].strip()
|
||||
|
||||
return revision
|
||||
|
||||
def _sortable_revision(self, url, ud, d):
|
||||
"""
|
||||
Return a sortable revision number which in our case is the revision number
|
||||
"""
|
||||
|
||||
return self._build_revision(url, ud, d)
|
||||
|
||||
def _build_revision(self, url, ud, d):
|
||||
return ud.revision
|
||||
raise FetchError(ud.module)
|
||||
|
||||
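To make the command construction above concrete, here is a minimal standalone sketch (sample values assumed, not taken from any recipe) of the three command shapes _buildsvncommand produces:

# Illustrative sketch of the svn command shapes built above.
# basecmd/proto/svnroot/module/revision are assumed sample values.
basecmd, proto = "/usr/bin/env svn", "svn"
svnroot, module, revision = "svn.example.org/repo", "trunk", "1234"

options = ["-r %s" % revision]   # or "-r {%s}" % date for SRCDATE pinning
suffix = "@%s" % revision        # peg revision appended to the URL

info_cmd = "%s info %s %s://%s/%s/" % (basecmd, " ".join(options), proto, svnroot, module)
fetch_cmd = "%s co %s %s://%s/%s%s %s" % (basecmd, " ".join(options), proto, svnroot, module, suffix, module)
update_cmd = "%s update %s" % (basecmd, " ".join(options))
print(fetch_cmd)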
@@ -25,17 +25,18 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import os, re
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import uri_replace

class Wget(Fetch):
    """Class to fetch urls via 'wget'"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with wget.
        Check to see if a given url can be fetched with cvs.
        """
        return ud.type in ['http','https','ftp']

@@ -47,55 +48,28 @@ class Wget(Fetch):

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def go(self, uri, ud, d, checkonly = False):
    def go(self, uri, ud, d):
        """Fetch urls"""

        def fetch_uri(uri, ud, d):
            if checkonly:
                fetchcmd = data.getVar("CHECKCOMMAND", d, 1)
            elif os.path.exists(ud.localpath):
            if os.path.exists(ud.localpath):
                # file exists, but we didn't complete it.. trying again..
                fetchcmd = data.getVar("RESUMECOMMAND", d, 1)
            else:
                fetchcmd = data.getVar("FETCHCOMMAND", d, 1)

            uri = uri.split(";")[0]
            uri_decoded = list(bb.decodeurl(uri))
            uri_type = uri_decoded[0]
            uri_host = uri_decoded[1]

            bb.msg.note(1, bb.msg.domain.Fetcher, "fetch " + uri)
            fetchcmd = fetchcmd.replace("${URI}", uri.split(";")[0])
            fetchcmd = fetchcmd.replace("${URI}", uri)
            fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
            httpproxy = None
            ftpproxy = None
            if uri_type == 'http':
                httpproxy = data.getVar("HTTP_PROXY", d, True)
                httpproxy_ignore = (data.getVar("HTTP_PROXY_IGNORE", d, True) or "").split()
                for p in httpproxy_ignore:
                    if uri_host.endswith(p):
                        httpproxy = None
                        break
            if uri_type == 'ftp':
                ftpproxy = data.getVar("FTP_PROXY", d, True)
                ftpproxy_ignore = (data.getVar("HTTP_PROXY_IGNORE", d, True) or "").split()
                for p in ftpproxy_ignore:
                    if uri_host.endswith(p):
                        ftpproxy = None
                        break
            if httpproxy:
                fetchcmd = "http_proxy=" + httpproxy + " " + fetchcmd
            if ftpproxy:
                fetchcmd = "ftp_proxy=" + ftpproxy + " " + fetchcmd
            bb.msg.debug(2, bb.msg.domain.Fetcher, "executing " + fetchcmd)
            ret = os.system(fetchcmd)
            if ret != 0:
                return False

            # Sanity check since wget can pretend it succeeded when it didn't
            # Also, this used to happen if sourceforge sent us to the mirror page
            if not os.path.exists(ud.localpath) and not checkonly:
                bb.msg.debug(2, bb.msg.domain.Fetcher, "The fetch command for %s returned success but %s doesn't exist?..." % (uri, ud.localpath))
            # check if sourceforge did send us to the mirror page
            if not os.path.exists(ud.localpath):
                os.system("rm %s*" % ud.localpath) # FIXME shell quote it
                bb.msg.debug(2, bb.msg.domain.Fetcher, "sourceforge.net sent us to the mirror on %s" % ud.basename)
                return False

            return True
@@ -104,11 +78,22 @@ class Wget(Fetch):
        data.setVar('OVERRIDES', "wget:" + data.getVar('OVERRIDES', localdata), localdata)
        data.update_data(localdata)

        premirrors = [ i.split() for i in (data.getVar('PREMIRRORS', localdata, 1) or "").split('\n') if i ]
        for (find, replace) in premirrors:
            newuri = uri_replace(uri, find, replace, d)
            if newuri != uri:
                if fetch_uri(newuri, ud, localdata):
                    return

        if fetch_uri(uri, ud, localdata):
            return True
            return

        # try mirrors
        mirrors = [ i.split() for i in (data.getVar('MIRRORS', localdata, 1) or "").split('\n') if i ]
        for (find, replace) in mirrors:
            newuri = uri_replace(uri, find, replace, d)
            if newuri != uri:
                if fetch_uri(newuri, ud, localdata):
                    return

        raise FetchError(uri)

    def checkstatus(self, uri, ud, d):
        return self.go(uri, ud, d, True)
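The control flow above (premirrors first, then the original URI, then mirrors) can be summarized in a short sketch; fetch_uri and the mirror lists are stand-ins for the real implementations:

# Sketch of the mirror fallback order used by Wget.go() above.
# fetch_uri() is a stand-in returning True on success.
def go(uri, premirrors, mirrors, fetch_uri):
    # 1. Try each premirror rewrite of the URI first.
    for find, replace in premirrors:
        newuri = uri.replace(find, replace)   # real code uses bb.fetch.uri_replace()
        if newuri != uri and fetch_uri(newuri):
            return
    # 2. Then the original upstream URI.
    if fetch_uri(uri):
        return
    # 3. Finally fall back to the mirror list; give up if all fail.
    for find, replace in mirrors:
        newuri = uri.replace(find, replace)
        if newuri != uri and fetch_uri(newuri):
            return
    raise RuntimeError("fetch failed for %s" % uri)   # FetchError in BitBake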
bitbake/lib/bb/manifest.py (new file, 144 lines)
@@ -0,0 +1,144 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os, sys
import bb, bb.data

def getfields(line):
    fields = {}
    fieldmap = ( "pkg", "src", "dest", "type", "mode", "uid", "gid", "major", "minor", "start", "inc", "count" )
    for f in xrange(len(fieldmap)):
        fields[fieldmap[f]] = None

    if not line:
        return None

    splitline = line.split()
    if not len(splitline):
        return None

    try:
        for f in xrange(len(fieldmap)):
            if splitline[f] == '-':
                continue
            fields[fieldmap[f]] = splitline[f]
    except IndexError:
        pass
    return fields

def parse (mfile, d):
    manifest = []
    while 1:
        line = mfile.readline()
        if not line:
            break
        if line.startswith("#"):
            continue
        fields = getfields(line)
        if not fields:
            continue
        manifest.append(fields)
    return manifest

def emit (func, manifest, d):
    #str = "%s () {\n" % func
    str = ""
    for line in manifest:
        emittedline = emit_line(func, line, d)
        if not emittedline:
            continue
        str += emittedline + "\n"
    # str += "}\n"
    return str

def mangle (func, line, d):
    import copy
    newline = copy.copy(line)
    src = bb.data.expand(newline["src"], d)

    if src:
        if not os.path.isabs(src):
            src = "${WORKDIR}/" + src

    dest = newline["dest"]
    if not dest:
        return

    if dest.startswith("/"):
        dest = dest[1:]

    if func is "do_install":
        dest = "${D}/" + dest

    elif func is "do_populate":
        dest = "${WORKDIR}/install/" + newline["pkg"] + "/" + dest

    elif func is "do_stage":
        varmap = {}
        varmap["${bindir}"] = "${STAGING_DIR}/${HOST_SYS}/bin"
        varmap["${libdir}"] = "${STAGING_DIR}/${HOST_SYS}/lib"
        varmap["${includedir}"] = "${STAGING_DIR}/${HOST_SYS}/include"
        varmap["${datadir}"] = "${STAGING_DATADIR}"

        matched = 0
        for key in varmap.keys():
            if dest.startswith(key):
                dest = varmap[key] + "/" + dest[len(key):]
                matched = 1
        if not matched:
            newline = None
            return
    else:
        newline = None
        return

    newline["src"] = src
    newline["dest"] = dest
    return newline

def emit_line (func, line, d):
    import copy
    newline = copy.deepcopy(line)
    newline = mangle(func, newline, d)
    if not newline:
        return None

    str = ""
    type = newline["type"]
    mode = newline["mode"]
    src = newline["src"]
    dest = newline["dest"]
    if type is "d":
        str = "install -d "
        if mode:
            str += "-m %s " % mode
        str += dest
    elif type is "f":
        if not src:
            return None
        if dest.endswith("/"):
            str = "install -d "
            str += dest + "\n"
            str += "install "
        else:
            str = "install -D "
        if mode:
            str += "-m %s " % mode
        str += src + " " + dest
    del newline
    return str
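As an illustration of the format these functions consume (field order from the fieldmap above; the values are invented, not from any shipped manifest), here is one line and what emit_line would make of it:

# Hypothetical manifest line: pkg src dest type mode uid gid major minor start inc count
# '-' marks an unset field, as handled in getfields() above.
line = "mypkg usr/bin/foo /usr/bin/foo f 0755 - - - - - - -"

fields = getfields(line)
# fields["type"] == "f", fields["mode"] == "0755", fields["dest"] == "/usr/bin/foo"

# For func == "do_install", mangle() rewrites dest under ${D}, so
# emit_line() would yield roughly:
#   install -D -m 0755 ${WORKDIR}/usr/bin/foo ${D}/usr/bin/foo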
@@ -22,8 +22,8 @@ Message handling infrastructure for bitbake
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import sys, bb
from bb import event
import sys, os, re, bb
from bb import utils

debug_level = {}

@@ -37,38 +37,11 @@ domain = bb.utils.Enum(
    'Depends',
    'Fetcher',
    'Parsing',
    'PersistData',
    'Provider',
    'RunQueue',
    'TaskData',
    'Util')

class MsgBase(bb.event.Event):
    """Base class for messages"""

    def __init__(self, msg):
        self._message = msg
        event.Event.__init__(self)

class MsgDebug(MsgBase):
    """Debug Message"""

class MsgNote(MsgBase):
    """Note Message"""

class MsgWarn(MsgBase):
    """Warning Message"""

class MsgError(MsgBase):
    """Error Message"""

class MsgFatal(MsgBase):
    """Fatal Message"""

class MsgPlain(MsgBase):
    """General output"""

#
# Message control functions
#
@@ -90,36 +63,45 @@ def set_debug_domains(domains):
            bb.msg.debug_level[ddomain] = bb.msg.debug_level[ddomain] + 1
            found = True
    if not found:
        bb.msg.warn(None, "Logging domain %s is not valid, ignoring" % domain)
        std_warn("Logging domain %s is not valid, ignoring" % domain)

#
# Message handling functions
#

def debug(level, domain, msg, fn = None):
    if not domain:
        domain = 'default'
    if debug_level[domain] >= level:
        bb.event.fire(MsgDebug(msg), None)
        print 'DEBUG: ' + msg

def note(level, domain, msg, fn = None):
    if not domain:
        domain = 'default'
    if level == 1 or verbose or debug_level[domain] >= 1:
        bb.event.fire(MsgNote(msg), None)
        std_note(msg)

def warn(domain, msg, fn = None):
    bb.event.fire(MsgWarn(msg), None)
    std_warn(msg)

def error(domain, msg, fn = None):
    bb.event.fire(MsgError(msg), None)
    print 'ERROR: ' + msg
    std_error(msg)

def fatal(domain, msg, fn = None):
    bb.event.fire(MsgFatal(msg), None)
    print 'FATAL: ' + msg
    std_fatal(msg)

#
# Compatibility functions for the original message interface
#
def std_debug(lvl, msg):
    if debug_level['default'] >= lvl:
        print 'DEBUG: ' + msg

def std_note(msg):
    print 'NOTE: ' + msg

def std_warn(msg):
    print 'WARNING: ' + msg

def std_error(msg):
    print 'ERROR: ' + msg

def std_fatal(msg):
    print 'ERROR: ' + msg
    sys.exit(1)

def plain(msg, fn = None):
    bb.event.fire(MsgPlain(msg), None)
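A short sketch of how the per-domain levels gate output when callers log through bb.msg (domain names taken from the Enum above; the lookup fallback is a simplification, the real code indexes debug_level directly):

# Sketch: per-domain debug levels gate bb.msg.debug() output.
debug_level = {"default": 0, "Fetcher": 2}

def debug(level, domain, msg):
    # A message is printed only when the domain's verbosity has been
    # raised to at least the message's level.
    if debug_level.get(domain, debug_level["default"]) >= level:
        print("DEBUG: " + msg)

debug(1, "Fetcher", "shown: Fetcher is at level 2")
debug(3, "Fetcher", "suppressed: level 3 > 2")
debug(1, "Parsing", "suppressed: Parsing falls back to default 0")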
@@ -50,10 +50,6 @@ def cached_mtime_noerror(f):
            return 0
    return __mtime_cache[f]

def update_mtime(f):
    __mtime_cache[f] = os.stat(f)[8]
    return __mtime_cache[f]

def mark_dependency(d, f):
    if f.startswith('./'):
        f = "%s/%s" % (os.getcwd(), f[2:])
@@ -80,34 +76,5 @@ def init(fn, data):
        if h['supports'](fn):
            return h['init'](data)

def resolve_file(fn, d):
    if not os.path.isabs(fn):
        fn = bb.which(bb.data.getVar("BBPATH", d, 1), fn)
        if not fn:
            raise IOError("file %s not found" % fn)

    bb.msg.debug(2, bb.msg.domain.Parsing, "LOAD %s" % fn)
    return fn

# Used by OpenEmbedded metadata
__pkgsplit_cache__={}
def vars_from_file(mypkg, d):
    if not mypkg:
        return (None, None, None)
    if mypkg in __pkgsplit_cache__:
        return __pkgsplit_cache__[mypkg]

    myfile = os.path.splitext(os.path.basename(mypkg))
    parts = myfile[0].split('_')
    __pkgsplit_cache__[mypkg] = parts
    if len(parts) > 3:
        raise ParseError("Unable to generate default variables from the filename: %s (too many underscores)" % mypkg)
    exp = 3 - len(parts)
    tmplist = []
    while exp != 0:
        exp -= 1
        tmplist.append(None)
    parts.extend(tmplist)
    return parts

from bb.parse.parse_py import __version__, ConfHandler, BBHandler
from parse_py import __version__, ConfHandler, BBHandler
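vars_from_file is how a recipe filename becomes default PN/PV/PR values; a worked example (the filename is invented for illustration):

# Worked example of the filename split performed by vars_from_file().
import os

mypkg = "/path/to/busybox_1.2.1_r0.bb"           # hypothetical recipe path
myfile = os.path.splitext(os.path.basename(mypkg))
parts = myfile[0].split('_')                      # ['busybox', '1.2.1', 'r0']
parts.extend([None] * (3 - len(parts)))           # pad to (PN, PV, PR)
print(parts)                                      # PN='busybox', PV='1.2.1', PR='r0'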
@@ -1,451 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
AbstractSyntaxTree classes for the Bitbake language
"""

# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2003, 2004 Phil Blundell
# Copyright (C) 2009 Holger Hans Peter Freyther
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import bb, re, string
from itertools import chain

__word__ = re.compile(r"\S+")
__parsed_methods__ = bb.methodpool.get_parsed_dict()
_bbversions_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")

class StatementGroup(list):
    def eval(self, data):
        map(lambda x: x.eval(data), self)

class AstNode(object):
    pass

class IncludeNode(AstNode):
    def __init__(self, what_file, fn, lineno, force):
        self.what_file = what_file
        self.from_fn = fn
        self.from_lineno = lineno
        self.force = force

    def eval(self, data):
        """
        Include the file and evaluate the statements
        """
        s = bb.data.expand(self.what_file, data)
        bb.msg.debug(3, bb.msg.domain.Parsing, "CONF %s:%d: including %s" % (self.from_fn, self.from_lineno, s))

        # TODO: Cache those includes... maybe not here though
        if self.force:
            bb.parse.ConfHandler.include(self.from_fn, s, data, "include required")
        else:
            bb.parse.ConfHandler.include(self.from_fn, s, data, False)

class ExportNode(AstNode):
    def __init__(self, var):
        self.var = var

    def eval(self, data):
        bb.data.setVarFlag(self.var, "export", 1, data)

class DataNode(AstNode):
    """
    Various data related updates. For the sake of sanity
    we have one class doing all this. This means that all
    of this needs to be re-evaluated... we might be able to do
    that faster with multiple classes.
    """
    def __init__(self, groupd):
        self.groupd = groupd

    def getFunc(self, key, data):
        if 'flag' in self.groupd and self.groupd['flag'] != None:
            return bb.data.getVarFlag(key, self.groupd['flag'], data)
        else:
            return bb.data.getVar(key, data)

    def eval(self, data):
        groupd = self.groupd
        key = groupd["var"]
        if "exp" in groupd and groupd["exp"] != None:
            bb.data.setVarFlag(key, "export", 1, data)
        if "ques" in groupd and groupd["ques"] != None:
            val = self.getFunc(key, data)
            if val == None:
                val = groupd["value"]
        elif "colon" in groupd and groupd["colon"] != None:
            e = data.createCopy()
            bb.data.update_data(e)
            val = bb.data.expand(groupd["value"], e)
        elif "append" in groupd and groupd["append"] != None:
            val = "%s %s" % ((self.getFunc(key, data) or ""), groupd["value"])
        elif "prepend" in groupd and groupd["prepend"] != None:
            val = "%s %s" % (groupd["value"], (self.getFunc(key, data) or ""))
        elif "postdot" in groupd and groupd["postdot"] != None:
            val = "%s%s" % ((self.getFunc(key, data) or ""), groupd["value"])
        elif "predot" in groupd and groupd["predot"] != None:
            val = "%s%s" % (groupd["value"], (self.getFunc(key, data) or ""))
        else:
            val = groupd["value"]

        if 'flag' in groupd and groupd['flag'] != None:
            bb.msg.debug(3, bb.msg.domain.Parsing, "setVarFlag(%s, %s, %s, data)" % (key, groupd['flag'], val))
            bb.data.setVarFlag(key, groupd['flag'], val, data)
        elif groupd["lazyques"]:
            assigned = bb.data.getVar("__lazy_assigned", data) or []
            assigned.append(key)
            bb.data.setVar("__lazy_assigned", assigned, data)
            bb.data.setVarFlag(key, "defaultval", val, data)
        else:
            bb.data.setVar(key, val, data)

class MethodNode:
    def __init__(self, func_name, body, lineno, fn):
        self.func_name = func_name
        self.body = body
        self.fn = fn
        self.lineno = lineno

    def eval(self, data):
        if self.func_name == "__anonymous":
            funcname = ("__anon_%s_%s" % (self.lineno, self.fn.translate(string.maketrans('/.+-', '____'))))
            if not funcname in bb.methodpool._parsed_fns:
                text = "def %s(d):\n" % (funcname) + '\n'.join(self.body)
                bb.methodpool.insert_method(funcname, text, self.fn)
            anonfuncs = bb.data.getVar('__BBANONFUNCS', data) or []
            anonfuncs.append(funcname)
            bb.data.setVar('__BBANONFUNCS', anonfuncs, data)
        else:
            bb.data.setVarFlag(self.func_name, "func", 1, data)
            bb.data.setVar(self.func_name, '\n'.join(self.body), data)

class PythonMethodNode(AstNode):
    def __init__(self, root, body, fn):
        self.root = root
        self.body = body
        self.fn = fn

    def eval(self, data):
        # Note we will add root to parsedmethods after having parsed
        # 'this' file. This means we will not parse methods from
        # bb classes twice
        if not self.root in __parsed_methods__:
            text = '\n'.join(self.body)
            bb.methodpool.insert_method(self.root, text, self.fn)

class MethodFlagsNode(AstNode):
    def __init__(self, key, m):
        self.key = key
        self.m = m

    def eval(self, data):
        if bb.data.getVar(self.key, data):
            # clean up old version of this piece of metadata, as its
            # flags could cause problems
            bb.data.setVarFlag(self.key, 'python', None, data)
            bb.data.setVarFlag(self.key, 'fakeroot', None, data)
        if self.m.group("py") is not None:
            bb.data.setVarFlag(self.key, "python", "1", data)
        else:
            bb.data.delVarFlag(self.key, "python", data)
        if self.m.group("fr") is not None:
            bb.data.setVarFlag(self.key, "fakeroot", "1", data)
        else:
            bb.data.delVarFlag(self.key, "fakeroot", data)

class ExportFuncsNode(AstNode):
    def __init__(self, fns, classes):
        self.n = __word__.findall(fns)
        self.classes = classes

    def eval(self, data):
        for f in self.n:
            allvars = []
            allvars.append(f)
            allvars.append(self.classes[-1] + "_" + f)

            vars = [[ allvars[0], allvars[1] ]]
            if len(self.classes) > 1 and self.classes[-2] is not None:
                allvars.append(self.classes[-2] + "_" + f)
                vars = []
                vars.append([allvars[2], allvars[1]])
                vars.append([allvars[0], allvars[2]])

            for (var, calledvar) in vars:
                if bb.data.getVar(var, data) and not bb.data.getVarFlag(var, 'export_func', data):
                    continue

                if bb.data.getVar(var, data):
                    bb.data.setVarFlag(var, 'python', None, data)
                    bb.data.setVarFlag(var, 'func', None, data)

                for flag in [ "func", "python" ]:
                    if bb.data.getVarFlag(calledvar, flag, data):
                        bb.data.setVarFlag(var, flag, bb.data.getVarFlag(calledvar, flag, data), data)
                for flag in [ "dirs" ]:
                    if bb.data.getVarFlag(var, flag, data):
                        bb.data.setVarFlag(calledvar, flag, bb.data.getVarFlag(var, flag, data), data)

                if bb.data.getVarFlag(calledvar, "python", data):
                    bb.data.setVar(var, "\tbb.build.exec_func('" + calledvar + "', d)\n", data)
                else:
                    bb.data.setVar(var, "\t" + calledvar + "\n", data)
                bb.data.setVarFlag(var, 'export_func', '1', data)

class AddTaskNode(AstNode):
    def __init__(self, func, before, after):
        self.func = func
        self.before = before
        self.after = after

    def eval(self, data):
        var = self.func
        if self.func[:3] != "do_":
            var = "do_" + self.func

        bb.data.setVarFlag(var, "task", 1, data)
        bbtasks = bb.data.getVar('__BBTASKS', data) or []
        if not var in bbtasks:
            bbtasks.append(var)
        bb.data.setVar('__BBTASKS', bbtasks, data)

        existing = bb.data.getVarFlag(var, "deps", data) or []
        if self.after is not None:
            # set up deps for function
            for entry in self.after.split():
                if entry not in existing:
                    existing.append(entry)
            bb.data.setVarFlag(var, "deps", existing, data)
        if self.before is not None:
            # set up things that depend on this func
            for entry in self.before.split():
                existing = bb.data.getVarFlag(entry, "deps", data) or []
                if var not in existing:
                    bb.data.setVarFlag(entry, "deps", [var] + existing, data)

class BBHandlerNode(AstNode):
    def __init__(self, fns):
        self.hs = __word__.findall(fns)

    def eval(self, data):
        bbhands = bb.data.getVar('__BBHANDLERS', data) or []
        for h in self.hs:
            bbhands.append(h)
            bb.data.setVarFlag(h, "handler", 1, data)
        bb.data.setVar('__BBHANDLERS', bbhands, data)

class InheritNode(AstNode):
    def __init__(self, files):
        self.n = __word__.findall(files)

    def eval(self, data):
        bb.parse.BBHandler.inherit(self.n, data)

def handleInclude(statements, m, fn, lineno, force):
    statements.append(IncludeNode(m.group(1), fn, lineno, force))

def handleExport(statements, m):
    statements.append(ExportNode(m.group(1)))

def handleData(statements, groupd):
    statements.append(DataNode(groupd))

def handleMethod(statements, func_name, lineno, fn, body):
    statements.append(MethodNode(func_name, body, lineno, fn))

def handlePythonMethod(statements, root, body, fn):
    statements.append(PythonMethodNode(root, body, fn))

def handleMethodFlags(statements, key, m):
    statements.append(MethodFlagsNode(key, m))

def handleExportFuncs(statements, m, classes):
    statements.append(ExportFuncsNode(m.group(1), classes))

def handleAddTask(statements, m):
    func = m.group("func")
    before = m.group("before")
    after = m.group("after")
    if func is None:
        return

    statements.append(AddTaskNode(func, before, after))

def handleBBHandlers(statements, m):
    statements.append(BBHandlerNode(m.group(1)))

def handleInherit(statements, m):
    files = m.group(1)
    n = __word__.findall(files)
    statements.append(InheritNode(m.group(1)))

def finalise(fn, d):
    for lazykey in bb.data.getVar("__lazy_assigned", d) or ():
        if bb.data.getVar(lazykey, d) is None:
            val = bb.data.getVarFlag(lazykey, "defaultval", d)
            bb.data.setVar(lazykey, val, d)

    bb.data.expandKeys(d)
    bb.data.update_data(d)
    anonqueue = bb.data.getVar("__anonqueue", d, 1) or []
    body = [x['content'] for x in anonqueue]
    flag = { 'python' : 1, 'func' : 1 }
    bb.data.setVar("__anonfunc", "\n".join(body), d)
    bb.data.setVarFlags("__anonfunc", flag, d)
    from bb import build
    try:
        t = bb.data.getVar('T', d)
        bb.data.setVar('T', '${TMPDIR}/anonfunc/', d)
        anonfuncs = bb.data.getVar('__BBANONFUNCS', d) or []
        code = ""
        for f in anonfuncs:
            code = code + "    %s(d)\n" % f
        bb.data.setVar("__anonfunc", code, d)
        build.exec_func("__anonfunc", d)
        bb.data.delVar('T', d)
        if t:
            bb.data.setVar('T', t, d)
    except Exception, e:
        bb.msg.debug(1, bb.msg.domain.Parsing, "Exception when executing anonymous function: %s" % e)
        raise
    bb.data.delVar("__anonqueue", d)
    bb.data.delVar("__anonfunc", d)
    bb.data.update_data(d)

    all_handlers = {}
    for var in bb.data.getVar('__BBHANDLERS', d) or []:
        # try to add the handler
        handler = bb.data.getVar(var,d)
        bb.event.register(var, handler)

    tasklist = bb.data.getVar('__BBTASKS', d) or []
    bb.build.add_tasks(tasklist, d)

    bb.event.fire(bb.event.RecipeParsed(fn), d)

def _create_variants(datastores, names, function):
    def create_variant(name, orig_d, arg = None):
        new_d = bb.data.createCopy(orig_d)
        function(arg or name, new_d)
        datastores[name] = new_d

    for variant, variant_d in datastores.items():
        for name in names:
            if not variant:
                # Based on main recipe
                create_variant(name, variant_d)
            else:
                create_variant("%s-%s" % (variant, name), variant_d, name)

def _expand_versions(versions):
    def expand_one(version, start, end):
        for i in xrange(start, end + 1):
            ver = _bbversions_re.sub(str(i), version, 1)
            yield ver

    versions = iter(versions)
    while True:
        try:
            version = versions.next()
        except StopIteration:
            break

        range_ver = _bbversions_re.search(version)
        if not range_ver:
            yield version
        else:
            newversions = expand_one(version, int(range_ver.group("from")),
                                     int(range_ver.group("to")))
            versions = chain(newversions, versions)

def multi_finalize(fn, d):
    safe_d = d

    d = bb.data.createCopy(safe_d)
    try:
        finalise(fn, d)
    except bb.parse.SkipPackage:
        bb.data.setVar("__SKIPPED", True, d)
    datastores = {"": safe_d}

    versions = (d.getVar("BBVERSIONS", True) or "").split()
    if versions:
        pv = orig_pv = d.getVar("PV", True)
        baseversions = {}

        def verfunc(ver, d, pv_d = None):
            if pv_d is None:
                pv_d = d

            overrides = d.getVar("OVERRIDES", True).split(":")
            pv_d.setVar("PV", ver)
            overrides.append(ver)
            bpv = baseversions.get(ver) or orig_pv
            pv_d.setVar("BPV", bpv)
            overrides.append(bpv)
            d.setVar("OVERRIDES", ":".join(overrides))

        versions = list(_expand_versions(versions))
        for pos, version in enumerate(list(versions)):
            try:
                pv, bpv = version.split(":", 2)
            except ValueError:
                pass
            else:
                versions[pos] = pv
                baseversions[pv] = bpv

        if pv in versions and not baseversions.get(pv):
            versions.remove(pv)
        else:
            pv = versions.pop()

            # This is necessary because our existing main datastore
            # has already been finalized with the old PV, we need one
            # that's been finalized with the new PV.
            d = bb.data.createCopy(safe_d)
            verfunc(pv, d, safe_d)
            try:
                finalise(fn, d)
            except bb.parse.SkipPackage:
                bb.data.setVar("__SKIPPED", True, d)

        _create_variants(datastores, versions, verfunc)

    extended = d.getVar("BBCLASSEXTEND", True) or ""
    if extended:
        pn = d.getVar("PN", True)
        def extendfunc(name, d):
            d.setVar("PN", "%s-%s" % (pn, name))
            bb.parse.BBHandler.inherit([name], d)

        safe_d.setVar("BBCLASSEXTEND", extended)
        _create_variants(datastores, extended.split(), extendfunc)

    for variant, variant_d in datastores.items():
        if variant:
            try:
                finalise(fn, variant_d)
            except bb.parse.SkipPackage:
                bb.data.setVar("__SKIPPED", True, variant_d)

    if len(datastores) > 1:
        variants = filter(None, datastores.keys())
        safe_d.setVar("__VARIANTS", " ".join(variants))

    datastores[""] = d
    return datastores
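To see what multi_finalize produces, consider a hypothetical recipe that sets both knobs; the variant names below follow the naming in _create_variants:

# Hypothetical illustration of the variant datastores multi_finalize() builds.
# A recipe "foo" with:
#   BBCLASSEXTEND = "native"
#   BBVERSIONS = "1.[0-2]"
#
# _expand_versions() turns "1.[0-2]" into "1.0", "1.1", "1.2", and
# _create_variants() keys each finalized datastore by variant name:
datastores = {
    "": "<main datastore, finalized with the popped version 1.2>",
    "1.0": "<datastore finalized with PV=1.0>",
    "1.1": "<datastore finalized with PV=1.1>",
    "native": "<datastore with PN=foo-native, native class inherited>",
}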
bitbake/lib/bb/parse/parse_c/BBHandler.py (new file, 188 lines)
@@ -0,0 +1,188 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""class for handling .bb files (using a C++ parser)

Reads a .bb file and obtains its metadata (using a C++ parser)

Copyright (C) 2006 Tim Robert Ansell
Copyright (C) 2006 Holger Hans Peter Freyther

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; either version 2 of the License, or (at your option) any later
version.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""

import os, sys

# The Module we will use here
import bb

from bitbakec import parsefile

#
# This is the Python Part of the Native Parser Implementation.
# We will only parse .bbclass, .inc and .bb files but no
# configuration files.
# supports, init and handle are the public methods used by
# the parser module
#
# The rest of the methods are internal implementation details.

def _init(fn, d):
    """
    Initialize the data implementation with values of
    the environment and data from the file.
    """
    pass

#
# public
#
def supports(fn, data):
    return fn[-3:] == ".bb" or fn[-8:] == ".bbclass" or fn[-4:] == ".inc" or fn[-5:] == ".conf"

def init(fn, data):
    if not bb.data.getVar('TOPDIR', data):
        bb.data.setVar('TOPDIR', os.getcwd(), data)
    if not bb.data.getVar('BBPATH', data):
        bb.data.setVar('BBPATH', os.path.join(sys.prefix, 'share', 'bitbake'), data)

def handle_inherit(d):
    """
    Handle inheriting of classes. This will load all default classes.
    It could be faster, and it could detect infinite loops, but this is todo.
    Also this delayed loading of bb.parse could impose a penalty.
    """
    from bb.parse import handle

    files = (data.getVar('INHERIT', d, True) or "").split()
    if not "base" in files:
        files[0:0] = ["base"]

    __inherit_cache = data.getVar('__inherit_cache', d) or []
    for f in files:
        file = data.expand(f, d)
        if file[0] != "/" and file[-8:] != ".bbclass":
            file = os.path.join('classes', '%s.bbclass' % file)

        if not file in __inherit_cache:
            debug(2, "BB %s:%d: inheriting %s" % (fn, lineno, file))
            __inherit_cache.append( file )

            try:
                handle(file, d, True)
            except IOError:
                print "Failed to inherit %s" % file
    data.setVar('__inherit_cache', __inherit_cache, d)

def handle(fn, d, include):
    from bb import data, parse

    (root, ext) = os.path.splitext(os.path.basename(fn))
    base_name = "%s%s" % (root,ext)

    # initialize with some data
    init(fn,d)

    # check if we include or are the beginning
    oldfile = None
    if include:
        oldfile = d.getVar('FILE', False)
        is_conf = False
    elif ext == ".conf":
        is_conf = True
        data.inheritFromOS(d)

    # find the file
    if not os.path.isabs(fn):
        abs_fn = bb.which(d.getVar('BBPATH', True), fn)
    else:
        abs_fn = fn

    # check if the file exists
    if not os.path.exists(abs_fn):
        raise IOError("file '%(fn)s' not found" % locals() )

    # now we know the file is around, mark it as a dep
    if include:
        parse.mark_dependency(d, abs_fn)

    # manipulate the bbpath
    if ext != ".bbclass" and ext != ".conf":
        old_bb_path = data.getVar('BBPATH', d)
        data.setVar('BBPATH', os.path.dirname(abs_fn) + (":%s" %old_bb_path) , d)

    # handle INHERITS and base inherit
    if ext != ".bbclass" and ext != ".conf":
        data.setVar('FILE', fn, d)
        handle_inherit(d)

    # now parse this file - by deferring it to C++
    parsefile(abs_fn, d, is_conf)

    # Finish it up
    if include == 0:
        data.expandKeys(d)
        data.update_data(d)
        #### !!! XXX Finish it up by executing the anonfunc

    # restore the original FILE
    if oldfile:
        d.setVar('FILE', oldfile)

    # restore bbpath
    if ext != ".bbclass" and ext != ".conf":
        data.setVar('BBPATH', old_bb_path, d )

    return d

# Needed for BitBake files...
__pkgsplit_cache__={}
def vars_from_file(mypkg, d):
    if not mypkg:
        return (None, None, None)
    if mypkg in __pkgsplit_cache__:
        return __pkgsplit_cache__[mypkg]

    myfile = os.path.splitext(os.path.basename(mypkg))
    parts = myfile[0].split('_')
    __pkgsplit_cache__[mypkg] = parts
    exp = 3 - len(parts)
    tmplist = []
    while exp != 0:
        exp -= 1
        tmplist.append(None)
    parts.extend(tmplist)
    return parts

# Inform bitbake that we are a parser
# We need to define all three
from bb.parse import handlers
handlers.append( {'supports' : supports, 'handle': handle, 'init' : init})
del handlers
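The handlers.append(...) registration at the end mirrors how bb.parse dispatches files; a minimal sketch of that dispatch contract (simplified from the init loop shown earlier in bb/parse/__init__.py; signatures abbreviated for illustration):

# Sketch of the handler dispatch contract: each parser module registers
# a dict of callables, and bb.parse picks the first whose supports() matches.
handlers = []

def register(supports, handle, init):
    handlers.append({'supports': supports, 'handle': handle, 'init': init})

def handle(fn, data, include=False):
    for h in handlers:
        if h['supports'](fn, data):
            return h['handle'](fn, data, include)
    raise ValueError("not a BitBake file: %s" % fn)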
bitbake/lib/bb/parse/parse_c/Makefile (new file, 36 lines)
@@ -0,0 +1,36 @@
build: bitbakec.so
	echo "Done"

bitbakescanner.cc: bitbakescanner.l
	flex -t bitbakescanner.l > bitbakescanner.cc

bitbakeparser.cc: bitbakeparser.y python_output.h
	lemon bitbakeparser.y
	mv bitbakeparser.c bitbakeparser.cc

bitbakec.c: bitbakec.pyx
	pyrexc bitbakec.pyx

bitbakec-processed.c: bitbakec.c
	cat bitbakec.c | sed -e"s/__pyx_f_8bitbakec_//" > bitbakec-processed.c

bitbakec.o: bitbakec-processed.c
	gcc -c bitbakec-processed.c -o bitbakec.o -fPIC -I/usr/include/python2.4

bitbakeparser.o: bitbakeparser.cc
	g++ -c bitbakeparser.cc -fPIC -I/usr/include/python2.4

bitbakescanner.o: bitbakescanner.cc
	g++ -c bitbakescanner.cc -fPIC -I/usr/include/python2.4

bitbakec.so: bitbakec.o bitbakeparser.o bitbakescanner.o
	g++ -shared -fPIC bitbakeparser.o bitbakescanner.o bitbakec.o -o bitbakec.so

clean:
	rm -f *.out
	rm -f *.cc
	rm -f bitbakec.c
	rm -f bitbakec-processed.c
	rm -f *.o
	rm -f *.so
bitbake/lib/bb/parse/parse_c/README.build (new file, 12 lines)
@@ -0,0 +1,12 @@
To ease portability (lemon, flex, etc.) we keep the
results of flex and lemon in the source code. We agree
not to manually change the scanner and parser.

How we create the files:
    flex -t bitbakescanner.l > bitbakescanner.cc
    lemon bitbakeparser.y
    mv bitbakeparser.c bitbakeparser.cc

Now manually create two files
bitbake/lib/bb/parse/parse_c/__init__.py (new file, 28 lines)
@@ -0,0 +1,28 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2006 Holger Hans Peter Freyther
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
# THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#

__version__ = '0.1'
__all__ = [ 'BBHandler' ]

import BBHandler
bitbake/lib/bb/parse/parse_c/bitbakec.pyx (new file, 253 lines)
@@ -0,0 +1,253 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-

cdef extern from "stdio.h":
    ctypedef int FILE
    FILE *fopen(char*, char*)
    int fclose(FILE *fp)

cdef extern from "string.h":
    int strlen(char*)

cdef extern from "lexerc.h":
    ctypedef struct lex_t:
        void* parser
        void* scanner
        char* name
        FILE* file
        int config
        void* data

    int lineError
    int errorParse

cdef extern int parse(FILE*, char*, object, int)

def parsefile(object file, object data, object config):
    #print "parsefile: 1", file, data

    # Open the file
    cdef FILE* f

    f = fopen(file, "r")
    #print "parsefile: 2 opening file"
    if (f == NULL):
        raise IOError("No such file %s." % file)

    #print "parsefile: 3 parse"
    parse(f, file, data, config)

    # Close the file
    fclose(f)

cdef public void e_assign(lex_t* container, char* key, char* what):
    #print "e_assign", key, what
    if what == NULL:
        print "FUTURE Warning empty string: use \"\""
        what = ""

    d = <object>container.data
    d.setVar(key, what)

cdef public void e_export(lex_t* c, char* what):
    #print "e_export", what
    #exp:
    # bb.data.setVarFlag(key, "export", 1, data)
    d = <object>c.data
    d.setVarFlag(what, "export", 1)

cdef public void e_immediate(lex_t* c, char* key, char* what):
    #print "e_immediate", key, what
    #colon:
    # val = bb.data.expand(groupd["value"], data)
    d = <object>c.data
    d.setVar(key, d.expand(what,d))

cdef public void e_cond(lex_t* c, char* key, char* what):
    #print "e_cond", key, what
    #ques:
    # val = bb.data.getVar(key, data)
    # if val == None:
    #     val = groupd["value"]
    if what == NULL:
        print "FUTURE warning: Use \"\" for", key
        what = ""

    d = <object>c.data
    d.setVar(key, (d.getVar(key,False) or what))

cdef public void e_prepend(lex_t* c, char* key, char* what):
    #print "e_prepend", key, what
    #prepend:
    # val = "%s %s" % (groupd["value"], (bb.data.getVar(key, data) or ""))
    d = <object>c.data
    d.setVar(key, what + " " + (d.getVar(key,0) or ""))

cdef public void e_append(lex_t* c, char* key, char* what):
    #print "e_append", key, what
    #append:
    # val = "%s %s" % ((bb.data.getVar(key, data) or ""), groupd["value"])
    d = <object>c.data
    d.setVar(key, (d.getVar(key,0) or "") + " " + what)

cdef public void e_precat(lex_t* c, char* key, char* what):
    #print "e_precat", key, what
    #predot:
    # val = "%s%s" % (groupd["value"], (bb.data.getVar(key, data) or ""))
    d = <object>c.data
    d.setVar(key, what + (d.getVar(key,0) or ""))

cdef public void e_postcat(lex_t* c, char* key, char* what):
    #print "e_postcat", key, what
    #postdot:
    # val = "%s%s" % ((bb.data.getVar(key, data) or ""), groupd["value"])
    d = <object>c.data
    d.setVar(key, (d.getVar(key,0) or "") + what)

cdef public int e_addtask(lex_t* c, char* name, char* before, char* after) except -1:
    #print "e_addtask", name
    # func = m.group("func")
    # before = m.group("before")
    # after = m.group("after")
    # if func is None:
    #     return
    # var = "do_" + func
    #
    # data.setVarFlag(var, "task", 1, d)
    #
    # if after is not None:
    #     # set up deps for function
    #     data.setVarFlag(var, "deps", after.split(), d)
    # if before is not None:
    #     # set up things that depend on this func
    #     data.setVarFlag(var, "postdeps", before.split(), d)
    # return

    if c.config == 1:
        from bb.parse import ParseError
        raise ParseError("No tasks allowed in config files")
        return -1

    d = <object>c.data
    do = "do_%s" % name
    d.setVarFlag(do, "task", 1)

    if before != NULL and strlen(before) > 0:
        #print "Before", before
        d.setVarFlag(do, "postdeps", ("%s" % before).split())
    if after != NULL and strlen(after) > 0:
        #print "After", after
        d.setVarFlag(do, "deps", ("%s" % after).split())

    return 0

cdef public int e_addhandler(lex_t* c, char* h) except -1:
    #print "e_addhandler", h
    # data.setVarFlag(h, "handler", 1, d)
    if c.config == 1:
        from bb.parse import ParseError
        raise ParseError("No handlers allowed in config files")
        return -1

    d = <object>c.data
    d.setVarFlag(h, "handler", 1)
    return 0

cdef public int e_export_func(lex_t* c, char* function) except -1:
    #print "e_export_func", function
    if c.config == 1:
        from bb.parse import ParseError
        raise ParseError("No functions allowed in config files")
        return -1

    return 0

cdef public int e_inherit(lex_t* c, char* file) except -1:
    #print "e_inherit", file

    if c.config == 1:
        from bb.parse import ParseError
        raise ParseError("No inherits allowed in config files")
        return -1

    return 0

cdef public void e_include(lex_t* c, char* file):
    from bb.parse import handle
    d = <object>c.data

    try:
        handle(d.expand(file,d), d, True)
    except IOError:
        print "Could not include file", file

cdef public int e_require(lex_t* c, char* file) except -1:
    #print "e_require", file
    from bb.parse import handle
    d = <object>c.data

    try:
        handle(d.expand(file,d), d, True)
    except IOError:
        print "ParseError", file
        from bb.parse import ParseError
        raise ParseError("Could not include required file %s" % file)
        return -1

    return 0

cdef public int e_proc(lex_t* c, char* key, char* what) except -1:
    #print "e_proc", key, what
    if c.config == 1:
        from bb.parse import ParseError
        raise ParseError("No inherits allowed in config files")
        return -1

    return 0

cdef public int e_proc_python(lex_t* c, char* key, char* what) except -1:
    #print "e_proc_python"
    if c.config == 1:
        from bb.parse import ParseError
        raise ParseError("No python allowed in config files")
        return -1

    if key != NULL:
        pass
        #print "Key", key
    if what != NULL:
        pass
        #print "What", what

    return 0

cdef public int e_proc_fakeroot(lex_t* c, char* key, char* what) except -1:
    #print "e_fakeroot", key, what

    if c.config == 1:
        from bb.parse import ParseError
        raise ParseError("No fakeroot allowed in config files")
        return -1

    return 0

cdef public int e_def(lex_t* c, char* a, char* b, char* d) except -1:
    #print "e_def", a, b, d

    if c.config == 1:
        from bb.parse import ParseError
        raise ParseError("No defs allowed in config files")
        return -1

    return 0

cdef public int e_parse_error(lex_t* c) except -1:
    print "e_parse_error", c.name, "line:", lineError, "parse:", errorParse

    from bb.parse import ParseError
    raise ParseError("There was a parse error, sorry unable to give more information at the current time. File: %s Line: %d" % (c.name,lineError) )
    return -1
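The e_* callbacks above implement BitBake's assignment operators; here is a hedged Python table of what each operator does to a sample variable (semantics paraphrased from the handlers, sample values invented; the immediate-expansion := form is omitted):

# How the e_* handlers map BitBake operators onto the datastore,
# illustrated with plain strings. FOO starts as "a".
val = "a"
ops = {
    'FOO = "b"':  lambda v: "b",                    # e_assign: overwrite
    'FOO ?= "b"': lambda v: v or "b",               # e_cond: keep existing value
    'FOO += "b"': lambda v: (v or "") + " " + "b",  # e_append: space-separated
    'FOO =+ "b"': lambda v: "b" + " " + (v or ""),  # e_prepend
    'FOO .= "b"': lambda v: (v or "") + "b",        # e_postcat: no space
    'FOO =. "b"': lambda v: "b" + (v or ""),        # e_precat
}
for op, fn in ops.items():
    print("%-12s -> %r" % (op, fn(val)))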
bitbake/lib/bb/parse/parse_c/bitbakeparser.cc (new file, 1157 lines; file diff suppressed because it is too large)
bitbake/lib/bb/parse/parse_c/bitbakeparser.h (new file, 29 lines)
@@ -0,0 +1,29 @@
#define T_SYMBOL 1
#define T_VARIABLE 2
#define T_EXPORT 3
#define T_OP_ASSIGN 4
#define T_STRING 5
#define T_OP_PREDOT 6
#define T_OP_POSTDOT 7
#define T_OP_IMMEDIATE 8
#define T_OP_COND 9
#define T_OP_PREPEND 10
#define T_OP_APPEND 11
#define T_TSYMBOL 12
#define T_BEFORE 13
#define T_AFTER 14
#define T_ADDTASK 15
#define T_ADDHANDLER 16
#define T_FSYMBOL 17
#define T_EXPORT_FUNC 18
#define T_ISYMBOL 19
#define T_INHERIT 20
#define T_INCLUDE 21
#define T_REQUIRE 22
#define T_PROC_BODY 23
#define T_PROC_OPEN 24
#define T_PROC_CLOSE 25
#define T_PYTHON 26
#define T_FAKEROOT 27
#define T_DEF_BODY 28
#define T_DEF_ARGS 29
bitbake/lib/bb/parse/parse_c/bitbakeparser.y (new file, 179 lines)
@@ -0,0 +1,179 @@
/* bbp.lemon

   written by Marc Singer
   6 January 2005

   This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU General Public License as
   published by the Free Software Foundation; either version 2 of the
   License, or (at your option) any later version.

   This program is distributed in the hope that it will be useful, but
   WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
   General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
   USA.

   DESCRIPTION
   -----------

   lemon parser specification file for a BitBake input file parser.

   Most of the interesting shenanigans are done in the lexer. The
   BitBake grammar is not regular. In order to emit tokens that
   the parser can properly interpret in LALR fashion, the lexer
   manages the interpretation state. This is why there are ISYMBOLs,
   SYMBOLs, and TSYMBOLs.

   This parser was developed by reading the limited available
   documentation for BitBake and by analyzing the available BB files.
   There is no assertion of correctness to be made about this parser.

*/
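To illustrate why the lexer hands out context-dependent symbol tokens (ISYMBOL after inherit/include, TSYMBOL after addtask, FSYMBOL after EXPORT_FUNCTIONS), here is a hypothetical token stream for two BitBake lines, written as Python data for readability; the exact stream the real lexer emits may differ:

# Hypothetical lexer output for two BitBake statements, using the
# T_* token names from bitbakeparser.h above.
tokens = [
    # addtask compile after do_configure before do_build
    ("T_ADDTASK", None), ("T_TSYMBOL", "compile"),
    ("T_AFTER", None), ("T_TSYMBOL", "do_configure"),
    ("T_BEFORE", None), ("T_TSYMBOL", "do_build"),
    # inherit autotools
    ("T_INHERIT", None), ("T_ISYMBOL", "autotools"),
]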
%token_type {token_t}
%name bbparse
%token_prefix T_
%extra_argument {lex_t* lex}

%include {
#include "token.h"
#include "lexer.h"
#include "python_output.h"
}

%token_destructor { $$.release_this (); }

%syntax_error { e_parse_error( lex ); }

program ::= statements.

statements ::= statements statement.
statements ::= .

variable(r) ::= SYMBOL(s).
    { r.assignString( (char*)s.string() );
      s.assignString( 0 );
      s.release_this(); }
variable(r) ::= VARIABLE(v).
    { r.assignString( (char*)v.string() );
      v.assignString( 0 );
      v.release_this(); }

statement ::= EXPORT variable(s) OP_ASSIGN STRING(v).
    { e_assign( lex, s.string(), v.string() );
      e_export( lex, s.string() );
      s.release_this(); v.release_this(); }
statement ::= EXPORT variable(s) OP_PREDOT STRING(v).
    { e_precat( lex, s.string(), v.string() );
      e_export( lex, s.string() );
      s.release_this(); v.release_this(); }
statement ::= EXPORT variable(s) OP_POSTDOT STRING(v).
    { e_postcat( lex, s.string(), v.string() );
      e_export( lex, s.string() );
      s.release_this(); v.release_this(); }
statement ::= EXPORT variable(s) OP_IMMEDIATE STRING(v).
    { e_immediate ( lex, s.string(), v.string() );
      e_export( lex, s.string() );
      s.release_this(); v.release_this(); }
statement ::= EXPORT variable(s) OP_COND STRING(v).
    { e_cond( lex, s.string(), v.string() );
      s.release_this(); v.release_this(); }

statement ::= variable(s) OP_ASSIGN STRING(v).
    { e_assign( lex, s.string(), v.string() );
      s.release_this(); v.release_this(); }
statement ::= variable(s) OP_PREDOT STRING(v).
    { e_precat( lex, s.string(), v.string() );
      s.release_this(); v.release_this(); }
statement ::= variable(s) OP_POSTDOT STRING(v).
    { e_postcat( lex, s.string(), v.string() );
      s.release_this(); v.release_this(); }
statement ::= variable(s) OP_PREPEND STRING(v).
    { e_prepend( lex, s.string(), v.string() );
      s.release_this(); v.release_this(); }
statement ::= variable(s) OP_APPEND STRING(v).
    { e_append( lex, s.string() , v.string() );
      s.release_this(); v.release_this(); }
statement ::= variable(s) OP_IMMEDIATE STRING(v).
    { e_immediate( lex, s.string(), v.string() );
      s.release_this(); v.release_this(); }
statement ::= variable(s) OP_COND STRING(v).
    { e_cond( lex, s.string(), v.string() );
      s.release_this(); v.release_this(); }

task ::= TSYMBOL(t) BEFORE TSYMBOL(b) AFTER TSYMBOL(a).
    { e_addtask( lex, t.string(), b.string(), a.string() );
      t.release_this(); b.release_this(); a.release_this(); }
task ::= TSYMBOL(t) AFTER TSYMBOL(a) BEFORE TSYMBOL(b).
    { e_addtask( lex, t.string(), b.string(), a.string());
      t.release_this(); a.release_this(); b.release_this(); }
task ::= TSYMBOL(t).
    { e_addtask( lex, t.string(), NULL, NULL);
      t.release_this();}
task ::= TSYMBOL(t) BEFORE TSYMBOL(b).
    { e_addtask( lex, t.string(), b.string(), NULL);
      t.release_this(); b.release_this(); }
task ::= TSYMBOL(t) AFTER TSYMBOL(a).
    { e_addtask( lex, t.string(), NULL, a.string());
      t.release_this(); a.release_this(); }
tasks ::= tasks task.
tasks ::= task.
statement ::= ADDTASK tasks.

statement ::= ADDHANDLER SYMBOL(s).
    { e_addhandler( lex, s.string()); s.release_this (); }

func ::= FSYMBOL(f). { e_export_func( lex, f.string()); f.release_this(); }
funcs ::= funcs func.
funcs ::= func.
statement ::= EXPORT_FUNC funcs.

inherit ::= ISYMBOL(i). { e_inherit( lex, i.string() ); i.release_this (); }
inherits ::= inherits inherit.
inherits ::= inherit.
statement ::= INHERIT inherits.

statement ::= INCLUDE ISYMBOL(i).
    { e_include( lex, i.string() ); i.release_this(); }

statement ::= REQUIRE ISYMBOL(i).
    { e_require( lex, i.string() ); i.release_this(); }

proc_body(r) ::= proc_body(l) PROC_BODY(b).
    { /* concatenate body lines */
      r.assignString( token_t::concatString(l.string(), b.string()) );
      l.release_this ();
      b.release_this ();
    }
proc_body(b) ::= . { b.assignString(0); }
statement ::= variable(p) PROC_OPEN proc_body(b) PROC_CLOSE.
    { e_proc( lex, p.string(), b.string() );
      p.release_this(); b.release_this(); }
statement ::= PYTHON SYMBOL(p) PROC_OPEN proc_body(b) PROC_CLOSE.
    { e_proc_python ( lex, p.string(), b.string() );
      p.release_this(); b.release_this(); }
statement ::= PYTHON PROC_OPEN proc_body(b) PROC_CLOSE.
    { e_proc_python( lex, NULL, b.string());
|
||||
b.release_this (); }
|
||||
|
||||
statement ::= FAKEROOT SYMBOL(p) PROC_OPEN proc_body(b) PROC_CLOSE.
|
||||
{ e_proc_fakeroot( lex, p.string(), b.string() );
|
||||
p.release_this (); b.release_this (); }
|
||||
|
||||
def_body(r) ::= def_body(l) DEF_BODY(b).
|
||||
{ /* concatenate body lines */
|
||||
r.assignString( token_t::concatString(l.string(), b.string()) );
|
||||
l.release_this (); b.release_this ();
|
||||
}
|
||||
def_body(b) ::= . { b.assignString( 0 ); }
|
||||
statement ::= SYMBOL(p) DEF_ARGS(a) def_body(b).
|
||||
{ e_def( lex, p.string(), a.string(), b.string());
|
||||
p.release_this(); a.release_this(); b.release_this(); }
|
||||
|
||||
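The grammar above never inspects context itself: the lexer decides whether a bare word arrives as SYMBOL, TSYMBOL, ISYMBOL or FSYMBOL, based on the keyword that opened the statement. A rough Python illustration of the token streams these rules expect follows; the token names mirror the %token_prefix T_ terminals, while the three input lines are made-up examples, not taken from this diff.

# Illustrative only: the token streams the lemon grammar above would
# receive for three typical BitBake lines.
examples = {
    'DESCRIPTION = "A demo"':      # variable OP_ASSIGN STRING -> e_assign
        ["T_SYMBOL('DESCRIPTION')", "T_OP_ASSIGN", "T_STRING('A demo')"],
    "addtask compile after do_fetch before do_install":
        # ADDTASK TSYMBOL AFTER TSYMBOL BEFORE TSYMBOL -> e_addtask
        ["T_ADDTASK", "T_TSYMBOL('compile')", "T_AFTER",
         "T_TSYMBOL('do_fetch')", "T_BEFORE", "T_TSYMBOL('do_install')"],
    "inherit autotools pkgconfig": # INHERIT inherits, one ISYMBOL per class
        ["T_INHERIT", "T_ISYMBOL('autotools')", "T_ISYMBOL('pkgconfig')"],
}

for line, tokens in examples.items():
    print("%-50s -> %s" % (line, " ".join(tokens)))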
bitbake/lib/bb/parse/parse_c/bitbakescanner.cc (new file, 3209 lines)
File diff suppressed because it is too large
bitbake/lib/bb/parse/parse_c/bitbakescanner.l (new file, 319 lines)
@@ -0,0 +1,319 @@
/* bbf.flex

   written by Marc Singer
   6 January 2005

   This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU General Public License as
   published by the Free Software Foundation; either version 2 of the
   License, or (at your option) any later version.

   This program is distributed in the hope that it will be useful, but
   WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
   USA.

   DESCRIPTION
   -----------

   flex lexer specification for a BitBake input file parser.

   Unfortunately, flex doesn't welcome comments within the rule sets.
   I say unfortunately because this lexer is unreasonably complex and
   comments would make the code much easier to comprehend.

   The BitBake grammar is not regular.  In order to interpret all
   of the available input files, the lexer maintains much state as it
   parses.  There are places where this lexer will emit tokens that
   are invalid.  The parser will tend to catch these.

   The lexer requires C++ at the moment.  The only reason for this has
   to do with a very small amount of managed state.  Producing a C
   lexer should be a reasonably easy task as long as the %reentrant
   option is used.


   NOTES
   -----

   o RVALUES.  There are three kinds of RVALUES.  There are unquoted
     values, double quote enclosed strings, and single quote
     strings.  Quoted strings may contain unescaped quotes (of either
     type), *and* any type may span more than one line by using a
     continuation '\' at the end of the line.  This requires us to
     recognize all types of values with a single expression.
     Moreover, the only reason to quote a value is to include
     trailing or leading whitespace.  Whitespace within a value is
     preserved, ugh.

   o CLASSES.  C_ patterns define classes.  Classes ought not include
     a repetition operator, instead letting the reference to the class
     define the repetition count.

     C_SS - symbol start
     C_SB - symbol body
     C_SP - whitespace

*/

%option never-interactive
%option yylineno
%option noyywrap
%option reentrant stack


%{

#include "token.h"
#include "lexer.h"
#include "bitbakeparser.h"
#include <ctype.h>

extern void *bbparseAlloc(void *(*mallocProc)(size_t));
extern void bbparseFree(void *p, void (*freeProc)(void*));
extern void *bbparseAlloc(void *(*mallocProc)(size_t));
extern void *bbparse(void*, int, token_t, lex_t*);
extern void bbparseTrace(FILE *TraceFILE, char *zTracePrompt);

//static const char* rgbInput;
//static size_t cbInput;

extern "C" {

int lineError;
int errorParse;

enum {
  errorNone = 0,
  errorUnexpectedInput,
  errorUnsupportedFeature,
};

}

#define YY_EXTRA_TYPE lex_t*

/* Read from buffer */
#define YY_INPUT(buf,result,max_size) \
        { yyextra->input(buf, &result, max_size); }

//#define YY_DECL static size_t yylex ()

#define ERROR(e) \
        do { lineError = yylineno; errorParse = e; yyterminate (); } while (0)

static const char* fixup_escapes (const char* sz);

%}


C_SP            [ \t]
COMMENT         #.*\n
OP_ASSIGN       "="
OP_PREDOT       ".="
OP_POSTDOT      "=."
OP_IMMEDIATE    ":="
OP_PREPEND      "=+"
OP_APPEND       "+="
OP_COND         "?="
B_OPEN          "{"
B_CLOSE         "}"

K_ADDTASK       "addtask"
K_ADDHANDLER    "addhandler"
K_AFTER         "after"
K_BEFORE        "before"
K_DEF           "def"
K_INCLUDE       "include"
K_REQUIRE       "require"
K_INHERIT       "inherit"
K_PYTHON        "python"
K_FAKEROOT      "fakeroot"
K_EXPORT        "export"
K_EXPORT_FUNC   "EXPORT_FUNCTIONS"

STRING          \"([^\n\r]|"\\\n")*\"
SSTRING         \'([^\n\r]|"\\\n")*\'
VALUE           ([^'" \t\n])|([^'" \t\n]([^\n]|(\\\n))*[^'" \t\n])

C_SS            [a-zA-Z_]
C_SB            [a-zA-Z0-9_+-./]
REF             $\{{C_SS}{C_SB}*\}
SYMBOL          {C_SS}{C_SB}*
VARIABLE        $?{C_SS}({C_SB}*|{REF})*(\[[a-zA-Z0-9_]*\])?
FILENAME        ([a-zA-Z_./]|{REF})(([-+a-zA-Z0-9_./]*)|{REF})*

PROC            \({C_SP}*\)

%s S_DEF
%s S_DEF_ARGS
%s S_DEF_BODY
%s S_FUNC
%s S_INCLUDE
%s S_INHERIT
%s S_REQUIRE
%s S_PROC
%s S_RVALUE
%s S_TASK

%%

{OP_APPEND}             { BEGIN S_RVALUE;
                          yyextra->accept (T_OP_APPEND); }
{OP_PREPEND}            { BEGIN S_RVALUE;
                          yyextra->accept (T_OP_PREPEND); }
{OP_IMMEDIATE}          { BEGIN S_RVALUE;
                          yyextra->accept (T_OP_IMMEDIATE); }
{OP_ASSIGN}             { BEGIN S_RVALUE;
                          yyextra->accept (T_OP_ASSIGN); }
{OP_PREDOT}             { BEGIN S_RVALUE;
                          yyextra->accept (T_OP_PREDOT); }
{OP_POSTDOT}            { BEGIN S_RVALUE;
                          yyextra->accept (T_OP_POSTDOT); }
{OP_COND}               { BEGIN S_RVALUE;
                          yyextra->accept (T_OP_COND); }

<S_RVALUE>\\\n{C_SP}*   { }
<S_RVALUE>{STRING}      { BEGIN INITIAL;
                          size_t cb = yyleng;
                          while (cb && isspace (yytext[cb - 1]))
                            --cb;
                          yytext[cb - 1] = 0;
                          yyextra->accept (T_STRING, yytext + 1); }
<S_RVALUE>{SSTRING}     { BEGIN INITIAL;
                          size_t cb = yyleng;
                          while (cb && isspace (yytext[cb - 1]))
                            --cb;
                          yytext[cb - 1] = 0;
                          yyextra->accept (T_STRING, yytext + 1); }

<S_RVALUE>{VALUE}       { ERROR (errorUnexpectedInput); }
<S_RVALUE>{C_SP}*\n+    { BEGIN INITIAL;
                          yyextra->accept (T_STRING, NULL); }

{K_INCLUDE}             { BEGIN S_INCLUDE;
                          yyextra->accept (T_INCLUDE); }
{K_REQUIRE}             { BEGIN S_REQUIRE;
                          yyextra->accept (T_REQUIRE); }
{K_INHERIT}             { BEGIN S_INHERIT;
                          yyextra->accept (T_INHERIT); }
{K_ADDTASK}             { BEGIN S_TASK;
                          yyextra->accept (T_ADDTASK); }
{K_ADDHANDLER}          { yyextra->accept (T_ADDHANDLER); }
{K_EXPORT_FUNC}         { BEGIN S_FUNC;
                          yyextra->accept (T_EXPORT_FUNC); }
<S_TASK>{K_BEFORE}      { yyextra->accept (T_BEFORE); }
<S_TASK>{K_AFTER}       { yyextra->accept (T_AFTER); }
<INITIAL>{K_EXPORT}     { yyextra->accept (T_EXPORT); }

<INITIAL>{K_FAKEROOT}   { yyextra->accept (T_FAKEROOT); }
<INITIAL>{K_PYTHON}     { yyextra->accept (T_PYTHON); }
{PROC}{C_SP}*{B_OPEN}{C_SP}*\n*   { BEGIN S_PROC;
                                    yyextra->accept (T_PROC_OPEN); }
<S_PROC>{B_CLOSE}{C_SP}*\n*       { BEGIN INITIAL;
                                    yyextra->accept (T_PROC_CLOSE); }
<S_PROC>([^}][^\n]*)?\n*          { yyextra->accept (T_PROC_BODY, yytext); }

{K_DEF}                 { BEGIN S_DEF; }
<S_DEF>{SYMBOL}         { BEGIN S_DEF_ARGS;
                          yyextra->accept (T_SYMBOL, yytext); }
<S_DEF_ARGS>[^\n:]*:    { yyextra->accept (T_DEF_ARGS, yytext); }
<S_DEF_ARGS>{C_SP}*\n   { BEGIN S_DEF_BODY; }
<S_DEF_BODY>{C_SP}+[^\n]*\n  { yyextra->accept (T_DEF_BODY, yytext); }
<S_DEF_BODY>\n          { yyextra->accept (T_DEF_BODY, yytext); }
<S_DEF_BODY>.           { BEGIN INITIAL; unput (yytext[0]); }

{COMMENT}               { }

<INITIAL>{SYMBOL}       { yyextra->accept (T_SYMBOL, yytext); }
<INITIAL>{VARIABLE}     { yyextra->accept (T_VARIABLE, yytext); }

<S_TASK>{SYMBOL}        { yyextra->accept (T_TSYMBOL, yytext); }
<S_FUNC>{SYMBOL}        { yyextra->accept (T_FSYMBOL, yytext); }
<S_INHERIT>{SYMBOL}     { yyextra->accept (T_ISYMBOL, yytext); }
<S_INCLUDE>{FILENAME}   { BEGIN INITIAL;
                          yyextra->accept (T_ISYMBOL, yytext); }
<S_REQUIRE>{FILENAME}   { BEGIN INITIAL;
                          yyextra->accept (T_ISYMBOL, yytext); }
<S_TASK>\n              { BEGIN INITIAL; }
<S_FUNC>\n              { BEGIN INITIAL; }
<S_INHERIT>\n           { BEGIN INITIAL; }

[ \t\r\n]               /* Insignificant whitespace */

.                       { ERROR (errorUnexpectedInput); }

  /* Check for premature termination */
<<EOF>>                 { return T_EOF; }

%%

void lex_t::accept (int token, const char* sz)
{
    token_t t;
    memset (&t, 0, sizeof (t));
    t.copyString(sz);

    /* tell lemon to parse the token */
    parse (parser, token, t, this);
}

void lex_t::input (char *buf, int *result, int max_size)
{
    /* printf("lex_t::input %p %d\n", buf, max_size); */
    *result = fread(buf, 1, max_size, file);
    /* printf("lex_t::input result %d\n", *result); */
}

int lex_t::line ()const
{
    /* printf("lex_t::line\n"); */
    return yyget_lineno (scanner);
}


extern "C" {

void parse (FILE* file, char* name, PyObject* data, int config)
{
    /* printf("parse bbparseAlloc\n"); */
    void* parser = bbparseAlloc (malloc);
    yyscan_t scanner;
    lex_t lex;

    /* printf("parse yylex_init\n"); */
    yylex_init (&scanner);

    lex.parser = parser;
    lex.scanner = scanner;
    lex.file = file;
    lex.name = name;
    lex.data = data;
    lex.config = config;
    lex.parse = bbparse;
    /*printf("parse yyset_extra\n"); */
    yyset_extra (&lex, scanner);

    /* printf("parse yylex\n"); */
    int result = yylex (scanner);

    /* printf("parse result %d\n", result); */

    lex.accept (0);
    /* printf("parse lex.accept\n"); */
    bbparseTrace (NULL, NULL);
    /* printf("parse bbparseTrace\n"); */

    if (result != T_EOF)
        printf ("premature end of file\n");

    yylex_destroy (scanner);
    bbparseFree (parser, free);
}

}
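Both <S_RVALUE> string rules above trim trailing whitespace, overwrite the closing quote with a NUL terminator, and pass yytext + 1 so the opening quote is skipped. A small Python re-creation of that trimming, offered only as an illustration of the index arithmetic; the helper name is invented, not part of the diff:

def trim_rvalue(yytext):
    """Mimic the <S_RVALUE>{STRING} action: drop trailing whitespace,
    drop the closing quote, skip the opening quote. Illustrative only;
    the real code edits yytext in place."""
    cb = len(yytext)
    while cb and yytext[cb - 1].isspace():
        cb -= 1
    # 'yytext[cb - 1] = 0' in the C action: the closing quote becomes
    # the terminator, and the token handed to the parser starts at
    # yytext + 1.
    return yytext[1:cb - 1]

assert trim_rvalue('"hello world"  ') == 'hello world'
assert trim_rvalue("'single quoted'") == 'single quoted'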
bitbake/lib/bb/parse/parse_c/lexer.h (new file, 48 lines)
@@ -0,0 +1,48 @@
/*
Copyright (C) 2005 Holger Hans Peter Freyther

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
THE USE OR OTHER DEALINGS IN THE SOFTWARE.

*/

#ifndef LEXER_H
#define LEXER_H

#include "Python.h"

extern "C" {

struct lex_t {
    void* parser;
    void* scanner;
    FILE* file;
    char *name;
    PyObject *data;
    int config;

    void* (*parse)(void*, int, token_t, lex_t*);

    void accept(int token, const char* sz = NULL);
    void input(char *buf, int *result, int max_size);
    int line()const;
};

}

#endif
bitbake/lib/bb/parse/parse_c/lexerc.h (new file, 19 lines)
@@ -0,0 +1,19 @@

#ifndef LEXERC_H
#define LEXERC_H

#include <stdio.h>

extern int lineError;
extern int errorParse;

typedef struct {
    void *parser;
    void *scanner;
    FILE *file;
    char *name;
    PyObject *data;
    int config;
} lex_t;

#endif
bitbake/lib/bb/parse/parse_c/python_output.h (new file, 56 lines)
@@ -0,0 +1,56 @@
#ifndef PYTHON_OUTPUT_H
#define PYTHON_OUTPUT_H
/*
Copyright (C) 2006 Holger Hans Peter Freyther

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
THE USE OR OTHER DEALINGS IN THE SOFTWARE.

This is the glue:
   It will be called from the lemon grammar and will call into
   python to set certain things.

*/

extern "C" {

struct lex_t;

extern void e_assign(lex_t*, const char*, const char*);
extern void e_export(lex_t*, const char*);
extern void e_immediate(lex_t*, const char*, const char*);
extern void e_cond(lex_t*, const char*, const char*);
extern void e_prepend(lex_t*, const char*, const char*);
extern void e_append(lex_t*, const char*, const char*);
extern void e_precat(lex_t*, const char*, const char*);
extern void e_postcat(lex_t*, const char*, const char*);

extern void e_addtask(lex_t*, const char*, const char*, const char*);
extern void e_addhandler(lex_t*,const char*);
extern void e_export_func(lex_t*, const char*);
extern void e_inherit(lex_t*, const char*);
extern void e_include(lex_t*, const char*);
extern void e_require(lex_t*, const char*);
extern void e_proc(lex_t*, const char*, const char*);
extern void e_proc_python(lex_t*, const char*, const char*);
extern void e_proc_fakeroot(lex_t*, const char*, const char*);
extern void e_def(lex_t*, const char*, const char*, const char*);
extern void e_parse_error(lex_t*);

}
#endif // PYTHON_OUTPUT_H
bitbake/lib/bb/parse/parse_c/token.h (new file, 96 lines)
@@ -0,0 +1,96 @@
/*
Copyright (C) 2005 Holger Hans Peter Freyther

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
THE USE OR OTHER DEALINGS IN THE SOFTWARE.

*/

#ifndef TOKEN_H
#define TOKEN_H

#include <ctype.h>
#include <string.h>

#define PURE_METHOD


/**
 * Special Value for End Of File Handling. We set it to
 * 1001 so we can have up to 1000 Terminal Symbols on the
 * grammar. Currently we have around 20
 */
#define T_EOF 1001

struct token_t {
    const char* string()const PURE_METHOD;

    static char* concatString(const char* l, const char* r);
    void assignString(char* str);
    void copyString(const char* str);

    void release_this();

private:
    char *m_string;
    size_t m_stringLen;
};

inline const char* token_t::string()const
{
    return m_string;
}

/*
 * append str to the current string
 */
inline char* token_t::concatString(const char* l, const char* r)
{
    size_t cb = (l ? strlen (l) : 0) + strlen (r) + 1;
    char *r_sz = new char[cb];
    *r_sz = 0;

    if (l)
        strcat (r_sz, l);
    strcat (r_sz, r);

    return r_sz;
}

inline void token_t::assignString(char* str)
{
    m_string = str;
    m_stringLen = str ? strlen(str) : 0;
}

inline void token_t::copyString(const char* str)
{
    if( str ) {
        m_stringLen = strlen(str);
        m_string = new char[m_stringLen+1];
        strcpy(m_string, str);
    }
}

inline void token_t::release_this()
{
    /* strings are allocated with new[], so they need delete[] */
    delete [] m_string;
    m_string = 0;
}

#endif
@@ -25,15 +25,12 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import re, bb, os, sys, time, string
import re, bb, os, sys, time
import bb.fetch, bb.build, bb.utils
from bb import data, fetch
from bb import data, fetch, methodpool

from ConfHandler import include, init
from bb.parse import ParseError, resolve_file, ast

# For compatibility
from bb.parse import vars_from_file
from ConfHandler import include, localpath, obtain, init
from bb.parse import ParseError

__func_start_regexp__ = re.compile( r"(((?P<py>python)|(?P<fr>fakeroot))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$" )
__inherit_regexp__ = re.compile( r"inherit\s+(.+)" )
@@ -42,7 +39,7 @@ __addtask_regexp__ = re.compile("addtask\s+(?P<func>\w+)\s*((before\s*(?P<
__addhandler_regexp__ = re.compile( r"addhandler\s+(.+)" )
__def_regexp__ = re.compile( r"def\s+(\w+).*:" )
__python_func_regexp__ = re.compile( r"(\s+.*)|(^$)" )

__word__ = re.compile(r"\S+")

__infunc__ = ""
__inpython__ = False
@@ -50,8 +47,6 @@ __body__ = []
__classname__ = ""
classes = [ None, ]

cached_statements = {}

# We need to indicate EOF to the feeder. This code is so messy that
# factoring it out to a close_parse_file method is out of question.
# We will use the IN_PYTHON_EOF as an indicator to just close the method
@@ -59,10 +54,11 @@ cached_statements = {}
# The two parts using it are tightly integrated anyway
IN_PYTHON_EOF = -9999999999999


__parsed_methods__ = methodpool.get_parsed_dict()

def supports(fn, d):
    return fn[-3:] == ".bb" or fn[-8:] == ".bbclass" or fn[-4:] == ".inc"
    localfn = localpath(fn, d)
    return localfn[-3:] == ".bb" or localfn[-8:] == ".bbclass" or localfn[-4:] == ".inc"

def inherit(files, d):
    __inherit_cache = data.getVar('__inherit_cache', d) or []
@@ -76,42 +72,17 @@ def inherit(files, d):
        if not file in __inherit_cache:
            bb.msg.debug(2, bb.msg.domain.Parsing, "BB %s:%d: inheriting %s" % (fn, lineno, file))
            __inherit_cache.append( file )
            data.setVar('__inherit_cache', __inherit_cache, d)
            include(fn, file, d, "inherit")
            __inherit_cache = data.getVar('__inherit_cache', d) or []
    data.setVar('__inherit_cache', __inherit_cache, d)

def get_statements(filename, absolute_filename, base_name):
    global cached_statements

    try:
        return cached_statements[absolute_filename]
    except KeyError:
        file = open(absolute_filename, 'r')
        statements = ast.StatementGroup()

        lineno = 0
        while 1:
            lineno = lineno + 1
            s = file.readline()
            if not s: break
            s = s.rstrip()
            feeder(lineno, s, filename, base_name, statements)
        if __inpython__:
            # add a blank line to close out any python definition
            feeder(IN_PYTHON_EOF, "", filename, base_name, statements)

        if filename.endswith(".bbclass") or filename.endswith(".inc"):
            cached_statements[absolute_filename] = statements
        return statements

def handle(fn, d, include):
def handle(fn, d, include = 0):
    global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __infunc__, __body__, __residue__
    __body__ = []
    __infunc__ = ""
    __classname__ = ""
    __residue__ = []


    if include == 0:
        bb.msg.debug(2, bb.msg.domain.Parsing, "BB " + fn + ": handle(data)")
    else:
@@ -124,51 +95,127 @@ def handle(fn, d, include):
    if ext == ".bbclass":
        __classname__ = root
        classes.append(__classname__)
        __inherit_cache = data.getVar('__inherit_cache', d) or []
        if not fn in __inherit_cache:
            __inherit_cache.append(fn)
            data.setVar('__inherit_cache', __inherit_cache, d)

    if include != 0:
        oldfile = data.getVar('FILE', d)
    else:
        oldfile = None

    abs_fn = resolve_file(fn, d)
    fn = obtain(fn, d)
    bbpath = (data.getVar('BBPATH', d, 1) or '').split(':')
    if not os.path.isabs(fn):
        f = None
        for p in bbpath:
            j = os.path.join(p, fn)
            if os.access(j, os.R_OK):
                abs_fn = j
                f = open(j, 'r')
                break
        if f is None:
            raise IOError("file not found")
    else:
        f = open(fn,'r')
        abs_fn = fn

    if ext != ".bbclass":
        bbpath.insert(0, os.path.dirname(abs_fn))
        data.setVar('BBPATH', ":".join(bbpath), d)

    if include:
        bb.parse.mark_dependency(d, abs_fn)

    # actual loading
    statements = get_statements(fn, abs_fn, base_name)

    # DONE WITH PARSING... time to evaluate
    if ext != ".bbclass":
        data.setVar('FILE', fn, d)
        i = (data.getVar("INHERIT", d, 1) or "").split()
        if not "base" in i and __classname__ != "base":
            i[0:0] = ["base"]
        inherit(i, d)

    statements.eval(d)

    lineno = 0
    while 1:
        lineno = lineno + 1
        s = f.readline()
        if not s: break
        s = s.rstrip()
        feeder(lineno, s, fn, base_name, d)
    if __inpython__:
        # add a blank line to close out any python definition
        feeder(IN_PYTHON_EOF, "", fn, base_name, d)
    if ext == ".bbclass":
        classes.remove(__classname__)
    else:
        if include == 0:
            return ast.multi_finalize(fn, d)
            data.expandKeys(d)
            data.update_data(d)
            anonqueue = data.getVar("__anonqueue", d, 1) or []
            body = [x['content'] for x in anonqueue]
            flag = { 'python' : 1, 'func' : 1 }
            data.setVar("__anonfunc", "\n".join(body), d)
            data.setVarFlags("__anonfunc", flag, d)
            from bb import build
            try:
                t = data.getVar('T', d)
                data.setVar('T', '${TMPDIR}/', d)
                build.exec_func("__anonfunc", d)
                data.delVar('T', d)
                if t:
                    data.setVar('T', t, d)
            except Exception, e:
                bb.msg.debug(1, bb.msg.domain.Parsing, "executing anonymous function: %s" % e)
                raise
            data.delVar("__anonqueue", d)
            data.delVar("__anonfunc", d)
            set_additional_vars(fn, d, include)
            data.update_data(d)

            all_handlers = {}
            for var in data.getVar('__BBHANDLERS', d) or []:
                # try to add the handler
                # if we added it remember the choice
                handler = data.getVar(var,d)
                if bb.event.register(var,handler) == bb.event.Registered:
                    all_handlers[var] = handler

            for var in data.getVar('__BBTASKS', d) or []:
                deps = data.getVarFlag(var, 'deps', d) or []
                postdeps = data.getVarFlag(var, 'postdeps', d) or []
                bb.build.add_task(var, deps, d)
                for p in postdeps:
                    pdeps = data.getVarFlag(p, 'deps', d) or []
                    pdeps.append(var)
                    data.setVarFlag(p, 'deps', pdeps, d)
                    bb.build.add_task(p, pdeps, d)

            # now add the handlers
            if not len(all_handlers) == 0:
                data.setVar('__all_handlers__', all_handlers, d)

    bbpath.pop(0)
    if oldfile:
        bb.data.setVar("FILE", oldfile, d)

    # we have parsed the bb class now
    if ext == ".bbclass" or ext == ".inc":
        bb.methodpool.get_parsed_dict()[base_name] = 1
        __parsed_methods__[base_name] = 1

    return d

def feeder(lineno, s, fn, root, statements):
def feeder(lineno, s, fn, root, d):
    global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__,__infunc__, __body__, classes, bb, __residue__
    if __infunc__:
        if s == '}':
            __body__.append('')
            ast.handleMethod(statements, __infunc__, lineno, fn, __body__)
            data.setVar(__infunc__, '\n'.join(__body__), d)
            data.setVarFlag(__infunc__, "func", 1, d)
            if __infunc__ == "__anonymous":
                anonqueue = bb.data.getVar("__anonqueue", d) or []
                anonitem = {}
                anonitem["content"] = bb.data.getVar("__anonymous", d)
                anonitem["flags"] = bb.data.getVarFlags("__anonymous", d)
                anonqueue.append(anonitem)
                bb.data.setVar("__anonqueue", anonqueue, d)
                bb.data.delVarFlags("__anonymous", d)
                bb.data.delVar("__anonymous", d)
            __infunc__ = ""
            __body__ = []
        else:
@@ -181,7 +228,19 @@ def feeder(lineno, s, fn, root, statements):
            __body__.append(s)
            return
        else:
            ast.handlePythonMethod(statements, root, __body__, fn)
            # Note we will add root to parsedmethods after having parsed
            # 'this' file. This means we will not parse methods from
            # bb classes twice
            if not root in __parsed_methods__:
                text = '\n'.join(__body__)
                methodpool.insert_method( root, text, fn )
                funcs = data.getVar('__functions__', d) or {}
                if not funcs.has_key( root ):
                    funcs[root] = text
                else:
                    funcs[root] = "%s\n%s" % (funcs[root], text)

                data.setVar('__functions__', funcs, d)
            __body__ = []
            __inpython__ = False

@@ -202,7 +261,20 @@ def feeder(lineno, s, fn, root, statements):
    m = __func_start_regexp__.match(s)
    if m:
        __infunc__ = m.group("func") or "__anonymous"
        ast.handleMethodFlags(statements, __infunc__, m)
        key = __infunc__
        if data.getVar(key, d):
            # clean up old version of this piece of metadata, as its
            # flags could cause problems
            data.setVarFlag(key, 'python', None, d)
            data.setVarFlag(key, 'fakeroot', None, d)
        if m.group("py") is not None:
            data.setVarFlag(key, "python", "1", d)
        else:
            data.delVarFlag(key, "python", d)
        if m.group("fr") is not None:
            data.setVarFlag(key, "fakeroot", "1", d)
        else:
            data.delVarFlag(key, "fakeroot", d)
        return

    m = __def_regexp__.match(s)
@@ -213,26 +285,129 @@ def feeder(lineno, s, fn, root, statements):

    m = __export_func_regexp__.match(s)
    if m:
        ast.handleExportFuncs(statements, m, classes)
        fns = m.group(1)
        n = __word__.findall(fns)
        for f in n:
            allvars = []
            allvars.append(f)
            allvars.append(classes[-1] + "_" + f)

            vars = [[ allvars[0], allvars[1] ]]
            if len(classes) > 1 and classes[-2] is not None:
                allvars.append(classes[-2] + "_" + f)
                vars = []
                vars.append([allvars[2], allvars[1]])
                vars.append([allvars[0], allvars[2]])

            for (var, calledvar) in vars:
                if data.getVar(var, d) and not data.getVarFlag(var, 'export_func', d):
                    continue

                if data.getVar(var, d):
                    data.setVarFlag(var, 'python', None, d)
                    data.setVarFlag(var, 'func', None, d)

                for flag in [ "func", "python" ]:
                    if data.getVarFlag(calledvar, flag, d):
                        data.setVarFlag(var, flag, data.getVarFlag(calledvar, flag, d), d)
                for flag in [ "dirs" ]:
                    if data.getVarFlag(var, flag, d):
                        data.setVarFlag(calledvar, flag, data.getVarFlag(var, flag, d), d)

                if data.getVarFlag(calledvar, "python", d):
                    data.setVar(var, "\tbb.build.exec_func('" + calledvar + "', d)\n", d)
                else:
                    data.setVar(var, "\t" + calledvar + "\n", d)
                data.setVarFlag(var, 'export_func', '1', d)

        return

    m = __addtask_regexp__.match(s)
    if m:
        ast.handleAddTask(statements, m)
        func = m.group("func")
        before = m.group("before")
        after = m.group("after")
        if func is None:
            return
        var = "do_" + func

        data.setVarFlag(var, "task", 1, d)

        bbtasks = data.getVar('__BBTASKS', d) or []
        bbtasks.append(var)
        data.setVar('__BBTASKS', bbtasks, d)

        if after is not None:
            # set up deps for function
            data.setVarFlag(var, "deps", after.split(), d)
        if before is not None:
            # set up things that depend on this func
            data.setVarFlag(var, "postdeps", before.split(), d)
        return

    m = __addhandler_regexp__.match(s)
    if m:
        ast.handleBBHandlers(statements, m)
        fns = m.group(1)
        hs = __word__.findall(fns)
        bbhands = data.getVar('__BBHANDLERS', d) or []
        for h in hs:
            bbhands.append(h)
            data.setVarFlag(h, "handler", 1, d)
        data.setVar('__BBHANDLERS', bbhands, d)
        return

    m = __inherit_regexp__.match(s)
    if m:
        ast.handleInherit(statements, m)

        files = m.group(1)
        n = __word__.findall(files)
        inherit(n, d)
        return

    from bb.parse import ConfHandler
    return ConfHandler.feeder(lineno, s, fn, statements)
    return ConfHandler.feeder(lineno, s, fn, d)

__pkgsplit_cache__={}
def vars_from_file(mypkg, d):
    if not mypkg:
        return (None, None, None)
    if mypkg in __pkgsplit_cache__:
        return __pkgsplit_cache__[mypkg]

    myfile = os.path.splitext(os.path.basename(mypkg))
    parts = myfile[0].split('_')
    __pkgsplit_cache__[mypkg] = parts
    exp = 3 - len(parts)
    tmplist = []
    while exp != 0:
        exp -= 1
        tmplist.append(None)
    parts.extend(tmplist)
    return parts

def set_additional_vars(file, d, include):
    """Deduce rest of variables, e.g. ${A} out of ${SRC_URI}"""

    bb.msg.debug(2, bb.msg.domain.Parsing, "BB %s: set_additional_vars" % file)

    src_uri = data.getVar('SRC_URI', d, 1)
    if not src_uri:
        return

    a = (data.getVar('A', d, 1) or '').split()

    from bb import fetch
    try:
        fetch.init(src_uri.split(), d)
    except fetch.NoMethodError:
        pass
    except bb.MalformedUrl,e:
        raise ParseError("Unable to generate local paths for SRC_URI due to malformed uri: %s" % e)

    a += fetch.localpaths(d)
    del fetch
    data.setVar('A', " ".join(a), d)


# Add us to the handlers list
from bb.parse import handlers
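The addtask branch of feeder() above only records 'deps' and 'postdeps' varflags; handle() then folds every postdep into the dependent task's own deps before calling bb.build.add_task(). A self-contained sketch of that wiring, using plain dicts in place of the bb.data store; the task names are hypothetical:

# Sketch of how "addtask compile after do_fetch before do_install"
# ends up in the task graph, with plain dicts standing in for bb.data.
flags = {}

def addtask(func, before=None, after=None):
    var = "do_" + func
    flags.setdefault(var, {})["task"] = 1
    if after:
        flags[var]["deps"] = after.split()       # tasks we run after
    if before:
        flags[var]["postdeps"] = before.split()  # tasks that run after us

addtask("compile", before="do_install", after="do_fetch")

# handle() later folds postdeps into the dependents' own deps:
for var, f in list(flags.items()):
    for p in f.get("postdeps", []):
        flags.setdefault(p, {}).setdefault("deps", []).append(var)

print(flags["do_compile"]["deps"])   # ['do_fetch']
print(flags["do_install"]["deps"])   # ['do_compile']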
@@ -25,25 +25,66 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import re, bb.data, os, sys
from bb.parse import ParseError, resolve_file, ast
from bb.parse import ParseError

#__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}]+)\s*(?P<colon>:)?(?P<ques>\?)?=\s*(?P<apo>['\"]?)(?P<value>.*)(?P=apo)$")
__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<lazyques>\?\?=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?P<apo>['\"]?)(?P<value>.*)(?P=apo)$")
__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?P<apo>['\"]?)(?P<value>.*)(?P=apo)$")
__include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+(.+)" )

def init(data):
    topdir = bb.data.getVar('TOPDIR', data)
    if not topdir:
        topdir = os.getcwd()
        bb.data.setVar('TOPDIR', topdir, data)
    if not bb.data.getVar('TOPDIR', data):
        bb.data.setVar('TOPDIR', os.getcwd(), data)
    if not bb.data.getVar('BBPATH', data):
        bb.fatal("The BBPATH environment variable must be set")

        bb.data.setVar('BBPATH', os.path.join(sys.prefix, 'share', 'bitbake'), data)

def supports(fn, d):
    return fn[-5:] == ".conf"
    return localpath(fn, d)[-5:] == ".conf"

def localpath(fn, d):
    if os.path.exists(fn):
        return fn

    localfn = None
    try:
        localfn = bb.fetch.localpath(fn, d)
    except bb.MalformedUrl:
        pass

    if not localfn:
        localfn = fn
    return localfn

def obtain(fn, data):
    import sys, bb
    fn = bb.data.expand(fn, data)
    localfn = bb.data.expand(localpath(fn, data), data)

    if localfn != fn:
        dldir = bb.data.getVar('DL_DIR', data, 1)
        if not dldir:
            bb.msg.debug(1, bb.msg.domain.Parsing, "obtain: DL_DIR not defined")
            return localfn
        bb.mkdirhier(dldir)
        try:
            bb.fetch.init([fn])
        except bb.fetch.NoMethodError:
            (type, value, traceback) = sys.exc_info()
            bb.msg.debug(1, bb.msg.domain.Parsing, "obtain: no method: %s" % value)
            return localfn

        try:
            bb.fetch.go(data)
        except bb.fetch.MissingParameterError:
            (type, value, traceback) = sys.exc_info()
            bb.msg.debug(1, bb.msg.domain.Parsing, "obtain: missing parameters: %s" % value)
            return localfn
        except bb.fetch.FetchError:
            (type, value, traceback) = sys.exc_info()
            bb.msg.debug(1, bb.msg.domain.Parsing, "obtain: failed: %s" % value)
            return localfn
    return localfn


def include(oldfn, fn, data, error_out):
    """
@@ -57,13 +98,6 @@ def include(oldfn, fn, data, error_out):
    fn = bb.data.expand(fn, data)
    oldfn = bb.data.expand(oldfn, data)

    if not os.path.isabs(fn):
        dname = os.path.dirname(oldfn)
        bbpath = "%s:%s" % (dname, bb.data.getVar("BBPATH", data, 1))
        abs_fn = bb.which(bbpath, fn)
        if abs_fn:
            fn = abs_fn

    from bb.parse import handle
    try:
        ret = handle(fn, data, True)
@@ -72,22 +106,42 @@ def include(oldfn, fn, data, error_out):
            raise ParseError("Could not %(error_out)s file %(fn)s" % vars() )
        bb.msg.debug(2, bb.msg.domain.Parsing, "CONF file '%s' not found" % fn)

def handle(fn, data, include):
def handle(fn, data, include = 0):
    if include:
        inc_string = "including"
    else:
        inc_string = "reading"
    init(data)

    if include == 0:
        bb.data.inheritFromOS(data)
        oldfile = None
    else:
        oldfile = bb.data.getVar('FILE', data)

    abs_fn = resolve_file(fn, data)
    f = open(abs_fn, 'r')
    fn = obtain(fn, data)
    if not os.path.isabs(fn):
        f = None
        bbpath = bb.data.getVar("BBPATH", data, 1) or []
        for p in bbpath.split(":"):
            currname = os.path.join(p, fn)
            if os.access(currname, os.R_OK):
                f = open(currname, 'r')
                abs_fn = currname
                bb.msg.debug(2, bb.msg.domain.Parsing, "CONF %s %s" % (inc_string, currname))
                break
        if f is None:
            raise IOError("file '%s' not found" % fn)
    else:
        f = open(fn,'r')
        bb.msg.debug(1, bb.msg.domain.Parsing, "CONF %s %s" % (inc_string,fn))
        abs_fn = fn

    if include:
        bb.parse.mark_dependency(data, abs_fn)

    statements = ast.StatementGroup()
    lineno = 0
    bb.data.setVar('FILE', fn, data)
    while 1:
        lineno = lineno + 1
        s = f.readline()
@@ -100,36 +154,53 @@ def handle(fn, data, include):
            s2 = f.readline()[:-1].strip()
            lineno = lineno + 1
            s = s[:-1] + s2
        feeder(lineno, s, fn, statements)
        feeder(lineno, s, fn, data)

    # DONE WITH PARSING... time to evaluate
    bb.data.setVar('FILE', fn, data)
    statements.eval(data)
    if oldfile:
        bb.data.setVar('FILE', oldfile, data)

    return data

def feeder(lineno, s, fn, statements):
def feeder(lineno, s, fn, data):
    m = __config_regexp__.match(s)
    if m:
        groupd = m.groupdict()
        ast.handleData(statements, groupd)
        key = groupd["var"]
        if "exp" in groupd and groupd["exp"] != None:
            bb.data.setVarFlag(key, "export", 1, data)
        if "ques" in groupd and groupd["ques"] != None:
            val = bb.data.getVar(key, data)
            if val == None:
                val = groupd["value"]
        elif "colon" in groupd and groupd["colon"] != None:
            val = bb.data.expand(groupd["value"], data)
        elif "append" in groupd and groupd["append"] != None:
            val = "%s %s" % ((bb.data.getVar(key, data) or ""), groupd["value"])
        elif "prepend" in groupd and groupd["prepend"] != None:
            val = "%s %s" % (groupd["value"], (bb.data.getVar(key, data) or ""))
        elif "postdot" in groupd and groupd["postdot"] != None:
            val = "%s%s" % ((bb.data.getVar(key, data) or ""), groupd["value"])
        elif "predot" in groupd and groupd["predot"] != None:
            val = "%s%s" % (groupd["value"], (bb.data.getVar(key, data) or ""))
        else:
            val = groupd["value"]
        if 'flag' in groupd and groupd['flag'] != None:
            bb.msg.debug(3, bb.msg.domain.Parsing, "setVarFlag(%s, %s, %s, data)" % (key, groupd['flag'], val))
            bb.data.setVarFlag(key, groupd['flag'], val, data)
        else:
            bb.data.setVar(key, val, data)
        return

    m = __include_regexp__.match(s)
    if m:
        ast.handleInclude(statements, m, fn, lineno, False)
        s = bb.data.expand(m.group(1), data)
        bb.msg.debug(3, bb.msg.domain.Parsing, "CONF %s:%d: including %s" % (fn, lineno, s))
        include(fn, s, data, False)
        return

    m = __require_regexp__.match(s)
    if m:
        ast.handleInclude(statements, m, fn, lineno, True)
        return

    m = __export_regexp__.match(s)
    if m:
        ast.handleExport(statements, m)
        s = bb.data.expand(m.group(1), data)
        include(fn, s, data, "include required")
        return

    raise ParseError("%s:%d: unparsed line: '%s'" % (fn, lineno, s));
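The long if/elif chain in feeder() above defines the semantics of every assignment operator matched by __config_regexp__. A worked example of what each form leaves in the store, following the val = ... branches; the values "a" and "b" are hypothetical, and VAR is assumed to be already set to "a":

# What each operator yields when the store already has VAR = "a"
# and the right-hand side is "b".
old = "a"
new = "b"

results = {
    "VAR = 'b'":  new,                   # plain assignment
    "VAR ?= 'b'": old,                   # conditional: kept, already set
    "VAR += 'b'": "%s %s" % (old, new),  # "a b"  append with space
    "VAR =+ 'b'": "%s %s" % (new, old),  # "b a"  prepend with space
    "VAR .= 'b'": "%s%s" % (old, new),   # "ab"   append, no space
    "VAR =. 'b'": "%s%s" % (new, old),   # "ba"   prepend, no space
}
# VAR := 'b' differs only in expanding "b" immediately via bb.data.expand().
for line, value in sorted(results.items()):
    print("%-12s -> %r" % (line, value))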
@@ -1,121 +0,0 @@
# BitBake Persistent Data Store
#
# Copyright (C) 2007        Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import bb, os

try:
    import sqlite3
except ImportError:
    try:
        from pysqlite2 import dbapi2 as sqlite3
    except ImportError:
        bb.msg.fatal(bb.msg.domain.PersistData, "Importing sqlite3 and pysqlite2 failed, please install one of them. Python 2.5 or a 'python-pysqlite2' like package is likely to be what you need.")

sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
    bb.msg.fatal(bb.msg.domain.PersistData, "sqlite3 version 3.3.0 or later is required.")

class PersistData:
    """
    BitBake Persistent Data Store

    Used to store data in a central location such that other threads/tasks can
    access them at some future date.

    The "domain" is used as a key to isolate each data pool and in this
    implementation corresponds to an SQL table. The SQL table consists of a
    simple key and value pair.

    Why sqlite? It handles all the locking issues for us.
    """
    def __init__(self, d):
        self.cachedir = bb.data.getVar("PERSISTENT_DIR", d, True) or bb.data.getVar("CACHE", d, True)
        if self.cachedir in [None, '']:
            bb.msg.fatal(bb.msg.domain.PersistData, "Please set the 'PERSISTENT_DIR' or 'CACHE' variable.")
        try:
            os.stat(self.cachedir)
        except OSError:
            bb.mkdirhier(self.cachedir)

        self.cachefile = os.path.join(self.cachedir,"bb_persist_data.sqlite3")
        bb.msg.debug(1, bb.msg.domain.PersistData, "Using '%s' as the persistent data cache" % self.cachefile)

        self.connection = sqlite3.connect(self.cachefile, timeout=5, isolation_level=None)

    def addDomain(self, domain):
        """
        Should be called before any domain is used
        Creates it if it doesn't exist.
        """
        self.connection.execute("CREATE TABLE IF NOT EXISTS %s(key TEXT, value TEXT);" % domain)

    def delDomain(self, domain):
        """
        Removes a domain and all the data it contains
        """
        self.connection.execute("DROP TABLE IF EXISTS %s;" % domain)

    def getKeyValues(self, domain):
        """
        Return a dictionary of the key/value pairs for a domain
        """
        ret = {}
        data = self.connection.execute("SELECT key, value from %s;" % domain)
        for row in data:
            ret[str(row[0])] = str(row[1])

        return ret

    def getValue(self, domain, key):
        """
        Return the value of a key for a domain
        """
        data = self.connection.execute("SELECT * from %s where key=?;" % domain, [key])
        for row in data:
            return row[1]

    def setValue(self, domain, key, value):
        """
        Sets the value of a key for a domain
        """
        data = self.connection.execute("SELECT * from %s where key=?;" % domain, [key])
        rows = 0
        for row in data:
            rows = rows + 1
        if rows:
            self._execute("UPDATE %s SET value=? WHERE key=?;" % domain, [value, key])
        else:
            self._execute("INSERT into %s(key, value) values (?, ?);" % domain, [key, value])

    def delValue(self, domain, key):
        """
        Deletes a key/value pair
        """
        self._execute("DELETE from %s where key=?;" % domain, [key])

    def _execute(self, *query):
        while True:
            try:
                self.connection.execute(*query)
                return
            except sqlite3.OperationalError, e:
                if 'database is locked' in str(e):
                    continue
                raise
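For reference, typical use of the PersistData class removed above is addDomain() first, then the get/set/del calls; every domain maps to one SQL table. A hedged usage sketch: 'd' is assumed to be a populated BitBake datastore with PERSISTENT_DIR or CACHE set, and the domain name is only an example.

# Hypothetical usage of the PersistData store removed above.
pd = PersistData(d)
pd.addDomain("BB_URI_HEADREVS")          # CREATE TABLE IF NOT EXISTS ...
pd.setValue("BB_URI_HEADREVS", "git://example.org/repo", "1234abcd")
print(pd.getValue("BB_URI_HEADREVS", "git://example.org/repo"))
print(pd.getKeyValues("BB_URI_HEADREVS"))   # {'git://...': '1234abcd'}
pd.delValue("BB_URI_HEADREVS", "git://example.org/repo")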
@@ -21,7 +21,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import re
import os, re
from bb import data, utils
import bb

@@ -31,12 +31,12 @@ class NoProvider(Exception):
class NoRProvider(Exception):
    """Exception raised when no provider of a runtime dependency can be found"""


def sortPriorities(pn, dataCache, pkg_pn = None):
def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
    """
    Reorder pkg_pn by file priority and default preference
    If there is a PREFERRED_VERSION, find the highest-priority bbfile
    providing that version. If not, find the latest version provided by
    a bbfile in the highest-priority set.
    """

    if not pkg_pn:
        pkg_pn = dataCache.pkg_pn

@@ -44,69 +44,36 @@ def sortPriorities(pn, dataCache, pkg_pn = None):
    priorities = {}
    for f in files:
        priority = dataCache.bbfile_priority[f]
        preference = dataCache.pkg_dp[f]
        if priority not in priorities:
            priorities[priority] = {}
        if preference not in priorities[priority]:
            priorities[priority][preference] = []
        priorities[priority][preference].append(f)
            priorities[priority] = []
        priorities[priority].append(f)
    p_list = priorities.keys()
    p_list.sort(lambda a, b: a - b)
    tmp_pn = []
    for pri in sorted(priorities, lambda a, b: a - b):
        tmp_pref = []
        for pref in sorted(priorities[pri], lambda a, b: b - a):
            tmp_pref.extend(priorities[pri][pref])
        tmp_pn = [tmp_pref] + tmp_pn

    return tmp_pn

def preferredVersionMatch(pe, pv, pr, preferred_e, preferred_v, preferred_r):
    """
    Check if the version pe,pv,pr is the preferred one.
    If there is a preferred version defined and it ends with '%', then pv has to start with that version after removing the '%'
    """
    if (pr == preferred_r or preferred_r == None):
        if (pe == preferred_e or preferred_e == None):
            if preferred_v == pv:
                return True
            if preferred_v != None and preferred_v.endswith('%') and pv.startswith(preferred_v[:len(preferred_v)-1]):
                return True
    return False

def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
    """
    Find the first provider in pkg_pn with a PREFERRED_VERSION set.
    """
    for p in p_list:
        tmp_pn = [priorities[p]] + tmp_pn

    preferred_file = None
    preferred_ver = None

    localdata = data.createCopy(cfgData)
    bb.data.setVar('OVERRIDES', "pn-%s:%s:%s" % (pn, pn, data.getVar('OVERRIDES', localdata)), localdata)
    bb.data.setVar('OVERRIDES', "%s:%s" % (pn, data.getVar('OVERRIDES', localdata)), localdata)
    bb.data.update_data(localdata)

    preferred_v = bb.data.getVar('PREFERRED_VERSION_%s' % pn, localdata, True)
    if preferred_v:
        m = re.match('(\d+:)*(.*)(_.*)*', preferred_v)
        m = re.match('(.*)_(.*)', preferred_v)
        if m:
            if m.group(1):
                preferred_e = int(m.group(1)[:-1])
            else:
                preferred_e = None
            preferred_v = m.group(2)
            if m.group(3):
                preferred_r = m.group(3)[1:]
            else:
                preferred_r = None
            preferred_v = m.group(1)
            preferred_r = m.group(2)
        else:
            preferred_e = None
            preferred_r = None

        for file_set in pkg_pn:
        for file_set in tmp_pn:
            for f in file_set:
                pe,pv,pr = dataCache.pkg_pepvpr[f]
                if preferredVersionMatch(pe, pv, pr, preferred_e, preferred_v, preferred_r):
                pv,pr = dataCache.pkg_pvpr[f]
                if preferred_v == pv and (preferred_r == pr or preferred_r == None):
                    preferred_file = f
                    preferred_ver = (pe, pv, pr)
                    preferred_ver = (pv, pr)
                    break
            if preferred_file:
                break;
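The '%' handling in preferredVersionMatch() above turns PREFERRED_VERSION into a prefix wildcard, as its docstring describes. A few worked cases with hypothetical version strings:

# Prefix-wildcard matching as implemented by preferredVersionMatch:
# a PREFERRED_VERSION of "2.6.%" accepts any pv starting with "2.6.".
preferred_v = "2.6.%"
for pv in ("2.6.21", "2.6.9", "2.4.32"):
    ok = preferred_v.endswith('%') and pv.startswith(preferred_v[:len(preferred_v) - 1])
    print("%-8s -> %s" % (pv, ok))   # 2.6.21 True, 2.6.9 True, 2.4.32 False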
@@ -114,8 +81,6 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
|
||||
pv_str = '%s-%s' % (preferred_v, preferred_r)
|
||||
else:
|
||||
pv_str = preferred_v
|
||||
if not (preferred_e is None):
|
||||
pv_str = '%s:%s' % (preferred_e, pv_str)
|
||||
itemstr = ""
|
||||
if item:
|
||||
itemstr = " (for item %s)" % item
|
||||
@@ -124,62 +89,37 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
|
||||
else:
|
||||
bb.msg.debug(1, bb.msg.domain.Provider, "selecting %s as PREFERRED_VERSION %s of package %s%s" % (preferred_file, pv_str, pn, itemstr))
|
||||
|
||||
return (preferred_ver, preferred_file)
|
||||
del localdata
|
||||
|
||||
|
||||
def findLatestProvider(pn, cfgData, dataCache, file_set):
|
||||
"""
|
||||
Return the highest version of the providers in file_set.
|
||||
Take default preferences into account.
|
||||
"""
|
||||
# get highest priority file set
|
||||
files = tmp_pn[0]
|
||||
latest = None
|
||||
latest_p = 0
|
||||
latest_f = None
|
||||
for file_name in file_set:
|
||||
pe,pv,pr = dataCache.pkg_pepvpr[file_name]
|
||||
for file_name in files:
|
||||
pv,pr = dataCache.pkg_pvpr[file_name]
|
||||
dp = dataCache.pkg_dp[file_name]
|
||||
|
||||
if (latest is None) or ((latest_p == dp) and (utils.vercmp(latest, (pe, pv, pr)) < 0)) or (dp > latest_p):
|
||||
latest = (pe, pv, pr)
|
||||
if (latest is None) or ((latest_p == dp) and (utils.vercmp(latest, (pv, pr)) < 0)) or (dp > latest_p):
|
||||
latest = (pv, pr)
|
||||
latest_f = file_name
|
||||
latest_p = dp
|
||||
|
||||
return (latest, latest_f)
|
||||
|
||||
|
||||
def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
|
||||
"""
|
||||
If there is a PREFERRED_VERSION, find the highest-priority bbfile
|
||||
providing that version. If not, find the latest version provided by
|
||||
an bbfile in the highest-priority set.
|
||||
"""
|
||||
|
||||
sortpkg_pn = sortPriorities(pn, dataCache, pkg_pn)
|
||||
# Find the highest priority provider with a PREFERRED_VERSION set
|
||||
(preferred_ver, preferred_file) = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn, item)
|
||||
# Find the latest version of the highest priority provider
|
||||
(latest, latest_f) = findLatestProvider(pn, cfgData, dataCache, sortpkg_pn[0])
|
||||
|
||||
if preferred_file is None:
|
||||
preferred_file = latest_f
|
||||
preferred_ver = latest
|
||||
|
||||
return (latest, latest_f, preferred_ver, preferred_file)
|
||||
return (latest,latest_f,preferred_ver, preferred_file)
|
||||
|
||||
|
def _filterProviders(providers, item, cfgData, dataCache):
#
# RP - build_cache_fail needs to move elsewhere
#
def filterProviders(providers, item, cfgData, dataCache, build_cache_fail = {}):
    """
    Take a list of providers and filter/reorder according to the
    environment variables and previous build results
    """
    eligible = []
    preferred_versions = {}
    sortpkg_pn = {}

    # The order of providers depends on the order of the files on the disk
    # up to here. Sort pkg_pn to make dependency issues reproducible rather
    # than effectively random.
    providers.sort()

    # Collate providers by PN
    pkg_pn = {}
@@ -191,24 +131,21 @@ def _filterProviders(providers, item, cfgData, dataCache):

    bb.msg.debug(1, bb.msg.domain.Provider, "providers for %s are: %s" % (item, pkg_pn.keys()))

    # First add PREFERRED_VERSIONS
    for pn in pkg_pn:
        sortpkg_pn[pn] = sortPriorities(pn, dataCache, pkg_pn)
        preferred_versions[pn] = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn[pn], item)
        if preferred_versions[pn][1]:
            eligible.append(preferred_versions[pn][1])

    # Now add latest verisons
    for pn in sortpkg_pn:
        if pn in preferred_versions and preferred_versions[pn][1]:
            continue
        preferred_versions[pn] = findLatestProvider(pn, cfgData, dataCache, sortpkg_pn[pn][0])
    for pn in pkg_pn.keys():
        preferred_versions[pn] = bb.providers.findBestProvider(pn, cfgData, dataCache, pkg_pn, item)[2:4]
        eligible.append(preferred_versions[pn][1])


    for p in eligible:
        if p in build_cache_fail:
            bb.msg.debug(1, bb.msg.domain.Provider, "rejecting already-failed %s" % p)
            eligible.remove(p)

    if len(eligible) == 0:
        bb.msg.error(bb.msg.domain.Provider, "no eligible providers for %s" % item)
        return 0


    # If pn == item, give it a slight default preference
    # This means PREFERRED_PROVIDER_foobar defaults to foobar if available
    for p in providers:
@@ -221,74 +158,31 @@ def _filterProviders(providers, item, cfgData, dataCache):
            eligible.remove(fn)
            eligible = [fn] + eligible

    return eligible


def filterProviders(providers, item, cfgData, dataCache):
    """
    Take a list of providers and filter/reorder according to the
    environment variables and previous build results
    Takes a "normal" target item
    """

    eligible = _filterProviders(providers, item, cfgData, dataCache)

    prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % item, cfgData, 1)
    if prefervar:
        dataCache.preferred[item] = prefervar

    foundUnique = False
    if item in dataCache.preferred:
        for p in eligible:
            pn = dataCache.pkg_fn[p]
            if dataCache.preferred[item] == pn:
                bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy %s due to PREFERRED_PROVIDERS" % (pn, item))
                eligible.remove(p)
                eligible = [p] + eligible
                foundUnique = True
                break

    bb.msg.debug(1, bb.msg.domain.Provider, "sorted providers for %s are: %s" % (item, eligible))

    return eligible, foundUnique

def filterProvidersRunTime(providers, item, cfgData, dataCache):
    """
    Take a list of providers and filter/reorder according to the
    environment variables and previous build results
    Takes a "runtime" target item
    """

    eligible = _filterProviders(providers, item, cfgData, dataCache)

    # Should use dataCache.preferred here?
    preferred = []
    preferred_vars = []
    for p in eligible:
    # look to see if one of them is already staged, or marked as preferred.
    # if so, bump it to the head of the queue
    for p in providers:
        pn = dataCache.pkg_fn[p]
        provides = dataCache.pn_provides[pn]
        for provide in provides:
            bb.msg.note(2, bb.msg.domain.Provider, "checking PREFERRED_PROVIDER_%s" % (provide))
            prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % provide, cfgData, 1)
            if prefervar == pn:
                var = "PREFERRED_PROVIDER_%s = %s" % (provide, prefervar)
                bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy runtime %s due to %s" % (pn, item, var))
                preferred_vars.append(var)
                eligible.remove(p)
                eligible = [p] + eligible
                preferred.append(p)
                break
        pv, pr = dataCache.pkg_pvpr[p]

    numberPreferred = len(preferred)
        stamp = '%s.do_populate_staging' % dataCache.stamp[p]
        if os.path.exists(stamp):
            (newvers, fn) = preferred_versions[pn]
            if not fn in eligible:
                # package was made ineligible by already-failed check
                continue
            oldver = "%s-%s" % (pv, pr)
            newver = '-'.join(newvers)
            if (newver != oldver):
                extra_chat = "%s (%s) already staged but upgrading to %s to satisfy %s" % (pn, oldver, newver, item)
            else:
                extra_chat = "Selecting already-staged %s (%s) to satisfy %s" % (pn, oldver, item)

    if numberPreferred > 1:
        bb.msg.error(bb.msg.domain.Provider, "Conflicting PREFERRED_PROVIDER entries were found which resulted in an attempt to select multiple providers (%s) for runtime dependecy %s\nThe entries resulting in this conflict were: %s" % (preferred, item, preferred_vars))
            bb.msg.note(2, bb.msg.domain.Provider, "%s" % extra_chat)
            eligible.remove(fn)
            eligible = [fn] + eligible
            break

    bb.msg.debug(1, bb.msg.domain.Provider, "sorted providers for %s are: %s" % (item, eligible))

    return eligible, numberPreferred

regexp_cache = {}
    return eligible

def getRuntimeProviders(dataCache, rdepend):
    """
@@ -307,16 +201,7 @@ def getRuntimeProviders(dataCache, rdepend):

    # Only search dynamic packages if we can't find anything in other variables
    for pattern in dataCache.packages_dynamic:
        pattern = pattern.replace('+', "\+")
        if pattern in regexp_cache:
            regexp = regexp_cache[pattern]
        else:
            try:
                regexp = re.compile(pattern)
            except:
                bb.msg.error(bb.msg.domain.Provider, "Error parsing re expression: %s" % pattern)
                raise
            regexp_cache[pattern] = regexp
        regexp = re.compile(pattern)
        if regexp.match(rdepend):
            rproviders += dataCache.packages_dynamic[pattern]

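Editor's note: a minimal sketch of the compiled-regex caching that getRuntimeProviders gains above. PACKAGES_DYNAMIC patterns are matched against every runtime dependency, so compiling each pattern once and reusing it is the point of regexp_cache; the helper name below is hypothetical.

# Editor's sketch (illustrative): cache compiled patterns so a hot loop
# does not call re.compile() repeatedly for the same PACKAGES_DYNAMIC entry.
import re

regexp_cache = {}

def match_dynamic(pattern, rdepend):
    regexp = regexp_cache.get(pattern)
    if regexp is None:
        regexp = re.compile(pattern)
        regexp_cache[pattern] = regexp
    return regexp.match(rdepend) is not None

print match_dynamic(r'locale-base-.*', 'locale-base-en-gb')   # True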
(File diff suppressed because it is too large)
@@ -1,181 +0,0 @@
#
# BitBake 'dummy' Passthrough Server
#
# Copyright (C) 2006 - 2007 Michael 'Mickey' Lauer
# Copyright (C) 2006 - 2008 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

"""
This module implements an xmlrpc server for BitBake.

Use this by deriving a class from BitBakeXMLRPCServer and then adding
methods which you want to "export" via XMLRPC. If the methods have the
prefix xmlrpc_, then registering those function will happen automatically,
if not, you need to call register_function.

Use register_idle_function() to add a function which the xmlrpc server
calls from within server_forever when no requests are pending. Make sure
that those functions are non-blocking or else you will introduce latency
in the server's main loop.
"""

import time
import bb
from bb.ui import uievent
import xmlrpclib
import pickle

DEBUG = False

from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import inspect, select

class BitBakeServerCommands():
    def __init__(self, server, cooker):
        self.cooker = cooker
        self.server = server

    def runCommand(self, command):
        """
        Run a cooker command on the server
        """
        #print "Running Command %s" % command
        return self.cooker.command.runCommand(command)

    def terminateServer(self):
        """
        Trigger the server to quit
        """
        self.server.server_exit()
        #print "Server (cooker) exitting"
        return

    def ping(self):
        """
        Dummy method which can be used to check the server is still alive
        """
        return True

eventQueue = []

class BBUIEventQueue:
    class event:
        def __init__(self, parent):
            self.parent = parent
        @staticmethod
        def send(event):
            bb.server.none.eventQueue.append(pickle.loads(event))
        @staticmethod
        def quit():
            return

    def __init__(self, BBServer):
        self.eventQueue = bb.server.none.eventQueue
        self.BBServer = BBServer
        self.EventHandle = bb.event.register_UIHhandler(self)

    def getEvent(self):
        if len(self.eventQueue) == 0:
            return None

        return self.eventQueue.pop(0)

    def waitEvent(self, delay):
        event = self.getEvent()
        if event:
            return event
        self.BBServer.idle_commands(delay)
        return self.getEvent()

    def queue_event(self, event):
        self.eventQueue.append(event)

    def system_quit( self ):
        bb.event.unregister_UIHhandler(self.EventHandle)

class BitBakeServer():
    # remove this when you're done with debugging
    # allow_reuse_address = True

    def __init__(self, cooker):
        self._idlefuns = {}
        self.commands = BitBakeServerCommands(self, cooker)

    def register_idle_function(self, function, data):
        """Register a function to be called while the server is idle"""
        assert callable(function)
        self._idlefuns[function] = data

    def idle_commands(self, delay):
        #print "Idle queue length %s" % len(self._idlefuns)
        #print "Idle timeout, running idle functions"
        #if len(self._idlefuns) == 0:
        nextsleep = delay
        for function, data in self._idlefuns.items():
            try:
                retval = function(self, data, False)
                #print "Idle function returned %s" % (retval)
                if retval is False:
                    del self._idlefuns[function]
                elif retval is True:
                    nextsleep = None
                elif nextsleep is None:
                    continue
                elif retval < nextsleep:
                    nextsleep = retval
            except SystemExit:
                raise
            except:
                import traceback
                traceback.print_exc()
                pass
        if nextsleep is not None:
            #print "Sleeping for %s (%s)" % (nextsleep, delay)
            time.sleep(nextsleep)

    def server_exit(self):
        # Tell idle functions we're exiting
        for function, data in self._idlefuns.items():
            try:
                retval = function(self, data, True)
            except:
                pass

class BitbakeServerInfo():
    def __init__(self, server):
        self.server = server
        self.commands = server.commands

class BitBakeServerFork():
    def __init__(self, serverinfo, command, logfile):
        serverinfo.forkCommand = command
        serverinfo.logfile = logfile

class BitBakeServerConnection():
    def __init__(self, serverinfo):
        self.server = serverinfo.server
        self.connection = serverinfo.commands
        self.events = bb.server.none.BBUIEventQueue(self.server)

    def terminate(self):
        try:
            self.events.system_quit()
        except:
            pass
        try:
            self.connection.terminateServer()
        except:
            pass

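Editor's note: both server implementations in this commit share the same idle-callback contract, which the deleted module's idle_commands() implements. A minimal sketch of that contract follows, under the assumption that a callback returns False (unregister it), True (poll again immediately) or a number of seconds to sleep; the real callbacks also receive the server and an abort flag, which this sketch omits.

# Editor's sketch (illustrative): the idle-callback loop, simplified.
import time

def run_idle(idlefuns, delay):
    nextsleep = delay
    for function, data in idlefuns.items():
        retval = function(data)
        if retval is False:
            del idlefuns[function]   # callback finished, drop it
        elif retval is True:
            nextsleep = None         # more work pending, don't sleep
        elif nextsleep is not None and retval < nextsleep:
            nextsleep = retval       # wake up sooner than the default
    if nextsleep is not None:
        time.sleep(nextsleep)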
@@ -1,187 +0,0 @@
#
# BitBake XMLRPC Server
#
# Copyright (C) 2006 - 2007 Michael 'Mickey' Lauer
# Copyright (C) 2006 - 2008 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

"""
This module implements an xmlrpc server for BitBake.

Use this by deriving a class from BitBakeXMLRPCServer and then adding
methods which you want to "export" via XMLRPC. If the methods have the
prefix xmlrpc_, then registering those function will happen automatically,
if not, you need to call register_function.

Use register_idle_function() to add a function which the xmlrpc server
calls from within server_forever when no requests are pending. Make sure
that those functions are non-blocking or else you will introduce latency
in the server's main loop.
"""

import bb
import xmlrpclib, sys
from bb import daemonize
from bb.ui import uievent

DEBUG = False

from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import inspect, select

if sys.hexversion < 0x020600F0:
    print "Sorry, python 2.6 or later is required for bitbake's XMLRPC mode"
    sys.exit(1)

class BitBakeServerCommands():
    def __init__(self, server, cooker):
        self.cooker = cooker
        self.server = server

    def registerEventHandler(self, host, port):
        """
        Register a remote UI Event Handler
        """
        s = xmlrpclib.Server("http://%s:%d" % (host, port), allow_none=True)
        return bb.event.register_UIHhandler(s)

    def unregisterEventHandler(self, handlerNum):
        """
        Unregister a remote UI Event Handler
        """
        return bb.event.unregister_UIHhandler(handlerNum)

    def runCommand(self, command):
        """
        Run a cooker command on the server
        """
        return self.cooker.command.runCommand(command)

    def terminateServer(self):
        """
        Trigger the server to quit
        """
        self.server.quit = True
        print "Server (cooker) exitting"
        return

    def ping(self):
        """
        Dummy method which can be used to check the server is still alive
        """
        return True

class BitBakeServer(SimpleXMLRPCServer):
    # remove this when you're done with debugging
    # allow_reuse_address = True

    def __init__(self, cooker, interface = ("localhost", 0)):
        """
        Constructor
        """
        SimpleXMLRPCServer.__init__(self, interface,
                                    requestHandler=SimpleXMLRPCRequestHandler,
                                    logRequests=False, allow_none=True)
        self._idlefuns = {}
        self.host, self.port = self.socket.getsockname()
        #self.register_introspection_functions()
        commands = BitBakeServerCommands(self, cooker)
        self.autoregister_all_functions(commands, "")

    def autoregister_all_functions(self, context, prefix):
        """
        Convenience method for registering all functions in the scope
        of this class that start with a common prefix
        """
        methodlist = inspect.getmembers(context, inspect.ismethod)
        for name, method in methodlist:
            if name.startswith(prefix):
                self.register_function(method, name[len(prefix):])

    def register_idle_function(self, function, data):
        """Register a function to be called while the server is idle"""
        assert callable(function)
        self._idlefuns[function] = data

    def serve_forever(self):
        """
        Serve Requests. Overloaded to honor a quit command
        """
        self.quit = False
        self.timeout = 0 # Run Idle calls for our first callback
        while not self.quit:
            #print "Idle queue length %s" % len(self._idlefuns)
            self.handle_request()
            #print "Idle timeout, running idle functions"
            nextsleep = None
            for function, data in self._idlefuns.items():
                try:
                    retval = function(self, data, False)
                    if retval is False:
                        del self._idlefuns[function]
                    elif retval is True:
                        nextsleep = 0
                    elif nextsleep is 0:
                        continue
                    elif nextsleep is None:
                        nextsleep = retval
                    elif retval < nextsleep:
                        nextsleep = retval
                except SystemExit:
                    raise
                except:
                    import traceback
                    traceback.print_exc()
                    pass
            if nextsleep is None and len(self._idlefuns) > 0:
                nextsleep = 0
            self.timeout = nextsleep
        # Tell idle functions we're exiting
        for function, data in self._idlefuns.items():
            try:
                retval = function(self, data, True)
            except:
                pass

        self.server_close()
        return

class BitbakeServerInfo():
    def __init__(self, server):
        self.host = server.host
        self.port = server.port

class BitBakeServerFork():
    def __init__(self, serverinfo, command, logfile):
        daemonize.createDaemon(command, logfile)

class BitBakeServerConnection():
    def __init__(self, serverinfo):
        self.connection = xmlrpclib.Server("http://%s:%s" % (serverinfo.host, serverinfo.port), allow_none=True)
        self.events = uievent.BBUIEventQueue(self.connection)

    def terminate(self):
        # Don't wait for server indefinitely
        import socket
        socket.setdefaulttimeout(2)
        try:
            self.events.system_quit()
        except:
            pass
        try:
            self.connection.terminateServer()
        except:
            pass

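Editor's note: autoregister_all_functions() above is the piece that exports every BitBakeServerCommands method over XMLRPC. A minimal standalone sketch of the same prefix-based registration; the Commands class here is a stand-in, not BitBake's.

# Editor's sketch (illustrative): register all methods of an object whose
# names start with a given prefix, exactly as the deleted server does.
import inspect
from SimpleXMLRPCServer import SimpleXMLRPCServer

class Commands:
    def ping(self):
        return True

server = SimpleXMLRPCServer(("localhost", 0), logRequests=False, allow_none=True)
prefix = ""   # an empty prefix registers every method under its own name
for name, method in inspect.getmembers(Commands(), inspect.ismethod):
    if name.startswith(prefix):
        server.register_function(method, name[len(prefix):])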
@@ -68,6 +68,7 @@ leave_mainloop = False
last_exception = None
cooker = None
parsed = False
initdata = None
debug = os.environ.get( "BBSHELL_DEBUG", "" )

##########################################################################
@@ -103,11 +104,10 @@ class BitBakeShellCommands:

    def _findProvider( self, item ):
        self._checkParsed()
        # Need to use taskData for this information
        preferred = data.getVar( "PREFERRED_PROVIDER_%s" % item, cooker.configuration.data, 1 )
        if not preferred: preferred = item
        try:
            lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status)
            lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status, cooker.build_cache_fail)
        except KeyError:
            if item in cooker.status.providers:
                pf = cooker.status.providers[item][0]
@@ -144,48 +144,53 @@ class BitBakeShellCommands:

    def build( self, params, cmd = "build" ):
        """Build a providee"""
        global last_exception
        globexpr = params[0]
        self._checkParsed()
        names = globfilter( cooker.status.pkg_pn, globexpr )
        names = globfilter( cooker.status.pkg_pn.keys(), globexpr )
        if len( names ) == 0: names = [ globexpr ]
        print "SHELL: Building %s" % ' '.join( names )

        oldcmd = cooker.configuration.cmd
        cooker.configuration.cmd = cmd
        cooker.build_cache = []
        cooker.build_cache_fail = []

        td = taskdata.TaskData(cooker.configuration.abort)
        localdata = data.createCopy(cooker.configuration.data)
        data.update_data(localdata)
        data.expandKeys(localdata)

        try:
            tasks = []
            for name in names:
                td.add_provider(localdata, cooker.status, name)
                td.add_provider(cooker.configuration.data, cooker.status, name)
                providers = td.get_provider(name)

                if len(providers) == 0:
                    raise Providers.NoProvider

                tasks.append([name, "do_%s" % cmd])
                tasks.append([name, "do_%s" % cooker.configuration.cmd])

            td.add_unresolved(localdata, cooker.status)
            td.add_unresolved(cooker.configuration.data, cooker.status)

            rq = runqueue.RunQueue(cooker, localdata, cooker.status, td, tasks)
            rq.prepare_runqueue()
            rq.execute_runqueue()
            rq = runqueue.RunQueue()
            rq.prepare_runqueue(cooker, cooker.configuration.data, cooker.status, td, tasks)
            rq.execute_runqueue(cooker, cooker.configuration.data, cooker.status, td, tasks)

        except Providers.NoProvider:
            print "ERROR: No Provider"
            global last_exception
            last_exception = Providers.NoProvider

        except runqueue.TaskFailure, fnids:
            for fnid in fnids:
                print "ERROR: '%s' failed" % td.fn_index[fnid]
            global last_exception
            last_exception = runqueue.TaskFailure

        except build.EventException, e:
            print "ERROR: Couldn't build '%s'" % names
            global last_exception
            last_exception = e

        cooker.configuration.cmd = oldcmd

    build.usage = "<providee>"

@@ -204,11 +209,6 @@ class BitBakeShellCommands:
        self.build( params, "configure" )
    configure.usage = "<providee>"

    def install( self, params ):
        """Execute 'install' on a providee"""
        self.build( params, "install" )
    install.usage = "<providee>"

    def edit( self, params ):
        """Call $EDITOR on a providee"""
        name = params[0]
@@ -220,8 +220,8 @@ class BitBakeShellCommands:
    edit.usage = "<providee>"

    def environment( self, params ):
        """Dump out the outer BitBake environment"""
        cooker.showEnvironment()
        """Dump out the outer BitBake environment (see bbread)"""
        data.emit_env(sys.__stdout__, cooker.configuration.data, True)

    def exit_( self, params ):
        """Leave the BitBake Shell"""
@@ -236,19 +236,40 @@ class BitBakeShellCommands:

    def fileBuild( self, params, cmd = "build" ):
        """Parse and build a .bb file"""
        global last_exception
        name = params[0]
        bf = completeFilePath( name )
        print "SHELL: Calling '%s' on '%s'" % ( cmd, bf )

        oldcmd = cooker.configuration.cmd
        cooker.configuration.cmd = cmd
        cooker.build_cache = []
        cooker.build_cache_fail = []

        thisdata = copy.deepcopy( initdata )
        # Caution: parse.handle modifies thisdata, hence it would
        # lead to pollution cooker.configuration.data, which is
        # why we use it on a safe copy we obtained from cooker right after
        # parsing the initial *.conf files
        try:
            cooker.buildFile(bf, cmd)
            bbfile_data = parse.handle( bf, thisdata )
        except parse.ParseError:
            print "ERROR: Unable to open or parse '%s'" % bf
        except build.EventException, e:
            print "ERROR: Couldn't build '%s'" % name
            last_exception = e
        else:
            # Remove stamp for target if force mode active
            if cooker.configuration.force:
                bb.msg.note(2, bb.msg.domain.RunQueue, "Remove stamp %s, %s" % (cmd, bf))
                bb.build.del_stamp('do_%s' % cmd, bbfile_data)

            item = data.getVar('PN', bbfile_data, 1)
            data.setVar( "_task_cache", [], bbfile_data ) # force
            try:
                cooker.tryBuildPackage( os.path.abspath( bf ), item, cmd, bbfile_data, True )
            except build.EventException, e:
                print "ERROR: Couldn't build '%s'" % name
                global last_exception
                last_exception = e

        cooker.configuration.cmd = oldcmd
    fileBuild.usage = "<bbfile>"

    def fileClean( self, params ):
@@ -273,7 +294,7 @@ class BitBakeShellCommands:
        print "SHELL: Parsing '%s'" % bbfile
        parse.update_mtime( bbfile )
        cooker.bb_cache.cacheValidUpdate(bbfile)
        fromCache = cooker.bb_cache.loadData(bbfile, cooker.configuration.data, cooker.status)
        fromCache = cooker.bb_cache.loadData(bbfile, cooker.configuration.data)
        cooker.bb_cache.sync()
        if False: #fromCache:
            print "SHELL: File has not been updated, not reparsing"
@@ -294,7 +315,9 @@ class BitBakeShellCommands:
    def help( self, params ):
        """Show a comprehensive list of commands and their purpose"""
        print "="*30, "Available Commands", "="*30
        for cmd in sorted(cmds):
        allcmds = cmds.keys()
        allcmds.sort()
        for cmd in allcmds:
            function,numparams,usage,helptext = cmds[cmd]
            print "| %s | %s" % (usage.ljust(30), helptext)
        print "="*78
@@ -320,10 +343,10 @@ class BitBakeShellCommands:
        what, globexpr = params
        if what == "files":
            self._checkParsed()
            for key in globfilter( cooker.status.pkg_fn, globexpr ): print key
            for key in globfilter( cooker.status.pkg_fn.keys(), globexpr ): print key
        elif what == "providers":
            self._checkParsed()
            for key in globfilter( cooker.status.pkg_pn, globexpr ): print key
            for key in globfilter( cooker.status.pkg_pn.keys(), globexpr ): print key
        else:
            print "Usage: match %s" % self.print_.usage
    match.usage = "<files|providers> <glob>"
@@ -375,11 +398,6 @@ SRC_URI = ""
        os.system( "%s %s/%s" % ( os.environ.get( "EDITOR" ), fulldirname, filename ) )
    new.usage = "<directory> <filename>"

    def package( self, params ):
        """Execute 'package' on a providee"""
        self.build( params, "package" )
    package.usage = "<providee>"

    def pasteBin( self, params ):
        """Send a command + output buffer to the pastebin at http://rafb.net/paste"""
        index = params[0]
@@ -471,10 +489,10 @@ SRC_URI = ""
        what = params[0]
        if what == "files":
            self._checkParsed()
            for key in cooker.status.pkg_fn: print key
            for key in cooker.status.pkg_fn.keys(): print key
        elif what == "providers":
            self._checkParsed()
            for key in cooker.status.providers: print key
            for key in cooker.status.providers.keys(): print key
        else:
            print "Usage: print %s" % self.print_.usage
    print_.usage = "<files|providers>"
@@ -489,7 +507,7 @@ SRC_URI = ""

    def showdata( self, params ):
        """Execute 'showdata' on a providee"""
        cooker.showEnvironment(None, params)
        self.build( params, "showdata" )
    showdata.usage = "<providee>"

    def setVar( self, params ):
@@ -513,12 +531,14 @@ SRC_URI = ""

    def stage( self, params ):
        """Execute 'stage' on a providee"""
        self.build( params, "populate_staging" )
        self.build( params, "stage" )
    stage.usage = "<providee>"

    def status( self, params ):
        """<just for testing>"""
        print "-" * 78
        print "build cache = '%s'" % cooker.build_cache
        print "build cache fail = '%s'" % cooker.build_cache_fail
        print "building list = '%s'" % cooker.building_list
        print "build path = '%s'" % cooker.build_path
        print "consider_msgs_cache = '%s'" % cooker.consider_msgs_cache
@@ -537,7 +557,6 @@ SRC_URI = ""

    def which( self, params ):
        """Computes the providers for a given providee"""
        # Need to use taskData for this information
        item = params[0]

        self._checkParsed()
@@ -546,7 +565,8 @@ SRC_URI = ""
        if not preferred: preferred = item

        try:
            lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status)
            lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status,
                cooker.build_cache_fail)
        except KeyError:
            lv, lf, pv, pf = (None,)*4

@@ -567,9 +587,8 @@ SRC_URI = ""

def completeFilePath( bbfile ):
    """Get the complete bbfile path"""
    if not cooker.status: return bbfile
    if not cooker.status.pkg_fn: return bbfile
    for key in cooker.status.pkg_fn:
    for key in cooker.status.pkg_fn.keys():
        if key.endswith( bbfile ):
            return key
    return bbfile
@@ -613,7 +632,7 @@ def completer( text, state ):
            allmatches = cooker.configuration.data.keys()
        elif u == "<bbfile>":
            if cooker.status.pkg_fn is None: allmatches = [ "(No Matches Available. Parsed yet?)" ]
            else: allmatches = [ x.split("/")[-1] for x in cooker.status.pkg_fn ]
            else: allmatches = [ x.split("/")[-1] for x in cooker.status.pkg_fn.keys() ]
        elif u == "<providee>":
            if cooker.status.pkg_fn is None: allmatches = [ "(No Matches Available. Parsed yet?)" ]
            else: allmatches = cooker.status.providers.iterkeys()
@@ -720,6 +739,10 @@ class BitBakeShell:

        print __credits__

        # save initial cooker configuration (will be reused in file*** commands)
        global initdata
        initdata = copy.deepcopy( cooker.configuration.data )

    def cleanup( self ):
        """Write readline history and clean up resources"""
        debugOut( "writing command history" )

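Editor's note: several shell commands above (`build`, `match`) funnel their glob expressions through globfilter(), which this diff does not show. A sketch of the assumed shell-style matching, using fnmatch; the implementation and output below are illustrative.

# Editor's sketch (illustrative): filter a list of names by a shell glob,
# as the shell's 'build' and 'match' commands are assumed to do.
import fnmatch

def globfilter(names, pattern):
    return [n for n in names if fnmatch.fnmatch(n, pattern)]

print globfilter(["glibc", "glib-2.0", "gtk+"], "gli*")   # ['glibc', 'glib-2.0']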
@@ -23,26 +23,14 @@ Task data collection and handling
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import bb

def re_match_strings(target, strings):
    """
    Whether or not the string 'target' matches
    any one string of the strings which can be regular expression string
    """
    import re

    for name in strings:
        if (name==target or
                re.search(name,target)!=None):
            return True
    return False
from bb import data, fetch, event, mkdirhier, utils
import bb, os

class TaskData:
    """
    BitBake Task Data implementation
    """
    def __init__(self, abort = True, tryaltconfigs = False):
    def __init__(self, abort = True):
        self.build_names_index = []
        self.run_names_index = []
        self.fn_index = []
@@ -55,7 +43,6 @@ class TaskData:
        self.tasks_fnid = []
        self.tasks_name = []
        self.tasks_tdepends = []
        self.tasks_idepends = []
        # Cache to speed up task ID lookups
        self.tasks_lookup = {}

@@ -69,7 +56,6 @@ class TaskData:
        self.failed_fnids = []

        self.abort = abort
        self.tryaltconfigs = tryaltconfigs

    def getbuild_id(self, name):
        """
@@ -104,16 +90,6 @@ class TaskData:

        return self.fn_index.index(name)

    def gettask_ids(self, fnid):
        """
        Return an array of the ID numbers matching a given fnid.
        """
        ids = []
        if fnid in self.tasks_lookup:
            for task in self.tasks_lookup[fnid]:
                ids.append(self.tasks_lookup[fnid][task])
        return ids

    def gettask_id(self, fn, task, create = True):
        """
        Return an ID number for the task matching fn and task.
@@ -132,7 +108,6 @@ class TaskData:
        self.tasks_name.append(task)
        self.tasks_fnid.append(fnid)
        self.tasks_tdepends.append([])
        self.tasks_idepends.append([])

        listid = len(self.tasks_name) - 1

@@ -147,6 +122,7 @@ class TaskData:
        Add tasks for a given fn to the database
        """

        task_graph = dataCache.task_queues[fn]
        task_deps = dataCache.task_deps[fn]

        fnid = self.getfn_id(fn)
@@ -158,26 +134,15 @@ class TaskData:
        if fnid in self.tasks_fnid:
            return

        for task in task_deps['tasks']:

            # Work out task dependencies
        # Work out task dependencies
        for task in task_graph.allnodes():
            parentids = []
            for dep in task_deps['parents'][task]:
            for dep in task_graph.getparents(task):
                parentid = self.gettask_id(fn, dep)
                parentids.append(parentid)
            taskid = self.gettask_id(fn, task)
            self.tasks_tdepends[taskid].extend(parentids)

            # Touch all intertask dependencies
            if 'depends' in task_deps and task in task_deps['depends']:
                ids = []
                for dep in task_deps['depends'][task].split():
                    if dep:
                        if ":" not in dep:
                            bb.msg.fatal(bb.msg.domain.TaskData, "Error, dependency %s does not contain ':' character\n. Task 'depends' should be specified in the form 'packagename:task'" % (dep, fn))
                        ids.append(((self.getbuild_id(dep.split(":")[0])), dep.split(":")[1]))
                self.tasks_idepends[taskid].extend(ids)

        # Work out build dependencies
        if not fnid in self.depids:
            dependids = {}
@@ -192,11 +157,11 @@ class TaskData:
            rdepends = dataCache.rundeps[fn]
            rrecs = dataCache.runrecs[fn]
            for package in rdepends:
                for rdepend in bb.utils.explode_deps(rdepends[package]):
                for rdepend in rdepends[package]:
                    bb.msg.debug(2, bb.msg.domain.TaskData, "Added runtime dependency %s for %s" % (rdepend, fn))
                    rdependids[self.getrun_id(rdepend)] = None
            for package in rrecs:
                for rdepend in bb.utils.explode_deps(rrecs[package]):
                for rdepend in rrecs[package]:
                    bb.msg.debug(2, bb.msg.domain.TaskData, "Added runtime recommendation %s for %s" % (rdepend, fn))
                    rdependids[self.getrun_id(rdepend)] = None
            self.rdepids[fnid] = rdependids.keys()
@@ -276,7 +241,7 @@ class TaskData:
        """
        unresolved = []
        for target in self.build_names_index:
            if re_match_strings(target, dataCache.ignored_dependencies):
            if target in dataCache.ignored_dependencies:
                continue
            if self.build_names_index.index(target) in self.failed_deps:
                continue
@@ -291,7 +256,7 @@ class TaskData:
        """
        unresolved = []
        for target in self.run_names_index:
            if re_match_strings(target, dataCache.ignored_dependencies):
            if target in dataCache.ignored_dependencies:
                continue
            if self.run_names_index.index(target) in self.failed_rdeps:
                continue
@@ -354,10 +319,7 @@ class TaskData:
                self.add_provider_internal(cfgData, dataCache, item)
            except bb.providers.NoProvider:
                if self.abort:
                    if self.get_rdependees_str(item):
                        bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (item, self.get_dependees_str(item)))
                    else:
                        bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s'" % (item))
                    bb.msg.error(bb.msg.domain.Provider, "No providers of build target %s (for %s)" % (item, self.get_dependees_str(item)))
                    raise
                targetid = self.getbuild_id(item)
                self.remove_buildtarget(targetid)
@@ -371,15 +333,12 @@ class TaskData:
        added internally during dependency resolution
        """

        if re_match_strings(item, dataCache.ignored_dependencies):
        if item in dataCache.ignored_dependencies:
            return

        if not item in dataCache.providers:
            if self.get_rdependees_str(item):
                bb.msg.note(2, bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (item, self.get_dependees_str(item)))
            else:
                bb.msg.note(2, bb.msg.domain.Provider, "Nothing PROVIDES '%s'" % (item))
            bb.event.fire(bb.event.NoProvider(item), cfgData)
            bb.msg.debug(1, bb.msg.domain.Provider, "No providers of build target %s (for %s)" % (item, self.get_dependees_str(item)))
            bb.event.fire(bb.event.NoProvider(item, cfgData))
            raise bb.providers.NoProvider(item)

        if self.have_build_target(item):
@@ -387,22 +346,41 @@ class TaskData:

        all_p = dataCache.providers[item]

        eligible, foundUnique = bb.providers.filterProviders(all_p, item, cfgData, dataCache)
        eligible = [p for p in eligible if not self.getfn_id(p) in self.failed_fnids]
        eligible = bb.providers.filterProviders(all_p, item, cfgData, dataCache)

        for p in eligible:
            fnid = self.getfn_id(p)
            if fnid in self.failed_fnids:
                eligible.remove(p)

        if not eligible:
            bb.msg.note(2, bb.msg.domain.Provider, "No buildable provider PROVIDES '%s' but '%s' DEPENDS on or otherwise requires it. Enable debugging and see earlier logs to find unbuildable providers." % (item, self.get_dependees_str(item)))
            bb.event.fire(bb.event.NoProvider(item), cfgData)
            bb.msg.debug(1, bb.msg.domain.Provider, "No providers of build target %s after filtering (for %s)" % (item, self.get_dependees_str(item)))
            bb.event.fire(bb.event.NoProvider(item, cfgData))
            raise bb.providers.NoProvider(item)

        if len(eligible) > 1 and foundUnique == False:
        prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % item, cfgData, 1)
        if prefervar:
            dataCache.preferred[item] = prefervar

        discriminated = False
        if item in dataCache.preferred:
            for p in eligible:
                pn = dataCache.pkg_fn[p]
                if dataCache.preferred[item] == pn:
                    bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy %s due to PREFERRED_PROVIDERS" % (pn, item))
                    eligible.remove(p)
                    eligible = [p] + eligible
                    discriminated = True
                    break

        if len(eligible) > 1 and discriminated == False:
            if item not in self.consider_msgs_cache:
                providers_list = []
                for fn in eligible:
                    providers_list.append(dataCache.pkg_fn[fn])
                bb.msg.note(1, bb.msg.domain.Provider, "multiple providers are available for %s (%s);" % (item, ", ".join(providers_list)))
                bb.msg.note(1, bb.msg.domain.Provider, "consider defining PREFERRED_PROVIDER_%s" % item)
                bb.event.fire(bb.event.MultipleProviders(item, providers_list), cfgData)
                bb.event.fire(bb.event.MultipleProviders(item,providers_list,cfgData))
            self.consider_msgs_cache.append(item)

        for fn in eligible:
@@ -422,7 +400,7 @@ class TaskData:
        (takes item names from RDEPENDS/PACKAGES namespace)
        """

        if re_match_strings(item, dataCache.ignored_dependencies):
        if item in dataCache.ignored_dependencies:
            return

        if self.have_runtime_target(item):
@@ -431,36 +409,53 @@ class TaskData:
        all_p = bb.providers.getRuntimeProviders(dataCache, item)

        if not all_p:
            bb.msg.error(bb.msg.domain.Provider, "'%s' RDEPENDS/RRECOMMENDS or otherwise requires the runtime entity '%s' but it wasn't found in any PACKAGE or RPROVIDES variables" % (self.get_rdependees_str(item), item))
            bb.event.fire(bb.event.NoProvider(item, runtime=True), cfgData)
            bb.msg.error(bb.msg.domain.Provider, "No providers of runtime build target %s (for %s)" % (item, self.get_rdependees_str(item)))
            bb.event.fire(bb.event.NoProvider(item, cfgData, runtime=True))
            raise bb.providers.NoRProvider(item)

        eligible, numberPreferred = bb.providers.filterProvidersRunTime(all_p, item, cfgData, dataCache)
        eligible = [p for p in eligible if not self.getfn_id(p) in self.failed_fnids]
        eligible = bb.providers.filterProviders(all_p, item, cfgData, dataCache)

        for p in eligible:
            fnid = self.getfn_id(p)
            if fnid in self.failed_fnids:
                eligible.remove(p)

        if not eligible:
            bb.msg.error(bb.msg.domain.Provider, "'%s' RDEPENDS/RRECOMMENDS or otherwise requires the runtime entity '%s' but it wasn't found in any PACKAGE or RPROVIDES variables of any buildable targets.\nEnable debugging and see earlier logs to find unbuildable targets." % (self.get_rdependees_str(item), item))
            bb.event.fire(bb.event.NoProvider(item, runtime=True), cfgData)
            bb.msg.error(bb.msg.domain.Provider, "No providers of runtime build target %s after filtering (for %s)" % (item, self.get_rdependees_str(item)))
            bb.event.fire(bb.event.NoProvider(item, cfgData, runtime=True))
            raise bb.providers.NoRProvider(item)

        if len(eligible) > 1 and numberPreferred == 0:
        # Should use dataCache.preferred here?
        preferred = []
        for p in eligible:
            pn = dataCache.pkg_fn[p]
            provides = dataCache.pn_provides[pn]
            for provide in provides:
                prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % provide, cfgData, 1)
                if prefervar == pn:
                    bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy runtime %s due to PREFERRED_PROVIDERS" % (pn, item))
                    eligible.remove(p)
                    eligible = [p] + eligible
                    preferred.append(p)

        if len(eligible) > 1 and len(preferred) == 0:
            if item not in self.consider_msgs_cache:
                providers_list = []
                for fn in eligible:
                    providers_list.append(dataCache.pkg_fn[fn])
                bb.msg.note(2, bb.msg.domain.Provider, "multiple providers are available for runtime %s (%s);" % (item, ", ".join(providers_list)))
                bb.msg.note(2, bb.msg.domain.Provider, "consider defining a PREFERRED_PROVIDER entry to match runtime %s" % item)
                bb.event.fire(bb.event.MultipleProviders(item,providers_list, runtime=True), cfgData)
                bb.event.fire(bb.event.MultipleProviders(item,providers_list, cfgData, runtime=True))
            self.consider_msgs_cache.append(item)

        if numberPreferred > 1:
        if len(preferred) > 1:
            if item not in self.consider_msgs_cache:
                providers_list = []
                for fn in eligible:
                for fn in preferred:
                    providers_list.append(dataCache.pkg_fn[fn])
                bb.msg.note(2, bb.msg.domain.Provider, "multiple providers are available for runtime %s (top %s entries preferred) (%s);" % (item, numberPreferred, ", ".join(providers_list)))
                bb.msg.note(2, bb.msg.domain.Provider, "multiple preferred providers are available for runtime %s (%s);" % (item, ", ".join(providers_list)))
                bb.msg.note(2, bb.msg.domain.Provider, "consider defining only one PREFERRED_PROVIDER entry to match runtime %s" % item)
                bb.event.fire(bb.event.MultipleProviders(item,providers_list, runtime=True), cfgData)
                bb.event.fire(bb.event.MultipleProviders(item,providers_list, cfgData, runtime=True))
            self.consider_msgs_cache.append(item)

        # run through the list until we find one that we can build
@@ -468,77 +463,60 @@ class TaskData:
            fnid = self.getfn_id(fn)
            if fnid in self.failed_fnids:
                continue
            bb.msg.debug(2, bb.msg.domain.Provider, "adding '%s' to satisfy runtime '%s'" % (fn, item))
            bb.msg.debug(2, bb.msg.domain.Provider, "adding %s to satisfy runtime %s" % (fn, item))
            self.add_runtime_target(fn, item)
            self.add_tasks(fn, dataCache)

    def fail_fnid(self, fnid, missing_list = []):
    def fail_fnid(self, fnid):
        """
        Mark a file as failed (unbuildable)
        Remove any references from build and runtime provider lists

        missing_list, A list of missing requirements for this target
        """
        if fnid in self.failed_fnids:
            return
        bb.msg.debug(1, bb.msg.domain.Provider, "File '%s' is unbuildable, removing..." % self.fn_index[fnid])
        bb.msg.debug(1, bb.msg.domain.Provider, "Removing failed file %s" % self.fn_index[fnid])
        self.failed_fnids.append(fnid)
        for target in self.build_targets:
            if fnid in self.build_targets[target]:
                self.build_targets[target].remove(fnid)
                if len(self.build_targets[target]) == 0:
                    self.remove_buildtarget(target, missing_list)
                    self.remove_buildtarget(target)
        for target in self.run_targets:
            if fnid in self.run_targets[target]:
                self.run_targets[target].remove(fnid)
                if len(self.run_targets[target]) == 0:
                    self.remove_runtarget(target, missing_list)
                    self.remove_runtarget(target)

    def remove_buildtarget(self, targetid, missing_list = []):
    def remove_buildtarget(self, targetid):
        """
        Mark a build target as failed (unbuildable)
        Trigger removal of any files that have this as a dependency
        """
        if not missing_list:
            missing_list = [self.build_names_index[targetid]]
        else:
            missing_list = [self.build_names_index[targetid]] + missing_list
        bb.msg.note(2, bb.msg.domain.Provider, "Target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s" % (self.build_names_index[targetid], missing_list))
        bb.msg.debug(1, bb.msg.domain.Provider, "Removing failed build target %s" % self.build_names_index[targetid])
        self.failed_deps.append(targetid)
        dependees = self.get_dependees(targetid)
        for fnid in dependees:
            self.fail_fnid(fnid, missing_list)
        for taskid in range(len(self.tasks_idepends)):
            idepends = self.tasks_idepends[taskid]
            for (idependid, idependtask) in idepends:
                if idependid == targetid:
                    self.fail_fnid(self.tasks_fnid[taskid], missing_list)

            self.fail_fnid(fnid)
        if self.abort and targetid in self.external_targets:
            bb.msg.error(bb.msg.domain.Provider, "Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s" % (self.build_names_index[targetid], missing_list))
            bb.msg.error(bb.msg.domain.Provider, "No buildable providers available for required build target %s" % self.build_names_index[targetid])
            raise bb.providers.NoProvider

    def remove_runtarget(self, targetid, missing_list = []):
    def remove_runtarget(self, targetid):
        """
        Mark a run target as failed (unbuildable)
        Trigger removal of any files that have this as a dependency
        """
        if not missing_list:
            missing_list = [self.run_names_index[targetid]]
        else:
            missing_list = [self.run_names_index[targetid]] + missing_list

        bb.msg.note(1, bb.msg.domain.Provider, "Runtime target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s" % (self.run_names_index[targetid], missing_list))
        bb.msg.note(1, bb.msg.domain.Provider, "Removing failed runtime build target %s" % self.run_names_index[targetid])
        self.failed_rdeps.append(targetid)
        dependees = self.get_rdependees(targetid)
        for fnid in dependees:
            self.fail_fnid(fnid, missing_list)
            self.fail_fnid(fnid)

    def add_unresolved(self, cfgData, dataCache):
        """
        Resolve all unresolved build and runtime targets
        """
        bb.msg.note(1, bb.msg.domain.TaskData, "Resolving any missing task queue dependencies")
        bb.msg.note(1, bb.msg.domain.TaskData, "Resolving missing task queue dependencies")
        while 1:
            added = 0
            for target in self.get_unresolved_build_targets(dataCache):
@@ -548,10 +526,6 @@ class TaskData:
                except bb.providers.NoProvider:
                    targetid = self.getbuild_id(target)
                    if self.abort and targetid in self.external_targets:
                        if self.get_rdependees_str(target):
                            bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (target, self.get_dependees_str(target)))
                        else:
                            bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s'" % (target))
                        raise
                    self.remove_buildtarget(targetid)
            for target in self.get_unresolved_run_targets(dataCache):
@@ -560,7 +534,7 @@ class TaskData:
                    added = added + 1
                except bb.providers.NoRProvider:
                    self.remove_runtarget(self.getrun_id(target))
            bb.msg.debug(1, bb.msg.domain.TaskData, "Resolved " + str(added) + " extra dependencies")
            bb.msg.debug(1, bb.msg.domain.TaskData, "Resolved " + str(added) + " extra dependecies")
            if added == 0:
                break
        # self.dump_data()
@@ -571,26 +545,14 @@ class TaskData:
        """
        bb.msg.debug(3, bb.msg.domain.TaskData, "build_names:")
        bb.msg.debug(3, bb.msg.domain.TaskData, ", ".join(self.build_names_index))

        bb.msg.debug(3, bb.msg.domain.TaskData, "run_names:")
        bb.msg.debug(3, bb.msg.domain.TaskData, ", ".join(self.run_names_index))

        bb.msg.debug(3, bb.msg.domain.TaskData, "build_targets:")
        for buildid in range(len(self.build_names_index)):
            target = self.build_names_index[buildid]
            targets = "None"
            if buildid in self.build_targets:
                targets = self.build_targets[buildid]
            bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s: %s" % (buildid, target, targets))

        for target in self.build_targets.keys():
            bb.msg.debug(3, bb.msg.domain.TaskData, " %s: %s" % (self.build_names_index[target], self.build_targets[target]))
        bb.msg.debug(3, bb.msg.domain.TaskData, "run_targets:")
        for runid in range(len(self.run_names_index)):
            target = self.run_names_index[runid]
            targets = "None"
            if runid in self.run_targets:
                targets = self.run_targets[runid]
            bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s: %s" % (runid, target, targets))

        for target in self.run_targets.keys():
            bb.msg.debug(3, bb.msg.domain.TaskData, " %s: %s" % (self.run_names_index[target], self.run_targets[target]))
        bb.msg.debug(3, bb.msg.domain.TaskData, "tasks:")
        for task in range(len(self.tasks_name)):
            bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s - %s: %s" % (
@@ -598,12 +560,7 @@ class TaskData:
                self.fn_index[self.tasks_fnid[task]],
                self.tasks_name[task],
                self.tasks_tdepends[task]))

        bb.msg.debug(3, bb.msg.domain.TaskData, "dependency ids (per fn):")
        for fnid in self.depids:
            bb.msg.debug(3, bb.msg.domain.TaskData, " %s %s: %s" % (fnid, self.fn_index[fnid], self.depids[fnid]))

        bb.msg.debug(3, bb.msg.domain.TaskData, "runtime dependency ids (per fn):")
        bb.msg.debug(3, bb.msg.domain.TaskData, "runtime ids (per fn):")
        for fnid in self.rdepids:
            bb.msg.debug(3, bb.msg.domain.TaskData, " %s %s: %s" % (fnid, self.fn_index[fnid], self.rdepids[fnid]))

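Editor's note: the re_match_strings() helper shown at the top of the taskdata.py diff lets ignore lists (such as ASSUME_PROVIDED-style entries) contain regular expressions as well as literal names. A usage sketch follows; the helper body is taken from the diff itself, while the example ignore-list contents are hypothetical.

# Editor's sketch: re_match_strings() as it appears in the diff, plus a
# hypothetical usage example.
import re

def re_match_strings(target, strings):
    for name in strings:
        if name == target or re.search(name, target) != None:
            return True
    return False

ignored = ["/bin/sh", "bzip2-native", "libgcc.*"]     # hypothetical entries
print re_match_strings("libgcc-initial", ignored)     # True, via the regex
print re_match_strings("zlib", ignored)               # False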
@@ -1,18 +0,0 @@
#
# BitBake UI Implementation
#
# Copyright (C) 2006-2007 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

@@ -1,18 +0,0 @@
#
# BitBake UI Implementation
#
# Copyright (C) 2006-2007 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

||||
@@ -1,457 +0,0 @@
|
||||
#
|
||||
# BitBake Graphical GTK User Interface
|
||||
#
|
||||
# Copyright (C) 2008 Intel Corporation
|
||||
#
|
||||
# Authored by Rob Bradford <rob@linux.intel.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License version 2 as
|
||||
# published by the Free Software Foundation.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License along
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
|
||||
import gtk
|
||||
import gobject
|
||||
import threading
|
||||
import os
|
||||
import datetime
|
||||
import time
|
||||
|
||||
class BuildConfiguration:
|
||||
""" Represents a potential *or* historic *or* concrete build. It
|
||||
encompasses all the things that we need to tell bitbake to do to make it
|
||||
build what we want it to build.
|
||||
|
||||
It also stored the metadata URL and the set of possible machines (and the
|
||||
distros / images / uris for these. Apart from the metdata URL these are
|
||||
not serialised to file (since they may be transient). In some ways this
|
||||
functionality might be shifted to the loader class."""
|
||||
|
||||
def __init__ (self):
|
||||
self.metadata_url = None
|
||||
|
||||
# Tuple of (distros, image, urls)
|
||||
self.machine_options = {}
|
||||
|
||||
self.machine = None
|
||||
self.distro = None
|
||||
self.image = None
|
||||
self.urls = []
|
||||
self.extra_urls = []
|
||||
self.extra_pkgs = []
|
||||
|
||||
def get_machines_model (self):
|
||||
model = gtk.ListStore (gobject.TYPE_STRING)
|
||||
for machine in self.machine_options.keys():
|
||||
model.append ([machine])
|
||||
|
||||
return model
|
||||
|
||||
def get_distro_and_images_models (self, machine):
|
||||
distro_model = gtk.ListStore (gobject.TYPE_STRING)
|
||||
|
||||
for distro in self.machine_options[machine][0]:
|
||||
distro_model.append ([distro])
|
||||
|
||||
image_model = gtk.ListStore (gobject.TYPE_STRING)
|
||||
|
||||
for image in self.machine_options[machine][1]:
|
||||
image_model.append ([image])
|
||||
|
||||
return (distro_model, image_model)
|
||||
|
||||
def get_repos (self):
|
||||
self.urls = self.machine_options[self.machine][2]
|
||||
return self.urls
|
||||
|
||||
# It might be a lot lot better if we stored these in like, bitbake conf
|
||||
# file format.
|
||||
@staticmethod
|
||||
def load_from_file (filename):
|
||||
f = open (filename, "r")
|
||||
|
||||
conf = BuildConfiguration()
|
||||
for line in f.readlines():
|
||||
data = line.split (";")[1]
|
||||
if (line.startswith ("metadata-url;")):
|
||||
conf.metadata_url = data.strip()
|
||||
continue
|
||||
if (line.startswith ("url;")):
|
||||
conf.urls += [data.strip()]
|
||||
continue
|
||||
if (line.startswith ("extra-url;")):
|
||||
conf.extra_urls += [data.strip()]
|
||||
continue
|
||||
if (line.startswith ("machine;")):
|
||||
conf.machine = data.strip()
|
||||
continue
|
||||
if (line.startswith ("distribution;")):
|
||||
conf.distro = data.strip()
|
||||
continue
|
||||
if (line.startswith ("image;")):
|
||||
conf.image = data.strip()
|
||||
continue
|
||||
|
||||
f.close ()
|
||||
return conf
|
||||
|
||||
    # Serialise to a file. This is part of the build process and we use this
    # to be able to repeat a given build (using the same set of parameters)
    # but also so that we can include the details of the image / machine /
    # distro in the build manager tree view.
    def write_to_file (self, filename):
        f = open (filename, "w")

        lines = []

        if (self.metadata_url):
            lines += ["metadata-url;%s\n" % (self.metadata_url)]

        for url in self.urls:
            lines += ["url;%s\n" % (url)]

        for url in self.extra_urls:
            lines += ["extra-url;%s\n" % (url)]

        if (self.machine):
            lines += ["machine;%s\n" % (self.machine)]

        if (self.distro):
            lines += ["distribution;%s\n" % (self.distro)]

        if (self.image):
            lines += ["image;%s\n" % (self.image)]

        f.writelines (lines)
        f.close ()

class BuildResult(gobject.GObject):
    """ Represents an historic build, which was perhaps not successful. It
    includes things such as the files that are in the directory (the output
    from the build) as well as a deserialised BuildConfiguration file that is
    stored in ".conf" in the directory for the build.

    This is a GObject so that it can be included in the TreeStore."""

    (STATE_COMPLETE, STATE_FAILED, STATE_ONGOING) = \
        (0, 1, 2)

    def __init__ (self, parent, identifier):
        gobject.GObject.__init__ (self)
        self.date = None

        self.files = []
        self.status = None
        self.identifier = identifier
        self.path = os.path.join (parent, identifier)

        # Extract the date. Since the directory name is of the
        # format build-<year><month><day>-<ordinal> we can easily
        # pull it out.
        # TODO: Better to stat a file?
        (_ , date, revision) = identifier.split ("-")
        print date

        year = int (date[0:4])
        month = int (date[4:6])
        day = int (date[6:8])

        self.date = datetime.date (year, month, day)

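        # Worked example (an editorial note, not in the original): an
        # identifier of "build-20081110-2" splits into the date "20081110"
        # and the ordinal "2", giving datetime.date(2008, 11, 10) above.
        # The identifier shown is hypothetical.
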
        self.conf = None

        # By default builds are STATE_FAILED unless we find a "complete" file
        # in which case they are STATE_COMPLETE
        self.state = BuildResult.STATE_FAILED
        for file in os.listdir (self.path):
            if (file.startswith (".conf")):
                conffile = os.path.join (self.path, file)
                self.conf = BuildConfiguration.load_from_file (conffile)
            elif (file.startswith ("complete")):
                self.state = BuildResult.STATE_COMPLETE
            else:
                self.add_file (file)

    def add_file (self, file):
        # Just add the file for now. Don't care about the type.
        self.files += [(file, None)]

class BuildManagerModel (gtk.TreeStore):
    """ Model for the BuildManagerTreeView. This derives from gtk.TreeStore
    but it abstracts nicely what the columns mean and the setup of the columns
    in the model. """

    (COL_IDENT, COL_DESC, COL_MACHINE, COL_DISTRO, COL_BUILD_RESULT, COL_DATE, COL_STATE) = \
        (0, 1, 2, 3, 4, 5, 6)

    def __init__ (self):
        gtk.TreeStore.__init__ (self,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_OBJECT,
                                gobject.TYPE_INT64,
                                gobject.TYPE_INT)

class BuildManager (gobject.GObject):
    """ This class manages the historic builds that have been found in the
    "results" directory but is also used for starting a new build."""

    __gsignals__ = {
        'population-finished' : (gobject.SIGNAL_RUN_LAST,
                                 gobject.TYPE_NONE,
                                 ()),
        'populate-error' : (gobject.SIGNAL_RUN_LAST,
                            gobject.TYPE_NONE,
                            ())
    }

    def update_build_result (self, result, iter):
        # Convert the date into something we can sort by.
        date = long (time.mktime (result.date.timetuple()))

        # Add a top level entry for the build

        self.model.set (iter,
                        BuildManagerModel.COL_IDENT, result.identifier,
                        BuildManagerModel.COL_DESC, result.conf.image,
                        BuildManagerModel.COL_MACHINE, result.conf.machine,
                        BuildManagerModel.COL_DISTRO, result.conf.distro,
                        BuildManagerModel.COL_BUILD_RESULT, result,
                        BuildManagerModel.COL_DATE, date,
                        BuildManagerModel.COL_STATE, result.state)

        # And then we use the files in the directory as the children for the
        # top level iter.
        for file in result.files:
            self.model.append (iter, (None, file[0], None, None, None, date, -1))

    # This function is called as an idle by the BuildManagerPopulaterThread
    def add_build_result (self, result):
        gtk.gdk.threads_enter()
        self.known_builds += [result]

        self.update_build_result (result, self.model.append (None))

        gtk.gdk.threads_leave()

    def notify_build_finished (self):
        # This is a bit of a hack. If we have a build running then we will
        # have a row in the model in STATE_ONGOING. Find it and make it as if
        # it was a proper historic build (well, it is completed now....)

        # We need to use the iters here rather than the Python iterator
        # interface to the model since we need to pass it into
        # update_build_result

        iter = self.model.get_iter_first()

        while (iter):
            (ident, state) = self.model.get(iter,
                                            BuildManagerModel.COL_IDENT,
                                            BuildManagerModel.COL_STATE)

            if state == BuildResult.STATE_ONGOING:
                result = BuildResult (self.results_directory, ident)
                self.update_build_result (result, iter)
            iter = self.model.iter_next(iter)

    def notify_build_succeeded (self):
        # Write the "complete" file so that when we create the BuildResult
        # object we put into the model

        complete_file_path = os.path.join (self.cur_build_directory, "complete")
        f = file (complete_file_path, "w")
        f.close()
        self.notify_build_finished()

    def notify_build_failed (self):
        # Without a "complete" file this will mark the build as failed:
        self.notify_build_finished()

    # This function is called as an idle
    def emit_population_finished_signal (self):
        gtk.gdk.threads_enter()
        self.emit ("population-finished")
        gtk.gdk.threads_leave()

    class BuildManagerPopulaterThread (threading.Thread):
        def __init__ (self, manager, directory):
            threading.Thread.__init__ (self)
            self.manager = manager
            self.directory = directory

        def run (self):
            # For each of the "build-<...>" directories ..

            if os.path.exists (self.directory):
                for directory in os.listdir (self.directory):

                    if not directory.startswith ("build-"):
                        continue

                    build_result = BuildResult (self.directory, directory)
                    self.manager.add_build_result (build_result)

            gobject.idle_add (BuildManager.emit_population_finished_signal,
                              self.manager)

    def __init__ (self, server, results_directory):
        gobject.GObject.__init__ (self)

        # The builds that we've found from walking the result directory
        self.known_builds = []

        # Save out the bitbake server, we need this for issuing commands to
        # the cooker:
        self.server = server

        # The TreeStore that we use
        self.model = BuildManagerModel ()

        # The results directory is where we create (and look for) the
        # build-<xyz>-<n> directories. We need to populate ourselves from
        # that directory.
        self.results_directory = results_directory
        self.populate_from_directory (self.results_directory)

    def populate_from_directory (self, directory):
        thread = BuildManager.BuildManagerPopulaterThread (self, directory)
        thread.start()

    # Come up with the name for the next build ident by combining "build-"
    # with the date formatted as yyyymmdd and then an ordinal. We do this by
    # an optimistic algorithm, incrementing the ordinal if we find that it
    # already exists.
    def get_next_build_ident (self):
        today = datetime.date.today ()
        # Zero-padded via strftime so that BuildResult can parse the date
        # back out with its fixed-width slices.
        datestr = today.strftime ("%Y%m%d")

        revision = 0
        test_name = "build-%s-%d" % (datestr, revision)
        test_path = os.path.join (self.results_directory, test_name)

        while (os.path.exists (test_path)):
            revision += 1
            test_name = "build-%s-%d" % (datestr, revision)
            test_path = os.path.join (self.results_directory, test_name)

        return test_name

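    # Editorial illustration (not in the original): successive idents for one
    # day therefore come out as build-20081110-0, build-20081110-1, and so
    # on; the date shown is hypothetical.
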
    # Take a BuildConfiguration and then try and build it based on the
    # parameters of that configuration.
    def do_build (self, conf):
        server = self.server

        # Work out the build directory. Note we actually create the
        # directories here since we need to write the ".conf" file. Otherwise
        # we could have relied on bitbake's builder thread to actually make
        # the directories as it proceeds with the build.
        ident = self.get_next_build_ident ()
        build_directory = os.path.join (self.results_directory,
                                        ident)
        self.cur_build_directory = build_directory
        os.makedirs (build_directory)

        conffile = os.path.join (build_directory, ".conf")
        conf.write_to_file (conffile)

        # Add a row to the model representing this ongoing build. It's kind
        # of a fake entry. If this build completes or fails then this gets
        # updated with the real stuff like the historic builds
        date = long (time.time())
        self.model.append (None, (ident, conf.image, conf.machine, conf.distro,
                                  None, date, BuildResult.STATE_ONGOING))
        try:
            server.runCommand(["setVariable", "BUILD_IMAGES_FROM_FEEDS", 1])
            server.runCommand(["setVariable", "MACHINE", conf.machine])
            server.runCommand(["setVariable", "DISTRO", conf.distro])
            server.runCommand(["setVariable", "PACKAGE_CLASSES", "package_ipk"])
            server.runCommand(["setVariable", "BBFILES", \
                """${OEROOT}/meta/packages/*/*.bb ${OEROOT}/meta-moblin/packages/*/*.bb"""])
            server.runCommand(["setVariable", "TMPDIR", "${OEROOT}/build/tmp"])
            server.runCommand(["setVariable", "IPK_FEED_URIS", \
                " ".join(conf.get_repos())])
            server.runCommand(["setVariable", "DEPLOY_DIR_IMAGE",
                               build_directory])
            server.runCommand(["buildTargets", [conf.image], "rootfs"])

        except Exception, e:
            print e

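# Editorial aside (not part of the original file): the setVariable calls in
# do_build() above mirror the settings a user would otherwise place in
# build/conf/local.conf, e.g. (values hypothetical):
#
#   MACHINE = "qemux86"
#   DISTRO = "poky"
#   PACKAGE_CLASSES = "package_ipk"
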
class BuildManagerTreeView (gtk.TreeView):
    """ The tree view for the build manager. This shows the historic builds
    and so forth. """

    # We use this function to control what goes in the cell since we store
    # the date in the model as seconds since the epoch (for sorting) and so we
    # need to make it human readable.
    def date_format_custom_cell_data_func (self, col, cell, model, iter):
        date = model.get (iter, BuildManagerModel.COL_DATE)[0]
        datestr = time.strftime("%A %d %B %Y", time.localtime(date))
        cell.set_property ("text", datestr)

    # This format function controls what goes in the cell. We use this to map
    # the integer state to a string and also to colourise the text
    def state_format_custom_cell_data_fun (self, col, cell, model, iter):
        state = model.get (iter, BuildManagerModel.COL_STATE)[0]

        if (state == BuildResult.STATE_ONGOING):
            cell.set_property ("text", "Active")
            cell.set_property ("foreground", "#000000")
        elif (state == BuildResult.STATE_FAILED):
            cell.set_property ("text", "Failed")
            cell.set_property ("foreground", "#ff0000")
        elif (state == BuildResult.STATE_COMPLETE):
            cell.set_property ("text", "Complete")
            cell.set_property ("foreground", "#00ff00")
        else:
            cell.set_property ("text", "")

    def __init__ (self):
        gtk.TreeView.__init__(self)

        # Misc descriptiony thing
        renderer = gtk.CellRendererText ()
        col = gtk.TreeViewColumn (None, renderer,
                                  text=BuildManagerModel.COL_DESC)
        self.append_column (col)

        # Machine
        renderer = gtk.CellRendererText ()
        col = gtk.TreeViewColumn ("Machine", renderer,
                                  text=BuildManagerModel.COL_MACHINE)
        self.append_column (col)

        # Distro
        renderer = gtk.CellRendererText ()
        col = gtk.TreeViewColumn ("Distribution", renderer,
                                  text=BuildManagerModel.COL_DISTRO)
        self.append_column (col)

        # Date (using a custom function for formatting the cell contents;
        # it takes epoch -> human readable string)
        renderer = gtk.CellRendererText ()
        col = gtk.TreeViewColumn ("Date", renderer,
                                  text=BuildManagerModel.COL_DATE)
        self.append_column (col)
        col.set_cell_data_func (renderer,
                                self.date_format_custom_cell_data_func)

        # For status.
        renderer = gtk.CellRendererText ()
        col = gtk.TreeViewColumn ("Status", renderer,
                                  text = BuildManagerModel.COL_STATE)
        self.append_column (col)
        col.set_cell_data_func (renderer,
                                self.state_format_custom_cell_data_fun)

@@ -1,606 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE glade-interface SYSTEM "glade-2.0.dtd">
<!--Generated with glade3 3.4.5 on Mon Nov 10 12:24:12 2008 -->
<glade-interface>
  <widget class="GtkDialog" id="build_dialog">
    <property name="title" translatable="yes">Start a build</property>
    <property name="window_position">GTK_WIN_POS_CENTER_ON_PARENT</property>
    <property name="type_hint">GDK_WINDOW_TYPE_HINT_DIALOG</property>
    <property name="has_separator">False</property>
    <child internal-child="vbox">
      <widget class="GtkVBox" id="dialog-vbox1">
        <property name="visible">True</property>
        <property name="spacing">2</property>
        <child>
          <widget class="GtkTable" id="build_table">
            <property name="visible">True</property>
            <property name="border_width">6</property>
            <property name="n_rows">7</property>
            <property name="n_columns">3</property>
            <property name="column_spacing">5</property>
            <property name="row_spacing">6</property>
            <child>
              <widget class="GtkAlignment" id="status_alignment">
                <property name="visible">True</property>
                <property name="left_padding">12</property>
                <child>
                  <widget class="GtkHBox" id="status_hbox">
                    <property name="spacing">6</property>
                    <child>
                      <widget class="GtkImage" id="status_image">
                        <property name="visible">True</property>
                        <property name="no_show_all">True</property>
                        <property name="xalign">0</property>
                        <property name="stock">gtk-dialog-error</property>
                      </widget>
                      <packing>
                        <property name="expand">False</property>
                        <property name="fill">False</property>
                      </packing>
                    </child>
                    <child>
                      <widget class="GtkLabel" id="status_label">
                        <property name="visible">True</property>
                        <property name="xalign">0</property>
                        <property name="label" translatable="yes">If you see this text something is wrong...</property>
                        <property name="use_markup">True</property>
                        <property name="use_underline">True</property>
                      </widget>
                      <packing>
                        <property name="position">1</property>
                      </packing>
                    </child>
                  </widget>
                </child>
              </widget>
              <packing>
                <property name="right_attach">3</property>
                <property name="top_attach">2</property>
                <property name="bottom_attach">3</property>
              </packing>
            </child>
            <child>
              <widget class="GtkLabel" id="label2">
                <property name="visible">True</property>
                <property name="xalign">0</property>
                <property name="label" translatable="yes">&lt;b&gt;Build configuration&lt;/b&gt;</property>
                <property name="use_markup">True</property>
              </widget>
              <packing>
                <property name="right_attach">3</property>
                <property name="top_attach">3</property>
                <property name="bottom_attach">4</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkComboBox" id="image_combo">
                <property name="visible">True</property>
                <property name="sensitive">False</property>
              </widget>
              <packing>
                <property name="left_attach">1</property>
                <property name="right_attach">2</property>
                <property name="top_attach">6</property>
                <property name="bottom_attach">7</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkLabel" id="image_label">
                <property name="visible">True</property>
                <property name="sensitive">False</property>
                <property name="xalign">0</property>
                <property name="xpad">12</property>
                <property name="label" translatable="yes">Image:</property>
              </widget>
              <packing>
                <property name="top_attach">6</property>
                <property name="bottom_attach">7</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkComboBox" id="distribution_combo">
                <property name="visible">True</property>
                <property name="sensitive">False</property>
              </widget>
              <packing>
                <property name="left_attach">1</property>
                <property name="right_attach">2</property>
                <property name="top_attach">5</property>
                <property name="bottom_attach">6</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkLabel" id="distribution_label">
                <property name="visible">True</property>
                <property name="sensitive">False</property>
                <property name="xalign">0</property>
                <property name="xpad">12</property>
                <property name="label" translatable="yes">Distribution:</property>
              </widget>
              <packing>
                <property name="top_attach">5</property>
                <property name="bottom_attach">6</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkComboBox" id="machine_combo">
                <property name="visible">True</property>
                <property name="sensitive">False</property>
              </widget>
              <packing>
                <property name="left_attach">1</property>
                <property name="right_attach">2</property>
                <property name="top_attach">4</property>
                <property name="bottom_attach">5</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkLabel" id="machine_label">
                <property name="visible">True</property>
                <property name="sensitive">False</property>
                <property name="xalign">0</property>
                <property name="xpad">12</property>
                <property name="label" translatable="yes">Machine:</property>
              </widget>
              <packing>
                <property name="top_attach">4</property>
                <property name="bottom_attach">5</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkButton" id="refresh_button">
                <property name="visible">True</property>
                <property name="sensitive">False</property>
                <property name="can_focus">True</property>
                <property name="receives_default">True</property>
                <property name="label" translatable="yes">gtk-refresh</property>
                <property name="use_stock">True</property>
                <property name="response_id">0</property>
              </widget>
              <packing>
                <property name="left_attach">2</property>
                <property name="right_attach">3</property>
                <property name="top_attach">1</property>
                <property name="bottom_attach">2</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkEntry" id="location_entry">
                <property name="visible">True</property>
                <property name="can_focus">True</property>
                <property name="width_chars">32</property>
              </widget>
              <packing>
                <property name="left_attach">1</property>
                <property name="right_attach">2</property>
                <property name="top_attach">1</property>
                <property name="bottom_attach">2</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkLabel" id="label3">
                <property name="visible">True</property>
                <property name="xalign">0</property>
                <property name="xpad">12</property>
                <property name="label" translatable="yes">Location:</property>
              </widget>
              <packing>
                <property name="top_attach">1</property>
                <property name="bottom_attach">2</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkLabel" id="label1">
                <property name="visible">True</property>
                <property name="xalign">0</property>
                <property name="label" translatable="yes">&lt;b&gt;Repository&lt;/b&gt;</property>
                <property name="use_markup">True</property>
              </widget>
              <packing>
                <property name="right_attach">3</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkAlignment" id="alignment1">
                <property name="visible">True</property>
                <child>
                  <placeholder/>
                </child>
              </widget>
              <packing>
                <property name="left_attach">2</property>
                <property name="right_attach">3</property>
                <property name="top_attach">4</property>
                <property name="bottom_attach">5</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkAlignment" id="alignment2">
                <property name="visible">True</property>
                <child>
                  <placeholder/>
                </child>
              </widget>
              <packing>
                <property name="left_attach">2</property>
                <property name="right_attach">3</property>
                <property name="top_attach">5</property>
                <property name="bottom_attach">6</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkAlignment" id="alignment3">
                <property name="visible">True</property>
                <child>
                  <placeholder/>
                </child>
              </widget>
              <packing>
                <property name="left_attach">2</property>
                <property name="right_attach">3</property>
                <property name="top_attach">6</property>
                <property name="bottom_attach">7</property>
                <property name="y_options"></property>
              </packing>
            </child>
          </widget>
          <packing>
            <property name="position">1</property>
          </packing>
        </child>
        <child internal-child="action_area">
          <widget class="GtkHButtonBox" id="dialog-action_area1">
            <property name="visible">True</property>
            <property name="layout_style">GTK_BUTTONBOX_END</property>
            <child>
              <placeholder/>
            </child>
            <child>
              <placeholder/>
            </child>
            <child>
              <placeholder/>
            </child>
          </widget>
          <packing>
            <property name="expand">False</property>
            <property name="pack_type">GTK_PACK_END</property>
          </packing>
        </child>
      </widget>
    </child>
  </widget>
  <widget class="GtkDialog" id="dialog2">
    <property name="window_position">GTK_WIN_POS_CENTER_ON_PARENT</property>
    <property name="type_hint">GDK_WINDOW_TYPE_HINT_DIALOG</property>
    <property name="has_separator">False</property>
    <child internal-child="vbox">
      <widget class="GtkVBox" id="dialog-vbox2">
        <property name="visible">True</property>
        <property name="spacing">2</property>
        <child>
          <widget class="GtkTable" id="table2">
            <property name="visible">True</property>
            <property name="border_width">6</property>
            <property name="n_rows">7</property>
            <property name="n_columns">3</property>
            <property name="column_spacing">6</property>
            <property name="row_spacing">6</property>
            <child>
              <widget class="GtkLabel" id="label7">
                <property name="visible">True</property>
                <property name="xalign">0</property>
                <property name="label" translatable="yes">&lt;b&gt;Repositories&lt;/b&gt;</property>
                <property name="use_markup">True</property>
              </widget>
              <packing>
                <property name="right_attach">3</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkAlignment" id="alignment4">
                <property name="visible">True</property>
                <property name="xalign">0</property>
                <property name="left_padding">12</property>
                <child>
                  <widget class="GtkScrolledWindow" id="scrolledwindow1">
                    <property name="visible">True</property>
                    <property name="can_focus">True</property>
                    <property name="hscrollbar_policy">GTK_POLICY_AUTOMATIC</property>
                    <property name="vscrollbar_policy">GTK_POLICY_AUTOMATIC</property>
                    <child>
                      <widget class="GtkTreeView" id="treeview1">
                        <property name="visible">True</property>
                        <property name="can_focus">True</property>
                        <property name="headers_clickable">True</property>
                      </widget>
                    </child>
                  </widget>
                </child>
              </widget>
              <packing>
                <property name="right_attach">3</property>
                <property name="top_attach">2</property>
                <property name="bottom_attach">3</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkEntry" id="entry1">
                <property name="visible">True</property>
                <property name="can_focus">True</property>
              </widget>
              <packing>
                <property name="left_attach">1</property>
                <property name="right_attach">3</property>
                <property name="top_attach">1</property>
                <property name="bottom_attach">2</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkLabel" id="label9">
                <property name="visible">True</property>
                <property name="xalign">0</property>
                <property name="label" translatable="yes">&lt;b&gt;Additional packages&lt;/b&gt;</property>
                <property name="use_markup">True</property>
              </widget>
              <packing>
                <property name="right_attach">3</property>
                <property name="top_attach">4</property>
                <property name="bottom_attach">5</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkAlignment" id="alignment6">
                <property name="visible">True</property>
                <property name="xalign">0</property>
                <property name="xscale">0</property>
                <child>
                  <widget class="GtkLabel" id="label8">
                    <property name="visible">True</property>
                    <property name="xalign">0</property>
                    <property name="yalign">0</property>
                    <property name="xpad">12</property>
                    <property name="label" translatable="yes">Location: </property>
                  </widget>
                </child>
              </widget>
              <packing>
                <property name="top_attach">1</property>
                <property name="bottom_attach">2</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkAlignment" id="alignment7">
                <property name="visible">True</property>
                <property name="xalign">1</property>
                <property name="xscale">0</property>
                <child>
                  <widget class="GtkHButtonBox" id="hbuttonbox1">
                    <property name="visible">True</property>
                    <property name="spacing">5</property>
                    <child>
                      <widget class="GtkButton" id="button7">
                        <property name="visible">True</property>
                        <property name="can_focus">True</property>
                        <property name="receives_default">True</property>
                        <property name="label" translatable="yes">gtk-remove</property>
                        <property name="use_stock">True</property>
                        <property name="response_id">0</property>
                      </widget>
                    </child>
                    <child>
                      <widget class="GtkButton" id="button6">
                        <property name="visible">True</property>
                        <property name="can_focus">True</property>
                        <property name="receives_default">True</property>
                        <property name="label" translatable="yes">gtk-edit</property>
                        <property name="use_stock">True</property>
                        <property name="response_id">0</property>
                      </widget>
                      <packing>
                        <property name="position">1</property>
                      </packing>
                    </child>
                    <child>
                      <widget class="GtkButton" id="button5">
                        <property name="visible">True</property>
                        <property name="can_focus">True</property>
                        <property name="receives_default">True</property>
                        <property name="label" translatable="yes">gtk-add</property>
                        <property name="use_stock">True</property>
                        <property name="response_id">0</property>
                      </widget>
                      <packing>
                        <property name="position">2</property>
                      </packing>
                    </child>
                  </widget>
                </child>
              </widget>
              <packing>
                <property name="left_attach">1</property>
                <property name="right_attach">3</property>
                <property name="top_attach">3</property>
                <property name="bottom_attach">4</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkAlignment" id="alignment5">
                <property name="visible">True</property>
                <child>
                  <placeholder/>
                </child>
              </widget>
              <packing>
                <property name="top_attach">3</property>
                <property name="bottom_attach">4</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkLabel" id="label10">
                <property name="visible">True</property>
                <property name="xalign">0</property>
                <property name="yalign">0</property>
                <property name="xpad">12</property>
                <property name="label" translatable="yes">Search:</property>
              </widget>
              <packing>
                <property name="top_attach">5</property>
                <property name="bottom_attach">6</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkEntry" id="entry2">
                <property name="visible">True</property>
                <property name="can_focus">True</property>
              </widget>
              <packing>
                <property name="left_attach">1</property>
                <property name="right_attach">3</property>
                <property name="top_attach">5</property>
                <property name="bottom_attach">6</property>
                <property name="y_options"></property>
              </packing>
            </child>
            <child>
              <widget class="GtkAlignment" id="alignment8">
                <property name="visible">True</property>
                <property name="xalign">0</property>
                <property name="left_padding">12</property>
                <child>
                  <widget class="GtkScrolledWindow" id="scrolledwindow2">
                    <property name="visible">True</property>
                    <property name="can_focus">True</property>
                    <property name="hscrollbar_policy">GTK_POLICY_AUTOMATIC</property>
                    <property name="vscrollbar_policy">GTK_POLICY_AUTOMATIC</property>
                    <child>
                      <widget class="GtkTreeView" id="treeview2">
                        <property name="visible">True</property>
                        <property name="can_focus">True</property>
                        <property name="headers_clickable">True</property>
                      </widget>
                    </child>
                  </widget>
                </child>
              </widget>
              <packing>
                <property name="right_attach">3</property>
                <property name="top_attach">6</property>
                <property name="bottom_attach">7</property>
                <property name="y_options"></property>
              </packing>
            </child>
          </widget>
          <packing>
            <property name="position">1</property>
          </packing>
        </child>
        <child internal-child="action_area">
          <widget class="GtkHButtonBox" id="dialog-action_area2">
            <property name="visible">True</property>
            <property name="layout_style">GTK_BUTTONBOX_END</property>
            <child>
              <widget class="GtkButton" id="button4">
                <property name="visible">True</property>
                <property name="can_focus">True</property>
                <property name="receives_default">True</property>
                <property name="label" translatable="yes">gtk-close</property>
                <property name="use_stock">True</property>
                <property name="response_id">0</property>
              </widget>
            </child>
          </widget>
          <packing>
            <property name="expand">False</property>
            <property name="pack_type">GTK_PACK_END</property>
          </packing>
        </child>
      </widget>
    </child>
  </widget>
  <widget class="GtkWindow" id="main_window">
    <child>
      <widget class="GtkVBox" id="main_window_vbox">
        <property name="visible">True</property>
        <child>
          <widget class="GtkToolbar" id="main_toolbar">
            <property name="visible">True</property>
            <child>
              <widget class="GtkToolButton" id="main_toolbutton_build">
                <property name="visible">True</property>
                <property name="label" translatable="yes">Build</property>
                <property name="stock_id">gtk-execute</property>
              </widget>
              <packing>
                <property name="expand">False</property>
              </packing>
            </child>
          </widget>
          <packing>
            <property name="expand">False</property>
          </packing>
        </child>
        <child>
          <widget class="GtkVPaned" id="vpaned1">
            <property name="visible">True</property>
            <property name="can_focus">True</property>
            <child>
              <widget class="GtkScrolledWindow" id="results_scrolledwindow">
                <property name="visible">True</property>
                <property name="can_focus">True</property>
                <property name="hscrollbar_policy">GTK_POLICY_AUTOMATIC</property>
                <property name="vscrollbar_policy">GTK_POLICY_AUTOMATIC</property>
                <child>
                  <placeholder/>
                </child>
              </widget>
              <packing>
                <property name="resize">False</property>
                <property name="shrink">True</property>
              </packing>
            </child>
            <child>
              <widget class="GtkScrolledWindow" id="progress_scrolledwindow">
                <property name="visible">True</property>
                <property name="can_focus">True</property>
                <property name="hscrollbar_policy">GTK_POLICY_AUTOMATIC</property>
                <property name="vscrollbar_policy">GTK_POLICY_AUTOMATIC</property>
                <child>
                  <placeholder/>
                </child>
              </widget>
              <packing>
                <property name="resize">True</property>
                <property name="shrink">True</property>
              </packing>
            </child>
          </widget>
          <packing>
            <property name="position">1</property>
          </packing>
        </child>
      </widget>
    </child>
  </widget>
</glade-interface>
@@ -1,180 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2008 Intel Corporation
#
# Authored by Rob Bradford <rob@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import gtk
import gobject
# Needed for the bb.msg / bb.build / bb.event isinstance checks below
# (an editorial addition; "bb" was not imported in this file).
import bb

class RunningBuildModel (gtk.TreeStore):
    (COL_TYPE, COL_PACKAGE, COL_TASK, COL_MESSAGE, COL_ICON, COL_ACTIVE) = (0, 1, 2, 3, 4, 5)
    def __init__ (self):
        gtk.TreeStore.__init__ (self,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_BOOLEAN)

class RunningBuild (gobject.GObject):
    __gsignals__ = {
        'build-succeeded' : (gobject.SIGNAL_RUN_LAST,
                             gobject.TYPE_NONE,
                             ()),
        'build-failed' : (gobject.SIGNAL_RUN_LAST,
                          gobject.TYPE_NONE,
                          ())
    }
    pids_to_task = {}
    tasks_to_iter = {}

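    # Editorial note (not in the original): pids_to_task maps a running
    # task's pid to its (package, task) pair, and tasks_to_iter maps that
    # pair to its row in the model, e.g. (hypothetical values)
    # pids_to_task[1234] == ("busybox", "do_compile").
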
    def __init__ (self):
        gobject.GObject.__init__ (self)
        self.model = RunningBuildModel()

    def handle_event (self, event):
        # Handle an event from the event queue, this may result in updating
        # the model and thus the UI. Or it may be to tell us that the build
        # has finished successfully (or not, as the case may be.)

        parent = None
        pid = 0
        package = None
        task = None

        # If we have a pid attached to this message/event try and get the
        # (package, task) pair for it. If we get that then get the parent iter
        # for the message.
        if hasattr(event, 'pid'):
            pid = event.pid
            if self.pids_to_task.has_key(pid):
                (package, task) = self.pids_to_task[pid]
                parent = self.tasks_to_iter[(package, task)]

        if isinstance(event, bb.msg.Msg):
            # Set a pretty icon for the message based on its type.
            if isinstance(event, bb.msg.MsgWarn):
                icon = "dialog-warning"
            elif isinstance(event, bb.msg.MsgErr):
                icon = "dialog-error"
            else:
                icon = None

            # Ignore the "Running task i of n .." messages
            if (event._message.startswith ("Running task")):
                return

            # Add the message to the tree, either at the top level if parent
            # is None or otherwise as a descendent of a task.
            self.model.append (parent,
                               (event.__name__.split()[-1], # e.g. MsgWarn, MsgError
                                package,
                                task,
                                event._message,
                                icon,
                                False))
        elif isinstance(event, bb.build.TaskStarted):
            (package, task) = (event._package, event._task)

            # Save out this PID.
            self.pids_to_task[pid] = (package, task)

            # Check if we already have this package in our model. If so then
            # that can be the parent for the task. Otherwise we create a new
            # top level for the package.
            if (self.tasks_to_iter.has_key ((package, None))):
                parent = self.tasks_to_iter[(package, None)]
            else:
                parent = self.model.append (None, (None,
                                                   package,
                                                   None,
                                                   "Package: %s" % (package),
                                                   None,
                                                   False))
                self.tasks_to_iter[(package, None)] = parent

            # Because this parent package now has an active child, mark it as
            # such.
            self.model.set(parent, self.model.COL_ICON, "gtk-execute")

            # Add an entry in the model for this task
            i = self.model.append (parent, (None,
                                            package,
                                            task,
                                            "Task: %s" % (task),
                                            None,
                                            False))

            # Save out the iter so that we can find it when we have a message
            # that we need to attach to a task.
            self.tasks_to_iter[(package, task)] = i

            # Mark this task as active.
            self.model.set(i, self.model.COL_ICON, "gtk-execute")

        elif isinstance(event, bb.build.Task):

            if isinstance(event, bb.build.TaskFailed):
                # Mark the task as failed
                i = self.tasks_to_iter[(package, task)]
                self.model.set(i, self.model.COL_ICON, "dialog-error")

                # Mark the parent package as failed
                i = self.tasks_to_iter[(package, None)]
                self.model.set(i, self.model.COL_ICON, "dialog-error")
            else:
                # Mark the task as inactive
                i = self.tasks_to_iter[(package, task)]
                self.model.set(i, self.model.COL_ICON, None)

                # Mark the parent package as inactive
                i = self.tasks_to_iter[(package, None)]
                self.model.set(i, self.model.COL_ICON, None)

            # Clear the iters and the pids since when the task goes away the
            # pid will no longer be used for messages
            del self.tasks_to_iter[(package, task)]
            del self.pids_to_task[pid]

        elif isinstance(event, bb.event.BuildCompleted):
            failures = int (event._failures)

            # Emit the appropriate signal depending on the number of failures
            # (any failed task means the build as a whole failed; the
            # original compared against 1 here, which treated a single
            # failure as success).
            if (failures > 0):
                self.emit ("build-failed")
            else:
                self.emit ("build-succeeded")

class RunningBuildTreeView (gtk.TreeView):
    def __init__ (self):
        gtk.TreeView.__init__ (self)

        # The icon that indicates whether we're building or failed.
        renderer = gtk.CellRendererPixbuf ()
        col = gtk.TreeViewColumn ("Status", renderer)
        col.add_attribute (renderer, "icon-name", 4)
        self.append_column (col)

        # The message of the build.
        renderer = gtk.CellRendererText ()
        col = gtk.TreeViewColumn ("Message", renderer, text=3)
        self.append_column (col)

@@ -1,272 +0,0 @@
#
# BitBake Graphical GTK based Dependency Explorer
#
# Copyright (C) 2007 Ross Burton
# Copyright (C) 2007 - 2008 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import gobject
import gtk
import threading
import xmlrpclib
# Needed for the bb.event / bb.command / bb.cooker isinstance checks in
# init() below (an editorial addition; "bb" was not imported in this file).
import bb

# Package Model
(COL_PKG_NAME) = (0)

# Dependency Model
(TYPE_DEP, TYPE_RDEP) = (0, 1)
(COL_DEP_TYPE, COL_DEP_PARENT, COL_DEP_PACKAGE) = (0, 1, 2)

class PackageDepView(gtk.TreeView):
    def __init__(self, model, dep_type, label):
        gtk.TreeView.__init__(self)
        self.current = None
        self.dep_type = dep_type
        self.filter_model = model.filter_new()
        self.filter_model.set_visible_func(self._filter)
        self.set_model(self.filter_model)
        #self.connect("row-activated", self.on_package_activated, COL_DEP_PACKAGE)
        self.append_column(gtk.TreeViewColumn(label, gtk.CellRendererText(), text=COL_DEP_PACKAGE))

    def _filter(self, model, iter):
        (this_type, package) = model.get(iter, COL_DEP_TYPE, COL_DEP_PARENT)
        if this_type != self.dep_type: return False
        return package == self.current

    def set_current_package(self, package):
        self.current = package
        self.filter_model.refilter()

class PackageReverseDepView(gtk.TreeView):
    def __init__(self, model, label):
        gtk.TreeView.__init__(self)
        self.current = None
        self.filter_model = model.filter_new()
        self.filter_model.set_visible_func(self._filter)
        self.set_model(self.filter_model)
        self.append_column(gtk.TreeViewColumn(label, gtk.CellRendererText(), text=COL_DEP_PARENT))

    def _filter(self, model, iter):
        package = model.get_value(iter, COL_DEP_PACKAGE)
        return package == self.current

    def set_current_package(self, package):
        self.current = package
        self.filter_model.refilter()

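# Editorial aside (not part of the original file): both view classes above
# share the single depends_model and merely refilter it. A row stays visible
# only while its parent (or, for the reverse view, its package) column equals
# the currently selected package, so switching packages is just a refilter().
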
class DepExplorer(gtk.Window):
    def __init__(self):
        gtk.Window.__init__(self)
        self.set_title("Dependency Explorer")
        self.set_default_size(500, 500)
        self.connect("delete-event", gtk.main_quit)

        # Create the data models
        self.pkg_model = gtk.ListStore(gobject.TYPE_STRING)
        self.depends_model = gtk.ListStore(gobject.TYPE_INT, gobject.TYPE_STRING, gobject.TYPE_STRING)

        pane = gtk.HPaned()
        pane.set_position(250)
        self.add(pane)

        # The master list of packages
        scrolled = gtk.ScrolledWindow()
        scrolled.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
        scrolled.set_shadow_type(gtk.SHADOW_IN)
        self.pkg_treeview = gtk.TreeView(self.pkg_model)
        self.pkg_treeview.get_selection().connect("changed", self.on_cursor_changed)
        self.pkg_treeview.append_column(gtk.TreeViewColumn("Package", gtk.CellRendererText(), text=COL_PKG_NAME))
        pane.add1(scrolled)
        scrolled.add(self.pkg_treeview)

        box = gtk.VBox(homogeneous=True, spacing=4)

        # Runtime Depends
        scrolled = gtk.ScrolledWindow()
        scrolled.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
        scrolled.set_shadow_type(gtk.SHADOW_IN)
        self.rdep_treeview = PackageDepView(self.depends_model, TYPE_RDEP, "Runtime Depends")
        self.rdep_treeview.connect("row-activated", self.on_package_activated, COL_DEP_PACKAGE)
        scrolled.add(self.rdep_treeview)
        box.add(scrolled)

        # Build Depends
        scrolled = gtk.ScrolledWindow()
        scrolled.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
        scrolled.set_shadow_type(gtk.SHADOW_IN)
        self.dep_treeview = PackageDepView(self.depends_model, TYPE_DEP, "Build Depends")
        self.dep_treeview.connect("row-activated", self.on_package_activated, COL_DEP_PACKAGE)
        scrolled.add(self.dep_treeview)
        box.add(scrolled)
        pane.add2(box)

        # Reverse Depends
        scrolled = gtk.ScrolledWindow()
        scrolled.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
        scrolled.set_shadow_type(gtk.SHADOW_IN)
        self.revdep_treeview = PackageReverseDepView(self.depends_model, "Reverse Depends")
        self.revdep_treeview.connect("row-activated", self.on_package_activated, COL_DEP_PARENT)
        scrolled.add(self.revdep_treeview)
        box.add(scrolled)
        pane.add2(box)

        self.show_all()

    def on_package_activated(self, treeview, path, column, data_col):
        model = treeview.get_model()
        package = model.get_value(model.get_iter(path), data_col)

        pkg_path = []
        def finder(model, path, iter, needle):
            package = model.get_value(iter, COL_PKG_NAME)
            if package == needle:
                pkg_path.append(path)
                return True
            else:
                return False
        self.pkg_model.foreach(finder, package)
        if pkg_path:
            self.pkg_treeview.get_selection().select_path(pkg_path[0])
            self.pkg_treeview.scroll_to_cell(pkg_path[0])

    def on_cursor_changed(self, selection):
        (model, it) = selection.get_selected()
        # Check the iter returned by get_selected(); testing the builtin
        # "iter" here was a bug (it is always true).
        if it is None:
            current_package = None
        else:
            current_package = model.get_value(it, COL_PKG_NAME)
        self.rdep_treeview.set_current_package(current_package)
        self.dep_treeview.set_current_package(current_package)
        self.revdep_treeview.set_current_package(current_package)

def parse(depgraph, pkg_model, depends_model):

    for package in depgraph["pn"]:
        pkg_model.set(pkg_model.append(), COL_PKG_NAME, package)

    for package in depgraph["depends"]:
        for depend in depgraph["depends"][package]:
            depends_model.set (depends_model.append(),
                               COL_DEP_TYPE, TYPE_DEP,
                               COL_DEP_PARENT, package,
                               COL_DEP_PACKAGE, depend)

    for package in depgraph["rdepends-pn"]:
        for rdepend in depgraph["rdepends-pn"][package]:
            depends_model.set (depends_model.append(),
                               COL_DEP_TYPE, TYPE_RDEP,
                               COL_DEP_PARENT, package,
                               COL_DEP_PACKAGE, rdepend)

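# Editorial illustration (an assumption based on the loops in parse() above):
# the depgraph argument appears to be a nested mapping along the lines of
#
#   { "pn":          { "busybox": {...} },
#     "depends":     { "busybox": ["virtual/libc"] },
#     "rdepends-pn": { "busybox": ["update-rc.d"] } }
#
# where the package and dependency names shown are hypothetical.
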
class ProgressBar(gtk.Window):
    def __init__(self):
        gtk.Window.__init__(self)
        self.set_title("Parsing .bb files, please wait...")
        self.set_default_size(500, 0)
        self.connect("delete-event", gtk.main_quit)

        self.progress = gtk.ProgressBar()
        self.add(self.progress)
        self.show_all()

class gtkthread(threading.Thread):
    quit = threading.Event()
    def __init__(self, shutdown):
        threading.Thread.__init__(self)
        self.setDaemon(True)
        self.shutdown = shutdown

    def run(self):
        gobject.threads_init()
        gtk.gdk.threads_init()
        gtk.main()
        gtkthread.quit.set()

def init(server, eventHandler):

    try:
        cmdline = server.runCommand(["getCmdLineAction"])
        if not cmdline or cmdline[0] != "generateDotGraph":
            print "This UI is only compatible with the -g option"
            return
        ret = server.runCommand(["generateDepTreeEvent", cmdline[1], cmdline[2]])
        if ret != True:
            print "Couldn't run command! %s" % ret
            return
    except xmlrpclib.Fault, x:
        print "XMLRPC Fault getting commandline:\n %s" % x
        return

    shutdown = 0

    gtkgui = gtkthread(shutdown)
    gtkgui.start()

    gtk.gdk.threads_enter()
    pbar = ProgressBar()
    dep = DepExplorer()
    gtk.gdk.threads_leave()

    while True:
        try:
            event = eventHandler.waitEvent(0.25)
            if gtkthread.quit.isSet():
                break

            if event is None:
                continue
            if isinstance(event, bb.event.ParseProgress):
                x = event.sofar
                y = event.total
                if x == y:
                    print("\nParsing finished. %d cached, %d parsed, %d skipped, %d masked, %d errors."
                          % ( event.cached, event.parsed, event.skipped, event.masked, event.errors))
                    pbar.hide()
                gtk.gdk.threads_enter()
                pbar.progress.set_fraction(float(x)/float(y))
                pbar.progress.set_text("%d/%d (%2d %%)" % (x, y, x*100/y))
                gtk.gdk.threads_leave()
                continue

            if isinstance(event, bb.event.DepTreeGenerated):
                gtk.gdk.threads_enter()
                parse(event._depgraph, dep.pkg_model, dep.depends_model)
                gtk.gdk.threads_leave()

            if isinstance(event, bb.command.CookerCommandCompleted):
                continue
            if isinstance(event, bb.command.CookerCommandFailed):
                print "Command execution failed: %s" % event.error
                break
            if isinstance(event, bb.cooker.CookerExit):
                break

            continue

        except KeyboardInterrupt:
            if shutdown == 2:
                print "\nThird Keyboard Interrupt, exit.\n"
                break
            if shutdown == 1:
                print "\nSecond Keyboard Interrupt, stopping...\n"
                server.runCommand(["stateStop"])
            if shutdown == 0:
                print "\nKeyboard Interrupt, closing down...\n"
                server.runCommand(["stateShutdown"])
            shutdown = shutdown + 1
            pass

@@ -1,77 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2008 Intel Corporation
#
# Authored by Rob Bradford <rob@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import gobject
import gtk
import xmlrpclib
from bb.ui.crumbs.runningbuild import RunningBuildTreeView, RunningBuild

def event_handle_idle_func (eventHandler, build):

    # Consume as many messages as we can in the time available to us
    event = eventHandler.getEvent()
    while event:
        build.handle_event (event)
        event = eventHandler.getEvent()

    return True

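# Editorial note (not in the original): event_handle_idle_func() is hooked up
# with gobject.timeout_add() in init() below. Returning True keeps the
# callback scheduled so the queue is drained again on the next tick, whereas
# returning False would remove it.
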
class MainWindow (gtk.Window):
    def __init__ (self):
        gtk.Window.__init__ (self, gtk.WINDOW_TOPLEVEL)

        # Setup tree view and the scrolled window
        scrolled_window = gtk.ScrolledWindow ()
        self.add (scrolled_window)
        self.cur_build_tv = RunningBuildTreeView()
        scrolled_window.add (self.cur_build_tv)

def init (server, eventHandler):
    gobject.threads_init()
    gtk.gdk.threads_init()

    window = MainWindow ()
    window.show_all ()

    # Create the object for the current build
    running_build = RunningBuild ()
    window.cur_build_tv.set_model (running_build.model)
    try:
        cmdline = server.runCommand(["getCmdLineAction"])
        print cmdline
        if not cmdline:
            return 1
        ret = server.runCommand(cmdline)
        if ret != True:
            print "Couldn't get default commandline! %s" % ret
            return 1
    except xmlrpclib.Fault, x:
        print "XMLRPC Fault getting commandline:\n %s" % x
        return 1

    # Use a timeout function for probing the event queue to find out if we
    # have a message waiting for us.
    gobject.timeout_add (200,
                         event_handle_idle_func,
                         eventHandler,
                         running_build)

    gtk.main()

@@ -1,182 +0,0 @@
#
# BitBake (No)TTY UI Implementation
#
# Handling output to TTYs or files (no TTY)
#
# Copyright (C) 2006-2007 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os
import sys
import itertools
import xmlrpclib
# Needed for the bb.msg / bb.build / bb.event / bb.runqueue / bb.command /
# bb.cooker isinstance checks below (an editorial addition).
import bb
from bb import ui
from bb.ui import uihelper


parsespin = itertools.cycle( r'|/-\\' )

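# Editorial note (not in the original): itertools.cycle() endlessly repeats
# the spinner characters, so each parsespin.next() call below yields the next
# frame; the raw string actually holds two trailing backslashes, so the "\"
# frame appears twice per cycle.
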
def init(server, eventHandler):

    # Get values of variables which control our output
    includelogs = server.runCommand(["getVariable", "BBINCLUDELOGS"])
    loglines = server.runCommand(["getVariable", "BBINCLUDELOGS_LINES"])

    helper = uihelper.BBUIHelper()

    try:
        cmdline = server.runCommand(["getCmdLineAction"])
        #print cmdline
        if not cmdline:
            return 1
        ret = server.runCommand(cmdline)
        if ret != True:
            print "Couldn't get default commandline! %s" % ret
            return 1
    except xmlrpclib.Fault, x:
        print "XMLRPC Fault getting commandline:\n %s" % x
        return 1

    shutdown = 0
    return_value = 0
    while True:
        try:
            event = eventHandler.waitEvent(0.25)
            if event is None:
                continue
            #print event
            helper.eventHandler(event)
            if isinstance(event, bb.runqueue.runQueueExitWait):
                if not shutdown:
                    shutdown = 1
            if shutdown and helper.needUpdate:
                activetasks, failedtasks = helper.getTasks()
                if activetasks:
                    print "Waiting for %s active tasks to finish:" % len(activetasks)
                    tasknum = 1
                    for task in activetasks:
                        print "%s: %s (pid %s)" % (tasknum, activetasks[task]["title"], task)
                        tasknum = tasknum + 1

if isinstance(event, bb.msg.MsgPlain):
|
||||
print event._message
|
||||
continue
|
||||
if isinstance(event, bb.msg.MsgDebug):
|
||||
print 'DEBUG: ' + event._message
|
||||
continue
|
||||
if isinstance(event, bb.msg.MsgNote):
|
||||
print 'NOTE: ' + event._message
|
||||
continue
|
||||
if isinstance(event, bb.msg.MsgWarn):
|
||||
print 'WARNING: ' + event._message
|
||||
continue
|
||||
if isinstance(event, bb.msg.MsgError):
|
||||
return_value = 1
|
||||
print 'ERROR: ' + event._message
|
||||
continue
|
||||
if isinstance(event, bb.msg.MsgFatal):
|
||||
return_value = 1
|
||||
print 'FATAL: ' + event._message
|
||||
break
|
||||
if isinstance(event, bb.build.TaskFailed):
|
||||
return_value = 1
|
||||
logfile = event.logfile
|
||||
if logfile:
|
||||
print "ERROR: Logfile of failure stored in: %s" % logfile
|
||||
if 1 or includelogs:
|
||||
print "Log data follows:"
|
||||
f = open(logfile, "r")
|
||||
lines = []
|
||||
while True:
|
||||
l = f.readline()
|
||||
if l == '':
|
||||
break
|
||||
l = l.rstrip()
|
||||
if loglines:
|
||||
lines.append(' | %s' % l)
|
||||
if len(lines) > int(loglines):
|
||||
lines.pop(0)
|
||||
else:
|
||||
print '| %s' % l
|
||||
f.close()
|
||||
if lines:
|
||||
for line in lines:
|
||||
print line
|
||||
if isinstance(event, bb.build.TaskBase):
|
||||
print "NOTE: %s" % event._message
|
||||
continue
|
||||
if isinstance(event, bb.event.ParseProgress):
|
||||
x = event.sofar
|
||||
y = event.total
|
||||
if os.isatty(sys.stdout.fileno()):
|
||||
sys.stdout.write("\rNOTE: Handling BitBake files: %s (%04d/%04d) [%2d %%]" % ( parsespin.next(), x, y, x*100/y ) )
|
||||
sys.stdout.flush()
|
||||
else:
|
||||
if x == 1:
|
||||
sys.stdout.write("Parsing .bb files, please wait...")
|
||||
sys.stdout.flush()
|
||||
if x == y:
|
||||
sys.stdout.write("done.")
|
||||
sys.stdout.flush()
|
||||
if x == y:
|
||||
print("\nParsing of %d .bb files complete (%d cached, %d parsed). %d targets, %d skipped, %d masked, %d errors."
|
||||
% ( event.total, event.cached, event.parsed, event.virtuals, event.skipped, event.masked, event.errors))
|
||||
continue
|
||||
|
||||
if isinstance(event, bb.command.CookerCommandCompleted):
|
||||
break
|
||||
if isinstance(event, bb.command.CookerCommandSetExitCode):
|
||||
return_value = event.exitcode
|
||||
continue
|
||||
if isinstance(event, bb.command.CookerCommandFailed):
|
||||
return_value = 1
|
||||
print "Command execution failed: %s" % event.error
|
||||
break
|
||||
if isinstance(event, bb.cooker.CookerExit):
|
||||
break
|
||||
|
||||
# ignore
|
||||
if isinstance(event, bb.event.BuildStarted):
|
||||
continue
|
||||
if isinstance(event, bb.event.BuildCompleted):
|
||||
continue
|
||||
if isinstance(event, bb.event.MultipleProviders):
|
||||
continue
|
||||
if isinstance(event, bb.runqueue.runQueueEvent):
|
||||
continue
|
||||
if isinstance(event, bb.runqueue.runQueueExitWait):
|
||||
continue
|
||||
if isinstance(event, bb.event.StampUpdate):
|
||||
continue
|
||||
if isinstance(event, bb.event.ConfigParsed):
|
||||
continue
|
||||
if isinstance(event, bb.event.RecipeParsed):
|
||||
continue
|
||||
print "Unknown Event: %s" % event
|
||||
|
||||
except KeyboardInterrupt:
|
||||
if shutdown == 2:
|
||||
print "\nThird Keyboard Interrupt, exit.\n"
|
||||
break
|
||||
if shutdown == 1:
|
||||
print "\nSecond Keyboard Interrupt, stopping...\n"
|
||||
server.runCommand(["stateStop"])
|
||||
if shutdown == 0:
|
||||
print "\nKeyboard Interrupt, closing down...\n"
|
||||
server.runCommand(["stateShutdown"])
|
||||
shutdown = shutdown + 1
|
||||
pass
|
||||
return return_value
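The BBINCLUDELOGS_LINES branch in the TaskFailed handling above keeps a rolling tail of the failed task's log: each line is appended and the oldest dropped once the list exceeds the limit. The same logic as a standalone helper (a sketch; tail_last_lines is a hypothetical name, not part of this file):

    def tail_last_lines(logfile, loglines):
        # Retain only the final int(loglines) lines, formatted as above.
        lines = []
        f = open(logfile, "r")
        for l in f:
            lines.append(' | %s' % l.rstrip())
            if len(lines) > int(loglines):
                lines.pop(0)
        f.close()
        return lines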
@@ -1,335 +0,0 @@
#
# BitBake Curses UI Implementation
#
# Implements an ncurses frontend for the BitBake utility.
#
# Copyright (C) 2006 Michael 'Mickey' Lauer
# Copyright (C) 2006-2007 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

"""
We have the following windows:

    1.) Main Window: Shows what we are ultimately building and how far we are. Includes status bar
    2.) Thread Activity Window: Shows one status line for every concurrent bitbake thread.
    3.) Command Line Window: Contains an interactive command line where you can interact w/ Bitbake.

    Basic window layout is like that:

        |---------------------------------------------------------|
        | <Main Window>               | <Thread Activity Window>  |
        |                             | 0: foo do_compile complete|
        | Building Gtk+-2.6.10        | 1: bar do_patch complete  |
        | Status: 60%                 | ...                       |
        |                             | ...                       |
        |                             | ...                       |
        |---------------------------------------------------------|
        |<Command Line Window>                                    |
        |>>> which virtual/kernel                                 |
        |openzaurus-kernel                                        |
        |>>> _                                                    |
        |---------------------------------------------------------|

"""

import os, sys, curses, itertools, time
import bb
import xmlrpclib
from bb import ui
from bb.ui import uihelper

parsespin = itertools.cycle( r'|/-\\' )

X = 0
Y = 1
WIDTH = 2
HEIGHT = 3

MAXSTATUSLENGTH = 32

class NCursesUI:
    """
    NCurses UI Class
    """
    class Window:
        """Base Window Class"""
        def __init__( self, x, y, width, height, fg=curses.COLOR_BLACK, bg=curses.COLOR_WHITE ):
            self.win = curses.newwin( height, width, y, x )
            self.dimensions = ( x, y, width, height )
            """
            if curses.has_colors():
                color = 1
                curses.init_pair( color, fg, bg )
                self.win.bkgdset( ord(' '), curses.color_pair(color) )
            else:
                self.win.bkgdset( ord(' '), curses.A_BOLD )
            """
            self.erase()
            self.setScrolling()
            self.win.noutrefresh()

        def erase( self ):
            self.win.erase()

        def setScrolling( self, b = True ):
            self.win.scrollok( b )
            self.win.idlok( b )

        def setBoxed( self ):
            self.boxed = True
            self.win.box()
            self.win.noutrefresh()

        def setText( self, x, y, text, *args ):
            self.win.addstr( y, x, text, *args )
            self.win.noutrefresh()

        def appendText( self, text, *args ):
            self.win.addstr( text, *args )
            self.win.noutrefresh()

        def drawHline( self, y ):
            self.win.hline( y, 0, curses.ACS_HLINE, self.dimensions[WIDTH] )
            self.win.noutrefresh()

    class DecoratedWindow( Window ):
        """Base class for windows with a box and a title bar"""
        def __init__( self, title, x, y, width, height, fg=curses.COLOR_BLACK, bg=curses.COLOR_WHITE ):
            NCursesUI.Window.__init__( self, x+1, y+3, width-2, height-4, fg, bg )
            self.decoration = NCursesUI.Window( x, y, width, height, fg, bg )
            self.decoration.setBoxed()
            self.decoration.win.hline( 2, 1, curses.ACS_HLINE, width-2 )
            self.setTitle( title )

        def setTitle( self, title ):
            self.decoration.setText( 1, 1, title.center( self.dimensions[WIDTH]-2 ), curses.A_BOLD )

#-------------------------------------------------------------------------#
#    class TitleWindow( Window ):
#-------------------------------------------------------------------------#
#        """Title Window"""
#        def __init__( self, x, y, width, height ):
#            NCursesUI.Window.__init__( self, x, y, width, height )
#            version = bb.__version__
#            title = "BitBake %s" % version
#            credit = "(C) 2003-2007 Team BitBake"
#            #self.win.hline( 2, 1, curses.ACS_HLINE, width-2 )
#            self.win.border()
#            self.setText( 1, 1, title.center( self.dimensions[WIDTH]-2 ), curses.A_BOLD )
#            self.setText( 1, 2, credit.center( self.dimensions[WIDTH]-2 ), curses.A_BOLD )

#-------------------------------------------------------------------------#
    class ThreadActivityWindow( DecoratedWindow ):
#-------------------------------------------------------------------------#
        """Thread Activity Window"""
        def __init__( self, x, y, width, height ):
            NCursesUI.DecoratedWindow.__init__( self, "Thread Activity", x, y, width, height )

        def setStatus( self, thread, text ):
            line = "%02d: %s" % ( thread, text )
            width = self.dimensions[WIDTH]
            if ( len(line) > width ):
                line = line[:width-3] + "..."
            else:
                line = line.ljust( width )
            self.setText( 0, thread, line )

#-------------------------------------------------------------------------#
    class MainWindow( DecoratedWindow ):
#-------------------------------------------------------------------------#
        """Main Window"""
        def __init__( self, x, y, width, height ):
            self.StatusPosition = width - MAXSTATUSLENGTH
            NCursesUI.DecoratedWindow.__init__( self, None, x, y, width, height )
            curses.nl()

        def setTitle( self, title ):
            title = "BitBake %s" % bb.__version__
            self.decoration.setText( 2, 1, title, curses.A_BOLD )
            self.decoration.setText( self.StatusPosition - 8, 1, "Status:", curses.A_BOLD )

        def setStatus(self, status):
            while len(status) < MAXSTATUSLENGTH:
                status = status + " "
            self.decoration.setText( self.StatusPosition, 1, status, curses.A_BOLD )


#-------------------------------------------------------------------------#
    class ShellOutputWindow( DecoratedWindow ):
#-------------------------------------------------------------------------#
        """Interactive Command Line Output"""
        def __init__( self, x, y, width, height ):
            NCursesUI.DecoratedWindow.__init__( self, "Command Line Window", x, y, width, height )

#-------------------------------------------------------------------------#
    class ShellInputWindow( Window ):
#-------------------------------------------------------------------------#
        """Interactive Command Line Input"""
        def __init__( self, x, y, width, height ):
            NCursesUI.Window.__init__( self, x, y, width, height )

# put that to the top again from curses.textpad import Textbox
#            self.textbox = Textbox( self.win )
#            t = threading.Thread()
#            t.run = self.textbox.edit
#            t.start()

#-------------------------------------------------------------------------#
    def main(self, stdscr, server, eventHandler):
#-------------------------------------------------------------------------#
        height, width = stdscr.getmaxyx()

        # for now split it like that:
        # MAIN_y + THREAD_y = 2/3 screen at the top
        # MAIN_x = 2/3 left, THREAD_y = 1/3 right
        # CLI_y = 1/3 of screen at the bottom
        # CLI_x = full

        main_left = 0
        main_top = 0
        main_height = ( height / 3 * 2 )
        main_width = ( width / 3 ) * 2
        clo_left = main_left
        clo_top = main_top + main_height
        clo_height = height - main_height - main_top - 1
        clo_width = width
        cli_left = main_left
        cli_top = clo_top + clo_height
        cli_height = 1
        cli_width = width
        thread_left = main_left + main_width
        thread_top = main_top
        thread_height = main_height
        thread_width = width - main_width

        #tw = self.TitleWindow( 0, 0, width, main_top )
        mw = self.MainWindow( main_left, main_top, main_width, main_height )
        taw = self.ThreadActivityWindow( thread_left, thread_top, thread_width, thread_height )
        clo = self.ShellOutputWindow( clo_left, clo_top, clo_width, clo_height )
        cli = self.ShellInputWindow( cli_left, cli_top, cli_width, cli_height )
        cli.setText( 0, 0, "BB>" )

        mw.setStatus("Idle")

        helper = uihelper.BBUIHelper()
        shutdown = 0

        try:
            cmdline = server.runCommand(["getCmdLineAction"])
            if not cmdline:
                return
            ret = server.runCommand(cmdline)
            if ret != True:
                print "Couldn't get default commandline! %s" % ret
                return
        except xmlrpclib.Fault, x:
            print "XMLRPC Fault getting commandline:\n %s" % x
            return

        exitflag = False
        while not exitflag:
            try:
                event = eventHandler.waitEvent(0.25)
                if not event:
                    continue
                helper.eventHandler(event)
                #mw.appendText("%s\n" % event[0])
                if isinstance(event, bb.build.Task):
                    mw.appendText("NOTE: %s\n" % event._message)
                if isinstance(event, bb.msg.MsgDebug):
                    mw.appendText('DEBUG: ' + event._message + '\n')
                if isinstance(event, bb.msg.MsgNote):
                    mw.appendText('NOTE: ' + event._message + '\n')
                if isinstance(event, bb.msg.MsgWarn):
                    mw.appendText('WARNING: ' + event._message + '\n')
                if isinstance(event, bb.msg.MsgError):
                    mw.appendText('ERROR: ' + event._message + '\n')
                if isinstance(event, bb.msg.MsgFatal):
                    mw.appendText('FATAL: ' + event._message + '\n')
                if isinstance(event, bb.event.ParseProgress):
                    x = event.sofar
                    y = event.total
                    if x == y:
                        mw.setStatus("Idle")
                        mw.appendText("Parsing finished. %d cached, %d parsed, %d skipped, %d masked."
                                % ( event.cached, event.parsed, event.skipped, event.masked ))
                    else:
                        mw.setStatus("Parsing: %s (%04d/%04d) [%2d %%]" % ( parsespin.next(), x, y, x*100/y ) )
#                if isinstance(event, bb.build.TaskFailed):
#                    if event.logfile:
#                        if data.getVar("BBINCLUDELOGS", d):
#                            bb.msg.error(bb.msg.domain.Build, "log data follows (%s)" % logfile)
#                            number_of_lines = data.getVar("BBINCLUDELOGS_LINES", d)
#                            if number_of_lines:
#                                os.system('tail -n%s %s' % (number_of_lines, logfile))
#                            else:
#                                f = open(logfile, "r")
#                                while True:
#                                    l = f.readline()
#                                    if l == '':
#                                        break
#                                    l = l.rstrip()
#                                    print '| %s' % l
#                                f.close()
#                        else:
#                            bb.msg.error(bb.msg.domain.Build, "see log in %s" % logfile)

                if isinstance(event, bb.command.CookerCommandCompleted):
                    exitflag = True
                if isinstance(event, bb.command.CookerCommandFailed):
                    mw.appendText("Command execution failed: %s" % event.error)
                    time.sleep(2)
                    exitflag = True
                if isinstance(event, bb.cooker.CookerExit):
                    exitflag = True

                if helper.needUpdate:
                    activetasks, failedtasks = helper.getTasks()
                    taw.erase()
                    taw.setText(0, 0, "")
                    if activetasks:
                        taw.appendText("Active Tasks:\n")
                        for task in activetasks:
                            taw.appendText(task)
                    if failedtasks:
                        taw.appendText("Failed Tasks:\n")
                        for task in failedtasks:
                            taw.appendText(task)

                curses.doupdate()
            except KeyboardInterrupt:
                if shutdown == 2:
                    mw.appendText("Third Keyboard Interrupt, exit.\n")
                    exitflag = True
                if shutdown == 1:
                    mw.appendText("Second Keyboard Interrupt, stopping...\n")
                    server.runCommand(["stateStop"])
                if shutdown == 0:
                    mw.appendText("Keyboard Interrupt, closing down...\n")
                    server.runCommand(["stateShutdown"])
                shutdown = shutdown + 1
                pass

def init(server, eventHandler):
    if not os.isatty(sys.stdout.fileno()):
        print "FATAL: Unable to run 'ncurses' UI without a TTY."
        return
    ui = NCursesUI()
    try:
        curses.wrapper(ui.main, server, eventHandler)
    except:
        import traceback
        traceback.print_exc()
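The layout comments in NCursesUI.main() come down to integer arithmetic on the terminal size. With the integer division of the Python 2 this file targets, a classic 80x24 terminal works out as follows (a worked example, not code from the file):

    height, width = 24, 80
    main_height = height / 3 * 2            # 16 rows: top two thirds
    main_width = (width / 3) * 2            # 52 columns: left two thirds
    thread_width = width - main_width       # 28 columns for thread activity
    clo_height = height - main_height - 1   # 7 rows of command output
    cli_height = 1                          # single command input row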
@@ -1,425 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2008 Intel Corporation
#
# Authored by Rob Bradford <rob@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import gtk
import gobject
import gtk.glade
import threading
import urllib2
import os

from bb.ui.crumbs.buildmanager import BuildManager, BuildConfiguration
from bb.ui.crumbs.buildmanager import BuildManagerTreeView

from bb.ui.crumbs.runningbuild import RunningBuild, RunningBuildTreeView

# The metadata loader is used by the BuildSetupDialog to download the
# available options to populate the dialog
class MetaDataLoader(gobject.GObject):
    """ This class provides the mechanism for loading the metadata (the
    fetching and parsing) from a given URL. The metadata encompasses details
    on what machines are available, the distributions and images available for
    each machine, and the URIs to use for building the given machine."""
    __gsignals__ = {
        'success' : (gobject.SIGNAL_RUN_LAST,
                     gobject.TYPE_NONE,
                     ()),
        'error' : (gobject.SIGNAL_RUN_LAST,
                   gobject.TYPE_NONE,
                   (gobject.TYPE_STRING,))
        }

    # We use these little helper functions to ensure that we take the gdk lock
    # when emitting the signal. These functions are called as idles (so that
    # they happen in the gtk / main thread's main loop.
    def emit_error_signal (self, remark):
        gtk.gdk.threads_enter()
        self.emit ("error", remark)
        gtk.gdk.threads_leave()

    def emit_success_signal (self):
        gtk.gdk.threads_enter()
        self.emit ("success")
        gtk.gdk.threads_leave()

    def __init__ (self):
        gobject.GObject.__init__ (self)

    class LoaderThread(threading.Thread):
        """ This class provides an asynchronous loader for the metadata (by
        using threads and signals). This is useful since the metadata may be
        at a remote URL."""
        class LoaderImportException (Exception):
            pass

        def __init__(self, loader, url):
            threading.Thread.__init__ (self)
            self.url = url
            self.loader = loader

        def run (self):
            result = {}
            try:
                f = urllib2.urlopen (self.url)

                # Parse the metadata format. The format is....
                # <machine>;<default distro>|<distro>...;<default image>|<image>...;<type##url>|...
                for line in f.readlines():
                    components = line.split(";")
                    if (len (components) < 4):
                        raise MetaDataLoader.LoaderThread.LoaderImportException
                    machine = components[0]
                    distros = components[1].split("|")
                    images = components[2].split("|")
                    urls = components[3].split("|")

                    result[machine] = (distros, images, urls)

                # Create an object representing this *potential*
                # configuration. It can become concrete if the machine, distro
                # and image are all chosen in the UI
                configuration = BuildConfiguration()
                configuration.metadata_url = self.url
                configuration.machine_options = result
                self.loader.configuration = configuration

                # Emit that we've actually got a configuration
                gobject.idle_add (MetaDataLoader.emit_success_signal,
                    self.loader)

            except MetaDataLoader.LoaderThread.LoaderImportException, e:
                gobject.idle_add (MetaDataLoader.emit_error_signal, self.loader,
                    "Repository metadata corrupt")
            except Exception, e:
                gobject.idle_add (MetaDataLoader.emit_error_signal, self.loader,
                    "Unable to download repository metadata")
                print e

    def try_fetch_from_url (self, url):
        # Try and download the metadata, firing a signal if successful
        thread = MetaDataLoader.LoaderThread(self, url)
        thread.start()

class BuildSetupDialog (gtk.Dialog):
    RESPONSE_BUILD = 1

    # A little helper method that just sets the states on the widgets based on
    # whether we've got good metadata or not.
    def set_configurable (self, configurable):
        if (self.configurable == configurable):
            return

        self.configurable = configurable
        for widget in self.conf_widgets:
            widget.set_sensitive (configurable)

        if not configurable:
            self.machine_combo.set_active (-1)
            self.distribution_combo.set_active (-1)
            self.image_combo.set_active (-1)

    # GTK widget callbacks
    def refresh_button_clicked (self, button):
        # Refresh button clicked.

        url = self.location_entry.get_chars (0, -1)
        self.loader.try_fetch_from_url(url)

    def repository_entry_editable_changed (self, entry):
        if (len (entry.get_chars (0, -1)) > 0):
            self.refresh_button.set_sensitive (True)
        else:
            self.refresh_button.set_sensitive (False)
            self.clear_status_message()

        # If we were previously configurable we are no longer since the
        # location entry has been changed
        self.set_configurable (False)

    def machine_combo_changed (self, combobox):
        active_iter = combobox.get_active_iter()

        if not active_iter:
            return

        model = combobox.get_model()

        if model:
            chosen_machine = model.get (active_iter, 0)[0]

            (distros_model, images_model) = \
                self.loader.configuration.get_distro_and_images_models (chosen_machine)

            self.distribution_combo.set_model (distros_model)
            self.image_combo.set_model (images_model)

    # Callbacks from the loader
    def loader_success_cb (self, loader):
        self.status_image.set_from_icon_name ("info",
            gtk.ICON_SIZE_BUTTON)
        self.status_image.show()
        self.status_label.set_label ("Repository metadata successfully downloaded")

        # Set the models on the combo boxes based on the models generated from
        # the configuration that the loader has created

        # We just need to set the machine here, that then determines the
        # distro and image options. Cunning huh? :-)

        self.configuration = self.loader.configuration
        model = self.configuration.get_machines_model ()
        self.machine_combo.set_model (model)

        self.set_configurable (True)

    def loader_error_cb (self, loader, message):
        self.status_image.set_from_icon_name ("error",
            gtk.ICON_SIZE_BUTTON)
        self.status_image.show()
        self.status_label.set_text ("Error downloading repository metadata")
        for widget in self.conf_widgets:
            widget.set_sensitive (False)

    def clear_status_message (self):
        self.status_image.hide()
        self.status_label.set_label (
            """<i>Enter the repository location and press _Refresh</i>""")

    def __init__ (self):
        gtk.Dialog.__init__ (self)

        # Cancel
        self.add_button (gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL)

        # Build
        button = gtk.Button ("_Build", None, True)
        image = gtk.Image ()
        image.set_from_stock (gtk.STOCK_EXECUTE, gtk.ICON_SIZE_BUTTON)
        button.set_image (image)
        self.add_action_widget (button, BuildSetupDialog.RESPONSE_BUILD)
        button.show_all ()

        # Pull in *just* the table from the Glade XML data.
        gxml = gtk.glade.XML (os.path.dirname(__file__) + "/crumbs/puccho.glade",
            root = "build_table")
        table = gxml.get_widget ("build_table")
        self.vbox.pack_start (table, True, False, 0)

        # Grab all the widgets that we need to turn on/off when we refresh...
        self.conf_widgets = []
        self.conf_widgets += [gxml.get_widget ("machine_label")]
        self.conf_widgets += [gxml.get_widget ("distribution_label")]
        self.conf_widgets += [gxml.get_widget ("image_label")]
        self.conf_widgets += [gxml.get_widget ("machine_combo")]
        self.conf_widgets += [gxml.get_widget ("distribution_combo")]
        self.conf_widgets += [gxml.get_widget ("image_combo")]

        # Grab the status widgets
        self.status_image = gxml.get_widget ("status_image")
        self.status_label = gxml.get_widget ("status_label")

        # Grab the refresh button and connect to the clicked signal
        self.refresh_button = gxml.get_widget ("refresh_button")
        self.refresh_button.connect ("clicked", self.refresh_button_clicked)

        # Grab the location entry and connect to editable::changed
        self.location_entry = gxml.get_widget ("location_entry")
        self.location_entry.connect ("changed",
            self.repository_entry_editable_changed)

        # Grab the machine combo and hook onto the changed signal. This then
        # allows us to populate the distro and image combos
        self.machine_combo = gxml.get_widget ("machine_combo")
        self.machine_combo.connect ("changed", self.machine_combo_changed)

        # Setup the combo
        cell = gtk.CellRendererText()
        self.machine_combo.pack_start(cell, True)
        self.machine_combo.add_attribute(cell, 'text', 0)

        # Grab the distro and image combos. We need these to populate with
        # models once the machine is chosen
        self.distribution_combo = gxml.get_widget ("distribution_combo")
        cell = gtk.CellRendererText()
        self.distribution_combo.pack_start(cell, True)
        self.distribution_combo.add_attribute(cell, 'text', 0)

        self.image_combo = gxml.get_widget ("image_combo")
        cell = gtk.CellRendererText()
        self.image_combo.pack_start(cell, True)
        self.image_combo.add_attribute(cell, 'text', 0)

        # Put the default descriptive text in the status box
        self.clear_status_message()

        # Mark as non-configurable, this just greys out the widgets the
        # user can't yet use
        self.configurable = False
        self.set_configurable(False)

        # Show the table
        table.show_all ()

        # The loader and some signals connected to it to update the status
        # area
        self.loader = MetaDataLoader()
        self.loader.connect ("success", self.loader_success_cb)
        self.loader.connect ("error", self.loader_error_cb)

    def update_configuration (self):
        """ A poorly named function but it updates the internal configuration
        from the widgets. This can make that configuration concrete and can
        thus be used for building """
        # Extract the chosen machine from the combo
        model = self.machine_combo.get_model()
        active_iter = self.machine_combo.get_active_iter()
        if (active_iter):
            self.configuration.machine = model.get(active_iter, 0)[0]

        # Extract the chosen distro from the combo
        model = self.distribution_combo.get_model()
        active_iter = self.distribution_combo.get_active_iter()
        if (active_iter):
            self.configuration.distro = model.get(active_iter, 0)[0]

        # Extract the chosen image from the combo
        model = self.image_combo.get_model()
        active_iter = self.image_combo.get_active_iter()
        if (active_iter):
            self.configuration.image = model.get(active_iter, 0)[0]

# This function operates to pull events out from the event queue and then push
# them into the RunningBuild (which then drives the RunningBuild which then
# pushes through and updates the progress tree view.)
#
# TODO: Should be a method on the RunningBuild class
def event_handle_timeout (eventHandler, build):
    # Consume as many messages as we can ...
    event = eventHandler.getEvent()
    while event:
        build.handle_event (event)
        event = eventHandler.getEvent()
    return True

class MainWindow (gtk.Window):

    # Callback that gets fired when the user hits a button in the
    # BuildSetupDialog.
    def build_dialog_box_response_cb (self, dialog, response_id):
        conf = None
        if (response_id == BuildSetupDialog.RESPONSE_BUILD):
            dialog.update_configuration()
            print dialog.configuration.machine, dialog.configuration.distro, \
                dialog.configuration.image
            conf = dialog.configuration

        dialog.destroy()

        if conf:
            self.manager.do_build (conf)

    def build_button_clicked_cb (self, button):
        dialog = BuildSetupDialog ()

        # For some unknown reason Dialog.run causes nice little deadlocks ... :-(
        dialog.connect ("response", self.build_dialog_box_response_cb)
        dialog.show()

    def __init__ (self):
        gtk.Window.__init__ (self)

        # Pull in *just* the main vbox from the Glade XML data and then pack
        # that inside the window
        gxml = gtk.glade.XML (os.path.dirname(__file__) + "/crumbs/puccho.glade",
            root = "main_window_vbox")
        vbox = gxml.get_widget ("main_window_vbox")
        self.add (vbox)

        # Create the tree views for the build manager view and the progress view
        self.build_manager_view = BuildManagerTreeView()
        self.running_build_view = RunningBuildTreeView()

        # Grab the scrolled windows that we put the tree views into
        self.results_scrolledwindow = gxml.get_widget ("results_scrolledwindow")
        self.progress_scrolledwindow = gxml.get_widget ("progress_scrolledwindow")

        # Put the tree views inside ...
        self.results_scrolledwindow.add (self.build_manager_view)
        self.progress_scrolledwindow.add (self.running_build_view)

        # Hook up the build button...
        self.build_button = gxml.get_widget ("main_toolbutton_build")
        self.build_button.connect ("clicked", self.build_button_clicked_cb)

# I'm not very happy about the current ownership of the RunningBuild. I have
# my suspicions that this object should be held by the BuildManager since we
# care about the signals in the manager

def running_build_succeeded_cb (running_build, manager):
    # Notify the manager that a build has succeeded. This is necessary as part
    # of the 'hack' that we use for making the row in the model / view
    # representing the ongoing build change into a row representing the
    # completed build. Since we know only one build can be running at a time
    # then we can handle this.

    # FIXME: Refactor all this so that the RunningBuild is owned by the
    # BuildManager. It can then hook onto the signals directly and drive
    # interesting things it cares about.
    manager.notify_build_succeeded ()
    print "build succeeded"

def running_build_failed_cb (running_build, manager):
    # As above
    print "build failed"
    manager.notify_build_failed ()

def init (server, eventHandler):
    # Initialise threading...
    gobject.threads_init()
    gtk.gdk.threads_init()

    main_window = MainWindow ()
    main_window.show_all ()

    # Set up the build manager stuff in general
    builds_dir = os.path.join (os.getcwd(), "results")
    manager = BuildManager (server, builds_dir)
    main_window.build_manager_view.set_model (manager.model)

    # Do the running build setup
    running_build = RunningBuild ()
    main_window.running_build_view.set_model (running_build.model)
    running_build.connect ("build-succeeded", running_build_succeeded_cb,
        manager)
    running_build.connect ("build-failed", running_build_failed_cb, manager)

    # We need to save the manager into the MainWindow so that the toolbar
    # button can use it.
    # FIXME: Refactor ?
    main_window.manager = manager

    # Use a timeout function for probing the event queue to find out if we
    # have a message waiting for us.
    gobject.timeout_add (200,
        event_handle_timeout,
        eventHandler,
        running_build)

    gtk.main()
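The metadata format documented in MetaDataLoader.LoaderThread.run() packs four ';'-separated fields per machine, with '|' separating the alternatives inside a field. A worked example of the parse (the machine, image and URL values are hypothetical):

    line = "qemuarm;poky|poky-bleeding;poky-image-minimal|poky-image-sato;svn##http://example.com/repo"
    components = line.split(";")
    machine = components[0]              # "qemuarm"
    distros = components[1].split("|")   # ["poky", "poky-bleeding"]
    images = components[2].split("|")    # ["poky-image-minimal", "poky-image-sato"]
    urls = components[3].split("|")      # ["svn##http://example.com/repo"]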
@@ -1,125 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2006 - 2007  Michael 'Mickey' Lauer
# Copyright (C) 2006 - 2007  Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.


"""
Use this class to fork off a thread to receive event callbacks from the bitbake
server and queue them for the UI to process. This process must be used to avoid
client/server deadlocks.
"""

import socket, threading, pickle
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler

class BBUIEventQueue:
    def __init__(self, BBServer):

        self.eventQueue = []
        self.eventQueueLock = threading.Lock()
        self.eventQueueNotify = threading.Event()

        self.BBServer = BBServer

        self.t = threading.Thread()
        self.t.setDaemon(True)
        self.t.run = self.startCallbackHandler
        self.t.start()

    def getEvent(self):

        self.eventQueueLock.acquire()

        if len(self.eventQueue) == 0:
            self.eventQueueLock.release()
            return None

        item = self.eventQueue.pop(0)

        if len(self.eventQueue) == 0:
            self.eventQueueNotify.clear()

        self.eventQueueLock.release()
        return item

    def waitEvent(self, delay):
        self.eventQueueNotify.wait(delay)
        return self.getEvent()

    def queue_event(self, event):
        self.eventQueueLock.acquire()
        self.eventQueue.append(pickle.loads(event))
        self.eventQueueNotify.set()
        self.eventQueueLock.release()

    def startCallbackHandler(self):

        server = UIXMLRPCServer()
        self.host, self.port = server.socket.getsockname()

        server.register_function( self.system_quit, "event.quit" )
        server.register_function( self.queue_event, "event.send" )
        server.socket.settimeout(1)

        self.EventHandle = self.BBServer.registerEventHandler(self.host, self.port)

        self.server = server
        while not server.quit:
            server.handle_request()
        server.server_close()

    def system_quit( self ):
        """
        Shut down the callback thread
        """
        try:
            self.BBServer.unregisterEventHandler(self.EventHandle)
        except:
            pass
        self.server.quit = True

class UIXMLRPCServer (SimpleXMLRPCServer):

    def __init__( self, interface = ("localhost", 0) ):
        self.quit = False
        SimpleXMLRPCServer.__init__( self,
                                     interface,
                                     requestHandler=SimpleXMLRPCRequestHandler,
                                     logRequests=False, allow_none=True)

    def get_request(self):
        while not self.quit:
            try:
                sock, addr = self.socket.accept()
                sock.settimeout(1)
                return (sock, addr)
            except socket.timeout:
                pass
        return (None, None)

    def close_request(self, request):
        if request is None:
            return
        SimpleXMLRPCServer.close_request(self, request)

    def process_request(self, request, client_address):
        if request is None:
            return
        SimpleXMLRPCServer.process_request(self, request, client_address)
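BBUIEventQueue pairs a threading.Lock (protecting the list) with a threading.Event (signalling that the list is non-empty), so waitEvent() can sleep instead of polling. The consumer side then reduces to the loop the text-mode UIs above use (a sketch; handle stands in for the caller's dispatch code):

    def consume(queue, handle):
        while True:
            event = queue.waitEvent(0.25)   # sleeps on the Event, 250 ms max
            if event is None:
                continue                    # timeout, loop so Ctrl-C is serviced
            handle(event)                   # e.g. the isinstance() chains above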
@@ -1,50 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2006 - 2007  Michael 'Mickey' Lauer
# Copyright (C) 2006 - 2007  Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

class BBUIHelper:
    def __init__(self):
        self.needUpdate = False
        self.running_tasks = {}
        self.failed_tasks = []

    def eventHandler(self, event):
        if isinstance(event, bb.build.TaskStarted):
            self.running_tasks[event.pid] = { 'title' : "%s %s" % (event._package, event._task) }
            self.needUpdate = True
        if isinstance(event, bb.build.TaskSucceeded):
            del self.running_tasks[event.pid]
            self.needUpdate = True
        if isinstance(event, bb.build.TaskFailed):
            del self.running_tasks[event.pid]
            self.failed_tasks.append( { 'title' : "%s %s" % (event._package, event._task)})
            self.needUpdate = True

        # Add runqueue event handling
        #if isinstance(event, bb.runqueue.runQueueTaskCompleted):
        #    a = 1
        #if isinstance(event, bb.runqueue.runQueueTaskStarted):
        #    a = 1
        #if isinstance(event, bb.runqueue.runQueueTaskFailed):
        #    a = 1
        #if isinstance(event, bb.runqueue.runQueueExitWait):
        #    a = 1

    def getTasks(self):
        self.needUpdate = False
        return (self.running_tasks, self.failed_tasks)
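BBUIHelper implements a cheap change-notification scheme: eventHandler() sets needUpdate, and getTasks() clears it again, so a frontend redraws only when the task state actually changed. The intended calling pattern, as both the knotty and ncurses loops above use it (redraw is a hypothetical placeholder):

    helper = BBUIHelper()
    # inside the UI's event loop:
    #     helper.eventHandler(event)
    #     if helper.needUpdate:
    #         active, failed = helper.getTasks()   # also resets needUpdate
    #         redraw(active, failed)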
@@ -19,35 +19,32 @@ BitBake Utility Functions
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

separators = ".-"
digits = "0123456789"
ascii_letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

import re, fcntl, os, types, bb, string, stat, shutil
from commands import getstatusoutput
import re

def explode_version(s):
    r = []
    alpha_regexp = re.compile('^([a-zA-Z]+)(.*)$')
    numeric_regexp = re.compile('^(\d+)(.*)$')
    while (s != ''):
        if s[0] in string.digits:
        if s[0] in digits:
            m = numeric_regexp.match(s)
            r.append(int(m.group(1)))
            s = m.group(2)
            continue
        if s[0] in string.letters:
        if s[0] in ascii_letters:
            m = alpha_regexp.match(s)
            r.append(m.group(1))
            s = m.group(2)
            continue
        r.append(s[0])
        s = s[1:]
    return r

def vercmp_part(a, b):
    va = explode_version(a)
    vb = explode_version(b)
    sa = False
    sb = False
    while True:
        if va == []:
            ca = None
@@ -59,28 +56,16 @@ def vercmp_part(a, b):
            cb = vb.pop(0)
        if ca == None and cb == None:
            return 0

        if type(ca) is types.StringType:
            sa = ca in separators
        if type(cb) is types.StringType:
            sb = cb in separators
        if sa and not sb:
            return -1
        if not sa and sb:
            return 1

        if ca > cb:
            return 1
        if ca < cb:
            return -1

def vercmp(ta, tb):
    (ea, va, ra) = ta
    (eb, vb, rb) = tb
    (va, ra) = ta
    (vb, rb) = tb

    r = int(ea)-int(eb)
    if (r == 0):
        r = vercmp_part(va, vb)
    r = vercmp_part(va, vb)
    if (r == 0):
        r = vercmp_part(ra, rb)
    return r
@@ -98,45 +83,18 @@ def explode_deps(s):
    for i in l:
        if i[0] == '(':
            flag = True
            #j = []
        if not flag:
            j = []
        if flag:
            j.append(i)
        else:
            r.append(i)
        #else:
        #    j.append(i)
        if flag and i.endswith(')'):
            flag = False
            # Ignore version
            #r[-1] += ' ' + ' '.join(j)
    return r

def explode_dep_versions(s):
    """
    Take an RDEPENDS style string of format:
      "DEPEND1 (optional version) DEPEND2 (optional version) ..."
    and return a dictionary of dependencies and versions.
    """
    r = {}
    l = s.split()
    lastdep = None
    lastver = ""
    inversion = False
    for i in l:
        if i[0] == '(':
            inversion = True
            lastver = i[1:] or ""
            #j = []
        elif inversion and i.endswith(')'):
            inversion = False
            lastver = lastver + " " + (i[:-1] or "")
            r[lastdep] = lastver
        elif not inversion:
            r[i] = None
            lastdep = i
            lastver = ""
        elif inversion:
            lastver = lastver + " " + i

    return r
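explode_dep_versions() walks the token list with a small in-version state machine: a dependency is first recorded with None, then overwritten with its constraint once the closing ')' token arrives. One worked input (hypothetical package names):

    # explode_dep_versions("foo (>= 1.2) bar")
    #   ->  {'foo': '>= 1.2', 'bar': None}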

def _print_trace(body, line):
    """
@@ -164,8 +122,8 @@ def better_compile(text, file, realfile):

    # split the text into lines again
    body = text.split('\n')
    bb.msg.error(bb.msg.domain.Util, "Error in compiling python function in: ", realfile)
    bb.msg.error(bb.msg.domain.Util, "The lines leading to this error were:")
    bb.msg.error(bb.msg.domain.Util, "Error in compiling: ", realfile)
    bb.msg.error(bb.msg.domain.Util, "The lines resulting into this error were:")
    bb.msg.error(bb.msg.domain.Util, "\t%d:%s:'%s'" % (e.lineno, e.__class__.__name__, body[e.lineno-1]))

    _print_trace(body, e.lineno)
@@ -189,7 +147,7 @@ def better_exec(code, context, text, realfile):
        raise

    # print the Header of the Error Message
    bb.msg.error(bb.msg.domain.Util, "Error in executing python function in: %s" % realfile)
    bb.msg.error(bb.msg.domain.Util, "Error in executing: ", realfile)
    bb.msg.error(bb.msg.domain.Util, "Exception:%s Message:%s" % (t,value) )

    # let us find the line number now
@@ -242,375 +200,3 @@ def Enum(*names):
    constants = tuple(constants)
    EnumType = EnumClass()
    return EnumType

def lockfile(name):
    """
    Use the file fn as a lock file, return when the lock has been acquired.
    Returns a variable to pass to unlockfile().
    """
    path = os.path.dirname(name)
    if not os.path.isdir(path):
        import bb, sys
        bb.msg.error(bb.msg.domain.Util, "Error, lockfile path does not exist!: %s" % path)
        sys.exit(1)

    while True:
        # If we leave the lockfiles lying around there is no problem
        # but we should clean up after ourselves. This gives potential
        # for races though. To work around this, when we acquire the lock
        # we check the file we locked was still the lock file on disk
        # by comparing inode numbers. If they don't match or the lockfile
        # no longer exists, we start again.

        # This implementation is unfair since the last person to request the
        # lock is the most likely to win it.

        try:
            lf = open(name, "a+")
            fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
            statinfo = os.fstat(lf.fileno())
            if os.path.exists(lf.name):
                statinfo2 = os.stat(lf.name)
                if statinfo.st_ino == statinfo2.st_ino:
                    return lf
            # File no longer exists or changed, retry
            lf.close
        except Exception, e:
            continue

def unlockfile(lf):
    """
    Unlock a file locked using lockfile()
    """
    os.unlink(lf.name)
    fcntl.flock(lf.fileno(), fcntl.LOCK_UN)
    lf.close

def md5_file(filename):
    """
    Return the hex string representation of the MD5 checksum of filename.
    """
    try:
        import hashlib
        m = hashlib.md5()
    except ImportError:
        import md5
        m = md5.new()

    for line in open(filename):
        m.update(line)
    return m.hexdigest()

def sha256_file(filename):
    """
    Return the hex string representation of the 256-bit SHA checksum of
    filename. On Python 2.4 this will return None, so callers will need to
    handle that by either skipping SHA checks, or running a standalone sha256sum
    binary.
    """
    try:
        import hashlib
    except ImportError:
        return None

    s = hashlib.sha256()
    for line in open(filename):
        s.update(line)
    return s.hexdigest()

def preserved_envvars_list():
    return [
        'BBPATH',
        'BB_PRESERVE_ENV',
        'BB_ENV_WHITELIST',
        'BB_ENV_EXTRAWHITE',
        'COLORTERM',
        'DBUS_SESSION_BUS_ADDRESS',
        'DESKTOP_SESSION',
        'DESKTOP_STARTUP_ID',
        'DISPLAY',
        'GNOME_KEYRING_PID',
        'GNOME_KEYRING_SOCKET',
        'GPG_AGENT_INFO',
        'GTK_RC_FILES',
        'HOME',
        'LANG',
        'LOGNAME',
        'PATH',
        'PWD',
        'SESSION_MANAGER',
        'SHELL',
        'SSH_AUTH_SOCK',
        'TERM',
        'USER',
        'USERNAME',
        '_',
        'XAUTHORITY',
        'XDG_DATA_DIRS',
        'XDG_SESSION_COOKIE',
    ]

def filter_environment(good_vars):
    """
    Create a pristine environment for bitbake. This will remove variables that
    are not known and may influence the build in a negative way.
    """

    import bb

    removed_vars = []
    for key in os.environ.keys():
        if key in good_vars:
            continue

        removed_vars.append(key)
        os.unsetenv(key)
        del os.environ[key]

    if len(removed_vars):
        bb.debug(1, "Removed the following variables from the environment:", ",".join(removed_vars))

    return removed_vars

def clean_environment():
    """
    Clean up any spurious environment variables. This will remove any
    variables the user hasn't chosen to preserve.
    """
    if 'BB_PRESERVE_ENV' not in os.environ:
        if 'BB_ENV_WHITELIST' in os.environ:
            good_vars = os.environ['BB_ENV_WHITELIST'].split()
        else:
            good_vars = preserved_envvars_list()
        if 'BB_ENV_EXTRAWHITE' in os.environ:
            good_vars.extend(os.environ['BB_ENV_EXTRAWHITE'].split())
        filter_environment(good_vars)

def empty_environment():
    """
    Remove all variables from the environment.
    """
    for s in os.environ.keys():
        os.unsetenv(s)
        del os.environ[s]

def build_environment(d):
    """
    Build an environment from all exported variables.
    """
    import bb
    for var in bb.data.keys(d):
        export = bb.data.getVarFlag(var, "export", d)
        if export:
            os.environ[var] = bb.data.getVar(var, d, True) or ""

def prunedir(topdir):
    # Delete everything reachable from the directory named in 'topdir'.
    # CAUTION: This is dangerous!
    for root, dirs, files in os.walk(topdir, topdown=False):
        for name in files:
            os.remove(os.path.join(root, name))
        for name in dirs:
            if os.path.islink(os.path.join(root, name)):
                os.remove(os.path.join(root, name))
            else:
                os.rmdir(os.path.join(root, name))
    os.rmdir(topdir)

#
# Could also use return re.compile("(%s)" % "|".join(map(re.escape, suffixes))).sub(lambda mo: "", var)
# but thats possibly insane and suffixes is probably going to be small
#
def prune_suffix(var, suffixes, d):
    # See if var ends with any of the suffixes listed and
    # remove it if found
    for suffix in suffixes:
        if var.endswith(suffix):
            return var.replace(suffix, "")
    return var

def mkdirhier(dir):
    """Create a directory like 'mkdir -p', but does not complain if
    directory already exists like os.makedirs
    """

    bb.debug(3, "mkdirhier(%s)" % dir)
    try:
        os.makedirs(dir)
        bb.debug(2, "created " + dir)
    except OSError, e:
        if e.errno != 17: raise e

import stat

def movefile(src,dest,newmtime=None,sstat=None):
    """Moves a file from src to dest, preserving all permissions and
    attributes; mtime will be preserved even when moving across
    filesystems. Returns true on success and false on failure. Move is
    atomic.
    """

    #print "movefile("+src+","+dest+","+str(newmtime)+","+str(sstat)+")"
    try:
        if not sstat:
            sstat=os.lstat(src)
    except Exception, e:
        print "movefile: Stating source file failed...", e
        return None

    destexists=1
    try:
        dstat=os.lstat(dest)
    except:
        dstat=os.lstat(os.path.dirname(dest))
        destexists=0

    if destexists:
        if stat.S_ISLNK(dstat[stat.ST_MODE]):
            try:
                os.unlink(dest)
                destexists=0
            except Exception, e:
                pass

    if stat.S_ISLNK(sstat[stat.ST_MODE]):
        try:
            target=os.readlink(src)
            if destexists and not stat.S_ISDIR(dstat[stat.ST_MODE]):
                os.unlink(dest)
            os.symlink(target,dest)
            #os.lchown(dest,sstat[stat.ST_UID],sstat[stat.ST_GID])
            os.unlink(src)
            return os.lstat(dest)
        except Exception, e:
            print "movefile: failed to properly create symlink:", dest, "->", target, e
            return None

    renamefailed=1
    if sstat[stat.ST_DEV]==dstat[stat.ST_DEV]:
        try:
            ret=os.rename(src,dest)
            renamefailed=0
        except Exception, e:
            import errno
            if e[0]!=errno.EXDEV:
                # Some random error.
                print "movefile: Failed to move", src, "to", dest, e
                return None
            # Invalid cross-device-link 'bind' mounted or actually Cross-Device

    if renamefailed:
        didcopy=0
        if stat.S_ISREG(sstat[stat.ST_MODE]):
            try: # For safety copy then move it over.
                shutil.copyfile(src,dest+"#new")
                os.rename(dest+"#new",dest)
                didcopy=1
            except Exception, e:
                print 'movefile: copy', src, '->', dest, 'failed.', e
                return None
        else:
            #we don't yet handle special, so we need to fall back to /bin/mv
            a=getstatusoutput("/bin/mv -f "+"'"+src+"' '"+dest+"'")
            if a[0]!=0:
                print "movefile: Failed to move special file:" + src + "' to '" + dest + "'", a
                return None # failure
        try:
            if didcopy:
                os.lchown(dest,sstat[stat.ST_UID],sstat[stat.ST_GID])
                os.chmod(dest, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
                os.unlink(src)
        except Exception, e:
            print "movefile: Failed to chown/chmod/unlink", dest, e
            return None

    if newmtime:
        os.utime(dest,(newmtime,newmtime))
    else:
        os.utime(dest, (sstat[stat.ST_ATIME], sstat[stat.ST_MTIME]))
        newmtime=sstat[stat.ST_MTIME]
    return newmtime

def copyfile(src,dest,newmtime=None,sstat=None):
    """
    Copies a file from src to dest, preserving all permissions and
    attributes; mtime will be preserved even when moving across
    filesystems. Returns true on success and false on failure.
    """
    #print "copyfile("+src+","+dest+","+str(newmtime)+","+str(sstat)+")"
    try:
        if not sstat:
            sstat=os.lstat(src)
    except Exception, e:
        print "copyfile: Stating source file failed...", e
        return False

    destexists=1
    try:
        dstat=os.lstat(dest)
    except:
        dstat=os.lstat(os.path.dirname(dest))
        destexists=0

    if destexists:
        if stat.S_ISLNK(dstat[stat.ST_MODE]):
            try:
                os.unlink(dest)
                destexists=0
            except Exception, e:
                pass

    if stat.S_ISLNK(sstat[stat.ST_MODE]):
        try:
            target=os.readlink(src)
            if destexists and not stat.S_ISDIR(dstat[stat.ST_MODE]):
                os.unlink(dest)
            os.symlink(target,dest)
            #os.lchown(dest,sstat[stat.ST_UID],sstat[stat.ST_GID])
            return os.lstat(dest)
        except Exception, e:
            print "copyfile: failed to properly create symlink:", dest, "->", target, e
            return False

    if stat.S_ISREG(sstat[stat.ST_MODE]):
        try: # For safety copy then move it over.
            shutil.copyfile(src,dest+"#new")
            os.rename(dest+"#new",dest)
        except Exception, e:
            print 'copyfile: copy', src, '->', dest, 'failed.', e
            return False
    else:
        #we don't yet handle special, so we need to fall back to /bin/mv
        a=getstatusoutput("/bin/cp -f "+"'"+src+"' '"+dest+"'")
        if a[0]!=0:
            print "copyfile: Failed to copy special file:" + src + "' to '" + dest + "'", a
            return False # failure
    try:
        os.lchown(dest,sstat[stat.ST_UID],sstat[stat.ST_GID])
        os.chmod(dest, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
    except Exception, e:
        print "copyfile: Failed to chown/chmod/unlink", dest, e
        return False

    if newmtime:
        os.utime(dest,(newmtime,newmtime))
    else:
        os.utime(dest, (sstat[stat.ST_ATIME], sstat[stat.ST_MTIME]))
        newmtime=sstat[stat.ST_MTIME]
    return newmtime

def which(path, item, direction = 0):
    """
    Locate a file in a PATH
    """

    paths = (path or "").split(':')
    if direction != 0:
        paths.reverse()

    for p in paths:
        next = os.path.join(p, item)
        if os.path.exists(next):
            return next

    return ""
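lockfile() and unlockfile() above combine flock() with an inode comparison so the lock file itself can be safely unlinked on release. Their intended pairing (a sketch; the lock path is illustrative):

    lf = lockfile("/tmp/example.lock")   # blocks until the lock is held
    try:
        pass   # critical section: at most one process runs this at a time
    finally:
        unlockfile(lf)                   # unlink first, then drop the flock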

@@ -1,9 +0,0 @@
# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
LCONF_VERSION = "1"

BBFILES ?= ""
BBLAYERS = " \
  ${OEROOT}/meta \
  ${OEROOT}/meta-moblin \
  "
|
||||
@@ -1,86 +1,48 @@
# CONF_VERSION is increased each time build/conf/ changes incompatibly
CONF_VERSION = "1"

# Where to cache the files Poky downloads
DL_DIR ?= "${OEROOT}/sources"
# Where to cache Poky's built staging output
PSTAGE_DIR ?= "${OEROOT}/pstage"
BBFILES = "${OEROOT}/meta/packages/*/*.bb"

# Uncomment and set to allow bitbake to execute multiple tasks at once.
# For a quadcore, BB_NUMBER_THREADS = "4", PARALLEL_MAKE = "-j 4" would
# be appropriate.
# BB_NUMBER_THREADS = "4"
# Also, make can be passed flags so it runs parallel threads e.g.:
# PARALLEL_MAKE = "-j 4"
# To enable extra packages, uncomment the following lines:
# BBFILES := "${OEROOT}/meta/packages/*/*.bb ${OEROOT}/meta-extras/packages/*/*.bb"
# BBFILE_COLLECTIONS = "normal extras"
# BBFILE_PATTERN_normal = "^${OEROOT}/meta/"
# BBFILE_PATTERN_extras = "^${OEROOT}/meta/"
# BBFILE_PRIORITY_normal = "5"
# BBFILE_PRIORITY_extras = "5"

BBMASK = ""

# The machine to target
MACHINE ?= "qemux86"
MACHINE ?= "qemuarm"

# Other supported machines
#MACHINE ?= "qemuarm"
#MACHINE ?= "netbook"
#MACHINE ?= "cmx270"
#MACHINE ?= "qemux86"
#MACHINE ?= "c7x0"
#MACHINE ?= "akita"
#MACHINE ?= "spitz"
#MACHINE ?= "nokia770"
#MACHINE ?= "nokia800"
#MACHINE ?= "fic-gta01"
#MACHINE ?= "bootcdx86"
#MACHINE ?= "cm-x270"
#MACHINE ?= "em-x270"
#MACHINE ?= "htcuniversal"
#MACHINE ?= "mx31ads"
#MACHINE ?= "mx31litekit"
#MACHINE ?= "mx31phy"
#MACHINE ?= "zylonite"

DISTRO ?= "poky"

DISTRO = "poky"
# For bleeding edge / experimental / unstable package versions
# DISTRO ?= "poky-bleeding"
# DISTRO = "poky-bleeding"

# Poky has various extra metadata collections (openmoko, extras).
# To enable these, uncomment all (or some of) the following lines:
# BBFILES = "\
#   ${OEROOT}/meta/packages/*/*.bb \
#   ${OEROOT}/meta-extras/packages/*/*.bb \
#   ${OEROOT}/meta-openmoko/packages/*/*.bb \
#   ${OEROOT}/meta-moblin/packages/*/*.bb \
#   "
# BBFILE_COLLECTIONS = "normal extras openmoko moblin"
# BBFILE_PATTERN_normal = "^${OEROOT}/meta/"
# BBFILE_PATTERN_extras = "^${OEROOT}/meta-extras/"
# BBFILE_PATTERN_openmoko = "^${OEROOT}/meta-openmoko/"
# BBFILE_PATTERN_moblin = "^${OEROOT}/meta-moblin/"
# BBFILE_PRIORITY_normal = "5"
# BBFILE_PRIORITY_extras = "5"
# BBFILE_PRIORITY_openmoko = "5"
# BBFILE_PRIORITY_moblin = "5"

BBMASK = ""

# EXTRA_IMAGE_FEATURES allows extra packages to be added to the generated images
# IMAGE_FEATURES configuration of the generated images
# (Some of these are automatically added to certain image types)
# "dbg-pkgs"       - add -dbg packages for all installed packages
#                    (adds symbol information for debugging/profiling)
# "dev-pkgs"       - add -dev packages for all installed packages
#                    (useful if you want to develop against libs in the image)
# "tools-sdk"      - add development tools (gcc, make, pkgconfig etc.)
# "tools-debug"    - add debugging tools (gdb, strace)
# "tools-profile"  - add profiling tools (oprofile, exmap, lttng, valgrind (x86 only))
# "tools-testapps" - add useful testing tools (ts_print, aplay, arecord etc.)
# "debug-tweaks"   - make an image suitable for development,
#                    e.g. ssh root access has a blank password
# There are other application targets too, see meta/classes/poky-image.bbclass
# and meta/packages/tasks/task-poky.bb for more details.
# "dev-pkgs"     - add -dev packages for all installed packages
#                  (useful if you want to develop against libs in the image)
# "dbg-pkgs"     - add -dbg packages for all installed packages
#                  (adds symbol information for debugging/profiling)
# "apps-core"    - core applications
# "apps-pda"     - add PDA application suite (contacts, dates, etc.)
# "dev-tools"    - add development tools (gcc, make, pkgconfig etc.)
# "dbg-tools"    - add debugging tools (gdb, strace, oprofile, etc.)
# "test-tools"   - add useful testing tools (ts_print, aplay, arecord etc.)
# "debug-tweaks" - make an image suitable for development,
#                  e.g. ssh root access has a blank password

EXTRA_IMAGE_FEATURES = "tools-debug tools-profile tools-testapps debug-tweaks"

# The default IMAGE_FEATURES above are too large for the mx31phy and
# c700/c750 machines which have limited space. The code below limits
# the default features for those machines.
EXTRA_IMAGE_FEATURES_c7x0 = "tools-testapps debug-tweaks"
EXTRA_IMAGE_FEATURES_mx31phy = "debug-tweaks"
EXTRA_IMAGE_FEATURES_mx31ads = "tools-testapps debug-tweaks"
IMAGE_FEATURES = "dbg-tools test-tools debug-tweaks"

# A list of packaging systems used in generated images
# The first package type listed will be used for rootfs generation
@@ -89,17 +51,12 @@ EXTRA_IMAGE_FEATURES_mx31ads = "tools-testapps debug-tweaks"
#PACKAGE_CLASSES ?= "package_deb package_ipk"
PACKAGE_CLASSES ?= "package_ipk"

# POKYMODE controls the characteristics of the generated packages/images by
# telling poky which type of toolchain to use.
#
# Options include several different EABI combinations and a compatibility
# mode for the OABI mode poky previously used.
#
# The default is "eabi"
# Use "oabi" for machines with kernels < 2.6.18 on ARM for example.
# Use "external-MODE" to use the precompiled external toolchains where MODE
# is the type of external toolchain to use e.g. eabi.
# POKYMODE = "external-eabi"
# POKYMODE controls the characteristics of the generated packages/images.
# Options include several different EABI combinations and a
# compatibility mode for the OABI mode poky used to use. Use "oabi" for machines
# with kernels < 2.6.18 for example. The default is "eabi". These changes only
# really apply for ARM machines.
# POKYMODE = "oabi"

# Uncomment this to specify where BitBake should create its temporary files.
# Note that a full build of everything in OpenEmbedded will take gigabytes of hard
@@ -107,13 +64,13 @@ PACKAGE_CLASSES ?= "package_ipk"
# <build directory>/tmp
TMPDIR = "${OEROOT}/build/tmp"

# Uncomment and set to allow bitbake to execute multiple tasks at once.
# Note, this option is currently experimental - YMMV.
# 'quilt' is also required on the host system
# BB_NUMBER_THREADS = "1"

# Uncomment this if you are using the OpenedHand provided qemu deb - see README
# ASSUME_PROVIDED += "qemu-native"

# Comment this out if you don't have a 3.x gcc version available and wish
# poky to build one for you. The 3.x gcc is required to build qemu-native.
# ASSUME_PROVIDED += "gcc3-native"
# Comment this out if you are *not* using the provided qemu deb - see README
ASSUME_PROVIDED += "qemu-native"

# Uncomment these two if you want BitBake to build images useful for debugging.
# DEBUG_BUILD = "1"
@@ -134,20 +91,8 @@ TMPDIR = "${OEROOT}/build/tmp"
# Uncomment this if you want BitBake to emit the log if a build fails.
BBINCLUDELOGS = "yes"

# Set this if you wish to make pkgconfig libraries from your system available
# for native builds. Combined with extra ASSUME_PROVIDEDs this can allow
# native builds of applications like oprofileui-native (unsupported feature).
#EXTRA_NATIVE_PKGCONFIG_PATH = ":/usr/lib/pkgconfig"
#ASSUME_PROVIDED += "gtk+-native libglade-native"
# Specifies a location to search for pre-generated tarballs when fetching
# a cvs:// URI. Uncomment this if you do not want to pull directly from CVS.
CVS_TARBALL_STASH = "http://folks.o-hand.com/~richard/poky/sources/"

ENABLE_BINARY_LOCALE_GENERATION = "1"

# The architecture to build SDK items for, by setting this you can build SDK
# packages for architectures other than the host i.e. building i586 packages
# on an x86_64 host.
# Supported values are i586 and x86_64
#SDKMACHINE = "i586"

# Poky can try and fetch packaged-staging packages from an http, https or ftp
# mirror. Set this variable to the root of a pstage directory on a server.
#PSTAGE_MIRROR ?= "http://someserver.tld/share/pstage"
@@ -1,40 +0,0 @@
#
# local.conf covers user settings, site.conf covers site specific information
# such as proxy server addresses and optionally any shared download location
#
# SITE_CONF_VERSION is increased each time build/conf/site.conf
# changes incompatibly
SCONF_VERSION = "1"

# Uncomment to cause CVS to use the proxy host specified
#CVS_PROXY_HOST = "proxy.example.com"
#CVS_PROXY_PORT = "81"

# For svn, you need to create ~/.subversion/servers containing:
#[global]
#http-proxy-host = proxy.example.com
#http-proxy-port = 81
#

# Uncomment to cause git to use the proxy host specified,
# although this only works for http
#GIT_PROXY_HOST = "proxy.example.com"
#GIT_PROXY_PORT = "81"
#export GIT_PROXY_COMMAND = "${OEROOT}/scripts/poky-git-proxy-command"

# GIT_PROXY_IGNORE_* lines define hosts which do not require a proxy to access
#GIT_CORE_CONFIG = "Yes"
#GIT_PROXY_IGNORE_1 = "host.server.com"
#GIT_PROXY_IGNORE_2 = "another.server.com"

# If SOCKS is available, run the following command to compile a simple transport
# gcc scripts/poky-git-proxy-socks.c -o poky-git-proxy-socks
# and then share that binary somewhere in PATH, then use the following settings
#GIT_PROXY_HOST = "proxy.example.com"
#GIT_PROXY_PORT = "81"
#export GIT_PROXY_COMMAND = "${OEROOT}/scripts/poky-git-proxy-socks-command"


# Uncomment this to use a shared download directory
#DL_DIR = "/some/shared/download/directory/"
@@ -1,38 +0,0 @@
2008-02-29  Matthew Allum  <mallum@openedhand.com>

	* development.xml:
	Disable images too big / lack context for now.
	* introduction.xml:
	Remove some OH specific stuff.
	* style.css:
	Remove limit on image size

2008-02-15  Matthew Allum  <mallum@openedhand.com>

	* introduction.xml:
	Minor tweaks to 'What is Poky'

2008-02-15  Matthew Allum  <mallum@openedhand.com>

	* poky-handbook.xml:
	* poky-handbook.png
	* poky-beaver.png
	* poky-logo.svg:
	* style.css:
	Add some title images.

2008-02-14  Matthew Allum  <mallum@openedhand.com>

	* development.xml:
	remove uri's
	* style.css:
	Fix glossary

2008-02-06  Matthew Allum  <mallum@openedhand.com>

	* Makefile:
	Add various xslto options for html.
	* introduction.xml:
	Remove link in title.
	* style.css:
	Add initial version
@@ -1,38 +0,0 @@
all: html pdf tarball

pdf:
	./poky-doc-tools/poky-docbook-to-pdf poky-handbook.xml
	./poky-doc-tools/poky-docbook-to-pdf bsp-guide.xml
# -- old way --
#	dblatex poky-handbook.xml

XSLTOPTS = --stringparam html.stylesheet style.css \
           --stringparam chapter.autolabel 1 \
           --stringparam appendix.autolabel 1 \
           --stringparam section.autolabel 1 \
           --xinclude

##
# These URIs should be rewritten by your distribution's xml catalog to
# match your locally installed XSL stylesheets.
XSL_BASE_URI = http://docbook.sourceforge.net/release/xsl/current
XSL_XHTML_URI = $(XSL_BASE_URI)/xhtml/docbook.xsl

html:
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
	xsltproc $(XSLTOPTS) -o poky-handbook.html $(XSL_XHTML_URI) poky-handbook.xml
	xsltproc $(XSLTOPTS) -o bsp-guide.html $(XSL_XHTML_URI) bsp-guide.xml
# -- old way --
#	xmlto xhtml-nochunks poky-handbook.xml

tarball: html
	tar -cvzf poky-handbook.tgz poky-handbook.html style.css screenshots/ss-sato.png poky-beaver.png poky-handbook.png

validate:
	xmllint --postvalid --xinclude --noout poky-handbook.xml

OUTPUTS = poky-handbook.tgz poky-handbook.html poky-handbook.pdf bsp-guide.pdf
SOURCES = *.png *.xml *.css *.svg

publish:
	scp -r $(OUTPUTS) $(SOURCES) o-hand.com:/srv/www/pokylinux.org/doc/
@@ -1,11 +0,0 @@
Handbook Todo List:

* Document adding a new IMAGE_FEATURE to the customising images section
* Add instructions about using zaurus/openmoko emulation
* Add component overview/block diagrams
* Software Development intro should mention it's software development for the
  intended target, which could be a different arch etc. and thus a special case.
* Expand insane.bbclass documentation to cover tests
* Document remaining classes (see list in ref-classes)
* Document formfactor
@@ -1,61 +0,0 @@
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<book id='poky-handbook' lang='en'
      xmlns:xi="http://www.w3.org/2003/XInclude"
      xmlns="http://docbook.org/ns/docbook"
      >
<bookinfo>

<mediaobject>
<imageobject>
<imagedata fileref='common/poky-handbook.png'
           format='SVG'
           align='center' scalefit='1' width='100%'/>
</imageobject>
</mediaobject>

<title>Board Support Package (BSP) Developers Guide</title>

<authorgroup>
<author>
<firstname>Richard</firstname> <surname>Purdie</surname>
<affiliation>
<orgname>Intel Corporation</orgname>
</affiliation>
<email>richard@linux.intel.com</email>
</author>
</authorgroup>

<revhistory>
<revision>
<revnumber>0.4</revnumber>
<date>26 May 2010</date>
<revremark>Alpha Draft</revremark>
</revision>
</revhistory>

<copyright>
<year>2010</year>
<holder>Intel Corporation</holder>
</copyright>

<legalnotice>
<para>
Permission is granted to copy, distribute and/or modify this document under
the terms of the <ulink type="http" url="http://creativecommons.org/licenses/by-nc-sa/2.0/uk/">Creative Commons Attribution-Non-Commercial-Share Alike 2.0 UK: England & Wales</ulink> as published by Creative Commons.
</para>
</legalnotice>

</bookinfo>

<xi:include href="bsp.xml"/>

<index id='index'>
<title>Index</title>
</index>

</book>
<!--
vim: expandtab tw=80 ts=4
-->
287 handbook/bsp.xml
@@ -1,287 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<chapter id='bsp'>

<title>Board Support Packages (BSP) - Developers Guide</title>

<para>
A Board Support Package (BSP) is a collection of information which together
defines how to support a particular hardware device, set of devices or
hardware platform. It will include information about the hardware features
present on the device, kernel configuration information along with any
additional hardware drivers required, and also any additional software
components required in addition to a generic Linux software stack for both
essential and optional platform features.
</para>

<para>
The intent of this document is to define a structure for these components
so that BSPs follow a commonly understood layout, allowing them to be
provided in a common way that everyone understands. It also allows end
users to become familiar with one common format and encourages standardisation
of software support for hardware.
</para>

<para>
The proposed format does have elements that are specific to the Poky and
OpenEmbedded build systems. It is intended that this information can be
used by other systems besides Poky/OpenEmbedded and that it will be simple
to extract information and convert to other formats if required. The format
described can be directly accepted as a layer by Poky using its standard
layers mechanism, but it's important to recognise that the BSP captures all
the hardware specific details in one place in a standard format, which is
useful for any person wishing to use the hardware platform regardless of
the build system in use.
</para>

<para>
The BSP specification does not include a build system or other tooling;
it is concerned with the hardware specific components only. At the end
distribution point the BSP may be shipped combined with a build system
and other tools, but it is important to maintain the distinction that these
are separate components which may just be combined in certain end products.
</para>

<section id='bsp-filelayout'>
<title>Example Filesystem Layout</title>

<para>
The BSP consists of a file structure inside a base directory, meta-bsp in this example, where "bsp" is a placeholder for the machine or platform name. Examples of some files that it could contain are:
</para>

<para>
<programlisting>
meta-bsp/
meta-bsp/binary/zImage
meta-bsp/binary/poky-image-minimal.directdisk
meta-bsp/conf/layer.conf
meta-bsp/conf/machine/*.conf
meta-bsp/conf/machine/include/tune-*.inc
meta-bsp/packages/bootloader/bootloader_0.1.bb
meta-bsp/packages/linux/linux-bsp-2.6.50/*.patch
meta-bsp/packages/linux/linux-bsp-2.6.50/defconfig-bsp
meta-bsp/packages/linux/linux-bsp_2.6.50.bb
meta-bsp/packages/modem/modem-driver_0.1.bb
meta-bsp/packages/modem/modem-daemon_0.1.bb
meta-bsp/packages/image-creator/image-creator-native_0.1.bb
meta-bsp/prebuilds/
</programlisting>
</para>

<para>
The following sections detail what these files and directories could contain.
</para>

</section>

<section id='bsp-filelayout-binary'>
<title>Prebuilt User Binaries (meta-bsp/binary/*)</title>

<para>
This optional area contains useful prebuilt kernels and userspace filesystem
images appropriate to the target system. Users could use these to get a system
running and quickly get started on development tasks. The exact types of binaries
present will be highly hardware dependent, but a README file should be present
explaining how to use them with the target hardware. If prebuilt binaries are
present, source code to meet licensing requirements must also be provided in
some form.
</para>

</section>

<section id='bsp-filelayout-layer'>
<title>Layer Configuration (meta-bsp/conf/layer.conf)</title>

<para>
This file identifies the structure as a Poky layer, identifies the
contents of the layer and contains information about how Poky should use
it. In general it will most likely be a standard boilerplate file consisting of:
</para>

<para>
<programlisting>
# We have a conf directory, add to BBPATH
BBPATH := "${BBPATH}:${LAYERDIR}"

# We have a packages directory, add to BBFILES
BBFILES := "${BBFILES} ${LAYERDIR}/packages/*/*.bb"

BBFILE_COLLECTIONS += "meta-bsp"
BBFILE_PATTERN_meta-bsp := "^${LAYERDIR}/"
BBFILE_PRIORITY_meta-bsp = "5"
</programlisting>
</para>

<para>
which simply makes bitbake aware of the packages and conf directories.
</para>

<para>
This file is required for recognition of the BSP by Poky.
</para>

</section>

<section id='bsp-filelayout-machine'>
<title>Hardware Configuration Options (meta-bsp/conf/machine/*.conf)</title>

<para>
The machine files bind together all the information contained elsewhere
in the BSP into a format that Poky/OpenEmbedded can understand. If
the BSP supports multiple machines, multiple machine configuration files
can be present. These filenames correspond to the values users set the
MACHINE variable to.
</para>

<para>
These files would define things like which kernel package to use
(PREFERRED_PROVIDER of virtual/kernel), which hardware drivers to
include in different types of images, any special software components
that are needed, any bootloader information and also any special image
format requirements.
</para>
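
<para>
As an illustrative sketch only (the variable values below are placeholders,
not a definitive machine definition), such a file might contain:
</para>

<para>
<programlisting>
TARGET_ARCH = "arm"
PREFERRED_PROVIDER_virtual/kernel = "linux-bsp"
MACHINE_FEATURES = "kernel26 alsa touchscreen"
IMAGE_FSTYPES ?= "jffs2"
require conf/machine/include/tune-xscale.inc
</programlisting>
</para>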

<para>
At least one machine file is required for a Poky BSP layer, but more than one may be present.
</para>

</section>

<section id='bsp-filelayout-tune'>
<title>Hardware Optimisation Options (meta-bsp/conf/machine/include/tune-*.inc)</title>

<para>
These are shared hardware "tuning" definitions and are commonly used to
pass specific optimisation flags to the compiler. An example is
tune-atom.inc:
</para>
<para>
<programlisting>
BASE_PACKAGE_ARCH = "core2"
TARGET_CC_ARCH = "-m32 -march=core2 -msse3 -mtune=generic -mfpmath=sse"
</programlisting>
</para>
<para>
which defines a new package architecture called "core2" and uses the
optimisation flags specified, which are carefully chosen to give best
performance on Atom CPUs.
</para>
<para>
The tune file would be included by the machine definition and can be
contained in the BSP or can reference one from the standard core set of
files included with Poky itself.
</para>
<para>
These files are optional for a Poky BSP layer.
</para>
</section>
<section id='bsp-filelayout-kernel'>
<title>Linux Kernel Configuration (meta-bsp/packages/linux/*)</title>

<para>
These files make up the definition of a kernel to use with this
hardware. In this case it's a complete self-contained kernel with its own
configuration and patches, but kernels can be shared between many
machines as well. Taking some specific example files:
</para>
<para>
<programlisting>
meta-bsp/packages/linux/linux-bsp_2.6.50.bb
</programlisting>
</para>
<para>
which is the core kernel recipe, which firstly details where to get the kernel
source from. All standard source code locations are supported, so this could
be a release tarball, some git repository or source included in
the directory within the BSP itself. It then contains information about which
patches to apply and how to configure and build it. It can reuse the main
Poky kernel build class, meaning the definitions here can remain very simple.
</para>
<para>
<programlisting>
linux-bsp-2.6.50/*.patch
</programlisting>
</para>
<para>
which are patches which may be applied against the base kernel, wherever
that may have been obtained from.
</para>
<para>
<programlisting>
meta-bsp/packages/linux/linux-bsp-2.6.50/defconfig-bsp
</programlisting>
</para>
<para>
which is the configuration information used to configure the kernel.
</para>
<para>
Examples of kernel recipes are available in Poky itself. These files are
optional, since a kernel from Poky itself could be selected, although it
would be unusual not to have a kernel configuration.
</para>
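<para>
A rough sketch of what such a recipe might contain follows; the URL, patch
name and version are illustrative assumptions rather than a complete working
recipe:
</para>
<para>
<programlisting>
DESCRIPTION = "Linux kernel for the example BSP"
LICENSE = "GPLv2"
PV = "2.6.50"

inherit kernel

SRC_URI = "http://www.kernel.org/pub/linux/kernel/v2.6/linux-${PV}.tar.bz2 \
           file://some-board-fix.patch;patch=1 \
           file://defconfig-bsp"

S = "${WORKDIR}/linux-${PV}"
</programlisting>
</para>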
</section>

<section id='bsp-filelayout-packages'>
<title>Other Software (meta-bsp/packages/*)</title>

<para>
This area includes other pieces of software which the hardware may need for best
operation. These are just examples of the kind of things that may be
encountered. These are standard .bb file recipes in the usual Poky format,
so for examples, see standard Poky recipes. The source can be included directly,
referred to in source control systems or release tarballs of external software projects.
</para>
<para>
<programlisting>
meta-bsp/packages/bootloader/bootloader_0.1.bb
</programlisting>
</para>
<para>
Some kind of bootloader recipe which may be used to generate a new
bootloader binary. Sometimes these are included in the final image
format and needed to reflash hardware.
</para>
<para>
<programlisting>
meta-bsp/packages/modem/modem-driver_0.1.bb
meta-bsp/packages/modem/modem-daemon_0.1.bb
</programlisting>
</para>
<para>
These are examples of a hardware driver and also a hardware daemon which
may need to be included in images to make the hardware useful. "modem"
is one example, but there may be other components needed, like firmware.
</para>
<para>
<programlisting>
meta-bsp/packages/image-creator/image-creator-native_0.1.bb
</programlisting>
</para>
<para>
Sometimes the device will need an image in a very specific format for
its update mechanism to accept and reflash with it. Recipes to build the
tools needed to do this can be included with the BSP.
</para>
<para>
These files only need be provided if the platform requires them.
</para>
</section>

<section id='bsp-filelayout-prebuilds'>
<title>Prebuilt Data (meta-bsp/prebuilds/*)</title>

<para>
This location can contain a precompiled representation of the source code
contained elsewhere in the BSP layer. It can be processed and used by
Poky to provide much faster build times, assuming a compatible configuration is used.
</para>

<para>
These files are optional.
</para>

</section>

</chapter>
@@ -1,856 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<chapter id="platdev">
<title>Platform Development with Poky</title>

<section id="platdev-appdev">
<title>Software development</title>
<para>
Poky supports several methods of software development. These different
forms of development are explained below and can be switched
between as needed.
</para>

<section id="platdev-appdev-external-sdk">
<title>Developing externally using the Poky SDK</title>

<para>
The meta-toolchain and meta-toolchain-sdk targets (<link linkend='ref-images'>see
the images section</link>) build tarballs which contain toolchains and
libraries suitable for application development outside Poky. These unpack into the
<filename class="directory">/usr/local/poky</filename> directory and contain
a setup script, e.g.
<filename>/usr/local/poky/eabi-glibc/arm/environment-setup</filename>, which
can be sourced to initialise a suitable environment. After sourcing this, the
compiler, QEMU scripts, QEMU binary, a special version of pkgconfig and other
useful utilities are added to the PATH. Variables to assist pkgconfig and
autotools are also set so that, for example, configure can find pre-generated test
results for tests which need target hardware to run.
</para>

<para>
Using the toolchain with autotools enabled packages is straightforward: just pass the
appropriate host option to configure, e.g. "./configure --host=arm-poky-linux-gnueabi".
For other projects it is usually a case of ensuring the cross tools are used, e.g.
CC=arm-poky-linux-gnueabi-gcc and LD=arm-poky-linux-gnueabi-ld.
</para>
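
<para>
For example, a typical (illustrative) session cross-compiling an autotools
based project for ARM might look like:
</para>

<para>
<literallayout class='monospaced'>
$ . /usr/local/poky/eabi-glibc/arm/environment-setup
$ ./configure --host=arm-poky-linux-gnueabi
$ make
</literallayout>
</para>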
</section>

<section id="platdev-appdev-external-anjuta">
<title>Developing externally using the Anjuta plugin</title>

<para>
An Anjuta IDE plugin exists to make developing software within the Poky framework
easier for the application developer. It presents a graphical IDE from which the
developer can cross-compile an application, then deploy and execute the output in a QEMU
emulation session. It also supports cross debugging and profiling.
</para>
<!-- DISABLED, TOO BIG!
<screenshot>
<mediaobject>
<imageobject>
<imagedata fileref="screenshots/ss-anjuta-poky-1.png" format="PNG"/>
</imageobject>
<caption>
<para>The Anjuta Poky SDK plugin showing an active QEMU session running Sato</para>
</caption>
</mediaobject>
</screenshot>
-->
<para>
To use the plugin, a toolchain and SDK built by Poky are required, along with Anjuta,
its development headers and the Anjuta plugin. The Poky Anjuta plugin is available to download as a tarball at the
<ulink url='http://labs.o-hand.com/sources/anjuta-plugin-sdk/'>OpenedHand labs</ulink> page or
directly from the Poky Git repository located at git://git.pokylinux.org/anjuta-poky; a web interface
to the repository can be accessed at <ulink url='http://git.pokylinux.org/cgit.cgi/anjuta-poky/'/>.
</para>
<para>
See the README file contained in the project for more information on dependencies and building
the plugin. It's recommended you enable the experimental gdb integration by passing configure the
--enable-gdb-integration switch.
</para>

<section id="platdev-appdev-external-anjuta-setup">
<title>Setting up the Anjuta plugin</title>

<para>Extract the tarball for the toolchain into / as root. The
toolchain will be installed into
<filename class="directory">/usr/local/poky</filename>.</para>

<para>To use the plugin, first open or create an existing
project. If creating a new project, the "C GTK+" project type
will allow itself to be cross-compiled. However you should be
aware that this uses glade for the UI.</para>

<para>To activate the plugin go to
<menuchoice><guimenu>Edit</guimenu><guimenuitem>Preferences</guimenuitem></menuchoice>,
then choose <guilabel>General</guilabel> from the left hand side. Choose the
Installed plugins tab, scroll down to <guilabel>Poky
SDK</guilabel> and check the
box. The plugin is now activated, but first it must be
configured.</para>
</section>

<section id="platdev-appdev-external-anjuta-configuration">
<title>Configuring the Anjuta plugin</title>

<para>The configuration options for the SDK can be found by choosing
the <guilabel>Poky SDK</guilabel> icon from the left hand side. The following options
need to be set:</para>

<itemizedlist>

<listitem><para><guilabel>SDK root</guilabel>: this is the root directory of the SDK;
for an ARM EABI SDK this will be <filename
class="directory">/usr/local/poky/eabi-glibc/arm</filename>.
This directory will contain directories named like "bin",
"include", "var", etc. With the file chooser it is important
to enter into the "arm" subdirectory for this
example.</para></listitem>

<listitem><para><guilabel>Toolchain triplet</guilabel>: this is the cross compile
triplet, e.g. "arm-poky-linux-gnueabi".</para></listitem>

<listitem><para><guilabel>Kernel</guilabel>: use the file chooser to select the kernel
to use with QEMU.</para></listitem>

<listitem><para><guilabel>Root filesystem</guilabel>: use the file chooser to select
the root filesystem image; this should be an image (not a
tarball).</para></listitem>
</itemizedlist>
<!-- DISABLED, TOO BIG!
<screenshot>
<mediaobject>
<imageobject>
<imagedata fileref="screenshots/ss-anjuta-poky-2.png" format="PNG"/>
</imageobject>
<caption>
<para>Anjuta Preferences Dialog</para>
</caption>
</mediaobject>
</screenshot>
-->

</section>

<section id="platdev-appdev-external-anjuta-usage">
<title>Using the Anjuta plugin</title>

<para>As an example, this section covers cross-compiling a project, deploying it into
QEMU, running a debugger against it and then doing a system
wide profile.</para>

<para>Choose <menuchoice><guimenu>Build</guimenu><guimenuitem>Run
Configure</guimenuitem></menuchoice> or
<menuchoice><guimenu>Build</guimenu><guimenuitem>Run
Autogenerate</guimenuitem></menuchoice> to run "configure"
(or to run "autogen") for the project. This passes command line
arguments to instruct it to cross-compile.</para>

<para>Next do
<menuchoice><guimenu>Build</guimenu><guimenuitem>Build
Project</guimenuitem></menuchoice> to build and compile the
project. If you have previously built the project in the same
tree without using the cross-compiler you may find that your
project fails to link. Simply do
<menuchoice><guimenu>Build</guimenu><guimenuitem>Clean
Project</guimenuitem></menuchoice> to remove the old
binaries. You may then try building again.</para>

<para>Next start QEMU by using
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Start
QEMU</guimenuitem></menuchoice>; this will start QEMU and
will show any error messages in the message view. Once Poky has
fully booted within QEMU you may now deploy into it.</para>

<para>Once built and QEMU is running, choose
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Deploy</guimenuitem></menuchoice>;
this will install the package into a temporary directory and
then copy it using rsync over SSH into the target. Progress and
messages will be shown in the message view.</para>

<para>To debug a program installed onto the target choose
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Debug
remote</guimenuitem></menuchoice>. This prompts for the
local binary to debug and also the command line to run on the
target. The command line to run should include the full path to
the binary installed on the target. This will start a
gdbserver over SSH on the target and also an instance of a
cross-gdb in a local terminal. This will be preloaded to connect
to the server and use the <guilabel>SDK root</guilabel> to find
symbols. This gdb will connect to the target and load in
various libraries and the target program. You should set up any
breakpoints or watchpoints now since you might not be able to
interrupt the execution later. You may stop
the debugger on the target using
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Stop
debugger</guimenuitem></menuchoice>.</para>

<para>It is also possible to execute a command in the target over
SSH; the appropriate environment will be set for the
execution. Choose
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Run
remote</guimenuitem></menuchoice> to do this. This will open
a terminal with the SSH command inside.</para>

<para>To do a system wide profile against the system running in
QEMU choose
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Profile
remote</guimenuitem></menuchoice>. This will start up
OProfileUI with the appropriate parameters to connect to the
server running inside QEMU and will also supply the path to the
debug information necessary to get a useful profile.</para>

</section>
</section>


<section id="platdev-appdev-qemu">
<title>Developing externally in QEMU</title>
<para>
Running Poky QEMU images is covered in the <link
linkend='intro-quickstart-qemu'>Running an Image</link> section.
</para>
<para>
Poky's QEMU images contain a complete native toolchain. This means
that applications can be developed within QEMU in the same way as on a
normal system. Using qemux86 on an x86 machine is fast since the
guest and host architectures match; qemuarm is slower but gives
faithful emulation of ARM specific issues. To speed things up, these
images support using distcc to call a cross-compiler outside the
emulated system too. If <command>runqemu</command> was used to start
QEMU, and distccd is present on the host system, any bitbake cross
compiling toolchain available from the build system will automatically
be used from within QEMU simply by calling distcc
(<command>export CC="distcc"</command> can be set in the environment).
Alternatively, if a suitable SDK/toolchain is present in
<filename class="directory">/usr/local/poky</filename> it will also
automatically be used.
</para>

<para>
There are several options for connecting into the emulated system.
QEMU provides a framebuffer interface which has standard consoles
available. There is also a serial connection available which has a
console to the system running on it and IP networking as standard.
The images have a dropbear ssh server running with the root password
disabled, allowing standard ssh and scp commands to work. The images
also contain an NFS server exporting the guest's root filesystem,
allowing that to be made available to the host.
</para>
</section>

<section id="platdev-appdev-chroot">
<title>Developing externally in a chroot</title>
<para>
If you have a system that matches the architecture of the Poky machine you're using,
such as qemux86, you can run binaries directly from the image on the host system
using a chroot combined with tools like <ulink url='http://projects.o-hand.com/xephyr'>Xephyr</ulink>.
</para>
<para>
Poky has some scripts to make using its qemux86 images within a chroot easier. To use
these you need to install the poky-scripts package or otherwise obtain the
<filename>poky-chroot-setup</filename> and <filename>poky-chroot-run</filename> scripts.
You also need Xephyr and chrootuid binaries available. To initialize a system use the setup script:
</para>
<para>
<literallayout class='monospaced'>
# poky-chroot-setup <qemux86-rootfs.tgz> <target-directory>
</literallayout>
</para>
<para>
which will unpack the specified qemux86 rootfs tarball into the target-directory.
You can then start the system with:
</para>
<para>
<literallayout class='monospaced'>
# poky-chroot-run <target-directory> <command>
</literallayout>
</para>
<para>
where the target-directory is the place the rootfs was unpacked to and command is
an optional command to run. If no command is specified, the system will drop you
within a bash shell. A Xephyr window will be displayed containing the emulated
system and you may be asked for a password, since some of the commands used for
bind mounting directories need to be run using sudo.
</para>
<para>
There are limits as to how far the realism of the chroot environment extends.
It is useful for simple development work or quick tests, but full system emulation
with QEMU offers a much more realistic environment for more complex development
tasks. Note that chroot support within Poky is still experimental.
</para>
</section>

<section id="platdev-appdev-insitu">
<title>Developing in Poky directly</title>
<para>
Working directly in Poky is a fast and effective development technique.
The idea is that you can directly edit files in
<glossterm><link linkend='var-WORKDIR'>WORKDIR</link></glossterm>
or the source directory <glossterm><link linkend='var-S'>S</link></glossterm>
and then force specific tasks to rerun in order to test the changes.
An example session working on the matchbox-desktop package might
look like this:
</para>

<para>
<literallayout class='monospaced'>
$ bitbake matchbox-desktop
$ sh
$ cd tmp/work/armv5te-poky-linux-gnueabi/matchbox-desktop-2.0+svnr1708-r0/
$ cd matchbox-desktop-2
$ vi src/main.c
$ exit
$ bitbake matchbox-desktop -c compile -f
$ bitbake matchbox-desktop
</literallayout>
</para>

<para>
Here, we build the package, change into the work directory for the package,
change a file, then recompile the package. Instead of using sh like this,
you can also use two different terminals. The risk with working like this
is that a command like unpack could wipe out the changes you've made to the
work directory, so you need to work carefully.
</para>

<para>
It is useful when making changes directly to the work directory files to do
so using quilt, as detailed in the <link linkend='usingpoky-modifying-packages-quilt'>
modifying packages with quilt</link> section. The resulting patches can be copied
into the recipe directory and used directly in the <glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm>.
</para>
<para>
For a review of the skills used in this section see Sections <link
linkend="usingpoky-components-bitbake">2.1.1</link> and <link
linkend="usingpoky-debugging-taskrunning">2.4.2</link>.
</para>

</section>

<section id="platdev-appdev-devshell">
<title>Developing with 'devshell'</title>

<para>
When debugging certain commands, or even just to edit packages, the
'devshell' can be a useful tool. To start it you run a command like:
</para>

<para>
<literallayout class='monospaced'>
$ bitbake matchbox-desktop -c devshell
</literallayout>
</para>

<para>
which will open a terminal with a shell prompt within the Poky
environment. This means PATH is set up to include the cross toolchain,
the pkgconfig variables are set up to find the right .pc files,
configure will be able to find the Poky site files, etc. Within this
environment, you can run configure or compile commands as if they
were being run by Poky itself. You are also changed into the
source (<glossterm><link linkend='var-S'>S</link></glossterm>)
directory automatically. When finished with the shell just exit it
or close the terminal window.
</para>

<para>
The default shell used by devshell is the gnome-terminal. Other
forms of terminal can also be used by setting the <glossterm>
<link linkend='var-TERMCMD'>TERMCMD</link></glossterm> and <glossterm>
<link linkend='var-TERMCMDRUN'>TERMCMDRUN</link></glossterm> variables
in local.conf. For examples of the other options available, see
<filename>meta/conf/bitbake.conf</filename>. An external shell is
launched rather than opening directly into the original terminal
window to make interaction with bitbake's multiple threads easier
and also to allow a client/server split of bitbake in the future
(devshell will still work over X11 forwarding or similar).
</para>
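
<para>
A sketch of such an override in local.conf follows; the command strings are
an illustrative assumption (an xterm invocation using the TERMWINDOWTITLE
and SHELLCMDS environment variables), so check
<filename>meta/conf/bitbake.conf</filename> for the exact forms your version
defines:
</para>

<para>
<literallayout class='monospaced'>
TERMCMD = "xterm -T \"$TERMWINDOWTITLE\""
TERMCMDRUN = "${TERMCMD} -e $SHELLCMDS"
</literallayout>
</para>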

<para>
It is worth remembering that inside devshell you need to use the full
compiler name, such as <command>arm-poky-linux-gnueabi-gcc</command>
instead of just <command>gcc</command>, and the same applies to other
applications from gcc, binutils, libtool, etc. Poky will have set up
environment variables such as CC to assist applications, such as make,
to find the correct tools.
</para>

</section>

<section id="platdev-appdev-srcrev">
<title>Developing within Poky with an external SCM based package</title>

<para>
If you're working on a recipe which pulls from an external SCM, it
is possible to have Poky notice new changes added to the
SCM and then build the latest version. This only works for SCMs
where it's possible to get a sensible revision number for changes.
Currently it works for svn, git and bzr repositories.
</para>

<para>
To enable this behaviour it is simply a case of adding <glossterm>
<link linkend='var-SRCREV'>SRCREV</link></glossterm>_pn-<glossterm>
<link linkend='var-PN'>PN</link></glossterm> = "${AUTOREV}" to
local.conf, where <glossterm><link linkend='var-PN'>PN</link></glossterm>
is the name of the package for which you want to enable automatic source
revision updating.
</para>
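<para>
For example, to always build the latest revision of the matchbox-desktop
recipe used earlier in this chapter, local.conf would gain the line:
</para>
<para>
<literallayout class='monospaced'>
SRCREV_pn-matchbox-desktop = "${AUTOREV}"
</literallayout>
</para>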
|
||||
</section>
|
||||
|
||||
</section>
|
||||
|
||||
<section id="platdev-gdb-remotedebug">
|
||||
<title>Debugging with GDB Remotely</title>
|
||||
|
||||
<para>
|
||||
<ulink url="http://sourceware.org/gdb/">GDB</ulink> (The GNU Project Debugger)
|
||||
allows you to examine running programs to understand and fix problems and
|
||||
also to perform postmortem style analsys of program crashes. It is available
|
||||
as a package within poky and installed by default in sdk images. It works best
|
||||
when -dbg packages for the application being debugged are installed as the
|
||||
extra symbols give more meaningful output from GDB.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Sometimes, due to memory or disk space constraints, it is not possible
|
||||
to use GDB directly on the remote target to debug applications. This is
|
||||
due to the fact that
|
||||
GDB needs to load the debugging information and the binaries of the
|
||||
process being debugged. GDB then needs to perform many
|
||||
computations to locate information such as function names, variable
|
||||
names and values, stack traces, etc. even before starting the debugging
|
||||
process. This places load on the target system and can alter the
|
||||
characteristics of the program being debugged.
|
||||
</para>
|
||||
<para>
|
||||
This is where GDBSERVER comes into play as it runs on the remote target
|
||||
and does not load any debugging information from the debugged process.
|
||||
Instead, the debugging information processing is done by a GDB instance
|
||||
running on a distant computer - the host GDB. The host GDB then sends
|
||||
control commands to GDBSERVER to make it stop or start the debugged
|
||||
program, as well as read or write some memory regions of that debugged
|
||||
program. All the debugging information loading and processing as well
|
||||
as the heavy debugging duty is done by the host GDB, giving the
|
||||
GDBSERVER running on the target a chance to remain small and fast.
|
||||
</para>
|
||||
<para>
|
||||
As the host GDB is responsible for loading the debugging information and
|
||||
doing the necessary processing to make actual debugging happen, the
|
||||
user has to make sure it can access the unstripped binaries complete
|
||||
with their debugging information and compiled with no optimisations. The
|
||||
host GDB must also have local access to all the libraries used by the
|
||||
debugged program. On the remote target the binaries can remain stripped
|
||||
as GDBSERVER does not need any debugging information there. However they
|
||||
must also be compiled without optimisation matching the host's binaries.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
The binary being debugged on the remote target machine is hence referred
|
||||
to as the 'inferior' in keeping with GDB documentation and terminology.
|
||||
Further documentation on GDB, is available on
|
||||
<ulink url="http://sourceware.org/gdb/documentation/">on their site</ulink>.
|
||||
</para>
|
||||
|
||||
<section id="platdev-gdb-remotedebug-launch-gdbserver">
|
||||
<title>Launching GDBSERVER on the target</title>
|
||||
<para>
|
||||
First, make sure gdbserver is installed on the target. If not,
|
||||
install the gdbserver package (which needs the libthread-db1
|
||||
package).
|
||||
</para>
|
||||
<para>
|
||||
To launch GDBSERVER on the target and make it ready to "debug" a
|
||||
program located at <emphasis>/path/to/inferior</emphasis>, connect
|
||||
to the target and launch:
|
||||
<programlisting>$ gdbserver localhost:2345 /path/to/inferior</programlisting>
|
||||
After that, gdbserver should be listening on port 2345 for debugging
|
||||
commands coming from a remote GDB process running on the host computer.
|
||||
Communication between the GDBSERVER and the host GDB will be done using
|
||||
TCP. To use other communication protocols please refer to the
|
||||
GDBSERVER documentation.
|
||||
</para>
|
||||
</section>
|
||||
|
||||
<section id="platdev-gdb-remotedebug-launch-gdb">
|
||||
<title>Launching GDB on the host computer</title>
|
||||
|
||||
<para>
|
||||
Running GDB on the host computer takes a number of stages, described in the
|
||||
following sections.
|
||||
</para>
|
||||
|
||||
<section id="platdev-gdb-remotedebug-launch-gdb-buildcross">
|
||||
<title>Build the cross GDB package</title>
|
||||
<para>
|
||||
A suitable gdb cross binary is required which runs on your host computer but
|
||||
knows about the the ABI of the remote target. This can be obtained from
|
||||
the the Poky toolchain, e.g.
|
||||
<filename>/usr/local/poky/eabi-glibc/arm/bin/arm-poky-linux-gnueabi-gdb</filename>
|
||||
which "arm" is the target architecture and "linux-gnueabi" the target ABI.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Alternatively this can be built directly by Poky. To do this you would build
|
||||
the gdb-cross package so for example you would run:
|
||||
<programlisting>bitbake gdb-cross</programlisting>
|
||||
Once built, the cross gdb binary can be found at
|
||||
<programlisting>tmp/sysroots/<host-arch</usr/bin/<target-abi>-gdb </programlisting>
|
||||
</para>
|
||||
|
||||
</section>
|
||||
<section id="platdev-gdb-remotedebug-launch-gdb-inferiorbins">
|
||||
|
||||
<title>Making the inferior binaries available</title>
|
||||
|
||||
<para>
|
||||
The inferior binary needs to be available to GDB complete with all debugging
|
||||
symbols in order to get the best possible results along with any libraries
|
||||
the inferior depends on and their debugging symbols. There are a number of
|
||||
ways this can be done.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Perhaps the easiest is to have an 'sdk' image corresponding to the plain
|
||||
image installed on the device. In the case of 'pky-image-sato',
|
||||
'poky-image-sdk' would contain suitable symbols. The sdk images already
|
||||
have the debugging symbols installed so its just a question expanding the
|
||||
archive to some location and telling GDB where this is.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Alternatively, poky can build a custom directory of files for a specific
|
||||
debugging purpose by reusing its tmp/rootfs directory, on the host computer
|
||||
in a slightly different way to normal. This directory contains the contents
|
||||
of the last built image. This process assumes the image running on the
|
||||
target was the last image to be built by Poky, the package <emphasis>foo</emphasis>
|
||||
contains the inferior binary to be debugged has been built without without
|
||||
optimisation and has debugging information available.
|
||||
</para>
|
||||
<para>
|
||||
Firstly you want to install the <emphasis>foo</emphasis> package to tmp/rootfs
|
||||
by doing:
|
||||
</para>
|
||||
<programlisting>tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
|
||||
tmp/work/<target-abi>/poky-image-sato-1.0-r0/temp/opkg.conf -o \
|
||||
tmp/rootfs/ update</programlisting>
|
||||
<para>
|
||||
then,
|
||||
</para>
|
||||
<programlisting>tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
|
||||
tmp/work/<target-abi>/poky-image-sato-1.0-r0/temp/opkg.conf \
|
||||
-o tmp/rootfs install foo
|
||||
|
||||
tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
|
||||
tmp/work/<target-abi>/poky-image-sato-1.0-r0/temp/opkg.conf \
|
||||
-o tmp/rootfs install foo-dbg</programlisting>
|
||||
<para>
|
||||
which installs the debugging information too.
|
||||
</para>
|
||||
|
||||
</section>
|
||||
<section id="platdev-gdb-remotedebug-launch-gdb-launchhost">
|
||||
|
||||
<title>Launch the host GDB</title>
|
||||
<para>
|
||||
To launch the host GDB, run the cross gdb binary identified above with
|
||||
the inferior binary specified on the commandline:
|
||||
<programlisting><target-abi>-gdb rootfs/usr/bin/foo</programlisting>
|
||||
This loads the binary of program <emphasis>foo</emphasis>
|
||||
as well as its debugging information. Once the gdb prompt
|
||||
appears, you must instruct GDB to load all the libraries
|
||||
of the inferior from tmp/rootfs:
|
||||
<programlisting>set solib-absolute-prefix /path/to/tmp/rootfs</programlisting>
|
||||
where <filename>/path/to/tmp/rootfs</filename> must be
|
||||
the absolute path to <filename>tmp/rootfs</filename> or wherever the
|
||||
binaries with debugging information are located.
|
||||
</para>
|
||||
<para>
|
||||
Now, tell GDB to connect to the GDBSERVER running on the remote target:
|
||||
<programlisting>target remote remote-target-ip-address:2345</programlisting>
|
||||
Where remote-target-ip-address is the IP address of the
|
||||
remote target where the GDBSERVER is running. 2345 is the
|
||||
port on which the GDBSERVER is running.
|
||||
</para>
|
||||
|
||||
</section>
|
||||
<section id="platdev-gdb-remotedebug-launch-gdb-using">
|
||||
|
||||
<title>Using the Debugger</title>
|
||||
<para>
|
||||
Debugging can now proceed as normal, as if the debugging were being done on the
|
||||
local machine, for example to tell GDB to break in the <emphasis>main</emphasis>
|
||||
function, for instance:
|
||||
<programlisting>break main</programlisting>
|
||||
and then to tell GDB to "continue" the inferior execution,
|
||||
<programlisting>continue</programlisting>
|
||||
</para>
<para>
For more information about using GDB please see the
project's online documentation at <ulink
url="http://sourceware.org/gdb/download/onlinedocs/"/>.
</para>
</section>
</section>

</section>

<section id="platdev-oprofile">
<title>Profiling with OProfile</title>

<para>
<ulink url="http://oprofile.sourceforge.net/">OProfile</ulink> is a
statistical profiler well suited to finding performance
bottlenecks in both userspace software and the kernel. It provides
answers to questions like "Which functions does my application spend
the most time in when doing X?". Poky is well integrated with OProfile
to make profiling applications on target hardware straightforward.
</para>

<para>
To use OProfile you need an image with OProfile installed. The easiest
way to do this is with "tools-profile" in <glossterm><link
linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm>. You also
need debugging symbols to be available on the system where the analysis
will take place. This can be achieved with "dbg-pkgs" in <glossterm><link
linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm> or by
installing the appropriate -dbg packages. For
successful call graph analysis the binaries must preserve the frame
pointer register and hence should be compiled with the
"-fno-omit-frame-pointer" flag. In Poky this can be achieved with
<glossterm><link linkend='var-SELECTED_OPTIMIZATION'>SELECTED_OPTIMIZATION
</link></glossterm> = "-fexpensive-optimizations -fno-omit-frame-pointer
-frename-registers -O2" or by setting <glossterm><link
linkend='var-DEBUG_BUILD'>DEBUG_BUILD</link></glossterm> = "1" in
local.conf (the latter will also add extra debug information, making the
debug packages large).
</para>
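<para>
For example, a minimal local.conf sketch combining the settings quoted
above (assuming IMAGE_FEATURES can be extended from your configuration;
it can equally be set in the image recipe):
<programlisting>
IMAGE_FEATURES += "tools-profile dbg-pkgs"
SELECTED_OPTIMIZATION = "-fexpensive-optimizations -fno-omit-frame-pointer -frename-registers -O2"
</programlisting>
</para>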
<section id="platdev-oprofile-target">
<title>Profiling on the target</title>

<para>
All the profiling work can be performed on the target device. A
simple OProfile session might look like:
</para>

<para>
<literallayout class='monospaced'>
# opcontrol --reset
# opcontrol --start --separate=lib --no-vmlinux -c 5
[do whatever is being profiled]
# opcontrol --stop
$ opreport -cl
</literallayout>
</para>

<para>
Here, the reset command clears any previously profiled data and
OProfile is then started. The options used to start OProfile mean
dynamic library data is kept separately per application, kernel
profiling is disabled and callgraphing is enabled up to 5 levels
deep. To profile the kernel, you would specify the
<parameter>--vmlinux=/path/to/vmlinux</parameter> option (the vmlinux file is usually in
<filename class="directory">/boot/</filename> in Poky and must match the running kernel). The profile is
then stopped and the results viewed with opreport with options
to see the separate library symbols and callgraph information.
</para>
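<para>
As a sketch, starting a session with kernel profiling enabled would
combine the options above (the vmlinux path is an assumption and must
match the running kernel):
<literallayout class='monospaced'>
# opcontrol --start --separate=lib --vmlinux=/boot/vmlinux-`uname -r` -c 5
</literallayout>
</para>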
<para>
Callgraphing means OProfile not only logs information about which
functions time is being spent in but also which functions
called those functions (their parents) and which functions that
function calls (its children). The higher the callgraphing depth,
the more accurate the results, but this also increases the logging
overhead, so it should be used with caution. On ARM, binaries need
to have the frame pointer enabled for callgraphing to work (compile
with the gcc option -fno-omit-frame-pointer).
</para>
<para>
For more information on using OProfile please see the OProfile
online documentation at <ulink
url="http://oprofile.sourceforge.net/docs/"/>.
</para>
</section>

<section id="platdev-oprofile-oprofileui">
<title>Using OProfileUI</title>

<para>
A graphical user interface for OProfile is also available. You can
either use prebuilt Debian packages from the <ulink
url='http://debian.o-hand.com/'>OpenedHand repository</ulink> or
download and build from svn at
http://svn.o-hand.com/repos/oprofileui/trunk/. If the
"tools-profile" image feature is selected, all necessary binaries
are installed onto the target device for OProfileUI interaction.
</para>

<!-- DISABLED, needs a more 'contextual' shot?
<screenshot>
<mediaobject>
<imageobject>
<imagedata fileref="screenshots/ss-oprofile-viewer.png" format="PNG"/>
</imageobject>
<caption>
<para>OProfileUI Viewer showing an application being profiled on a remote device</para>
</caption>
</mediaobject>
</screenshot>
-->
<para>
In order to convert the data from the target's sample format to the
host's, the <filename>opimport</filename> program is needed.
This is not included in standard Debian OProfile packages, but an
OProfile package with this addition is also available from the <ulink
url='http://debian.o-hand.com/'>OpenedHand repository</ulink>.
We recommend using OProfile 0.9.3 or greater. Other patches to
OProfile may be needed for recent OProfileUI features, but Poky
usually includes all needed patches on the target device. Please
see the <ulink
url='http://svn.o-hand.com/repos/oprofileui/trunk/README'>
OProfileUI README</ulink> for up-to-date information, and the
<ulink url="http://labs.o-hand.com/oprofileui">OProfileUI website
</ulink> for more information on the OProfileUI project.
</para>
<section id="platdev-oprofile-oprofileui-online">
<title>Online mode</title>

<para>
This mode assumes a working network connection to the target
hardware. In this case you just need to run
<command>oprofile-server</command> on the device. By default it listens
on port 4224. This can be changed with the <parameter>--port</parameter>
command line option.
</para>
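<para>
For example, to listen on a different port (a sketch using the option
named above):
<literallayout class='monospaced'>
# oprofile-server --port 4242
</literallayout>
</para>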
<para>
The client program is called <command>oprofile-viewer</command>. The
UI is relatively straightforward; the key functionality is accessed
through the buttons on the toolbar (which are duplicated in the
menus). These buttons are:
</para>

<itemizedlist>
<listitem>
<para>
Connect - connect to the remote host; the IP address or hostname of the
target can be supplied here.
</para>
</listitem>
<listitem>
<para>
Disconnect - disconnect from the target.
</para>
</listitem>
<listitem>
<para>
Start - start the profiling on the device.
</para>
</listitem>
<listitem>
<para>
Stop - stop the profiling on the device and download the data to the local
host. This will generate the profile and show it in the viewer.
</para>
</listitem>
<listitem>
<para>
Download - download the data from the target, generate the profile and show it
in the viewer.
</para>
</listitem>
<listitem>
<para>
Reset - reset the sample data on the device. This will remove the sample
information that was collected on a previous sampling run. Ensure you do this
if you do not want to include old sample information.
</para>
</listitem>
<listitem>
<para>
Save - save the data downloaded from the target to another directory for later
examination.
</para>
</listitem>
<listitem>
<para>
Open - load data that was previously saved.
</para>
</listitem>
</itemizedlist>
<para>
The client downloads the complete 'profile archive' from
the target to the host for processing. This archive is a directory containing
the sample data, the object files and the debug information for those object
files. The archive is then converted using a script included in this
distribution ('oparchconv') that uses 'opimport' to convert the archive from
the target format to something that can be processed on the host.
</para>

<para>
Downloaded archives are kept in /tmp and cleared up when they are no longer in
use.
</para>

<para>
Profiling the kernel is also possible; you just need to ensure
a vmlinux file matching the running kernel is available. In Poky this is usually
located in /boot/vmlinux-KERNELVERSION, where KERNELVERSION is the version of
the kernel, e.g. 2.6.23. Poky generates separate vmlinux packages for each kernel
it builds, so it should just be a matter of ensuring a matching package is
installed (<command>opkg install kernel-vmlinux</command>). These are automatically
installed into development and profiling images alongside OProfile. There is a
configuration option within the OProfileUI settings page where the location of
the vmlinux file can be entered.
</para>

<para>
Waiting for debug symbols to transfer from the device can be slow, and it's not
always necessary to actually have them on the device for OProfile use. All that is
needed is a copy of the filesystem with the debug symbols present on the viewer
system. The <link linkend='platdev-gdb-remotedebug-launch-gdb'>GDB remote debug
section</link> covers how to create such a directory with Poky, and the location
of this directory can again be specified in the OProfileUI settings dialog. If
specified, it will be used where the file checksums match those on the system
being profiled.
</para>
</section>
<section id="platdev-oprofile-oprofileui-offline">
<title>Offline mode</title>

<para>
If no network access to the target is available, an archive for processing in
'oprofile-viewer' can be generated with the following set of commands.
</para>

<para>
<literallayout class='monospaced'>
# opcontrol --reset
# opcontrol --start --separate=lib --no-vmlinux -c 5
[do whatever is being profiled]
# opcontrol --stop
# oparchive -o my_archive
</literallayout>
</para>

<para>
Here my_archive is the name of the archive directory where you would like the
profile archive to be kept. The directory will be created for you. This can
then be copied to another host and loaded using the Open functionality of
'oprofile-viewer'. The archive will be converted if necessary.
</para>
</section>
</section>
</section>

</chapter>
<!--
vim: expandtab tw=80 ts=4
-->
@@ -1,7 +0,0 @@
DESCRIPTION = "GNU Helloworld application"
SECTION = "examples"
LICENSE = "GPLv3"

SRC_URI = "${GNU_MIRROR}/hello/hello-${PV}.tar.bz2"

inherit autotools
@@ -1,8 +0,0 @@
#include <stdio.h>

int main(void)
{
	printf("Hello world!\n");

	return 0;
}
@@ -1,16 +0,0 @@
DESCRIPTION = "Simple helloworld application"
SECTION = "examples"
LICENSE = "MIT"

SRC_URI = "file://helloworld.c"

S = "${WORKDIR}"

do_compile() {
	${CC} helloworld.c -o helloworld
}

do_install() {
	install -d ${D}${bindir}
	install -m 0755 helloworld ${D}${bindir}
}
@@ -1,13 +0,0 @@
require xorg-lib-common.inc

DESCRIPTION = "X11 Pixmap library"
LICENSE = "X-BSD"
DEPENDS += "libxext"
PR = "r2"
PE = "1"

XORG_PN = "libXpm"

PACKAGES =+ "sxpm cxpm"
FILES_cxpm = "${bindir}/cxpm"
FILES_sxpm = "${bindir}/sxpm"
@@ -1,13 +0,0 @@
DESCRIPTION = "Tools for managing memory technology devices."
SECTION = "base"
DEPENDS = "zlib"
HOMEPAGE = "http://www.linux-mtd.infradead.org/"
LICENSE = "GPLv2"

SRC_URI = "ftp://ftp.infradead.org/pub/mtd-utils/mtd-utils-${PV}.tar.gz"

CFLAGS_prepend = "-I ${S}/include "

do_install() {
	oe_runmake install DESTDIR=${D}
}
@@ -1,947 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<chapter id='extendpoky'>

<title>Extending Poky</title>

<para>
This section gives information about how to extend the functionality
already present in Poky, documenting standard tasks such as adding new
software packages, extending or customising images or porting Poky to
new hardware (adding a new machine). It also contains advice about how
to manage the process of making changes to Poky to achieve best results.
</para>

<section id='usingpoky-extend-addpkg'>
<title>Adding a Package</title>

<para>
To add a package to Poky you need to write a recipe for it.
Writing a recipe means creating a .bb file which sets various
variables. The variables
useful for recipes are detailed in the <link linkend='ref-varlocality-recipe-required'>
recipe reference</link> section along with more detailed information
about issues such as recipe naming.
</para>

<para>
Before writing a recipe from scratch it is often useful to check
whether someone else has written one already. OpenEmbedded is a good place
to look as it has a wider scope and hence a wider range of packages.
Poky aims to be compatible with OpenEmbedded so most recipes should
just work in Poky.
</para>

<para>
For new packages, the simplest way to add a recipe is to base it on a similar
pre-existing recipe. There are some examples below of how to add
standard types of packages:
</para>
<section id='usingpoky-extend-addpkg-singlec'>
<title>Single .c File Package (Hello World!)</title>

<para>
Building an application from a single file stored locally requires a
recipe which has the file listed in the <glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm> variable. In addition
the <function>do_compile</function> and <function>do_install</function>
tasks need to be written manually. The <glossterm><link linkend='var-S'>
S</link></glossterm> variable defines the directory containing the source
code, which in this case is set equal to <glossterm><link linkend='var-WORKDIR'>
WORKDIR</link></glossterm>, the directory BitBake uses for the build.
</para>
<programlisting>
DESCRIPTION = "Simple helloworld application"
SECTION = "examples"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://COPYING;md5=ae764cfda68da96df20af9fbf9fe49bd"

SRC_URI = "file://helloworld.c"

S = "${WORKDIR}"

do_compile() {
	${CC} helloworld.c -o helloworld
}

do_install() {
	install -d ${D}${bindir}
	install -m 0755 helloworld ${D}${bindir}
}
</programlisting>

<para>
As a result of the build process the "helloworld" and "helloworld-dbg"
packages will be built.
</para>
</section>
<section id='usingpoky-extend-addpkg-autotools'>
<title>Autotooled Package</title>

<para>
Applications which use autotools (autoconf, automake)
require a recipe which has a source archive listed in
<glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm> and an
<command>inherit autotools</command> line to instruct BitBake to use
<filename>autotools.bbclass</filename>, which has
definitions of all the steps
needed to build an autotooled application.
The result of the build will be automatically packaged, and if
the application uses NLS for localisation then packages with
locale information will be generated (one package per
language).
</para>

<programlisting>
DESCRIPTION = "GNU Helloworld application"
SECTION = "examples"
LICENSE = "GPLv2"
LIC_FILES_CHKSUM = "file://COPYING;md5=ae764cfda68da96df20af9fbf9fe49bd"

SRC_URI = "${GNU_MIRROR}/hello/hello-${PV}.tar.bz2"

inherit autotools
</programlisting>

</section>
<section id='usingpoky-extend-addpkg-makefile'>
<title>Makefile-Based Package</title>

<para>
Applications which use GNU make require a recipe which has
the source archive listed in <glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm>.
Adding a <function>do_compile</function> step
is not needed, as by default BitBake will run the "make"
command to compile the application. If there is a need for
additional options to make, they should be stored in the
<glossterm><link
linkend='var-EXTRA_OEMAKE'>EXTRA_OEMAKE</link></glossterm> variable - BitBake
will pass them into the GNU
make invocation. A <function>do_install</function> task is required
- otherwise BitBake will run an empty <function>do_install</function>
task by default.
</para>

<para>
Some applications may require extra parameters to be passed to
the compiler, for example an additional header path. This can
be done by adding to the <glossterm><link
linkend='var-CFLAGS'>CFLAGS</link></glossterm> variable, as in the example below.
</para>

<programlisting>
DESCRIPTION = "Tools for managing memory technology devices."
SECTION = "base"
DEPENDS = "zlib"
HOMEPAGE = "http://www.linux-mtd.infradead.org/"
LICENSE = "GPLv2"
LIC_FILES_CHKSUM = "file://COPYING;md5=ae764cfda68da96df20af9fbf9fe49bd"

SRC_URI = "ftp://ftp.infradead.org/pub/mtd-utils/mtd-utils-${PV}.tar.gz"

CFLAGS_prepend = "-I ${S}/include "

do_install() {
	oe_runmake install DESTDIR=${D}
}
</programlisting>

</section>
<section id='usingpoky-extend-addpkg-files'>
<title>Controlling Package Content</title>

<para>
The variables <glossterm><link
linkend='var-PACKAGES'>PACKAGES</link></glossterm> and
<glossterm><link linkend='var-FILES'>FILES</link></glossterm> are used to split an
application into multiple packages.
</para>

<para>
Below the "libXpm" recipe is used as an example. By
default the "libXpm" recipe generates one package
which contains the library
and also a few binaries. The recipe can be adapted to
split the binaries into separate packages.
</para>

<programlisting>
require xorg-lib-common.inc

DESCRIPTION = "X11 Pixmap library"
LICENSE = "X-BSD"
LIC_FILES_CHKSUM = "file://COPYING;md5=ae764cfda68da96df20af9fbf9fe49bd"
DEPENDS += "libxext"

XORG_PN = "libXpm"

PACKAGES =+ "sxpm cxpm"
FILES_cxpm = "${bindir}/cxpm"
FILES_sxpm = "${bindir}/sxpm"
</programlisting>

<para>
In this example we want to ship the "sxpm" and "cxpm" binaries
in separate packages. Since "bindir" would be packaged into the
main <glossterm><link linkend='var-PN'>PN</link></glossterm>
package as standard, we prepend the <glossterm><link
linkend='var-PACKAGES'>PACKAGES</link></glossterm> variable so the
additional package names are added to the start of the list. The
extra <glossterm><link linkend='var-FILES'>FILES</link></glossterm>_*
variables then specify which files and
directories go into which package.
</para>
</section>
<section id='usingpoky-extend-addpkg-postinstalls'>
<title>Post Install Scripts</title>

<para>
To add a post-installation script to a package, add
a <function>pkg_postinst_PACKAGENAME()</function>
function to the .bb file,
where PACKAGENAME is the name of the package to attach
the postinst script to. A post-installation function has the following structure:
</para>
<programlisting>
pkg_postinst_PACKAGENAME () {
#!/bin/sh -e
# Commands to carry out
}
</programlisting>
<para>
The script defined in the post-installation function
gets called when the rootfs is made. If the script succeeds,
the package is marked as installed. If the script fails,
the package is marked as unpacked and the script will be
executed again on the first boot of the image.
</para>

<para>
Sometimes it is necessary to delay the execution of a post-installation
script until the first boot, because the script
needs to be executed on the device itself. To delay script execution
until boot time, the post-installation function should have the
following structure:
</para>

<programlisting>
pkg_postinst_PACKAGENAME () {
#!/bin/sh -e
if [ x"$D" = "x" ]; then
	# Actions to carry out on the device go here
else
	exit 1
fi
}
</programlisting>

<para>
The structure above delays execution until first boot
because the <glossterm><link
linkend='var-D'>D</link></glossterm> variable points
to the 'image'
directory when the rootfs is being made at build time but
is unset when the script is executed on the first boot.
</para>
</section>

</section>
<section id='usingpoky-extend-customimage'>
<title>Customising Images</title>

<para>
Poky images can be customised to satisfy
particular requirements. Several methods are detailed below,
along with guidelines on when to use them.
</para>

<section id='usingpoky-extend-customimage-custombb'>
<title>Customising Images through custom image .bb files</title>

<para>
One way to get additional software into an image is by creating a
custom image. The recipe will contain two lines:
</para>

<programlisting>
IMAGE_INSTALL = "task-poky-x11-base package1 package2"

inherit poky-image
</programlisting>

<para>
By creating a custom image, a developer has total control
over the contents of the image. It is important to use
the correct names of packages in the <glossterm><link
linkend='var-IMAGE_INSTALL'>IMAGE_INSTALL</link></glossterm> variable.
The names must be in
the OpenEmbedded notation instead of the Debian notation, for example
"glibc-dev" instead of "libc6-dev" etc.
</para>

<para>
The other method of creating a new image is by modifying
an existing image. For example, if a developer wants to add
"strace" to "poky-image-sato", the following recipe can
be used:
</para>

<programlisting>
require poky-image-sato.bb

IMAGE_INSTALL += "strace"
</programlisting>

</section>
<section id='usingpoky-extend-customimage-customtasks'>
<title>Customising Images through custom tasks</title>

<para>
For complex custom images, the best approach is to create a custom
task package which is then used to build the image (or images). A good
example of a tasks package is <filename>meta/packages/tasks/task-poky.bb</filename>.
The <glossterm><link linkend='var-PACKAGES'>PACKAGES</link></glossterm>
variable lists the task packages to build (along with the complementary
-dbg and -dev packages). For each package added,
<glossterm><link linkend='var-RDEPENDS'>RDEPENDS</link></glossterm> and
<glossterm><link linkend='var-RRECOMMENDS'>RRECOMMENDS</link></glossterm>
entries can then be added, each containing a list of packages the parent
task package should contain. An example would be:
</para>

<para>
<programlisting>
DESCRIPTION = "My Custom Tasks"

PACKAGES = "\
task-custom-apps \
task-custom-apps-dbg \
task-custom-apps-dev \
task-custom-tools \
task-custom-tools-dbg \
task-custom-tools-dev \
"

RDEPENDS_task-custom-apps = "\
dropbear \
portmap \
psplash"

RDEPENDS_task-custom-tools = "\
oprofile \
oprofileui-server \
lttng-control \
lttng-viewer"

RRECOMMENDS_task-custom-tools = "\
kernel-module-oprofile"
</programlisting>
</para>

<para>
In this example, two task packages are created, task-custom-apps and
task-custom-tools, with the dependencies and recommended package dependencies
listed. To build an image using these task packages, you would then add
"task-custom-apps" and/or "task-custom-tools" to <glossterm><link
linkend='var-IMAGE_INSTALL'>IMAGE_INSTALL</link></glossterm> or other forms
of image dependencies as described in other areas of this section.
</para>
</section>
<section id='usingpoky-extend-customimage-imagefeatures'>
<title>Customising Images through custom <glossterm><link linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm></title>

<para>
Ultimately users may want to add extra image "features" as used by Poky with the
<glossterm><link linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm>
variable. To create these, the best reference is <filename>meta/classes/poky-image.bbclass</filename>,
which illustrates how Poky achieves this. In summary, the file looks at the contents of the
<glossterm><link linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm>
variable and based on this generates the <glossterm><link linkend='var-IMAGE_INSTALL'>
IMAGE_INSTALL</link></glossterm> variable automatically. Extra features can be added by
extending the class or creating a custom class for use with specialised image .bb files.
</para>
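<para>
As an illustrative sketch only (the "my-tools" feature name and package
list are hypothetical, and <function>base_contains</function> is assumed
to be available as in base.bbclass), a custom class could map a new
feature to packages like this:
<programlisting>
# Hypothetical: install extra tools when "my-tools" is in IMAGE_FEATURES
MY_TOOLS_INSTALL = "strace lttng-control"
IMAGE_INSTALL += '${@base_contains("IMAGE_FEATURES", "my-tools", "${MY_TOOLS_INSTALL}", "", d)}'
</programlisting>
</para>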
</section>

<section id='usingpoky-extend-customimage-localconf'>
<title>Customising Images through local.conf</title>

<para>
It is possible to customise image contents by abusing
variables used by distribution maintainers in local.conf.
This method only allows the addition of packages and
is not recommended.
</para>

<para>
To add an "strace" package into the image, the following is
added to local.conf:
</para>

<programlisting>
DISTRO_EXTRA_RDEPENDS += "strace"
</programlisting>

<para>
However, since the <glossterm><link linkend='var-DISTRO_EXTRA_RDEPENDS'>
DISTRO_EXTRA_RDEPENDS</link></glossterm> variable is for
distribution maintainers, this method does not make
adding packages as simple as a custom .bb file. Using
this method, a few packages will need to be recreated
and then the image rebuilt:
</para>
<programlisting>
bitbake -cclean task-boot task-base task-poky
bitbake poky-image-sato
</programlisting>

<para>
Cleaning the task-* packages is required because they use the
<glossterm><link linkend='var-DISTRO_EXTRA_RDEPENDS'>
DISTRO_EXTRA_RDEPENDS</link></glossterm> variable. There is no need to
build them by hand as Poky images depend on the packages they contain, so
dependencies will be built automatically. For this reason we don't use the
"rebuild" task in this case, since "rebuild" does not care about
dependencies - it only rebuilds the specified package.
</para>

</section>

</section>
<section id="platdev-newmachine">
<title>Porting Poky to a new machine</title>
<para>
Adding a new machine to Poky is a straightforward process and
this section gives an idea of the changes that are needed. This guide is
meant to cover adding machines similar to those Poky already supports.
Adding a totally new architecture might require gcc/glibc changes as
well as updates to the site information and, whilst well within Poky's
capabilities, is outside the scope of this section.
</para>

<section id="platdev-newmachine-conffile">
<title>Adding the machine configuration file</title>
<para>
A .conf file needs to be added to conf/machine/ with details of the
device being added. The name of the file determines the name Poky will
use to reference this machine.
</para>

<para>
The most important variables to set in this file are <glossterm>
<link linkend='var-TARGET_ARCH'>TARGET_ARCH</link></glossterm>
(e.g. "arm"), <glossterm><link linkend='var-PREFERRED_PROVIDER'>
PREFERRED_PROVIDER</link></glossterm>_virtual/kernel (see below) and
<glossterm><link linkend='var-MACHINE_FEATURES'>MACHINE_FEATURES
</link></glossterm> (e.g. "kernel26 apm screen wifi"). Other variables
like <glossterm><link linkend='var-SERIAL_CONSOLE'>SERIAL_CONSOLE
</link></glossterm> (e.g. "115200 ttyS0"), <glossterm>
<link linkend='var-KERNEL_IMAGETYPE'>KERNEL_IMAGETYPE</link>
</glossterm> (e.g. "zImage") and <glossterm><link linkend='var-IMAGE_FSTYPES'>
IMAGE_FSTYPES</link></glossterm> (e.g. "tar.gz jffs2") might also be
needed. Full details on what these variables do and the meaning of
their contents are available through the links.
</para>
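<para>
As a minimal sketch of such a file (all values below are illustrative,
taken from the examples above rather than from a real board, and
"linux-mymachine" is a placeholder for whichever kernel recipe provides
virtual/kernel):
<programlisting>
TARGET_ARCH = "arm"
PREFERRED_PROVIDER_virtual/kernel = "linux-mymachine"
MACHINE_FEATURES = "kernel26 apm screen wifi"

SERIAL_CONSOLE = "115200 ttyS0"
KERNEL_IMAGETYPE = "zImage"
IMAGE_FSTYPES = "tar.gz jffs2"
</programlisting>
</para>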
</section>

<section id="platdev-newmachine-kernel">
<title>Adding a kernel for the machine</title>
<para>
Poky needs to be able to build a kernel for the machine. You need
to either create a new kernel recipe for this machine or extend an
existing recipe. There are plenty of kernel examples in the
packages/linux directory which can be used as references.
</para>
<para>
If creating a new recipe, the "normal" recipe writing rules apply
for setting up a <glossterm><link linkend='var-SRC_URI'>SRC_URI
</link></glossterm>, including any patches, and setting <glossterm>
<link linkend='var-S'>S</link></glossterm> to point at the source
code. You will need to create a configure task which configures the
unpacked kernel with a defconfig, be that through a "make defconfig"
command or, more usually, through copying in a suitable defconfig and
running "make oldconfig". By making use of "inherit kernel" and also
maybe some of the linux-*.inc files, most other functionality is
centralised and the defaults of the class normally work well.
</para>
<para>
If extending an existing kernel, it is usually a case of adding a
suitable defconfig file in a location similar to that used by other
machines' defconfig files in a given kernel, possibly listing it in
the SRC_URI and adding the machine to the expression in <glossterm>
<link linkend='var-COMPATIBLE_MACHINE'>COMPATIBLE_MACHINE</link>
</glossterm>.
</para>
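<para>
For example (a sketch assuming a hypothetical machine named "mymachine"
and a kernel recipe that already ships per-machine defconfig files), the
change might amount to no more than:
<programlisting>
SRC_URI += "file://defconfig"
COMPATIBLE_MACHINE = "(othermachine|mymachine)"
</programlisting>
</para>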
</section>

<section id="platdev-newmachine-formfactor">
<title>Adding a formfactor configuration file</title>
<para>
A formfactor configuration file provides information about the
target hardware on which Poky is running, and that Poky cannot
obtain from other sources such as the kernel. Some examples of
information contained in a formfactor configuration file include
framebuffer orientation, whether or not the system has a keyboard,
the positioning of the keyboard in relation to the screen, and
screen resolution.
</para>
<para>
Sane defaults should be used in most cases, but if customisation is
necessary you need to create a <filename>machconfig</filename> file
under <filename>meta/packages/formfactor/files/MACHINENAME/</filename>
where <literal>MACHINENAME</literal> is the name for which this information
applies. For information about the settings available and the defaults, please see
<filename>meta/packages/formfactor/files/config</filename>.
</para>
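<para>
A sketch of a possible <filename>machconfig</filename> file is shown
below; the variable names and values are illustrative and should be
checked against the defaults file mentioned above:
<programlisting>
HAVE_TOUCHSCREEN=1
HAVE_KEYBOARD=0
DISPLAY_ORIENTATION=0
DISPLAY_WIDTH_PIXELS=480
DISPLAY_HEIGHT_PIXELS=640
</programlisting>
</para>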
</section>
</section>

<section id='usingpoky-changes'>
<title>Making and Maintaining Changes</title>

<para>
We recognise that people will want to extend/configure/optimise Poky for
their specific uses, especially due to the extreme configurability and
flexibility Poky offers. To make it easy to keep pace with future
changes in Poky, we recommend making changes to Poky in a controlled way.
</para>
<para>
Poky supports the idea of <link
linkend='usingpoky-changes-layers'>"layers"</link> which, when used
properly, can massively ease future upgrades and allow segregation
between the Poky core and a given developer's changes. Some other advice on
managing changes to Poky is also given in the following section.
</para>

<section id="usingpoky-changes-layers">
<title>Bitbake Layers</title>

<para>
Often, people want to extend Poky, either by adding packages
or by overriding files contained within Poky to add their own
functionality. Bitbake has a powerful mechanism called
layers which provides a way to handle this extension in a fully
supported and non-invasive fashion.
</para>

<para>
The Poky tree includes two additional layers which demonstrate
this functionality, meta-moblin and meta-extras.
The meta-extras repository is not enabled by default, but enabling
it is as easy as adding the layer's path to the BBLAYERS variable in
your bblayers.conf; this is how all layers are enabled in Poky builds:
</para>
<para>
<literallayout class='monospaced'>LCONF_VERSION = "1"

BBFILES ?= ""
BBLAYERS = " \
${OEROOT}/meta \
${OEROOT}/meta-moblin \
${OEROOT}/meta-extras \
"
</literallayout>
</para>

<para>
Bitbake parses the conf/layer.conf of each of the layers in BBLAYERS
to add the layer's packages, classes and configuration to Poky.
To create your own layer, independent of the main Poky repository,
you need only create a directory with a conf/layer.conf file and
add the directory to your bblayers.conf.
</para>
<para>
The meta-extras layer.conf demonstrates the required syntax:
<literallayout class='monospaced'># We have a conf and classes directory, add to BBPATH
BBPATH := "${BBPATH}${LAYERDIR}"

# We have a packages directory, add to BBFILES
BBFILES := "${BBFILES} ${LAYERDIR}/packages/*/*.bb"

BBFILE_COLLECTIONS += "extras"
BBFILE_PATTERN_extras := "^${LAYERDIR}/"
BBFILE_PRIORITY_extras = "5"

require conf/distro/include/poky-extras-src-revisions.inc
</literallayout>
</para>

<para>
As can be seen, the layer's recipes are added to BBFILES. The
BBFILE_COLLECTIONS variable is then appended with the
layer name. The BBFILE_PATTERN variable is immediately expanded
with a regular expression used to match files from BBFILES into
a particular layer, in this case by using the base pathname.
The BBFILE_PRIORITY variable then assigns different
priorities to the files in different layers. This is useful
in situations where the same package might appear in multiple
layers and allows you to choose which layer should 'win'.
Note the use of LAYERDIR with the immediate expansion operator.
LAYERDIR expands to the directory of the current layer and
requires the immediate expansion operator so that Bitbake
does not lazily expand the variable when it's parsing a
different directory.
</para>

<para>
Extra bbclasses and configuration are added to the BBPATH
variable. In this case, the first file with the
matching name found in BBPATH is the one that is used, just
like with the PATH variable for binaries. It is therefore recommended
that you use unique bbclass and configuration file names in your
custom layer.
</para>

<para>
The recommended approach for custom layers is to store them in a
git repository of the form meta-prvt-XXXX and have this repository
cloned alongside the other meta directories in the Poky tree.
This way you can keep your Poky tree and its configuration entirely
inside OEROOT.
</para>
</section>
<section id='usingpoky-changes-commits'>
<title>Committing Changes</title>

<para>
Modifications to Poky are often managed under some kind of source
revision control system. The policy for committing to such systems
is important, as some simple policy can significantly improve
usability. The tips below are based on the policy followed for the
Poky core.
</para>

<para>
It helps to use a consistent style for commit messages when committing
changes. We've found that a style where the first line of the commit message
summarises the change and starts with the name of any package affected
works well. Not all changes are to specific packages, so the prefix could
also be a machine name or class name instead. If a change needs a longer
description, this should follow the summary.
</para>

<para>
Any commit should be self-contained in that it should leave the
metadata in a consistent state, buildable before and after the
commit. This helps ensure the autobuilder test results are valid
but is good practice regardless.
</para>
</section>

<section id='usingpoky-changes-prbump'>
<title>Package Revision Incrementing</title>

<para>
If a committed change will result in changing the package output,
then the value of the <glossterm><link linkend='var-PR'>PR</link>
</glossterm> variable needs to be increased (commonly referred to
as 'bumped') as part of that commit. Only integer values are used,
and <glossterm><link linkend='var-PR'>PR</link></glossterm> =
"r0" should not be added into new recipes as this is the default value.
When upgrading the version of a package (<glossterm><link
linkend='var-PV'>PV</link></glossterm>), the <glossterm><link
linkend='var-PR'>PR</link></glossterm> variable should be removed.
</para>
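<para>
For example, a recipe whose packaged output is changed by a commit and
which currently contains <literal>PR = "r1"</literal> would have that
line changed to <literal>PR = "r2"</literal> as part of the commit (or
gain a <literal>PR = "r1"</literal> line if it previously relied on the
implicit "r0" default).
</para>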
<para>
The aim is that the package version will only ever increase. If
for some reason <glossterm><link linkend='var-PV'>PV</link></glossterm>
will change but not increase, the <glossterm><link
linkend='var-PE'>PE</link></glossterm> (Package Epoch) can
be increased (it defaults to '0'). The version numbers aim to
follow the <ulink url='http://www.debian.org/doc/debian-policy/ch-controlfields.html'>
Debian Version Field Policy Guidelines</ulink> which define how
versions are compared and hence what "increasing" means.
</para>

<para>
There are two reasons for doing this. The first is to ensure that
when a developer updates and rebuilds, they get all the changes to
the repository and don't have to remember to rebuild any sections.
The second is to ensure that target users are able to upgrade their
devices via their package manager, such as with the <command>
opkg update;opkg upgrade</command> commands (or similar for
dpkg/apt or rpm based systems). The aim is to ensure Poky has
upgradable packages in all cases.
</para>
</section>
<section id='usingpoky-changes-collaborate'>
<title>Using Poky in a Team Environment</title>

<para>
It may not be immediately clear how Poky can work in a team environment,
or scale to a large team of developers. The specifics of any situation
will determine the best solution, and Poky offers immense flexibility in
that respect, but there are some practices that experience has shown to work
well.
</para>

<para>
The core component of any development effort with Poky is often an
automated build testing framework and image generation process. This
can be used to check that the metadata is buildable, highlight when
commits break the builds and provide up-to-date images allowing people
to test the end result and use them as a base platform for further
development. Experience shows that buildbot is a good fit for this role
and that it works well to configure it to make two types of build -
incremental builds and 'from scratch'/full builds. The incremental builds
can be tied to a commit hook which triggers them each time a commit is
made to the metadata and are a useful acid test of whether a given commit
breaks the build in some serious way. They catch lots of simple errors
and whilst they won't catch 100% of failures, the tests are fast so
developers can get feedback on their changes quickly. The full builds
are builds that build everything from the ground up and test everything.
They usually happen at preset times such as at night when the machine
load isn't high from the incremental builds.
</para>

<para>
Most teams have pieces of software undergoing active development. It is of
significant benefit to put these under the control of a source control system
compatible with Poky, such as git or svn. The autobuilder can then be set to
pull the latest revisions of these packages so the latest commits get tested
by the builds, allowing any issues to be highlighted quickly. Poky easily
supports configurations where there is both a stable known good revision
and a floating revision to test. Poky can also take changes only from specific
source control branches, giving another way it can be used to track/test only
specified changes.
</para>
<para>
Perhaps the hardest part of setting this up is the policy that surrounds
the different source control systems, be they software projects or the Poky
metadata itself. The circumstances will be different in each case but this is
one of Poky's advantages - the system itself doesn't force any particular policy,
unlike a lot of build systems, allowing the best policy to be chosen for the
circumstances.
</para>
</section>

<section id='usingpoky-changes-updatingimages'>
<title>Updating Existing Images</title>

<para>
Often, rather than reflashing a new image, you might wish to install updated
packages into an existing running system. This can be done by sharing the <filename class="directory">tmp/deploy/ipk/</filename> directory through a web server and then, on the device, changing <filename>/etc/opkg/base-feeds.conf</filename> to point at this server, for example by adding:
</para>
<literallayout class='monospaced'>
src/gz all http://www.mysite.com/somedir/deploy/ipk/all
src/gz armv7a http://www.mysite.com/somedir/deploy/ipk/armv7a
src/gz beagleboard http://www.mysite.com/somedir/deploy/ipk/beagleboard</literallayout>
</section>
</section>
<section id='usingpoky-modifing-packages'>
<title>Modifying Package Source Code</title>

<para>
Poky is usually used to build software rather than to modify
it. However, there are ways Poky can be used to modify software.
</para>

<para>
During building, the sources are available in the <glossterm><link
linkend='var-WORKDIR'>WORKDIR</link></glossterm> directory.
Where exactly this is depends on the type of package and the
architecture of the target device. For a standard recipe not
related to <glossterm><link
linkend='var-MACHINE'>MACHINE</link></glossterm> it will be
<filename>tmp/work/PACKAGE_ARCH-poky-TARGET_OS/PN-PV-PR/</filename>.
Target device dependent packages use <glossterm><link
linkend='var-MACHINE'>MACHINE
</link></glossterm>
instead of <glossterm><link linkend='var-PACKAGE_ARCH'>PACKAGE_ARCH
</link></glossterm>
in the directory name.
</para>

<tip>
<para>
Check whether the package recipe sets the <glossterm><link
linkend='var-S'>S</link></glossterm> variable to something
other than the standard <filename>WORKDIR/PN-PV/</filename> value.
</para>
</tip>
<para>
After building a package, a user can modify the package source code
without problems. The easiest way to test changes is by calling the
"compile" task:
</para>

<programlisting>
bitbake --cmd compile --force NAME_OF_PACKAGE
</programlisting>

<para>
Other tasks may also be called this way.
</para>
<section id='usingpoky-modifying-packages-quilt'>
<title>Modifying Package Source Code with quilt</title>

<para>
By default Poky uses <ulink
url='http://savannah.nongnu.org/projects/quilt'>quilt</ulink>
to manage patches in the <function>do_patch</function> task.
It is a powerful tool which can be used to track all
modifications done to package sources.
</para>

<para>
Before modifying source code it is important to
notify quilt so it will track the changes in a new patch
file:
<programlisting>
quilt new NAME-OF-PATCH.patch
</programlisting>

Then add all files which will be modified into that
patch:
<programlisting>
quilt add file1 file2 file3
</programlisting>

Now start editing. At the end, quilt needs to be used
to generate the final patch which will contain all the
modifications:
<programlisting>
quilt refresh
</programlisting>

The resulting patch file can be found in the
<filename class="directory">patches/</filename> subdirectory of the source
(<glossterm><link linkend='var-S'>S</link></glossterm>) directory. For future builds it
should be copied into the
Poky metadata and added to the <glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm> of the recipe:
<programlisting>
SRC_URI += "file://NAME-OF-PATCH.patch"
</programlisting>

This also requires bumping the <glossterm><link
linkend='var-PR'>PR</link></glossterm> value in the same recipe, since the resulting packages have changed.
</para>

</section>

</section>
<section id='usingpoky-configuring-LIC_FILES_CHKSUM'>
<title>Configuring the LIC_FILES_CHKSUM variable</title>
<para>
Changes in the license text inside source code files are tracked
using the LIC_FILES_CHKSUM metadata variable.
</para>

<section id='usingpoky-specifying-LIC_FILES_CHKSUM'>
<title>Specifying the LIC_FILES_CHKSUM variable</title>

<programlisting>
LIC_FILES_CHKSUM = "file://COPYING; md5=xxxx \
file://licfile1.txt; beginline=5; endline=29;md5=yyyy \
file://licfile2.txt; endline=50;md5=zzzz \
..."
</programlisting>
</section>

<section id='usingpoky-LIC_FILES_CHKSUM-explanation-of-syntax'>
<title>Explanation of syntax</title>

<para>
This parameter lists all the important files containing the license text
for the source code. It is also possible to specify on which line the license text
starts and on which line it ends within a file using the "beginline" and
"endline" parameters. If the "beginline" parameter is not specified, the license
text is assumed to begin on the first line. Similarly, if the "endline" parameter is
not specified, the license text is assumed to end at the last line of the file.
So if a file contains only licensing information, there is no need
to specify the "beginline" and "endline" parameters.
</para>
<para>
The "md5" parameter stores the md5 checksum of the license text. If
the license text in a file changes in any way, its md5 sum will differ and will not
match the previously stored md5 checksum. This mismatch will trigger a build
failure, notifying the developer of the license text change and allowing
them to review it. Also note that if the md5 checksum
does not match while building, the correct md5 checksum is printed in the build
log.
</para>
<para>
There is no limit on how many files can be specified in this parameter, but generally
every project needs just one or two files for license tracking.
Many projects have a "COPYING" file which stores all the
license information for all the source code files. If the "COPYING" file
is valid, then tracking only that file is enough.
</para>
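<para>
For instance, for a project whose licensing is fully covered by a valid
"COPYING" file, a single entry is enough (the md5 value here is an
illustrative placeholder, following the convention above):
<programlisting>
LIC_FILES_CHKSUM = "file://COPYING; md5=xxxx"
</programlisting>
</para>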
<tip>
<para>
1. If you specify an empty or invalid "md5" parameter, then while building
the package BitBake will give an md5 mismatch error and show the correct
"md5" parameter value in the build log.
</para>
<para>
2. If the whole file contains only license text, there is no need to
specify the "beginline" and "endline" parameters.
</para>
</tip>
</section>
</section>
<section id='usingpoky-configuring-DISTRO_PN_ALIAS'>
<title>Configuring the DISTRO_PN_ALIAS variable</title>
<para>
Sometimes the same package has different names in different
Linux distributions, and that becomes an issue when the distro_check
task checks whether the given recipe's package exists in other Linux
distributions. This issue is avoided by defining a per-distribution
recipe name alias, DISTRO_PN_ALIAS:
</para>

<section id='usingpoky-specifying-DISTRO_PN_ALIAS'>
<title>Specifying the DISTRO_PN_ALIAS variable</title>

<programlisting>
DISTRO_PN_ALIAS = "distro1=package_name_alias1; distro2=package_name_alias2 \
distro3=package_name_alias3; \
..."
</programlisting>
<para>
Look at the meta/packages/xorg-app/xset_1.0.4.bb recipe file for an example.
</para>
<tip>
<para>
The current code can automatically check whether the source package for a
recipe exists in the latest releases of the following distributions:
</para>
<programlisting>
Fedora, OpenSuSE, Debian, Ubuntu, Mandriva
</programlisting>
<para>
For example, this command will generate a report listing which Linux
distributions include the sources for each Poky recipe:
</para>
<programlisting>
bitbake world -f -c distro_check
</programlisting>
<para>
The results will be stored in the build/tmp/log/distro_check-${DATETIME}.results file.
</para>
</tip>
</section>
</section>
</chapter>

<!--
vim: expandtab tw=80 ts=4
-->
314 handbook/faq.xml
@@ -1,314 +0,0 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<appendix id='faq'>
<title>FAQ</title>
<qandaset>
<qandaentry>
<question>
<para>
How does Poky differ from <ulink url='http://www.openembedded.org/'>OpenEmbedded</ulink>?
</para>
</question>
<answer>
<para>
Poky is a derivative of <ulink
url='http://www.openembedded.org/'>OpenEmbedded</ulink>: a stable,
smaller subset focused on the GNOME Mobile environment. Development
in Poky is closely tied to OpenEmbedded, with features being merged
regularly between the two for mutual benefit.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
How can you claim Poky is stable?
</para>
</question>
<answer>
<para>
There are three areas that help with stability:

<itemizedlist>
<listitem>
<para>
We keep Poky small and focused - around 650 packages compared to over 5000 for full OE
</para>
</listitem>
<listitem>
<para>
We only support hardware that we have access to for testing
</para>
</listitem>
<listitem>
<para>
We have a Buildbot which provides continuous build and integration tests
</para>
</listitem>
</itemizedlist>
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
How do I get support for my board added to Poky?
</para>
</question>
<answer>
<para>
There are two main ways to get a board supported in Poky:
<itemizedlist>
<listitem>
<para>
Send us the board if we don't have it yet
</para>
</listitem>
<listitem>
<para>
Send us bitbake recipes if you have them (see the Poky handbook to find out how to create recipes)
</para>
</listitem>
</itemizedlist>
Usually, if it's not a completely exotic board, adding support in Poky should be fairly straightforward.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
Are there any products running Poky?
</para>
</question>
<answer>
<para>
The <ulink url='http://vernier.com/labquest/'>Vernier Labquest</ulink> is using Poky (for more about the Labquest see the case study at OpenedHand). There are a number of pre-production devices using Poky and we will announce those as soon as they are released.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
What is the Poky output?
</para>
</question>
<answer>
<para>
The output of a Poky build will depend on how it was started, as the same set of recipes can be used to output various formats. Usually the output is a flashable image ready for the target device.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
How do I add my package to Poky?
</para>
</question>
<answer>
<para>
To add a package you need to create a bitbake recipe - see the Poky handbook to find out how to create a recipe.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
Do I have to reflash my entire board with a new Poky image when recompiling a package?
</para>
</question>
<answer>
<para>
Poky can build packages in various formats: ipk (for ipkg/opkg), Debian package (.deb), or RPM. The packages can then be upgraded using the package tools on the device, much like on a desktop distribution such as Ubuntu or Fedora.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
What is GNOME Mobile? What's the difference between GNOME Mobile and GNOME?
</para>
</question>
<answer>
<para>
<ulink url='http://www.gnome.org/mobile/'>GNOME Mobile</ulink> is a subset of the GNOME platform targeted at mobile and embedded devices. The main difference between GNOME Mobile and standard GNOME is that desktop-orientated libraries have been removed, along with deprecated libraries, creating a much smaller footprint.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
I see the error 'chmod: XXXXX new permissions are r-xrwxrwx, not r-xr-xr-x'. What's wrong?
</para>
</question>
<answer>
<para>
You're probably running the build on an NTFS filesystem. Use a sane one like ext2/3/4 instead!
</para>
</answer>
</qandaentry>
<qandaentry>
|
||||
<question>
|
||||
<para>
|
||||
How do I make Poky work in RHEL/CentOS?
|
||||
</para>
|
||||
</question>
|
||||
<answer>
|
||||
<para>
|
||||
To get Poky working under RHEL/CentOS 5.1 you need to first install some required packages. The standard CentOS packages needed are:
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
"Development tools" (selected during installation)
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
texi2html
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
compat-gcc-34
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
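<para>
As a rough sketch only (group and package names can vary between point releases), these can also be installed after the fact with yum:
</para>
<para>
<literallayout class='monospaced'>
# yum groupinstall "Development tools"
# yum install texi2html compat-gcc-34
</literallayout>
</para>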
<para>
On top of those the following external packages are needed:
<itemizedlist>
<listitem>
<para>
python-sqlite2 from the <ulink url='http://dag.wieers.com/rpm/packages/python-sqlite2/'>DAG repository</ulink>
</para>
</listitem>
<listitem>
<para>
help2man from the <ulink url='http://centos.karan.org/el5/extras/testing/i386/RPMS/help2man-1.33.1-2.noarch.rpm'>Karan repository</ulink>
</para>
</listitem>
</itemizedlist>
</para>
<para>
Once these packages are installed Poky will be able to build standard images; however, there may be a problem with QEMU segfaulting. You can either disable the generation of binary locales by setting <glossterm><link linkend='var-ENABLE_BINARY_LOCALE_GENERATION'>ENABLE_BINARY_LOCALE_GENERATION</link></glossterm> to "0", or remove linux-2.6-execshield.patch from the kernel and rebuild it, since it is that patch which causes the problems with QEMU.
</para>
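<para>
The first workaround is a one-line addition to your configuration (typically <filename>local.conf</filename>):
</para>
<para>
<literallayout class='monospaced'>
ENABLE_BINARY_LOCALE_GENERATION = "0"
</literallayout>
</para>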
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
I see lots of 404 responses for files on http://folks.o-hand.com/~richard/poky/sources/*. Is something wrong?
</para>
</question>
<answer>
<para>
Nothing is wrong. Poky checks any configured source mirrors before downloading
from the upstream sources, searching for both source archives and
pre-checked-out versions of SCM-managed software; in large installations
this can reduce load on the SCM servers themselves. The address above is one of the
default mirrors configured into standard Poky, so if an upstream source disappears
we can place sources there and builds continue to work.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
I have machine-specific data in a package for one machine only, but the package is
being marked as machine-specific in all cases. How do I stop that?
</para>
</question>
<answer>
<para>
Set <glossterm><link linkend='var-SRC_URI_OVERRIDES_PACKAGE_ARCH'>SRC_URI_OVERRIDES_PACKAGE_ARCH</link></glossterm> = "0" in the .bb file, but make sure the package is manually marked as
machine-specific for the case that needs it. The code which handles <glossterm><link
linkend='var-SRC_URI_OVERRIDES_PACKAGE_ARCH'>SRC_URI_OVERRIDES_PACKAGE_ARCH</link></glossterm>
is in base.bbclass.
</para>
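<para>
A sketch of what this might look like in the recipe; the machine name is a placeholder, and the override shown is one possible way to mark the package machine-specific by hand:
</para>
<para>
<literallayout class='monospaced'>
SRC_URI_OVERRIDES_PACKAGE_ARCH = "0"

# For the one machine whose data really is machine-specific,
# mark the package architecture manually:
PACKAGE_ARCH_somemachine = "${MACHINE_ARCH}"
</literallayout>
</para>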
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
I'm behind a firewall and need to use a proxy server. How do I do that?
</para>
</question>
<answer>
<para>
Most source fetching by Poky is done by wget, so you need to specify the proxy
settings in a .wgetrc file in your home directory. Example settings in that file would be
'http_proxy = http://proxy.yoyodyne.com:18023/' and 'ftp_proxy = http://proxy.yoyodyne.com:18023/'.
Poky also includes a site.conf.sample file which shows how to configure cvs and git proxy servers
if needed.
</para>
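<para>
Putting that together, a minimal <filename>~/.wgetrc</filename> would contain the following (the proxy host and port are of course placeholders for your own):
</para>
<para>
<literallayout class='monospaced'>
http_proxy = http://proxy.yoyodyne.com:18023/
ftp_proxy = http://proxy.yoyodyne.com:18023/
</literallayout>
</para>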
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
I'm using Ubuntu Intrepid and am seeing build failures. What's wrong?
</para>
</question>
<answer>
<para>
In Intrepid, Ubuntu enabled by default some compile-time security features
and warnings which are normally optional. There are more details at <ulink
url='https://wiki.ubuntu.com/CompilerFlags'>https://wiki.ubuntu.com/CompilerFlags</ulink>.
You can work around the problem by disabling those options, adding " -Wno-format-security -U_FORTIFY_SOURCE"
to the BUILD_CPPFLAGS variable in conf/bitbake.conf.
</para>
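<para>
A sketch of the kind of edit meant here (the existing definition of BUILD_CPPFLAGS in your conf/bitbake.conf may differ, so append rather than replace):
</para>
<para>
<literallayout class='monospaced'>
BUILD_CPPFLAGS += "-Wno-format-security -U_FORTIFY_SOURCE"
</literallayout>
</para>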
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
What's the difference between foo and foo-native?
</para>
</question>
<answer>
<para>
The *-native targets are designed to run on the system the build is running on. These are usually tools that are needed to assist the build in some way, such as quilt-native, which is used to apply patches. The non-native version is the one that would run on the target device.
</para>
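<para>
For example, both of these are valid build targets; the first builds quilt to run on your build host, the second builds it for the target device:
</para>
<para>
<literallayout class='monospaced'>
$ bitbake quilt-native
$ bitbake quilt
</literallayout>
</para>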
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
I'm seeing random build failures. Help?!
</para>
</question>
<answer>
<para>
If the same build fails in totally different and random ways, the most likely explanation is that either the hardware you're running it on has some problem or, if you are running it under virtualisation, the virtualisation has bugs. Poky processes a massive amount of data, causing lots of network, disk and CPU activity, and it is sensitive to even single-bit failures in any of these areas. Totally random failures have always been traced back to hardware or virtualisation issues.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
What do we need to ship for licence compliance?
</para>
</question>
<answer>
<para>
This is a difficult question and you need to consult your lawyer for the answer for your specific case. It's worth bearing in mind that for GPL compliance there needs to be enough information shipped to allow someone else to rebuild the same end result you are shipping. This means sharing not only the source code and any patches applied to it, but also any configuration information about how the package was configured and built.
</para>
</answer>
</qandaentry>
</qandaset>
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->
@@ -1,329 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

<chapter id='intro'>
<title>Introduction</title>

<section id='intro-what-is'>
<title>What is Poky?</title>

<para>
Poky is an open source platform build tool. It is a complete
software development environment for the creation of Linux
devices. It aids the design, development, building, debugging,
simulation and testing of complete modern software stacks
using Linux, the X Window System and GNOME Mobile
based application frameworks. It is based on <ulink
url='http://openembedded.org/'>OpenEmbedded</ulink> but has
been customised with a particular focus.
</para>
<para>Poky was set up to:</para>

<itemizedlist>
<listitem>
<para>Provide an open source platform build and development tool based on Linux, X11, Matchbox, GTK+, Pimlico, Clutter, and other <ulink url='http://gnome.org/mobile'>GNOME Mobile</ulink> technologies.</para>
</listitem>
<listitem>
<para>Create a focused, stable subset of OpenEmbedded that can be easily and reliably built and developed upon.</para>
</listitem>
<listitem>
<para>Fully support a wide range of x86 and ARM hardware and device virtualisation.</para>
</listitem>
</itemizedlist>
<para>
Poky is primarily a platform builder which generates filesystem images
based on open source software such as the Kdrive X server, the Matchbox
window manager, the GTK+ toolkit and the D-Bus message bus system. Images
for many kinds of devices can be generated; however, the standard example
machines target QEMU full-system emulation (both x86 and ARM) and the ARM-based
Sharp Zaurus series of devices. Poky's ability to boot inside a QEMU
emulator makes it particularly suitable as a test platform for development
of embedded software.
</para>

<para>
An important component integrated within Poky is Sato, a GNOME Mobile
based user interface environment.
It is designed to work well with screens at very high DPI and of restricted
size, such as those often found on smartphones and PDAs. It is coded with a
focus on efficiency and speed so that it works smoothly on hand-held and
other embedded hardware. It will sit neatly on top of any device
using the GNOME Mobile stack, providing a well-defined user experience.
</para>
<screenshot>
<mediaobject>
<imageobject>
<imagedata fileref="screenshots/ss-sato.png" format="PNG" align='center' scalefit='1' width="100%" contentdepth="100%"/>
</imageobject>
<caption>
<para>The Sato Desktop - a screenshot from a machine running a Poky-built image</para>
</caption>
</mediaobject>
</screenshot>

<para>
Poky has a growing open source community and is also backed by commercial organisations including <ulink url="http://www.intel.com/">Intel Corporation</ulink>.
</para>
</section>
<section id='intro-manualoverview'>
<title>Documentation Overview</title>

<para>
The handbook is split into sections covering different aspects of Poky.
The <link linkend='usingpoky'>'Using Poky' section</link> gives an overview
of the components that make up Poky, followed by information about using and
debugging the Poky build system. The <link linkend='extendpoky'>'Extending Poky' section</link>
gives information about how to extend and customise Poky, along with advice
on how to manage these changes. The <link linkend='platdev'>'Platform Development with Poky'
section</link> covers interaction between Poky and target
hardware for common platform development tasks such as software development,
debugging and profiling. The rest of the manual
consists of several reference sections, each giving details on a specific
area of Poky functionality.
</para>

<para>
This manual applies to Poky Release 3.1 (Pinky).
</para>

</section>
<section id='intro-requirements'>
<title>System Requirements</title>

<para>
We recommend Debian-based distributions, in particular a recent Ubuntu
release (7.04 or newer), as the host system for Poky. Nothing in Poky is
distribution-specific, and
other distributions will most likely work as long as the appropriate
prerequisites are installed; we know of Poky being used successfully on Red Hat,
SUSE, Gentoo and Slackware host systems.
</para>
<para>On a Debian-based system, you need the following packages installed:</para>

<itemizedlist>
<listitem>
<para>build-essential</para>
</listitem>
<listitem>
<para>python (version 2.6 or later)</para>
</listitem>
<listitem>
<para>diffstat</para>
</listitem>
<listitem>
<para>texinfo</para>
</listitem>
<listitem>
<para>texi2html</para>
</listitem>
<listitem>
<para>cvs</para>
</listitem>
<listitem>
<para>subversion</para>
</listitem>
<listitem>
<para>wget</para>
</listitem>
<listitem>
<para>gawk</para>
</listitem>
<listitem>
<para>help2man</para>
</listitem>
<listitem>
<para>bochsbios (only to run qemux86 images)</para>
</listitem>
</itemizedlist>
<para>
Debian users can add debian.o-hand.com to their APT sources (see
<ulink url='http://debian.o-hand.com'/>
for instructions on doing this) and then run <command>
"apt-get install qemu poky-depends poky-scripts"</command>, which will
automatically install all these dependencies. Virtualisation images with
Poky and all dependencies can also easily be built if required.
</para>

<para>
Poky can use a system-provided QEMU or build its own, depending on how it's
configured. See the options in <filename>local.conf</filename> for more details.
</para>
</section>
<section id='intro-quickstart'>
<title>Quick Start</title>

<section id='intro-quickstart-build'>
<title>Building and Running an Image</title>

<para>
If you want to try Poky, you can do so in a few commands. The example below
downloads the Poky source code, sets up a build environment, builds an
image and then runs that image under the QEMU emulator in x86 system emulation mode:
</para>

<para>
<literallayout class='monospaced'>
$ wget http://pokylinux.org/releases/poky-green-3.3.tar.bz2
$ tar xjvf poky-green-3.3.tar.bz2
$ cd green-3.3/
$ source poky-init-build-env
$ bitbake poky-image-sato
$ runqemu qemux86
</literallayout>
</para>
<note>
<para>
This process needs Internet access and about 3 GB of free disk space,
and you should expect the build to take about 4 to 5 hours, since
it builds an entire Linux system from source, including the toolchain!
</para>
</note>

<para>
To build for other machines see the <glossterm><link
linkend='var-MACHINE'>MACHINE</link></glossterm> variable in build/conf/local.conf.
This file contains other useful configuration information, and the default version
has examples of common setup needs, so it is worth
reading. To take advantage of multiple processor cores and speed up builds, for example, set the
<glossterm><link linkend='var-BB_NUMBER_THREADS'>BB_NUMBER_THREADS</link></glossterm>
and <glossterm><link linkend='var-PARALLEL_MAKE'>PARALLEL_MAKE</link></glossterm> variables.

The images/kernels built by Poky are placed in the <filename class="directory">tmp/deploy/images</filename>
directory.
</para>
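<para>
For illustration, the relevant lines in <filename>build/conf/local.conf</filename> might look like the following; the machine name and the thread counts are examples, so pick values to suit your hardware:
</para>
<para>
<literallayout class='monospaced'>
MACHINE = "qemuarm"
BB_NUMBER_THREADS = "4"
PARALLEL_MAKE = "-j 4"
</literallayout>
</para>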
<para>
You could also run <command>"poky-qemu zImage-qemuarm.bin poky-image-sato-qemuarm.ext2"
</command> within the images directory if you have the poky-scripts Debian package
installed from debian.o-hand.com. This allows the QEMU images to be used standalone,
outside the Poky build environment.
</para>
<para>
To set up networking within QEMU see the <link linkend='usingpoky-install-qemu-networking'>
QEMU/USB networking with IP masquerading</link> section.
</para>

</section>
<section id='intro-quickstart-qemu'>
<title>Downloading and Using Prebuilt Images</title>

<para>
Prebuilt images from Poky are also available if you just want to run the system
under QEMU. To use these you need to:
</para>

<itemizedlist>
<listitem>
<para>
Add debian.o-hand.com to your APT sources (see
<ulink url='http://debian.o-hand.com'/> for instructions on doing this)
</para>
</listitem>
<listitem>
<para>Install the patched QEMU and poky-scripts:</para>
<para>
<literallayout class='monospaced'>
$ apt-get install qemu poky-scripts
</literallayout>
</para>
</listitem>

<listitem>
<para>
Download a Poky QEMU release kernel (*zImage*qemu*.bin) and compressed
filesystem image (poky-image-*-qemu*.ext2.bz2), which
you'll need to decompress with 'bzip2 -d'. These are available from the
<ulink url='http://pokylinux.org/releases/blinky-3.0/'>last release</ulink>
or from the <ulink url='http://pokylinux.org/autobuild/poky/'>autobuilder</ulink>.
</para>
</listitem>
<listitem>
<para>Start the image:</para>
<para>
<literallayout class='monospaced'>
$ poky-qemu <kernel> <image>
</literallayout>
</para>
</listitem>
</itemizedlist>

<note><para>
A patched version of QEMU is required at present. A suitable version is available from
<ulink url='http://debian.o-hand.com'/>; it can be built
by Poky (bitbake qemu-native) or can be downloaded/built as part of the toolchain/SDK tarballs.
</para></note>

</section>
</section>
<section id='intro-getit'>
<title>Obtaining Poky</title>

<section id='intro-getit-releases'>
<title>Releases</title>

<para>Periodically, we make releases of Poky and these are available
at <ulink url='http://pokylinux.org/releases/'/>.
These are more stable and tested than the nightly development images.</para>
</section>

<section id='intro-getit-nightly'>
<title>Nightly Builds</title>

<para>
We make nightly builds of Poky for testing purposes and to make the
latest developments available. The output from these builds is available
at <ulink url='http://pokylinux.org/autobuild/'/>,
where the build numbers increase for each subsequent build and can be used to reference a particular build.
</para>
<para>
Automated builds are available for "standard" Poky and for Poky SDKs and toolchains, as well
as any testing versions we might have such as poky-bleeding. The toolchains can
be used either as external standalone toolchains or can be combined with Poky as a
prebuilt toolchain to reduce build time. Using the external toolchains is simply a
case of untarring the tarball into the root of your system (it only creates files in
<filename class="directory">/usr/local/poky</filename>) and then enabling the option
in <filename>local.conf</filename>.
</para>
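<para>
As a sketch, installing one of these toolchain tarballs looks like the following; the tarball filename is a placeholder for whichever toolchain you downloaded, and the exact option to enable afterwards is described in <filename>local.conf</filename> itself:
</para>
<para>
<literallayout class='monospaced'>
$ sudo tar -xjf poky-toolchain-example.tar.bz2 -C /
</literallayout>
</para>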
</section>

<section id='intro-getit-dev'>
<title>Development Checkouts</title>

<para>
Poky is available from our GIT repository located at
git://git.pokylinux.org/poky.git; a web interface to the repository
can be accessed at <ulink url='http://git.pokylinux.org/'/>.
</para>
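<para>
To take a local checkout:
</para>
<para>
<literallayout class='monospaced'>
$ git clone git://git.pokylinux.org/poky.git
</literallayout>
</para>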
<para>
The 'master' branch is where development work takes place, and you should use it if you
want to work with the latest cutting-edge developments. It is possible that master
can suffer temporary periods of instability while new features are developed, and
if this is undesirable we recommend using one of the release branches.
</para>
</section>

</section>

</chapter>
<!--
vim: expandtab tw=80 ts=4
-->
Binary file not shown.
@@ -1,30 +0,0 @@
2008-02-15  Matthew Allum  <mallum@openedhand.com>

	* common/Makefile.am:
	* common/poky-handbook.png:
	Add a PNG image for the manual. Seems our logo SVG
	is too complex/transparent for PDF

2008-02-14  Matthew Allum  <mallum@openedhand.com>

	* common/Makefile.am:
	* common/fop-config.xml.in:
	* common/poky-db-pdf.xsl:
	* poky-docbook-to-pdf.in:
	Font tweakage.

2008-01-27  Matthew Allum  <mallum@openedhand.com>

	* INSTALL:
	* Makefile.am:
	* README:
	* autogen.sh:
	* common/Makefile.am:
	* common/fop-config.xml.in:
	* common/ohand-color.svg:
	* common/poky-db-pdf.xsl:
	* common/poky.svg:
	* common/titlepage.templates.xml:
	* configure.ac:
	* poky-docbook-to-pdf.in:
	Initial import.
@@ -1,236 +0,0 @@
Installation Instructions
*************************

Copyright (C) 1994, 1995, 1996, 1999, 2000, 2001, 2002, 2004, 2005 Free
Software Foundation, Inc.

This file is free documentation; the Free Software Foundation gives
unlimited permission to copy, distribute and modify it.

Basic Installation
==================

These are generic installation instructions.

The `configure' shell script attempts to guess correct values for
various system-dependent variables used during compilation. It uses
those values to create a `Makefile' in each directory of the package.
It may also create one or more `.h' files containing system-dependent
definitions. Finally, it creates a shell script `config.status' that
you can run in the future to recreate the current configuration, and a
file `config.log' containing compiler output (useful mainly for
debugging `configure').

It can also use an optional file (typically called `config.cache'
and enabled with `--cache-file=config.cache' or simply `-C') that saves
the results of its tests to speed up reconfiguring. (Caching is
disabled by default to prevent problems with accidental use of stale
cache files.)

If you need to do unusual things to compile the package, please try
to figure out how `configure' could check whether to do them, and mail
diffs or instructions to the address given in the `README' so they can
be considered for the next release. If you are using the cache, and at
some point `config.cache' contains results you don't want to keep, you
may remove or edit it.

The file `configure.ac' (or `configure.in') is used to create
`configure' by a program called `autoconf'. You only need
`configure.ac' if you want to change it or regenerate `configure' using
a newer version of `autoconf'.

The simplest way to compile this package is:

  1. `cd' to the directory containing the package's source code and type
     `./configure' to configure the package for your system. If you're
     using `csh' on an old version of System V, you might need to type
     `sh ./configure' instead to prevent `csh' from trying to execute
     `configure' itself.

     Running `configure' takes a while. While running, it prints some
     messages telling which features it is checking for.

  2. Type `make' to compile the package.

  3. Optionally, type `make check' to run any self-tests that come with
     the package.

  4. Type `make install' to install the programs and any data files and
     documentation.

  5. You can remove the program binaries and object files from the
     source code directory by typing `make clean'. To also remove the
     files that `configure' created (so you can compile the package for
     a different kind of computer), type `make distclean'. There is
     also a `make maintainer-clean' target, but that is intended mainly
     for the package's developers. If you use it, you may have to get
     all sorts of other programs in order to regenerate files that came
     with the distribution.

Compilers and Options
=====================

Some systems require unusual options for compilation or linking that the
`configure' script does not know about. Run `./configure --help' for
details on some of the pertinent environment variables.

You can give `configure' initial values for configuration parameters
by setting variables in the command line or in the environment. Here
is an example:

     ./configure CC=c89 CFLAGS=-O2 LIBS=-lposix

*Note Defining Variables::, for more details.

Compiling For Multiple Architectures
====================================

You can compile the package for more than one kind of computer at the
same time, by placing the object files for each architecture in their
own directory. To do this, you must use a version of `make' that
supports the `VPATH' variable, such as GNU `make'. `cd' to the
directory where you want the object files and executables to go and run
the `configure' script. `configure' automatically checks for the
source code in the directory that `configure' is in and in `..'.

If you have to use a `make' that does not support the `VPATH'
variable, you have to compile the package for one architecture at a
time in the source code directory. After you have installed the
package for one architecture, use `make distclean' before reconfiguring
for another architecture.

Installation Names
==================

By default, `make install' installs the package's commands under
`/usr/local/bin', include files under `/usr/local/include', etc. You
can specify an installation prefix other than `/usr/local' by giving
`configure' the option `--prefix=PREFIX'.

You can specify separate installation prefixes for
architecture-specific files and architecture-independent files. If you
pass the option `--exec-prefix=PREFIX' to `configure', the package uses
PREFIX as the prefix for installing programs and libraries.
Documentation and other data files still use the regular prefix.

In addition, if you use an unusual directory layout you can give
options like `--bindir=DIR' to specify different values for particular
kinds of files. Run `configure --help' for a list of the directories
you can set and what kinds of files go in them.

If the package supports it, you can cause programs to be installed
with an extra prefix or suffix on their names by giving `configure' the
option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'.

Optional Features
=================

Some packages pay attention to `--enable-FEATURE' options to
`configure', where FEATURE indicates an optional part of the package.
They may also pay attention to `--with-PACKAGE' options, where PACKAGE
is something like `gnu-as' or `x' (for the X Window System). The
`README' should mention any `--enable-' and `--with-' options that the
package recognizes.

For packages that use the X Window System, `configure' can usually
find the X include and library files automatically, but if it doesn't,
you can use the `configure' options `--x-includes=DIR' and
`--x-libraries=DIR' to specify their locations.

Specifying the System Type
==========================

There may be some features `configure' cannot figure out automatically,
but needs to determine by the type of machine the package will run on.
Usually, assuming the package is built to be run on the _same_
architectures, `configure' can figure that out, but if it prints a
message saying it cannot guess the machine type, give it the
`--build=TYPE' option. TYPE can either be a short name for the system
type, such as `sun4', or a canonical name which has the form:

     CPU-COMPANY-SYSTEM

where SYSTEM can have one of these forms:

     OS KERNEL-OS

See the file `config.sub' for the possible values of each field. If
`config.sub' isn't included in this package, then this package doesn't
need to know the machine type.

If you are _building_ compiler tools for cross-compiling, you should
use the option `--target=TYPE' to select the type of system they will
produce code for.

If you want to _use_ a cross compiler, that generates code for a
platform different from the build platform, you should specify the
"host" platform (i.e., that on which the generated programs will
eventually be run) with `--host=TYPE'.

Sharing Defaults
================

If you want to set default values for `configure' scripts to share, you
can create a site shell script called `config.site' that gives default
values for variables like `CC', `cache_file', and `prefix'.
`configure' looks for `PREFIX/share/config.site' if it exists, then
`PREFIX/etc/config.site' if it exists. Or, you can set the
`CONFIG_SITE' environment variable to the location of the site script.
A warning: not all `configure' scripts look for a site script.

Defining Variables
==================

Variables not defined in a site shell script can be set in the
environment passed to `configure'. However, some packages may run
configure again during the build, and the customized values of these
variables may be lost. In order to avoid this problem, you should set
them in the `configure' command line, using `VAR=value'. For example:

     ./configure CC=/usr/local2/bin/gcc

causes the specified `gcc' to be used as the C compiler (unless it is
overridden in the site shell script). Here is another example:

     /bin/bash ./configure CONFIG_SHELL=/bin/bash

Here the `CONFIG_SHELL=/bin/bash' operand causes subsequent
configuration-related scripts to be executed by `/bin/bash'.

`configure' Invocation
======================

`configure' recognizes the following options to control how it operates.

`--help'
`-h'
     Print a summary of the options to `configure', and exit.

`--version'
`-V'
     Print the version of Autoconf used to generate the `configure'
     script, and exit.

`--cache-file=FILE'
     Enable the cache: use and save the results of the tests in FILE,
     traditionally `config.cache'. FILE defaults to `/dev/null' to
     disable caching.

`--config-cache'
`-C'
     Alias for `--cache-file=config.cache'.

`--quiet'
`--silent'
`-q'
     Do not print messages saying which checks are being made. To
     suppress all normal output, redirect it to `/dev/null' (any error
     messages will still be shown).

`--srcdir=DIR'
     Look for the package's source code in directory DIR. Usually
     `configure' can determine that directory automatically.

`configure' also accepts some other, not widely useful, options. Run
`configure --help' for more details.
@@ -1,24 +0,0 @@
SUBDIRS = common

EXTRA_DIST = poky-docbook-to-pdf.in

bin_SCRIPTS = poky-docbook-to-pdf

edit = sed \
      -e 's,@datadir\@,$(pkgdatadir),g' \
      -e 's,@prefix\@,$(prefix),g' \
      -e 's,@version\@,@VERSION@,g'

##
# These URIs should be rewritten by your distribution's XML catalog to
# match your locally installed XSL stylesheets.
XSL_BASE_URI="http://docbook.sourceforge.net/release/xsl/current"
XSL_TEMPLATE_URI = $(XSL_BASE_URI)/template/titlepage.xsl

poky-docbook-to-pdf: poky-docbook-to-pdf.in
	rm -f poky-docbook-to-pdf
	$(edit) poky-docbook-to-pdf.in > poky-docbook-to-pdf

clean-local:
	rm -fr poky-docbook-to-pdf
	rm -fr poky-pr-docbook-to-pdf