Mirror of https://git.yoctoproject.org/poky (synced 2026-01-30 21:38:43 +01:00)

Compare commits (29 commits)
afd0958d27
2b9dbe57a4
d538d8e171
d42f0b9153
f5003084d5
763b11d0fd
7d292468e7
f51bb81248
d3b8687ea6
93f7d74492
e2c567f51e
7646f333e1
b318d8f87b
0128976cb0
5a4342cb2e
6e4579b32b
bd11abb22c
e1798f5e39
b43a425119
01a5883616
ed145bbdfe
dc123ba831
887a7768cf
aba90fdf2f
27cff3f045
506386d148
f5d24f0574
d36f183ade
6129f9fc44
.gitignore (vendored): 37 changes
@@ -1,20 +1,35 @@
*.pyc
*.pyo
build*/conf/local.conf
build*/conf/bblayers.conf
build*/downloads
build*/tmp/
build*/sstate-cache
pyshtables.py
build/conf/local.conf
build/conf/bblayers.conf
build/tmp/
pstage/
scripts/oe-git-proxy-socks
scripts/poky-git-proxy-socks
sources/
meta-*
!meta-skeleton
!meta-demoapps
meta-darwin
meta-maemo
meta-prvt*
poky-autobuilder*
*.swp
*.orig
*.rej
*~

handbook/poky-doc-tools/Makefile
handbook/poky-doc-tools/Makefile.in
handbook/poky-doc-tools/aclocal.m4
handbook/poky-doc-tools/autom4te.cache/
handbook/poky-doc-tools/common/Makefile
handbook/poky-doc-tools/common/Makefile.in
handbook/poky-doc-tools/common/fop-config.xml
handbook/poky-doc-tools/config.log
handbook/poky-doc-tools/config.status
handbook/poky-doc-tools/configure
handbook/poky-doc-tools/install-sh
handbook/poky-doc-tools/missing
handbook/poky-doc-tools/poky-docbook-to-pdf
handbook/poky-handbook.html
handbook/poky-handbook.pdf
handbook/poky-handbook.tgz
handbook/bsp-guide.html
handbook/bsp-guide.pdf
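A note on the pattern semantics relied on above: in a .gitignore a trailing
"/" restricts a pattern to directories, "*" matches within a single path
component, and a leading "!" re-includes something a previous pattern
excluded. For example the fragment

    meta-*
    !meta-skeleton
    !meta-demoapps

ignores every meta-* layer directory except meta-skeleton and meta-demoapps;
the negated entries only take effect because they come after the broader
exclusion.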
README: 34 changes
@@ -1,29 +1,15 @@
Poky
====

Poky is an integration of various components to form a complete prepackaged
build system and development environment. It features support for building
customised embedded device style images. There are reference demo images
featuring an X11/Matchbox/GTK themed UI called Sato. The system supports
cross-architecture application development using QEMU emulation and a
standalone toolchain and SDK with IDE integration.
Poky platform builder is a combined cross build system and development
environment. It features support for building X11/Matchbox/GTK based
filesystem images for various embedded devices and boards. It also
supports cross-architecture application development using QEMU emulation
and a standalone toolchain and SDK with IDE integration.

Poky has an extensive handbook, the source of which is contained in
the handbook directory. For compiled HTML or PDF versions of this,
see the Poky website http://pokylinux.org.

Additional information on the specifics of hardware that Poky supports
is available in README.hardware. Further hardware support can easily be added
in the form of layers which extend the system's capabilities in a modular way.

As an integration layer Poky consists of several upstream projects such as
BitBake, OpenEmbedded-Core, Yocto documentation and various sources of information
e.g. for the hardware support. Poky is in turn a component of the Yocto Project.

The Yocto Project has extensive documentation about the system including a
reference manual which can be found at:
http://yoctoproject.org/community/documentation

OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "") and contains only emulated machine support.

For information about OpenEmbedded, see the OpenEmbedded website:
http://www.openembedded.org/

is available in README.hardware.
README.hardware: 623 changes
@@ -1,469 +1,436 @@
Poky Hardware README
====================
Poky Hardware Reference Guide
=============================

This file gives details about using Poky with different hardware reference
boards and consumer devices. A full list of target machines can be found by
looking in the meta/conf/machine/ directory. If in doubt about using Poky with
your hardware, consult the documentation for your board/device.
boards and consumer devices. A full list of target machines can be found by
looking in the meta/conf/machine/ directory. If in doubt about using Poky with
your hardware, consult the documentation for your board/device. To discuss
support for further hardware reference boards/devices please contact OpenedHand.

Support for additional devices is normally added by creating BSP layers - for
more information please see the Yocto Board Support Package (BSP) Developer's
Guide - documentation source is in documentation/bspguide or download the PDF
from:

http://yoctoproject.org/community/documentation

Support for machines other than QEMU may be moved out to separate BSP layers in
future versions.


QEMU Emulation Targets
======================
QEMU Emulation Images (qemuarm and qemux86)
===========================================

To simplify development Poky supports building images to work with the QEMU
emulator in system emulation mode. Several architectures are currently
supported:

* ARM (qemuarm)
* x86 (qemux86)
* x86-64 (qemux86-64)
* PowerPC (qemuppc)
* MIPS (qemumips)

Use of the QEMU images is covered in the Poky Reference Manual. The Poky
MACHINE setting corresponding to the target is given in brackets.

emulator in system emulation mode. Two architectures are currently supported,
ARM (via qemuarm) and x86 (via qemux86). Use of the QEMU images is covered
in the Poky Handbook.

Hardware Reference Boards
=========================

The following boards are supported by Poky's core layer:
The following boards are supported by Poky:

* Texas Instruments Beagleboard (beagleboard)
* Freescale MPC8315E-RDB (mpc8315e-rdb)
* Ubiquiti Networks RouterStation Pro (routerstationpro)
* Compulab CM-X270 (cm-x270)
* Compulab EM-X270 (em-x270)
* FreeScale iMX31ADS (mx31ads)
* Marvell PXA3xx Zylonite (zylonite)
* Logic iMX31 Lite Kit (mx31litekit)
* Phytec phyCORE-iMX31 (mx31phy)

For more information see the board's section below. The Poky MACHINE setting
For more information see board's section below. The Poky MACHINE setting
corresponding to the board is given in brackets.


Consumer Devices
================

The following consumer devices are supported by Poky's core layer:
The following consumer devices are supported by Poky:

* Intel Atom based PCs and devices (atom-pc)
* FIC Neo1973 GTA01 smartphone (fic-gta01)
* HTC Universal (htcuniversal)
* Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)
* Sharp Zaurus SL-C7x0 series (c7x0)
* Sharp Zaurus SL-C1000 (akita)
* Sharp Zaurus SL-C3x00 series (spitz)

For more information see the device's section below. The Poky MACHINE setting
corresponding to the device is given in brackets.
For more information see board's section below. The Poky MACHINE setting
corresponding to the board is given in brackets.

Poky Boot CD (bootcdx86)
========================

The Poky boot CD iso images are designed as a demonstration of the Poky
environment and to show the versatile image formats Poky can generate. It will
run on Pentium2 or greater PC style computers. The iso image can be
burnt to CD and then booted from.

Specific Hardware Documentation
===============================

Hardware Reference Boards
=========================

Intel Atom based PCs and devices (atom-pc)
==========================================
Compulab CM-X270 (cm-x270)
==========================

The atom-pc MACHINE is tested on the following platforms:
The bootloader on this board doesn't support writing jffs2 images directly to
NAND and normally uses a proprietary kernel flash driver. To allow the use of
jffs2 images, a two stage updating procedure is needed. Firstly, an initramfs
is booted which contains mtd utilities and this is then used to write the main
filesystem.

o Asus EeePC 901
o Acer Aspire One
o Toshiba NB305
o Intel Embedded Development Board 1-N450 (Black Sand)
It is assumed the board is connected to a network where a TFTP server is
available and that a serial terminal is available to communicate with the
bootloader (38400, 8N1). If a DHCP server is available the device will use it
to obtain an IP address. If not, run:

and is likely to work on many unlisted Atom based devices. The MACHINE type
supports ethernet, wifi, sound, and i915 graphics by default in addition to
common PC input devices, busses, and so on.
    ARMmon > setip dhcp off
    ARMmon > setip ip 192.168.1.203
    ARMmon > setip mask 255.255.255.0

Depending on the device, it can boot from a traditional hard-disk, a USB device,
or over the network. Writing poky generated images to physical media is
straightforward with a caveat for USB devices. The following examples assume the
target boot device is /dev/sdb, be sure to verify this and use the correct
device as the following commands are run as root and are not reversible.
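Before running dd it is worth confirming which node the USB key actually
received; a quick check on a typical Linux workstation (assuming the lsblk
and dmesg tools are available) is:

    # lsblk -o NAME,SIZE,MODEL
    # dmesg | tail

The key should appear under the expected name (e.g. sdb) with a plausible
size, and the kernel log will show the node assigned when it was inserted.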
To reflash the kernel:

USB Device:
1. Build a live image. This image type consists of a simple filesystem
   without a partition table, which is suitable for USB keys, and with the
   default setup for the atom-pc machine, this image type is built
   automatically for any image you build. For example:
    ARMmon > download kernel tftp zimage 192.168.1.202
    ARMmon > flash kernel

    $ bitbake core-image-minimal
where zimage is the name of the kernel on the TFTP server and its IP address is
192.168.1.202. The names of the files must be all lowercase.

2. Use the "dd" utility to write the image to the raw block device. For
   example:
To reflash the initrd/initramfs:

    # dd if=core-image-minimal-atom-pc.hddimg of=/dev/sdb
    ARMmon > download ramdisk tftp diskimage 192.168.1.202
    ARMmon > flash ramdisk

If the device fails to boot with "Boot error" displayed, or apparently
stops just after the SYSLINUX version banner, it is likely the BIOS cannot
understand the physical layout of the disk (or rather it expects a
particular layout and cannot handle anything else). There are two possible
solutions to this problem:
where diskimage is the name of the initramfs image (a cpio.gz file).

1. Change the BIOS USB Device setting to HDD mode. The label will vary by
   device, but the idea is to force BIOS to read the Cylinder/Head/Sector
   geometry from the device.
To boot the initramfs:

2. Without such an option, the BIOS generally boots the device in USB-ZIP
   mode. To write an image to a USB device that will be bootable in
   USB-ZIP mode, carry out the following actions:
    ARMmon > ramdisk on
    ARMmon > bootos "console=ttyS0,38400 rdinit=/sbin/init"

   a. Determine the geometry of your USB device using fdisk:
To reflash the main image login to the system as user "root", then run:

      # fdisk /dev/sdb
      Command (m for help): p
    # ifconfig eth0 192.168.1.203
    # tftp -g -r mainimage 192.168.1.202
    # flash_eraseall /dev/mtd1
    # nandwrite /dev/mtd1 mainimage

      Disk /dev/sdb: 4011 MB, 4011491328 bytes
      124 heads, 62 sectors/track, 1019 cylinders, total 7834944 sectors
      ...
which configures the network interface with the IP address 192.168.1.203,
downloads the "mainimage" file from the TFTP server at 192.168.1.202, erases
the flash and then writes the new image to the flash.

      Command (m for help): q
The main image can then be booted with:

   b. Configure the USB device for USB-ZIP mode:

      # mkdiskimage -4 /dev/sdb 1019 124 62
    ARMmon > bootos "console=ttyS0,38400 root=/dev/mtdblock1 rootfstype=jffs2"

      Where 1019, 124 and 62 are the cylinder, head and sectors/track counts
      as reported by fdisk (substitute the values reported for your device).
      When the operation has finished and the access LED (if any) on the
      device stops flashing, remove and reinsert the device to allow the
      kernel to detect the new partition layout.
Note that the initramfs image is built by poky in a slightly different mode to
normal since it uses uclibc. To generate this use a command like:

   c. Copy the contents of the poky image to the USB-ZIP mode device:
    IMAGE_FSTYPES=cpio.gz MACHINE=cm-x270 POKYLIBC=uclibc bitbake poky-image-minimal-mtdutils

      # mkdir /tmp/image
      # mkdir /tmp/usbkey
      # mount -o loop core-image-minimal-atom-pc.hddimg /tmp/image
      # mount /dev/sdb4 /tmp/usbkey
      # cp -rf /tmp/image/* /tmp/usbkey

   d. Install the syslinux boot loader:
Compulab EM-X270 (em-x270)
==========================

      # syslinux /dev/sdb4
Fetch the "Linux - kernel and run-time image (Angstrom)" ZIP file from the
Compulab website. Inside the images directory of this ZIP file is another ZIP
file called 'LiveDisk.zip'. Extract this over a cleanly formatted vfat USB flash
drive. Replace the 'em_x270.img' file with the 'updater-em-x270.ext2' file.

   e. Unmount everything:
Insert this USB disk into the supplied adapter and connect this to the
board. Whilst holding down the suspend button press the reset button. The
board will now boot off the USB key and into a version of Angstrom. On the
desktop is an icon labelled "Updater". Run this program to launch the updater
that will flash the Poky kernel and rootfs to the board.

      # umount /tmp/image
      # umount /tmp/usbkey

Install the boot device in the target board and configure the BIOS to boot
from it.
FreeScale iMX31ADS (mx31ads)
============================

For more details on the USB-ZIP scenario, see the syslinux documentation:
http://git.kernel.org/?p=boot/syslinux/syslinux.git;a=blob_plain;f=doc/usbkey.txt;hb=HEAD
The correct serial port is the top-most female connector to the right of the
ethernet socket.

For uploading data to RedBoot we are going to use tftp. In this example we
assume that the TFTP server is on 192.168.9.1 and the board is on 192.168.9.2.

Texas Instruments Beagleboard (beagleboard)
===========================================
To set the IP address, run:

The Beagleboard is an ARM Cortex-A8 development board with USB, DVI-D, S-Video,
2D/3D accelerated graphics, audio, serial, JTAG, and SD/MMC. The xM adds a
faster CPU, more RAM, an ethernet port, more USB ports, microSD, and removes
the NAND flash. The beagleboard MACHINE is tested on the following platforms:
    ip_address -l 192.168.9.2/24 -h 192.168.9.1

o Beagleboard C4
o Beagleboard xM rev A & B
To download a kernel called "zimage" from the TFTP server, run:

The Beagleboard C4 has NAND, while the xM does not. For the sake of simplicity,
these instructions assume you have erased the NAND on the C4 so its boot
behavior matches that of the xM. To do this, issue the following commands from
the u-boot prompt (note that the unlock may be unnecessary depending on the
version of u-boot installed on your board and only one of the erase commands
will succeed):
    load -r -b 0x100000 zimage

    # nand unlock
    # nand erase
    # nand erase.chip
To write the kernel to flash run:

To further tailor these instructions for your board, please refer to the
documentation at http://www.beagleboard.org.
    fis create kernel

From a Linux system with access to the image files perform the following steps
as root, replacing mmcblk0* with the SD card device on your machine (such as sdc
if used via a usb card reader):
To download a rootfs jffs2 image "rootfs" from the TFTP server, run:

1. Partition and format an SD card:
    # fdisk -lu /dev/mmcblk0

      Disk /dev/mmcblk0: 3951 MB, 3951034368 bytes
      255 heads, 63 sectors/track, 480 cylinders, total 7716864 sectors
      Units = sectors of 1 * 512 = 512 bytes

      Device Boot        Start      End   Blocks  Id  System
      /dev/mmcblk0p1 *      63   144584    72261   c  Win95 FAT32 (LBA)
      /dev/mmcblk0p2    144585   465884   160650  83  Linux
    load -r -b 0x100000 rootfs

    # mkfs.vfat -F 16 -n "boot" /dev/mmcblk0p1
    # mke2fs -j -L "root" /dev/mmcblk0p2
To write the root filesystem to flash run:

The following assumes the SD card partition 1 and 2 are mounted at
/media/boot and /media/root respectively. Removing the card and reinserting
it will do just that on most modern Linux desktop environments.

The files referenced below are made available after the build in
build/tmp/deploy/images.
    fis create root

2. Install the boot loaders
    # cp MLO-beagleboard /media/boot/MLO
    # cp u-boot-beagleboard.bin /media/boot/u-boot.bin
To load and boot a kernel and rootfs from flash:

3. Install the root filesystem
    # tar x -C /media/root -f core-image-$IMAGE_TYPE-beagleboard.tar.bz2
    # tar x -C /media/root -f modules-$KERNEL_VERSION-beagleboard.tgz
    fis load kernel
    exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/mtdblock2 rootfstype=jffs2 init=linuxrc ip=none"

4. Install the kernel uImage
    # cp uImage-beagleboard.bin /media/boot/uImage
To load and boot a kernel from a TFTP server with the rootfs over NFS:

5. Prepare a u-boot script to simplify the boot process
   The Beagleboard can be made to boot at this point from the u-boot command
   shell. To automate this process, generate a user.scr script as follows.
    load -r -b 0x100000 zimage
    exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/nfs nfsroot=192.168.9.1:/mnt/nfsmx31 rw ip=192.168.9.2::192.168.9.1:255.255.255.0"

   Install uboot-mkimage (from uboot-mkimage on Ubuntu or uboot-tools on Fedora).
The instructions above are for using the (default) NOR flash on the board,
there is also 128M of NAND flash. It is possible to install Poky to the NAND
flash which gives more space for the rootfs and instructions for using this are
given below. To switch to the NAND flash:

   Prepare a script config:
    factive NAND

    # (cat << EOF
    setenv bootcmd 'mmc init; fatload mmc 0:1 0x80300000 uImage; bootm 0x80300000'
    setenv bootargs 'console=tty0 console=ttyO2,115200n8 root=/dev/mmcblk0p2 rootwait rootfstype=ext3 ro'
    boot
    EOF
    ) > serial-boot.cmd
    # mkimage -A arm -O linux -T script -C none -a 0 -e 0 -n "Core Minimal" -d ./serial-boot.cmd ./boot.scr
    # cp boot.scr /media/boot
This will then restart RedBoot using the NAND rather than the NOR. If you
have not used the NAND before then it is unlikely that there will be a
partition table yet. You can get the list of partitions with 'fis list'.

6. Unmount the SD partitions, insert the SD card into the Beagleboard, and
   boot the Beagleboard
If this shows no partitions then you can create them with:

Note: As of the 2.6.37 linux-yocto kernel recipe, the Beagleboard uses the
OMAP_SERIAL device (ttyO2). If you are using an older kernel, such as the
2.6.34 linux-yocto-stable, be sure to replace ttyO2 with ttyS2 above. You
should also override the machine SERIAL_CONSOLE in your local.conf in
order to setup the getty on the serial line:
    fis init

    SERIAL_CONSOLE_beagleboard = "115200 ttyS2"
The output of 'fis list' should now show:

    Name            FLASH addr  Mem addr    Length      Entry point
    RedBoot         0xE0000000  0xE0000000  0x00040000  0x00000000
    FIS directory   0xE7FF4000  0xE7FF4000  0x00003000  0x00000000
    RedBoot config  0xE7FF7000  0xE7FF7000  0x00001000  0x00000000

Freescale MPC8315E-RDB (mpc8315e-rdb)
=====================================
Partitions for the kernel and rootfs need to be created:

The MPC8315 PowerPC reference platform (MPC8315E-RDB) is aimed at hardware and
software development of network attached storage (NAS) and digital media server
applications. The MPC8315E-RDB features the PowerQUICC II Pro processor, which
includes a built-in security accelerator.
    fis create -l 0x1A0000 -e 0x00100000 kernel
    fis create -l 0x5000000 -e 0x00100000 root

(Note: you may find it easier to order MPC8315E-RDBA; this appears to be the
same board in an enclosure with accessories. In any case it is fully
compatible with the instructions given here.)
You may now use the instructions above for flashing. However it is important
to note that the erase block size for the NAND is different to the NOR so the
JFFS erase size will need to be changed to 0x4000. Standard images are built
for NOR and you will need to build custom images for NAND.

Setup instructions
------------------
You will also need to update the kernel command line to use the correct root
filesystem. This should be '/dev/mtdblock7' if you adhere to the partitioning
scheme shown above. If this fails then you can double-check against the output
from the kernel when it evaluates the available mtd partitions.
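A quick way to double-check the indices on the running system (assuming
procfs is mounted, as it normally is) is:

    # cat /proc/mtd

Each mtdN entry corresponds to /dev/mtdblockN, so the root= argument should
name the mtdblock device whose entry matches the rootfs partition created
above.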

You will need the following:
* NFS root setup on your workstation
* TFTP server installed on your workstation
* Straight-thru 9-conductor serial cable (DB9, M/F) connected from your
  PC to UART1
* Ethernet connected to the first ethernet port on the board

--- Preparation ---
Marvell PXA3xx Zylonite (zylonite)
==================================

Note: if you have altered your board's ethernet MAC address(es) from the
defaults, or you need to do so because you want multiple boards on the same
network, then you will need to change the values in the dts file (patch
linux/arch/powerpc/boot/dts/mpc8315erdb.dts within the kernel source). If
you have left them at the factory default then you shouldn't need to do
anything here.
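For illustration, the MAC address in such a dts normally lives in the
ethernet node's local-mac-address property; a hypothetical fragment (node
name and address bytes are placeholders, not copied from the real
mpc8315erdb.dts) looks like:

    ethernet@24000 {
        local-mac-address = [ 00 11 22 33 44 55 ];
    };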
These instructions assume the Zylonite is connected to a machine running a TFTP
server at address 192.168.123.5 and that a serial link (38400 8N1) is available
to access the blob bootloader. The kernel is on the TFTP server as
"zylonite-kernel" and the root filesystem jffs2 file is "zylonite-rootfs" and
the images are to be saved in NAND flash.

--- Booting from NFS root ---
The following commands setup blob:

Load the kernel and dtb (device tree blob), and boot the system as follows:
    blob> setip client 192.168.123.4
    blob> setip server 192.168.123.5

1. Get the kernel (uImage-mpc8315e-rdb.bin) and dtb (uImage-mpc8315e-rdb.dtb)
   files from the Poky build tmp/deploy directory, and make them available on
   your TFTP server.
To flash the kernel:

2. Connect the board's first serial port to your workstation and then start up
   your favourite serial terminal so that you will be able to interact with
   the serial console. If you don't have a favourite, picocom is suggested:
    blob> tftp zylonite-kernel
    blob> nandwrite -j 0x80800000 0x60000 0x200000

    $ picocom /dev/ttyUSB0 -b 115200
To flash the rootfs:

3. Power up or reset the board and press a key on the terminal when prompted
   to get to the U-Boot command line
    blob> tftp zylonite-rootfs
    blob> nanderase -j 0x260000 0x5000000
    blob> nandwrite -j 0x80800000 0x260000 <length>

4. Set up the environment in U-Boot:
(where <length> is the rootfs size which will be printed by the tftp step)

    => setenv ipaddr <board ip>
    => setenv serverip <tftp server ip>
    => setenv bootargs root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:255.255.255.0:mpc8315e:eth0:off console=ttyS0,115200
To boot the board:

5. Download the kernel and dtb, and boot:
    blob> nkernel
    blob> boot

    => tftp 800000 uImage-mpc8315e-rdb.bin
    => tftp 780000 uImage-mpc8315e-rdb.dtb
    => bootm 800000 - 780000

Logic iMX31 Lite Kit (mx31litekit)
==================================

Ubiquiti Networks RouterStation Pro (routerstationpro)
======================================================
The easiest method to boot this board is to take an MMC/SD card and format
the first partition as ext2, then extract the poky image onto this as root.
Assuming the board is network connected, a TFTP server is available at
192.168.1.33 and a serial terminal is available (115200 8N1), the following
commands will boot a kernel called "mx31kern" from the TFTP server:

The RouterStation Pro is an Atheros AR7161 MIPS-based board. Geared towards
networking applications, it has all of the usual features as well as three
type IIIA mini-PCI slots and an on-board 3-port 10/100/1000 Ethernet switch,
in addition to the 10/100/1000 Ethernet WAN port which supports
Power-over-Ethernet.
    losh> ifconfig sm0 192.168.1.203 255.255.255.0 192.168.1.33
    losh> load raw 0x80100000 0x200000 /tftp/192.168.1.33:mx31kern
    losh> exec 0x80100000 -

Setup instructions
------------------

You will need the following:
* A serial cable - female to female (or female to male + gender changer)
  NOTE: cable must be straight through, *not* a null modem cable.
* USB flash drive or hard disk that is able to be powered from the
  board's USB port.
* tftp server installed on your workstation
Phytec phyCORE-iMX31 (mx31phy)
==============================

NOTE: in the following instructions it is assumed that /dev/sdb corresponds
to the USB disk when it is plugged into your workstation. If this is not the
case in your setup then please be careful to substitute the correct device
name in all commands where appropriate.
Support for this board is currently being developed. Experimental jffs2
images and a suitable kernel are available and are known to work with the
board.

--- Preparation ---

1) Build an image (e.g. core-image-minimal) using "routerstationpro" as the
   MACHINE
Consumer Devices
================

2) Partition the USB drive so that primary partition 1 is type Linux (83).
   Minimum size depends on your root image size - core-image-minimal probably
   only needs 8-16MB, other images will need more.
FIC Neo1973 GTA01 smartphone (fic-gta01)
========================================

    # fdisk /dev/sdb
    Command (m for help): p
To install Poky on a GTA01 smartphone you will need the "dfu-util" tool,
which you can build with the "bitbake dfu-util-native" command.

    Disk /dev/sdb: 4011 MB, 4011491328 bytes
    124 heads, 62 sectors/track, 1019 cylinders, total 7834944 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0009e87d
Flashing requires these steps:

    Device Boot    Start       End   Blocks  Id  System
    /dev/sdb1         62   1952751   976345  83  Linux
1. Power down the device.
2. Connect the device to the host machine via USB.
3. Hold AUX key and press Power key. There should be a bootmenu
   on screen.
4. Run "dfu-util -l" to check if the phone is visible on the USB bus.
   The output should look like this:

3) Format partition 1 on the USB as ext3
    dfu-util - (C) 2007 by OpenMoko Inc.
    This program is Free Software and has ABSOLUTELY NO WARRANTY

    # mke2fs -j /dev/sdb1
    Found Runtime: [0x1457:0x5119] devnum=19, cfg=0, intf=2, alt=0, name="USB Device Firmware Upgrade"

4) Mount partition 1 and then extract the contents of
   tmp/deploy/images/core-image-XXXX.tar.bz2 into it (preserving permissions).
5. Flash the kernel with "dfu-util -a kernel -D uImage-2.6.21.6-moko11-r2-fic-gta01.bin"
6. Flash rootfs with "dfu-util -a rootfs -D <image>", where <image> is the
   jffs2 image file to use as the root filesystem
   (e.g. ./tmp/deploy/images/poky-image-sato-fic-gta01.jffs2)

    # mount /dev/sdb1 /media/sdb1
    # cd /media/sdb1
    # tar -xvjpf tmp/deploy/images/core-image-XXXX.tar.bz2

5) Unmount the USB drive and then plug it into the board's USB port
HTC Universal (htcuniversal)
============================

6) Connect the board's serial port to your workstation and then start up
   your favourite serial terminal so that you will be able to interact with
   the serial console. If you don't have a favourite, picocom is suggested:
Note: HTC Universal support is highly experimental.

    $ picocom /dev/ttyUSB0 -b 115200
On the HTC Universal, entirely replacing the Windows installation is not
supported, instead Poky is booted from an MMC/SD card from Windows. Once Poky
has booted, Windows is no longer in memory or active but when power is removed,
the user will be returned to Windows and will need to return to Linux from
there.

7) Connect the network into eth0 (the one that is NOT the 3 port switch). If
   you are using power-over-ethernet then the board will power up at this point.
Once an MMC/SD card is available it is suggested it is split into two partitions,
one for a program called HaRET which lets you boot Linux from within Windows
and the second for the rootfs. The HaRET partition should be the first partition
on the card and be vfat formatted. It doesn't need to be large, just enough for
HaRET and a kernel (say 5MB max). The rootfs should be ext2 and is usually the
second partition. The first partition should be vfat so Windows recognises it;
if it doesn't, it has been known to reformat cards.
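A minimal sketch of creating that layout from a workstation, assuming the
card appears as /dev/sdb (verify first; these commands destroy existing
data):

    # fdisk /dev/sdb       (partition 1: small, type c "W95 FAT32 (LBA)";
                            partition 2: rest of the card, type 83 "Linux")
    # mkfs.vfat /dev/sdb1
    # mke2fs /dev/sdb2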

8) Start up the board, watch the serial console. Hit Ctrl+C to abort the
   autostart if the board is configured that way (it is by default). The
   bootloader's fconfig command can be used to disable autostart and configure
   the IP settings if you need to change them (default IP is 192.168.1.20).
On the first partition you need three files:

9) Make the kernel (tmp/deploy/images/vmlinux-routerstationpro.bin) available
   on the tftp server.
* a HaRET binary (version 0.5.1 works well and a working version
  should be part of the last Poky release)
* a kernel renamed to "zImage"
* a default.txt which contains:

10) If you are going to write the kernel to flash (optional - see "Booting a
    kernel directly" below for the alternative), remove the current kernel and
    rootfs flash partitions. You can list the partitions using the following
    bootloader command:
    set kernel "zImage"
    set mtype "855"
    set cmdline "root=/dev/mmcblk0p2 rw console=ttyS0,115200n8 console=tty0 rootdelay=5 fbcon=rotate:1"
    boot2

    RedBoot> fis list
On the second partition the root file system is extracted as root. A different
partition layout or other kernel options can be changed in the default.txt file.

You can delete the existing kernel and rootfs with these commands:
When inserted into the device, Windows should see the card and let you browse
its contents using File Explorer. Running the HaRET binary will present a dialog
box (maybe after messages warning about running unsigned binaries) where you
select OK and you should then see Poky boot. Kernel messages can be seen by
adding psplash=false to the kernel commandline.

    RedBoot> fis delete kernel
    RedBoot> fis delete rootfs

--- Booting a kernel directly ---
Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)
============================================================

1) Load the kernel using the following bootloader command:
Note: Nokia tablet support is highly experimental.

    RedBoot> load -m tftp -h <ip of tftp server> vmlinux-routerstationpro.bin
The Nokia internet tablet devices are OMAP based tablet form factor devices
with large screens (800x480), wifi and touchscreen.

You should see a message on it being successfully loaded.
To flash images to these devices you need the "flasher" utility which can be
downloaded from http://tablets-dev.nokia.com/d3.php?f=flasher-3.0. This
utility needs to be run as root and the usb filesystem needs to be mounted
although most distributions will have done this for you. Once you have this
follow these steps:

2) Execute the kernel:
1. Power down the device.
2. Connect the device to the host machine via USB
   (connecting power to the device doesn't hurt either).
3. Run "flasher -i"
4. Power on the device.
5. The program should give an indication it's found
   a tablet device. If not, recheck the cables, make sure you're
   root and usbfs/usbdevfs is mounted.
6. Run "flasher -r <image> -k <kernel> -f", where <image> is the
   jffs2 image file to use as the root filesystem
   (e.g. ./tmp/deploy/images/poky-image-sato-nokia800.jffs2)
   and <kernel> is the kernel to use
   (e.g. ./tmp/deploy/images/zImage-nokia800.bin).
7. Run "flasher -R" to reboot the device.
8. The device should boot into Poky.

    RedBoot> exec -c "console=ttyS0,115200 root=/dev/sda1 rw rootdelay=2 board=UBNT-RSPRO"
The nokia800 images and kernel will run on both the N800 and N810.

Note that specifying the command line with -c is important as linux-yocto does
not provide a default command line.

--- Writing a kernel to flash ---
Sharp Zaurus SL-C7x0 series (c7x0)
==================================

1) Go to your tftp server and gzip the kernel you want in flash. It should
   halve the size.
The Sharp Zaurus c7x0 series (SL-C700, SL-C750, SL-C760, SL-C860, SL-7500)
are PXA25x based handheld PDAs with VGA screens. To install Poky images on
these devices follow these steps:

2) Load the kernel using the following bootloader command:
1. Obtain an SD/MMC or CF card with a vfat or ext2 filesystem.
2. Copy a jffs2 image file (e.g. poky-image-sato-c7x0.jffs2) onto the
   card as "initrd.bin":

    RedBoot> load -r -b 0x80600000 -m tftp -h <ip of tftp server> vmlinux-routerstationpro.bin.gz
    $ cp ./tmp/deploy/images/poky-image-sato-c7x0.jffs2 /path/to/my-cf-card/initrd.bin

This should output something similar to the following:
3. Copy a Linux kernel file (zImage-c7x0.bin) onto the card as
   "zImage.bin":

    Raw file loaded 0x80600000-0x8087c537, assumed entry at 0x80600000
    $ cp ./tmp/deploy/images/zImage-c7x0.bin /path/to/my-cf-card/zImage.bin

Calculate the length by subtracting the first number from the second number
and then rounding the result up to the nearest 0x1000.
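For the example output above that gives 0x8087c537 - 0x80600000 = 0x27c537,
which rounds up to 0x27d000. The same calculation as a bash arithmetic sketch
(substitute your own load addresses):

    $ printf '0x%x\n' $(( (0x8087c537 - 0x80600000 + 0xfff) & ~0xfff ))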
4. Copy an updater script (updater.sh.c7x0) onto the card
   as "updater.sh":

3) Using the length calculated above, create a flash partition for the kernel:
    $ cp ./tmp/deploy/images/updater.sh.c7x0 /path/to/my-cf-card/updater.sh

    RedBoot> fis create -b 0x80600000 -l 0x240000 kernel
5. Power down the Zaurus.
6. Hold "OK" key and power on the device. An update menu should appear
   (in Japanese).
7. Choose "Update" (item 4).
8. The next screen will ask for the source, choose the appropriate
   card (CF or SD).
9. Make sure AC power is connected.
10. The next screen asks for confirmation, choose "Yes" (the left button).
11. The update process will start, flash the files on the card onto
    the device and the device will then reboot into Poky.

(change 0x240000 to your rounded length -- change "kernel" to whatever
you want to name your kernel)

--- Booting a kernel from flash ---
Sharp Zaurus SL-C1000 (akita)
=============================

To boot the flashed kernel perform the following steps.
The Sharp Zaurus SL-C1000 is a PXA270 based device otherwise similar to the
c7x0. To install Poky images on this device follow the instructions for
the c7x0 but replace "c7x0" with "akita" where appropriate.

1) At the bootloader prompt, load the kernel:

    RedBoot> fis load -d -e kernel
Sharp Zaurus SL-C3x00 series (spitz)
====================================

(Change the name "kernel" above if you chose something different earlier)
The Sharp Zaurus SL-C3x00 devices are PXA270 based devices similar
to akita but with an internal microdrive. The installation procedure
assumes a standard microdrive based device where the root (first)
partition has been enlarged to fit the image (at least 100MB,
400MB for the SDK).

(-e means 'elf', -d 'decompress')
The procedure is the same as for the c7x0 and akita models with the
following differences:

2) Execute the kernel using the exec command as above.
1. Instead of a jffs2 image you need to copy a compressed tarball of the
   root filesystem (e.g. poky-image-sato-spitz.tar.gz) onto the
   card as "hdimage1.tgz":

--- Automating the boot process ---
    $ cp ./tmp/deploy/images/poky-image-sato-spitz.tar.gz /path/to/my-cf-card/hdimage1.tgz

2. You additionally need to copy a special tar utility (gnu-tar) onto
   the card as "gnu-tar":

    $ cp ./tmp/deploy/images/gnu-tar /path/to/my-cf-card/gnu-tar

After writing the kernel to flash and testing the load and exec commands
manually, you can automate the boot process with a boot script.

1) RedBoot> fconfig
   (Answer the questions not specified here as they pertain to your environment)
2) Run script at boot: true
   Boot script:
   .. fis load -d -e kernel
   .. exec
   Enter script, terminate with empty line
   >> fis load -d -e kernel
   >> exec -c "console=ttyS0,115200 root=/dev/sda1 rw rootdelay=2 board=UBNT-RSPRO"
   >>
3) Answer the remaining questions and write the changes to flash:
   Update RedBoot non-volatile configuration - continue (y/n)? y
   ... Erase from 0xbfff0000-0xc0000000: .
   ... Program from 0x87ff0000-0x88000000 at 0xbfff0000: .
4) Power cycle the board.

@@ -1,7 +1,7 @@
Tim Ansell <mithro@mithis.net>
Phil Blundell <pb@handhelds.org>
Seb Frankengul <seb@frankengul.org>
Holger Freyther <holger@moiji-mobile.com>
Holger Freyther <zecke@handhelds.org>
Marcin Juszkiewicz <marcin@juszkiewicz.com.pl>
Chris Larson <kergoth@handhelds.org>
Ulrich Luckas <luckas@musoft.de>

@@ -138,7 +138,7 @@ Changes in Bitbake 1.9.x:
      directory != the cache dir.
    - Add md5 and sha256 checksum generation functions to utils.py
    - Correctly handle '-' characters in class names (#2958)
    - Make sure expandKeys has been called on the data dictionary before running tasks
    - Make sure expandKeys has been called on the data dictonary before running tasks
    - Correctly add a task override in the form task-TASKNAME.
    - Revert the '-' character fix in class names since it breaks things
    - When a regexp fails to compile for PACKAGES_DYNAMIC, print a more useful error (#4444)

@@ -22,243 +22,176 @@
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
|
||||
import os
|
||||
import sys, logging
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),
|
||||
'lib'))
|
||||
|
||||
import optparse
|
||||
import warnings
|
||||
from traceback import format_exception
|
||||
try:
|
||||
import bb
|
||||
except RuntimeError as exc:
|
||||
sys.exit(str(exc))
|
||||
from bb import event
|
||||
import bb.msg
|
||||
import sys, os, getopt, re, time, optparse, xmlrpclib
|
||||
sys.path.insert(0,os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
|
||||
import bb
|
||||
from bb import cooker
|
||||
from bb import ui
|
||||
from bb import server
|
||||
from bb.server import none
|
||||
#from bb.server import xmlrpc
|
||||
|
||||
__version__ = "1.15.1"
|
||||
logger = logging.getLogger("BitBake")
|
||||
__version__ = "1.9.0"
|
||||
|
||||
if sys.hexversion < 0x020500F0:
|
||||
print "Sorry, python 2.5 or later is required for this version of bitbake"
|
||||
sys.exit(1)
|
||||
|
||||
class BBConfiguration(object):
|
||||
#============================================================================#
|
||||
# BBOptions
|
||||
#============================================================================#
|
||||
class BBConfiguration( object ):
|
||||
"""
|
||||
Manages build options and configurations for one run
|
||||
"""
|
||||
|
||||
def __init__(self, options):
|
||||
def __init__( self, options ):
|
||||
for key, val in options.__dict__.items():
|
||||
setattr(self, key, val)
|
||||
self.pkgs_to_build = []
|
||||
setattr( self, key, val )
|
||||
|
||||
|
||||
def get_ui(config):
|
||||
if config.ui:
|
||||
interface = config.ui
|
||||
else:
|
||||
interface = 'knotty'
|
||||
def print_exception(exc, value, tb):
|
||||
"""
|
||||
Print the exception to stderr, only showing the traceback if bitbake
|
||||
debugging is enabled.
|
||||
"""
|
||||
if not bb.msg.debug_level['default']:
|
||||
tb = None
|
||||
|
||||
try:
|
||||
# Dynamically load the UI based on the ui name. Although we
|
||||
# suggest a fixed set this allows you to have flexibility in which
|
||||
# ones are available.
|
||||
module = __import__("bb.ui", fromlist = [interface])
|
||||
return getattr(module, interface).main
|
||||
except AttributeError:
|
||||
sys.exit("FATAL: Invalid user interface '%s' specified.\n"
|
||||
"Valid interfaces: depexp, goggle, ncurses, hob, knotty [default], knotty2." % interface)
|
||||
sys.__excepthook__(exc, value, tb)
|
||||
|
||||
|
||||
# Display bitbake/OE warnings via the BitBake.Warnings logger, ignoring others"""
|
||||
warnlog = logging.getLogger("BitBake.Warnings")
|
||||
_warnings_showwarning = warnings.showwarning
|
||||
def _showwarning(message, category, filename, lineno, file=None, line=None):
|
||||
if file is not None:
|
||||
if _warnings_showwarning is not None:
|
||||
_warnings_showwarning(message, category, filename, lineno, file, line)
|
||||
else:
|
||||
s = warnings.formatwarning(message, category, filename, lineno)
|
||||
warnlog.warn(s)
|
||||
|
||||
warnings.showwarning = _showwarning
|
||||
warnings.filterwarnings("ignore")
|
||||
warnings.filterwarnings("default", module="(<string>$|(oe|bb)\.)")
|
||||
warnings.filterwarnings("ignore", category=PendingDeprecationWarning)
|
||||
warnings.filterwarnings("ignore", category=ImportWarning)
|
||||
warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
|
||||
warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")
|
||||
|
||||
#============================================================================#
|
||||
# main
|
||||
#============================================================================#
|
||||
|
||||
def main():
|
||||
parser = optparse.OptionParser(
|
||||
version = "BitBake Build Tool Core version %s, %%prog version %s" % (bb.__version__, __version__),
|
||||
usage = """%prog [options] [package ...]
|
||||
return_value = 0
|
||||
pythonver = sys.version_info
|
||||
if pythonver[0] < 2 or (pythonver[0] == 2 and pythonver[1] < 5):
|
||||
print "Sorry, bitbake needs python 2.5 or later."
|
||||
sys.exit(1)
|
||||
|
||||
parser = optparse.OptionParser( version = "BitBake Build Tool Core version %s, %%prog version %s" % ( bb.__version__, __version__ ),
|
||||
usage = """%prog [options] [package ...]
|
||||
|
||||
Executes the specified task (default is 'build') for a given set of BitBake files.
|
||||
It expects that BBFILES is defined, which is a space separated list of files to
|
||||
be executed. BBFILES does support wildcards.
|
||||
Default BBFILES are the .bb files in the current directory.""")
|
||||
Default BBFILES are the .bb files in the current directory.""" )
|
||||
|
||||
parser.add_option("-b", "--buildfile", help = "execute the task against this .bb file, rather than a package from BBFILES. Does not handle any dependencies.",
|
||||
action = "store", dest = "buildfile", default = None)
|
||||
parser.add_option( "-b", "--buildfile", help = "execute the task against this .bb file, rather than a package from BBFILES.",
|
||||
action = "store", dest = "buildfile", default = None )
|
||||
|
||||
parser.add_option("-k", "--continue", help = "continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other dependencies of these targets can be processed all the same.",
|
||||
action = "store_false", dest = "abort", default = True)
|
||||
parser.add_option( "-k", "--continue", help = "continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other dependencies of these targets can be processed all the same.",
|
||||
action = "store_false", dest = "abort", default = True )
|
||||
|
||||
parser.add_option("-a", "--tryaltconfigs", help = "continue with builds by trying to use alternative providers where possible.",
|
||||
action = "store_true", dest = "tryaltconfigs", default = False)
|
||||
parser.add_option( "-a", "--tryaltconfigs", help = "continue with builds by trying to use alternative providers where possible.",
|
||||
action = "store_true", dest = "tryaltconfigs", default = False )
|
||||
|
||||
parser.add_option("-f", "--force", help = "force run of specified cmd, regardless of stamp status",
|
||||
action = "store_true", dest = "force", default = False)
|
||||
parser.add_option( "-f", "--force", help = "force run of specified cmd, regardless of stamp status",
|
||||
action = "store_true", dest = "force", default = False )
|
||||
|
||||
parser.add_option("-c", "--cmd", help = "Specify task to execute. Note that this only executes the specified task for the providee and the packages it depends on, i.e. 'compile' does not implicitly call stage for the dependencies (IOW: use only if you know what you are doing). Depending on the base.bbclass a listtasks tasks is defined and will show available tasks",
|
||||
action = "store", dest = "cmd")
|
||||
parser.add_option( "-i", "--interactive", help = "drop into the interactive mode also called the BitBake shell.",
|
||||
action = "store_true", dest = "interactive", default = False )
|
||||
|
||||
parser.add_option("-r", "--read", help = "read the specified file before bitbake.conf",
|
||||
action = "append", dest = "prefile", default = [])
|
||||
parser.add_option( "-c", "--cmd", help = "Specify task to execute. Note that this only executes the specified task for the providee and the packages it depends on, i.e. 'compile' does not implicitly call stage for the dependencies (IOW: use only if you know what you are doing). Depending on the base.bbclass a listtasks tasks is defined and will show available tasks",
|
||||
action = "store", dest = "cmd" )
|
||||
|
||||
parser.add_option("-R", "--postread", help = "read the specified file after bitbake.conf",
|
||||
action = "append", dest = "postfile", default = [])
|
||||
parser.add_option( "-r", "--read", help = "read the specified file before bitbake.conf",
|
||||
action = "append", dest = "file", default = [] )
|
||||
|
||||
parser.add_option("-v", "--verbose", help = "output more chit-chat to the terminal",
|
||||
action = "store_true", dest = "verbose", default = False)
|
||||
parser.add_option( "-v", "--verbose", help = "output more chit-chat to the terminal",
|
||||
action = "store_true", dest = "verbose", default = False )
|
||||
|
||||
parser.add_option("-D", "--debug", help = "Increase the debug level. You can specify this more than once.",
|
||||
parser.add_option( "-D", "--debug", help = "Increase the debug level. You can specify this more than once.",
|
||||
action = "count", dest="debug", default = 0)
|
||||
|
||||
parser.add_option("-n", "--dry-run", help = "don't execute, just go through the motions",
|
||||
action = "store_true", dest = "dry_run", default = False)
|
||||
parser.add_option( "-n", "--dry-run", help = "don't execute, just go through the motions",
|
||||
action = "store_true", dest = "dry_run", default = False )
|
||||
|
||||
parser.add_option("-S", "--dump-signatures", help = "don't execute, just dump out the signature construction information",
|
||||
action = "store_true", dest = "dump_signatures", default = False)
|
||||
parser.add_option( "-p", "--parse-only", help = "quit after parsing the BB files (developers only)",
|
||||
action = "store_true", dest = "parse_only", default = False )
|
||||
|
||||
parser.add_option("-p", "--parse-only", help = "quit after parsing the BB files (developers only)",
|
||||
action = "store_true", dest = "parse_only", default = False)
|
||||
parser.add_option( "-d", "--disable-psyco", help = "disable using the psyco just-in-time compiler (not recommended)",
|
||||
action = "store_true", dest = "disable_psyco", default = False )
|
||||
|
||||
parser.add_option("-s", "--show-versions", help = "show current and preferred versions of all packages",
|
||||
action = "store_true", dest = "show_versions", default = False)
|
||||
parser.add_option( "-s", "--show-versions", help = "show current and preferred versions of all packages",
|
||||
action = "store_true", dest = "show_versions", default = False )
|
||||
|
||||
parser.add_option("-e", "--environment", help = "show the global or per-package environment (this is what used to be bbread)",
|
||||
action = "store_true", dest = "show_environment", default = False)
|
||||
parser.add_option( "-e", "--environment", help = "show the global or per-package environment (this is what used to be bbread)",
|
||||
action = "store_true", dest = "show_environment", default = False )
|
||||
|
||||
parser.add_option("-g", "--graphviz", help = "emit the dependency trees of the specified packages in the dot syntax",
|
||||
action = "store_true", dest = "dot_graph", default = False)
|
||||
parser.add_option( "-g", "--graphviz", help = "emit the dependency trees of the specified packages in the dot syntax",
|
||||
action = "store_true", dest = "dot_graph", default = False )
|
||||
|
||||
parser.add_option("-I", "--ignore-deps", help = """Assume these dependencies don't exist and are already provided (equivalent to ASSUME_PROVIDED). Useful to make dependency graphs more appealing""",
|
||||
action = "append", dest = "extra_assume_provided", default = [])
|
||||
parser.add_option( "-I", "--ignore-deps", help = """Assume these dependencies don't exist and are already provided (equivalent to ASSUME_PROVIDED). Useful to make dependency graphs more appealing""",
|
||||
action = "append", dest = "extra_assume_provided", default = [] )
|
||||
|
||||
parser.add_option("-l", "--log-domains", help = """Show debug logging for the specified logging domains""",
|
||||
action = "append", dest = "debug_domains", default = [])
|
||||
parser.add_option( "-l", "--log-domains", help = """Show debug logging for the specified logging domains""",
|
||||
action = "append", dest = "debug_domains", default = [] )
|
||||
|
||||
parser.add_option("-P", "--profile", help = "profile the command and print a report",
|
||||
action = "store_true", dest = "profile", default = False)
|
||||
parser.add_option( "-P", "--profile", help = "profile the command and print a report",
|
||||
action = "store_true", dest = "profile", default = False )
|
||||
|
||||
parser.add_option("-u", "--ui", help = "userinterface to use",
|
||||
parser.add_option( "-u", "--ui", help = "userinterface to use",
|
||||
action = "store", dest = "ui")
|
||||
|
||||
parser.add_option("-t", "--servertype", help = "Choose which server to use, none, process or xmlrpc",
|
||||
action = "store", dest = "servertype")
|
||||
parser.add_option( "", "--revisions-changed", help = "Set the exit code depending on whether upstream floating revisions have changed or not",
|
||||
action = "store_true", dest = "revisions_changed", default = False )
|
||||
|
||||
parser.add_option("", "--revisions-changed", help = "Set the exit code depending on whether upstream floating revisions have changed or not",
|
||||
action = "store_true", dest = "revisions_changed", default = False)
|
||||
|
||||
parser.add_option("", "--server-only", help = "Run bitbake without UI, the frontend can connect with bitbake server itself",
|
||||
action = "store_true", dest = "server_only", default = False)
|
||||
|
||||
parser.add_option("-B", "--bind", help = "The name/address for the bitbake server to bind to",
|
||||
action = "store", dest = "bind", default = False)
|
||||
options, args = parser.parse_args(sys.argv)

configuration = BBConfiguration(options)
configuration.pkgs_to_build = []
configuration.pkgs_to_build.extend(args[1:])

ui_main = get_ui(configuration)

# Server type can be xmlrpc, process or none currently; if nothing is specified,
# the default server is process
if configuration.servertype:
    server_type = configuration.servertype
else:
    server_type = 'process'

try:
    module = __import__("bb.server", fromlist = [server_type])
    server = getattr(module, server_type)
except AttributeError:
    sys.exit("FATAL: Invalid server type '%s' specified.\n"
             "Valid interfaces: xmlrpc, process [default], none." % server_type)

if configuration.server_only:
    if configuration.servertype != "xmlrpc":
        sys.exit("FATAL: If '--server-only' is defined, we must set the servertype as 'xmlrpc'.\n")
    if not configuration.bind:
        sys.exit("FATAL: The '--server-only' option requires a name/address to bind to with the -B option.\n")

if configuration.bind and configuration.servertype != "xmlrpc":
    sys.exit("FATAL: If '-B' or '--bind' is defined, we must set the servertype as 'xmlrpc'.\n")
#server = bb.server.xmlrpc
server = bb.server.none

# Save a logfile for cooker into the current working directory. When the
# server is daemonized this logfile will be truncated.
cooker_logfile = os.path.join(os.getcwd(), "cooker.log")
cooker_logfile = os.path.join (os.getcwd(), "cooker.log")

bb.msg.init_msgconfig(configuration.verbose, configuration.debug,
                      configuration.debug_domains)
cooker = bb.cooker.BBCooker(configuration, server)

# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
logger.addHandler(handler)

# Before we start modifying the environment we should take a pristine
# copy for possible later use
initialenv = os.environ.copy()
# Clear away any spurious environment variables. But don't wipe the
# environment totally. This is necessary to ensure the correct operation
# of the UIs (e.g. for DISPLAY, etc.)
bb.utils.clean_environment()

server = server.BitBakeServer()
if configuration.bind:
    server.initServer((configuration.bind, 0))
else:
    server.initServer()

idle = server.getServerIdleCB()

cooker = bb.cooker.BBCooker(configuration, idle, initialenv)
cooker.parseCommandLine()

server.addcooker(cooker)
server.saveConnectionDetails()
server.detach(cooker_logfile)
serverinfo = server.BitbakeServerInfo(cooker.server)

# Should no longer need to ever reference cooker
server.BitBakeServerFork(serverinfo, cooker.serve, cooker_logfile)
del cooker

logger.removeHandler(handler)
sys.excepthook = print_exception

if not configuration.server_only:
    # Setup a connection to the server (cooker)
    server_connection = server.establishConnection()
    # Setup a connection to the server (cooker)
    serverConnection = server.BitBakeServerConnection(serverinfo)

    try:
        return server.launchUI(ui_main, server_connection.connection, server_connection.events)
    finally:
        bb.event.ui_queue = []
        server_connection.terminate()
# Launch the UI
if configuration.ui:
    ui = configuration.ui
else:
    print("server address: %s, server port: %s" % (server.serverinfo.host, server.serverinfo.port))
    ui = "knotty"

return 1
try:
    # Dynamically load the UI based on the ui name. Although we
    # suggest a fixed set this allows you to have flexibility in which
    # ones are available.
    exec "from bb.ui import " + ui
    exec "return_value = " + ui + ".init(serverConnection.connection, serverConnection.events)"
except ImportError:
    print "FATAL: Invalid user interface '%s' specified. " % ui
    print "Valid interfaces are 'ncurses', 'depexp' or the default, 'knotty'."
except Exception, e:
    print "FATAL: Unable to start '%s' UI due to exception: %s." % (configuration.ui, e)
finally:
    serverConnection.terminate()
return return_value

if __name__ == "__main__":
    try:
        ret = main()
    except Exception:
        ret = 1
        import traceback
        traceback.print_exc(5)
    ret = main()
    sys.exit(ret)
@@ -1,12 +0,0 @@
#!/usr/bin/env python
import os
import sys
import warnings
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))

import bb.siggen

if len(sys.argv) > 2:
    bb.siggen.compare_sigfiles(sys.argv[1], sys.argv[2])
else:
    bb.siggen.dump_sigfile(sys.argv[1])
@@ -1,9 +0,0 @@
#!/usr/bin/env python
import os
import sys
import warnings
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))

import bb.siggen

bb.siggen.dump_sigfile(sys.argv[1])
@@ -1,593 +0,0 @@
#!/usr/bin/env python

# This script has subcommands which operate against your bitbake layers, either
# displaying useful information, or acting against them.
# See the help output for details on available commands.

# Copyright (C) 2011 Mentor Graphics Corporation
# Copyright (C) 2012 Intel Corporation

import cmd
import logging
import os
import sys
import fnmatch
from collections import defaultdict

bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
sys.path[0:0] = [os.path.join(topdir, 'lib')]

import bb.cache
import bb.cooker
import bb.providers
import bb.utils
from bb.cooker import state
import bb.fetch2


logger = logging.getLogger('BitBake')


def main(args):
    # Set up logging
    console = logging.StreamHandler(sys.stdout)
    format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
    bb.msg.addDefaultlogFilter(console)
    console.setFormatter(format)
    logger.addHandler(console)

    initialenv = os.environ.copy()
    bb.utils.clean_environment()

    cmds = Commands(initialenv)
    if args:
        # Allow user to specify e.g. show-layers instead of show_layers
        args = [args[0].replace('-', '_')] + args[1:]
        cmds.onecmd(' '.join(args))
    else:
        cmds.do_help('')
    return cmds.returncode


class Commands(cmd.Cmd):
    def __init__(self, initialenv):
        cmd.Cmd.__init__(self)
        self.returncode = 0
        self.config = Config(parse_only=True)
        self.cooker = bb.cooker.BBCooker(self.config,
                                         self.register_idle_function,
                                         initialenv)
        self.config_data = self.cooker.configuration.data
        bb.providers.logger.setLevel(logging.ERROR)
        self.cooker_data = None
        self.bblayers = (self.config_data.getVar('BBLAYERS', True) or "").split()

    def register_idle_function(self, function, data):
        pass

    def prepare_cooker(self):
        sys.stderr.write("Parsing recipes..")
        logger.setLevel(logging.WARNING)

        try:
            while self.cooker.state in (state.initial, state.parsing):
                self.cooker.updateCache()
        except KeyboardInterrupt:
            self.cooker.shutdown()
            self.cooker.updateCache()
            sys.exit(2)

        logger.setLevel(logging.INFO)
        sys.stderr.write("done.\n")

        self.cooker_data = self.cooker.status
        self.cooker_data.appends = self.cooker.appendlist

    def check_prepare_cooker(self):
        if not self.cooker_data:
            self.prepare_cooker()

    def default(self, line):
        """Handle unrecognised commands"""
        sys.stderr.write("Unrecognised command or option\n")
        self.do_help('')

    def do_help(self, topic):
        """display general help or help on a specified command"""
        if topic:
            sys.stdout.write('%s: ' % topic)
            cmd.Cmd.do_help(self, topic.replace('-', '_'))
        else:
            sys.stdout.write("usage: bitbake-layers <command> [arguments]\n\n")
            sys.stdout.write("Available commands:\n")
            procnames = self.get_names()
            for procname in procnames:
                if procname[:3] == 'do_':
                    sys.stdout.write("  %s\n" % procname[3:].replace('_', '-'))
                    doc = getattr(self, procname).__doc__
                    if doc:
                        sys.stdout.write("    %s\n" % doc.splitlines()[0])

    def do_show_layers(self, args):
        """show current configured layers"""
        self.check_prepare_cooker()
        logger.plain('')
        logger.plain("%s %s %s" % ("layer".ljust(20), "path".ljust(40), "priority"))
        logger.plain('=' * 74)
        for layerdir in self.bblayers:
            layername = self.get_layer_name(layerdir)
            layerpri = 0
            for layer, _, regex, pri in self.cooker.status.bbfile_config_priorities:
                if regex.match(os.path.join(layerdir, 'test')):
                    layerpri = pri
                    break

            logger.plain("%s %s %d" % (layername.ljust(20), layerdir.ljust(40), layerpri))


    def version_str(self, pe, pv, pr = None):
        verstr = "%s" % pv
        if pr:
            verstr = "%s-%s" % (verstr, pr)
        if pe:
            verstr = "%s:%s" % (pe, verstr)
        return verstr

    def do_show_overlayed(self, args):
        """list overlayed recipes (where the same recipe exists in another layer that has a higher layer priority)

        usage: show-overlayed [-f] [-s]

        Lists the names of overlayed recipes and the available versions in each
        layer, with the preferred version first. Note that skipped recipes that
        are overlayed will also be listed, with a " (skipped)" suffix.

        Options:
        -f  instead of the default formatting, list filenames of higher priority
            recipes with the ones they overlay indented underneath
        -s  only list overlayed recipes where the version is the same
        """
        self.check_prepare_cooker()

        show_filenames = False
        show_same_ver_only = False
        for arg in args.split():
            if arg == '-f':
                show_filenames = True
            elif arg == '-s':
                show_same_ver_only = True
            else:
                sys.stderr.write("show-overlayed: invalid option %s\n" % arg)
                self.do_help('')
                return

        items_listed = self.list_recipes('Overlayed recipes', None, True, show_same_ver_only, show_filenames, True)

        # Check for overlayed .bbclass files
        classes = defaultdict(list)
        for layerdir in self.bblayers:
            classdir = os.path.join(layerdir, 'classes')
            if os.path.exists(classdir):
                for classfile in os.listdir(classdir):
                    if os.path.splitext(classfile)[1] == '.bbclass':
                        classes[classfile].append(classdir)

        # Locating classes and other files is a bit more complicated than recipes -
        # layer priority is not a factor; instead BitBake uses the first matching
        # file in BBPATH, which is manipulated directly by each layer's
        # conf/layer.conf in turn, thus the order of layers in bblayers.conf is a
        # factor - however, each layer.conf is free to either prepend or append to
        # BBPATH (or indeed do crazy stuff with it). Thus the order in BBPATH might
        # not be exactly the order present in bblayers.conf either.
        bbpath = str(self.config_data.getVar('BBPATH', True))
        overlayed_class_found = False
        for (classfile, classdirs) in classes.items():
            if len(classdirs) > 1:
                if not overlayed_class_found:
                    logger.plain('=== Overlayed classes ===')
                    overlayed_class_found = True

                mainfile = bb.utils.which(bbpath, os.path.join('classes', classfile))
                if show_filenames:
                    logger.plain('%s' % mainfile)
                else:
                    # We effectively have to guess the layer here
                    logger.plain('%s:' % classfile)
                    mainlayername = '?'
                    for layerdir in self.bblayers:
                        classdir = os.path.join(layerdir, 'classes')
                        if mainfile.startswith(classdir):
                            mainlayername = self.get_layer_name(layerdir)
                    logger.plain('  %s' % mainlayername)
                for classdir in classdirs:
                    fullpath = os.path.join(classdir, classfile)
                    if fullpath != mainfile:
                        if show_filenames:
                            print('  %s' % fullpath)
                        else:
                            print('  %s' % self.get_layer_name(os.path.dirname(classdir)))

        if overlayed_class_found:
            items_listed = True

        if not items_listed:
            logger.plain('No overlayed files found.')

    def do_show_recipes(self, args):
        """list available recipes, showing the layer they are provided by

        usage: show-recipes [-f] [-m] [pnspec]

        Lists the names of available recipes and the available versions in each
        layer, with the preferred version first. Optionally you may specify
        pnspec to match a specified recipe name (supports wildcards). Note that
        skipped recipes will also be listed, with a " (skipped)" suffix.

        Options:
        -f  instead of the default formatting, list filenames of higher priority
            recipes with other available recipes indented underneath
        -m  only list where multiple recipes (in the same layer or different
            layers) exist for the same recipe name
        """
        self.check_prepare_cooker()

        show_filenames = False
        show_multi_provider_only = False
        pnspec = None
        title = 'Available recipes:'
        for arg in args.split():
            if arg == '-f':
                show_filenames = True
            elif arg == '-m':
                show_multi_provider_only = True
            elif not arg.startswith('-'):
                pnspec = arg
                title = 'Available recipes matching %s:' % pnspec
            else:
                sys.stderr.write("show-recipes: invalid option %s\n" % arg)
                self.do_help('')
                return
        self.list_recipes(title, pnspec, False, False, show_filenames, show_multi_provider_only)


    def list_recipes(self, title, pnspec, show_overlayed_only, show_same_ver_only, show_filenames, show_multi_provider_only):
        pkg_pn = self.cooker.status.pkg_pn
        (latest_versions, preferred_versions) = bb.providers.findProviders(self.cooker.configuration.data, self.cooker.status, pkg_pn)
        allproviders = bb.providers.allProviders(self.cooker.status)

        # Ensure we list skipped recipes
        # We are largely guessing about PN, PV and the preferred version here,
        # but we have no choice since skipped recipes are not fully parsed
        skiplist = self.cooker.skiplist.keys()
        skiplist.sort( key=lambda fileitem: self.cooker.calc_bbfile_priority(fileitem) )
        skiplist.reverse()
        for fn in skiplist:
            recipe_parts = os.path.splitext(os.path.basename(fn))[0].split('_')
            p = recipe_parts[0]
            if len(recipe_parts) > 1:
                ver = (None, recipe_parts[1], None)
            else:
                ver = (None, 'unknown', None)
            allproviders[p].append((ver, fn))
            if not p in pkg_pn:
                pkg_pn[p] = 'dummy'
                preferred_versions[p] = (ver, fn)

        def print_item(f, pn, ver, layer, ispref):
            if f in skiplist:
                skipped = ' (skipped)'
            else:
                skipped = ''
            if show_filenames:
                if ispref:
                    logger.plain("%s%s", f, skipped)
                else:
                    logger.plain("  %s%s", f, skipped)
            else:
                if ispref:
                    logger.plain("%s:", pn)
                logger.plain("  %s %s%s", layer.ljust(20), ver, skipped)

        preffiles = []
        items_listed = False
        for p in sorted(pkg_pn):
            if pnspec:
                if not fnmatch.fnmatch(p, pnspec):
                    continue

            if len(allproviders[p]) > 1 or not show_multi_provider_only:
                pref = preferred_versions[p]
                preffile = bb.cache.Cache.virtualfn2realfn(pref[1])[0]
                if preffile not in preffiles:
                    preflayer = self.get_file_layer(preffile)
                    multilayer = False
                    same_ver = True
                    provs = []
                    for prov in allproviders[p]:
                        provfile = bb.cache.Cache.virtualfn2realfn(prov[1])[0]
                        provlayer = self.get_file_layer(provfile)
                        provs.append((provfile, provlayer, prov[0]))
                        if provlayer != preflayer:
                            multilayer = True
                        if prov[0] != pref[0]:
                            same_ver = False

                    if (multilayer or not show_overlayed_only) and (same_ver or not show_same_ver_only):
                        if not items_listed:
                            logger.plain('=== %s ===' % title)
                            items_listed = True
                        print_item(preffile, p, self.version_str(pref[0][0], pref[0][1]), preflayer, True)
                        for (provfile, provlayer, provver) in provs:
                            if provfile != preffile:
                                print_item(provfile, p, self.version_str(provver[0], provver[1]), provlayer, False)
                        # Ensure we don't show two entries for BBCLASSEXTENDed recipes
                        preffiles.append(preffile)

        return items_listed

    def do_flatten(self, args):
        """flattens layer configuration into a separate output directory.

        usage: flatten [layer1 layer2 [layer3]...] <outputdir>

        Takes the specified layers (or all layers in the current layer
        configuration if none are specified) and builds a "flattened" directory
        containing the contents of all layers, with any overlayed recipes removed
        and bbappends appended to the corresponding recipes. Note that some manual
        cleanup may still be necessary afterwards, in particular:

        * where non-recipe files (such as patches) are overwritten (the flatten
          command will show a warning for these)
        * where anything beyond the normal layer setup has been added to
          layer.conf (only the lowest priority number layer's layer.conf is used)
        * overridden/appended items from bbappends will need to be tidied up
        * when the flattened layers do not have the same directory structure (the
          flatten command should show a warning when this will cause a problem)

        Warning: if you flatten several layers where another layer is intended to
        be used "in between" them (in layer priority order) such that recipes /
        bbappends in the layers interact, and then attempt to use the new output
        layer together with that other layer, you may no longer get the same
        build results (as the layer priority order has effectively changed).
        """
        arglist = args.split()
        if len(arglist) < 1:
            logger.error('Please specify an output directory')
            self.do_help('flatten')
            return

        if len(arglist) == 2:
            logger.error('If you specify layers to flatten you must specify at least two')
            self.do_help('flatten')
            return

        outputdir = arglist[-1]
        if os.path.exists(outputdir) and os.listdir(outputdir):
            logger.error('Directory %s exists and is non-empty, please clear it out first' % outputdir)
            return

        self.check_prepare_cooker()
        layers = self.bblayers
        if len(arglist) > 2:
            layernames = arglist[:-1]
            found_layernames = []
            found_layerdirs = []
            for layerdir in layers:
                layername = self.get_layer_name(layerdir)
                if layername in layernames:
                    found_layerdirs.append(layerdir)
                    found_layernames.append(layername)

            for layername in layernames:
                if not layername in found_layernames:
                    logger.error('Unable to find layer %s in current configuration, please run "%s show-layers" to list configured layers' % (layername, os.path.basename(sys.argv[0])))
                    return
            layers = found_layerdirs
        else:
            layernames = []

        # Ensure a specified path matches our list of layers
        def layer_path_match(path):
            for layerdir in layers:
                if path.startswith(os.path.join(layerdir, '')):
                    return layerdir
            return None

        appended_recipes = []
        for layer in layers:
            overlayed = []
            for f in self.cooker.overlayed.iterkeys():
                for of in self.cooker.overlayed[f]:
                    if of.startswith(layer):
                        overlayed.append(of)

            logger.plain('Copying files from %s...' % layer )
            for root, dirs, files in os.walk(layer):
                for f1 in files:
                    f1full = os.sep.join([root, f1])
                    if f1full in overlayed:
                        logger.plain('  Skipping overlayed file %s' % f1full )
                    else:
                        ext = os.path.splitext(f1)[1]
                        if ext != '.bbappend':
                            fdest = f1full[len(layer):]
                            fdest = os.path.normpath(os.sep.join([outputdir,fdest]))
                            bb.utils.mkdirhier(os.path.dirname(fdest))
                            if os.path.exists(fdest):
                                if f1 == 'layer.conf' and root.endswith('/conf'):
                                    logger.plain('  Skipping layer config file %s' % f1full )
                                    continue
                                else:
                                    logger.warn('Overwriting file %s', fdest)
                            bb.utils.copyfile(f1full, fdest)
                            if ext == '.bb':
                                if f1 in self.cooker_data.appends:
                                    appends = self.cooker_data.appends[f1]
                                    if appends:
                                        logger.plain('  Applying appends to %s' % fdest )
                                        for appendname in appends:
                                            if layer_path_match(appendname):
                                                self.apply_append(appendname, fdest)
                                        appended_recipes.append(f1)

        # Take care of when some layers are excluded and yet we have included bbappends for those recipes
        for recipename in self.cooker_data.appends.iterkeys():
            if recipename not in appended_recipes:
                appends = self.cooker_data.appends[recipename]
                first_append = None
                for appendname in appends:
                    layer = layer_path_match(appendname)
                    if layer:
                        if first_append:
                            self.apply_append(appendname, first_append)
                        else:
                            fdest = appendname[len(layer):]
                            fdest = os.path.normpath(os.sep.join([outputdir,fdest]))
                            bb.utils.mkdirhier(os.path.dirname(fdest))
                            bb.utils.copyfile(appendname, fdest)
                            first_append = fdest

        # Get the regex for the first layer in our list (which is where the conf/layer.conf file will
        # have come from)
        first_regex = None
        layerdir = layers[0]
        for layername, pattern, regex, _ in self.cooker.status.bbfile_config_priorities:
            if regex.match(os.path.join(layerdir, 'test')):
                first_regex = regex
                break

        if first_regex:
            # Find the BBFILES entries that match (which will have come from this conf/layer.conf file)
            bbfiles = str(self.config_data.getVar('BBFILES', True)).split()
            bbfiles_layer = []
            for item in bbfiles:
                if first_regex.match(item):
                    newpath = os.path.join(outputdir, item[len(layerdir)+1:])
                    bbfiles_layer.append(newpath)

            if bbfiles_layer:
                # Check that all important layer files match BBFILES
                for root, dirs, files in os.walk(outputdir):
                    for f1 in files:
                        ext = os.path.splitext(f1)[1]
                        if ext in ['.bb', '.bbappend']:
                            f1full = os.sep.join([root, f1])
                            entry_found = False
                            for item in bbfiles_layer:
                                if fnmatch.fnmatch(f1full, item):
                                    entry_found = True
                                    break
                            if not entry_found:
                                logger.warning("File %s does not match the flattened layer's BBFILES setting, you may need to edit conf/layer.conf or move the file elsewhere" % f1full)

    def get_file_layer(self, filename):
        for layer, _, regex, _ in self.cooker.status.bbfile_config_priorities:
            if regex.match(filename):
                for layerdir in self.bblayers:
                    if regex.match(os.path.join(layerdir, 'test')):
                        return self.get_layer_name(layerdir)
        return "?"

    def get_layer_name(self, layerdir):
        return os.path.basename(layerdir.rstrip(os.sep))

    def apply_append(self, appendname, recipename):
        appendfile = open(appendname, 'r')
        recipefile = open(recipename, 'a')
        recipefile.write('\n')
        recipefile.write('##### bbappended from %s #####\n' % self.get_file_layer(appendname))
        recipefile.writelines(appendfile.readlines())
        recipefile.close()
        appendfile.close()

    def do_show_appends(self, args):
        """list bbappend files and recipe files they apply to

        usage: show-appends

        Recipes are listed with the bbappends that apply to them as subitems.
        """
        self.check_prepare_cooker()
        if not self.cooker_data.appends:
            logger.plain('No append files found')
            return

        logger.plain('State of append files:')

        pnlist = list(self.cooker_data.pkg_pn.keys())
        pnlist.sort()
        for pn in pnlist:
            self.show_appends_for_pn(pn)

        self.show_appends_for_skipped()

    def show_appends_for_pn(self, pn):
        filenames = self.cooker_data.pkg_pn[pn]

        best = bb.providers.findBestProvider(pn,
                                             self.cooker.configuration.data,
                                             self.cooker_data,
                                             self.cooker_data.pkg_pn)
        best_filename = os.path.basename(best[3])

        self.show_appends_output(filenames, best_filename)

    def show_appends_for_skipped(self):
        filenames = [os.path.basename(f)
                     for f in self.cooker.skiplist.iterkeys()]
        self.show_appends_output(filenames, None, " (skipped)")

    def show_appends_output(self, filenames, best_filename, name_suffix = ''):
        appended, missing = self.get_appends_for_files(filenames)
        if appended:
            for basename, appends in appended:
                logger.plain('%s%s:', basename, name_suffix)
                for append in appends:
                    logger.plain('  %s', append)

            if best_filename:
                if best_filename in missing:
                    logger.warn('%s: missing append for preferred version',
                                best_filename)
                    self.returncode |= 1


    def get_appends_for_files(self, filenames):
        appended, notappended = [], []
        for filename in filenames:
            _, cls = bb.cache.Cache.virtualfn2realfn(filename)
            if cls:
                continue

            basename = os.path.basename(filename)
            appends = self.cooker_data.appends.get(basename)
            if appends:
                appended.append((basename, list(appends)))
            else:
                notappended.append(basename)
        return appended, notappended


class Config(object):
    def __init__(self, **options):
        self.pkgs_to_build = []
        self.debug_domains = []
        self.extra_assume_provided = []
        self.prefile = []
        self.postfile = []
        self.debug = 0
        self.__dict__.update(options)

    def __getattr__(self, attribute):
        try:
            return super(Config, self).__getattribute__(attribute)
        except AttributeError:
            return None


if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]) or 0)
@@ -1,55 +0,0 @@
#!/usr/bin/env python
import os
import sys,logging
import optparse

sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),'lib'))

import prserv
import prserv.serv

__version__="1.0.0"

PRHOST_DEFAULT='0.0.0.0'
PRPORT_DEFAULT=8585

def main():
    parser = optparse.OptionParser(
        version="Bitbake PR Service Core version %s, %%prog version %s" % (prserv.__version__, __version__),
        usage = "%prog < --start | --stop > [options]")

    parser.add_option("-f", "--file", help="database filename(default: prserv.sqlite3)", action="store",
                      dest="dbfile", type="string", default="prserv.sqlite3")
    parser.add_option("-l", "--log", help="log filename(default: prserv.log)", action="store",
                      dest="logfile", type="string", default="prserv.log")
    parser.add_option("--loglevel", help="logging level, i.e. CRITICAL, ERROR, WARNING, INFO, DEBUG",
                      action = "store", type="string", dest="loglevel", default = "INFO")
    parser.add_option("--start", help="start daemon",
                      action="store_true", dest="start")
    parser.add_option("--stop", help="stop daemon",
                      action="store_true", dest="stop")
    parser.add_option("--host", help="ip address to bind", action="store",
                      dest="host", type="string", default=PRHOST_DEFAULT)
    parser.add_option("--port", help="port number(default: 8585)", action="store",
                      dest="port", type="int", default=PRPORT_DEFAULT)

    options, args = parser.parse_args(sys.argv)
    prserv.init_logger(os.path.abspath(options.logfile),options.loglevel)

    if options.start:
        ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile))
    elif options.stop:
        ret=prserv.serv.stop_daemon(options.host, options.port)
    else:
        ret=parser.print_help()
    return ret

if __name__ == "__main__":
    try:
        ret = main()
    except Exception:
        ret = 1
        import traceback
        traceback.print_exc(5)
    sys.exit(ret)

@@ -1,119 +0,0 @@
#!/usr/bin/env python

import os
import sys
import warnings
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
from bb import fetch2
import logging

logger = logging.getLogger("BitBake")

try:
    import cPickle as pickle
except ImportError:
    import pickle
    bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")

class BBConfiguration(object):
    """
    Manages build options and configurations for one run
    """

    def __init__(self, **options):
        self.data = {}
        self.file = []
        self.cmd = None
        self.dump_signatures = True
        self.prefile = []
        self.postfile = []
        self.parse_only = True

    def __getattr__(self, attribute):
        try:
            return super(BBConfiguration, self).__getattribute__(attribute)
        except AttributeError:
            return None

_warnings_showwarning = warnings.showwarning
def _showwarning(message, category, filename, lineno, file=None, line=None):
    """Display python warning messages using bb.msg"""
    if file is not None:
        if _warnings_showwarning is not None:
            _warnings_showwarning(message, category, filename, lineno, file, line)
    else:
        s = warnings.formatwarning(message, category, filename, lineno)
        s = s.split("\n")[0]
        bb.msg.warn(None, s)

warnings.showwarning = _showwarning
warnings.simplefilter("ignore", DeprecationWarning)

import bb.event
import bb.cooker

buildfile = sys.argv[1]
taskname = sys.argv[2]
if len(sys.argv) >= 4:
    dryrun = sys.argv[3]
else:
    dryrun = False
if len(sys.argv) >= 5:
    hashfile = sys.argv[4]
    p = pickle.Unpickler(file(hashfile, "rb"))
    hashdata = p.load()
else:
    hashdata = None

handler = bb.event.LogHandler()
logger.addHandler(handler)

#An example to make debug log messages show up
#bb.msg.init_msgconfig(True, 3, [])

console = logging.StreamHandler(sys.stdout)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
bb.msg.addDefaultlogFilter(console)
console.setFormatter(format)

def worker_fire(event, d):
    if isinstance(event, logging.LogRecord):
        console.handle(event)
bb.event.worker_fire = worker_fire
bb.event.worker_pid = os.getpid()

initialenv = os.environ.copy()
config = BBConfiguration()

def register_idle_function(self, function, data):
    pass

cooker = bb.cooker.BBCooker(config, register_idle_function, initialenv)
config_data = cooker.configuration.data
cooker.status = config_data
cooker.handleCollections(config_data.getVar("BBFILE_COLLECTIONS", 1))

fn, cls = bb.cache.Cache.virtualfn2realfn(buildfile)
buildfile = cooker.matchFile(fn)
fn = bb.cache.Cache.realfn2virtual(buildfile, cls)

cooker.buildSetVars()

# Load data into the cache for fn and parse the loaded cache data
the_data = bb.cache.Cache.loadDataFull(fn, cooker.get_file_appends(fn), cooker.configuration.data)

if taskname.endswith("_setscene"):
    the_data.setVarFlag(taskname, "quieterrors", "1")

if hashdata:
    bb.parse.siggen.set_taskdata(hashdata["hashes"], hashdata["deps"])
    for h in hashdata["hashes"]:
        the_data.setVar("BBHASH_%s" % h, hashdata["hashes"][h])
    for h in hashdata["deps"]:
        the_data.setVar("BBHASHDEPS_%s" % h, hashdata["deps"][h])

ret = 0
if dryrun != "True":
    ret = bb.build.exec_task(fn, taskname, the_data)
sys.exit(ret)

@@ -20,7 +20,7 @@
import optparse, os, sys

# bitbake
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb
import bb.parse
from string import split, join
@@ -48,7 +48,7 @@ class HTMLFormatter:
        From pydoc... almost identical at least
        """
        while pairs:
            (a, b) = pairs[0]
            (a,b) = pairs[0]
            text = join(split(text, a), b)
            pairs = pairs[1:]
        return text
@@ -87,7 +87,7 @@ class HTMLFormatter:

        return txt + ",".join(txts)

    def groups(self, item):
    def groups(self,item):
        """
        Create HTML to link to related groups
        """
@@ -99,12 +99,12 @@ class HTMLFormatter:
        txt = "<p><b>See also:</b><br>"
        txts = []
        for group in item.groups():
            txts.append( """<a href="group%s.html">%s</a> """ % (group, group) )
            txts.append( """<a href="group%s.html">%s</a> """ % (group,group) )

        return txt + ",".join(txts)


    def createKeySite(self, item):
    def createKeySite(self,item):
        """
        Create a site for a key. It contains the header/navigator, a heading,
        the description, links to related keys and to the groups.
@@ -149,7 +149,8 @@ class HTMLFormatter:
        """

        groups = ""
        sorted_groups = sorted(doc.groups())
        sorted_groups = doc.groups()
        sorted_groups.sort()
        for group in sorted_groups:
            groups += """<a href="group%s.html">%s</a><br>""" % (group, group)

@@ -184,7 +185,8 @@ class HTMLFormatter:
        Create Overview of all available keys
        """
        keys = ""
        sorted_keys = sorted(doc.doc_keys())
        sorted_keys = doc.doc_keys()
        sorted_keys.sort()
        for key in sorted_keys:
            keys += """<a href="key%s.html">%s</a><br>""" % (key, key)

@@ -212,7 +214,7 @@ class HTMLFormatter:
            description += "<h2>Description of Group %s</h2>" % gr
            description += _description

        items.sort(lambda x, y:cmp(x.name(), y.name()))
        items.sort(lambda x,y:cmp(x.name(),y.name()))
        for group in items:
            groups += """<a href="key%s.html">%s</a><br>""" % (group.name(), group.name())

@@ -341,7 +343,7 @@ class DocumentationItem:
    def addGroup(self, group):
        self._groups.append(group)

    def addRelation(self, relation):
    def addRelation(self,relation):
        self._related.append(relation)

    def sort(self):
@@ -394,7 +396,7 @@ class Documentation:
        """
        return self.__groups.keys()

    def group_content(self, group_name):
    def group_content(self,group_name):
        """
        Return a list of keys/names that are in a specific
        group or the empty list
@@ -410,7 +412,7 @@ def parse_cmdline(args):
    Parse the CMD line and return the result as an n-tuple
    """

    parser = optparse.OptionParser( version = "Bitbake Documentation Tool Core version %s, %%prog version %s" % (bb.__version__, __version__))
    parser = optparse.OptionParser( version = "Bitbake Documentation Tool Core version %s, %%prog version %s" % (bb.__version__,__version__))
    usage = """%prog [options]

Create a set of html pages (documentation) for a bitbake.conf....
@@ -426,12 +428,13 @@ Create a set of html pages (documentation) for a bitbake.conf....
    parser.add_option( "-D", "--debug", help = "Increase the debug level",
                       action = "count", dest = "debug", default = 0 )

    parser.add_option( "-v", "--verbose", help = "output more chit-chat to the terminal",
    parser.add_option( "-v","--verbose", help = "output more chit-chat to the terminal",
                       action = "store_true", dest = "verbose", default = False )

    options, args = parser.parse_args( sys.argv )

    bb.msg.init_msgconfig(options.verbose, options.debug)

    if options.debug:
        bb.msg.set_debug_level(options.debug)

    return options.config, options.output

@@ -440,7 +443,7 @@ def main():
    The main Method
    """

    (config_file, output_dir) = parse_cmdline( sys.argv )
    (config_file,output_dir) = parse_cmdline( sys.argv )

    # right to let us load the file now
    try:
@@ -462,7 +465,7 @@ def main():
    state_group = 2

    for key in bb.data.keys(documentation):
        data = documentation.getVarFlag(key, "doc")
        data = bb.data.getVarFlag(key, "doc", documentation)
        if not data:
            continue

@@ -496,7 +499,7 @@ def main():
    doc.insert_doc_item(doc_ins)

    # let us create the HTML now
    bb.utils.mkdirhier(output_dir)
    bb.mkdirhier(output_dir)
    os.chdir(output_dir)

    # Let us create the sites now. We do it in the following order

@@ -1,24 +1,4 @@
" Vim filetype detection file
" Language:     BitBake
" Author:       Ricardo Salveti <rsalveti@rsalveti.net>
" Copyright:    Copyright (C) 2008 Ricardo Salveti <rsalveti@rsalveti.net>
" Licence:      You may redistribute this under the same terms as Vim itself
"
" This sets up the syntax highlighting for BitBake files, like .bb, .bbclass and .inc

if &compatible || version < 600
    finish
endif

" .bb and .bbclass
au BufNewFile,BufRead *.b{b,bclass} set filetype=bitbake

" .inc
au BufNewFile,BufRead *.inc set filetype=bitbake

" .conf
au BufNewFile,BufRead *.conf
    \ if (match(expand("%:p:h"), "conf") > 0) |
    \     set filetype=bitbake |
    \ endif

au BufNewFile,BufRead *.bb setfiletype bitbake
au BufNewFile,BufRead *.bbclass setfiletype bitbake
au BufNewFile,BufRead *.inc setfiletype bitbake
" au BufNewFile,BufRead *.conf setfiletype bitbake

@@ -1 +0,0 @@
set sts=4 sw=4 et
@@ -1,85 +0,0 @@
" Vim plugin file
" Purpose:      Create a template for new bb files
" Author:       Ricardo Salveti <rsalveti@gmail.com>
" Copyright:    Copyright (C) 2008 Ricardo Salveti <rsalveti@gmail.com>
"
" This file is licensed under the MIT license, see COPYING.MIT in
" this source distribution for the terms.
"
" Based on the gentoo-syntax package
"
" Will try to use git to find the user name and email

if &compatible || v:version < 600
    finish
endif

fun! <SID>GetUserName()
    let l:user_name = system("git-config --get user.name")
    if v:shell_error
        return "Unknown User"
    else
        return substitute(l:user_name, "\n", "", "")
    endif
endfun

fun! <SID>GetUserEmail()
    let l:user_email = system("git-config --get user.email")
    if v:shell_error
        return "unknown@user.org"
    else
        return substitute(l:user_email, "\n", "", "")
    endif
endfun

fun! BBHeader()
    let l:current_year = strftime("%Y")
    let l:user_name = <SID>GetUserName()
    let l:user_email = <SID>GetUserEmail()
    0 put ='# Copyright (C) ' . l:current_year .
                \ ' ' . l:user_name . ' <' . l:user_email . '>'
    put ='# Released under the MIT license (see COPYING.MIT for the terms)'
    $
endfun

fun! NewBBTemplate()
    let l:paste = &paste
    set nopaste

    " Get the header
    call BBHeader()

    " Now write the new bb template
    put ='DESCRIPTION = \"\"'
    put ='HOMEPAGE = \"\"'
    put ='LICENSE = \"\"'
    put ='SECTION = \"\"'
    put ='DEPENDS = \"\"'
    put ='PR = \"r0\"'
    put =''
    put ='SRC_URI = \"\"'

    " Go to the first place to edit
    0
    /^DESCRIPTION =/
    exec "normal 2f\""

    if paste == 1
        set paste
    endif
endfun

if !exists("g:bb_create_on_empty")
    let g:bb_create_on_empty = 1
endif

" disable in case of vimdiff
if v:progname =~ "vimdiff"
    let g:bb_create_on_empty = 0
endif

augroup NewBB
    au BufNewFile *.bb
                \ if g:bb_create_on_empty |
                \     call NewBBTemplate() |
                \ endif
augroup END

@@ -1,123 +1,127 @@
" Vim syntax file
" Language:     BitBake bb/bbclasses/inc
" Author:       Chris Larson <kergoth@handhelds.org>
"               Ricardo Salveti <rsalveti@rsalveti.net>
" Copyright:    Copyright (C) 2004  Chris Larson <kergoth@handhelds.org>
"               Copyright (C) 2008  Ricardo Salveti <rsalveti@rsalveti.net>
"
" Copyright (C) 2004  Chris Larson <kergoth@handhelds.org>
" This file is licensed under the MIT license, see COPYING.MIT in
" this source distribution for the terms.
"
" Syntax highlighting for bb, bbclasses and inc files.
"
" It's an entirely new type, just has specific syntax in shell and python code
" Language:     BitBake
" Maintainer:   Chris Larson <kergoth@handhelds.org>
" Filenames:    *.bb, *.bbclass

if &compatible || v:version < 600
    finish
endif
if exists("b:current_syntax")
    finish
if version < 600
    syntax clear
elseif exists("b:current_syntax")
    finish
endif

syn case match

" Catch incorrect syntax (only matches if nothing else does)
"
syn match bbUnmatched "."


syn include @python syntax/python.vim
if exists("b:current_syntax")
    unlet b:current_syntax
endif

" BitBake syntax

" Matching case
syn case match
" Other

" Indicates the error when nothing is matched
syn match bbUnmatched "."
syn match bbComment "^#.*$" display contains=bbTodo
syn keyword bbTodo TODO FIXME XXX contained
syn match bbDelimiter "[(){}=]" contained
syn match bbQuote /['"]/ contained
syn match bbArrayBrackets "[\[\]]" contained

" Comments
syn cluster bbCommentGroup contains=bbTodo,@Spell
syn keyword bbTodo COMBAK FIXME TODO XXX contained
syn match bbComment "#.*$" contains=@bbCommentGroup

" String helpers
syn match bbQuote +['"]+ contained
syn match bbDelimiter "[(){}=]" contained
syn match bbArrayBrackets "[\[\]]" contained

" BitBake strings
syn match bbContinue "\\$"
syn region bbString matchgroup=bbQuote start=+"+ skip=+\\$+ excludenl end=+"+ contained keepend contains=bbTodo,bbContinue,bbVarDeref,bbVarPyValue,@Spell
syn region bbString matchgroup=bbQuote start=+'+ skip=+\\$+ excludenl end=+'+ contained keepend contains=bbTodo,bbContinue,bbVarDeref,bbVarPyValue,@Spell

" Vars definition
syn match bbExport "^export" nextgroup=bbIdentifier skipwhite
syn keyword bbExportFlag export contained nextgroup=bbIdentifier skipwhite
syn match bbIdentifier "[a-zA-Z0-9\-_\.\/\+]\+" display contained
syn match bbVarDeref "${[a-zA-Z0-9\-_\.\/\+]\+}" contained
syn match bbVarEq "\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)" contained nextgroup=bbVarValue
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+]\+\(_[${}a-zA-Z0-9\-_\.\/\+]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq
syn match bbVarValue ".*$" contained contains=bbString,bbVarDeref,bbVarPyValue
syn region bbVarPyValue start=+${@+ skip=+\\$+ excludenl end=+}+ contained contains=@python
syn match bbContinue "\\$"
syn region bbString matchgroup=bbQuote start=/"/ skip=/\\$/ excludenl end=/"/ contained keepend contains=bbTodo,bbContinue,bbVarInlinePy,bbVarDeref
syn region bbString matchgroup=bbQuote start=/'/ skip=/\\$/ excludenl end=/'/ contained keepend contains=bbTodo,bbContinue,bbVarInlinePy,bbVarDeref

" Vars metadata flags
syn match bbVarFlagDef "^\([a-zA-Z0-9\-_\.]\+\)\(\[[a-zA-Z0-9\-_\.]\+\]\)\@=" contains=bbIdentifier nextgroup=bbVarFlagFlag
syn region bbVarFlagFlag matchgroup=bbArrayBrackets start="\[" end="\]\s*\(=\)\@=" keepend excludenl contained contains=bbIdentifier nextgroup=bbVarEq
" BitBake variable metadata

" Includes and requires
syn keyword bbInclude inherit include require contained
syn match bbIncludeRest ".*$" contained contains=bbString,bbVarDeref
syn match bbIncludeLine "^\(inherit\|include\|require\)\s\+" contains=bbInclude nextgroup=bbIncludeRest
syn match bbVarBraces "[\${}]"
syn region bbVarDeref matchgroup=bbVarBraces start="${" end="}" contained
" syn region bbVarDeref start="${" end="}" contained
" syn region bbVarInlinePy start="${@" end="}" contained contains=@python
syn region bbVarInlinePy matchgroup=bbVarBraces start="${@" end="}" contained contains=@python

" Add tasks and similar
syn keyword bbStatement addtask addhandler after before EXPORT_FUNCTIONS contained
syn match bbStatementRest ".*$" skipwhite contained contains=bbStatement
syn match bbStatementLine "^\(addtask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
syn keyword bbExportFlag export contained nextgroup=bbIdentifier skipwhite
" syn match bbVarDeref "${[a-zA-Z0-9\-_\.]\+}" contained
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.]\+\(_[${}a-zA-Z0-9\-_\.]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq

" OE Important Functions
syn keyword bbOEFunctions do_fetch do_unpack do_patch do_configure do_compile do_stage do_install do_package contained
syn match bbIdentifier "[a-zA-Z0-9\-_\./]\+" display contained
"syn keyword bbVarEq = display contained nextgroup=bbVarValue
syn match bbVarEq "\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)" contained nextgroup=bbVarValue
syn match bbVarValue ".*$" contained contains=bbString

" BitBake variable metadata flags
syn match bbVarFlagDef "^\([a-zA-Z0-9\-_\.]\+\)\(\[[a-zA-Z0-9\-_\.]\+\]\)\@=" contains=bbIdentifier nextgroup=bbVarFlagFlag
syn region bbVarFlagFlag matchgroup=bbArrayBrackets start="\[" end="\]\s*\(=\)\@=" keepend excludenl contained contains=bbIdentifier nextgroup=bbVarEq
"syn match bbVarFlagFlag "\[\([a-zA-Z0-9\-_\.]\+\)\]\s*\(=\)\@=" contains=bbIdentifier nextgroup=bbVarEq


" Functions!
syn match bbFunction "\h\w*" display contained


" BitBake python metadata

syn keyword bbPythonFlag python contained nextgroup=bbFunction
syn match bbPythonFuncDef "^\(python\s\+\)\(\w\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbPythonFlag,bbFunction,bbDelimiter nextgroup=bbPythonFuncRegion skipwhite
syn region bbPythonFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@python
"hi def link bbPythonFuncRegion Comment

" Generic Functions
syn match bbFunction "\h[0-9A-Za-z_-]*" display contained contains=bbOEFunctions

" BitBake shell metadata
syn include @shell syntax/sh.vim
if exists("b:current_syntax")
    unlet b:current_syntax
endif
syn keyword bbShFakeRootFlag fakeroot contained
syn match bbShFuncDef "^\(fakeroot\s*\)\?\([0-9A-Za-z_-]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbDelimiter nextgroup=bbShFuncRegion skipwhite
syn region bbShFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@shell

" BitBake python metadata
syn keyword bbPyFlag python contained
syn match bbPyFuncDef "^\(python\s\+\)\([0-9A-Za-z_-]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbPyFlag,bbFunction,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
syn region bbPyFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@python
syn keyword bbFakerootFlag fakeroot contained nextgroup=bbFunction
syn match bbShellFuncDef "^\(fakeroot\s*\)\?\(\w\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbFakerootFlag,bbFunction,bbDelimiter nextgroup=bbShellFuncRegion skipwhite
syn region bbShellFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@shell
"hi def link bbShellFuncRegion Comment


" BitBake 'def'd python functions
syn keyword bbPyDef def contained
syn region bbPyDefRegion start='^\(def\s\+\)\([0-9A-Za-z_-]\+\)\(\s*(.*)\s*\):\s*$' end='^\(\s\|$\)\@!' contains=@python
syn keyword bbDef def contained
syn region bbDefRegion start='^def\s\+\w\+\s*([^)]*)\s*:\s*$' end='^\(\s\|$\)\@!' contains=@python

" Highlighting Definitions
hi def link bbUnmatched Error
hi def link bbInclude Include
hi def link bbTodo Todo
hi def link bbComment Comment
hi def link bbQuote String
hi def link bbString String
hi def link bbDelimiter Keyword
hi def link bbArrayBrackets Statement
hi def link bbContinue Special
hi def link bbExport Type
hi def link bbExportFlag Type
hi def link bbIdentifier Identifier
hi def link bbVarDeref PreProc
hi def link bbVarDef Identifier
hi def link bbVarValue String
hi def link bbShFakeRootFlag Type
hi def link bbFunction Function
hi def link bbPyFlag Type
hi def link bbPyDef Statement
hi def link bbStatement Statement
hi def link bbStatementRest Identifier
hi def link bbOEFunctions Special
hi def link bbVarPyValue PreProc

" BitBake statements
syn keyword bbStatement include inherit require addtask addhandler EXPORT_FUNCTIONS display contained
syn match bbStatementLine "^\(include\|inherit\|require\|addtask\|addhandler\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
syn match bbStatementRest ".*$" contained contains=bbString,bbVarDeref

" Highlight
"
hi def link bbArrayBrackets Statement
hi def link bbUnmatched Error
hi def link bbContinue Special
hi def link bbDef Statement
hi def link bbPythonFlag Type
hi def link bbExportFlag Type
hi def link bbFakerootFlag Type
hi def link bbStatement Statement
hi def link bbString String
hi def link bbTodo Todo
hi def link bbComment Comment
hi def link bbOperator Operator
hi def link bbError Error
hi def link bbFunction Function
hi def link bbDelimiter Delimiter
hi def link bbIdentifier Identifier
hi def link bbVarEq Operator
hi def link bbQuote String
hi def link bbVarValue String
" hi def link bbVarInlinePy PreProc
hi def link bbVarDeref PreProc
hi def link bbVarBraces PreProc

let b:current_syntax = "bb"

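For orientation, here is a small, made-up .bb fragment exercising the constructs the syntax file above matches; the recipe name, URL and task are illustrative only and not taken from any real recipe:

# TODO: illustrative recipe only (bbComment with a bbTodo keyword)
DESCRIPTION = "example recipe"
PV = "1.0"
SRC_URI = "http://example.com/foo-${PV}.tar.gz"

python do_report() {
    # python task body, highlighted via the included python syntax
    bb.note("hello from a python task")
}
addtask report after do_configure before do_build

do_install() {
    # shell function body, highlighted via the included sh syntax
    install -d ${D}${bindir}
}
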
@@ -85,6 +85,9 @@ don't execute, just go through the motions
.B \-p, \-\-parse-only
quit after parsing the BB files (developers only)
.TP
.B \-d, \-\-disable-psyco
disable using the psyco just-in-time compiler (not recommended)
.TP
.B \-s, \-\-show-versions
show current and preferred versions of all packages
.TP

@@ -45,7 +45,7 @@ endif
	$(call command,xsltproc --stringparam base.dir $@/ $(if $(htmlcssfile),--stringparam html.stylesheet $(htmlcssfile)) $(htmlxsl) $(manual),XSLTPROC $@ $(manual))

$(xmltotypes): $(manual)
	$(call command,xmlto --with-dblatex --extensions -o $(topdir)/$@ $@ $(manual),XMLTO $@ $(manual))
	$(call command,xmlto --extensions -o $(topdir)/$@ $@ $(manual),XMLTO $@ $(manual))

clean:
	rm -rf $(cleanfiles)

@@ -12,10 +12,9 @@
|
||||
<corpauthor>BitBake Team</corpauthor>
|
||||
</authorgroup>
|
||||
<copyright>
|
||||
<year>2004, 2005, 2006, 2011</year>
|
||||
<year>2004, 2005, 2006</year>
|
||||
<holder>Chris Larson</holder>
|
||||
<holder>Phil Blundell</holder>
|
||||
<holder>Richard Purdie</holder>
|
||||
</copyright>
|
||||
<legalnotice>
|
||||
<para>This work is licensed under the Creative Commons Attribution License. To view a copy of this license, visit <ulink url="http://creativecommons.org/licenses/by/2.5/">http://creativecommons.org/licenses/by/2.5/</ulink> or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.</para>
|
||||
@@ -27,10 +26,10 @@
|
||||
<title>Overview</title>
|
||||
<para>BitBake is, at its simplest, a tool for executing
|
||||
tasks and managing metadata. As such, its similarities to GNU make and other
|
||||
build tools are readily apparent. It was inspired by Portage, the package management system used by the Gentoo Linux distribution. BitBake is the basis of the <ulink url="http://www.openembedded.org/">OpenEmbedded</ulink> project, which is being used to build and maintain a number of embedded Linux distributions/projects such as Angstrom and the Yocto project.</para>
|
||||
build tools are readily apparent. It was inspired by Portage, the package management system used by the Gentoo Linux distribution. BitBake is the basis of the <ulink url="http://www.openembedded.org/">OpenEmbedded</ulink> project, which is being used to build and maintain a number of embedded Linux distributions, including OpenZaurus and Familiar.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Background and goals</title>
|
||||
<title>Background and Goals</title>
|
||||
<para>Prior to BitBake, no other build tool adequately met
|
||||
the needs of an aspiring embedded Linux distribution. All of the
|
||||
buildsystems used by traditional desktop Linux distributions lacked
|
||||
@@ -38,14 +37,14 @@ important functionality, and none of the ad-hoc
|
||||
<emphasis>buildroot</emphasis> systems, prevalent in the
|
||||
embedded space, were scalable or maintainable.</para>
|
||||
|
||||
<para>Some important original goals for BitBake were:
|
||||
<para>Some important goals for BitBake were:
|
||||
<itemizedlist>
|
||||
<listitem><para>Handle crosscompilation.</para></listitem>
|
||||
<listitem><para>Handle interpackage dependencies (build time on target architecture, build time on native architecture, and runtime).</para></listitem>
|
||||
<listitem><para>Support running any number of tasks within a given package, including, but not limited to, fetching upstream sources, unpacking them, patching them, configuring them, et cetera.</para></listitem>
|
||||
<listitem><para>Must be Linux distribution agnostic (both build and target).</para></listitem>
|
||||
<listitem><para>Must be linux distribution agnostic (both build and target).</para></listitem>
|
||||
<listitem><para>Must be architecture agnostic</para></listitem>
|
||||
<listitem><para>Must support multiple build and target operating systems (including Cygwin, the BSDs, etc).</para></listitem>
|
||||
<listitem><para>Must support multiple build and target operating systems (including cygwin, the BSDs, etc).</para></listitem>
|
||||
<listitem><para>Must be able to be self contained, rather than tightly integrated into the build machine's root filesystem.</para></listitem>
|
||||
<listitem><para>There must be a way to handle conditional metadata (on target architecture, operating system, distribution, machine).</para></listitem>
|
||||
<listitem><para>It must be easy for the person using the tools to supply their own local metadata and packages to operate against.</para></listitem>
|
||||
@@ -54,18 +53,10 @@ between multiple projects using BitBake for their
|
||||
builds.</para></listitem>
|
||||
<listitem><para>Should provide an inheritance mechanism to
|
||||
share common metadata between many packages.</para></listitem>
|
||||
<listitem><para>Et cetera...</para></listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
<para>Over time it has become apparent that some further requirements were necessary:
|
||||
<itemizedlist>
|
||||
<listitem><para>Handle variants of a base recipe (native, sdk, multilib).</para></listitem>
|
||||
<listitem><para>Able to split metadata into layers and allow layers to override each other.</para></listitem>
|
||||
<listitem><para>Allow representation of a given set of input variables to a task as a checksum.</para></listitem>
|
||||
<listitem><para>Based on that checksum, allow acceleration of builds with prebuilt components.</para></listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
|
||||
<para>BitBake satisfies all the original requirements and many more, with extensions being made to the basic functionality to reflect the additional requirements. Flexibility and power have always been the priorities. It is highly extensible, supporting embedded Python code and execution of any arbitrary tasks.</para>
|
||||
<para>BitBake satisfies all these and many more. Flexibility and power have always been the priorities. It is highly extensible, supporting embedded Python code and execution of any arbitrary tasks.</para>
|
||||
</section>
|
||||
</chapter>
|
||||
<chapter>
|
||||
@@ -100,13 +91,13 @@ share common metadata between many packages.</para></listitem>
|
||||
<section>
|
||||
<title>Setting a default value (?=)</title>
|
||||
<para><screen><varname>A</varname> ?= "aval"</screen></para>
|
||||
<para>If <varname>A</varname> is set before the above is called, it will retain its previous value. If <varname>A</varname> is unset prior to the above call, <varname>A</varname> will be set to <literal>aval</literal>. Note that this assignment is immediate, so if there are multiple ?= assignments to a single variable, the first of those will be used.</para>
|
||||
<para>If <varname>A</varname> is set before the above is called, it will retain it's previous value. If <varname>A</varname> is unset prior to the above call, <varname>A</varname> will be set to <literal>aval</literal>. Note that this assignment is immediate, so if there are multiple ?= assignments to a single variable, the first of those will be used.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Setting a weak default value (??=)</title>
|
||||
<para><screen><varname>A</varname> ??= "somevalue"
|
||||
<varname>A</varname> ??= "someothervalue"</screen></para>
|
||||
<para>If <varname>A</varname> is set before the above, it will retain that value. If <varname>A</varname> is unset prior to the above, <varname>A</varname> will be set to <literal>someothervalue</literal>. This is a lazy/weak assignment in that the assignment does not occur until the end of the parsing process, so that the last, rather than the first, ??= assignment to a given variable will be used. Any other setting of <varname>A</varname> using = or ?= will, however, override the value set with ??=.</para>
|
||||
<title>Setting a default value (??=)</title>
|
||||
<para><screen><varname>A</varname> ??= "somevalue"</screen></para>
|
||||
<para><screen><varname>A</varname> ??= "someothervalue"</screen></para>
|
||||
<para>If <varname>A</varname> is set before the above, it will retain that value. If <varname>A</varname> is unset prior to the above, <varname>A</varname> will be set to <literal>someothervalue</literal>. This is a lazy version of ??=, in that the assignment does not occur until the end of the parsing process, so that the last, rather than the first, ??= assignment to a given variable will be used.</para>
|
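<para>As an illustrative sketch (variable names invented), the two default operators interact like this:</para>
<para><screen># immediate: A becomes "first" unless it was already set
<varname>A</varname> ?= "first"
# deferred and weaker: ignored here, since A is already set
<varname>A</varname> ??= "fallback"
<varname>B</varname> ??= "one"
# B ends up as "two": the last ??= assignment wins
<varname>B</varname> ??= "two"</screen></para>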
||||
</section>
|
||||
<section>
|
||||
<title>Immediate variable expansion (:=)</title>
|
||||
@@ -134,7 +125,7 @@ share common metadata between many packages.</para></listitem>
|
||||
<varname>B</varname> .= "additionaldata"
|
||||
<varname>C</varname> = "cval"
|
||||
<varname>C</varname> =. "test"</screen></para>
|
||||
<para>In this example, <varname>B</varname> is now <literal>bvaladditionaldata</literal> and <varname>C</varname> is <literal>testcval</literal>. In contrast to the above appending and prepending operators, no additional space
|
||||
<para>In this example, <varname>B</varname> is now <literal>bvaladditionaldata</literal> and <varname>C</varname> is <literal>testcval</literal>. In contrast to the above Appending and Prepending operators no additional space
|
||||
will be introduced.</para>
|
||||
</section>
|
||||
<section>
|
||||
@@ -156,12 +147,12 @@ will be introduced.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Inclusion</title>
|
||||
<para>Next, there is the <literal>include</literal> directive, which causes BitBake to parse whatever file you specify, and insert it at that location, which is not unlike <command>make</command>. However, if the path specified on the <literal>include</literal> line is a relative path, BitBake will locate the first one it can find within <envar>BBPATH</envar>.</para>
|
||||
<para>Next, there is the <literal>include</literal> directive, which causes BitBake to parse in whatever file you specify, and insert it at that location, which is not unlike <command>make</command>. However, if the path specified on the <literal>include</literal> line is a relative path, BitBake will locate the first one it can find within <envar>BBPATH</envar>.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Requiring inclusion</title>
|
||||
<title>Requiring Inclusion</title>
|
||||
<para>In contrast to the <literal>include</literal> directive, <literal>require</literal> will
|
||||
raise a ParseError if the file to be included cannot be found. Otherwise it will behave just like the <literal>
|
||||
raise an ParseError if the to be included file can not be found. Otherwise it will behave just like the <literal>
|
||||
include</literal> directive.</para>
|
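<para>A short sketch of the difference between the two directives (file names invented):</para>
<para><screen># silently skipped if the file cannot be found in BBPATH
include conf/optional-tweaks.conf
# raises a ParseError if the file cannot be found
require conf/essential-settings.conf</screen></para>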
||||
</section>
|
||||
<section>
|
||||
@@ -180,13 +171,13 @@ include</literal> directive.</para>
|
||||
import time
|
||||
print time.strftime('%Y%m%d', time.gmtime())
|
||||
}</screen></para>
|
||||
<para>This is similar to the previous example, but flags it as Python so that BitBake knows it is Python code.</para>
|
||||
<para>This is the similar to the previous, but flags it as python so that BitBake knows it is python code.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Defining Python functions into the global Python namespace</title>
|
||||
<title>Defining python functions into the global python namespace</title>
|
||||
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
|
||||
<para><screen>def get_depends(bb, d):
|
||||
if d.getVar('SOMECONDITION', True):
|
||||
if bb.data.getVar('SOMECONDITION', d, True):
|
||||
return "dependencywithcond"
|
||||
else:
|
||||
return "dependency"
|
||||
@@ -196,52 +187,41 @@ include</literal> directive.</para>
|
||||
<para>This would result in <varname>DEPENDS</varname> containing <literal>dependencywithcond</literal>.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Variable flags</title>
|
||||
<para>Variables can have associated flags which provide a way of tagging extra information onto a variable. Several flags are used internally by BitBake but they can be used externally too if needed. The standard operations mentioned above also work on flags.</para>
|
||||
<title>Variable Flags</title>
|
||||
<para>Variables can have associated flags which provide a way of tagging extra information onto a variable. Several flags are used internally by bitbake but they can be used externally too if needed. The standard operations mentioned above also work on flags.</para>
|
||||
<para><screen><varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] = "value"</screen></para>
|
||||
<para>In this example, <varname>VARIABLE</varname> has a flag, <varname>SOMEFLAG</varname> which is set to <literal>value</literal>.</para>
|
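<para>Since the standard operations also work on flags, one can, for example (names invented), append to a flag's value:</para>
<para><screen><varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] = "value"
<varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] += "extra"
# the SOMEFLAG flag now holds "value extra"</screen></para>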
||||
</section>
|
||||
<section>
|
||||
<title>Inheritance</title>
|
||||
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
|
||||
<para>The <literal>inherit</literal> directive is a means of specifying what classes of functionality your .bb requires. It is a rudimentary form of inheritance. For example, you can easily abstract out the tasks involved in building a package that uses autoconf and automake, and put that into a bbclass for your packages to make use of. A given bbclass is located by searching for classes/filename.bbclass in <envar>BBPATH</envar>, where filename is what you inherited.</para>
|
||||
<para>The <literal>inherit</literal> directive is a means of specifying what classes of functionality your .bb requires. It is a rudimentary form of inheritance. For example, you can easily abstract out the tasks involved in building a package that uses autoconf and automake, and put that into a bbclass for your packages to make use of. A given bbclass is located by searching for classes/filename.oeclass in <envar>BBPATH</envar>, where filename is what you inherited.</para>
|
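<para>As a sketch, assuming a class file classes/autotools.bbclass can be found somewhere in <envar>BBPATH</envar>:</para>
<para><screen># pulls in classes/autotools.bbclass
inherit autotools</screen></para>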
||||
</section>
|
||||
<section>
|
||||
<title>Tasks</title>
|
||||
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
|
||||
<para>In BitBake, each step that needs to be run for a given .bb is known as a task. There is a command <literal>addtask</literal> to add new tasks (each task must be defined as executable Python metadata and its name must start with <quote>do_</quote>) and describe intertask dependencies.</para>
|
||||
<para>In BitBake, each step that needs to be run for a given .bb is known as a task. There is a command <literal>addtask</literal> to add new tasks (must be a defined python executable metadata and must start with <quote>do_</quote>) and describe intertask dependencies.</para>
|
||||
<para><screen>python do_printdate () {
|
||||
import time
|
||||
print time.strftime('%Y%m%d', time.gmtime())
|
||||
}
|
||||
|
||||
addtask printdate before do_build</screen></para>
|
||||
<para>This defines the necessary Python function and adds it as a task which is now a dependency of do_build, the default task. If anyone executes the do_build task, that will result in do_printdate being run first.</para>
|
||||
<para>This defines the necessary python function and adds it as a task which is now a dependency of do_build (the default task). If anyone executes the do_build task, that will result in do_printdate being run first.</para>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>Task Flags</title>
|
||||
<para>Tasks support a number of flags which control various functionality of the task. These are as follows:</para>
|
||||
<para>'dirs' - directories which should be created before the task runs</para>
|
||||
<para>'cleandirs' - directories which should be created before the task runs, but which should be empty</para>
|
||||
<para>'noexec' - marks the task as being empty, with no execution required. These are used as dependency placeholders, or when added tasks need to be subsequently disabled.</para>
|
||||
<para>'nostamp' - don't generate a stamp file for a task. This means the task is always re-executed.</para>
|
||||
<para>'fakeroot' - this task needs to be run in a fakeroot environment, obtained by adding the variables in FAKEROOTENV to the environment.</para>
|
||||
<para>'umask' - the umask to run the task under.</para>
|
||||
<para> For the 'deptask', 'rdeptask', 'recdeptask' and 'recrdeptask' flags please see the dependencies section.</para>
|
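<para>For illustration, a hedged example applying some of these flags to an invented task (${WORKDIR} is assumed to be defined elsewhere):</para>
<para><screen>do_example[dirs] = "${WORKDIR}/example"
do_example[umask] = "022"
# never stamped, so do_example reruns on every build
do_example[nostamp] = "1"</screen></para>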
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>Events</title>
|
||||
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
|
||||
<para>BitBake allows installation of event handlers. Events are triggered at certain points during operation, such as the beginning of operation against a given .bb, the start of a given task, task failure, task success, et cetera. The intent is to make it easy to do things like email notification on build failure.</para>
|
||||
<para>BitBake allows to install event handlers. Events are triggered at certain points during operation, such as, the beginning of operation against a given .bb, the start of a given task, task failure, task success, et cetera. The intent was to make it easy to do things like email notifications on build failure.</para>
|
||||
<para><screen>addhandler myclass_eventhandler
|
||||
python myclass_eventhandler() {
|
||||
from bb.event import getName
|
||||
from bb.event import NotHandled, getName
|
||||
from bb import data
|
||||
|
||||
print("The name of the Event is %s" % getName(e))
|
||||
print("The file we run for is %s" % data.getVar('FILE', e.data, True))
|
||||
print "The name of the Event is %s" % getName(e)
|
||||
print "The file we run for is %s" % data.getVar('FILE', e.data, True)
|
||||
|
||||
return NotHandled
|
||||
}
|
||||
</screen></para><para>
|
||||
This event handler gets called every time an event is triggered. A global variable <varname>e</varname> is defined. <varname>e</varname>.data contains an instance of bb.data. With the getName(<varname>e</varname>)
|
||||
@@ -250,65 +230,20 @@ of the event and the content of the <varname>FILE</varname> variable.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Variants</title>
|
||||
<para>Two BitBake features exist to facilitate the creation of multiple buildable incarnations from a single recipe file.</para>
|
||||
<para>The first is <varname>BBCLASSEXTEND</varname>. This variable is a space separated list of classes used to "extend" the recipe for each variant. As an example, setting <screen>BBCLASSEXTEND = "native"</screen> results in a second incarnation of the current recipe being available. This second incarnation will have the "native" class inherited.</para>
|
||||
<para>The second feature is <varname>BBVERSIONS</varname>. This variable allows a single recipe to build multiple versions of a project from a single recipe file, and allows you to specify conditional metadata (using the <varname>OVERRIDES</varname> mechanism) for a single version, or an optionally named range of versions:</para>
|
||||
<para>Two Bitbake features exist to facilitate the creation of multiple buildable incarnations from a single recipe file.</para>
|
||||
<para>The first is <varname>BBCLASSEXTEND</varname>. This variable is a space separated list of classes to utilize to "extend" the recipe for each variant. As an example, setting <screen>BBCLASSEXTEND = "native"</screen> results in a second incarnation of the current recipe being available. This second incarantion will have the "native" class inherited.</para>
|
||||
<para>The second feature is <varname>BBVERSIONS</varname>. This variable allows a single recipe to be able to build multiple versions of a project from a single recipe file, and allows you to specify conditional metadata (using the <varname>OVERRIDES</varname> mechanism) for a single version, or an optionally named range of versions:</para>
|
||||
<para><screen>BBVERSIONS = "1.0 2.0 git"
|
||||
SRC_URI_git = "git://someurl/somepath.git"</screen></para>
|
||||
<para><screen>BBVERSIONS = "1.0.[0-6]:1.0.0+ \
|
||||
1.0.[7-9]:1.0.7+"
|
||||
SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;patch=1"</screen></para>
|
||||
<para>Note that the name of the range will default to the original version of the recipe, so given OE, a recipe file of foo_1.0.0+.bb will default the name of its versions to 1.0.0+. This is useful, as the range name is not only placed into overrides; it's also made available for the metadata to use in the form of the <varname>BPV</varname> variable, for use in file:// search paths (<varname>FILESPATH</varname>).</para>
|
||||
<para>Note that the name of the range will default to the original version of the recipe, so given OE, a recipe file of foo_1.0.0+.bb will default the name of its versions to 1.0.0+. This is useful, as the range name is not only placed into overrides, it's also made available for the metadata to use in the form of the <varname>BPV</varname> variable, for use in file:// search paths (<varname>FILESPATH</varname>).</para>
|
||||
</section>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>Variable interaction: Worked Examples</title>
|
||||
<para>Despite the documentation of the different forms of variable definition above, it can be hard to work out what happens when variable operators are combined. This section documents some common questions people have regarding the way variables interact.</para>
|
||||
|
||||
<section>
|
||||
<title>Override and append ordering</title>
|
||||
|
||||
<para>There is often confusion about which order overrides and the various append operators take effect.</para>
|
||||
|
||||
<para><screen><varname>OVERRIDES</varname> = "foo"
|
||||
<varname>A_foo_append</varname> = "X"</screen></para>
|
||||
<para>In this case, X is unconditionally appended to the variable <varname>A_foo</varname>. Since foo is an override, A_foo would then replace <varname>A</varname>.</para>
|
||||
|
||||
<para><screen><varname>OVERRIDES</varname> = "foo"
|
||||
<varname>A</varname> = "X"
|
||||
<varname>A_append_foo</varname> = "Y"</screen></para>
|
||||
<para>In this case, Y is appended to the variable <varname>A</varname> only when foo is in OVERRIDES, so the value of <varname>A</varname> would become XY (NB: no spaces are appended).</para>
|
||||
|
||||
<para><screen><varname>OVERRIDES</varname> = "foo"
|
||||
<varname>A_foo_append</varname> = "X"
|
||||
<varname>A_foo_append</varname> += "Y"</screen></para>
|
||||
<para>This behaves as per the first case above, but the value of <varname>A</varname> would be "X Y" instead of just "X".</para>
|
||||
|
||||
<para><screen><varname>A</varname> = "1"
|
||||
<varname>A_append</varname> = "2"
|
||||
<varname>A_append</varname> = "3"
|
||||
<varname>A</varname> += "4"
|
||||
<varname>A</varname> .= "5"</screen></para>
|
||||
|
||||
<para>Would ultimately result in <varname>A</varname> taking the value "1 4523" since the _append operator executes at the same time as the expansion of other overrides.</para>
|
||||
|
||||
</section>
|
||||
<section>
|
||||
<title>Key Expansion</title>
|
||||
|
||||
<para>Key expansion happens at the data store finalisation time just before overrides are expanded.</para>
|
||||
|
||||
<para><screen><varname>A${B}</varname> = "X"
|
||||
<varname>B</varname> = "2"
|
||||
<varname>A2</varname> = "Y"</screen></para>
|
||||
<para>So in this case <varname>A2</varname> would take the value of "X".</para>
|
||||
</section>
|
||||
|
||||
</section>
|
||||
<section>
|
||||
<title>Dependency handling</title>
|
||||
<para>BitBake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifying task dependencies is therefore needed.</para>
|
||||
<title>Dependency Handling</title>
|
||||
<para>Bitbake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifing task dependencies is therefore needed. </para>
|
||||
<section>
|
||||
<title>Dependencies internal to the .bb file</title>
|
||||
<para>Where the dependencies are internal to a given .bb file, the dependencies are handled by the previously detailed addtask directive.</para>
|
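<para>For instance (the added task name is invented), a new task can be slotted between two existing tasks:</para>
<para><screen>addtask mytask after do_unpack before do_configure</screen></para>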
||||
@@ -316,26 +251,26 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
|
||||
|
||||
<section>
|
||||
<title>DEPENDS</title>
|
||||
<para>DEPENDS lists build time dependencies. The 'deptask' flag for tasks is used to signify the task of each item listed in DEPENDS which must have completed before that task can be executed.</para>
|
||||
<para>DEPENDS is taken to specify build time dependencies. The 'deptask' flag for tasks is used to signify the task of each DEPENDS which must have completed before that task can be executed.</para>
|
||||
<para><screen>do_configure[deptask] = "do_populate_staging"</screen></para>
|
||||
<para>means the do_populate_staging task of each item in DEPENDS must have completed before do_configure can execute.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>RDEPENDS</title>
|
||||
<para>RDEPENDS lists runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each item listed in RDEPENDS which must have completed before that task can be executed.</para>
|
||||
<para>RDEPENDS is taken to specify runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each RDEPENDS which must have completed before that task can be executed.</para>
|
||||
<para><screen>do_package_write[rdeptask] = "do_package"</screen></para>
|
||||
<para>means the do_package task of each item in RDEPENDS must have completed before do_package_write can execute.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Recursive DEPENDS</title>
|
||||
<para>These are specified with the 'recdeptask' flag, which is used to signify the task(s) of each DEPENDS which must have completed before that task can be executed. It applies recursively, so the DEPENDS of each item in the original DEPENDS must be met, and so on.</para>
|
||||
<para>These are specified with the 'recdeptask' flag and is used signify the task(s) of each DEPENDS which must have completed before that task can be executed. It applies recursively so also, the DEPENDS of each item in the original DEPENDS must be met and so on.</para>
|
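<para>A hedged sketch (the task names follow the earlier examples):</para>
<para><screen>do_mytask[recdeptask] = "do_populate_staging"
# do_populate_staging must have run for every item in DEPENDS,
# for that item's own DEPENDS, and so on recursively,
# before do_mytask can execute</screen></para>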
||||
</section>
|
||||
<section>
|
||||
<title>Recursive RDEPENDS</title>
|
||||
<para>These are specified with the 'recrdeptask' flag, which is used to signify the task(s) of each RDEPENDS which must have completed before that task can be executed. It applies recursively, so the RDEPENDS of each item in the original RDEPENDS must be met, and so on. It also runs all DEPENDS first.</para>
|
||||
<para>These are specified with the 'recrdeptask' flag and is used signify the task(s) of each RDEPENDS which must have completed before that task can be executed. It applies recursively so also, the RDEPENDS of each item in the original RDEPENDS must be met and so on. It also runs all DEPENDS first too.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Inter task</title>
|
||||
<title>Inter Task</title>
|
||||
<para>The 'depends' flag for tasks is a more generic form of dependency which allows an interdependency on specific tasks, rather than specifying the data in DEPENDS or RDEPENDS.</para>
|
||||
<para><screen>do_patch[depends] = "quilt-native:do_populate_staging"</screen></para>
|
||||
<para>means the do_populate_staging task of the target quilt-native must have completed before the do_patch can execute.</para>
|
||||
@@ -345,56 +280,35 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
|
||||
<section>
|
||||
<title>Parsing</title>
|
||||
<section>
|
||||
<title>Configuration files</title>
|
||||
<para>The first kind of metadata in BitBake is configuration metadata. This metadata is global, and therefore affects <emphasis>all</emphasis> packages and tasks which are executed.</para>
|
||||
<para>BitBake will first search the current working directory for an optional "conf/bblayers.conf" configuration file. This file is expected to contain a BBLAYERS variable which is a space delimited list of 'layer' directories. For each directory in this list, a "conf/layer.conf" file will be searched for and parsed, with the LAYERDIR variable being set to the directory where the layer was found. The idea is that these files will set up BBPATH and other variables correctly for a given build directory automatically for the user.</para>
|
||||
<para>BitBake will then expect to find 'conf/bitbake.conf' somewhere in the user-specified <envar>BBPATH</envar>. That configuration file generally has include directives to pull in any other metadata (generally files specific to architecture, machine, <emphasis>local</emphasis> and so on).</para>
|
||||
<title>Configuration Files</title>
|
||||
<para>The first of the classifications of metadata in BitBake is configuration metadata. This metadata is global, and therefore affects <emphasis>all</emphasis> packages and tasks which are executed.</para>
|
||||
<para>Bitbake will first search the current working directory for an optional "conf/bblayers.conf" configuration file. This file is expected to contain a BBLAYERS variable which is a space delimited list of 'layer' directories. For each directory in this list a "conf/layer.conf" file will be searched for and parsed with the LAYERDIR variable being set to the directory where the layer was found. The idea is these files will setup BBPATH and other variables correctly for a given build directory automatically for the user.</para>
|
||||
<para>Bitbake will then expect to find 'conf/bitbake.conf' somewhere in the user specified <envar>BBPATH</envar>. That configuration file generally has include directives to pull in any other metadata (generally files specific to architecture, machine, <emphasis>local</emphasis> and so on.</para>
|
||||
<para>Only variable definitions and include directives are allowed in .conf files.</para>
|
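<para>A minimal conf/bblayers.conf sketch (the layer paths are invented):</para>
<para><screen>BBLAYERS = " \
  /path/to/meta-base \
  /path/to/meta-mylayer \
  "</screen></para>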
||||
</section>
|
||||
<section>
|
||||
<title>Classes</title>
|
||||
<para>BitBake classes are our rudimentary inheritance mechanism. As briefly mentioned in the metadata introduction, they're parsed when an <literal>inherit</literal> directive is encountered, and they are located in classes/ relative to the directories in <envar>BBPATH</envar>.</para>
|
||||
<para>BitBake classes are our rudimentary inheritance mechanism. As briefly mentioned in the metadata introduction, they're parsed when an <literal>inherit</literal> directive is encountered, and they are located in classes/ relative to the dirs in <envar>BBPATH</envar>.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>.bb files</title>
|
||||
<title>.bb Files</title>
|
||||
<para>A BitBake (.bb) file is a logical unit of tasks to be executed. Normally this is a package to be built. Inter-.bb dependencies are obeyed. The files themselves are located via the <varname>BBFILES</varname> variable, which is set to a space separated list of .bb files, and does handle wildcards.</para>
|
||||
</section>
|
||||
</section>
|
||||
</chapter>
|
||||
|
||||
<chapter>
|
||||
<title>File download support</title>
|
||||
<title>File Download support</title>
|
||||
<section>
|
||||
<title>Overview</title>
|
||||
<para>BitBake provides support to download files. This procedure is called fetching and it is handled by the fetch and fetch2 modules. At this point the original fetch code is considered to have been replaced by fetch2, and this manual relates only to the fetch2 codebase.</para>
|
||||
|
||||
<para>The SRC_URI is normally used to tell BitBake which files to fetch. The next sections will describe the available fetchers and their options. Each fetcher honors a set of variables and per-URI parameters, separated by a <quote>;</quote>, consisting of a key and a value. The semantics of the variables and parameters are defined by the fetcher. BitBake tries to have consistent semantics between the different fetchers.
|
||||
<para>BitBake provides support to download files this procedure is called fetching. The SRC_URI is normally used to indicate BitBake which files to fetch. The next sections will describe th available fetchers and the options they have. Each Fetcher honors a set of Variables and
|
||||
a per URI parameters separated by a <quote>;</quote> consisting of a key and a value. The semantic of the Variables and Parameters are defined by the Fetcher. BitBakes tries to have a consistent semantic between the different Fetchers.
|
||||
</para>
|
||||
|
||||
<para>The overall fetch process is that, first, fetches are attempted from PREMIRRORS. If those don't work, the original SRC_URI is attempted, and if that fails, BitBake will fall back to MIRRORS. Cross URLs are supported, so it is possible to mirror a git repository on an HTTP server as a tarball, for example. Some commonly used example mirror definitions are:</para>
|
||||
|
||||
<para><screen><varname>PREMIRRORS</varname> ?= "\
|
||||
bzr://.*/.* http://somemirror.org/sources/ \n \
|
||||
cvs://.*/.* http://somemirror.org/sources/ \n \
|
||||
git://.*/.* http://somemirror.org/sources/ \n \
|
||||
hg://.*/.* http://somemirror.org/sources/ \n \
|
||||
osc://.*/.* http://somemirror.org/sources/ \n \
|
||||
p4://.*/.* http://somemirror.org/sources/ \n \
|
||||
svk://.*/.* http://somemirror.org/sources/ \n \
|
||||
svn://.*/.* http://somemirror.org/sources/ \n"
|
||||
|
||||
<varname>MIRRORS</varname> =+ "\
|
||||
ftp://.*/.* http://somemirror.org/sources/ \n \
|
||||
http://.*/.* http://somemirror.org/sources/ \n \
|
||||
https://.*/.* http://somemirror.org/sources/ \n"</screen></para>
|
||||
|
||||
<para>Non-local downloaded output is placed into the directory specified by the <varname>DL_DIR</varname> variable. For non-local archive downloads the code can verify sha256 and md5 checksums for the download to ensure the file has been downloaded correctly. These may be specified either in the form <varname>SRC_URI[md5sum]</varname> for the md5 checksum and <varname>SRC_URI[sha256sum]</varname> for the sha256 checksum, or as parameters on the SRC_URI such as SRC_URI="http://example.com/foobar.tar.bz2;md5sum=4a8e0f237e961fd7785d19d07fdb994d". If <varname>BB_STRICT_CHECKSUM</varname> is set, any download without a checksum will trigger an error message. In cases where multiple files are listed in SRC_URI, the name parameter is used to assign names to the URLs, and these are then specified in the checksums in the form SRC_URI[name.sha256sum].</para>
|
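<para>For example, a sketch of per-name checksums when several files are listed in SRC_URI (the URLs are invented and the checksum values are placeholders, not real sums):</para>
<para><screen>SRC_URI = "http://example.com/foo.tar.bz2;name=foo \
           http://example.com/bar.tar.bz2;name=bar"
# placeholder values only, not real checksums
SRC_URI[foo.sha256sum] = "aaaa"
SRC_URI[bar.md5sum] = "bbbb"</screen></para>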
||||
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>Local file fetcher</title>
|
||||
<para>The URN for the local file fetcher is <emphasis>file</emphasis>. The filename can be either absolute or relative. If the filename is relative, <varname>FILESPATH</varname> and failing that <varname>FILESDIR</varname> will be used to find the appropriate relative file. The metadata usually extend these variables to include variations of the values in <varname>OVERRIDES</varname>. Single files and complete directories can be specified.
|
||||
<title>Local File Fetcher</title>
|
||||
<para>The URN for the Local File Fetcher is <emphasis>file</emphasis>. The filename can be either absolute or relative. If the filename is relative <varname>FILESPATH</varname> and <varname>FILESDIR</varname> will be used to find the appropriate relative file depending on the <varname>OVERRIDES</varname>. Single files and complete directories can be specified.
|
||||
<screen><varname>SRC_URI</varname>= "file://relativefile.patch"
|
||||
<varname>SRC_URI</varname>= "file://relativefile.patch;this=ignored"
|
||||
<varname>SRC_URI</varname>= "file:///Users/ich/very_important_software"
|
||||
@@ -403,11 +317,10 @@ https://.*/.* http://somemirror.org/sources/ \n"</screen></para>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>CVS fetcher</title>
|
||||
<para>The URN for the CVS fetcher is <emphasis>cvs</emphasis>. This fetcher honors the variables <varname>CVSDIR</varname>, <varname>SRCDATE</varname>, <varname>FETCHCOMMAND_cvs</varname>, <varname>UPDATECOMMAND_cvs</varname>. <varname>DL_DIR</varname> specifies where a temporary checkout is saved. <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build). <varname>FETCHCOMMAND</varname> and <varname>UPDATECOMMAND</varname> specify which executables to use for the CVS checkout or update.
|
||||
<title>CVS File Fetcher</title>
|
||||
<para>The URN for the CVS Fetcher is <emphasis>cvs</emphasis>. This Fetcher honors the variables <varname>DL_DIR</varname>, <varname>SRCDATE</varname>, <varname>FETCHCOMMAND_cvs</varname>, <varname>UPDATECOMMAND_cvs</varname>. <varname>DL_DIRS</varname> specifies where a temporary checkout is saved, <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build), <varname>FETCHCOMMAND</varname> and <varname>UPDATECOMMAND</varname> specify which executables should be used when doing the CVS checkout or update.
|
||||
</para>
|
||||
<para>The supported parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname> and <varname>scmdata</varname>. The <varname>module</varname> specifies which module to check out, and the <varname>tag</varname> describes which CVS TAG should be used for the checkout. By default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration, to check out a specific date. The special value of "now" will cause the checkout to be updated on every build. <varname>method</varname> is by default <emphasis>pserver</emphasis>. If <emphasis>ext</emphasis> is used, the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally, <varname>localdir</varname> is used to check out into a special directory relative to <varname>CVSDIR</varname>.
|
||||
|
||||
<para>The supported Parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname>. The <varname>module</varname> specifies which module to check out, the <varname>tag</varname> describes which CVS TAG should be used for the checkout by default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration to checkout a specific date. The special value of "now" will cause the checkout to be updated on every build.<varname>method</varname> is by default <emphasis>pserver</emphasis>, if <emphasis>ext</emphasis> is used the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally <varname>localdir</varname> is used to checkout into a special directory relative to <varname>CVSDIR></varname>.
|
||||
<screen><varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
|
||||
<varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
|
||||
</screen>
|
||||
@@ -415,22 +328,32 @@ https://.*/.* http://somemirror.org/sources/ \n"</screen></para>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>HTTP/FTP fetcher</title>
|
||||
<para>The URNs for the HTTP/FTP fetcher are <emphasis>http</emphasis>, <emphasis>https</emphasis> and <emphasis>ftp</emphasis>. This fetcher honors the variable <varname>FETCHCOMMAND_wget</varname>. <varname>FETCHCOMMAND</varname> contains the command used for fetching. <quote>${URI}</quote> and <quote>${FILES}</quote> will be replaced by the URI and basename of the file to be fetched.
|
||||
<title>HTTP/FTP Fetcher</title>
|
||||
<para>The URNs for the HTTP/FTP are <emphasis>http</emphasis>, <emphasis>https</emphasis> and <emphasis>ftp</emphasis>. This Fetcher honors the variables <varname>DL_DIR</varname>, <varname>FETCHCOMMAND_wget</varname>, <varname>PREMIRRORS</varname>, <varname>MIRRORS</varname>. The <varname>DL_DIR</varname> defines where to store the fetched file, <varname>FETCHCOMMAND</varname> contains the command used for fetching. <quote>${URI}</quote> and <quote>${FILES}</quote> will be replaced by the uri and basename of the to be fetched file. <varname>PREMIRRORS</varname>
|
||||
will be tried first when fetching a file if that fails the actual file will be tried and finally all <varname>MIRRORS</varname> will be tried.
|
||||
</para>
|
||||
<para><screen><varname>SRC_URI</varname> = "http://oe.handhelds.org/not_there.aac"
|
||||
<varname>SRC_URI</varname> = "ftp://oe.handhelds.org/not_there_as_well.aac"
|
||||
<varname>SRC_URI</varname> = "ftp://you@oe.handheld.sorg/home/you/secret.plan"
|
||||
<para>The only supported Parameter is <varname>md5sum</varname>. After a fetch the <varname>md5sum</varname> of the file will be calculated and the two sums will be compared.
|
||||
</para>
|
||||
<para><screen><varname>SRC_URI</varname> = "http://oe.handhelds.org/not_there.aac;md5sum=12343"
|
||||
<varname>SRC_URI</varname> = "ftp://oe.handhelds.org/not_there_as_well.aac;md5sum=1234"
|
||||
<varname>SRC_URI</varname> = "ftp://you@oe.handheld.sorg/home/you/secret.plan;md5sum=1234"
|
||||
</screen></para>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>SVN fetcher</title>
|
||||
<para>The URN for the SVN fetcher is <emphasis>svn</emphasis>.
|
||||
<title>SVK Fetcher</title>
|
||||
<para>
|
||||
<emphasis>Currently NOT supported</emphasis>
|
||||
</para>
|
||||
<para>This fetcher honors the variables <varname>FETCHCOMMAND_svn</varname>, <varname>SVNDIR</varname>, <varname>SRCREV</varname>. <varname>FETCHCOMMAND</varname> contains the subversion command. <varname>SRCREV</varname> specifies which revision to use when doing the fetching.
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>SVN Fetcher</title>
|
||||
<para>The URN for the SVN Fetcher is <emphasis>svn</emphasis>.
|
||||
</para>
|
||||
<para>The supported parameters are <varname>proto</varname>, <varname>rev</varname> and <varname>scmdata</varname>. <varname>proto</varname> is the Subversion protocol, <varname>rev</varname> is the Subversion revision. If <varname>scmdata</varname> is set to <quote>keep</quote>, the <quote>.svn</quote> directories will be available during compile-time.
|
||||
<para>This Fetcher honors the variables <varname>FETCHCOMMAND_svn</varname>, <varname>DL_DIR</varname>, <varname>SRCDATE</varname>. <varname>FETCHCOMMAND</varname> contains the subversion command, <varname>DL_DIR</varname> is the directory where tarballs will be saved, <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build).
|
||||
</para>
|
||||
<para>The supported Parameters are <varname>proto</varname>, <varname>rev</varname>. <varname>proto</varname> is the subversion prototype, <varname>rev</varname> is the subversions revision.
|
||||
</para>
|
||||
<para><screen><varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn;module=vip;proto=http;rev=667"
|
||||
<varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn/;module=opie;proto=svn+ssh;date=20060126"
|
||||
@@ -438,12 +361,12 @@ https://.*/.* http://somemirror.org/sources/ \n"</screen></para>
|
||||
</section>
|
||||
|
||||
<section>
|
||||
<title>GIT fetcher</title>
|
||||
<title>GIT Fetcher</title>
|
||||
<para>The URN for the GIT Fetcher is <emphasis>git</emphasis>.
|
||||
</para>
|
||||
<para>The variable <varname>GITDIR</varname> will be used as the base directory where the git tree is cloned to.
|
||||
<para>The Variables <varname>DL_DIR</varname>, <varname>GITDIR</varname> are used. <varname>DL_DIR</varname> will be used to store the checkedout version. <varname>GITDIR</varname> will be used as the base directory where the git tree is cloned to.
|
||||
</para>
|
||||
<para>The parameters are <emphasis>tag</emphasis>, <emphasis>protocol</emphasis> and <emphasis>scmdata</emphasis>. <emphasis>tag</emphasis> is a Git tag, the default is <quote>master</quote>. <emphasis>protocol</emphasis> is the Git protocol to use and defaults to <quote>git</quote> if a hostname is set, otherwise it is <quote>file</quote>. If <emphasis>scmdata</emphasis> is set to <quote>keep</quote>, the <quote>.git</quote> directory will be available during compile-time.
|
||||
<para>The Parameters are <emphasis>tag</emphasis>, <emphasis>protocol</emphasis>. <emphasis>tag</emphasis> is a git tag, the default is <quote>master</quote>. <emphasis>protocol</emphasis> is the git protocol to use and defaults to <quote>rsync</quote>.
|
||||
</para>
|
||||
<para><screen><varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
|
||||
<varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
|
||||
@@ -454,13 +377,13 @@ https://.*/.* http://somemirror.org/sources/ \n"</screen></para>
|
||||
|
||||
|
||||
<chapter>
|
||||
<title>The BitBake command</title>
|
||||
<title>The bitbake command</title>
|
||||
<section>
|
||||
<title>Introduction</title>
|
||||
<para>bitbake is the primary command in the system. It facilitates executing tasks in a single .bb file, or executing a given task on a set of multiple .bb files, accounting for interdependencies amongst them.</para>
|
||||
</section>
|
||||
<section>
|
||||
<title>Usage and syntax</title>
|
||||
<title>Usage and Syntax</title>
|
||||
<para>
|
||||
<screen><prompt>$ </prompt>bitbake --help
|
||||
usage: bitbake [options] [package ...]
|
||||
@@ -496,6 +419,8 @@ options:
|
||||
than once.
|
||||
-n, --dry-run don't execute, just go through the motions
|
||||
-p, --parse-only quit after parsing the BB files (developers only)
|
||||
-d, --disable-psyco disable using the psyco just-in-time compiler (not
|
||||
recommended)
|
||||
-s, --show-versions show current and preferred versions of all packages
|
||||
-e, --environment show the global or per-package environment (this is
|
||||
what used to be bbread)
|
||||
@@ -515,7 +440,7 @@ options:
|
||||
<para>
|
||||
<example>
|
||||
<title>Executing a task against a single .bb</title>
|
||||
<para>Executing tasks for a single file is relatively simple. You specify the file in question, and BitBake parses it and executes the specified task (or <quote>build</quote> by default). It obeys intertask dependencies when doing so.</para>
|
||||
<para>Executing tasks for a single file is relatively simple. You specify the file in question, and bitbake parses it and executes the specified task (or <quote>build</quote> by default). It obeys intertask dependencies when doing so.</para>
|
||||
<para><quote>clean</quote> task:</para>
|
||||
<para><screen><prompt>$ </prompt>bitbake -b blah_1.0.bb -c clean</screen></para>
|
||||
<para><quote>build</quote> task:</para>
|
||||
@@ -525,8 +450,8 @@ options:
|
||||
<para>
|
||||
<example>
|
||||
<title>Executing tasks against a set of .bb files</title>
|
||||
<para>There are a number of additional complexities introduced when one wants to manage multiple .bb files. Clearly there needs to be a way to tell BitBake what files are available, and of those, which we want to execute at this time. There also needs to be a way for each .bb to express its dependencies, both for build time and runtime. There must be a way for the user to express their preferences when multiple .bb's provide the same functionality, or when there are multiple versions of a .bb.</para>
|
||||
<para>The next section, Metadata, outlines how to specify such things.</para>
|
||||
<para>There are a number of additional complexities introduced when one wants to manage multiple .bb files. Clearly there needs to be a way to tell bitbake what files are available, and of those, which we want to execute at this time. There also needs to be a way for each .bb to express its dependencies, both for build time and runtime. There must be a way for the user to express their preferences when multiple .bb's provide the same functionality, or when there are multiple versions of a .bb.</para>
|
||||
<para>The next section, Metadata, outlines how one goes about specifying such things.</para>
|
||||
<para>Note that the bitbake command, when not using --buildfile, accepts a <varname>PROVIDER</varname>, not a filename or anything else. By default, a .bb generally PROVIDES its packagename, packagename-version, and packagename-version-revision.</para>
|
||||
<screen><prompt>$ </prompt>bitbake blah</screen>
|
||||
<screen><prompt>$ </prompt>bitbake blah-1.0</screen>
|
||||
@@ -538,8 +463,8 @@ options:
|
||||
<example>
|
||||
<title>Generating dependency graphs</title>
|
||||
<para>BitBake is able to generate dependency graphs using the dot syntax. These graphs can be converted
|
||||
to images using the <application>dot</application> application from <ulink url="http://www.graphviz.org">Graphviz</ulink>.
|
||||
Two files will be written into the current working directory, <emphasis>depends.dot</emphasis> containing dependency information at the package level and <emphasis>task-depends.dot</emphasis> containing a breakdown of the dependencies at the task level. To stop depending on common depends, one can use the <prompt>-I depend</prompt> option to omit these from the graph. This can lead to more readable graphs. This way, <varname>DEPENDS</varname> from inherited classes such as base.bbclass can be removed from the graph.</para>
|
||||
to images using the <application>dot</application> application from <ulink url="http://www.graphviz.org">graphviz</ulink>.
|
||||
Two files will be written into the current working directory, <emphasis>depends.dot</emphasis> containing dependency information at the package level and <emphasis>task-depends.dot</emphasis> containing a breakdown of the dependencies at the task level. To stop depending on common depends one can use the <prompt>-I depend</prompt> to omit these from the graph. This can lead to more readable graphs. E.g. this way <varname>DEPENDS</varname> from inherited classes, e.g. base.bbclass, can be removed from the graph.</para>
|
||||
<screen><prompt>$ </prompt>bitbake -g blah</screen>
|
||||
<screen><prompt>$ </prompt>bitbake -g -I virtual/whatever -I bloom blah</screen>
|
||||
</example>
|
||||
@@ -547,20 +472,20 @@ Two files will be written into the current working directory, <emphasis>depends.
|
||||
</section>
|
||||
<section>
|
||||
<title>Special variables</title>
|
||||
<para>Certain variables affect BitBake operation:</para>
|
||||
<para>Certain variables affect bitbake operation:</para>
|
||||
<section>
|
||||
<title><varname>BB_NUMBER_THREADS</varname></title>
|
||||
<para> The number of threads BitBake should run at once (default: 1).</para>
|
||||
<para> The number of threads bitbake should run at once (default: 1).</para>
|
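<para>For example, to allow up to four tasks to run in parallel, one might set, in a configuration file:</para>
<para><screen>BB_NUMBER_THREADS = "4"</screen></para>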
||||
</section>
|
||||
</section>
|
||||
<section>
|
||||
<title>Metadata</title>
|
||||
<para>As you may have seen in the usage information, or in the information about .bb files, the <varname>BBFILES</varname> variable is how the BitBake tool locates its files. This variable is a space separated list of files that are available, and supports wildcards.
|
||||
<para>As you may have seen in the usage information, or in the information about .bb files, the BBFILES variable is how the bitbake tool locates its files. This variable is a space separated list of files that are available, and supports wildcards.
|
||||
<example>
|
||||
<title>Setting BBFILES</title>
|
||||
<programlisting><varname>BBFILES</varname> = "/path/to/bbfiles/*.bb"</programlisting>
|
||||
</example></para>
|
||||
<para>With regard to dependencies, it expects the .bb to define a <varname>DEPENDS</varname> variable, which contains a space separated list of <quote>package names</quote>, which themselves are the <varname>PN</varname> variable. The <varname>PN</varname> variable is, in general, set to a component of the .bb filename by default.</para>
|
||||
<para>With regard to dependencies, it expects the .bb to define a <varname>DEPENDS</varname> variable, which contains a space separated list of <quote>package names</quote>, which themselves are the <varname>PN</varname> variable. The <varname>PN</varname> variable is, in general, by default, set to a component of the .bb filename.</para>
|
||||
<example>
|
||||
<title>Depending on another .bb</title>
|
||||
<para>a.bb:
|
||||
@@ -573,7 +498,7 @@ DEPENDS += "package-b"</screen>
|
||||
</example>
|
||||
<example>
|
||||
<title>Using PROVIDES</title>
|
||||
<para>This example shows the usage of the <varname>PROVIDES</varname> variable, which allows a given .bb to specify what functionality it provides.</para>
|
||||
<para>This example shows the usage of the PROVIDES variable, which allows a given .bb to specify what functionality it provides.</para>
|
||||
<para>package1.bb:
|
||||
<screen>PROVIDES += "virtual/package"</screen>
|
||||
</para>
|
||||
@@ -583,16 +508,16 @@ DEPENDS += "package-b"</screen>
|
||||
<para>package3.bb:
|
||||
<screen>PROVIDES += "virtual/package"</screen>
|
||||
</para>
|
||||
<para>As you can see, we have two different .bb's that provide the same functionality (virtual/package). Clearly, there needs to be a way for the person running BitBake to control which of those providers gets used. There is, indeed, such a way.</para>
|
||||
<para>As you can see, here there are two different .bb's that provide the same functionality (virtual/package). Clearly, there needs to be a way for the person running bitbake to control which of those providers gets used. There is, indeed, such a way.</para>
|
||||
<para>The following would go into a .conf file, to select package1:
|
||||
<screen>PREFERRED_PROVIDER_virtual/package = "package1"</screen>
|
||||
</para>
|
||||
</example>
|
||||
<example>
|
||||
<title>Specifying version preference</title>
|
||||
<para>When there are multiple <quote>versions</quote> of a given package, BitBake defaults to selecting the most recent version, unless otherwise specified. If the .bb in question has a <varname>DEFAULT_PREFERENCE</varname> set lower than the other .bb's (default is 0), then it will not be selected. This allows the person or persons maintaining the repository of .bb files to specify their preference for the default selected version. In addition, the user can specify their preferred version.</para>
|
||||
<para>When there are multiple <quote>versions</quote> of a given package, bitbake defaults to selecting the most recent version, unless otherwise specified. If the .bb in question has a <varname>DEFAULT_PREFERENCE</varname> set lower than the other .bb's (default is 0), then it will not be selected. This allows the person or persons maintaining the repository of .bb files to specify their preferences for the default selected version. In addition, the user can specify their preferences with regard to version.</para>
|
||||
<para>If the first .bb is named <filename>a_1.1.bb</filename>, then the <varname>PN</varname> variable will be set to <quote>a</quote>, and the <varname>PV</varname> variable will be set to 1.1.</para>
|
||||
<para>If we then have an <filename>a_1.2.bb</filename>, BitBake will choose 1.2 by default. However, if we define the following variable in a .conf that BitBake parses, we can change that.
|
||||
<para>If we then have an <filename>a_1.2.bb</filename>, bitbake will choose 1.2 by default. However, if we define the following variable in a .conf that bitbake parses, we can change that.
|
||||
<screen>PREFERRED_VERSION_a = "1.1"</screen>
|
||||
</para>
|
||||
</example>
|
||||
|
||||
@@ -3,7 +3,7 @@
|
||||
#
|
||||
# This is a copy on write dictionary and set which abuses classes to try and be nice and fast.
|
||||
#
|
||||
# Copyright (C) 2006 Tim Amsell
|
||||
# Copyright (C) 2006 Tim Amsell
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License version 2 as
|
||||
@@ -18,31 +18,29 @@
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
#
|
||||
#Please Note:
|
||||
#Please Note:
|
||||
# Be careful when using mutable types (ie Dict and Lists) - operations involving these are SLOW.
|
||||
# Assign a file to __warn__ to get warnings about slow operations.
|
||||
#
|
||||
|
||||
from __future__ import print_function
|
||||
import copy
|
||||
import types
|
||||
ImmutableTypes = (
|
||||
types.NoneType,
|
||||
bool,
|
||||
complex,
|
||||
float,
|
||||
int,
|
||||
long,
|
||||
tuple,
|
||||
frozenset,
|
||||
basestring
|
||||
)
|
||||
types.ImmutableTypes = tuple([ \
|
||||
types.BooleanType, \
|
||||
types.ComplexType, \
|
||||
types.FloatType, \
|
||||
types.IntType, \
|
||||
types.LongType, \
|
||||
types.NoneType, \
|
||||
types.TupleType, \
|
||||
frozenset] + \
|
||||
list(types.StringTypes))
|
||||
|
||||
MUTABLE = "__mutable__"
|
||||
|
||||
class COWMeta(type):
|
||||
pass
|
||||
|
||||
|
||||
class COWDictMeta(COWMeta):
|
||||
__warn__ = False
|
||||
__hasmutable__ = False
|
||||
@@ -61,12 +59,12 @@ class COWDictMeta(COWMeta):
|
||||
__call__ = cow
|
||||
|
||||
def __setitem__(cls, key, value):
|
||||
if not isinstance(value, ImmutableTypes):
|
||||
if not isinstance(value, types.ImmutableTypes):
|
||||
if not isinstance(value, COWMeta):
|
||||
cls.__hasmutable__ = True
|
||||
key += MUTABLE
|
||||
setattr(cls, key, value)
|
||||
|
||||
|
||||
def __getmutable__(cls, key, readonly=False):
|
||||
nkey = key + MUTABLE
|
||||
try:
|
||||
@@ -79,10 +77,10 @@ class COWDictMeta(COWMeta):
|
||||
return value
|
||||
|
||||
if not cls.__warn__ is False and not isinstance(value, COWMeta):
|
||||
print("Warning: Doing a copy because %s is a mutable type." % key, file=cls.__warn__)
|
||||
print >> cls.__warn__, "Warning: Doing a copy because %s is a mutable type." % key
|
||||
try:
|
||||
value = value.copy()
|
||||
except AttributeError as e:
|
||||
except AttributeError, e:
|
||||
value = copy.copy(value)
setattr(cls, nkey, value)
return value
@@ -100,13 +98,13 @@ class COWDictMeta(COWMeta):
value = getattr(cls, key)
except AttributeError:
value = cls.__getmutable__(key, readonly)

# This is for values which have been deleted

# This is for values which have been deleted
if value is cls.__marker__:
raise AttributeError("key %s does not exist." % key)

return value
except AttributeError as e:
except AttributeError, e:
if not default is cls.__getmarker__:
return default

@@ -120,9 +118,6 @@ class COWDictMeta(COWMeta):
key += MUTABLE
delattr(cls, key)

def __contains__(cls, key):
return cls.has_key(key)

def has_key(cls, key):
value = cls.__getreadonly__(key, cls.__marker__)
if value is cls.__marker__:
@@ -132,7 +127,7 @@ class COWDictMeta(COWMeta):
def iter(cls, type, readonly=False):
for key in dir(cls):
if key.startswith("__"):
continue
continue

if key.endswith(MUTABLE):
key = key[:-len(MUTABLE)]
@@ -158,11 +153,11 @@ class COWDictMeta(COWMeta):
return cls.iter("keys")
def itervalues(cls, readonly=False):
if not cls.__warn__ is False and cls.__hasmutable__ and readonly is False:
print("Warning: If you aren't going to change any of the values call with True.", file=cls.__warn__)
print >> cls.__warn__, "Warning: If you aren't going to change any of the values call with True."
return cls.iter("values", readonly)
def iteritems(cls, readonly=False):
if not cls.__warn__ is False and cls.__hasmutable__ and readonly is False:
print("Warning: If you aren't going to change any of the values call with True.", file=cls.__warn__)
print >> cls.__warn__, "Warning: If you aren't going to change any of the values call with True."
return cls.iter("items", readonly)

class COWSetMeta(COWDictMeta):
@@ -181,13 +176,13 @@ class COWSetMeta(COWDictMeta):

def remove(cls, value):
COWDictMeta.__delitem__(cls, repr(hash(value)))


def __in__(cls, value):
return COWDictMeta.has_key(repr(hash(value)))

def iterkeys(cls):
raise TypeError("sets don't have keys")


def iteritems(cls):
raise TypeError("sets don't have 'items'")

@@ -204,120 +199,120 @@ if __name__ == "__main__":
import sys
COWDictBase.__warn__ = sys.stderr
a = COWDictBase()
print("a", a)
print "a", a

a['a'] = 'a'
a['b'] = 'b'
a['dict'] = {}

b = a.copy()
print("b", b)
print "b", b
b['c'] = 'b'

print()
print

print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems():
print(x)
print()
print x
print

b['dict']['a'] = 'b'
b['a'] = 'c'

print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems():
print(x)
print()
print x
print

try:
b['dict2']
except KeyError as e:
print("Okay!")
except KeyError, e:
print "Okay!"

a['set'] = COWSetBase()
a['set'].add("o1")
a['set'].add("o1")
a['set'].add("o2")

print("a", a)
print "a", a
for x in a['set'].itervalues():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b['set'].itervalues():
print(x)
print()
print x
print

b['set'].add('o3')

print("a", a)
print "a", a
for x in a['set'].itervalues():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b['set'].itervalues():
print(x)
print()
print x
print

a['set2'] = set()
a['set2'].add("o1")
a['set2'].add("o1")
a['set2'].add("o2")

print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems(readonly=True):
print(x)
print()
print x
print

del b['b']
try:
print(b['b'])
print b['b']
except KeyError:
print("Yay! deleted key raises error")
print "Yay! deleted key raises error"

if b.has_key('b'):
print("Boo!")
print "Boo!"
else:
print("Yay - has_key with delete works!")

print("a", a)
print "Yay - has_key with delete works!"

print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems(readonly=True):
print(x)
print()
print x
print

b.__revertitem__('b')

print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems(readonly=True):
print(x)
print()
print x
print

b.__revertitem__('dict')
print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems(readonly=True):
print(x)
print()
print x
print

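The test driver above exercises the copy-on-write semantics of COW.py: a copy shares its parent's entries until a key is written locally, and __revertitem__ drops the local override so the parent's value shows through again. The following is a minimal, self-contained sketch of that idea, not the COW.py implementation itself; the CowDict name is invented for illustration.

class CowDict:
    """Toy copy-on-write mapping mirroring the behaviour tested above."""
    def __init__(self, parent=None):
        self._parent = parent   # shared, read-only view of the original
        self._local = {}        # per-copy overrides

    def copy(self):
        return CowDict(parent=self)

    def __setitem__(self, key, value):
        self._local[key] = value        # writes never touch the parent

    def __getitem__(self, key):
        if key in self._local:
            return self._local[key]
        if self._parent is not None:
            return self._parent[key]    # fall through to shared data
        raise KeyError(key)

    def revert(self, key):
        # Like __revertitem__: drop the override, expose the parent's value.
        self._local.pop(key, None)

a = CowDict()
a['x'] = 1
b = a.copy()
b['x'] = 2                        # b diverges, a is untouched
assert (a['x'], b['x']) == (1, 2)
b.revert('x')
assert b['x'] == 1                # parent's value visible again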
@@ -21,131 +21,74 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

__version__ = "1.15.1"
__version__ = "1.9.0"

import sys
if sys.version_info < (2, 6, 0):
raise RuntimeError("Sorry, python 2.6.0 or later is required for this version of bitbake")
__all__ = [

"debug",
"note",
"error",
"fatal",

class BBHandledException(Exception):
"""
The big dilemma for generic bitbake code is what information to give the user
when an exception occurs. Any exception inheriting this base exception class
has already provided information to the user via some 'fired' message type such as
an explicitly fired event using bb.fire, or a bb.error message. If bitbake
encounters an exception derived from this class, no backtrace or other information
will be given to the user, it's assumed the earlier event provided the relevant information.
"""
pass
"mkdirhier",
"movefile",
"vercmp",

import os
import logging
# fetch
"decodeurl",
"encodeurl",

# modules
"parse",
"data",
"command",
"event",
"build",
"fetch",
"manifest",
"methodpool",
"cache",
"runqueue",
"taskdata",
"providers",
]

class NullHandler(logging.Handler):
def emit(self, record):
pass

Logger = logging.getLoggerClass()
class BBLogger(Logger):
def __init__(self, name):
if name.split(".")[0] == "BitBake":
self.debug = self.bbdebug
Logger.__init__(self, name)

def bbdebug(self, level, msg, *args, **kwargs):
return self.log(logging.DEBUG - level + 1, msg, *args, **kwargs)

def plain(self, msg, *args, **kwargs):
return self.log(logging.INFO + 1, msg, *args, **kwargs)

def verbose(self, msg, *args, **kwargs):
return self.log(logging.INFO - 1, msg, *args, **kwargs)

logging.raiseExceptions = False
logging.setLoggerClass(BBLogger)

logger = logging.getLogger("BitBake")
logger.addHandler(NullHandler())
logger.setLevel(logging.DEBUG - 2)

# This has to be imported after the setLoggerClass, as the import of bb.msg
# can result in construction of the various loggers.
import bb.msg
import sys, os, types, re, string

if "BBDEBUG" in os.environ:
level = int(os.environ["BBDEBUG"])
if level:
bb.msg.set_debug_level(level)

from bb import fetch2 as fetch
sys.modules['bb.fetch'] = sys.modules['bb.fetch2']

# Messaging convenience functions
def plain(*args):
logger.plain(''.join(args))
bb.msg.plain(''.join(args))

def debug(lvl, *args):
if isinstance(lvl, basestring):
logger.warn("Passed invalid debug level '%s' to bb.debug", lvl)
args = (lvl,) + args
lvl = 1
logger.debug(lvl, ''.join(args))
bb.msg.debug(lvl, None, ''.join(args))

def note(*args):
logger.info(''.join(args))
bb.msg.note(1, None, ''.join(args))

def warn(*args):
logger.warn(''.join(args))
bb.msg.warn(None, ''.join(args))

def error(*args):
logger.error(''.join(args))
bb.msg.error(None, ''.join(args))

def fatal(*args):
logger.critical(''.join(args))
sys.exit(1)
bb.msg.fatal(None, ''.join(args))


def deprecated(func, name=None, advice=""):
"""This is a decorator which can be used to mark functions
as deprecated. It will result in a warning being emitted
when the function is used."""
import warnings

if advice:
advice = ": %s" % advice
if name is None:
name = func.__name__

def newFunc(*args, **kwargs):
warnings.warn("Call to deprecated function %s%s." % (name,
advice),
category=DeprecationWarning,
stacklevel=2)
return func(*args, **kwargs)
newFunc.__name__ = func.__name__
newFunc.__doc__ = func.__doc__
newFunc.__dict__.update(func.__dict__)
return newFunc

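The deprecated() decorator above wraps a callable so that every call emits a DeprecationWarning pointing at the caller (stacklevel=2) while preserving the wrapped function's name, docstring, and attributes. A hedged usage sketch follows; old_api is a made-up function name, and the snippet assumes deprecated() is in scope as defined above.

import warnings

def old_api(x):
    return x + 1

old_api = deprecated(old_api, advice="use new_api() instead")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert old_api(1) == 2                                     # still works...
    assert issubclass(caught[0].category, DeprecationWarning)  # ...but warns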
# For compatibility
def deprecate_import(current, modulename, fromlist, renames = None):
"""Import objects from one module into another, wrapping them with a DeprecationWarning"""
import sys
from bb.fetch import MalformedUrl, encodeurl, decodeurl
from bb.data import VarExpandError
from bb.utils import mkdirhier, movefile, copyfile, which
from bb.utils import vercmp

module = __import__(modulename, fromlist = fromlist)
for position, objname in enumerate(fromlist):
obj = getattr(module, objname)
newobj = deprecated(obj, "{0}.{1}".format(current, objname),
"Please use {0}.{1} instead".format(modulename, objname))
if renames:
newname = renames[position]
else:
newname = objname

setattr(sys.modules[current], newname, newobj)

deprecate_import(__name__, "bb.fetch", ("MalformedUrl", "encodeurl", "decodeurl"))
deprecate_import(__name__, "bb.utils", ("mkdirhier", "movefile", "copyfile", "which"))
deprecate_import(__name__, "bb.utils", ["vercmp_string"], ["vercmp"])
if __name__ == "__main__":
import doctest, bb
bb.msg.set_debug_level(0)
doctest.testmod(bb)

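The __init__.py rework above replaces the bb.msg helper calls with a standard logging scheme: a Logger subclass maps BitBake's numeric debug levels to values just below logging.DEBUG and adds a plain level above INFO, and it must be registered with setLoggerClass() before any logger is created, which is why bb.msg is imported only afterwards. A self-contained sketch of the same pattern, independent of BitBake:

import logging

class LevelledLogger(logging.Logger):
    # debug(1) maps to DEBUG, debug(2) to DEBUG-1, ... as in BBLogger.bbdebug.
    def bbdebug(self, level, msg, *args, **kwargs):
        return self.log(logging.DEBUG - level + 1, msg, *args, **kwargs)

    def plain(self, msg, *args, **kwargs):
        return self.log(logging.INFO + 1, msg, *args, **kwargs)

# Register the subclass before the first getLogger() call.
logging.setLoggerClass(LevelledLogger)
log = logging.getLogger("Demo")
log.setLevel(logging.DEBUG - 2)         # let the extra-verbose levels through
log.addHandler(logging.StreamHandler())
log.bbdebug(2, "very verbose detail")
log.plain("always shown")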
@@ -25,54 +25,38 @@
#
#Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import sys
import logging
import shlex
import bb
import bb.msg
import bb.process
from contextlib import nested
from bb import data, event, utils
from bb import data, event, mkdirhier, utils
import bb, os, sys

bblogger = logging.getLogger('BitBake')
logger = logging.getLogger('BitBake.Build')

NULL = open(os.devnull, 'r+')


# When we execute a python function we'd like certain things
# When we execute a python function we'd like certain things
# in all namespaces, hence we add them to __builtins__
# If we do not do this and use the exec globals, they will
# not be available to subfunctions.
__builtins__['bb'] = bb
__builtins__['os'] = os

# events
class FuncFailed(Exception):
def __init__(self, name = None, logfile = None):
self.logfile = logfile
self.name = name
if name:
self.msg = 'Function failed: %s' % name
else:
self.msg = "Function failed"
"""
Executed function failed
First parameter a message
Second parameter is a logfile (optional)
"""

def __str__(self):
if self.logfile and os.path.exists(self.logfile):
msg = ("%s (see %s for further information)" %
(self.msg, self.logfile))
else:
msg = self.msg
return msg
class EventException(Exception):
"""Exception which is associated with an Event."""

def __init__(self, msg, event):
self.args = msg, event

class TaskBase(event.Event):
"""Base class for task events"""

def __init__(self, t, d ):
self._task = t
self._package = d.getVar("PF", True)
self._package = bb.data.getVar("PF", d, 1)
event.Event.__init__(self)
self._message = "package %s: task %s: %s" % (d.getVar("PF", True), t, self.getDisplayName())
self._message = "package %s: task %s: %s" % (bb.data.getVar("PF", d, 1), t, bb.event.getName(self)[4:])

def getTask(self):
return self._task
@@ -80,9 +64,6 @@ class TaskBase(event.Event):
def setTask(self, task):
self._task = task

def getDisplayName(self):
return bb.event.getName(self)[4:]

task = property(getTask, setTask, None, "task property")

class TaskStarted(TaskBase):
@@ -93,232 +74,77 @@ class TaskSucceeded(TaskBase):

class TaskFailed(TaskBase):
"""Task execution failed"""

def __init__(self, task, logfile, metadata, errprinted = False):
def __init__(self, msg, logfile, t, d ):
self.logfile = logfile
self.errprinted = errprinted
super(TaskFailed, self).__init__(task, metadata)
self.msg = msg
TaskBase.__init__(self, t, d)

class TaskFailedSilent(TaskBase):
"""Task execution failed (silently)"""
def __init__(self, task, logfile, metadata):
self.logfile = logfile
super(TaskFailedSilent, self).__init__(task, metadata)

def getDisplayName(self):
# Don't need to tell the user it was silent
return "Failed"

class TaskInvalid(TaskBase):

def __init__(self, task, metadata):
super(TaskInvalid, self).__init__(task, metadata)
self._message = "No such task '%s'" % task


class LogTee(object):
def __init__(self, logger, outfile):
self.outfile = outfile
self.logger = logger
self.name = self.outfile.name

def write(self, string):
self.logger.plain(string)
self.outfile.write(string)

def __enter__(self):
self.outfile.__enter__()
return self

def __exit__(self, *excinfo):
self.outfile.__exit__(*excinfo)

def __repr__(self):
return '<LogTee {0}>'.format(self.name)
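LogTee above is a small file-like adapter: everything written to it goes both to a logger and to the underlying file, which is how task output can reach the console and the task logfile at once. A generic sketch of the tee idea using only the standard library:

import io
import logging

class Tee:
    """Write-through wrapper: every write() hits the log and the file."""
    def __init__(self, log, outfile):
        self.log = log
        self.outfile = outfile

    def write(self, text):
        if text.strip():                        # skip bare newlines in the log
            self.log.info(text.rstrip("\n"))
        self.outfile.write(text)

    def flush(self):
        self.outfile.flush()

logging.basicConfig(level=logging.INFO)
buf = io.StringIO()
tee = Tee(logging.getLogger("demo"), buf)
print("hello from the task", file=tee)          # lands in both places
assert "hello from the task" in buf.getvalue()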
class InvalidTask(TaskBase):
"""Invalid Task"""

# functions

def exec_func(func, d, dirs = None):
"""Execute a BB 'function'"""

body = data.getVar(func, d)
if not body:
if body is None:
logger.warn("Function %s doesn't exist", func)
return

flags = data.getVarFlags(func, d)
cleandirs = flags.get('cleandirs')
if cleandirs:
for cdir in data.expand(cleandirs, d).split():
bb.utils.remove(cdir, True)
for item in ['deps', 'check', 'interactive', 'python', 'cleandirs', 'dirs', 'lockfiles', 'fakeroot']:
if not item in flags:
flags[item] = None

if dirs is None:
dirs = flags.get('dirs')
if dirs:
dirs = data.expand(dirs, d).split()
ispython = flags['python']

cleandirs = (data.expand(flags['cleandirs'], d) or "").split()
for cdir in cleandirs:
os.system("rm -rf %s" % cdir)

if dirs:
for adir in dirs:
bb.utils.mkdirhier(adir)
dirs = data.expand(dirs, d)
else:
dirs = (data.expand(flags['dirs'], d) or "").split()
for adir in dirs:
mkdirhier(adir)

if len(dirs) > 0:
adir = dirs[-1]
else:
adir = data.getVar('B', d, 1)
bb.utils.mkdirhier(adir)

ispython = flags.get('python')

lockflag = flags.get('lockfiles')
if lockflag:
lockfiles = [data.expand(f, d) for f in lockflag.split()]
else:
lockfiles = None

tempdir = data.getVar('T', d, 1)
bb.utils.mkdirhier(tempdir)
runfile = os.path.join(tempdir, 'run.{0}.{1}'.format(func, os.getpid()))

with bb.utils.fileslocked(lockfiles):
if ispython:
exec_func_python(func, d, runfile, cwd=adir)
else:
exec_func_shell(func, d, runfile, cwd=adir)

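exec_func above follows a fixed sequence: expand the cleandirs/dirs varflags, create the listed directories and treat the last one as the working directory, take any lockfiles, then run the body either as Python or as a generated shell script. A stripped-down sketch of that control flow, using a plain dict in place of the BitBake datastore and omitting locking:

import os
import subprocess
import tempfile

def run_func(name, body, flags, is_python=False):
    """Toy exec_func: honour 'dirs', cd to the last one, run the body."""
    dirs = (flags.get("dirs") or "").split()
    for d in dirs:
        os.makedirs(d, exist_ok=True)            # like bb.utils.mkdirhier
    cwd = dirs[-1] if dirs else os.getcwd()      # last dir is the work dir
    if is_python:
        exec(compile(body, name, "exec"), {"os": os})
    else:
        # Shell bodies are written to a run file and executed with sh -e.
        with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
            f.write("#!/bin/sh -e\n" + body + "\n")
            runfile = f.name
        subprocess.run(["sh", "-e", runfile], cwd=cwd, check=True)

run_func("do_demo", "echo hello", {"dirs": "tmpdemo/logs tmpdemo/work"})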
_functionfmt = """
def {function}(d):
{body}

{function}(d)
"""
logformatter = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
def exec_func_python(func, d, runfile, cwd=None):
"""Execute a python BB 'function'"""

bbfile = d.getVar('FILE', True)
code = _functionfmt.format(function=func, body=d.getVar(func, True))
bb.utils.mkdirhier(os.path.dirname(runfile))
with open(runfile, 'w') as script:
script.write(code)

if cwd:
try:
olddir = os.getcwd()
except OSError:
olddir = None
os.chdir(cwd)

# Save current directory
try:
comp = utils.better_compile(code, func, bbfile)
utils.better_exec(comp, {"d": d}, code, bbfile)
except:
if sys.exc_info()[0] in (bb.parse.SkipPackage, bb.build.FuncFailed):
raise
prevdir = os.getcwd()
except OSError:
prevdir = data.getVar('TOPDIR', d, True)

raise FuncFailed(func, None)
finally:
if cwd and olddir:
try:
os.chdir(olddir)
except OSError:
pass
# Setup logfiles
t = data.getVar('T', d, 1)
if not t:
bb.msg.fatal(bb.msg.domain.Build, "T not set")
mkdirhier(t)
logfile = "%s/log.%s.%s" % (t, func, str(os.getpid()))
runfile = "%s/run.%s.%s" % (t, func, str(os.getpid()))

def exec_func_shell(function, d, runfile, cwd=None):
"""Execute a shell function from the metadata

Note on directory behavior. The 'dirs' varflag should contain a list
of the directories you need created prior to execution. The last
item in the list is where we will chdir/cd to.
"""

# Don't let the emitted shell script override PWD
d.delVarFlag('PWD', 'export')

with open(runfile, 'w') as script:
script.write('#!/bin/sh -e\n')
data.emit_func(function, script, d)

if bb.msg.loggerVerboseLogs:
script.write("set -x\n")
if cwd:
script.write("cd %s\n" % cwd)
script.write("%s\n" % function)

os.chmod(runfile, 0775)

cmd = runfile
if d.getVarFlag(function, 'fakeroot'):
fakerootcmd = d.getVar('FAKEROOT', True)
if fakerootcmd:
cmd = [fakerootcmd, runfile]

if bb.msg.loggerDefaultVerbose:
logfile = LogTee(logger, sys.stdout)
else:
logfile = sys.stdout

try:
bb.process.run(cmd, shell=False, stdin=NULL, log=logfile)
except bb.process.CmdError:
logfn = d.getVar('BB_LOGFILE', True)
raise FuncFailed(function, logfn)

def _task_data(fn, task, d):
localdata = data.createCopy(d)
localdata.setVar('BB_FILENAME', fn)
localdata.setVar('BB_CURRENTTASK', task[3:])
localdata.setVar('OVERRIDES', 'task-%s:%s' %
(task[3:], d.getVar('OVERRIDES', False)))
localdata.finalize()
data.expandKeys(localdata)
return localdata

def _exec_task(fn, task, d, quieterr):
"""Execute a BB 'task'

Execution of a task involves a bit more setup than executing a function,
running it with its own local metadata, and with some useful variables set.
"""
if not data.getVarFlag(task, 'task', d):
event.fire(TaskInvalid(task, d), d)
logger.error("No such task: %s" % task)
return 1

logger.debug(1, "Executing task %s", task)

localdata = _task_data(fn, task, d)
tempdir = localdata.getVar('T', True)
if not tempdir:
bb.fatal("T variable not set, unable to build")

bb.utils.mkdirhier(tempdir)
loglink = os.path.join(tempdir, 'log.{0}'.format(task))
logbase = 'log.{0}.{1}'.format(task, os.getpid())
logfn = os.path.join(tempdir, logbase)
if loglink:
bb.utils.remove(loglink)

try:
os.symlink(logbase, loglink)
except OSError:
pass

prefuncs = localdata.getVarFlag(task, 'prefuncs', expand=True)
postfuncs = localdata.getVarFlag(task, 'postfuncs', expand=True)

class ErrorCheckHandler(logging.Handler):
def __init__(self):
self.triggered = False
logging.Handler.__init__(self, logging.ERROR)
def emit(self, record):
self.triggered = True
# Change to correct directory (if specified)
if adir and os.access(adir, os.F_OK):
os.chdir(adir)

# Handle logfiles
si = file('/dev/null', 'r')
try:
logfile = file(logfn, 'w')
except OSError:
logger.exception("Opening log file '%s'", logfn)
if bb.msg.debug_level['default'] > 0 or ispython:
so = os.popen("tee \"%s\"" % logfile, "w")
else:
so = file(logfile, 'w')
except OSError, e:
bb.msg.error(bb.msg.domain.Build, "opening log file: %s" % e)
pass

se = so

# Dup the existing fds so we don't lose them
osi = [os.dup(sys.stdin.fileno()), sys.stdin.fileno()]
oso = [os.dup(sys.stdout.fileno()), sys.stdout.fileno()]
@@ -326,109 +152,182 @@ def _exec_task(fn, task, d, quieterr):

# Replace those fds with our own
os.dup2(si.fileno(), osi[1])
os.dup2(logfile.fileno(), oso[1])
os.dup2(logfile.fileno(), ose[1])
os.dup2(so.fileno(), oso[1])
os.dup2(se.fileno(), ose[1])

# Ensure python logging goes to the logfile
handler = logging.StreamHandler(logfile)
handler.setFormatter(logformatter)
# Always enable full debug output into task logfiles
handler.setLevel(logging.DEBUG - 2)
bblogger.addHandler(handler)
locks = []
lockfiles = (data.expand(flags['lockfiles'], d) or "").split()
for lock in lockfiles:
locks.append(bb.utils.lockfile(lock))

errchk = ErrorCheckHandler()
bblogger.addHandler(errchk)

localdata.setVar('BB_LOGFILE', logfn)

event.fire(TaskStarted(task, localdata), localdata)
try:
for func in (prefuncs or '').split():
exec_func(func, localdata)
exec_func(task, localdata)
for func in (postfuncs or '').split():
exec_func(func, localdata)
except FuncFailed as exc:
if quieterr:
event.fire(TaskFailedSilent(task, logfn, localdata), localdata)
# Run the function
if ispython:
exec_func_python(func, d, runfile, logfile)
else:
errprinted = errchk.triggered
logger.error(str(exc))
event.fire(TaskFailed(task, logfn, localdata, errprinted), localdata)
return 1
finally:
sys.stdout.flush()
sys.stderr.flush()
exec_func_shell(func, d, runfile, logfile, flags)

bblogger.removeHandler(handler)
# Restore original directory
try:
os.chdir(prevdir)
except:
pass

finally:

# Unlock any lockfiles
for lock in locks:
bb.utils.unlockfile(lock)

# Restore the backup fds
os.dup2(osi[0], osi[1])
os.dup2(oso[0], oso[1])
os.dup2(ose[0], ose[1])

# Close our logs
si.close()
so.close()
se.close()

if os.path.exists(logfile) and os.path.getsize(logfile) == 0:
bb.msg.debug(2, bb.msg.domain.Build, "Zero size logfile %s, removing" % logfile)
os.remove(logfile)

# Close the backup fds
os.close(osi[0])
os.close(oso[0])
os.close(ose[0])
si.close()

logfile.close()
if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
logger.debug(2, "Zero size logfn %s, removing", logfn)
bb.utils.remove(logfn)
bb.utils.remove(loglink)
event.fire(TaskSucceeded(task, localdata), localdata)
def exec_func_python(func, d, runfile, logfile):
"""Execute a python BB 'function'"""
import re, os

if not localdata.getVarFlag(task, 'nostamp') and not localdata.getVarFlag(task, 'selfstamp'):
make_stamp(task, localdata)
bbfile = bb.data.getVar('FILE', d, 1)
tmp = "def " + func + "():\n%s" % data.getVar(func, d)
tmp += '\n' + func + '()'

return 0
f = open(runfile, "w")
f.write(tmp)
comp = utils.better_compile(tmp, func, bbfile)
g = {} # globals
g['d'] = d
try:
utils.better_exec(comp, g, tmp, bbfile)
except:
(t,value,tb) = sys.exc_info()

def exec_task(fn, task, d):
try:
quieterr = False
if d.getVarFlag(task, "quieterrors") is not None:
quieterr = True
if t in [bb.parse.SkipPackage, bb.build.FuncFailed]:
raise
bb.msg.error(bb.msg.domain.Build, "Function %s failed" % func)
raise FuncFailed("function %s failed" % func, logfile)

return _exec_task(fn, task, d, quieterr)
except Exception:
from traceback import format_exc
if not quieterr:
logger.error("Build of %s failed" % (task))
logger.error(format_exc())
failedevent = TaskFailed(task, None, d, True)
event.fire(failedevent, d)
return 1
def exec_func_shell(func, d, runfile, logfile, flags):
"""Execute a shell BB 'function'. Returns true if execution was successful.

def stamp_internal(taskname, d, file_name):
For this, it creates a bash shell script in the tmp directory, writes the local
data into it and finally executes. The output of the shell will end in a log file and stdout.

Note on directory behavior. The 'dirs' varflag should contain a list
of the directories you need created prior to execution. The last
item in the list is where we will chdir/cd to.
"""
Internal stamp helper function
Makes sure the stamp directory exists
Returns the stamp path+filename

In the bitbake core, d can be a CacheData and file_name will be set.
When called in task context, d will be a data store, file_name will not be set
"""
taskflagname = taskname
if taskname.endswith("_setscene") and taskname != "do_setscene":
taskflagname = taskname.replace("_setscene", "")
deps = flags['deps']
check = flags['check']
if check in globals():
if globals()[check](func, deps):
return

if file_name:
stamp = d.stamp_base[file_name].get(taskflagname) or d.stamp[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
f = open(runfile, "w")
f.write("#!/bin/sh -e\n")
if bb.msg.debug_level['default'] > 0: f.write("set -x\n")
data.emit_env(f, d)

f.write("cd %s\n" % os.getcwd())
if func: f.write("%s\n" % func)
f.close()
os.chmod(runfile, 0775)
if not func:
bb.msg.error(bb.msg.domain.Build, "Function not specified")
raise FuncFailed("Function not specified for exec_func_shell")

# execute function
if flags['fakeroot']:
maybe_fakeroot = "PATH=\"%s\" fakeroot " % bb.data.getVar("PATH", d, 1)
else:
stamp = d.getVarFlag(taskflagname, 'stamp-base', True) or d.getVar('STAMP', True)
file_name = d.getVar('BB_FILENAME', True)
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""
maybe_fakeroot = ''
lang_environment = "LC_ALL=C "
ret = os.system('%s%ssh -e %s' % (lang_environment, maybe_fakeroot, runfile))

if not stamp:
if ret == 0:
return

stamp = bb.parse.siggen.stampfile(stamp, file_name, taskname, extrainfo)
bb.msg.error(bb.msg.domain.Build, "Function %s failed" % func)
raise FuncFailed("function %s failed" % func, logfile)

bb.utils.mkdirhier(os.path.dirname(stamp))

def exec_task(task, d):
"""Execute a BB 'task'

The primary difference between executing a task versus executing
a function is that a task exists in the task digraph, and therefore
has dependencies amongst other tasks."""

# Check whether this is a valid task
if not data.getVarFlag(task, 'task', d):
raise EventException("No such task", InvalidTask(task, d))

try:
bb.msg.debug(1, bb.msg.domain.Build, "Executing task %s" % task)
old_overrides = data.getVar('OVERRIDES', d, 0)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', 'task-%s:%s' % (task[3:], old_overrides), localdata)
data.update_data(localdata)
data.expandKeys(localdata)
event.fire(TaskStarted(task, localdata), localdata)
exec_func(task, localdata)
event.fire(TaskSucceeded(task, localdata), localdata)
except FuncFailed, message:
# Try to extract the optional logfile
try:
(msg, logfile) = message
except:
logfile = None
msg = message
bb.msg.note(1, bb.msg.domain.Build, "Task failed: %s" % message )
failedevent = TaskFailed(msg, logfile, task, d)
event.fire(failedevent, d)
raise EventException("Function failed in task: %s" % message, failedevent)

# make stamp, or cause event and raise exception
if not data.getVarFlag(task, 'nostamp', d) and not data.getVarFlag(task, 'selfstamp', d):
make_stamp(task, d)

def extract_stamp(d, fn):
"""
Extracts stamp format which is either a data dictionary (fn unset)
or a dataCache entry (fn set).
"""
if fn:
return d.stamp[fn]
return data.getVar('STAMP', d, 1)

def stamp_internal(task, d, file_name):
"""
Internal stamp helper function
Removes any stamp for the given task
Makes sure the stamp directory exists
Returns the stamp path+filename
"""
stamp = extract_stamp(d, file_name)
if not stamp:
return
stamp = "%s.%s" % (stamp, task)
mkdirhier(os.path.dirname(stamp))
# Remove the file and recreate to force timestamp
# change on broken NFS filesystems
if os.access(stamp, os.F_OK):
os.remove(stamp)
return stamp

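The stamp helpers above mark task completion with a zero-length file named <STAMP>.<task>; the file is deleted and recreated rather than merely touched so the timestamp change is visible even on NFS mounts with unreliable attribute caching. A minimal sketch of the same pattern, with invented paths:

import os

def make_demo_stamp(stampfile):
    """Delete-then-create so the mtime visibly changes, even on flaky NFS."""
    os.makedirs(os.path.dirname(stampfile), exist_ok=True)
    if os.access(stampfile, os.F_OK):
        os.remove(stampfile)
    open(stampfile, "w").close()    # zero-length marker; only mtime matters

def is_up_to_date(stampfile, inputs):
    """A task can be skipped if its stamp is newer than all of its inputs."""
    if not os.path.exists(stampfile):
        return False
    stamp_mtime = os.path.getmtime(stampfile)
    return all(os.path.getmtime(p) <= stamp_mtime for p in inputs)

make_demo_stamp("tmp/stamps/demo.do_compile")
assert is_up_to_date("tmp/stamps/demo.do_compile", [])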
def make_stamp(task, d, file_name = None):
@@ -437,33 +336,16 @@ def make_stamp(task, d, file_name = None):
(d can be a data dict or dataCache)
"""
stamp = stamp_internal(task, d, file_name)
# Remove the file and recreate to force timestamp
# change on broken NFS filesystems
if stamp:
bb.utils.remove(stamp)
f = open(stamp, "w")
f.close()

# If we're in task context, write out a signature file for each task
# as it completes
if not task.endswith("_setscene") and task != "do_setscene" and not file_name:
file_name = d.getVar('BB_FILENAME', True)
bb.parse.siggen.dump_sigtask(file_name, task, d.getVar('STAMP', True), True)

def del_stamp(task, d, file_name = None):
"""
Removes a stamp for a given task
(d can be a data dict or dataCache)
"""
stamp = stamp_internal(task, d, file_name)
bb.utils.remove(stamp)

def stampfile(taskname, d, file_name = None):
"""
Return the stamp for a given task
(d can be a data dict or dataCache)
"""
return stamp_internal(taskname, d, file_name)
stamp_internal(task, d, file_name)

def add_tasks(tasklist, d):
task_deps = data.getVar('_task_deps', d)
@@ -481,7 +363,7 @@ def add_tasks(tasklist, d):
if not task in task_deps['tasks']:
task_deps['tasks'].append(task)

flags = data.getVarFlags(task, d)
flags = data.getVarFlags(task, d)
def getTask(name):
if not name in task_deps:
task_deps[name] = {}
@@ -493,9 +375,6 @@ def add_tasks(tasklist, d):
getTask('rdeptask')
getTask('recrdeptask')
getTask('nostamp')
getTask('fakeroot')
getTask('noexec')
getTask('umask')
task_deps['parents'][task] = []
for dep in flags['deps']:
dep = data.expand(dep, d)
@@ -510,3 +389,4 @@ def remove_task(task, kill, d):
If kill is 1, also remove tasks that depend on this task."""

data.delVarFlag(task, 'task', d)


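add_tasks above records each task plus its deps and related varflags into the _task_deps structure, whose 'parents' map the run queue later walks in dependency order. A toy sketch of building and topologically ordering such a parents map, with a hypothetical task graph:

from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Same shape as _task_deps['parents']: task -> tasks that must run first.
parents = {
    "do_fetch":   [],
    "do_unpack":  ["do_fetch"],
    "do_compile": ["do_unpack"],
    "do_install": ["do_compile"],
}

order = list(TopologicalSorter(parents).static_order())
assert order.index("do_fetch") < order.index("do_compile")
print(order)    # e.g. ['do_fetch', 'do_unpack', 'do_compile', 'do_install']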
@@ -28,328 +28,127 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.


import os
import logging
from collections import defaultdict
import os, re
import bb.data
import bb.utils

logger = logging.getLogger("BitBake.Cache")

try:
import cPickle as pickle
except ImportError:
import pickle
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")

__cache_version__ = "143"
__cache_version__ = "131"

def getCacheFile(path, filename, data_hash):
return os.path.join(path, filename + "." + data_hash)

# RecipeInfoCommon defines common data retrieving methods
# from meta data for caches. CoreRecipeInfo as well as other
# Extra RecipeInfo needs to inherit this class
class RecipeInfoCommon(object):

@classmethod
def listvar(cls, var, metadata):
return cls.getvar(var, metadata).split()

@classmethod
def intvar(cls, var, metadata):
return int(cls.getvar(var, metadata) or 0)

@classmethod
def depvar(cls, var, metadata):
return bb.utils.explode_deps(cls.getvar(var, metadata))

@classmethod
def pkgvar(cls, var, packages, metadata):
return dict((pkg, cls.depvar("%s_%s" % (var, pkg), metadata))
for pkg in packages)

@classmethod
def taskvar(cls, var, tasks, metadata):
return dict((task, cls.getvar("%s_task-%s" % (var, task), metadata))
for task in tasks)

@classmethod
def flaglist(cls, flag, varlist, metadata):
return dict((var, metadata.getVarFlag(var, flag, True))
for var in varlist)

@classmethod
def getvar(cls, var, metadata):
return metadata.getVar(var, True) or ''


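RecipeInfoCommon above is a family of typed accessors over the metadata store: listvar splits on whitespace, intvar defaults to 0, depvar explodes dependency strings, and pkgvar/taskvar fan a variable out per package or per task. A sketch of the same pattern over a plain dict standing in for the datastore:

class InfoCommon:
    """Typed accessors over a dict-backed metadata store (illustrative)."""
    @classmethod
    def getvar(cls, var, md):
        return md.get(var) or ''

    @classmethod
    def listvar(cls, var, md):
        return cls.getvar(var, md).split()

    @classmethod
    def intvar(cls, var, md):
        return int(cls.getvar(var, md) or 0)

    @classmethod
    def pkgvar(cls, var, packages, md):
        # One entry per package: RDEPENDS_foo, RDEPENDS_foo-dev, ...
        return {pkg: cls.listvar("%s_%s" % (var, pkg), md) for pkg in packages}

md = {"PACKAGES": "foo foo-dev", "RDEPENDS_foo": "libbar"}
pkgs = InfoCommon.listvar("PACKAGES", md)
assert InfoCommon.pkgvar("RDEPENDS", pkgs, md) == {"foo": ["libbar"], "foo-dev": []}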
class CoreRecipeInfo(RecipeInfoCommon):
__slots__ = ()

cachefile = "bb_cache.dat"

def __init__(self, filename, metadata):
self.file_depends = metadata.getVar('__depends', False)
self.timestamp = bb.parse.cached_mtime(filename)
self.variants = self.listvar('__VARIANTS', metadata) + ['']
self.appends = self.listvar('__BBAPPEND', metadata)
self.nocache = self.getvar('__BB_DONT_CACHE', metadata)

self.skipreason = self.getvar('__SKIPPED', metadata)
if self.skipreason:
self.pn = self.getvar('PN', metadata) or bb.parse.BBHandler.vars_from_file(filename,metadata)[0]
self.skipped = True
self.provides = self.depvar('PROVIDES', metadata)
self.rprovides = self.depvar('RPROVIDES', metadata)
return

self.tasks = metadata.getVar('__BBTASKS', False)

self.pn = self.getvar('PN', metadata)
self.packages = self.listvar('PACKAGES', metadata)
if not self.pn in self.packages:
self.packages.append(self.pn)

self.basetaskhashes = self.taskvar('BB_BASEHASH', self.tasks, metadata)
self.hashfilename = self.getvar('BB_HASHFILENAME', metadata)

self.file_depends = metadata.getVar('__depends', False)
self.task_deps = metadata.getVar('_task_deps', False) or {'tasks': [], 'parents': {}}

self.skipped = False
self.pe = self.getvar('PE', metadata)
self.pv = self.getvar('PV', metadata)
self.pr = self.getvar('PR', metadata)
self.defaultpref = self.intvar('DEFAULT_PREFERENCE', metadata)
self.broken = self.getvar('BROKEN', metadata)
self.not_world = self.getvar('EXCLUDE_FROM_WORLD', metadata)
self.stamp = self.getvar('STAMP', metadata)
self.stamp_base = self.flaglist('stamp-base', self.tasks, metadata)
self.stamp_extrainfo = self.flaglist('stamp-extra-info', self.tasks, metadata)
self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
self.depends = self.depvar('DEPENDS', metadata)
self.provides = self.depvar('PROVIDES', metadata)
self.rdepends = self.depvar('RDEPENDS', metadata)
self.rprovides = self.depvar('RPROVIDES', metadata)
self.rrecommends = self.depvar('RRECOMMENDS', metadata)
self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)
self.rdepends_pkg = self.pkgvar('RDEPENDS', self.packages, metadata)
self.rrecommends_pkg = self.pkgvar('RRECOMMENDS', self.packages, metadata)
self.inherits = self.getvar('__inherit_cache', metadata)
self.fakerootenv = self.getvar('FAKEROOTENV', metadata)
self.fakerootdirs = self.getvar('FAKEROOTDIRS', metadata)
self.fakerootnoenv = self.getvar('FAKEROOTNOENV', metadata)

@classmethod
def init_cacheData(cls, cachedata):
# CacheData in Core RecipeInfo Class
cachedata.task_deps = {}
cachedata.pkg_fn = {}
cachedata.pkg_pn = defaultdict(list)
cachedata.pkg_pepvpr = {}
cachedata.pkg_dp = {}

cachedata.stamp = {}
cachedata.stamp_base = {}
cachedata.stamp_extrainfo = {}
cachedata.fn_provides = {}
cachedata.pn_provides = defaultdict(list)
cachedata.all_depends = []

cachedata.deps = defaultdict(list)
cachedata.packages = defaultdict(list)
cachedata.providers = defaultdict(list)
cachedata.rproviders = defaultdict(list)
cachedata.packages_dynamic = defaultdict(list)

cachedata.rundeps = defaultdict(lambda: defaultdict(list))
cachedata.runrecs = defaultdict(lambda: defaultdict(list))
cachedata.possible_world = []
cachedata.universe_target = []
cachedata.hashfn = {}

cachedata.basetaskhash = {}
cachedata.inherits = {}
cachedata.fakerootenv = {}
cachedata.fakerootnoenv = {}
cachedata.fakerootdirs = {}

def add_cacheData(self, cachedata, fn):
cachedata.task_deps[fn] = self.task_deps
cachedata.pkg_fn[fn] = self.pn
cachedata.pkg_pn[self.pn].append(fn)
cachedata.pkg_pepvpr[fn] = (self.pe, self.pv, self.pr)
cachedata.pkg_dp[fn] = self.defaultpref
cachedata.stamp[fn] = self.stamp
cachedata.stamp_base[fn] = self.stamp_base
cachedata.stamp_extrainfo[fn] = self.stamp_extrainfo

provides = [self.pn]
for provide in self.provides:
if provide not in provides:
provides.append(provide)
cachedata.fn_provides[fn] = provides

for provide in provides:
cachedata.providers[provide].append(fn)
if provide not in cachedata.pn_provides[self.pn]:
cachedata.pn_provides[self.pn].append(provide)

for dep in self.depends:
if dep not in cachedata.deps[fn]:
cachedata.deps[fn].append(dep)
if dep not in cachedata.all_depends:
cachedata.all_depends.append(dep)

rprovides = self.rprovides
for package in self.packages:
cachedata.packages[package].append(fn)
rprovides += self.rprovides_pkg[package]

for rprovide in rprovides:
cachedata.rproviders[rprovide].append(fn)

for package in self.packages_dynamic:
cachedata.packages_dynamic[package].append(fn)

# Build hash of runtime depends and recommends
for package in self.packages + [self.pn]:
cachedata.rundeps[fn][package] = list(self.rdepends) + self.rdepends_pkg[package]
cachedata.runrecs[fn][package] = list(self.rrecommends) + self.rrecommends_pkg[package]

# Collect files we may need for possible world-dep
# calculations
if not self.broken and not self.not_world:
cachedata.possible_world.append(fn)

# create a collection of all targets for sanity checking
# tasks, such as upstream versions, license, and tools for
# task and image creation.
cachedata.universe_target.append(self.pn)

cachedata.hashfn[fn] = self.hashfilename
for task, taskhash in self.basetaskhashes.iteritems():
identifier = '%s.%s' % (fn, task)
cachedata.basetaskhash[identifier] = taskhash

cachedata.inherits[fn] = self.inherits
cachedata.fakerootenv[fn] = self.fakerootenv
cachedata.fakerootnoenv[fn] = self.fakerootnoenv
cachedata.fakerootdirs[fn] = self.fakerootdirs



class Cache(object):
class Cache:
"""
BitBake Cache implementation
"""
def __init__(self, cooker):

def __init__(self, data, data_hash, caches_array):
# Pass caches_array information into Cache Constructor
# It will be used later for deciding whether we
# need extra cache file dump/load support
self.caches_array = caches_array
self.cachedir = data.getVar("CACHE", True)
self.clean = set()
self.checked = set()

self.cachedir = bb.data.getVar("CACHE", cooker.configuration.data, True)
self.clean = {}
self.checked = {}
self.depends_cache = {}
self.data = None
self.data_fn = None
self.cacheclean = True
self.data_hash = data_hash

if self.cachedir in [None, '']:
self.has_cache = False
logger.info("Not using a cache. "
"Set CACHE = <directory> to enable.")
bb.msg.note(1, bb.msg.domain.Cache, "Not using a cache. Set CACHE = <directory> to enable.")
return

self.has_cache = True
self.cachefile = getCacheFile(self.cachedir, "bb_cache.dat", self.data_hash)
self.cachefile = os.path.join(self.cachedir,"bb_cache.dat")

logger.debug(1, "Using cache in '%s'", self.cachedir)
bb.utils.mkdirhier(self.cachedir)
bb.msg.debug(1, bb.msg.domain.Cache, "Using cache in '%s'" % self.cachedir)
try:
os.stat( self.cachedir )
except OSError:
bb.mkdirhier( self.cachedir )

cache_ok = True
if self.caches_array:
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
cache_ok = cache_ok and os.path.exists(cachefile)
cache_class.init_cacheData(self)
if cache_ok:
self.load_cachefile()
elif os.path.isfile(self.cachefile):
logger.info("Out of date cache found, rebuilding...")
# If any of configuration.data's dependencies are newer than the
# cache there isn't even any point in loading it...
newest_mtime = 0
deps = bb.data.getVar("__depends", cooker.configuration.data, True)
for f,old_mtime in deps:
if old_mtime > newest_mtime:
newest_mtime = old_mtime

def load_cachefile(self):
# Firstly, using core cache file information for
# valid checking
with open(self.cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
if bb.parse.cached_mtime_noerror(self.cachefile) >= newest_mtime:
try:
cache_ver = pickled.load()
bitbake_ver = pickled.load()
except Exception:
logger.info('Invalid cache, rebuilding...')
return
p = pickle.Unpickler(file(self.cachefile, "rb"))
self.depends_cache, version_data = p.load()
if version_data['CACHE_VER'] != __cache_version__:
raise ValueError, 'Cache Version Mismatch'
if version_data['BITBAKE_VER'] != bb.__version__:
raise ValueError, 'Bitbake Version Mismatch'
except EOFError:
bb.msg.note(1, bb.msg.domain.Cache, "Truncated cache found, rebuilding...")
self.depends_cache = {}
except:
bb.msg.note(1, bb.msg.domain.Cache, "Invalid cache found, rebuilding...")
self.depends_cache = {}
else:
try:
os.stat( self.cachefile )
bb.msg.note(1, bb.msg.domain.Cache, "Out of date cache found, rebuilding...")
except OSError:
pass

if cache_ver != __cache_version__:
logger.info('Cache version mismatch, rebuilding...')
return
elif bitbake_ver != bb.__version__:
logger.info('Bitbake version mismatch, rebuilding...')
return


cachesize = 0
previous_progress = 0
previous_percent = 0

# Calculate the correct cachesize of all those cache files
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
cachesize += os.fstat(cachefile.fileno()).st_size

bb.event.fire(bb.event.CacheLoadStarted(cachesize), self.data)
def getVar(self, var, fn, exp = 0):
"""
Gets the value of a variable
(similar to getVar in the data class)

for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
while cachefile:
try:
key = pickled.load()
value = pickled.load()
except Exception:
break
if self.depends_cache.has_key(key):
self.depends_cache[key].append(value)
else:
self.depends_cache[key] = [value]
# only fire events on even percentage boundaries
current_progress = cachefile.tell() + previous_progress
current_percent = 100 * current_progress / cachesize
if current_percent > previous_percent:
previous_percent = current_percent
bb.event.fire(bb.event.CacheLoadProgress(current_progress, cachesize),
self.data)
There are two scenarios:
1. We have cached data - serve from depends_cache[fn]
2. We're learning what data to cache - serve from data
backend but add a copy of the data to the cache.
"""
if fn in self.clean:
return self.depends_cache[fn][var]

previous_progress += current_progress
if not fn in self.depends_cache:
self.depends_cache[fn] = {}

# Note: the depends cache number corresponds to the number of parsed files.
# The same file has several caches, still regarded as one item in the cache
bb.event.fire(bb.event.CacheLoadCompleted(cachesize,
len(self.depends_cache)),
self.data)
if fn != self.data_fn:
# We're trying to access data in the cache which doesn't exist
# yet setData hasn't been called to setup the right access. Very bad.
bb.msg.error(bb.msg.domain.Cache, "Parsing error data_fn %s and fn %s don't match" % (self.data_fn, fn))


@staticmethod
def virtualfn2realfn(virtualfn):
self.cacheclean = False
result = bb.data.getVar(var, self.data, exp)
self.depends_cache[fn][var] = result
return result

def setData(self, virtualfn, fn, data):
"""
Called to prime bb_cache ready to learn which variables to cache.
Will be followed by calls to self.getVar which aren't cached
but can be fulfilled from self.data.
"""
self.data_fn = virtualfn
self.data = data

# Make sure __depends makes the depends_cache
# If we're a virtual class we need to make sure all our depends are appended
# to the depends of fn.
depends = self.getVar("__depends", virtualfn, True) or []
if "__depends" not in self.depends_cache[fn] or not self.depends_cache[fn]["__depends"]:
self.depends_cache[fn]["__depends"] = depends
for dep in depends:
if dep not in self.depends_cache[fn]["__depends"]:
self.depends_cache[fn]["__depends"].append(dep)

# Make sure the variants always make it into the cache too
self.getVar('__VARIANTS', virtualfn, True)

self.depends_cache[virtualfn]["CACHETIMESTAMP"] = bb.parse.cached_mtime(fn)

def virtualfn2realfn(self, virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
"""
@@ -357,105 +156,81 @@ class Cache(object):
fn = virtualfn
cls = ""
if virtualfn.startswith('virtual:'):
elems = virtualfn.split(':')
cls = ":".join(elems[1:-1])
fn = elems[-1]
cls = virtualfn.split(':', 2)[1]
fn = virtualfn.replace('virtual:' + cls + ':', '')
#bb.msg.debug(2, bb.msg.domain.Cache, "virtualfn2realfn %s to %s %s" % (virtualfn, fn, cls))
return (fn, cls)

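virtualfn2realfn and realfn2virtual above encode a recipe variant into the filename: virtual:<class>:<path> round-trips to (<path>, <class>), and an empty class means the plain recipe. A standalone sketch of that encoding:

def virtualfn2realfn(virtualfn):
    """Split 'virtual:<cls>:<fn>' into (fn, cls); plain names pass through."""
    if virtualfn.startswith('virtual:'):
        elems = virtualfn.split(':')
        return elems[-1], ":".join(elems[1:-1])
    return virtualfn, ""

def realfn2virtual(realfn, cls):
    return realfn if cls == "" else "virtual:%s:%s" % (cls, realfn)

fn, cls = virtualfn2realfn("virtual:native:/meta/recipes/foo_1.0.bb")
assert (fn, cls) == ("/meta/recipes/foo_1.0.bb", "native")
assert realfn2virtual(fn, cls) == "virtual:native:/meta/recipes/foo_1.0.bb"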
@staticmethod
|
||||
def realfn2virtual(realfn, cls):
|
||||
def realfn2virtual(self, realfn, cls):
|
||||
"""
|
||||
Convert a real filename + the associated subclass keyword to a virtual filename
|
||||
"""
|
||||
if cls == "":
|
||||
#bb.msg.debug(2, bb.msg.domain.Cache, "realfn2virtual %s and '%s' to %s" % (realfn, cls, realfn))
|
||||
return realfn
|
||||
#bb.msg.debug(2, bb.msg.domain.Cache, "realfn2virtual %s and %s to %s" % (realfn, cls, "virtual:" + cls + ":" + realfn))
|
||||
return "virtual:" + cls + ":" + realfn
|
||||
|
||||
@classmethod
|
||||
def loadDataFull(cls, virtualfn, appends, cfgData):
|
||||
def loadDataFull(self, virtualfn, cfgData):
|
||||
"""
|
||||
Return a complete set of data for fn.
|
||||
To do this, we need to parse the file.
|
||||
"""
|
||||
|
||||
(fn, virtual) = cls.virtualfn2realfn(virtualfn)
|
||||
(fn, cls) = self.virtualfn2realfn(virtualfn)
|
||||
|
||||
logger.debug(1, "Parsing %s (full)", fn)
|
||||
bb.msg.debug(1, bb.msg.domain.Cache, "Parsing %s (full)" % fn)
|
||||
|
||||
cfgData.setVar("__ONLYFINALISE", virtual or "default")
|
||||
bb_data = cls.load_bbfile(fn, appends, cfgData)
|
||||
return bb_data[virtual]
|
||||
bb_data = self.load_bbfile(fn, cfgData)
|
||||
return bb_data[cls]
|
||||
|
||||
@classmethod
|
||||
def parse(cls, filename, appends, configdata, caches_array):
|
||||
"""Parse the specified filename, returning the recipe information"""
|
||||
infos = []
|
||||
datastores = cls.load_bbfile(filename, appends, configdata)
|
||||
depends = set()
|
||||
for variant, data in sorted(datastores.iteritems(),
|
||||
key=lambda i: i[0],
|
||||
reverse=True):
|
||||
virtualfn = cls.realfn2virtual(filename, variant)
|
||||
depends |= (data.getVar("__depends", False) or set())
|
||||
if depends and not variant:
|
||||
data.setVar("__depends", depends)
|
||||
def loadData(self, fn, cfgData, cacheData):
|
||||
"""
|
||||
Load a subset of data for fn.
|
||||
If the cached data is valid we do nothing,
|
||||
To do this, we need to parse the file and set the system
|
||||
to record the variables accessed.
|
||||
Return the cache status and whether the file was skipped when parsed
|
||||
"""
|
||||
skipped = 0
|
||||
virtuals = 0
|
||||
|
||||
info_array = []
|
||||
for cache_class in caches_array:
|
||||
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
|
||||
info = cache_class(filename, data)
|
||||
info_array.append(info)
|
||||
infos.append((virtualfn, info_array))
|
||||
if fn not in self.checked:
|
||||
self.cacheValidUpdate(fn)
|
||||
|
||||
return infos
|
||||
|
||||
def load(self, filename, appends, configdata):
|
||||
"""Obtain the recipe information for the specified filename,
|
||||
using cached values if available, otherwise parsing.
|
||||
|
||||
Note that if it does parse to obtain the info, it will not
|
||||
automatically add the information to the cache or to your
|
||||
CacheData. Use the add or add_info method to do so after
|
||||
running this, or use loadData instead."""
|
||||
cached = self.cacheValid(filename, appends)
|
||||
if cached:
|
||||
infos = []
|
||||
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
|
||||
info_array = self.depends_cache[filename]
|
||||
for variant in info_array[0].variants:
|
||||
virtualfn = self.realfn2virtual(filename, variant)
|
||||
infos.append((virtualfn, self.depends_cache[virtualfn]))
|
||||
else:
|
||||
logger.debug(1, "Parsing %s", filename)
|
||||
return self.parse(filename, appends, configdata, self.caches_array)
|
||||
|
||||
return cached, infos
|
||||
|
||||
def loadData(self, fn, appends, cfgData, cacheData):
|
||||
"""Load the recipe info for the specified filename,
|
||||
parsing and adding to the cache if necessary, and adding
|
||||
the recipe information to the supplied CacheData instance."""
|
||||
skipped, virtuals = 0, 0
|
||||
|
||||
cached, infos = self.load(fn, appends, cfgData)
|
||||
for virtualfn, info_array in infos:
|
||||
if info_array[0].skipped:
|
||||
logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
|
||||
skipped += 1
|
||||
else:
|
||||
self.add_info(virtualfn, info_array, cacheData, not cached)
|
||||
if self.cacheValid(fn):
|
||||
multi = self.getVar('__VARIANTS', fn, True)
|
||||
for cls in (multi or "").split() + [""]:
|
||||
virtualfn = self.realfn2virtual(fn, cls)
|
||||
if self.depends_cache[virtualfn]["__SKIPPED"]:
|
||||
skipped += 1
|
||||
bb.msg.debug(1, bb.msg.domain.Cache, "Skipping %s" % virtualfn)
|
||||
continue
|
||||
self.handle_data(virtualfn, cacheData)
|
||||
virtuals += 1
|
||||
return True, skipped, virtuals
|
||||
|
||||
return cached, skipped, virtuals
|
||||
bb.msg.debug(1, bb.msg.domain.Cache, "Parsing %s" % fn)
|
||||
|
||||
def cacheValid(self, fn, appends):
|
||||
bb_data = self.load_bbfile(fn, cfgData)
|
||||
|
||||
for data in bb_data:
|
||||
virtualfn = self.realfn2virtual(fn, data)
|
||||
self.setData(virtualfn, fn, bb_data[data])
|
||||
if self.getVar("__SKIPPED", virtualfn, True):
|
||||
skipped += 1
|
||||
bb.msg.debug(1, bb.msg.domain.Cache, "Skipping %s" % virtualfn)
|
||||
else:
|
||||
self.handle_data(virtualfn, cacheData)
|
||||
virtuals += 1
|
||||
return False, skipped, virtuals
|
||||
|
||||
|
||||
def cacheValid(self, fn):
|
||||
"""
|
||||
Is the cache valid for fn?
|
||||
Fast version, no timestamps checked.
|
||||
"""
|
||||
if fn not in self.checked:
|
||||
self.cacheValidUpdate(fn, appends)
|
||||
|
||||
# Is cache enabled?
|
||||
if not self.has_cache:
|
||||
return False
|
||||
@@ -463,7 +238,7 @@ class Cache(object):
|
||||
return True
|
||||
return False
|
||||
|
||||
def cacheValidUpdate(self, fn, appends):
|
||||
def cacheValidUpdate(self, fn):
|
||||
"""
|
||||
Is the cache valid for fn?
|
||||
Make thorough (slower) checks including timestamps.
|
||||
@@ -472,73 +247,56 @@ class Cache(object):
|
||||
if not self.has_cache:
|
||||
return False
|
||||
|
||||
self.checked.add(fn)
|
||||
self.checked[fn] = ""
|
||||
|
||||
# Pretend we're clean so getVar works
|
||||
self.clean[fn] = ""
|
||||
|
||||
# File isn't in depends_cache
|
||||
if not fn in self.depends_cache:
|
||||
logger.debug(2, "Cache: %s is not cached", fn)
|
||||
return False
|
||||
|
||||
mtime = bb.parse.cached_mtime_noerror(fn)
|
||||
|
||||
# Check file still exists
|
||||
if mtime == 0:
|
||||
logger.debug(2, "Cache: %s no longer exists", fn)
|
||||
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s is not cached" % fn)
|
||||
self.remove(fn)
|
||||
return False
|
||||
|
||||
mtime = bb.parse.cached_mtime_noerror(fn)
|
||||
|
||||
# Check file still exists
|
||||
if mtime == 0:
|
||||
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s not longer exists" % fn)
|
||||
            self.remove(fn)
            return False

        info_array = self.depends_cache[fn]
        # Check the file's timestamp
        if mtime != info_array[0].timestamp:
            logger.debug(2, "Cache: %s changed", fn)
        if mtime != self.getVar("CACHETIMESTAMP", fn, True):
            bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s changed" % fn)
            self.remove(fn)
            return False

        # Check dependencies are still valid
        depends = info_array[0].file_depends
        depends = self.getVar("__depends", fn, True)
        if depends:
            for f, old_mtime in depends:
            for f,old_mtime in depends:
                fmtime = bb.parse.cached_mtime_noerror(f)
                # Check if file still exists
                if old_mtime != 0 and fmtime == 0:
                    logger.debug(2, "Cache: %s's dependency %s was removed",
                                 fn, f)
                    self.remove(fn)
                    return False

                if (fmtime != old_mtime):
                    logger.debug(2, "Cache: %s's dependency %s changed",
                                 fn, f)
                    bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s's dependency %s changed" % (fn, f))
                    self.remove(fn)
                    return False

        if appends != info_array[0].appends:
            logger.debug(2, "Cache: appends for %s changed", fn)
            bb.note("%s to %s" % (str(appends), str(info_array[0].appends)))
            self.remove(fn)
            return False
        #bb.msg.debug(2, bb.msg.domain.Cache, "Depends Cache: %s is clean" % fn)
        if not fn in self.clean:
            self.clean[fn] = ""

        invalid = False
        for cls in info_array[0].variants:
            # Mark extended class data as clean too
        multi = self.getVar('__VARIANTS', fn, True)
        for cls in (multi or "").split():
            virtualfn = self.realfn2virtual(fn, cls)
            self.clean.add(virtualfn)
            if virtualfn not in self.depends_cache:
                logger.debug(2, "Cache: %s is not cached", virtualfn)
                invalid = True
            self.clean[virtualfn] = ""

        # If any one of the variants is not present, mark as invalid for all
        if invalid:
            for cls in info_array[0].variants:
                virtualfn = self.realfn2virtual(fn, cls)
                if virtualfn in self.clean:
                    logger.debug(2, "Cache: Removing %s from cache", virtualfn)
                    self.clean.remove(virtualfn)
            if fn in self.clean:
                logger.debug(2, "Cache: Marking %s as not clean", fn)
                self.clean.remove(fn)
            return False

        self.clean.add(fn)
        return True
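
# Aside: the validity test above boils down to comparing recorded mtimes
# against the filesystem. A minimal standalone sketch of the same idea
# (hypothetical helper names, not BitBake's actual API):

import os

def is_entry_valid(path, cached_mtime, cached_deps):
    """Return True if a cached parse of 'path' is still usable.

    cached_deps is a list of (dep_path, dep_mtime) pairs recorded when the
    entry was stored, mirroring file_depends in cacheValidUpdate() above.
    """
    try:
        if os.stat(path).st_mtime != cached_mtime:
            return False                    # the file itself changed
    except OSError:
        return False                        # the file no longer exists
    for dep, old_mtime in cached_deps:
        try:
            dep_mtime = os.stat(dep).st_mtime
        except OSError:
            dep_mtime = 0
        if dep_mtime != old_mtime:
            return False                    # a dependency changed or vanished
    return True
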

    def remove(self, fn):
@@ -546,127 +304,179 @@ class Cache(object):
        Remove a fn from the cache
        Called from the parser in error cases
        """
        bb.msg.debug(1, bb.msg.domain.Cache, "Removing %s from cache" % fn)
        if fn in self.depends_cache:
            logger.debug(1, "Removing %s from cache", fn)
            del self.depends_cache[fn]
        if fn in self.clean:
            logger.debug(1, "Marking %s as unclean", fn)
            self.clean.remove(fn)
            del self.clean[fn]

    def sync(self):
        """
        Save the cache
        Called from the parser when complete (or exiting)
        """
        import copy

        if not self.has_cache:
            return

        if self.cacheclean:
            logger.debug(2, "Cache is clean, not saving.")
            bb.msg.note(1, bb.msg.domain.Cache, "Cache is clean, not saving.")
            return

        file_dict = {}
        pickler_dict = {}
        for cache_class in self.caches_array:
            if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
                cache_class_name = cache_class.__name__
                cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
                file_dict[cache_class_name] = open(cachefile, "wb")
                pickler_dict[cache_class_name] = pickle.Pickler(file_dict[cache_class_name], pickle.HIGHEST_PROTOCOL)

        pickler_dict['CoreRecipeInfo'].dump(__cache_version__)
        pickler_dict['CoreRecipeInfo'].dump(bb.__version__)
        version_data = {}
        version_data['CACHE_VER'] = __cache_version__
        version_data['BITBAKE_VER'] = bb.__version__

        try:
            for key, info_array in self.depends_cache.iteritems():
                for info in info_array:
                    if isinstance(info, RecipeInfoCommon):
                        cache_class_name = info.__class__.__name__
                        pickler_dict[cache_class_name].dump(key)
                        pickler_dict[cache_class_name].dump(info)
        finally:
            for cache_class in self.caches_array:
                if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
                    cache_class_name = cache_class.__name__
                    file_dict[cache_class_name].close()
        cache_data = copy.copy(self.depends_cache)
        for fn in self.depends_cache:
            if '__BB_DONT_CACHE' in self.depends_cache[fn] and self.depends_cache[fn]['__BB_DONT_CACHE']:
                bb.msg.debug(2, bb.msg.domain.Cache, "Not caching %s, marked as not cacheable" % fn)
                del cache_data[fn]
            elif 'PV' in self.depends_cache[fn] and 'SRCREVINACTION' in self.depends_cache[fn]['PV']:
                bb.msg.error(bb.msg.domain.Cache, "Not caching %s as it had SRCREVINACTION in PV. Please report this bug" % fn)
                del cache_data[fn]

        del self.depends_cache
        p = pickle.Pickler(file(self.cachefile, "wb" ), -1 )
        p.dump([cache_data, version_data])
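
# Aside: sync() stores the payload together with version stamps so a stale
# or foreign cache is rejected instead of misread. A compact sketch of that
# save/load round trip (illustrative constant and function names):

import pickle

SKETCH_CACHE_VERSION = 132          # bump to invalidate caches from older code

def save_cache(path, entries):
    with open(path, "wb") as f:
        pickle.dump([entries, SKETCH_CACHE_VERSION], f, pickle.HIGHEST_PROTOCOL)

def load_cache(path):
    try:
        with open(path, "rb") as f:
            entries, version = pickle.load(f)
    except (OSError, EOFError, pickle.UnpicklingError):
        return {}                           # unreadable: start from scratch
    if version != SKETCH_CACHE_VERSION:
        return {}                           # incompatible: discard silently
    return entries
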

    @staticmethod
    def mtime(cachefile):
    def mtime(self, cachefile):
        return bb.parse.cached_mtime_noerror(cachefile)

    def add_info(self, filename, info_array, cacheData, parsed=None):
        if isinstance(info_array[0], CoreRecipeInfo) and (not info_array[0].skipped):
            cacheData.add_from_recipeinfo(filename, info_array)

        if not self.has_cache:
            return

        if (info_array[0].skipped or 'SRCREVINACTION' not in info_array[0].pv) and not info_array[0].nocache:
            if parsed:
                self.cacheclean = False
            self.depends_cache[filename] = info_array

    def add(self, file_name, data, cacheData, parsed=None):
    def handle_data(self, file_name, cacheData):
        """
        Save data we need into the cache
        Save data we need into the cache
        """

        realfn = self.virtualfn2realfn(file_name)[0]
        pn = self.getVar('PN', file_name, True)
        pe = self.getVar('PE', file_name, True) or "0"
        pv = self.getVar('PV', file_name, True)
        if 'SRCREVINACTION' in pv:
            bb.note("Found SRCREVINACTION in PV (%s) or %s. Please report this bug." % (pv, file_name))
        pr = self.getVar('PR', file_name, True)
        dp = int(self.getVar('DEFAULT_PREFERENCE', file_name, True) or "0")
        depends = bb.utils.explode_deps(self.getVar("DEPENDS", file_name, True) or "")
        packages = (self.getVar('PACKAGES', file_name, True) or "").split()
        packages_dynamic = (self.getVar('PACKAGES_DYNAMIC', file_name, True) or "").split()
        rprovides = (self.getVar("RPROVIDES", file_name, True) or "").split()

        info_array = []
        for cache_class in self.caches_array:
            if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
                info_array.append(cache_class(realfn, data))
        self.add_info(file_name, info_array, cacheData, parsed)
        cacheData.task_deps[file_name] = self.getVar("_task_deps", file_name, True)

    @staticmethod
    def load_bbfile(bbfile, appends, config):
        # build PackageName to FileName lookup table
        if pn not in cacheData.pkg_pn:
            cacheData.pkg_pn[pn] = []
        cacheData.pkg_pn[pn].append(file_name)

        cacheData.stamp[file_name] = self.getVar('STAMP', file_name, True)

        # build FileName to PackageName lookup table
        cacheData.pkg_fn[file_name] = pn
        cacheData.pkg_pepvpr[file_name] = (pe,pv,pr)
        cacheData.pkg_dp[file_name] = dp

        provides = [pn]
        for provide in (self.getVar("PROVIDES", file_name, True) or "").split():
            if provide not in provides:
                provides.append(provide)

        # Build forward and reverse provider hashes
        # Forward: virtual -> [filenames]
        # Reverse: PN -> [virtuals]
        if pn not in cacheData.pn_provides:
            cacheData.pn_provides[pn] = []

        cacheData.fn_provides[file_name] = provides
        for provide in provides:
            if provide not in cacheData.providers:
                cacheData.providers[provide] = []
            cacheData.providers[provide].append(file_name)
            if not provide in cacheData.pn_provides[pn]:
                cacheData.pn_provides[pn].append(provide)

        cacheData.deps[file_name] = []
        for dep in depends:
            if not dep in cacheData.deps[file_name]:
                cacheData.deps[file_name].append(dep)
            if not dep in cacheData.all_depends:
                cacheData.all_depends.append(dep)

        # Build reverse hash for PACKAGES, so runtime dependencies
        # can be resolved (RDEPENDS, RRECOMMENDS etc.)
        for package in packages:
            if not package in cacheData.packages:
                cacheData.packages[package] = []
            cacheData.packages[package].append(file_name)
            rprovides += (self.getVar("RPROVIDES_%s" % package, file_name, 1) or "").split()

        for package in packages_dynamic:
            if not package in cacheData.packages_dynamic:
                cacheData.packages_dynamic[package] = []
            cacheData.packages_dynamic[package].append(file_name)

        for rprovide in rprovides:
            if not rprovide in cacheData.rproviders:
                cacheData.rproviders[rprovide] = []
            cacheData.rproviders[rprovide].append(file_name)

        # Build hash of runtime depends and recommends

        if not file_name in cacheData.rundeps:
            cacheData.rundeps[file_name] = {}
        if not file_name in cacheData.runrecs:
            cacheData.runrecs[file_name] = {}

        rdepends = self.getVar('RDEPENDS', file_name, True) or ""
        rrecommends = self.getVar('RRECOMMENDS', file_name, True) or ""
        for package in packages + [pn]:
            if not package in cacheData.rundeps[file_name]:
                cacheData.rundeps[file_name][package] = []
            if not package in cacheData.runrecs[file_name]:
                cacheData.runrecs[file_name][package] = []

            cacheData.rundeps[file_name][package] = rdepends + " " + (self.getVar("RDEPENDS_%s" % package, file_name, True) or "")
            cacheData.runrecs[file_name][package] = rrecommends + " " + (self.getVar("RRECOMMENDS_%s" % package, file_name, True) or "")

        # Collect files we may need for possible world-dep
        # calculations
        if not self.getVar('BROKEN', file_name, True) and not self.getVar('EXCLUDE_FROM_WORLD', file_name, True):
            cacheData.possible_world.append(file_name)

        # Touch this to make sure its in the cache
        self.getVar('__BB_DONT_CACHE', file_name, True)
        self.getVar('__VARIANTS', file_name, True)

    def load_bbfile( self, bbfile , config):
        """
        Load and parse one .bb build file
        Return the data and whether parsing resulted in the file being skipped
        """
        chdir_back = False

        from bb import data, parse
        import bb
        from bb import utils, data, parse, debug, event, fatal

        # expand tmpdir to include this topdir
        data.setVar('TMPDIR', data.getVar('TMPDIR', config, 1) or "", config)
        bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
        oldpath = os.path.abspath(os.getcwd())
        parse.cached_mtime_noerror(bbfile_loc)
        if bb.parse.cached_mtime_noerror(bbfile_loc):
            os.chdir(bbfile_loc)
        bb_data = data.init_db(config)
        # The ConfHandler first looks if there is a TOPDIR and if not
        # then it would call getcwd().
        # Previously, we chdir()ed to bbfile_loc, called the handler
        # and finally chdir()ed back, a couple of thousand times. We now
        # just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
        if not data.getVar('TOPDIR', bb_data):
            chdir_back = True
            data.setVar('TOPDIR', bbfile_loc, bb_data)
        try:
            if appends:
                data.setVar('__BBAPPEND', " ".join(appends), bb_data)
            bb_data = parse.handle(bbfile, bb_data)
            if chdir_back:
                os.chdir(oldpath)
            bb_data = parse.handle(bbfile, bb_data) # read .bb data
            os.chdir(oldpath)
            return bb_data
        except:
            if chdir_back:
                os.chdir(oldpath)
            os.chdir(oldpath)
            raise


def init(cooker):
    """
    The Objective: Cache the minimum amount of data possible yet get to the
    The Objective: Cache the minimum amount of data possible yet get to the
    stage of building packages (i.e. tryBuild) without reparsing any .bb files.

    To do this, we intercept getVar calls and only cache the variables we see
    being accessed. We rely on the cache getVar calls being made for all
    variables bitbake might need to use to reach this stage. For each cached
    To do this, we intercept getVar calls and only cache the variables we see
    being accessed. We rely on the cache getVar calls being made for all
    variables bitbake might need to use to reach this stage. For each cached
    file we need to track:

    * Its mtime
@@ -676,31 +486,48 @@ def init(cooker):
    Files causing parsing errors are evicted from the cache.

    """
    return Cache(cooker.configuration.data, cooker.configuration.data_hash)
    return Cache(cooker)


class CacheData(object):

#============================================================================#
# CacheData
#============================================================================#
class CacheData:
    """
    The data structures we compile from the cached data
    """

    def __init__(self, caches_array):
        self.caches_array = caches_array
        for cache_class in self.caches_array:
            if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
                cache_class.init_cacheData(self)

        # Direct cache variables
    def __init__(self):
        """
        Direct cache variables
        (from Cache.handle_data)
        """
        self.providers = {}
        self.rproviders = {}
        self.packages = {}
        self.packages_dynamic = {}
        self.possible_world = []
        self.pkg_pn = {}
        self.pkg_fn = {}
        self.pkg_pepvpr = {}
        self.pkg_dp = {}
        self.pn_provides = {}
        self.fn_provides = {}
        self.all_depends = []
        self.deps = {}
        self.rundeps = {}
        self.runrecs = {}
        self.task_queues = {}
        self.task_deps = {}
        self.stamp = {}
        self.preferred = {}
        self.tasks = {}
        # Indirect Cache variables (set elsewhere)

        """
        Indirect Cache variables
        (set elsewhere)
        """
        self.ignored_dependencies = []
        self.world_target = set()
        self.bbfile_priority = {}

    def add_from_recipeinfo(self, fn, info_array):
        for info in info_array:
            info.add_cacheData(self, fn)


        self.bbfile_config_priorities = []

@@ -1,57 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Extra RecipeInfo will be all defined in this file. Currently,
# Only Hob (Image Creator) Requests some extra fields. So
# HobRecipeInfo is defined. It's named HobRecipeInfo because it
# is introduced by 'hob'. Users could also introduce other
# RecipeInfo or simply use those already defined RecipeInfo.
# In the following patch, this newly defined new extra RecipeInfo
# will be dynamically loaded and used for loading/saving the extra
# cache fields

# Copyright (C) 2011, Intel Corporation. All rights reserved.

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

from bb.cache import RecipeInfoCommon

class HobRecipeInfo(RecipeInfoCommon):
    __slots__ = ()

    classname = "HobRecipeInfo"
    # please override this member with the correct data cache file
    # such as (bb_cache.dat, bb_extracache_hob.dat)
    cachefile = "bb_extracache_" + classname +".dat"

    def __init__(self, filename, metadata):

        self.summary = self.getvar('SUMMARY', metadata)
        self.license = self.getvar('LICENSE', metadata)
        self.section = self.getvar('SECTION', metadata)
        self.description = self.getvar('DESCRIPTION', metadata)

    @classmethod
    def init_cacheData(cls, cachedata):
        # CacheData in Hob RecipeInfo Class
        cachedata.summary = {}
        cachedata.license = {}
        cachedata.section = {}
        cachedata.description = {}

    def add_cacheData(self, cachedata, fn):
        cachedata.summary[fn] = self.summary
        cachedata.license[fn] = self.license
        cachedata.section[fn] = self.section
        cachedata.description[fn] = self.description
@@ -1,395 +0,0 @@
import ast
import codegen
import logging
import os.path
import bb.utils, bb.data
from itertools import chain
from pysh import pyshyacc, pyshlex, sherrors


logger = logging.getLogger('BitBake.CodeParser')
PARSERCACHE_VERSION = 2

try:
    import cPickle as pickle
except ImportError:
    import pickle
    logger.info('Importing cPickle failed. Falling back to a very slow implementation.')


def check_indent(codestr):
    """If the code is indented, add a top level piece of code to 'remove' the indentation"""

    i = 0
    while codestr[i] in ["\n", "\t", " "]:
        i = i + 1

    if i == 0:
        return codestr

    if codestr[i-1] == "\t" or codestr[i-1] == " ":
        return "if 1:\n" + codestr

    return codestr
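
# Aside: how check_indent() above is meant to be used -- it lets an indented
# fragment (as pulled out of a recipe function body) survive compile(). The
# snippet below is made up for illustration:

import ast

indented = "    x = 1\n    y = x + 1\n"
tree = compile(check_indent(indented), "<string>", "exec", ast.PyCF_ONLY_AST)
print(type(tree))       # an ast.Module wrapping "if 1:" plus the fragment
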

pythonparsecache = {}
shellparsecache = {}
pythonparsecacheextras = {}
shellparsecacheextras = {}


def parser_cachefile(d):
    cachedir = (d.getVar("PERSISTENT_DIR", True) or
                d.getVar("CACHE", True))
    if cachedir in [None, '']:
        return None
    bb.utils.mkdirhier(cachedir)
    cachefile = os.path.join(cachedir, "bb_codeparser.dat")
    logger.debug(1, "Using cache in '%s' for codeparser cache", cachefile)
    return cachefile

def parser_cache_init(d):
    global pythonparsecache
    global shellparsecache

    cachefile = parser_cachefile(d)
    if not cachefile:
        return

    try:
        p = pickle.Unpickler(file(cachefile, "rb"))
        data, version = p.load()
    except:
        return

    if version != PARSERCACHE_VERSION:
        return

    pythonparsecache = data[0]
    shellparsecache = data[1]

def parser_cache_save(d):
    cachefile = parser_cachefile(d)
    if not cachefile:
        return

    glf = bb.utils.lockfile(cachefile + ".lock", shared=True)

    i = os.getpid()
    lf = None
    while not lf:
        shellcache = {}
        pythoncache = {}

        lf = bb.utils.lockfile(cachefile + ".lock." + str(i), retry=False)
        if not lf or os.path.exists(cachefile + "-" + str(i)):
            if lf:
                bb.utils.unlockfile(lf)
            lf = None
            i = i + 1
            continue

        shellcache = shellparsecacheextras
        pythoncache = pythonparsecacheextras

        p = pickle.Pickler(file(cachefile + "-" + str(i), "wb"), -1)
        p.dump([[pythoncache, shellcache], PARSERCACHE_VERSION])

    bb.utils.unlockfile(lf)
    bb.utils.unlockfile(glf)
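
# Aside: the lock dance above gives every writer its own "cachefile-<pid>"
# dump, so parallel parsers never clobber each other; a later merge step
# (parser_cache_savemerge below) folds the pieces together. The core of
# that pattern, sketched with hypothetical names:

import os
import pickle

def save_private_copy(cachefile, payload):
    private = "%s-%d" % (cachefile, os.getpid())    # one file per process
    with open(private, "wb") as f:
        pickle.dump(payload, f, -1)
    return private                                  # merged and unlinked later
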

def internSet(items):
    new = set()
    for i in items:
        new.add(intern(i))
    return new

def parser_cache_savemerge(d):
    cachefile = parser_cachefile(d)
    if not cachefile:
        return

    glf = bb.utils.lockfile(cachefile + ".lock")

    try:
        p = pickle.Unpickler(file(cachefile, "rb"))
        data, version = p.load()
    except (IOError, EOFError):
        data, version = None, None

    if version != PARSERCACHE_VERSION:
        data = [{}, {}]

    for f in [y for y in os.listdir(os.path.dirname(cachefile)) if y.startswith(os.path.basename(cachefile) + '-')]:
        f = os.path.join(os.path.dirname(cachefile), f)
        try:
            p = pickle.Unpickler(file(f, "rb"))
            extradata, version = p.load()
        except (IOError, EOFError):
            extradata, version = [{}, {}], None

        if version != PARSERCACHE_VERSION:
            continue

        for h in extradata[0]:
            if h not in data[0]:
                data[0][h] = extradata[0][h]
        for h in extradata[1]:
            if h not in data[1]:
                data[1][h] = extradata[1][h]
        os.unlink(f)

    # When the dicts are originally created, python calls intern() on the set keys
    # which significantly improves memory usage. Sadly the pickle/unpickle process
    # doesn't call intern() on the keys and results in the same strings being duplicated
    # in memory. This also means pickle will save the same string multiple times in
    # the cache file. By interning the data here, the cache file shrinks dramatically
    # meaning faster load times and the reloaded cache files also consume much less
    # memory. This is worth any performance hit from these loops and the use of the
    # intern() data storage.
    # Python 3.x may behave better in this area
    for h in data[0]:
        data[0][h]["refs"] = internSet(data[0][h]["refs"])
        data[0][h]["execs"] = internSet(data[0][h]["execs"])
    for h in data[1]:
        data[1][h]["execs"] = internSet(data[1][h]["execs"])

    p = pickle.Pickler(file(cachefile, "wb"), -1)
    p.dump([data, PARSERCACHE_VERSION])

    bb.utils.unlockfile(glf)
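
# Aside: what interning buys, in miniature. Equal strings collapse to a
# single object, so pickles of many overlapping sets shrink sharply (the
# code above uses Python 2's builtin intern(); sys.intern is the Python 3
# spelling):

import sys

def intern_set(items):
    return set(sys.intern(i) for i in items)

a = intern_set(["do_" + "compile"])
b = intern_set(["do_compile"])
print(next(iter(a)) is next(iter(b)))   # True: one shared string object
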


Logger = logging.getLoggerClass()
class BufferedLogger(Logger):
    def __init__(self, name, level=0, target=None):
        Logger.__init__(self, name)
        self.setLevel(level)
        self.buffer = []
        self.target = target

    def handle(self, record):
        self.buffer.append(record)

    def flush(self):
        for record in self.buffer:
            self.target.handle(record)
        self.buffer = []

class PythonParser():
    getvars = ("d.getVar", "bb.data.getVar", "data.getVar")
    execfuncs = ("bb.build.exec_func", "bb.build.exec_task")

    def warn(self, func, arg):
        """Warn about calls of bitbake APIs which pass a non-literal
        argument for the variable name, as we're not able to track such
        a reference.
        """

        try:
            funcstr = codegen.to_source(func)
            argstr = codegen.to_source(arg)
        except TypeError:
            self.log.debug(2, 'Failed to convert function and argument to source form')
        else:
            self.log.debug(1, self.unhandled_message % (funcstr, argstr))

    def visit_Call(self, node):
        name = self.called_node_name(node.func)
        if name in self.getvars:
            if isinstance(node.args[0], ast.Str):
                self.var_references.add(node.args[0].s)
            else:
                self.warn(node.func, node.args[0])
        elif name in self.execfuncs:
            if isinstance(node.args[0], ast.Str):
                self.var_execs.add(node.args[0].s)
            else:
                self.warn(node.func, node.args[0])
        elif name and isinstance(node.func, (ast.Name, ast.Attribute)):
            self.execs.add(name)

    def called_node_name(self, node):
        """Given a called node, return its original string form"""
        components = []
        while node:
            if isinstance(node, ast.Attribute):
                components.append(node.attr)
                node = node.value
            elif isinstance(node, ast.Name):
                components.append(node.id)
                return '.'.join(reversed(components))
            else:
                break

    def __init__(self, name, log):
        self.var_references = set()
        self.var_execs = set()
        self.execs = set()
        self.references = set()
        self.log = BufferedLogger('BitBake.Data.%s' % name, logging.DEBUG, log)

        self.unhandled_message = "in call of %s, argument '%s' is not a string literal"
        self.unhandled_message = "while parsing %s, %s" % (name, self.unhandled_message)

    def parse_python(self, node):
        h = hash(str(node))

        if h in pythonparsecache:
            self.references = pythonparsecache[h]["refs"]
            self.execs = pythonparsecache[h]["execs"]
            return

        if h in pythonparsecacheextras:
            self.references = pythonparsecacheextras[h]["refs"]
            self.execs = pythonparsecacheextras[h]["execs"]
            return


        code = compile(check_indent(str(node)), "<string>", "exec",
                       ast.PyCF_ONLY_AST)

        for n in ast.walk(code):
            if n.__class__.__name__ == "Call":
                self.visit_Call(n)

        self.references.update(self.var_references)
        self.references.update(self.var_execs)

        pythonparsecacheextras[h] = {}
        pythonparsecacheextras[h]["refs"] = self.references
        pythonparsecacheextras[h]["execs"] = self.execs
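
# Aside: a stripped-down version of what PythonParser does above -- walk
# the AST and collect literal variable names passed to d.getVar(...). It
# checks ast.Constant, the modern spelling of the ast.Str node tested above:

import ast

def find_getvar_names(codestr):
    names = set()
    for node in ast.walk(ast.parse(codestr)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "getVar"
                and node.args
                and isinstance(node.args[0], ast.Constant)
                and isinstance(node.args[0].value, str)):
            names.add(node.args[0].value)
    return names

print(find_getvar_names("pn = d.getVar('PN', True)"))   # {'PN'}
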

class ShellParser():
    def __init__(self, name, log):
        self.funcdefs = set()
        self.allexecs = set()
        self.execs = set()
        self.log = BufferedLogger('BitBake.Data.%s' % name, logging.DEBUG, log)
        self.unhandled_template = "unable to handle non-literal command '%s'"
        self.unhandled_template = "while parsing %s, %s" % (name, self.unhandled_template)

    def parse_shell(self, value):
        """Parse the supplied shell code in a string, returning the external
        commands it executes.
        """

        h = hash(str(value))

        if h in shellparsecache:
            self.execs = shellparsecache[h]["execs"]
            return self.execs

        if h in shellparsecacheextras:
            self.execs = shellparsecacheextras[h]["execs"]
            return self.execs

        try:
            tokens, _ = pyshyacc.parse(value, eof=True, debug=False)
        except pyshlex.NeedMore:
            raise sherrors.ShellSyntaxError("Unexpected EOF")

        for token in tokens:
            self.process_tokens(token)
        self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)

        shellparsecacheextras[h] = {}
        shellparsecacheextras[h]["execs"] = self.execs

        return self.execs

    def process_tokens(self, tokens):
        """Process a supplied portion of the syntax tree as returned by
        pyshyacc.parse.
        """

        def function_definition(value):
            self.funcdefs.add(value.name)
            return [value.body], None

        def case_clause(value):
            # Element 0 of each item in the case is the list of patterns, and
            # Element 1 of each item in the case is the list of commands to be
            # executed when that pattern matches.
            words = chain(*[item[0] for item in value.items])
            cmds = chain(*[item[1] for item in value.items])
            return cmds, words

        def if_clause(value):
            main = chain(value.cond, value.if_cmds)
            rest = value.else_cmds
            if isinstance(rest, tuple) and rest[0] == "elif":
                return chain(main, if_clause(rest[1]))
            else:
                return chain(main, rest)

        def simple_command(value):
            return None, chain(value.words, (assign[1] for assign in value.assigns))

        token_handlers = {
            "and_or": lambda x: ((x.left, x.right), None),
            "async": lambda x: ([x], None),
            "brace_group": lambda x: (x.cmds, None),
            "for_clause": lambda x: (x.cmds, x.items),
            "function_definition": function_definition,
            "if_clause": lambda x: (if_clause(x), None),
            "pipeline": lambda x: (x.commands, None),
            "redirect_list": lambda x: ([x.cmd], None),
            "subshell": lambda x: (x.cmds, None),
            "while_clause": lambda x: (chain(x.condition, x.cmds), None),
            "until_clause": lambda x: (chain(x.condition, x.cmds), None),
            "simple_command": simple_command,
            "case_clause": case_clause,
        }

        for token in tokens:
            name, value = token
            try:
                more_tokens, words = token_handlers[name](value)
            except KeyError:
                raise NotImplementedError("Unsupported token type " + name)

            if more_tokens:
                self.process_tokens(more_tokens)

            if words:
                self.process_words(words)

    def process_words(self, words):
        """Process a set of 'words' in pyshyacc parlance, which includes
        extraction of executed commands from $() blocks, as well as grabbing
        the command name argument.
        """

        words = list(words)
        for word in list(words):
            wtree = pyshlex.make_wordtree(word[1])
            for part in wtree:
                if not isinstance(part, list):
                    continue

                if part[0] in ('`', '$('):
                    command = pyshlex.wordtree_as_string(part[1:-1])
                    self.parse_shell(command)

                    if word[0] in ("cmd_name", "cmd_word"):
                        if word in words:
                            words.remove(word)

        usetoken = False
        for word in words:
            if word[0] in ("cmd_name", "cmd_word") or \
               (usetoken and word[0] == "TOKEN"):
                if "=" in word[1]:
                    usetoken = True
                    continue

                cmd = word[1]
                if cmd.startswith("$"):
                    self.log.debug(1, self.unhandled_template % cmd)
                elif cmd == "eval":
                    command = " ".join(word for _, word in words[1:])
                    self.parse_shell(command)
                else:
                    self.allexecs.add(cmd)
                break
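
# Aside: typical use of the class above, assuming the pysh modules it
# imports are importable. Commands inside $(...) are found recursively
# (the exact output here is illustrative):

import logging

parser = ShellParser("do_install", logging.getLogger("BitBake.Data"))
execs = parser.parse_shell("arch=$(uname -m)\ninstall -d /tmp/dest\n")
print(execs)    # e.g. set(['uname', 'install'])
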
@@ -20,7 +20,7 @@ Provide an interface to interact with the bitbake server through 'commands'
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

"""
The bitbake server takes 'commands' from its UI/commandline.
The bitbake server takes 'commands' from its UI/commandline.
Commands are either synchronous or asynchronous.
Async commands return data to the client in the form of events.
Sync commands must only return data through the function return value
@@ -30,25 +30,17 @@ Commands are queued in a CommandQueue

import bb.event
import bb.cooker
import bb.data

class CommandCompleted(bb.event.Event):
    pass

class CommandExit(bb.event.Event):
    def __init__(self, exitcode):
        bb.event.Event.__init__(self)
        self.exitcode = int(exitcode)

class CommandFailed(CommandExit):
    def __init__(self, message):
        self.error = message
        CommandExit.__init__(self, 1)
async_cmds = {}
sync_cmds = {}

class Command:
    """
    A queue of asynchronous commands for bitbake
    """
    def __init__(self, cooker):

        self.cooker = cooker
        self.cmds_sync = CommandsSync()
        self.cmds_async = CommandsAsync()
@@ -56,18 +48,28 @@ class Command:
        # FIXME Add lock for this
        self.currentAsyncCommand = None

        for attr in CommandsSync.__dict__:
            command = attr[:].lower()
            method = getattr(CommandsSync, attr)
            sync_cmds[command] = (method)

        for attr in CommandsAsync.__dict__:
            command = attr[:].lower()
            method = getattr(CommandsAsync, attr)
            async_cmds[command] = (method)

    def runCommand(self, commandline):
        try:
            command = commandline.pop(0)
            if command in CommandsSync.__dict__:
                # Can run synchronous commands straight away
                # Can run synchronous commands straight away
                return getattr(CommandsSync, command)(self.cmds_sync, self, commandline)
            if self.currentAsyncCommand is not None:
                return "Busy (%s in progress)" % self.currentAsyncCommand[0]
            if command not in CommandsAsync.__dict__:
                return "No such command"
            self.currentAsyncCommand = (command, commandline)
            self.cooker.server_registration_cb(self.cooker.runCommands, self.cooker)
            self.cooker.server.register_idle_function(self.cooker.runCommands, self.cooker)
            return True
        except:
            import traceback
@@ -79,8 +81,7 @@ class Command:
                (command, options) = self.currentAsyncCommand
                commandmethod = getattr(CommandsAsync, command)
                needcache = getattr( commandmethod, "needcache" )
                if (needcache and self.cooker.state in
                    (bb.cooker.state.initial, bb.cooker.state.parsing)):
                if needcache and self.cooker.cookerState != bb.cooker.cookerParsed:
                    self.cooker.updateCache()
                    return True
                else:
@@ -88,31 +89,16 @@ class Command:
                    return False
            else:
                return False
        except KeyboardInterrupt as exc:
            self.finishAsyncCommand("Interrupted")
            return False
        except SystemExit as exc:
            arg = exc.args[0]
            if isinstance(arg, basestring):
                self.finishAsyncCommand(arg)
            else:
                self.finishAsyncCommand("Exited with %s" % arg)
            return False
        except Exception as exc:
        except:
            import traceback
            if isinstance(exc, bb.BBHandledException):
                self.finishAsyncCommand("")
            else:
                self.finishAsyncCommand(traceback.format_exc())
            self.finishAsyncCommand(traceback.format_exc())
            return False

    def finishAsyncCommand(self, msg=None, code=None):
        if msg:
            bb.event.fire(CommandFailed(msg), self.cooker.configuration.event_data)
        elif code:
            bb.event.fire(CommandExit(code), self.cooker.configuration.event_data)
    def finishAsyncCommand(self, error = None):
        if error:
            bb.event.fire(CookerCommandFailed(error), self.cooker.configuration.event_data)
        else:
            bb.event.fire(CommandCompleted(), self.cooker.configuration.event_data)
            bb.event.fire(CookerCommandCompleted(), self.cooker.configuration.event_data)
        self.currentAsyncCommand = None
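
# Aside: the dispatch in runCommand() above in miniature -- anything
# defined on the sync class runs inline; everything else is recorded and
# handed to the idle loop. Toy classes, not BitBake's:

class SyncCmds:
    def ping(self, params):
        return "alive"

class TinyCommand:
    def __init__(self):
        self.currentAsyncCommand = None
    def runCommand(self, commandline):
        command = commandline.pop(0)
        if command in SyncCmds.__dict__:
            return getattr(SyncCmds, command)(SyncCmds(), commandline)
        if self.currentAsyncCommand is not None:
            return "Busy (%s in progress)" % self.currentAsyncCommand[0]
        self.currentAsyncCommand = (command, commandline)   # queued for later
        return True

c = TinyCommand()
print(c.runCommand(["ping"]))           # alive (ran synchronously)
print(c.runCommand(["build", "zlib"]))  # True (queued as the async command)
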


@@ -127,13 +113,13 @@ class CommandsSync:
        """
        Trigger cooker 'shutdown' mode
        """
        command.cooker.shutdown()
        command.cooker.cookerAction = bb.cooker.cookerShutdown

    def stateStop(self, command, params):
        """
        Stop the cooker
        """
        command.cooker.stop()
        command.cooker.cookerAction = bb.cooker.cookerStop

    def getCmdLineAction(self, command, params):
        """
@@ -150,7 +136,7 @@ class CommandsSync:
        if len(params) > 1:
            expand = params[1]

        return command.cooker.configuration.data.getVar(varname, expand)
        return bb.data.getVar(varname, command.cooker.configuration.data, expand)

    def setVariable(self, command, params):
        """
@@ -158,26 +144,8 @@ class CommandsSync:
        """
        varname = params[0]
        value = params[1]
        command.cooker.configuration.data.setVar(varname, value)
        bb.data.setVar(varname, value, command.cooker.configuration.data)

    def initCooker(self, command, params):
        """
        Init the cooker to initial state with nothing parsed
        """
        command.cooker.initialize()

    def resetCooker(self, command, params):
        """
        Reset the cooker to its initial state, thus forcing a reparse for
        any async command that has the needcache property set to True
        """
        command.cooker.reset()

    def getCpuCount(self, command, params):
        """
        Get the CPU count on the bitbake server
        """
        return bb.utils.cpu_count()

class CommandsAsync:
    """
@@ -228,65 +196,6 @@ class CommandsAsync:
        command.finishAsyncCommand()
    generateDotGraph.needcache = True

    def generateTargetsTree(self, command, params):
        """
        Generate a tree of buildable targets.
        If klass is provided ensure all recipes that inherit the class are
        included in the package list.
        If pkg_list provided use that list (plus any extras brought in by
        klass) rather than generating a tree for all packages.
        """
        klass = params[0]
        pkg_list = params[1]

        command.cooker.generateTargetsTree(klass, pkg_list)
        command.finishAsyncCommand()
    generateTargetsTree.needcache = True

    def findCoreBaseFiles(self, command, params):
        """
        Find certain files in COREBASE directory. i.e. Layers
        """
        subdir = params[0]
        filename = params[1]

        command.cooker.findCoreBaseFiles(subdir, filename)
        command.finishAsyncCommand()
    findCoreBaseFiles.needcache = False

    def findConfigFiles(self, command, params):
        """
        Find config files which provide appropriate values
        for the passed configuration variable. i.e. MACHINE
        """
        varname = params[0]

        command.cooker.findConfigFiles(varname)
        command.finishAsyncCommand()
    findConfigFiles.needcache = False

    def findFilesMatchingInDir(self, command, params):
        """
        Find implementation files matching the specified pattern
        in the requested subdirectory of a BBPATH
        """
        pattern = params[0]
        directory = params[1]

        command.cooker.findFilesMatchingInDir(pattern, directory)
        command.finishAsyncCommand()
    findFilesMatchingInDir.needcache = False

    def findConfigFilePath(self, command, params):
        """
        Find the path of the requested configuration file
        """
        configfile = params[0]

        command.cooker.findConfigFilePath(configfile)
        command.finishAsyncCommand()
    findConfigFilePath.needcache = False

    def showVersions(self, command, params):
        """
        Show the currently selected versions
@@ -325,40 +234,40 @@ class CommandsAsync:
        command.finishAsyncCommand()
    parseFiles.needcache = True

    def reparseFiles(self, command, params):
        """
        Reparse .bb files
        """
        command.cooker.reparseFiles()
        command.finishAsyncCommand()
    reparseFiles.needcache = True

    def compareRevisions(self, command, params):
        """
        Parse the .bb files
        """
        if bb.fetch.fetcher_compare_revisions(command.cooker.configuration.data):
            command.finishAsyncCommand(code=1)
        else:
            command.finishAsyncCommand()
        command.cooker.compareRevisions()
        command.finishAsyncCommand()
    compareRevisions.needcache = True

    def parseConfigurationFiles(self, command, params):
        """
        Parse the configuration files
        """
        prefiles = params[0]
        postfiles = params[1]
        command.cooker.parseConfigurationFiles(prefiles, postfiles)
        command.finishAsyncCommand()
    parseConfigurationFiles.needcache = False
#
# Events
#
class CookerCommandCompleted(bb.event.Event):
    """
    Cooker command completed
    """
    def __init__(self):
        bb.event.Event.__init__(self)


class CookerCommandFailed(bb.event.Event):
    """
    Cooker command failed
"""
|
||||
def __init__(self, error):
|
||||
bb.event.Event.__init__(self)
|
||||
self.error = error
|
||||
|
||||
class CookerCommandSetExitCode(bb.event.Event):
|
||||
"""
|
||||
Set the exit code for a cooker command
|
||||
"""
|
||||
def __init__(self, exitcode):
|
||||
bb.event.Event.__init__(self)
|
||||
self.exitcode = int(exitcode)
|
||||
|
||||
|
||||
def triggerEvent(self, command, params):
|
||||
"""
|
||||
Trigger a certain event
|
||||
"""
|
||||
event = params[0]
|
||||
bb.event.fire(eval(event), command.cooker.configuration.data)
|
||||
command.currentAsyncCommand = None
|
||||
triggerEvent.needcache = False
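
# Aside: the needcache flags above are ordinary function attributes, read
# back with getattr() before a command is run. The pattern in isolation:

class AsyncCmds:
    def parseEverything(self, params):
        return "parsed"
    parseEverything.needcache = True    # must have the recipe cache ready

    def findFile(self, params):
        return "found"
    findFile.needcache = False          # safe to run before parsing

for name in ("parseEverything", "findFile"):
    print(name, getattr(getattr(AsyncCmds, name), "needcache"))
# parseEverything True
# findFile False
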


@@ -1,28 +0,0 @@
"""Code pulled from future python versions, here for compatibility"""

def total_ordering(cls):
    """Class decorator that fills in missing ordering methods"""
    convert = {
        '__lt__': [('__gt__', lambda self, other: other < self),
                   ('__le__', lambda self, other: not other < self),
                   ('__ge__', lambda self, other: not self < other)],
        '__le__': [('__ge__', lambda self, other: other <= self),
                   ('__lt__', lambda self, other: not other <= self),
                   ('__gt__', lambda self, other: not self <= other)],
        '__gt__': [('__lt__', lambda self, other: other > self),
                   ('__ge__', lambda self, other: not other > self),
                   ('__le__', lambda self, other: not self > other)],
        '__ge__': [('__le__', lambda self, other: other >= self),
                   ('__gt__', lambda self, other: not other >= self),
                   ('__lt__', lambda self, other: not self >= other)]
    }
    roots = set(dir(cls)) & set(convert)
    if not roots:
        raise ValueError('must define at least one ordering operation: < > <= >=')
    root = max(roots) # prefer __lt__ to __le__ to __gt__ to __ge__
    for opname, opfunc in convert[root]:
        if opname not in roots:
            opfunc.__name__ = opname
            opfunc.__doc__ = getattr(int, opname).__doc__
            setattr(cls, opname, opfunc)
    return cls
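
# Aside: using the decorator above -- define __lt__ (and __eq__) and the
# remaining comparisons are synthesized. The standard library has carried
# an equivalent functools.total_ordering since Python 2.7:

@total_ordering
class Version:
    def __init__(self, n):
        self.n = n
    def __lt__(self, other):
        return self.n < other.n
    def __eq__(self, other):
        return self.n == other.n

print(Version(1) <= Version(2))     # True, via the generated __le__
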
File diff suppressed because it is too large
@@ -1,190 +1,191 @@
"""
Python Daemonizing helper

Configurable daemon behaviors:

    1.) The current working directory set to the "/" directory.
    2.) The current file creation mode mask set to 0.
    3.) Close all open files (1024).
    4.) Redirect standard I/O streams to "/dev/null".

A failed call to fork() now raises an exception.

References:
    1) Advanced Programming in the Unix Environment: W. Richard Stevens
    2) Unix Programming Frequently Asked Questions:
       http://www.erlenstar.demon.co.uk/unix/faq_toc.html

Modified to allow a function to be daemonized and return for
bitbake use by Richard Purdie
"""

__author__ = "Chad J. Schroeder"
__copyright__ = "Copyright (C) 2005 Chad J. Schroeder"
__version__ = "0.2"

# Standard Python modules.
import os    # Miscellaneous OS interfaces.
import sys   # System-specific parameters and functions.

# Default daemon parameters.
# File mode creation mask of the daemon.
# For BitBake's children, we do want to inherit the parent umask.
UMASK = None

# Default maximum for the number of available file descriptors.
MAXFD = 1024

# The standard I/O file descriptors are redirected to /dev/null by default.
if (hasattr(os, "devnull")):
    REDIRECT_TO = os.devnull
else:
    REDIRECT_TO = "/dev/null"

def createDaemon(function, logfile):
    """
    Detach a process from the controlling terminal and run it in the
    background as a daemon, returning control to the caller.
    """

    try:
        # Fork a child process so the parent can exit. This returns control to
        # the command-line or shell. It also guarantees that the child will not
        # be a process group leader, since the child receives a new process ID
        # and inherits the parent's process group ID. This step is required
        # to ensure that the next call to os.setsid is successful.
        pid = os.fork()
    except OSError as e:
        raise Exception("%s [%d]" % (e.strerror, e.errno))

    if (pid == 0):      # The first child.
        # To become the session leader of this new session and the process group
        # leader of the new process group, we call os.setsid(). The process is
        # also guaranteed not to have a controlling terminal.
        os.setsid()

        # Is ignoring SIGHUP necessary?
        #
        # It's often suggested that the SIGHUP signal should be ignored before
        # the second fork to avoid premature termination of the process. The
        # reason is that when the first child terminates, all processes, e.g.
        # the second child, in the orphaned group will be sent a SIGHUP.
        #
        # "However, as part of the session management system, there are exactly
        # two cases where SIGHUP is sent on the death of a process:
        #
        # 1) When the process that dies is the session leader of a session that
        # is attached to a terminal device, SIGHUP is sent to all processes
        # in the foreground process group of that terminal device.
        # 2) When the death of a process causes a process group to become
        # orphaned, and one or more processes in the orphaned group are
        # stopped, then SIGHUP and SIGCONT are sent to all members of the
        # orphaned group." [2]
        #
        # The first case can be ignored since the child is guaranteed not to have
        # a controlling terminal. The second case isn't so easy to dismiss.
        # The process group is orphaned when the first child terminates and
        # POSIX.1 requires that every STOPPED process in an orphaned process
        # group be sent a SIGHUP signal followed by a SIGCONT signal. Since the
        # second child is not STOPPED though, we can safely forego ignoring the
        # SIGHUP signal. In any case, there are no ill-effects if it is ignored.
        #
        # import signal           # Set handlers for asynchronous events.
        # signal.signal(signal.SIGHUP, signal.SIG_IGN)

        try:
            # Fork a second child and exit immediately to prevent zombies. This
            # causes the second child process to be orphaned, making the init
            # process responsible for its cleanup. And, since the first child is
            # a session leader without a controlling terminal, it's possible for
            # it to acquire one by opening a terminal in the future (System V-
            # based systems). This second fork guarantees that the child is no
            # longer a session leader, preventing the daemon from ever acquiring
            # a controlling terminal.
            pid = os.fork()     # Fork a second child.
        except OSError as e:
            raise Exception("%s [%d]" % (e.strerror, e.errno))

        if (pid == 0):  # The second child.
            # We probably don't want the file mode creation mask inherited from
            # the parent, so we give the child complete control over permissions.
            if UMASK is not None:
                os.umask(UMASK)
        else:
            # Parent (the first child) of the second child.
            os._exit(0)
    else:
        # exit() or _exit()?
        # _exit is like exit(), but it doesn't call any functions registered
        # with atexit (and on_exit) or any registered signal handlers. It also
        # closes any open file descriptors. Using exit() may cause all stdio
        # streams to be flushed twice and any temporary files may be unexpectedly
        # removed. It's therefore recommended that child branches of a fork()
        # and the parent branch(es) of a daemon use _exit().
        return

    # Close all open file descriptors. This prevents the child from keeping
    # open any file descriptors inherited from the parent. There is a variety
    # of methods to accomplish this task. Three are listed below.
    #
    # Try the system configuration variable, SC_OPEN_MAX, to obtain the maximum
    # number of open file descriptors to close. If it doesn't exist, use
    # the default value (configurable).
    #
    # try:
    #     maxfd = os.sysconf("SC_OPEN_MAX")
    # except (AttributeError, ValueError):
    #     maxfd = MAXFD
    #
    # OR
    #
    # if (os.sysconf_names.has_key("SC_OPEN_MAX")):
    #     maxfd = os.sysconf("SC_OPEN_MAX")
    # else:
    #     maxfd = MAXFD
    #
    # OR
    #
    # Use the getrlimit method to retrieve the maximum file descriptor number
|
||||
# resource, use the default value.
|
||||
#
|
||||
import resource # Resource usage information.
|
||||
maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
|
||||
if (maxfd == resource.RLIM_INFINITY):
|
||||
maxfd = MAXFD
|
||||
|
||||
# Iterate through and close all file descriptors.
|
||||
# for fd in range(0, maxfd):
|
||||
# try:
|
||||
# os.close(fd)
|
||||
# except OSError: # ERROR, fd wasn't open to begin with (ignored)
|
||||
# pass
|
||||
|
||||
# Redirect the standard I/O file descriptors to the specified file. Since
|
||||
# the daemon has no controlling terminal, most daemons redirect stdin,
|
||||
# stdout, and stderr to /dev/null. This is done to prevent side-effects
|
||||
# from reads and writes to the standard I/O file descriptors.
|
||||
|
||||
# This call to open is guaranteed to return the lowest file descriptor,
|
||||
# which will be 0 (stdin), since it was closed above.
|
||||
# os.open(REDIRECT_TO, os.O_RDWR) # standard input (0)
|
||||
|
||||
# Duplicate standard input to standard output and standard error.
|
||||
# os.dup2(0, 1) # standard output (1)
|
||||
# os.dup2(0, 2) # standard error (2)
|
||||
|
||||
|
||||
si = file('/dev/null', 'r')
|
||||
so = file(logfile, 'w')
|
||||
se = so
|
||||
|
||||
|
||||
# Replace those fds with our own
|
||||
os.dup2(si.fileno(), sys.stdin.fileno())
|
||||
os.dup2(so.fileno(), sys.stdout.fileno())
|
||||
os.dup2(se.fileno(), sys.stderr.fileno())
|
||||
|
||||
function()
|
||||
|
||||
os._exit(0)
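
# Aside: how createDaemon() above is called -- the caller returns right
# away while the daemonized grandchild runs function() with stdout/stderr
# sent to the logfile. Names and path below are made up for illustration:

def worker():
    print("serving in the background")      # ends up in the logfile

createDaemon(worker, "/tmp/worker.log")     # hypothetical log path
print("parent continues immediately")
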
"""
Python Daemonizing helper

Configurable daemon behaviors:

    1.) The current working directory set to the "/" directory.
    2.) The current file creation mode mask set to 0.
    3.) Close all open files (1024).
    4.) Redirect standard I/O streams to "/dev/null".

A failed call to fork() now raises an exception.

References:
    1) Advanced Programming in the Unix Environment: W. Richard Stevens
    2) Unix Programming Frequently Asked Questions:
       http://www.erlenstar.demon.co.uk/unix/faq_toc.html

Modified to allow a function to be daemonized and return for
bitbake use by Richard Purdie
"""

__author__ = "Chad J. Schroeder"
__copyright__ = "Copyright (C) 2005 Chad J. Schroeder"
__version__ = "0.2"

# Standard Python modules.
import os    # Miscellaneous OS interfaces.
import sys   # System-specific parameters and functions.

# Default daemon parameters.
# File mode creation mask of the daemon.
# For BitBake's children, we do want to inherit the parent umask.
UMASK = None

# Default maximum for the number of available file descriptors.
MAXFD = 1024

# The standard I/O file descriptors are redirected to /dev/null by default.
if (hasattr(os, "devnull")):
    REDIRECT_TO = os.devnull
else:
    REDIRECT_TO = "/dev/null"

def createDaemon(function, logfile):
    """
    Detach a process from the controlling terminal and run it in the
    background as a daemon, returning control to the caller.
    """

    try:
        # Fork a child process so the parent can exit. This returns control to
        # the command-line or shell. It also guarantees that the child will not
        # be a process group leader, since the child receives a new process ID
        # and inherits the parent's process group ID. This step is required
        # to ensure that the next call to os.setsid is successful.
        pid = os.fork()
    except OSError, e:
        raise Exception, "%s [%d]" % (e.strerror, e.errno)

    if (pid == 0):      # The first child.
        # To become the session leader of this new session and the process group
        # leader of the new process group, we call os.setsid(). The process is
        # also guaranteed not to have a controlling terminal.
        os.setsid()

        # Is ignoring SIGHUP necessary?
        #
        # It's often suggested that the SIGHUP signal should be ignored before
        # the second fork to avoid premature termination of the process. The
        # reason is that when the first child terminates, all processes, e.g.
        # the second child, in the orphaned group will be sent a SIGHUP.
        #
        # "However, as part of the session management system, there are exactly
        # two cases where SIGHUP is sent on the death of a process:
        #
        # 1) When the process that dies is the session leader of a session that
        # is attached to a terminal device, SIGHUP is sent to all processes
        # in the foreground process group of that terminal device.
        # 2) When the death of a process causes a process group to become
        # orphaned, and one or more processes in the orphaned group are
        # stopped, then SIGHUP and SIGCONT are sent to all members of the
        # orphaned group." [2]
        #
        # The first case can be ignored since the child is guaranteed not to have
        # a controlling terminal. The second case isn't so easy to dismiss.
        # The process group is orphaned when the first child terminates and
        # POSIX.1 requires that every STOPPED process in an orphaned process
        # group be sent a SIGHUP signal followed by a SIGCONT signal. Since the
        # second child is not STOPPED though, we can safely forego ignoring the
        # SIGHUP signal. In any case, there are no ill-effects if it is ignored.
        #
        # import signal           # Set handlers for asynchronous events.
        # signal.signal(signal.SIGHUP, signal.SIG_IGN)

        try:
            # Fork a second child and exit immediately to prevent zombies. This
            # causes the second child process to be orphaned, making the init
            # process responsible for its cleanup. And, since the first child is
            # a session leader without a controlling terminal, it's possible for
            # it to acquire one by opening a terminal in the future (System V-
            # based systems). This second fork guarantees that the child is no
            # longer a session leader, preventing the daemon from ever acquiring
            # a controlling terminal.
            pid = os.fork()     # Fork a second child.
        except OSError, e:
            raise Exception, "%s [%d]" % (e.strerror, e.errno)

        if (pid == 0):  # The second child.
            # We probably don't want the file mode creation mask inherited from
            # the parent, so we give the child complete control over permissions.
            if UMASK is not None:
                os.umask(UMASK)
        else:
            # Parent (the first child) of the second child.
            os._exit(0)
    else:
        # exit() or _exit()?
        # _exit is like exit(), but it doesn't call any functions registered
        # with atexit (and on_exit) or any registered signal handlers. It also
        # closes any open file descriptors. Using exit() may cause all stdio
        # streams to be flushed twice and any temporary files may be unexpectedly
        # removed. It's therefore recommended that child branches of a fork()
        # and the parent branch(es) of a daemon use _exit().
        return

    # Close all open file descriptors. This prevents the child from keeping
    # open any file descriptors inherited from the parent. There is a variety
    # of methods to accomplish this task. Three are listed below.
    #
    # Try the system configuration variable, SC_OPEN_MAX, to obtain the maximum
    # number of open file descriptors to close. If it doesn't exist, use
    # the default value (configurable).
    #
    # try:
    #     maxfd = os.sysconf("SC_OPEN_MAX")
    # except (AttributeError, ValueError):
    #     maxfd = MAXFD
    #
    # OR
    #
    # if (os.sysconf_names.has_key("SC_OPEN_MAX")):
    #     maxfd = os.sysconf("SC_OPEN_MAX")
    # else:
    #     maxfd = MAXFD
    #
    # OR
    #
    # Use the getrlimit method to retrieve the maximum file descriptor number
|
||||
# resource, use the default value.
|
||||
#
|
||||
import resource # Resource usage information.
|
||||
maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
|
||||
if (maxfd == resource.RLIM_INFINITY):
|
||||
maxfd = MAXFD
|
||||
|
||||
# Iterate through and close all file descriptors.
|
||||
# for fd in range(0, maxfd):
|
||||
# try:
|
||||
# os.close(fd)
|
||||
# except OSError: # ERROR, fd wasn't open to begin with (ignored)
|
||||
# pass
|
||||
|
||||
# Redirect the standard I/O file descriptors to the specified file. Since
|
||||
# the daemon has no controlling terminal, most daemons redirect stdin,
|
||||
# stdout, and stderr to /dev/null. This is done to prevent side-effects
|
||||
# from reads and writes to the standard I/O file descriptors.
|
||||
|
||||
# This call to open is guaranteed to return the lowest file descriptor,
|
||||
# which will be 0 (stdin), since it was closed above.
|
||||
# os.open(REDIRECT_TO, os.O_RDWR) # standard input (0)
|
||||
|
||||
# Duplicate standard input to standard output and standard error.
|
||||
# os.dup2(0, 1) # standard output (1)
|
||||
# os.dup2(0, 2) # standard error (2)
|
||||
|
||||
|
||||
si = file('/dev/null', 'r')
|
||||
so = file(logfile, 'w')
|
||||
se = so
|
||||
|
||||
|
||||
# Replace those fds with our own
|
||||
os.dup2(si.fileno(), sys.stdin.fileno())
|
||||
os.dup2(so.fileno(), sys.stdout.fileno())
|
||||
os.dup2(se.fileno(), sys.stderr.fileno())
|
||||
|
||||
function()
|
||||
|
||||
os._exit(0)
|
||||
|
||||
|
||||
@@ -11,7 +11,7 @@ operations. At night the cookie monster came by and
suggested 'give me cookies on setting the variables and
things will work out'. Taking this suggestion into account
applying the skills from the not yet passed 'Entwurf und
Analyse von Algorithmen' lecture and the cookie
monster seems to be right. We will track setVar more carefully
to have faster update_data and expandKeys operations.
@@ -37,42 +37,39 @@ the speed is more critical here.
#
#Based on functions from the base bb module, Copyright 2003 Holger Schurig

import sys, os, re
import sys, os, re, types
if sys.argv[0][-5:] == "pydoc":
    path = os.path.dirname(os.path.dirname(sys.argv[1]))
else:
    path = os.path.dirname(os.path.dirname(sys.argv[0]))
sys.path.insert(0, path)
from itertools import groupby
sys.path.insert(0,path)

from bb import data_smart
from bb import codeparser
import bb

logger = data_smart.logger
class VarExpandError(Exception):
    pass

_dict_type = data_smart.DataSmart

def init():
    """Return a new object representing the Bitbake data"""
    return _dict_type()

def init_db(parent = None):
    """Return a new object representing the Bitbake data,
    optionally based on an existing object"""
    if parent:
        return parent.createCopy()
    else:
        return _dict_type()

def createCopy(source):
    """Link the source set to the destination
    If one does not find the value in the destination set,
    search will go on to the source set to get the value.
    Values from the source are copy-on-write, i.e. any attempt to
    modify one of them will end up putting the modified value
    in the destination set.
    """
    return source.createCopy()
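A minimal sketch of the copy-on-write behaviour that docstring describes, using only the module-level helpers shown here (init, setVar, getVar, createCopy):

d = init()
setVar('A', 'original', d)
child = createCopy(d)            # reads on child fall through to d
assert getVar('A', child) == 'original'
setVar('A', 'changed', child)    # the write lands in the child only
assert getVar('A', d) == 'original'
assert getVar('A', child) == 'changed'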
def initVar(var, d):
    """Non-destructive var init for data structure"""
@@ -80,34 +77,91 @@ def initVar(var, d):


def setVar(var, value, d):
    """Set a variable to a given value"""
    d.setVar(var, value)
    """Set a variable to a given value

    Example:
        >>> d = init()
        >>> setVar('TEST', 'testcontents', d)
        >>> print getVar('TEST', d)
        testcontents
    """
    d.setVar(var,value)


def getVar(var, d, exp = 0):
    """Gets the value of a variable"""
    return d.getVar(var, exp)
    """Gets the value of a variable

    Example:
        >>> d = init()
        >>> setVar('TEST', 'testcontents', d)
        >>> print getVar('TEST', d)
        testcontents
    """
    return d.getVar(var,exp)


def renameVar(key, newkey, d):
    """Renames a variable from key to newkey"""
    """Renames a variable from key to newkey

    Example:
        >>> d = init()
        >>> setVar('TEST', 'testcontents', d)
        >>> renameVar('TEST', 'TEST2', d)
        >>> print getVar('TEST2', d)
        testcontents
    """
    d.renameVar(key, newkey)

def delVar(var, d):
    """Removes a variable from the data set"""
    """Removes a variable from the data set

    Example:
        >>> d = init()
        >>> setVar('TEST', 'testcontents', d)
        >>> print getVar('TEST', d)
        testcontents
        >>> delVar('TEST', d)
        >>> print getVar('TEST', d)
        None
    """
    d.delVar(var)

def setVarFlag(var, flag, flagvalue, d):
    """Set a flag for a given variable to a given value"""
    d.setVarFlag(var, flag, flagvalue)
    """Set a flag for a given variable to a given value

    Example:
        >>> d = init()
        >>> setVarFlag('TEST', 'python', 1, d)
        >>> print getVarFlag('TEST', 'python', d)
        1
    """
    d.setVarFlag(var,flag,flagvalue)

def getVarFlag(var, flag, d):
    """Gets given flag from given var"""
    return d.getVarFlag(var, flag)
    """Gets given flag from given var

    Example:
        >>> d = init()
        >>> setVarFlag('TEST', 'python', 1, d)
        >>> print getVarFlag('TEST', 'python', d)
        1
    """
    return d.getVarFlag(var,flag)

def delVarFlag(var, flag, d):
    """Removes a given flag from the variable's flags"""
    d.delVarFlag(var, flag)
    """Removes a given flag from the variable's flags

    Example:
        >>> d = init()
        >>> setVarFlag('TEST', 'testflag', 1, d)
        >>> print getVarFlag('TEST', 'testflag', d)
        1
        >>> delVarFlag('TEST', 'testflag', d)
        >>> print getVarFlag('TEST', 'testflag', d)
        None

    """
    d.delVarFlag(var,flag)

def setVarFlags(var, flags, d):
    """Set the flags for a given variable
@@ -116,27 +170,115 @@ def setVarFlags(var, flags, d):
    setVarFlags will not clear previous
    flags. Think of this method as
    addVarFlags

    Example:
        >>> d = init()
        >>> myflags = {}
        >>> myflags['test'] = 'blah'
        >>> setVarFlags('TEST', myflags, d)
        >>> print getVarFlag('TEST', 'test', d)
        blah
    """
    d.setVarFlags(var, flags)
    d.setVarFlags(var,flags)

def getVarFlags(var, d):
    """Gets a variable's flags"""
    """Gets a variable's flags

    Example:
        >>> d = init()
        >>> setVarFlag('TEST', 'test', 'blah', d)
        >>> print getVarFlags('TEST', d)['test']
        blah
    """
    return d.getVarFlags(var)

def delVarFlags(var, d):
    """Removes a variable's flags"""
    """Removes a variable's flags

    Example:
        >>> data = init()
        >>> setVarFlag('TEST', 'testflag', 1, data)
        >>> print getVarFlag('TEST', 'testflag', data)
        1
        >>> delVarFlags('TEST', data)
        >>> print getVarFlags('TEST', data)
        None

    """
    d.delVarFlags(var)

def keys(d):
    """Return a list of keys in d"""
    """Return a list of keys in d

    Example:
        >>> d = init()
        >>> setVar('TEST', 1, d)
        >>> setVar('MOO' , 2, d)
        >>> setVarFlag('TEST', 'test', 1, d)
        >>> keys(d)
        ['TEST', 'MOO']
    """
    return d.keys()

def getData(d):
    """Returns the data object used"""
    return d

def setData(newData, d):
    """Sets the data object to the supplied value"""
    d = newData


##
## Cookie Monsters' query functions
##
def _get_override_vars(d, override):
    """
    Internal!!!

    Get the Names of Variables that have a specific
    override. This function returns an iterable
    Set or an empty list
    """
    return []

def _get_var_flags_triple(d):
    """
    Internal!!!

    """
    return []

__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")

def expand(s, d, varname = None):
    """Variable expansion using the data store"""
    """Variable expansion using the data store.

    Example:
        Standard expansion:
        >>> d = init()
        >>> setVar('A', 'sshd', d)
        >>> print expand('/usr/bin/${A}', d)
        /usr/bin/sshd

        Python expansion:
        >>> d = init()
        >>> print expand('result: ${@37 * 72}', d)
        result: 2664

        Shell expansion:
        >>> d = init()
        >>> print expand('${TARGET_MOO}', d)
        ${TARGET_MOO}
        >>> setVar('TARGET_MOO', 'yupp', d)
        >>> print expand('${TARGET_MOO}',d)
        yupp
        >>> setVar('SRC_URI', 'http://somebug.${TARGET_MOO}', d)
        >>> delVar('TARGET_MOO', d)
        >>> print expand('${SRC_URI}', d)
        http://somebug.${TARGET_MOO}
    """
    return d.expand(s, varname)
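One behaviour the doctests above do not show: a variable that directly references itself raises during expansion (both expansion implementations further below contain a "references itself" check). A sketch:

d = init()
setVar('A', 'prefix-${A}', d)
try:
    expand('${A}', d, 'A')
except Exception as exc:
    print exc        # an exception mentioning the self-reference propagates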
def expandKeys(alterdata, readdata = None):
@@ -153,24 +295,46 @@ def expandKeys(alterdata, readdata = None):
            continue
        todolist[key] = ekey

    # These two for loops are split for performance to maximise the
    # usefulness of the expand cache
    for key in todolist:
        ekey = todolist[key]
        renameVar(key, ekey, alterdata)

def inheritFromOS(d, savedenv, permitted):
    """Inherit variables from the initial environment."""
    exportlist = bb.utils.preserved_envvars_exported()
    for s in savedenv.keys():
        if s in permitted:
            try:
                setVar(s, getVar(s, savedenv, True), d)
                if s in exportlist:
                    setVarFlag(s, "export", True, d)
            except TypeError:
                pass
def expandData(alterdata, readdata = None):
    """For each variable in alterdata, expand it, and update the var contents.
    Replacements use data from readdata.

    Example:
        >>> a=init()
        >>> b=init()
        >>> setVar("dlmsg", "dl_dir is ${DL_DIR}", a)
        >>> setVar("DL_DIR", "/path/to/whatever", b)
        >>> expandData(a, b)
        >>> print getVar("dlmsg", a)
        dl_dir is /path/to/whatever
    """
    if readdata == None:
        readdata = alterdata

    for key in keys(alterdata):
        val = getVar(key, alterdata)
        if type(val) is not types.StringType:
            continue
        expanded = expand(val, readdata)
        # print "key is %s, val is %s, expanded is %s" % (key, val, expanded)
        if val != expanded:
            setVar(key, expanded, alterdata)

def inheritFromOS(d):
    """Inherit variables from the environment."""
    for s in os.environ.keys():
        try:
            setVar(s, os.environ[s], d)
            setVarFlag(s, "export", True, d)
        except TypeError:
            pass

def emit_var(var, o=sys.__stdout__, d = init(), all=False):
    """Emit a variable to be sourced by a shell."""
@@ -187,15 +351,20 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
        if all:
            oval = getVar(var, d, 0)
        val = getVar(var, d, 1)
    except (KeyboardInterrupt, bb.build.FuncFailed):
    except KeyboardInterrupt:
        raise
    except Exception as exc:
        o.write('# expansion of %s threw %s: %s\n' % (var, exc.__class__.__name__, str(exc)))
    except:
        excname = str(sys.exc_info()[0])
        if excname == "bb.build.FuncFailed":
            raise
        o.write('# expansion of %s threw %s\n' % (var, excname))
        return 0

    if all:
        commentVal = re.sub('\n', '\n#', str(oval))
        o.write('# %s=%s\n' % (var, commentVal))
        o.write('# %s=%s\n' % (var, oval))

    if type(val) is not types.StringType:
        return 0

    if (var.find("-") != -1 or var.find(".") != -1 or var.find('{') != -1 or var.find('}') != -1 or var.find('+') != -1) and not all:
        return 0
@@ -204,13 +373,12 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):

    if unexport:
        o.write('unset %s\n' % varExpanded)
        return 0
        return 1

    val.rstrip()
    if not val:
        return 0

    val = str(val)

    if func:
        # NOTE: should probably check for unbalanced {} within the var
        o.write("%s() {\n%s\n}\n" % (varExpanded, val))
@@ -222,122 +390,176 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
    # if we're going to output this within doublequotes,
    # to a shell, we need to escape the quotes in the var
    alter = re.sub('"', '\\"', val.strip())
    alter = re.sub('\n', ' \\\n', alter)
    o.write('%s="%s"\n' % (varExpanded, alter))
    return 0
    return 1


def emit_env(o=sys.__stdout__, d = init(), all=False):
    """Emits all items in the data store in a format such that it can be sourced by a shell."""

    isfunc = lambda key: bool(d.getVarFlag(key, "func"))
    keys = sorted((key for key in d.keys() if not key.startswith("__")), key=isfunc)
    grouped = groupby(keys, isfunc)
    for isfunc, keys in grouped:
        for key in keys:
            emit_var(key, o, d, all and not isfunc) and o.write('\n')
    env = keys(d)

def exported_keys(d):
    return (key for key in d.keys() if not key.startswith('__') and
                                       d.getVarFlag(key, 'export') and
                                       not d.getVarFlag(key, 'unexport'))
    for e in env:
        if getVarFlag(e, "func", d):
            continue
        emit_var(e, o, d, all) and o.write('\n')

def exported_vars(d):
    for key in exported_keys(d):
        try:
            value = d.getVar(key, True)
        except Exception:
            pass

        if value is not None:
            yield key, str(value)

def emit_func(func, o=sys.__stdout__, d = init()):
    """Emits all items in the data store in a format such that it can be sourced by a shell."""

    keys = (key for key in d.keys() if not key.startswith("__") and not d.getVarFlag(key, "func"))
    for key in keys:
        emit_var(key, o, d, False) and o.write('\n')

    emit_var(func, o, d, False) and o.write('\n')
    newdeps = bb.codeparser.ShellParser(func, logger).parse_shell(d.getVar(func, True))
    seen = set()
    while newdeps:
        deps = newdeps
        seen |= deps
        newdeps = set()
        for dep in deps:
            if d.getVarFlag(dep, "func"):
                emit_var(dep, o, d, False) and o.write('\n')
                newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep, True))
        newdeps -= seen
    for e in env:
        if not getVarFlag(e, "func", d):
            continue
        emit_var(e, o, d) and o.write('\n')
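The while-newdeps loop in emit_func is a standard worklist closure over shell-function dependencies. A generic sketch of the same pattern (get_deps is a hypothetical callback returning a set of names):

def transitive_closure(roots, get_deps):
    seen = set()
    newdeps = set(roots)
    while newdeps:
        deps = newdeps
        seen |= deps
        newdeps = set()
        for dep in deps:
            newdeps |= get_deps(dep)    # expand one level
        newdeps -= seen                 # never revisit a name
    return seen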
def update_data(d):
    """Performs final steps upon the datastore, including application of overrides"""
    d.finalize()
    """Modifies the environment vars according to local overrides and commands.
    Examples:
        Appending to a variable:
        >>> d = init()
        >>> setVar('TEST', 'this is a', d)
        >>> setVar('TEST_append', ' test', d)
        >>> setVar('TEST_append', ' of the emergency broadcast system.', d)
        >>> update_data(d)
        >>> print getVar('TEST', d)
        this is a test of the emergency broadcast system.

def build_dependencies(key, keys, shelldeps, vardepvals, d):
    deps = set()
    vardeps = d.getVarFlag(key, "vardeps", True)
    try:
        value = d.getVar(key, False)
        if key in vardepvals:
            value = d.getVarFlag(key, "vardepvalue", True)
        elif d.getVarFlag(key, "func"):
            if d.getVarFlag(key, "python"):
                parsedvar = d.expandWithRefs(value, key)
                parser = bb.codeparser.PythonParser(key, logger)
                parser.parse_python(parsedvar.value)
                deps = deps | parser.references
            else:
                parsedvar = d.expandWithRefs(value, key)
                parser = bb.codeparser.ShellParser(key, logger)
                parser.parse_shell(parsedvar.value)
                deps = deps | shelldeps
            if vardeps is None:
                parser.log.flush()
            deps = deps | parsedvar.references
            deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
        else:
            parser = d.expandWithRefs(value, key)
            deps |= parser.references
            deps = deps | (keys & parser.execs)
        deps |= set((vardeps or "").split())
        deps -= set((d.getVarFlag(key, "vardepsexclude", True) or "").split())
    except:
        bb.note("Error expanding variable %s" % key)
        raise
    return deps, value
    #bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
    #d.setVarFlag(key, "vardeps", deps)
        Prepending to a variable:
        >>> setVar('TEST', 'virtual/libc', d)
        >>> setVar('TEST_prepend', 'virtual/tmake ', d)
        >>> setVar('TEST_prepend', 'virtual/patcher ', d)
        >>> update_data(d)
        >>> print getVar('TEST', d)
        virtual/patcher virtual/tmake virtual/libc

def generate_dependencies(d):
        Overrides:
        >>> setVar('TEST_arm', 'target', d)
        >>> setVar('TEST_ramses', 'machine', d)
        >>> setVar('TEST_local', 'local', d)
        >>> setVar('OVERRIDES', 'arm', d)

    keys = set(key for key in d.keys() if not key.startswith("__"))
    shelldeps = set(key for key in keys if d.getVarFlag(key, "export") and not d.getVarFlag(key, "unexport"))
    vardepvals = set(key for key in keys if d.getVarFlag(key, "vardepvalue"))
        >>> setVar('TEST', 'original', d)
        >>> update_data(d)
        >>> print getVar('TEST', d)
        target

    deps = {}
    values = {}
        >>> setVar('OVERRIDES', 'arm:ramses:local', d)
        >>> setVar('TEST', 'original', d)
        >>> update_data(d)
        >>> print getVar('TEST', d)
        local

        CopyMonster:
        >>> e = d.createCopy()
        >>> setVar('TEST_foo', 'foo', e)
        >>> update_data(e)
        >>> print getVar('TEST', e)
        local

        >>> setVar('OVERRIDES', 'arm:ramses:local:foo', e)
        >>> update_data(e)
        >>> print getVar('TEST', e)
        foo

        >>> f = d.createCopy()
        >>> setVar('TEST_moo', 'something', f)
        >>> setVar('OVERRIDES', 'moo:arm:ramses:local:foo', e)
        >>> update_data(e)
        >>> print getVar('TEST', e)
        foo


        >>> h = init()
        >>> setVar('SRC_URI', 'file://append.foo;patch=1 ', h)
        >>> g = h.createCopy()
        >>> setVar('SRC_URI_append_arm', 'file://other.foo;patch=1', g)
        >>> setVar('OVERRIDES', 'arm:moo', g)
        >>> update_data(g)
        >>> print getVar('SRC_URI', g)
        file://append.foo;patch=1 file://other.foo;patch=1

    """
    bb.msg.debug(2, bb.msg.domain.Data, "update_data()")

    # now ask the cookie monster for help
    #print "Cookie Monster"
    #print "Append/Prepend %s" % d._special_values
    #print "Overrides %s" % d._seen_overrides

    overrides = (getVar('OVERRIDES', d, 1) or "").split(':') or []

    #
    # Well let us see what breaks here. We used to iterate
    # over each variable and apply the override and then
    # do the line expanding.
    # If we have bad luck - which we will have - the keys
    # were in some order that is so important for this
    # method which we don't have anymore.
    # Anyway we will fix that and write test cases this
    # time.

    #
    # First we apply all overrides
    # Then we will handle _append and _prepend
    #

    for o in overrides:
        # calculate '_'+override
        l = len(o)+1

        # see if one should even try
        if not d._seen_overrides.has_key(o):
            continue

        vars = d._seen_overrides[o]
        for var in vars:
            name = var[:-l]
            try:
                d[name] = d[var]
            except:
                bb.msg.note(1, bb.msg.domain.Data, "Untracked delVar")

    # now on to the appends and prepends
    if d._special_values.has_key('_append'):
        appends = d._special_values['_append'] or []
        for append in appends:
            for (a, o) in getVarFlag(append, '_append', d) or []:
                # maybe the OVERRIDE was not yet added so keep the append
                if (o and o in overrides) or not o:
                    delVarFlag(append, '_append', d)
                if o and not o in overrides:
                    continue

                sval = getVar(append,d) or ""
                sval+=a
                setVar(append, sval, d)


    if d._special_values.has_key('_prepend'):
        prepends = d._special_values['_prepend'] or []

        for prepend in prepends:
            for (a, o) in getVarFlag(prepend, '_prepend', d) or []:
                # maybe the OVERRIDE was not yet added so keep the prepend
                if (o and o in overrides) or not o:
                    delVarFlag(prepend, '_prepend', d)
                if o and not o in overrides:
                    continue

                sval = a + (getVar(prepend,d) or "")
                setVar(prepend, sval, d)

    tasklist = d.getVar('__BBTASKS') or []
    for task in tasklist:
        deps[task], values[task] = build_dependencies(task, keys, shelldeps, vardepvals, d)
        newdeps = deps[task]
        seen = set()
        while newdeps:
            nextdeps = newdeps
            seen |= nextdeps
            newdeps = set()
            for dep in nextdeps:
                if dep not in deps:
                    deps[dep], values[dep] = build_dependencies(dep, keys, shelldeps, vardepvals, d)
                newdeps |= deps[dep]
            newdeps -= seen
        #print "For %s: %s" % (task, str(deps[task]))
    return tasklist, deps, values
def inherits_class(klass, d):
    val = getVar('__inherit_cache', d) or []
    if os.path.join('classes', '%s.bbclass' % klass) in val:
        return True
    return False

def _test():
    """Start a doctest run on this module"""
    import doctest
    import bb
    from bb import data
    bb.msg.set_debug_level(0)
    doctest.testmod(data)

if __name__ == "__main__":
    _test()
@@ -28,87 +28,25 @@ BitBake build tools.
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import copy, re
from collections import MutableMapping
import logging
import hashlib
import bb, bb.codeparser
from bb import utils
from bb.COW import COWDictBase
import copy, os, re, sys, time, types
import bb
from bb import utils, methodpool
from COW import COWDictBase
from new import classobj

logger = logging.getLogger("BitBake.Data")

__setvar_keyword__ = ["_append", "_prepend"]
__setvar_keyword__ = ["_append","_prepend"]
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend)(_(?P<add>.*))?')
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
_expand_globals = {
    "os": os,
    "bb": bb,
    "time": time,
}

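For reference, a quick illustration of how __setvar_regexp__ splits a key into base variable, keyword, and optional override suffix:

import re
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend)(_(?P<add>.*))?')
m = __setvar_regexp__.match('SRC_URI_append_arm')
print m.group('base'), m.group('keyword'), m.group('add')
# -> SRC_URI _append arm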
class VariableParse:
    def __init__(self, varname, d, val = None):
        self.varname = varname
        self.d = d
        self.value = val

        self.references = set()
        self.execs = set()

    def var_sub(self, match):
        key = match.group()[2:-1]
        if self.varname and key:
            if self.varname == key:
                raise Exception("variable %s references itself!" % self.varname)
        var = self.d.getVar(key, True)
        if var is not None:
            self.references.add(key)
            return var
        else:
            return match.group()

    def python_sub(self, match):
        code = match.group()[3:-1]
        codeobj = compile(code.strip(), self.varname or "<expansion>", "eval")

        parser = bb.codeparser.PythonParser(self.varname, logger)
        parser.parse_python(code)
        if self.varname:
            vardeps = self.d.getVarFlag(self.varname, "vardeps", True)
            if vardeps is None:
                parser.log.flush()
        else:
            parser.log.flush()
        self.references |= parser.references
        self.execs |= parser.execs

        value = utils.better_eval(codeobj, DataContext(self.d))
        return str(value)


class DataContext(dict):
    def __init__(self, metadata, **kwargs):
        self.metadata = metadata
        dict.__init__(self, **kwargs)
        self['d'] = metadata

    def __missing__(self, key):
        value = self.metadata.getVar(key, True)
        if value is None or self.metadata.getVarFlag(key, 'func'):
            raise KeyError(key)
        else:
            return value
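A sketch of the name-lookup fallback DataContext provides to ${@...} python expansion: names missing from the dict are resolved against the metadata store (d is assumed here to be a DataSmart instance with the newer setVar/getVar signatures shown in this diff):

d.setVar('MACHINE', 'qemuarm')
ctx = DataContext(d)
print ctx['MACHINE']                        # dict miss -> d.getVar('MACHINE', True)
print eval("MACHINE + '-suffix'", {}, ctx)  # names in ${@...} code resolve the same way
# -> qemuarm, then qemuarm-suffix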
class ExpansionError(Exception):
    def __init__(self, varname, expression, exception):
        self.expression = expression
        self.variablename = varname
        self.exception = exception
        self.msg = "Failure expanding variable %s, expression was %s which triggered exception %s: %s" % (varname, expression, type(exception).__name__, exception)
        Exception.__init__(self, self.msg)
        self.args = (varname, expression, exception)
    def __str__(self):
        return self.msg

class DataSmart(MutableMapping):
class DataSmart:
    def __init__(self, special = COWDictBase.copy(), seen = COWDictBase.copy() ):
        self.dict = {}

@@ -117,116 +55,69 @@ class DataSmart(MutableMapping):
        self._seen_overrides = seen

        self.expand_cache = {}
        self.expand_locals = {"d": self}

    def expandWithRefs(self, s, varname):
    def expand(self,s, varname):
        def var_sub(match):
            key = match.group()[2:-1]
            if varname and key:
                if varname == key:
                    raise Exception("variable %s references itself!" % varname)
            var = self.getVar(key, 1)
            if var is not None:
                return var
            else:
                return match.group()

        if not isinstance(s, basestring): # sanity check
            return VariableParse(varname, self, s)
        def python_sub(match):
            import bb
            code = match.group()[3:-1]
            s = eval(code, _expand_globals, self.expand_locals)
            if type(s) == types.IntType: s = str(s)
            return s

        if type(s) is not types.StringType: # sanity check
            return s

        if varname and varname in self.expand_cache:
            return self.expand_cache[varname]

        varparse = VariableParse(varname, self)

        while s.find('${') != -1:
            olds = s
            try:
                s = __expand_var_regexp__.sub(varparse.var_sub, s)
                s = __expand_python_regexp__.sub(varparse.python_sub, s)
                if s == olds:
                    break
            except ExpansionError:
                s = __expand_var_regexp__.sub(var_sub, s)
                s = __expand_python_regexp__.sub(python_sub, s)
                if s == olds: break
                if type(s) is not types.StringType: # sanity check
                    bb.msg.error(bb.msg.domain.Data, 'expansion of %s returned non-string %s' % (olds, s))
            except KeyboardInterrupt:
                raise
            except:
                bb.msg.note(1, bb.msg.domain.Data, "%s:%s while evaluating:\n%s" % (sys.exc_info()[0], sys.exc_info()[1], s))
                raise
            except Exception as exc:
                raise ExpansionError(varname, s, exc)

        varparse.value = s

        if varname:
            self.expand_cache[varname] = varparse
            self.expand_cache[varname] = s

        return varparse

    def expand(self, s, varname = None):
        return self.expandWithRefs(s, varname).value


    def finalize(self):
        """Performs final steps upon the datastore, including application of overrides"""

        overrides = (self.getVar("OVERRIDES", True) or "").split(":") or []

        #
        # Well let us see what breaks here. We used to iterate
        # over each variable and apply the override and then
        # do the line expanding.
        # If we have bad luck - which we will have - the keys
        # were in some order that is so important for this
        # method which we don't have anymore.
        # Anyway we will fix that and write test cases this
        # time.

        #
        # First we apply all overrides
        # Then we will handle _append and _prepend
        #

        for o in overrides:
            # calculate '_'+override
            l = len(o) + 1

            # see if one should even try
            if o not in self._seen_overrides:
                continue

            vars = self._seen_overrides[o].copy()
            for var in vars:
                name = var[:-l]
                try:
                    self.setVar(name, self.getVar(var, False))
                    self.delVar(var)
                except Exception:
                    logger.info("Untracked delVar")

        # now on to the appends and prepends
        for op in __setvar_keyword__:
            if op in self._special_values:
                appends = self._special_values[op] or []
                for append in appends:
                    keep = []
                    for (a, o) in self.getVarFlag(append, op) or []:
                        if o and not o in overrides:
                            keep.append((a, o))
                            continue

                        if op == "_append":
                            sval = self.getVar(append, False) or ""
                            sval += a
                            self.setVar(append, sval)
                        elif op == "_prepend":
                            sval = a + (self.getVar(append, False) or "")
                            self.setVar(append, sval)

                    # We save overrides that may be applied at some later stage
                    if keep:
                        self.setVarFlag(append, op, keep)
                    else:
                        self.delVarFlag(append, op)
        return s
    def initVar(self, var):
        self.expand_cache = {}
        if not var in self.dict:
            self.dict[var] = {}

    def _findVar(self, var):
        dest = self.dict
        while dest:
            if var in dest:
                return dest[var]
    def _findVar(self,var):
        _dest = self.dict

            if "_data" not in dest:
        while (_dest and var not in _dest):
            if not "_data" in _dest:
                _dest = None
                break
            dest = dest["_data"]
            _dest = _dest["_data"]

        if _dest and var in _dest:
            return _dest[var]
        return None
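The '_data' chain walked by _findVar is what makes createCopy() cheap: a copy is a fresh dict whose '_data' slot points at its parent. A minimal sketch of that lookup (plain dicts standing in for the real store):

parent = {'FOO': {'content': 'bar'}}
child = {'_data': parent}          # roughly what createCopy() builds
def find(d, var):
    while d:
        if var in d:
            return d[var]
        d = d.get('_data')         # fall through to the parent store
    return None
print find(child, 'FOO')           # -> {'content': 'bar'}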
    def _makeShadowCopy(self, var):
        if var in self.dict:
@@ -239,7 +130,7 @@ class DataSmart(MutableMapping):
        else:
            self.initVar(var)

    def setVar(self, var, value):
    def setVar(self,var,value):
        self.expand_cache = {}
        match = __setvar_regexp__.match(var)
        if match and match.group("keyword") in __setvar_keyword__:
@@ -254,7 +145,7 @@ class DataSmart(MutableMapping):
            # pay the cookie monster
            try:
                self._special_values[keyword].add( base )
            except KeyError:
            except:
                self._special_values[keyword] = set()
                self._special_values[keyword].add( base )

@@ -266,25 +157,23 @@ class DataSmart(MutableMapping):
        # more cookies for the cookie monster
        if '_' in var:
            override = var[var.rfind('_')+1:]
            if len(override) > 0:
                if override not in self._seen_overrides:
                    self._seen_overrides[override] = set()
                self._seen_overrides[override].add( var )
                if not self._seen_overrides.has_key(override):
                    self._seen_overrides[override] = set()
                self._seen_overrides[override].add( var )

        # setting var
        self.dict[var]["content"] = value

    def getVar(self, var, expand=False, noweakdefault=False):
        value = self.getVarFlag(var, "content", False, noweakdefault)
    def getVar(self,var,exp):
        value = self.getVarFlag(var,"content")

        # Call expand() separately to make use of the expand cache
        if expand and value:
            return self.expand(value, var)
        if exp and value:
            return self.expand(value,var)
        return value

    def renameVar(self, key, newkey):
        """
        Rename the variable key to newkey
        """
        val = self.getVar(key, 0)
        if val is not None:
@@ -298,47 +187,30 @@ class DataSmart(MutableMapping):
            dest = self.getVarFlag(newkey, i) or []
            dest.extend(src)
            self.setVarFlag(newkey, i, dest)

            if i in self._special_values and key in self._special_values[i]:

            if self._special_values.has_key(i) and key in self._special_values[i]:
                self._special_values[i].remove(key)
                self._special_values[i].add(newkey)

        self.delVar(key)

    def appendVar(self, key, value):
        value = (self.getVar(key, False) or "") + value
        self.setVar(key, value)

    def prependVar(self, key, value):
        value = value + (self.getVar(key, False) or "")
        self.setVar(key, value)

    def delVar(self, var):
    def delVar(self,var):
        self.expand_cache = {}
        self.dict[var] = {}
        if '_' in var:
            override = var[var.rfind('_')+1:]
            if override and override in self._seen_overrides and var in self._seen_overrides[override]:
                self._seen_overrides[override].remove(var)

    def setVarFlag(self, var, flag, flagvalue):
    def setVarFlag(self,var,flag,flagvalue):
        if not var in self.dict:
            self._makeShadowCopy(var)
        self.dict[var][flag] = flagvalue

    def getVarFlag(self, var, flag, expand=False, noweakdefault=False):
    def getVarFlag(self,var,flag):
        local_var = self._findVar(var)
        value = None
        if local_var:
            if flag in local_var:
                value = copy.copy(local_var[flag])
            elif flag == "content" and "defaultval" in local_var and not noweakdefault:
                value = copy.copy(local_var["defaultval"])
        if expand and value:
            value = self.expand(value, None)
        return value
                return copy.copy(local_var[flag])
        return None

    def delVarFlag(self, var, flag):
    def delVarFlag(self,var,flag):
        local_var = self._findVar(var)
        if not local_var:
            return
@@ -348,15 +220,7 @@ class DataSmart(MutableMapping):
        if var in self.dict and flag in self.dict[var]:
            del self.dict[var][flag]

    def appendVarFlag(self, key, flag, value):
        value = (self.getVarFlag(key, flag, False) or "") + value
        self.setVarFlag(key, flag, value)

    def prependVarFlag(self, key, flag, value):
        value = value + (self.getVarFlag(key, flag, False) or "")
        self.setVarFlag(key, flag, value)

    def setVarFlags(self, var, flags):
    def setVarFlags(self,var,flags):
        if not var in self.dict:
            self._makeShadowCopy(var)

@@ -365,7 +229,7 @@ class DataSmart(MutableMapping):
            continue
        self.dict[var][i] = flags[i]

    def getVarFlags(self, var):
    def getVarFlags(self,var):
        local_var = self._findVar(var)
        flags = {}

@@ -380,7 +244,7 @@ class DataSmart(MutableMapping):
        return flags


    def delVarFlags(self, var):
    def delVarFlags(self,var):
        if not var in self.dict:
            self._makeShadowCopy(var)

@@ -406,69 +270,25 @@ class DataSmart(MutableMapping):

        return data

    def expandVarref(self, variable, parents=False):
        """Find all references to variable in the data and expand it
        in place, optionally descending to parent datastores."""

        if parents:
            keys = iter(self)
        else:
            keys = self.localkeys()

        ref = '${%s}' % variable
        value = self.getVar(variable, False)
        for key in keys:
            referrervalue = self.getVar(key, False)
            if referrervalue and ref in referrervalue:
                self.setVar(key, referrervalue.replace(ref, value))

    def localkeys(self):
        for key in self.dict:
            if key != '_data':
                yield key

    def __iter__(self):
        def keylist(d):
            klist = set()
            for key in d:
                if key == "_data":
                    continue
                if not d[key]:
                    continue
                klist.add(key)

    # Dictionary Methods
    def keys(self):
        def _keys(d, mykey):
            if "_data" in d:
                klist |= keylist(d["_data"])
                _keys(d["_data"],mykey)

            return klist
            for key in d.keys():
                if key != "_data":
                    mykey[key] = None
        keytab = {}
        _keys(self.dict,keytab)
        return keytab.keys()

        for k in keylist(self.dict):
            yield k
    def __getitem__(self,item):
        #print "Warning deprecated"
        return self.getVar(item, False)

    def __len__(self):
        return len(frozenset(self))
    def __setitem__(self,var,data):
        #print "Warning deprecated"
        self.setVar(var,data)

    def __getitem__(self, item):
        value = self.getVar(item, False)
        if value is None:
            raise KeyError(item)
        else:
            return value

    def __setitem__(self, var, value):
        self.setVar(var, value)

    def __delitem__(self, var):
        self.delVar(var)

    def get_hash(self):
        data = ""
        config_whitelist = set((self.getVar("BB_HASHCONFIG_WHITELIST", True) or "").split())
        keys = set(key for key in iter(self) if not key.startswith("__"))
        for key in keys:
            if key in config_whitelist:
                continue
            value = self.getVar(key, False) or ""
            data = data + key + ': ' + str(value) + '\n'

        return hashlib.md5(data).hexdigest()
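A usage sketch for get_hash, assuming the newer DataSmart shown above: the configuration hash only reacts to keys outside the whitelist.

d = DataSmart()
d.setVar("BB_HASHCONFIG_WHITELIST", "DATE TIME")
d.setVar("DATE", "20110101")
h1 = d.get_hash()
d.setVar("DATE", "20120202")   # whitelisted: hash unchanged
assert d.get_hash() == h1
d.setVar("CC", "gcc")          # tracked key: hash changes
assert d.get_hash() != h1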
@@ -22,32 +22,23 @@ BitBake build tools.
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os, sys
import warnings
try:
    import cPickle as pickle
except ImportError:
    import pickle
import logging
import atexit
import traceback
import os, re, sys
import bb.utils
import pickle

# This is the pid for which we should generate the event. This is set when
# the runqueue forks off.
worker_pid = 0
worker_pipe = None

logger = logging.getLogger('BitBake.Event')

class Event(object):
class Event:
    """Base class for events"""

    def __init__(self):
        self.pid = worker_pid

NotHandled = 0
Handled = 1

Registered = 10
AlreadyRegistered = 14
@@ -57,63 +48,18 @@ _handlers = {}
_ui_handlers = {}
_ui_handler_seq = 0

# For compatibility
bb.utils._context["NotHandled"] = NotHandled
bb.utils._context["Handled"] = Handled

def execute_handler(name, handler, event, d):
    event.data = d
    try:
        ret = handler(event)
    except bb.parse.SkipPackage:
        raise
    except Exception:
        etype, value, tb = sys.exc_info()
        logger.error("Execution of event handler '%s' failed" % name,
                     exc_info=(etype, value, tb.tb_next))
        raise
    except SystemExit as exc:
        if exc.code != 0:
            logger.error("Execution of event handler '%s' failed" % name)
        raise
    finally:
def fire_class_handlers(event, d):
    for handler in _handlers:
        h = _handlers[handler]
        event.data = d
        if type(h).__name__ == "code":
            exec(h)
            tmpHandler(event)
        else:
            h(event)
        del event.data

    if ret is not None:
        warnings.warn("Using Handled/NotHandled in event handlers is deprecated",
                      DeprecationWarning, stacklevel = 2)

def fire_class_handlers(event, d):
    if isinstance(event, logging.LogRecord):
        return

    for name, handler in _handlers.iteritems():
        try:
            execute_handler(name, handler, event, d)
        except Exception:
            continue

ui_queue = []
@atexit.register
def print_ui_queue():
    """If we're exiting before a UI has been spawned, display any queued
    LogRecords to the console."""
    logger = logging.getLogger("BitBake")
    if not _ui_handlers:
        from bb.msg import BBLogFormatter
        console = logging.StreamHandler(sys.stdout)
        console.setFormatter(BBLogFormatter("%(levelname)s: %(message)s"))
        logger.handlers = [console]
        for event in ui_queue:
            if isinstance(event, logging.LogRecord):
                logger.handle(event)

def fire_ui_handlers(event, d):
    if not _ui_handlers:
        # No UI handlers registered yet, queue up the messages
        ui_queue.append(event)
        return

    errors = []
    for h in _ui_handlers:
        #print "Sending event %s" % event
@@ -121,10 +67,7 @@ def fire_ui_handlers(event, d):
            # We use pickle here since it better handles object instances
            # which xmlrpc's marshaller does not. Events *must* be serializable
            # by pickle.
            if hasattr(_ui_handlers[h].event, "sendpickle"):
                _ui_handlers[h].event.sendpickle((pickle.dumps(event)))
            else:
                _ui_handlers[h].event.send(event)
            _ui_handlers[h].event.send((pickle.dumps(event)))
        except:
            errors.append(h)
    for h in errors:
@@ -133,9 +76,9 @@ def fire_ui_handlers(event, d):
def fire(event, d):
    """Fire off an Event"""

    # We can fire class handlers in the worker process context and this is
    # desired so they get the task based datastore.
    # UI handlers need to be fired in the server context so we defer this. They
    # don't have a datastore so the datastore context isn't a problem.

    fire_class_handlers(event, d)
@@ -146,16 +89,19 @@ def fire(event, d):

def worker_fire(event, d):
    data = "<event>" + pickle.dumps(event) + "</event>"
    worker_pipe.write(data)
    try:
        if os.write(worker_pipe, data) != len (data):
            print "Error sending event to server (short write)"
    except OSError:
        sys.exit(1)

def fire_from_worker(event, d):
    if not event.startswith("<event>") or not event.endswith("</event>"):
        print("Error, not an event %s" % event)
        print "Error, not an event"
        return
    event = pickle.loads(event[7:-8])
    fire_ui_handlers(event, d)
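A sketch of the <event>...</event> framing used between worker and server (the Ping class is a hypothetical stand-in for a real event; 7 and 8 are the lengths of the opening and closing sentinels):

import pickle

class Ping(object):
    def __init__(self):
        self.pid = 0

data = "<event>" + pickle.dumps(Ping()) + "</event>"
assert data.startswith("<event>") and data.endswith("</event>")
event = pickle.loads(data[7:-8])   # strip the sentinels, unpickle the payload
print event.__class__.__name__    # -> Ping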
noop = lambda _: None
def register(name, handler):
    """Register an Event handler"""

@@ -165,19 +111,10 @@ def register(name, handler):

    if handler is not None:
        # handle string containing python code
        if isinstance(handler, basestring):
            tmp = "def %s(e):\n%s" % (name, handler)
            try:
                code = compile(tmp, "%s(e)" % name, "exec")
            except SyntaxError:
                logger.error("Unable to register event handler '%s':\n%s", name,
                             ''.join(traceback.format_exc(limit=0)))
                _handlers[name] = noop
                return
            env = {}
            bb.utils.simple_exec(code, env)
            func = bb.utils.better_eval(name, env)
            _handlers[name] = func
        if type(handler).__name__ == "str":
            tmp = "def tmpHandler(e):\n%s" % handler
            comp = bb.utils.better_compile(tmp, "tmpHandler(e)", "bb.event._registerCode")
            _handlers[name] = comp
        else:
            _handlers[name] = handler
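A usage sketch for register, showing both handler forms accepted above (note the string form must carry its own indentation, since it becomes the body of a generated function):

def on_event(e):
    print "handled %s" % e.__class__.__name__

register("on_event", on_event)                    # callable handler
register("inline", "    print 'inline handler'")  # string handler, compiled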
@@ -204,41 +141,16 @@ def getName(e):
    else:
        return e.__name__

class OperationStarted(Event):
    """An operation has begun"""
    def __init__(self, msg = "Operation Started"):
        Event.__init__(self)
        self.msg = msg

class OperationCompleted(Event):
    """An operation has completed"""
    def __init__(self, total, msg = "Operation Completed"):
        Event.__init__(self)
        self.total = total
        self.msg = msg

class OperationProgress(Event):
    """An operation is in progress"""
    def __init__(self, current, total, msg = "Operation in Progress"):
        Event.__init__(self)
        self.current = current
        self.total = total
        self.msg = msg + ": %s/%s" % (current, total)
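A sketch of how the Operation* trio gives progress reporting a uniform shape (GadgetScan is a hypothetical subclass, not part of BitBake):

class GadgetScan(OperationProgress):
    def __init__(self, current, total):
        OperationProgress.__init__(self, current, total, "Scanning gadgets")

e = GadgetScan(3, 10)
print e.msg    # -> Scanning gadgets: 3/10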
class ConfigParsed(Event):
    """Configuration Parsing Complete"""

class RecipeEvent(Event):
class RecipeParsed(Event):
    """ Recipe Parsing Complete """

    def __init__(self, fn):
        self.fn = fn
        Event.__init__(self)

class RecipePreFinalise(RecipeEvent):
    """ Recipe Parsing Complete but not yet finalised"""

class RecipeParsed(RecipeEvent):
    """ Recipe Parsing Complete """

class StampUpdate(Event):
    """Trigger for any adjustment of the stamp files to happen"""

@@ -297,31 +209,23 @@ class BuildBase(Event):


class BuildStarted(BuildBase, OperationStarted):
class BuildStarted(BuildBase):
    """bbmake build run started"""
    def __init__(self, n, p, failures = 0):
        OperationStarted.__init__(self, "Building Started")
        BuildBase.__init__(self, n, p, failures)

class BuildCompleted(BuildBase, OperationCompleted):

class BuildCompleted(BuildBase):
    """bbmake build run completed"""
    def __init__(self, total, n, p, failures = 0):
        if not failures:
            OperationCompleted.__init__(self, total, "Building Succeeded")
        else:
            OperationCompleted.__init__(self, total, "Building Failed")
        BuildBase.__init__(self, n, p, failures)


class NoProvider(Event):
    """No Provider for an Event"""

    def __init__(self, item, runtime=False, dependees=None, reasons=[]):
    def __init__(self, item, runtime=False):
        Event.__init__(self)
        self._item = item
        self._runtime = runtime
        self._dependees = dependees
        self._reasons = reasons

    def getItem(self):
        return self._item
@@ -356,16 +260,13 @@ class MultipleProviders(Event):
        """
        return self._candidates

class ParseStarted(OperationStarted):
    """Recipe parsing for the runqueue has begun"""
    def __init__(self, total):
        OperationStarted.__init__(self, "Recipe parsing Started")
        self.total = total
class ParseProgress(Event):
    """
    Parsing Progress Event
    """

class ParseCompleted(OperationCompleted):
    """Recipe parsing for the runqueue has completed"""
    def __init__(self, cached, parsed, skipped, masked, virtuals, errors, total):
        OperationCompleted.__init__(self, total, "Recipe parsing Completed")
        Event.__init__(self)
        self.cached = cached
        self.parsed = parsed
        self.skipped = skipped
@@ -373,45 +274,8 @@ class ParseCompleted(OperationCompleted):
        self.masked = masked
        self.errors = errors
        self.sofar = cached + parsed

class ParseProgress(OperationProgress):
    """Recipe parsing progress"""
    def __init__(self, current, total):
        OperationProgress.__init__(self, current, total, "Recipe parsing")


class CacheLoadStarted(OperationStarted):
    """Loading of the dependency cache has begun"""
    def __init__(self, total):
        OperationStarted.__init__(self, "Loading cache Started")
        self.total = total

class CacheLoadProgress(OperationProgress):
    """Cache loading progress"""
    def __init__(self, current, total):
        OperationProgress.__init__(self, current, total, "Loading cache")

class CacheLoadCompleted(OperationCompleted):
    """Cache loading is complete"""
    def __init__(self, total, num_entries):
        OperationCompleted.__init__(self, total, "Loading cache Completed")
        self.num_entries = num_entries

class TreeDataPreparationStarted(OperationStarted):
    """Tree data preparation started"""
    def __init__(self):
        OperationStarted.__init__(self, "Preparing tree data Started")

class TreeDataPreparationProgress(OperationProgress):
    """Tree data preparation is in progress"""
    def __init__(self, current, total):
        OperationProgress.__init__(self, current, total, "Preparing tree data")

class TreeDataPreparationCompleted(OperationCompleted):
    """Tree data preparation completed"""
    def __init__(self, total):
        OperationCompleted.__init__(self, total, "Preparing tree data Completed")

class DepTreeGenerated(Event):
    """
    Event when a dependency tree has been generated
@@ -421,99 +285,3 @@ class DepTreeGenerated(Event):
        Event.__init__(self)
        self._depgraph = depgraph

class TargetsTreeGenerated(Event):
    """
    Event when a set of buildable targets has been generated
    """
    def __init__(self, model):
        Event.__init__(self)
        self._model = model

class FilesMatchingFound(Event):
    """
    Event when a list of files matching the supplied pattern has
    been generated
    """
    def __init__(self, pattern, matches):
        Event.__init__(self)
        self._pattern = pattern
        self._matches = matches

class CoreBaseFilesFound(Event):
    """
    Event when a list of appropriate config files has been generated
    """
    def __init__(self, paths):
        Event.__init__(self)
        self._paths = paths

class ConfigFilesFound(Event):
    """
    Event when a list of appropriate config files has been generated
    """
    def __init__(self, variable, values):
        Event.__init__(self)
        self._variable = variable
        self._values = values

class ConfigFilePathFound(Event):
    """
    Event when a path for a config file has been found
    """
    def __init__(self, path):
        Event.__init__(self)
        self._path = path

class MsgBase(Event):
    """Base class for messages"""

    def __init__(self, msg):
        self._message = msg
        Event.__init__(self)

class MsgDebug(MsgBase):
    """Debug Message"""

class MsgNote(MsgBase):
    """Note Message"""

class MsgWarn(MsgBase):
    """Warning Message"""

class MsgError(MsgBase):
    """Error Message"""

class MsgFatal(MsgBase):
    """Fatal Message"""

class MsgPlain(MsgBase):
    """General output"""

class LogHandler(logging.Handler):
    """Dispatch logging messages as bitbake events"""

    def emit(self, record):
        if record.exc_info:
            etype, value, tb = record.exc_info
            if hasattr(tb, 'tb_next'):
                tb = list(bb.exceptions.extract_traceback(tb, context=3))
            record.bb_exc_info = (etype, value, tb)
            record.exc_info = None
        fire(record, None)

    def filter(self, record):
        record.taskpid = worker_pid
        return True

class RequestPackageInfo(Event):
    """
    Event to request package information
    """

class PackageInfo(Event):
    """
    Package information for GUI
    """
    def __init__(self, pkginfolist):
        Event.__init__(self)
        self._pkginfolist = pkginfolist
@@ -1,84 +0,0 @@
from __future__ import absolute_import
import inspect
import traceback
import bb.namedtuple_with_abc
from collections import namedtuple


class TracebackEntry(namedtuple.abc):
    """Pickleable representation of a traceback entry"""
    _fields = 'filename lineno function args code_context index'
    _header = '  File "{0.filename}", line {0.lineno}, in {0.function}{0.args}'

    def format(self, formatter=None):
        if not self.code_context:
            return self._header.format(self) + '\n'

        formatted = [self._header.format(self) + ':\n']

        for lineindex, line in enumerate(self.code_context):
            if formatter:
                line = formatter(line)

            if lineindex == self.index:
                formatted.append('    >%s' % line)
            else:
                formatted.append('     %s' % line)
        return formatted

    def __str__(self):
        return ''.join(self.format())

def _get_frame_args(frame):
    """Get the formatted arguments and class (if available) for a frame"""
    arginfo = inspect.getargvalues(frame)
    if not arginfo.args:
        return '', None

    firstarg = arginfo.args[0]
    if firstarg == 'self':
        self = arginfo.locals['self']
        cls = self.__class__.__name__

        arginfo.args.pop(0)
        del arginfo.locals['self']
    else:
        cls = None

    formatted = inspect.formatargvalues(*arginfo)
    return formatted, cls

def extract_traceback(tb, context=1):
    frames = inspect.getinnerframes(tb, context)
    for frame, filename, lineno, function, code_context, index in frames:
        formatted_args, cls = _get_frame_args(frame)
        if cls:
            function = '%s.%s' % (cls, function)
        yield TracebackEntry(filename, lineno, function, formatted_args,
                             code_context, index)

def format_extracted(extracted, formatter=None, limit=None):
    if limit:
        extracted = extracted[-limit:]

    formatted = []
    for tracebackinfo in extracted:
        formatted.extend(tracebackinfo.format(formatter))
    return formatted


def format_exception(etype, value, tb, context=1, limit=None, formatter=None):
    formatted = ['Traceback (most recent call last):\n']

    if hasattr(tb, 'tb_next'):
        tb = extract_traceback(tb, context)

    formatted.extend(format_extracted(tb, formatter, limit))
    formatted.extend(traceback.format_exception_only(etype, value))
    return formatted

def to_string(exc):
    if isinstance(exc, SystemExit):
        if not isinstance(exc.code, basestring):
            return 'Exited with "%d"' % exc.code
    return str(exc)
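A usage sketch for the helpers above, formatting a live exception with three lines of code context:

import sys
try:
    1 / 0
except ZeroDivisionError:
    etype, value, tb = sys.exc_info()
    print ''.join(format_exception(etype, value, tb, context=3))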
785 bitbake/lib/bb/fetch/__init__.py Normal file
@@ -0,0 +1,785 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

Classes for obtaining upstream sources for the
BitBake build tools.
"""

# Copyright (C) 2003, 2004  Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os, re
import bb
from bb import data
from bb import persist_data

class MalformedUrl(Exception):
    """Exception raised when encountering an invalid url"""

class FetchError(Exception):
    """Exception raised when a download fails"""

class NoMethodError(Exception):
    """Exception raised when there is no method to obtain a supplied url or set of urls"""

class MissingParameterError(Exception):
    """Exception raised when a fetch method is missing a critical parameter in the url"""

class ParameterError(Exception):
    """Exception raised when a url cannot be processed due to invalid parameters."""

class MD5SumError(Exception):
    """Exception raised when the MD5SUM of a file does not match the expected one"""

class InvalidSRCREV(Exception):
    """Exception raised when an invalid SRCREV is encountered"""

def decodeurl(url):
    """Decodes a URL into the tokens (scheme, network location, path,
    user, password, parameters).

    >>> decodeurl("http://www.google.com/index.html")
    ('http', 'www.google.com', '/index.html', '', '', {})

    >>> decodeurl("file://gas/COPYING")
    ('file', '', 'gas/COPYING', '', '', {})

    CVS url with username, host and cvsroot. The cvs module to check out is in the
    parameters:

    >>> decodeurl("cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg")
    ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', '', {'module': 'familiar/dist/ipkg'})

    Ditto, but this time the username has a password part. And we also request a special tag
    to check out.

    >>> decodeurl("cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;module=familiar/dist/ipkg;tag=V0-99-81")
    ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', 'anonymous', {'tag': 'V0-99-81', 'module': 'familiar/dist/ipkg'})
    """

    m = re.compile('(?P<type>[^:]*)://((?P<user>.+)@)?(?P<location>[^;]+)(;(?P<parm>.*))?').match(url)
    if not m:
        raise MalformedUrl(url)

    type = m.group('type')
    location = m.group('location')
    if not location:
        raise MalformedUrl(url)
    user = m.group('user')
    parm = m.group('parm')

    locidx = location.find('/')
    if locidx != -1 and type.lower() != 'file':
        host = location[:locidx]
        path = location[locidx:]
    else:
        host = ""
        path = location
    if user:
        m = re.compile('(?P<user>[^:]+)(:?(?P<pswd>.*))').match(user)
        if m:
            user = m.group('user')
            pswd = m.group('pswd')
    else:
        user = ''
        pswd = ''

    p = {}
    if parm:
        for s in parm.split(';'):
            s1, s2 = s.split('=')
            p[s1] = s2

    return (type, host, path, user, pswd, p)

def encodeurl(decoded):
    """Encodes a URL from tokens (scheme, network location, path,
    user, password, parameters).

    >>> encodeurl(['http', 'www.google.com', '/index.html', '', '', {}])
    'http://www.google.com/index.html'

    CVS with username, host and cvsroot. The cvs module to check out is in the
    parameters:

    >>> encodeurl(['cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', '', {'module': 'familiar/dist/ipkg'}])
    'cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg'

    Ditto, but this time the username has a password part. And we also request a special tag
    to check out.

    >>> encodeurl(['cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', 'anonymous', {'tag': 'V0-99-81', 'module': 'familiar/dist/ipkg'}])
    'cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg'
    """

    (type, host, path, user, pswd, p) = decoded

    if not type or not path:
        bb.msg.fatal(bb.msg.domain.Fetcher, "invalid or missing parameters for url encoding")
    url = '%s://' % type
    if user:
        url += "%s" % user
        if pswd:
            url += ":%s" % pswd
        url += "@"
    if host:
        url += "%s" % host
    url += "%s" % path
    if p:
        for parm in p:
            url += ";%s=%s" % (parm, p[parm])

    return url
def uri_replace(uri, uri_find, uri_replace, d):
|
||||
# bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: operating on %s" % uri)
|
||||
if not uri or not uri_find or not uri_replace:
|
||||
bb.msg.debug(1, bb.msg.domain.Fetcher, "uri_replace: passed an undefined value, not replacing")
|
||||
uri_decoded = list(bb.decodeurl(uri))
|
||||
uri_find_decoded = list(bb.decodeurl(uri_find))
|
||||
uri_replace_decoded = list(bb.decodeurl(uri_replace))
|
||||
result_decoded = ['','','','','',{}]
|
||||
for i in uri_find_decoded:
|
||||
loc = uri_find_decoded.index(i)
|
||||
result_decoded[loc] = uri_decoded[loc]
|
||||
import types
|
||||
if type(i) == types.StringType:
|
||||
if (re.match(i, uri_decoded[loc])):
|
||||
result_decoded[loc] = re.sub(i, uri_replace_decoded[loc], uri_decoded[loc])
|
||||
if uri_find_decoded.index(i) == 2:
|
||||
if d:
|
||||
localfn = bb.fetch.localpath(uri, d)
|
||||
if localfn:
|
||||
result_decoded[loc] = os.path.dirname(result_decoded[loc]) + "/" + os.path.basename(bb.fetch.localpath(uri, d))
|
||||
# bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: matching %s against %s and replacing with %s" % (i, uri_decoded[loc], uri_replace_decoded[loc]))
|
||||
else:
|
||||
# bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: no match")
|
||||
return uri
|
||||
# else:
|
||||
# for j in i:
|
||||
# FIXME: apply replacements against options
|
||||
return bb.encodeurl(result_decoded)
|
||||
|
||||
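# Illustrative sketch (not part of the original file): uri_replace() is the
# primitive behind PREMIRRORS/MIRRORS handling. Given a mirror pair such as
# (host and path below are made-up examples)
#
#   find    = "svn://.*/.*"
#   replace = "http://mirror.example.com/sources/"
#
# a call like
#
#   uri_replace("svn://svn.example.org/trunk;module=foo", find, replace, d)
#
# decodes all three URLs, regex-matches each component of `find` against the
# original URI and, on a match, substitutes in the corresponding component
# of `replace`, re-encoding the result with bb.encodeurl().
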
methods = []
urldata_cache = {}
saved_headrevs = {}

def fetcher_init(d):
    """
    Called to initialize the fetchers once the configuration data is known.
    Calls before this must not hit the cache.
    """
    pd = persist_data.PersistData(d)
    # When to drop SCM head revisions is controlled by user policy
    srcrev_policy = bb.data.getVar('BB_SRCREV_POLICY', d, 1) or "clear"
    if srcrev_policy == "cache":
        bb.msg.debug(1, bb.msg.domain.Fetcher, "Keeping SRCREV cache due to cache policy of: %s" % srcrev_policy)
    elif srcrev_policy == "clear":
        bb.msg.debug(1, bb.msg.domain.Fetcher, "Clearing SRCREV cache due to cache policy of: %s" % srcrev_policy)
        try:
            bb.fetch.saved_headrevs = pd.getKeyValues("BB_URI_HEADREVS")
        except:
            pass
        pd.delDomain("BB_URI_HEADREVS")
    else:
        bb.msg.fatal(bb.msg.domain.Fetcher, "Invalid SRCREV cache policy of: %s" % srcrev_policy)

    for m in methods:
        if hasattr(m, "init"):
            m.init(d)

    # Make sure our domains exist
    pd.addDomain("BB_URI_HEADREVS")
    pd.addDomain("BB_URI_LOCALCOUNT")

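# Illustrative sketch (not part of the original file): the policy above comes
# from the configuration, e.g. in conf/local.conf:
#
#   BB_SRCREV_POLICY = "cache"
#
# keeps previously resolved SCM head revisions between builds, while the
# default "clear" policy drops them, saving a copy in saved_headrevs so that
# fetcher_compare_revisions() below can report what changed.
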
def fetcher_compare_revisions(d):
    """
    Compare the revisions in the persistent cache with current values and
    return true/false on whether they've changed.
    """

    pd = persist_data.PersistData(d)
    data = pd.getKeyValues("BB_URI_HEADREVS")
    data2 = bb.fetch.saved_headrevs

    changed = False
    for key in data:
        if key not in data2 or data2[key] != data[key]:
            bb.msg.debug(1, bb.msg.domain.Fetcher, "%s changed" % key)
            changed = True
            return True
        else:
            bb.msg.debug(2, bb.msg.domain.Fetcher, "%s did not change" % key)
    return False

# Function call order is usually:
#   1. init
#   2. go
#   3. localpaths
# localpath can be called at any time

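# Illustrative sketch (not part of the original file) of that call sequence,
# assuming `d` is an initialised datastore for a recipe:
#
#   urls = bb.data.getVar('SRC_URI', d, 1).split()
#   bb.fetch.init(urls, d)           # parse the URLs, set up local paths
#   bb.fetch.go(d)                   # download, trying mirrors as needed
#   paths = bb.fetch.localpaths(d)   # where the fetched files ended up
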
def init(urls, d, setup=True):
    urldata = {}
    fn = bb.data.getVar('FILE', d, 1)
    if fn in urldata_cache:
        urldata = urldata_cache[fn]

    for url in urls:
        if url not in urldata:
            urldata[url] = FetchData(url, d)

    if setup:
        for url in urldata:
            if not urldata[url].setup:
                urldata[url].setup_localpath(d)

    urldata_cache[fn] = urldata
    return urldata

def go(d, urls=None):
    """
    Fetch all urls
    init must have previously been called
    """
    if not urls:
        urls = d.getVar("SRC_URI", 1).split()
    urldata = init(urls, d, True)

    for u in urls:
        ud = urldata[u]
        m = ud.method
        if ud.localfile:
            if not m.forcefetch(u, ud, d) and os.path.exists(ud.md5):
                # File already present along with md5 stamp file
                # Touch md5 file to show activity
                try:
                    os.utime(ud.md5, None)
                except:
                    # Errors aren't fatal here
                    pass
                continue
            lf = bb.utils.lockfile(ud.lockfile)
            if not m.forcefetch(u, ud, d) and os.path.exists(ud.md5):
                # If someone else fetched this before we got the lock,
                # notice and don't try again
                try:
                    os.utime(ud.md5, None)
                except:
                    # Errors aren't fatal here
                    pass
                bb.utils.unlockfile(lf)
                continue

        # First try fetching uri, u, from PREMIRRORS
        mirrors = [ i.split() for i in (bb.data.getVar('PREMIRRORS', d, 1) or "").split('\n') if i ]
        localpath = try_mirrors(d, u, mirrors)
        if not localpath:
            # Next try fetching from the original uri, u
            try:
                m.go(u, ud, d)
                localpath = ud.localpath
            except:
                # Finally, try fetching uri, u, from MIRRORS
                mirrors = [ i.split() for i in (bb.data.getVar('MIRRORS', d, 1) or "").split('\n') if i ]
                localpath = try_mirrors(d, u, mirrors)

        if localpath:
            ud.localpath = localpath

        if ud.localfile:
            if not m.forcefetch(u, ud, d):
                Fetch.write_md5sum(u, ud, d)
            bb.utils.unlockfile(lf)

def checkstatus(d):
    """
    Check all urls exist upstream
    init must have previously been called
    """
    urldata = init([], d, True)

    for u in urldata:
        ud = urldata[u]
        m = ud.method
        bb.msg.note(1, bb.msg.domain.Fetcher, "Testing URL %s" % u)
        # First try checking uri, u, from PREMIRRORS
        mirrors = [ i.split() for i in (bb.data.getVar('PREMIRRORS', d, 1) or "").split('\n') if i ]
        ret = try_mirrors(d, u, mirrors, True)
        if not ret:
            # Next try checking from the original uri, u
            try:
                ret = m.checkstatus(u, ud, d)
            except:
                # Finally, try checking uri, u, from MIRRORS
                mirrors = [ i.split() for i in (bb.data.getVar('MIRRORS', d, 1) or "").split('\n') if i ]
                ret = try_mirrors(d, u, mirrors, True)

        if not ret:
            bb.msg.error(bb.msg.domain.Fetcher, "URL %s doesn't work" % u)

def localpaths(d):
    """
    Return a list of the local filenames, assuming successful fetch
    """
    local = []
    urldata = init([], d, True)

    for u in urldata:
        ud = urldata[u]
        local.append(ud.localpath)

    return local

srcrev_internal_call = False

def get_srcrev(d):
    """
    Return the version string for the current package
    (usually to be used as PV)
    Most packages usually only have one SCM so we just pass on the call.
    In the multi SCM case, we build a value based on SRCREV_FORMAT which must
    have been set.
    """

    #
    # Ugly code alert. localpath in the fetchers will try to evaluate SRCREV which
    # could translate into a call to here. If it does, we need to catch this
    # and provide some way so it knows get_srcrev is active instead of being
    # some number etc. hence the srcrev_internal_call tracking and the magic
    # "SRCREVINACTION" return value.
    #
    # Neater solutions welcome!
    #
    if bb.fetch.srcrev_internal_call:
        return "SRCREVINACTION"

    scms = []

    # Only call setup_localpath on URIs which supports_srcrev()
    urldata = init(bb.data.getVar('SRC_URI', d, 1).split(), d, False)
    for u in urldata:
        ud = urldata[u]
        if ud.method.supports_srcrev():
            if not ud.setup:
                ud.setup_localpath(d)
            scms.append(u)

    if len(scms) == 0:
        bb.msg.error(bb.msg.domain.Fetcher, "SRCREV was used yet no valid SCM was found in SRC_URI")
        raise ParameterError

    bb.data.setVar('__BB_DONT_CACHE', '1', d)

    if len(scms) == 1:
        return urldata[scms[0]].method.sortable_revision(scms[0], urldata[scms[0]], d)

    #
    # Multiple SCMs are in SRC_URI so we resort to SRCREV_FORMAT
    #
    format = bb.data.getVar('SRCREV_FORMAT', d, 1)
    if not format:
        bb.msg.error(bb.msg.domain.Fetcher, "The SRCREV_FORMAT variable must be set when multiple SCMs are used.")
        raise ParameterError

    for scm in scms:
        if 'name' in urldata[scm].parm:
            name = urldata[scm].parm["name"]
            rev = urldata[scm].method.sortable_revision(scm, urldata[scm], d)
            format = format.replace(name, rev)

    return format

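# Illustrative sketch (not part of the original file): with two named SCMs in
# a recipe (names and hosts are made up),
#
#   SRC_URI = "git://git.example.com/app;name=app \
#              svn://svn.example.com/proj;module=lib;name=lib"
#   SRCREV_FORMAT = "app_lib"
#
# get_srcrev() replaces each name in SRCREV_FORMAT with that SCM's sortable
# revision, producing something like "4+abc123..._7+1432".
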
def localpath(url, d, cache=True):
    """
    Called from the parser with cache=False since the cache isn't ready
    at this point. Also called from classes in OE, e.g. patch.bbclass.
    """
    ud = init([url], d)
    if ud[url].method:
        return ud[url].localpath
    return url

def runfetchcmd(cmd, d, quiet=False):
    """
    Run cmd returning the command output
    Raise an error if interrupted or cmd fails
    Optionally echo command output to stdout
    """

    # Need to export PATH as binary could be in metadata paths
    # rather than host provided
    # Also include some other variables.
    # FIXME: Should really include all export variables?
    exportvars = ['PATH', 'GIT_PROXY_COMMAND', 'GIT_PROXY_HOST',
                  'GIT_PROXY_PORT', 'GIT_CONFIG', 'http_proxy', 'ftp_proxy',
                  'SSH_AUTH_SOCK', 'SSH_AGENT_PID', 'HOME']

    for var in exportvars:
        val = data.getVar(var, d, True)
        if val:
            cmd = 'export ' + var + '=%s; %s' % (val, cmd)

    bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % cmd)

    # redirect stderr to stdout
    stdout_handle = os.popen(cmd + " 2>&1", "r")
    output = ""

    while 1:
        line = stdout_handle.readline()
        if not line:
            break
        if not quiet:
            print line,
        output += line

    status = stdout_handle.close() or 0
    # os.popen().close() returns a wait status: the exit code lives in the
    # high byte and the terminating signal (if any) in the low byte
    exitstatus = status >> 8
    signal = status & 0xff

    if signal:
        raise FetchError("Fetch command %s failed with signal %s, output:\n%s" % (cmd, signal, output))
    elif exitstatus != 0:
        raise FetchError("Fetch command %s failed with exit code %s, output:\n%s" % (cmd, exitstatus, output))

    return output

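# Illustrative sketch (not part of the original file):
#
#   rev = runfetchcmd("git ls-remote git://git.example.com/app master",
#                     d, quiet=True).split()[0]
#
# runs the command with PATH and the proxy variables exported, captures
# stdout and stderr, and raises FetchError on a non-zero exit.
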
def try_mirrors(d, uri, mirrors, check=False):
    """
    Try to use a mirrored version of the sources.
    This method will be automatically called before the fetchers go.

    d       is a bb.data instance
    uri     is the original uri we're trying to download
    mirrors is the list of mirrors we're going to try
    """
    fpath = os.path.join(data.getVar("DL_DIR", d, 1), os.path.basename(uri))
    if not check and os.access(fpath, os.R_OK):
        bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists, skipping checkout." % fpath)
        return fpath

    ld = d.createCopy()
    for (find, replace) in mirrors:
        newuri = uri_replace(uri, find, replace, ld)
        if newuri != uri:
            try:
                ud = FetchData(newuri, ld)
            except bb.fetch.NoMethodError:
                bb.msg.debug(1, bb.msg.domain.Fetcher, "No method for %s" % uri)
                continue

            ud.setup_localpath(ld)

            try:
                if check:
                    ud.method.checkstatus(newuri, ud, ld)
                else:
                    ud.method.go(newuri, ud, ld)
                return ud.localpath
            except (bb.fetch.MissingParameterError,
                    bb.fetch.FetchError,
                    bb.fetch.MD5SumError):
                import sys
                (type, value, traceback) = sys.exc_info()
                bb.msg.debug(2, bb.msg.domain.Fetcher, "Mirror fetch failure: %s" % value)
                continue
    return None

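# Illustrative sketch (not part of the original file): callers build the
# `mirrors` argument by splitting each PREMIRRORS/MIRRORS line into a
# (find, replace) pair, e.g.
#
#   MIRRORS = "ftp://.*/.*   http://mirror.example.com/sources/ \n"
#
# becomes [['ftp://.*/.*', 'http://mirror.example.com/sources/']], and each
# pair is applied through uri_replace() above.
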
class FetchData(object):
    """
    A class which represents the fetcher state for a given URI.
    """
    def __init__(self, url, d):
        self.localfile = ""
        (self.type, self.host, self.path, self.user, self.pswd, self.parm) = bb.decodeurl(data.expand(url, d))
        self.date = Fetch.getSRCDate(self, d)
        self.url = url
        if not self.user and "user" in self.parm:
            self.user = self.parm["user"]
        if not self.pswd and "pswd" in self.parm:
            self.pswd = self.parm["pswd"]
        self.setup = False
        for m in methods:
            if m.supports(url, self, d):
                self.method = m
                return
        raise NoMethodError("Missing implementation for url %s" % url)

    def setup_localpath(self, d):
        self.setup = True
        if "localpath" in self.parm:
            # if user sets localpath for file, use it instead.
            self.localpath = self.parm["localpath"]
        else:
            premirrors = bb.data.getVar('PREMIRRORS', d, True)
            local = ""
            if premirrors and self.url:
                aurl = self.url.split(";")[0]
                mirrors = [ i.split() for i in (premirrors or "").split('\n') if i ]
                for (find, replace) in mirrors:
                    if replace.startswith("file://"):
                        path = aurl.split("://")[1]
                        path = path.split(";")[0]
                        local = replace.split("://")[1] + os.path.basename(path)
                        if local == aurl or not os.path.exists(local) or os.path.isdir(local):
                            local = ""
                self.localpath = local
            if not local:
                try:
                    bb.fetch.srcrev_internal_call = True
                    self.localpath = self.method.localpath(self.url, self, d)
                finally:
                    bb.fetch.srcrev_internal_call = False
                # We have to clear data's internal caches since the cached value of SRCREV is now wrong.
                # Horrible...
                bb.data.delVar("ISHOULDNEVEREXIST", d)

        # Note: These files should always be in DL_DIR whereas localpath may not be.
        basepath = bb.data.expand("${DL_DIR}/%s" % os.path.basename(self.localpath), d)
        self.md5 = basepath + '.md5'
        self.lockfile = basepath + '.lock'


class Fetch(object):
    """Base class for 'fetch'ing data"""

    def __init__(self, urls=[]):
        self.urls = []

    def supports(self, url, urldata, d):
        """
        Check to see if this fetch class supports a given url.
        """
        return 0

    def localpath(self, url, urldata, d):
        """
        Return the local filename of a given url assuming a successful fetch.
        Can also setup variables in urldata for use in go (saving code duplication
        and duplicate code execution)
        """
        return url

    def setUrls(self, urls):
        self.__urls = urls

    def getUrls(self):
        return self.__urls

    urls = property(getUrls, setUrls, None, "Urls property")

    def forcefetch(self, url, urldata, d):
        """
        Force a fetch, even if localpath exists?
        """
        return False

    def supports_srcrev(self):
        """
        The fetcher supports auto source revisions (SRCREV)
        """
        return False

    def go(self, url, urldata, d):
        """
        Fetch urls
        Assumes localpath was called first
        """
        raise NoMethodError("Missing implementation for url")

    def checkstatus(self, url, urldata, d):
        """
        Check the status of a URL
        Assumes localpath was called first
        """
        bb.msg.note(1, bb.msg.domain.Fetcher, "URL %s could not be checked for status since no method exists." % url)
        return True

    def getSRCDate(urldata, d):
        """
        Return the SRC Date for the component

        d the bb.data module
        """
        if "srcdate" in urldata.parm:
            return urldata.parm['srcdate']

        pn = data.getVar("PN", d, 1)

        if pn:
            return data.getVar("SRCDATE_%s" % pn, d, 1) or data.getVar("CVSDATE_%s" % pn, d, 1) or data.getVar("SRCDATE", d, 1) or data.getVar("CVSDATE", d, 1) or data.getVar("DATE", d, 1)

        return data.getVar("SRCDATE", d, 1) or data.getVar("CVSDATE", d, 1) or data.getVar("DATE", d, 1)
    getSRCDate = staticmethod(getSRCDate)

    def srcrev_internal_helper(ud, d):
        """
        Return:
        a) a source revision if specified
        b) True if auto srcrev is in action
        c) False otherwise
        """

        if 'rev' in ud.parm:
            return ud.parm['rev']

        if 'tag' in ud.parm:
            return ud.parm['tag']

        rev = None
        if 'name' in ud.parm:
            pn = data.getVar("PN", d, 1)
            rev = data.getVar("SRCREV_pn-" + pn + "_" + ud.parm['name'], d, 1)
        if not rev:
            rev = data.getVar("SRCREV", d, 1)
        if rev == "INVALID":
            raise InvalidSRCREV("Please set SRCREV to a valid value")
        if not rev:
            return False
        if rev == "SRCREVINACTION":
            return True
        return rev

    srcrev_internal_helper = staticmethod(srcrev_internal_helper)

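    # Illustrative sketch (not part of the original file): for a recipe "app"
    # with SRC_URI = "git://git.example.com/app;name=app", the helper above
    # resolves the revision in this order:
    #
    #   SRCREV_pn-app_app = "abc123..."   # per-recipe, per-name override
    #   SRCREV = "${AUTOREV}"             # generic fallback
    #
    # and returns True when SRCREV evaluates to the magic "SRCREVINACTION",
    # i.e. when automatic head-revision lookup is in effect.
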
    def localcount_internal_helper(ud, d):
        """
        Return:
        a) a locked localcount if specified
        b) None otherwise
        """

        localcount = None
        if 'name' in ud.parm:
            pn = data.getVar("PN", d, 1)
            localcount = data.getVar("LOCALCOUNT_" + ud.parm['name'], d, 1)
        if not localcount:
            localcount = data.getVar("LOCALCOUNT", d, 1)
        return localcount

    localcount_internal_helper = staticmethod(localcount_internal_helper)

    def verify_md5sum(ud, got_sum):
        """
        Verify the md5sum we wanted with the one we got
        """
        wanted_sum = None
        if 'md5sum' in ud.parm:
            wanted_sum = ud.parm['md5sum']
        if not wanted_sum:
            return True

        return wanted_sum == got_sum
    verify_md5sum = staticmethod(verify_md5sum)

    def write_md5sum(url, ud, d):
        md5data = bb.utils.md5_file(ud.localpath)
        # verify the md5sum
        if not Fetch.verify_md5sum(ud, md5data):
            raise MD5SumError(url)

        md5out = file(ud.md5, 'w')
        md5out.write(md5data)
        md5out.close()
    write_md5sum = staticmethod(write_md5sum)

    def latest_revision(self, url, ud, d):
        """
        Look in the cache for the latest revision, if not present ask the SCM.
        """
        if not hasattr(self, "_latest_revision"):
            raise ParameterError

        pd = persist_data.PersistData(d)
        key = self.generate_revision_key(url, ud, d)
        rev = pd.getValue("BB_URI_HEADREVS", key)
        if rev != None:
            return str(rev)

        rev = self._latest_revision(url, ud, d)
        pd.setValue("BB_URI_HEADREVS", key, rev)
        return rev

    def sortable_revision(self, url, ud, d):
        """
        Return a sortable revision number ("<localcount>+<revision>") for the url.
        """
        if hasattr(self, "_sortable_revision"):
            return self._sortable_revision(url, ud, d)

        pd = persist_data.PersistData(d)
        key = self.generate_revision_key(url, ud, d)

        latest_rev = self._build_revision(url, ud, d)
        last_rev = pd.getValue("BB_URI_LOCALCOUNT", key + "_rev")
        uselocalcount = bb.data.getVar("BB_LOCALCOUNT_OVERRIDE", d, True) or False
        count = None
        if uselocalcount:
            count = Fetch.localcount_internal_helper(ud, d)
        if count is None:
            count = pd.getValue("BB_URI_LOCALCOUNT", key + "_count")

        if last_rev == latest_rev:
            return str(count + "+" + latest_rev)

        buildindex_provided = hasattr(self, "_sortable_buildindex")
        if buildindex_provided:
            count = self._sortable_buildindex(url, ud, d, latest_rev)

        if count is None:
            count = "0"
        elif uselocalcount or buildindex_provided:
            count = str(count)
        else:
            count = str(int(count) + 1)

        pd.setValue("BB_URI_LOCALCOUNT", key + "_rev", latest_rev)
        pd.setValue("BB_URI_LOCALCOUNT", key + "_count", count)

        return str(count + "+" + latest_rev)

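    # Illustrative sketch (not part of the original file): for a URL whose
    # cached local count is 3, a newly seen upstream revision abc1234 yields
    # "4+abc1234"; the monotonically growing count keeps the value sortable
    # for PV even though SCM hashes themselves are not ordered.
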
    def generate_revision_key(self, url, ud, d):
        key = self._revision_key(url, ud, d)
        return "%s-%s" % (key, bb.data.getVar("PN", d, True) or "")

import cvs
import git
import local
import svn
import wget
import svk
import ssh
import perforce
import bzr
import hg
import osc
import repo

methods.append(local.Local())
methods.append(wget.Wget())
methods.append(svn.Svn())
methods.append(git.Git())
methods.append(cvs.Cvs())
methods.append(svk.Svk())
methods.append(ssh.SSH())
methods.append(perforce.Perforce())
methods.append(bzr.Bzr())
methods.append(hg.Hg())
methods.append(osc.Osc())
methods.append(repo.Repo())
148  bitbake/lib/bb/fetch/bzr.py  Normal file
@@ -0,0 +1,148 @@
"""
BitBake 'Fetch' implementation for bzr.

"""

# Copyright (C) 2007 Ross Burton
# Copyright (C) 2007 Richard Purdie
#
# Classes for obtaining upstream sources for the
# BitBake build tools.
# Copyright (C) 2003, 2004  Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os
import sys
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import runfetchcmd

class Bzr(Fetch):
    def supports(self, url, ud, d):
        return ud.type in ['bzr']

    def localpath(self, url, ud, d):

        # Create paths to bzr checkouts
        relpath = ud.path
        if relpath.startswith('/'):
            # Remove leading slash as os.path.join can't cope
            relpath = relpath[1:]
        ud.pkgdir = os.path.join(data.expand('${BZRDIR}', d), ud.host, relpath)

        revision = Fetch.srcrev_internal_helper(ud, d)
        if revision is True:
            ud.revision = self.latest_revision(url, ud, d)
        elif revision:
            ud.revision = revision

        if not ud.revision:
            ud.revision = self.latest_revision(url, ud, d)

        ud.localfile = data.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

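    # Illustrative sketch (not part of the original file): this fetcher is
    # reached with a URL such as (host and path made up)
    #
    #   SRC_URI = "bzr://bzr.example.com/project/trunk;proto=http"
    #
    # checked out under ${BZRDIR}/bzr.example.com/project/trunk and tarred
    # up as bzr_<host>_<path>_<revision>.tar.gz in DL_DIR.
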
    def _buildbzrcommand(self, ud, d, command):
        """
        Build up a bzr commandline based on ud
        command is "fetch", "update", "revno"
        """

        basecmd = data.expand('${FETCHCMD_bzr}', d)

        proto = "http"
        if "proto" in ud.parm:
            proto = ud.parm["proto"]

        bzrroot = ud.host + ud.path

        options = []

        if command == "revno":
            bzrcmd = "%s revno %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
        else:
            if ud.revision:
                options.append("-r %s" % ud.revision)

            if command == "fetch":
                bzrcmd = "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
            elif command == "update":
                bzrcmd = "%s pull %s --overwrite" % (basecmd, " ".join(options))
            else:
                raise FetchError("Invalid bzr command %s" % command)

        return bzrcmd

    def go(self, loc, ud, d):
        """Fetch url"""

        if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
            bzrcmd = self._buildbzrcommand(ud, d, "update")
            bb.msg.debug(1, bb.msg.domain.Fetcher, "BZR Update %s" % loc)
            os.chdir(os.path.join(ud.pkgdir, os.path.basename(ud.path)))
            runfetchcmd(bzrcmd, d)
        else:
            os.system("rm -rf %s" % os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)))
            bzrcmd = self._buildbzrcommand(ud, d, "fetch")
            bb.msg.debug(1, bb.msg.domain.Fetcher, "BZR Checkout %s" % loc)
            bb.mkdirhier(ud.pkgdir)
            os.chdir(ud.pkgdir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % bzrcmd)
            runfetchcmd(bzrcmd, d)

        os.chdir(ud.pkgdir)
        # tar them up to a defined filename
        try:
            runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.basename(ud.pkgdir)), d)
        except:
            t, v, tb = sys.exc_info()
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise t, v, tb

    def supports_srcrev(self):
        return True

    def _revision_key(self, url, ud, d):
        """
        Return a unique key for the url
        """
        return "bzr:" + ud.pkgdir

    def _latest_revision(self, url, ud, d):
        """
        Return the latest upstream revision number
        """
        bb.msg.debug(2, bb.msg.domain.Fetcher, "BZR fetcher hitting network for %s" % url)

        output = runfetchcmd(self._buildbzrcommand(ud, d, "revno"), d, True)

        return output.strip()

    def _sortable_revision(self, url, ud, d):
        """
        Return a sortable revision number which in our case is the revision number
        """

        return self._build_revision(url, ud, d)

    def _build_revision(self, url, ud, d):
        return ud.revision

177  bitbake/lib/bb/fetch/cvs.py  Normal file
@@ -0,0 +1,177 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

Classes for obtaining upstream sources for the
BitBake build tools.

"""

# Copyright (C) 2003, 2004  Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
#

import os
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError

class Cvs(Fetch):
    """
    Class to fetch a module or modules from cvs repositories
    """
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with cvs.
        """
        return ud.type in ['cvs']

    def localpath(self, url, ud, d):
        if not "module" in ud.parm:
            raise MissingParameterError("cvs method needs a 'module' parameter")
        ud.module = ud.parm["module"]

        ud.tag = ""
        if 'tag' in ud.parm:
            ud.tag = ud.parm['tag']

        # Override the default date in certain cases
        if 'date' in ud.parm:
            ud.date = ud.parm['date']
        elif ud.tag:
            ud.date = ""

        norecurse = ''
        if 'norecurse' in ud.parm:
            norecurse = '_norecurse'

        fullpath = ''
        if 'fullpath' in ud.parm:
            fullpath = '_fullpath'

        ud.localfile = data.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

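    # Illustrative sketch (not part of the original file): the parameters
    # handled above come straight from SRC_URI, e.g. (server made up)
    #
    #   SRC_URI = "cvs://anoncvs@cvs.example.org/cvsroot;module=app;tag=REL_1_0"
    #
    # which names the tarball app_cvs.example.org_REL_1_0_.tar.gz in DL_DIR.
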
    def forcefetch(self, url, ud, d):
        if (ud.date == "now"):
            return True
        return False

    def go(self, loc, ud, d):

        method = "pserver"
        if "method" in ud.parm:
            method = ud.parm["method"]

        localdir = ud.module
        if "localdir" in ud.parm:
            localdir = ud.parm["localdir"]

        cvs_port = ""
        if "port" in ud.parm:
            cvs_port = ud.parm["port"]

        cvs_rsh = None
        if method == "ext":
            if "rsh" in ud.parm:
                cvs_rsh = ud.parm["rsh"]

        if method == "dir":
            cvsroot = ud.path
        else:
            cvsroot = ":" + method
            cvsproxyhost = data.getVar('CVS_PROXY_HOST', d, True)
            if cvsproxyhost:
                cvsroot += ";proxy=" + cvsproxyhost
            cvsproxyport = data.getVar('CVS_PROXY_PORT', d, True)
            if cvsproxyport:
                cvsroot += ";proxyport=" + cvsproxyport
            cvsroot += ":" + ud.user
            if ud.pswd:
                cvsroot += ":" + ud.pswd
            cvsroot += "@" + ud.host + ":" + cvs_port + ud.path

        options = []
        if 'norecurse' in ud.parm:
            options.append("-l")
        if ud.date:
            # treat YYYYMMDDHHMM specially for CVS
            if len(ud.date) == 12:
                options.append("-D \"%s %s:%s UTC\"" % (ud.date[0:8], ud.date[8:10], ud.date[10:12]))
            else:
                options.append("-D \"%s UTC\"" % ud.date)
        if ud.tag:
            options.append("-r %s" % ud.tag)

        localdata = data.createCopy(d)
        data.setVar('OVERRIDES', "cvs:%s" % data.getVar('OVERRIDES', localdata), localdata)
        data.update_data(localdata)

        data.setVar('CVSROOT', cvsroot, localdata)
        data.setVar('CVSCOOPTS', " ".join(options), localdata)
        data.setVar('CVSMODULE', ud.module, localdata)
        cvscmd = data.getVar('FETCHCOMMAND', localdata, 1)
        cvsupdatecmd = data.getVar('UPDATECOMMAND', localdata, 1)

        if cvs_rsh:
            cvscmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvscmd)
            cvsupdatecmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvsupdatecmd)

        # create module directory
        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory")
        pkg = data.expand('${PN}', d)
        pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)
        moddir = os.path.join(pkgdir, localdir)
        if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
            bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
            # update sources there
            os.chdir(moddir)
            myret = os.system(cvsupdatecmd)
        else:
            bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
            # check out sources there
            bb.mkdirhier(pkgdir)
            os.chdir(pkgdir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % cvscmd)
            myret = os.system(cvscmd)

        if myret != 0 or not os.access(moddir, os.R_OK):
            try:
                os.rmdir(moddir)
            except OSError:
                pass
            raise FetchError(ud.module)

        # tar them up to a defined filename
        if 'fullpath' in ud.parm:
            os.chdir(pkgdir)
            myret = os.system("tar -czf %s %s" % (ud.localpath, localdir))
        else:
            os.chdir(moddir)
            os.chdir('..')
            myret = os.system("tar -czf %s %s" % (ud.localpath, os.path.basename(moddir)))

        if myret != 0:
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise FetchError(ud.module)
217  bitbake/lib/bb/fetch/git.py  Normal file
@@ -0,0 +1,217 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' git implementation

"""

# Copyright (C) 2005 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import runfetchcmd

class Git(Fetch):
    """Class to fetch a module or modules from git repositories"""
    def init(self, d):
        #
        # Only enable the clone-based sortable buildindex if the key is set
        #
        if bb.data.getVar("BB_GIT_CLONE_FOR_SRCREV", d, True):
            self._sortable_buildindex = self._sortable_buildindex_disabled
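
    # Illustrative sketch (not part of the original file): setting
    #
    #   BB_GIT_CLONE_FOR_SRCREV = "1"
    #
    # in the configuration swaps in _sortable_buildindex_disabled() below, so
    # sortable_revision() counts commits with "git rev-list" in a local clone
    # instead of using the incrementing persistent localcount.
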
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with git.
        """
        return ud.type in ['git']

    def localpath(self, url, ud, d):

        if 'protocol' in ud.parm:
            ud.proto = ud.parm['protocol']
        elif not ud.host:
            ud.proto = 'file'
        else:
            ud.proto = "rsync"

        ud.branch = ud.parm.get("branch", "master")

        gitsrcname = '%s%s' % (ud.host, ud.path.replace('/', '.'))
        ud.mirrortarball = 'git_%s.tar.gz' % (gitsrcname)
        ud.clonedir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)

        tag = Fetch.srcrev_internal_helper(ud, d)
        if tag is True:
            ud.tag = self.latest_revision(url, ud, d)
        elif tag:
            ud.tag = tag

        if not ud.tag or ud.tag == "master":
            ud.tag = self.latest_revision(url, ud, d)

        subdir = ud.parm.get("subpath", "")
        if subdir != "":
            if subdir.endswith("/"):
                subdir = subdir[:-1]
            subdirpath = os.path.join(ud.path, subdir)
        else:
            subdirpath = ud.path

        if 'fullclone' in ud.parm:
            ud.localfile = ud.mirrortarball
        else:
            ud.localfile = data.expand('git_%s%s_%s.tar.gz' % (ud.host, subdirpath.replace('/', '.'), ud.tag), d)

        ud.basecmd = data.getVar("FETCHCMD_git", d, True) or "git"

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def go(self, loc, ud, d):
        """Fetch url"""

        if ud.user:
            username = ud.user + '@'
        else:
            username = ""

        repofile = os.path.join(data.getVar("DL_DIR", d, 1), ud.mirrortarball)

        coname = '%s' % (ud.tag)
        codir = os.path.join(ud.clonedir, coname)

        if not os.path.exists(ud.clonedir):
            try:
                Fetch.try_mirrors(ud.mirrortarball)
                bb.mkdirhier(ud.clonedir)
                os.chdir(ud.clonedir)
                runfetchcmd("tar -xzf %s" % (repofile), d)
            except:
                runfetchcmd("%s clone -n %s://%s%s%s %s" % (ud.basecmd, ud.proto, username, ud.host, ud.path, ud.clonedir), d)

        os.chdir(ud.clonedir)
        # Remove all but the .git directory
        if not self._contains_ref(ud.tag, d):
            runfetchcmd("rm * -Rf", d)
            runfetchcmd("%s fetch %s://%s%s%s %s" % (ud.basecmd, ud.proto, username, ud.host, ud.path, ud.branch), d)
            runfetchcmd("%s fetch --tags %s://%s%s%s" % (ud.basecmd, ud.proto, username, ud.host, ud.path), d)
            runfetchcmd("%s prune-packed" % ud.basecmd, d)
            runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)

        os.chdir(ud.clonedir)
        mirror_tarballs = data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True)
        if mirror_tarballs != "0" or 'fullclone' in ud.parm:
            bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git repository")
            runfetchcmd("tar -czf %s %s" % (repofile, os.path.join(".", ".git", "*")), d)

        if 'fullclone' in ud.parm:
            return

        if os.path.exists(codir):
            bb.utils.prunedir(codir)

        subdir = ud.parm.get("subpath", "")
        if subdir != "":
            if subdir.endswith("/"):
                subdirbase = os.path.basename(subdir[:-1])
            else:
                subdirbase = os.path.basename(subdir)
        else:
            subdirbase = ""

        if subdir != "":
            readpathspec = ":%s" % (subdir)
            codir = os.path.join(codir, "git")
            coprefix = os.path.join(codir, subdirbase, "")
        else:
            readpathspec = ""
            coprefix = os.path.join(codir, "git", "")

        bb.mkdirhier(codir)
        os.chdir(ud.clonedir)
        runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.tag, readpathspec), d)
        runfetchcmd("%s checkout-index -q -f --prefix=%s -a" % (ud.basecmd, coprefix), d)

        os.chdir(codir)
        bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git checkout")
        runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.join(".", "*")), d)

        os.chdir(ud.clonedir)
        bb.utils.prunedir(codir)

    def supports_srcrev(self):
        return True

    def _contains_ref(self, tag, d):
        basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
        output = runfetchcmd("%s log --pretty=oneline -n 1 %s -- 2> /dev/null | wc -l" % (basecmd, tag), d, quiet=True)
        return output.split()[0] != "0"

    def _revision_key(self, url, ud, d):
        """
        Return a unique key for the url
        """
        return "git:" + ud.host + ud.path.replace('/', '.')

    def _latest_revision(self, url, ud, d):
        """
        Compute the HEAD revision for the url
        """
        if ud.user:
            username = ud.user + '@'
        else:
            username = ""

        basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
        cmd = "%s ls-remote %s://%s%s%s %s" % (basecmd, ud.proto, username, ud.host, ud.path, ud.branch)
        output = runfetchcmd(cmd, d, True)
        if not output:
            raise bb.fetch.FetchError("Fetch command %s gave empty output\n" % (cmd))
        return output.split()[0]

    def _build_revision(self, url, ud, d):
        return ud.tag

    def _sortable_buildindex_disabled(self, url, ud, d, rev):
        """
        Return a suitable buildindex for the revision specified. This is done by counting revisions
        using "git rev-list" which may or may not work in different circumstances.
        """

        cwd = os.getcwd()

        # Check if we have the rev already

        if not os.path.exists(ud.clonedir):
            bb.msg.debug(1, bb.msg.domain.Fetcher, "GIT repository for %s not present, fetching" % url)
            self.go(None, ud, d)
            if not os.path.exists(ud.clonedir):
                bb.msg.error(bb.msg.domain.Fetcher, "GIT repository for %s doesn't exist in %s, cannot get sortable buildnumber, using old value" % (url, ud.clonedir))
                return None

        os.chdir(ud.clonedir)
        if not self._contains_ref(rev, d):
            self.go(None, ud, d)

        output = runfetchcmd("%s rev-list %s -- 2> /dev/null | wc -l" % (ud.basecmd, rev), d, quiet=True)
        os.chdir(cwd)

        buildindex = "%s" % output.split()[0]
        bb.msg.debug(1, bb.msg.domain.Fetcher, "GIT repository for %s in %s is returning %s revisions in rev-list before %s" % (url, ud.clonedir, buildindex, rev))
        return buildindex

173  bitbake/lib/bb/fetch/hg.py  Normal file
@@ -0,0 +1,173 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for mercurial DRCS (hg).

"""

# Copyright (C) 2003, 2004  Chris Larson
# Copyright (C) 2004        Marcin Juszkiewicz
# Copyright (C) 2007        Robert Schuster
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import sys
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd

class Hg(Fetch):
    """Class to fetch from mercurial repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with mercurial.
        """
        return ud.type in ['hg']

    def localpath(self, url, ud, d):
        if not "module" in ud.parm:
            raise MissingParameterError("hg method needs a 'module' parameter")

        ud.module = ud.parm["module"]

        # Create paths to mercurial checkouts
        relpath = ud.path
        if relpath.startswith('/'):
            # Remove leading slash as os.path.join can't cope
            relpath = relpath[1:]
        ud.pkgdir = os.path.join(data.expand('${HGDIR}', d), ud.host, relpath)
        ud.moddir = os.path.join(ud.pkgdir, ud.module)

        if 'rev' in ud.parm:
            ud.revision = ud.parm['rev']
        else:
            tag = Fetch.srcrev_internal_helper(ud, d)
            if tag is True:
                ud.revision = self.latest_revision(url, ud, d)
            elif tag:
                ud.revision = tag
            else:
                ud.revision = self.latest_revision(url, ud, d)

        ud.localfile = data.expand('%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

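    # Illustrative sketch (not part of the original file): a matching SRC_URI
    # entry looks like (host and module made up)
    #
    #   SRC_URI = "hg://hg.example.com/repos;module=app;rev=tip"
    #
    # with the working copy kept under ${HGDIR}/hg.example.com/repos/app.
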
    def _buildhgcommand(self, ud, d, command):
        """
        Build up an hg commandline based on ud
        command is "fetch", "update", "info"
        """

        basecmd = data.expand('${FETCHCMD_hg}', d)

        proto = "http"
        if "proto" in ud.parm:
            proto = ud.parm["proto"]

        host = ud.host
        if proto == "file":
            host = "/"
            ud.host = "localhost"

        if not ud.user:
            hgroot = host + ud.path
        else:
            hgroot = ud.user + "@" + host + ud.path

        if command == "info":
            return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, ud.module)

        options = []
        if ud.revision:
            options.append("-r %s" % ud.revision)

        if command == "fetch":
            cmd = "%s clone %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
        elif command == "pull":
            # do not pass options list; limiting pull to rev causes the local
            # repo not to contain it and immediately following "update" command
            # will crash
            cmd = "%s pull" % (basecmd)
        elif command == "update":
            cmd = "%s update -C %s" % (basecmd, " ".join(options))
        else:
            raise FetchError("Invalid hg command %s" % command)

        return cmd

    def go(self, loc, ud, d):
        """Fetch url"""

        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")

        if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
            updatecmd = self._buildhgcommand(ud, d, "pull")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
            # update sources there
            os.chdir(ud.moddir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % updatecmd)
            runfetchcmd(updatecmd, d)

        else:
            fetchcmd = self._buildhgcommand(ud, d, "fetch")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
            # check out sources there
            bb.mkdirhier(ud.pkgdir)
            os.chdir(ud.pkgdir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % fetchcmd)
            runfetchcmd(fetchcmd, d)

        # Even when we clone (fetch), we still need to update as hg's clone
        # won't checkout the specified revision if it's on a branch
        updatecmd = self._buildhgcommand(ud, d, "update")
        bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % updatecmd)
        runfetchcmd(updatecmd, d)

        os.chdir(ud.pkgdir)
        try:
            runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
        except:
            t, v, tb = sys.exc_info()
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise t, v, tb

    def supports_srcrev(self):
        return True

    def _latest_revision(self, url, ud, d):
        """
        Compute tip revision for the url
        """
        output = runfetchcmd(self._buildhgcommand(ud, d, "info"), d)
        return output.strip()

    def _build_revision(self, url, ud, d):
        return ud.revision

    def _revision_key(self, url, ud, d):
        """
        Return a unique key for the url
        """
        return "hg:" + ud.moddir

72  bitbake/lib/bb/fetch/local.py  Normal file
@@ -0,0 +1,72 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

Classes for obtaining upstream sources for the
BitBake build tools.

"""

# Copyright (C) 2003, 2004  Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import bb
from bb import data
from bb.fetch import Fetch

class Local(Fetch):
    def supports(self, url, urldata, d):
        """
        Check to see if a given url represents a local fetch.
        """
        return urldata.type in ['file']

    def localpath(self, url, urldata, d):
        """
        Return the local filename of a given url assuming a successful fetch.
        """
        path = url.split("://")[1]
        path = path.split(";")[0]
        newpath = path
        if path[0] != "/":
            filespath = data.getVar('FILESPATH', d, 1)
            if filespath:
                newpath = bb.which(filespath, path)
            if not newpath:
                filesdir = data.getVar('FILESDIR', d, 1)
                if filesdir:
                    newpath = os.path.join(filesdir, path)
        # We don't set localfile as for this fetcher the file is already local!
        return newpath

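    # Illustrative sketch (not part of the original file): a relative
    # file:// URL is resolved against the recipe's search path, e.g.
    #
    #   SRC_URI = "file://defconfig"
    #
    # walks the colon-separated FILESPATH directories via bb.which() and
    # returns the first existing .../defconfig, falling back to FILESDIR.
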
    def go(self, url, urldata, d):
        """Fetch urls (no-op for Local method)"""
        # no need to fetch local files, we'll deal with them in place.
        return 1

    def checkstatus(self, url, urldata, d):
        """
        Check the status of the url
        """
        if urldata.localpath.find("*") != -1:
            bb.msg.note(1, bb.msg.domain.Fetcher, "URL %s looks like a glob and was therefore not checked." % url)
            return True
        if os.path.exists(urldata.localpath):
            return True
        return False
150  bitbake/lib/bb/fetch/osc.py  Normal file
@@ -0,0 +1,150 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
Bitbake "Fetch" implementation for osc (Opensuse build service client).
Based on the svn "Fetch" implementation.

"""

import os
import sys
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd

class Osc(Fetch):
    """Class to fetch a module or modules from Opensuse build server
    repositories."""

    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with osc.
        """
        return ud.type in ['osc']

    def localpath(self, url, ud, d):
        if not "module" in ud.parm:
            raise MissingParameterError("osc method needs a 'module' parameter.")

        ud.module = ud.parm["module"]

        # Create paths to osc checkouts
        relpath = ud.path
        if relpath.startswith('/'):
            # Remove leading slash as os.path.join can't cope
            relpath = relpath[1:]
        ud.pkgdir = os.path.join(data.expand('${OSCDIR}', d), ud.host)
        ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)

        if 'rev' in ud.parm:
            ud.revision = ud.parm['rev']
        else:
            pv = data.getVar("PV", d, 0)
            rev = Fetch.srcrev_internal_helper(ud, d)
            if rev and rev != True:
                ud.revision = rev
            else:
                ud.revision = ""

        ud.localfile = data.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

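    # Illustrative sketch (not part of the original file): an osc URL names
    # the API host, project path and package module, e.g. (all made up)
    #
    #   SRC_URI = "osc://api.example.org/Project:Name;module=package;rev=5"
    #
    # checked out under ${OSCDIR}/api.example.org/Project:Name/package.
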
    def _buildosccommand(self, ud, d, command):
        """
        Build up an osc commandline based on ud
        command is "fetch", "update", "info"
        """

        basecmd = data.expand('${FETCHCMD_osc}', d)

        proto = "osc"
        if "proto" in ud.parm:
            proto = ud.parm["proto"]

        options = []

        config = "-c %s" % self.generate_config(ud, d)

        if ud.revision:
            options.append("-r %s" % ud.revision)

        coroot = ud.path
        if coroot.startswith('/'):
            # Remove leading slash as os.path.join can't cope
            coroot = coroot[1:]

        if command == "fetch":
            osccmd = "%s %s co %s/%s %s" % (basecmd, config, coroot, ud.module, " ".join(options))
        elif command == "update":
            osccmd = "%s %s up %s" % (basecmd, config, " ".join(options))
        else:
            raise FetchError("Invalid osc command %s" % command)

        return osccmd

    def go(self, loc, ud, d):
        """
        Fetch url
        """

        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")

        if os.access(os.path.join(data.expand('${OSCDIR}', d), ud.path, ud.module), os.R_OK):
            oscupdatecmd = self._buildosccommand(ud, d, "update")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
            # update sources there
            os.chdir(ud.moddir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % oscupdatecmd)
            runfetchcmd(oscupdatecmd, d)
        else:
            oscfetchcmd = self._buildosccommand(ud, d, "fetch")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
            # check out sources there
            bb.mkdirhier(ud.pkgdir)
            os.chdir(ud.pkgdir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % oscfetchcmd)
            runfetchcmd(oscfetchcmd, d)

        os.chdir(os.path.join(ud.pkgdir + ud.path))
        # tar them up to a defined filename
        try:
            runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
        except:
            t, v, tb = sys.exc_info()
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise t, v, tb

    def supports_srcrev(self):
        return False

    def generate_config(self, ud, d):
        """
        Generate a .oscrc to be used for this run.
        """

        config_path = "%s/oscrc" % data.expand('${OSCDIR}', d)
        if (os.path.exists(config_path)):
            os.remove(config_path)

        f = open(config_path, 'w')
        f.write("[general]\n")
        f.write("apisrv = %s\n" % ud.host)
        f.write("scheme = http\n")
        f.write("su-wrapper = su -c\n")
        f.write("build-root = %s\n" % data.expand('${WORKDIR}', d))
        f.write("urllist = http://moblin-obs.jf.intel.com:8888/build/%(project)s/%(repository)s/%(buildarch)s/:full/%(name)s.rpm\n")
        f.write("extra-pkgs = gzip\n")
        f.write("\n")
        f.write("[%s]\n" % ud.host)
        f.write("user = %s\n" % ud.parm["user"])
        f.write("pass = %s\n" % ud.parm["pswd"])
        f.close()

        return config_path
209  bitbake/lib/bb/fetch/perforce.py  Normal file
@@ -0,0 +1,209 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

Classes for obtaining upstream sources for the
BitBake build tools.

"""

# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError

class Perforce(Fetch):
    def supports(self, url, ud, d):
        return ud.type in ['p4']

    def doparse(url, d):
        parm = {}
        path = url.split("://")[1]
        delim = path.find("@")
        if delim != -1:
            (user, pswd, host, port) = path.split('@')[0].split(":")
            path = path.split('@')[1]
        else:
            (host, port) = data.getVar('P4PORT', d).split(':')
            user = ""
            pswd = ""

        if path.find(";") != -1:
            keys = []
            values = []
            plist = path.split(';')
            for item in plist:
                if item.count('='):
                    (key, value) = item.split('=')
                    keys.append(key)
                    values.append(value)

            parm = dict(zip(keys, values))
        path = "//" + path.split(';')[0]
        host += ":%s" % (port)
        parm["cset"] = Perforce.getcset(d, path, host, user, pswd, parm)

        return host, path, user, pswd, parm
    doparse = staticmethod(doparse)

    def getcset(d, depot, host, user, pswd, parm):
        p4opt = ""
        if "cset" in parm:
            return parm["cset"]
        if user:
            p4opt += " -u %s" % (user)
        if pswd:
            p4opt += " -P %s" % (pswd)
        if host:
            p4opt += " -p %s" % (host)

        p4date = data.getVar("P4DATE", d, 1)
        if "revision" in parm:
            depot += "#%s" % (parm["revision"])
        elif "label" in parm:
            depot += "@%s" % (parm["label"])
        elif p4date:
            depot += "@%s" % (p4date)

        p4cmd = data.getVar('FETCHCOMMAND_p4', d, 1)
        bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
        p4file = os.popen("%s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
        cset = p4file.readline().strip()
        bb.msg.debug(1, bb.msg.domain.Fetcher, "READ %s" % (cset))
        if not cset:
            return -1

        return cset.split(' ')[1]
    getcset = staticmethod(getcset)

    def localpath(self, url, ud, d):

        (host, path, user, pswd, parm) = Perforce.doparse(url, d)

        # If a label is specified, we use that as our filename

        if "label" in parm:
            ud.localfile = "%s.tar.gz" % (parm["label"])
            return os.path.join(data.getVar("DL_DIR", d, 1), ud.localfile)

        base = path
        which = path.find('/...')
        if which != -1:
            base = path[:which]

        if base[0] == "/":
            base = base[1:]

        cset = Perforce.getcset(d, path, host, user, pswd, parm)

        ud.localfile = data.expand('%s+%s+%s.tar.gz' % (host, base.replace('/', '.'), cset), d)

        return os.path.join(data.getVar("DL_DIR", d, 1), ud.localfile)

    def go(self, loc, ud, d):
        """
        Fetch urls
        """

        (host, depot, user, pswd, parm) = Perforce.doparse(loc, d)

        if depot.find('/...') != -1:
            path = depot[:depot.find('/...')]
        else:
            path = depot

        if "module" in parm:
            module = parm["module"]
        else:
            module = os.path.basename(path)

        localdata = data.createCopy(d)
        data.setVar('OVERRIDES', "p4:%s" % data.getVar('OVERRIDES', localdata), localdata)
        data.update_data(localdata)

        # Get the p4 command
        p4opt = ""
        if user:
            p4opt += " -u %s" % (user)

        if pswd:
            p4opt += " -P %s" % (pswd)

        if host:
            p4opt += " -p %s" % (host)

        p4cmd = data.getVar('FETCHCOMMAND', localdata, 1)

        # create temp directory
        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: creating temporary directory")
        bb.mkdirhier(data.expand('${WORKDIR}', localdata))
        data.setVar('TMPBASE', data.expand('${WORKDIR}/oep4.XXXXXX', localdata), localdata)
        tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, 1) or "false")
        tmpfile = tmppipe.readline().strip()
        if not tmpfile:
            bb.error("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.")
            raise FetchError(module)

        if "label" in parm:
            depot = "%s@%s" % (depot, parm["label"])
        else:
            cset = Perforce.getcset(d, depot, host, user, pswd, parm)
            depot = "%s@%s" % (depot, cset)

        os.chdir(tmpfile)
        bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
        bb.msg.note(1, bb.msg.domain.Fetcher, "%s%s files %s" % (p4cmd, p4opt, depot))
        p4file = os.popen("%s%s files %s" % (p4cmd, p4opt, depot))

        if not p4file:
            bb.error("Fetch: unable to get the P4 files from %s" % (depot))
            raise FetchError(module)

        count = 0

        for file in p4file:
            list = file.split()

            if list[2] == "delete":
                continue

            dest = list[0][len(path)+1:]
            where = dest.find("#")

            os.system("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module, dest[:where], list[0]))
            count = count + 1

        if count == 0:
            bb.error("Fetch: No files gathered from the P4 fetch")
            raise FetchError(module)

        myret = os.system("tar -czf %s %s" % (ud.localpath, module))
        if myret != 0:
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise FetchError(module)
        # cleanup
        os.system('rm -rf %s' % tmpfile)
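As a sketch only, with an invented server, credentials and depot path, doparse() above accepts p4 URLs of this shape, or falls back to P4PORT when no credentials are embedded:

SRC_URI = "p4://jdoe:secret:p4.example.com:1666@depot/project/...;label=REL_1_0"

When a label parameter is present the label name becomes the download filename; otherwise the changeset number queried via getcset() is used.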
106 bitbake/lib/bb/fetch/repo.py (new file)
@@ -0,0 +1,106 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake "Fetch" repo (git) implementation

"""

# Copyright (C) 2009 Tom Rini <trini@embeddedalley.com>
#
# Based on git.py which is:
# Copyright (C) 2005 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os, re
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import runfetchcmd

class Repo(Fetch):
    """Class to fetch a module or modules from repo (git) repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with repo.
        """
        return ud.type in ["repo"]

    def localpath(self, url, ud, d):
        """
        We don't care about the git rev of the manifests repository, but
        we do care about the manifest to use. The default is "default".
        We also care about the branch or tag to be used. The default is
        "master".
        """

        if "protocol" in ud.parm:
            ud.proto = ud.parm["protocol"]
        else:
            ud.proto = "git"

        if "branch" in ud.parm:
            ud.branch = ud.parm["branch"]
        else:
            ud.branch = "master"

        if "manifest" in ud.parm:
            manifest = ud.parm["manifest"]
            if manifest.endswith(".xml"):
                ud.manifest = manifest
            else:
                ud.manifest = manifest + ".xml"
        else:
            ud.manifest = "default.xml"

        ud.localfile = data.expand("repo_%s%s_%s_%s.tar.gz" % (ud.host, ud.path.replace("/", "."), ud.manifest, ud.branch), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def go(self, loc, ud, d):
        """Fetch url"""

        if os.access(os.path.join(data.getVar("DL_DIR", d, True), ud.localfile), os.R_OK):
            bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists (or was stashed). Skipping repo init / sync." % ud.localpath)
            return

        gitsrcname = "%s%s" % (ud.host, ud.path.replace("/", "."))
        repodir = data.getVar("REPODIR", d, True) or os.path.join(data.getVar("DL_DIR", d, True), "repo")
        codir = os.path.join(repodir, gitsrcname, ud.manifest)

        if ud.user:
            username = ud.user + "@"
        else:
            username = ""

        bb.mkdirhier(os.path.join(codir, "repo"))
        os.chdir(os.path.join(codir, "repo"))
        if not os.path.exists(os.path.join(codir, "repo", ".repo")):
            runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d)

        runfetchcmd("repo sync", d)
        os.chdir(codir)

        # Create a cache
        runfetchcmd("tar --exclude=.repo --exclude=.git -czf %s %s" % (ud.localpath, os.path.join(".", "*")), d)

    def supports_srcrev(self):
        return False

    def _build_revision(self, url, ud, d):
        return ud.manifest

    def _want_sortable_revision(self, url, ud, d):
        return False
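An illustrative URL shape for the repo fetcher above, with an invented manifest server (not a value from this changeset):

SRC_URI = "repo://source.example.com/tools/manifest;protocol=http;branch=main;manifest=minimal"

The manifest parameter may be given with or without the .xml suffix; localpath() appends it when missing.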
@@ -38,10 +38,8 @@ IETF secsh internet draft:

 import re, os
 from bb import data
-from bb.fetch2 import FetchMethod
-from bb.fetch2 import FetchError
-from bb.fetch2 import logger
-from bb.fetch2 import runfetchcmd
+from bb.fetch import Fetch
+from bb.fetch import FetchError


 __pattern__ = re.compile(r'''
@@ -63,21 +61,21 @@ __pattern__ = re.compile(r'''
     $
     ''', re.VERBOSE)

-class SSH(FetchMethod):
+class SSH(Fetch):
     '''Class to fetch a module or modules via Secure Shell'''

     def supports(self, url, urldata, d):
         return __pattern__.match(url) != None

     def localpath(self, url, urldata, d):
-        m = __pattern__.match(urldata.url)
+        m = __pattern__.match(url)
         path = m.group('path')
         host = m.group('host')
         lpath = os.path.join(data.getVar('DL_DIR', d, True), host, os.path.basename(path))
         return lpath

-    def download(self, url, urldata, d):
-        dldir = data.getVar('DL_DIR', d, True)
+    def go(self, url, urldata, d):
+        dldir = data.getVar('DL_DIR', d, 1)

         m = __pattern__.match(url)
         path = m.group('path')
@@ -114,7 +112,7 @@ class SSH(FetchMethod):
             commands.mkarg(ldir)
         )

-        bb.fetch2.check_network_access(d, cmd, urldata.url)
-
-        runfetchcmd(cmd, d)
-
+        (exitstatus, output) = commands.getstatusoutput(cmd)
+        if exitstatus != 0:
+            print output
+            raise FetchError('Unable to fetch %s' % url)
106 bitbake/lib/bb/fetch/svk.py (new file)
@@ -0,0 +1,106 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

This implementation is for svk. It is based on the svn implementation

"""

# Copyright (C) 2006 Holger Hans Peter Freyther
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError

class Svk(Fetch):
    """Class to fetch a module or modules from svk repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with svk.
        """
        return ud.type in ['svk']

    def localpath(self, url, ud, d):
        if not "module" in ud.parm:
            raise MissingParameterError("svk method needs a 'module' parameter")
        else:
            ud.module = ud.parm["module"]

        ud.revision = ""
        if 'rev' in ud.parm:
            ud.revision = ud.parm['rev']

        ud.localfile = data.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ud.date), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def forcefetch(self, url, ud, d):
        if ud.date == "now":
            return True
        return False

    def go(self, loc, ud, d):
        """Fetch urls"""

        svkroot = ud.host + ud.path

        svkcmd = "svk co -r {%s} %s/%s" % (ud.date, svkroot, ud.module)

        if ud.revision:
            svkcmd = "svk co -r %s %s/%s" % (ud.revision, svkroot, ud.module)

        # create temp directory
        localdata = data.createCopy(d)
        data.update_data(localdata)
        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: creating temporary directory")
        bb.mkdirhier(data.expand('${WORKDIR}', localdata))
        data.setVar('TMPBASE', data.expand('${WORKDIR}/oesvk.XXXXXX', localdata), localdata)
        tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, 1) or "false")
        tmpfile = tmppipe.readline().strip()
        if not tmpfile:
            bb.msg.error(bb.msg.domain.Fetcher, "Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.")
            raise FetchError(ud.module)

        # check out sources there
        os.chdir(tmpfile)
        bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
        bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svkcmd)
        myret = os.system(svkcmd)
        if myret != 0:
            try:
                os.rmdir(tmpfile)
            except OSError:
                pass
            raise FetchError(ud.module)

        os.chdir(os.path.join(tmpfile, os.path.dirname(ud.module)))
        # tar them up to a defined filename
        myret = os.system("tar -czf %s %s" % (ud.localpath, os.path.basename(ud.module)))
        if myret != 0:
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise FetchError(ud.module)
        # cleanup
        os.system('rm -rf %s' % tmpfile)
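A hypothetical URL for the svk fetcher above (host and module are placeholders); the module parameter is mandatory, as localpath() enforces:

SRC_URI = "svk://svk.example.org/local;module=trunk/mypkg;rev=42"

Without a rev parameter the checkout is pinned by date instead, via the "svk co -r {date}" form.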
201 bitbake/lib/bb/fetch/svn.py (new file)
@@ -0,0 +1,201 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for svn.

"""

# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2004 Marcin Juszkiewicz
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import sys
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd

class Svn(Fetch):
    """Class to fetch a module or modules from svn repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with svn.
        """
        return ud.type in ['svn']

    def localpath(self, url, ud, d):
        if not "module" in ud.parm:
            raise MissingParameterError("svn method needs a 'module' parameter")

        ud.module = ud.parm["module"]

        # Create paths to svn checkouts
        relpath = ud.path
        if relpath.startswith('/'):
            # Remove leading slash as os.path.join can't cope
            relpath = relpath[1:]
        ud.pkgdir = os.path.join(data.expand('${SVNDIR}', d), ud.host, relpath)
        ud.moddir = os.path.join(ud.pkgdir, ud.module)

        if 'rev' in ud.parm:
            ud.date = ""
            ud.revision = ud.parm['rev']
        elif 'date' in ud.parm:
            ud.date = ud.parm['date']
            ud.revision = ""
        else:
            #
            # ***Nasty hack***
            # If DATE in unexpanded PV, use ud.date (which is set from SRCDATE)
            # Should warn people to switch to SRCREV here
            #
            pv = data.getVar("PV", d, 0)
            if "DATE" in pv:
                ud.revision = ""
            else:
                rev = Fetch.srcrev_internal_helper(ud, d)
                if rev is True:
                    ud.revision = self.latest_revision(url, ud, d)
                    ud.date = ""
                elif rev:
                    ud.revision = rev
                    ud.date = ""
                else:
                    ud.revision = ""

        ud.localfile = data.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ud.date), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def _buildsvncommand(self, ud, d, command):
        """
        Build up an svn commandline based on ud
        command is "fetch", "update", "info"
        """

        basecmd = data.expand('${FETCHCMD_svn}', d)

        proto = "svn"
        if "proto" in ud.parm:
            proto = ud.parm["proto"]

        svn_rsh = None
        if proto == "svn+ssh" and "rsh" in ud.parm:
            svn_rsh = ud.parm["rsh"]

        svnroot = ud.host + ud.path

        # either use the revision, or SRCDATE in braces,
        options = []

        if ud.user:
            options.append("--username %s" % ud.user)

        if ud.pswd:
            options.append("--password %s" % ud.pswd)

        if command == "info":
            svncmd = "%s info %s %s://%s/%s/" % (basecmd, " ".join(options), proto, svnroot, ud.module)
        else:
            suffix = ""
            if ud.revision:
                options.append("-r %s" % ud.revision)
                suffix = "@%s" % (ud.revision)
            elif ud.date:
                options.append("-r {%s}" % ud.date)

            if command == "fetch":
                svncmd = "%s co %s %s://%s/%s%s %s" % (basecmd, " ".join(options), proto, svnroot, ud.module, suffix, ud.module)
            elif command == "update":
                svncmd = "%s update %s" % (basecmd, " ".join(options))
            else:
                raise FetchError("Invalid svn command %s" % command)

        if svn_rsh:
            svncmd = "svn_RSH=\"%s\" %s" % (svn_rsh, svncmd)

        return svncmd

    def go(self, loc, ud, d):
        """Fetch url"""

        bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")

        if os.access(os.path.join(ud.moddir, '.svn'), os.R_OK):
            svnupdatecmd = self._buildsvncommand(ud, d, "update")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
            # update sources there
            os.chdir(ud.moddir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svnupdatecmd)
            runfetchcmd(svnupdatecmd, d)
        else:
            svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
            bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
            # check out sources there
            bb.mkdirhier(ud.pkgdir)
            os.chdir(ud.pkgdir)
            bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svnfetchcmd)
            runfetchcmd(svnfetchcmd, d)

        os.chdir(ud.pkgdir)
        # tar them up to a defined filename
        try:
            runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
        except:
            t, v, tb = sys.exc_info()
            try:
                os.unlink(ud.localpath)
            except OSError:
                pass
            raise t, v, tb

    def supports_srcrev(self):
        return True

    def _revision_key(self, url, ud, d):
        """
        Return a unique key for the url
        """
        return "svn:" + ud.moddir

    def _latest_revision(self, url, ud, d):
        """
        Return the latest upstream revision number
        """
        bb.msg.debug(2, bb.msg.domain.Fetcher, "SVN fetcher hitting network for %s" % url)

        output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "info"), d, True)

        revision = None
        for line in output.splitlines():
            if "Last Changed Rev" in line:
                revision = line.split(":")[1].strip()

        return revision

    def _sortable_revision(self, url, ud, d):
        """
        Return a sortable revision number which in our case is the revision number
        """

        return self._build_revision(url, ud, d)

    def _build_revision(self, url, ud, d):
        return ud.revision
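An illustrative URL shape for the svn fetcher above (server and paths invented); module is required, and rev takes precedence over date, as localpath() shows:

SRC_URI = "svn://svn.example.org/repos/project;module=trunk;proto=http;rev=1024"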
114 bitbake/lib/bb/fetch/wget.py (new file)
@@ -0,0 +1,114 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

Classes for obtaining upstream sources for the
BitBake build tools.

"""

# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError

class Wget(Fetch):
    """Class to fetch urls via 'wget'"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with wget.
        """
        return ud.type in ['http', 'https', 'ftp']

    def localpath(self, url, ud, d):

        url = bb.encodeurl([ud.type, ud.host, ud.path, ud.user, ud.pswd, {}])
        ud.basename = os.path.basename(ud.path)
        ud.localfile = data.expand(os.path.basename(url), d)

        return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)

    def go(self, uri, ud, d, checkonly = False):
        """Fetch urls"""

        def fetch_uri(uri, ud, d):
            if checkonly:
                fetchcmd = data.getVar("CHECKCOMMAND", d, 1)
            elif os.path.exists(ud.localpath):
                # file exists, but we didn't complete it.. trying again..
                fetchcmd = data.getVar("RESUMECOMMAND", d, 1)
            else:
                fetchcmd = data.getVar("FETCHCOMMAND", d, 1)

            uri = uri.split(";")[0]
            uri_decoded = list(bb.decodeurl(uri))
            uri_type = uri_decoded[0]
            uri_host = uri_decoded[1]

            bb.msg.note(1, bb.msg.domain.Fetcher, "fetch " + uri)
            fetchcmd = fetchcmd.replace("${URI}", uri.split(";")[0])
            fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
            httpproxy = None
            ftpproxy = None
            if uri_type == 'http':
                httpproxy = data.getVar("HTTP_PROXY", d, True)
                httpproxy_ignore = (data.getVar("HTTP_PROXY_IGNORE", d, True) or "").split()
                for p in httpproxy_ignore:
                    if uri_host.endswith(p):
                        httpproxy = None
                        break
            if uri_type == 'ftp':
                ftpproxy = data.getVar("FTP_PROXY", d, True)
                ftpproxy_ignore = (data.getVar("HTTP_PROXY_IGNORE", d, True) or "").split()
                for p in ftpproxy_ignore:
                    if uri_host.endswith(p):
                        ftpproxy = None
                        break
            if httpproxy:
                fetchcmd = "http_proxy=" + httpproxy + " " + fetchcmd
            if ftpproxy:
                fetchcmd = "ftp_proxy=" + ftpproxy + " " + fetchcmd
            bb.msg.debug(2, bb.msg.domain.Fetcher, "executing " + fetchcmd)
            ret = os.system(fetchcmd)
            if ret != 0:
                return False

            # Sanity check since wget can pretend it succeeded when it didn't
            # Also, this used to happen if sourceforge sent us to the mirror page
            if not os.path.exists(ud.localpath) and not checkonly:
                bb.msg.debug(2, bb.msg.domain.Fetcher, "The fetch command for %s returned success but %s doesn't exist?..." % (uri, ud.localpath))
                return False

            return True

        localdata = data.createCopy(d)
        data.setVar('OVERRIDES', "wget:" + data.getVar('OVERRIDES', localdata), localdata)
        data.update_data(localdata)

        if fetch_uri(uri, ud, localdata):
            return True

        raise FetchError(uri)


    def checkstatus(self, uri, ud, d):
        return self.go(uri, ud, d, True)
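A minimal configuration sketch for the proxy handling above, with placeholder hosts; any host whose name ends with an entry in HTTP_PROXY_IGNORE bypasses the proxy:

HTTP_PROXY = "http://proxy.example.com:8080/"
HTTP_PROXY_IGNORE = "intranet.example.com downloads.example.net"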
File diff suppressed because it is too large
@@ -1,143 +0,0 @@
"""
BitBake 'Fetch' implementation for bzr.

"""

# Copyright (C) 2007 Ross Burton
# Copyright (C) 2007 Richard Purdie
#
# Classes for obtaining upstream sources for the
# BitBake build tools.
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger

class Bzr(FetchMethod):
    def supports(self, url, ud, d):
        return ud.type in ['bzr']

    def urldata_init(self, ud, d):
        """
        init bzr specific variable within url data
        """
        # Create paths to bzr checkouts
        relpath = self._strip_leading_slashes(ud.path)
        ud.pkgdir = os.path.join(data.expand('${BZRDIR}', d), ud.host, relpath)

        ud.setup_revisons(d)

        if not ud.revision:
            ud.revision = self.latest_revision(ud.url, ud, d)

        ud.localfile = data.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision), d)

    def _buildbzrcommand(self, ud, d, command):
        """
        Build up a bzr commandline based on ud
        command is "fetch", "update", "revno"
        """

        basecmd = data.expand('${FETCHCMD_bzr}', d)

        proto = ud.parm.get('proto', 'http')

        bzrroot = ud.host + ud.path

        options = []

        if command == "revno":
            bzrcmd = "%s revno %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
        else:
            if ud.revision:
                options.append("-r %s" % ud.revision)

            if command == "fetch":
                bzrcmd = "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
            elif command == "update":
                bzrcmd = "%s pull %s --overwrite" % (basecmd, " ".join(options))
            else:
                raise FetchError("Invalid bzr command %s" % command, ud.url)

        return bzrcmd

    def download(self, loc, ud, d):
        """Fetch url"""

        if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
            bzrcmd = self._buildbzrcommand(ud, d, "update")
            logger.debug(1, "BZR Update %s", loc)
            bb.fetch2.check_network_access(d, bzrcmd, ud.url)
            os.chdir(os.path.join(ud.pkgdir, os.path.basename(ud.path)))
            runfetchcmd(bzrcmd, d)
        else:
            bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
            bzrcmd = self._buildbzrcommand(ud, d, "fetch")
            bb.fetch2.check_network_access(d, bzrcmd, ud.url)
            logger.debug(1, "BZR Checkout %s", loc)
            bb.utils.mkdirhier(ud.pkgdir)
            os.chdir(ud.pkgdir)
            logger.debug(1, "Running %s", bzrcmd)
            runfetchcmd(bzrcmd, d)

        os.chdir(ud.pkgdir)

        scmdata = ud.parm.get("scmdata", "")
        if scmdata == "keep":
            tar_flags = ""
        else:
            tar_flags = "--exclude '.bzr' --exclude '.bzrtags'"

        # tar them up to a defined filename
        runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(ud.pkgdir)), d, cleanup = [ud.localpath])

    def supports_srcrev(self):
        return True

    def _revision_key(self, url, ud, d, name):
        """
        Return a unique key for the url
        """
        return "bzr:" + ud.pkgdir

    def _latest_revision(self, url, ud, d, name):
        """
        Return the latest upstream revision number
        """
        logger.debug(2, "BZR fetcher hitting network for %s", url)

        bb.fetch2.check_network_access(d, self._buildbzrcommand(ud, d, "revno"), ud.url)

        output = runfetchcmd(self._buildbzrcommand(ud, d, "revno"), d, True)

        return output.strip()

    def _sortable_revision(self, url, ud, d):
        """
        Return a sortable revision number which in our case is the revision number
        """

        return self._build_revision(url, ud, d)

    def _build_revision(self, url, ud, d):
        return ud.revision
@@ -1,181 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

Classes for obtaining upstream sources for the
BitBake build tools.

"""

# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
#Based on functions from the base bb module, Copyright 2003 Holger Schurig
#

import os
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod, FetchError, MissingParameterError, logger
from bb.fetch2 import runfetchcmd

class Cvs(FetchMethod):
    """
    Class to fetch a module or modules from cvs repositories
    """
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with cvs.
        """
        return ud.type in ['cvs']

    def urldata_init(self, ud, d):
        if not "module" in ud.parm:
            raise MissingParameterError("module", ud.url)
        ud.module = ud.parm["module"]

        ud.tag = ud.parm.get('tag', "")

        # Override the default date in certain cases
        if 'date' in ud.parm:
            ud.date = ud.parm['date']
        elif ud.tag:
            ud.date = ""

        norecurse = ''
        if 'norecurse' in ud.parm:
            norecurse = '_norecurse'

        fullpath = ''
        if 'fullpath' in ud.parm:
            fullpath = '_fullpath'

        ud.localfile = data.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath), d)

    def need_update(self, url, ud, d):
        if (ud.date == "now"):
            return True
        if not os.path.exists(ud.localpath):
            return True
        return False

    def download(self, loc, ud, d):

        method = ud.parm.get('method', 'pserver')
        localdir = ud.parm.get('localdir', ud.module)
        cvs_port = ud.parm.get('port', '')

        cvs_rsh = None
        if method == "ext":
            if "rsh" in ud.parm:
                cvs_rsh = ud.parm["rsh"]

        if method == "dir":
            cvsroot = ud.path
        else:
            cvsroot = ":" + method
            cvsproxyhost = data.getVar('CVS_PROXY_HOST', d, True)
            if cvsproxyhost:
                cvsroot += ";proxy=" + cvsproxyhost
            cvsproxyport = data.getVar('CVS_PROXY_PORT', d, True)
            if cvsproxyport:
                cvsroot += ";proxyport=" + cvsproxyport
            cvsroot += ":" + ud.user
            if ud.pswd:
                cvsroot += ":" + ud.pswd
            cvsroot += "@" + ud.host + ":" + cvs_port + ud.path

        options = []
        if 'norecurse' in ud.parm:
            options.append("-l")
        if ud.date:
            # treat YYYYMMDDHHMM specially for CVS
            if len(ud.date) == 12:
                options.append("-D \"%s %s:%s UTC\"" % (ud.date[0:8], ud.date[8:10], ud.date[10:12]))
            else:
                options.append("-D \"%s UTC\"" % ud.date)
        if ud.tag:
            options.append("-r %s" % ud.tag)

        localdata = data.createCopy(d)
        data.setVar('OVERRIDES', "cvs:%s" % data.getVar('OVERRIDES', localdata), localdata)
        data.update_data(localdata)

        data.setVar('CVSROOT', cvsroot, localdata)
        data.setVar('CVSCOOPTS', " ".join(options), localdata)
        data.setVar('CVSMODULE', ud.module, localdata)
        cvscmd = data.getVar('FETCHCOMMAND', localdata, True)
        cvsupdatecmd = data.getVar('UPDATECOMMAND', localdata, True)

        if cvs_rsh:
            cvscmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvscmd)
            cvsupdatecmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvsupdatecmd)

        # create module directory
        logger.debug(2, "Fetch: checking for module directory")
        pkg = data.expand('${PN}', d)
        pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)
        moddir = os.path.join(pkgdir, localdir)
        if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
            logger.info("Update " + loc)
            bb.fetch2.check_network_access(d, cvsupdatecmd, ud.url)
            # update sources there
            os.chdir(moddir)
            cmd = cvsupdatecmd
        else:
            logger.info("Fetch " + loc)
            # check out sources there
            bb.utils.mkdirhier(pkgdir)
            os.chdir(pkgdir)
            logger.debug(1, "Running %s", cvscmd)
            bb.fetch2.check_network_access(d, cvscmd, ud.url)
            cmd = cvscmd

        runfetchcmd(cmd, d, cleanup = [moddir])

        if not os.access(moddir, os.R_OK):
            raise FetchError("Directory %s was not readable despite successful fetch?!" % moddir, ud.url)

        scmdata = ud.parm.get("scmdata", "")
        if scmdata == "keep":
            tar_flags = ""
        else:
            tar_flags = "--exclude 'CVS'"

        # tar them up to a defined filename
        if 'fullpath' in ud.parm:
            os.chdir(pkgdir)
            cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, localdir)
        else:
            os.chdir(moddir)
            os.chdir('..')
            cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(moddir))

        runfetchcmd(cmd, d, cleanup = [ud.localpath])

    def clean(self, ud, d):
        """ Clean CVS Files and tarballs """

        pkg = data.expand('${PN}', d)
        localdata = data.createCopy(d)
        data.setVar('OVERRIDES', "cvs:%s" % data.getVar('OVERRIDES', localdata), localdata)
        data.update_data(localdata)
        pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)

        bb.utils.remove(pkgdir, True)
        bb.utils.remove(ud.localpath)
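An illustrative URL for the cvs fetcher above (server, root and module are placeholders); module is mandatory, and method defaults to pserver:

SRC_URI = "cvs://anonymous@cvs.example.org/var/lib/cvs;module=mymodule;method=pserver;tag=REL_1_0"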
@@ -1,331 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' git implementation

The git fetcher supports SRC_URIs with the format:
SRC_URI = "git://some.host/somepath;OptionA=xxx;OptionB=xxx;..."

Supported SRC_URI options are:

- branch
   The git branch to retrieve from. The default is "master"

   This option also supports fetching multiple branches, separated by
   commas. In the multiple-branch case the name option must carry the
   same number of names to match the branches; each name is then used
   to specify the SRCREV for its branch
   e.g:
   SRC_URI="git://some.host/somepath;branch=branchX,branchY;name=nameX,nameY"
   SRCREV_nameX = "xxxxxxxxxxxxxxxxxxxx"
   SRCREV_nameY = "YYYYYYYYYYYYYYYYYYYY"

- tag
    The git tag to retrieve. The default is "master"

- protocol
    The method to use to access the repository. Common options are "git",
    "http", "file" and "rsync". The default is "git"

- rebaseable
    rebaseable indicates that the upstream git repo may rebase in the future,
    and the current revision may disappear from the upstream repo. This option
    reminds the fetcher to preserve the local cache carefully for future use.
    The default value is "0"; set rebaseable=1 for a rebaseable git repo

- nocheckout
    Don't checkout source code when unpacking. Set this option for a recipe
    that has its own routine to checkout code.
    The default is "0"; set nocheckout=1 if needed.

- bareclone
    Create a bare clone of the source code and don't checkout the source code
    when unpacking. Set this option for a recipe that has its own routine to
    checkout code and tracking branch requirements.
    The default is "0"; set bareclone=1 if needed.

"""

#Copyright (C) 2005 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger

class Git(FetchMethod):
    """Class to fetch a module or modules from git repositories"""
    def init(self, d):
        #
        # Only enable _sortable revision if the key is set
        #
        if d.getVar("BB_GIT_CLONE_FOR_SRCREV", True):
            self._sortable_buildindex = self._sortable_buildindex_disabled
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with git.
        """
        return ud.type in ['git']

    def urldata_init(self, ud, d):
        """
        init git specific variable within url data
        so that the git method like latest_revision() can work
        """
        if 'protocol' in ud.parm:
            ud.proto = ud.parm['protocol']
        elif not ud.host:
            ud.proto = 'file'
        else:
            ud.proto = "git"

        if not ud.proto in ('git', 'file', 'ssh', 'http', 'https', 'rsync'):
            raise bb.fetch2.ParameterError("Invalid protocol type", ud.url)

        ud.nocheckout = ud.parm.get("nocheckout","0") == "1"

        ud.rebaseable = ud.parm.get("rebaseable","0") == "1"

        # bareclone implies nocheckout
        ud.bareclone = ud.parm.get("bareclone","0") == "1"
        if ud.bareclone:
            ud.nocheckout = 1

        branches = ud.parm.get("branch", "master").split(',')
        if len(branches) != len(ud.names):
            raise bb.fetch2.ParameterError("The number of name and branch parameters is not balanced", ud.url)
        ud.branches = {}
        for name in ud.names:
            branch = branches[ud.names.index(name)]
            ud.branches[name] = branch

        ud.basecmd = data.getVar("FETCHCMD_git", d, True) or "git"

        ud.write_tarballs = ((data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True) or "0") != "0") or ud.rebaseable

        ud.setup_revisons(d)

        for name in ud.names:
            # Ensure anything that doesn't look like a SHA-1 checksum/revision is translated into one
            if not ud.revisions[name] or len(ud.revisions[name]) != 40 or (False in [c in "abcdef0123456789" for c in ud.revisions[name]]):
                ud.branches[name] = ud.revisions[name]
                ud.revisions[name] = self.latest_revision(ud.url, ud, d, name)

        gitsrcname = '%s%s' % (ud.host.replace(':','.'), ud.path.replace('/', '.'))
        # for rebaseable git repo, it is necessary to keep mirror tar ball
        # per revision, so that even the revision disappears from the
        # upstream repo in the future, the mirror will remain intact and still
        # contains the revision
        if ud.rebaseable:
            for name in ud.names:
                gitsrcname = gitsrcname + '_' + ud.revisions[name]
        ud.mirrortarball = 'git2_%s.tar.gz' % (gitsrcname)
        ud.fullmirror = os.path.join(data.getVar("DL_DIR", d, True), ud.mirrortarball)
        ud.clonedir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)

        ud.localfile = ud.clonedir

    def localpath(self, url, ud, d):
        return ud.clonedir

    def need_update(self, u, ud, d):
        if not os.path.exists(ud.clonedir):
            return True
        os.chdir(ud.clonedir)
        for name in ud.names:
            if not self._contains_ref(ud.revisions[name], d):
                return True
        if ud.write_tarballs and not os.path.exists(ud.fullmirror):
            return True
        return False

    def try_premirror(self, u, ud, d):
        # If we don't do this, updating an existing checkout with only premirrors
        # is not possible
        if d.getVar("BB_FETCH_PREMIRRORONLY", True) is not None:
            return True
        if os.path.exists(ud.clonedir):
            return False
        return True

    def download(self, loc, ud, d):
        """Fetch url"""

        if ud.user:
            username = ud.user + '@'
        else:
            username = ""

        ud.repochanged = not os.path.exists(ud.fullmirror)

        # If the checkout doesn't exist and the mirror tarball does, extract it
        if not os.path.exists(ud.clonedir) and os.path.exists(ud.fullmirror):
            bb.utils.mkdirhier(ud.clonedir)
            os.chdir(ud.clonedir)
            runfetchcmd("tar -xzf %s" % (ud.fullmirror), d)

        repourl = "%s://%s%s%s" % (ud.proto, username, ud.host, ud.path)

        # If the repo still doesn't exist, fallback to cloning it
        if not os.path.exists(ud.clonedir):
            clone_cmd = "%s clone --bare --mirror %s %s" % (ud.basecmd, repourl, ud.clonedir)
            bb.fetch2.check_network_access(d, clone_cmd)
            runfetchcmd(clone_cmd, d)

        os.chdir(ud.clonedir)
        # Update the checkout if needed
        needupdate = False
        for name in ud.names:
            if not self._contains_ref(ud.revisions[name], d):
                needupdate = True
        if needupdate:
            try:
                runfetchcmd("%s remote prune origin" % ud.basecmd, d)
                runfetchcmd("%s remote rm origin" % ud.basecmd, d)
            except bb.fetch2.FetchError:
                logger.debug(1, "No Origin")

            runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d)
            fetch_cmd = "%s fetch -f --prune %s refs/*:refs/*" % (ud.basecmd, repourl)
            bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
            runfetchcmd(fetch_cmd, d)
            runfetchcmd("%s prune-packed" % ud.basecmd, d)
            runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)
            ud.repochanged = True

    def build_mirror_data(self, url, ud, d):
        # Generate a mirror tarball if needed
        if ud.write_tarballs and (ud.repochanged or not os.path.exists(ud.fullmirror)):
            os.chdir(ud.clonedir)
            logger.info("Creating tarball of git repository")
            runfetchcmd("tar -czf %s %s" % (ud.fullmirror, os.path.join(".")), d)
            runfetchcmd("touch %s.done" % (ud.fullmirror), d)

    def unpack(self, ud, destdir, d):
        """ unpack the downloaded src to destdir"""

        subdir = ud.parm.get("subpath", "")
        if subdir != "":
            readpathspec = ":%s" % (subdir)
            def_destsuffix = "%s/" % os.path.basename(subdir)
        else:
            readpathspec = ""
            def_destsuffix = "git/"

        destsuffix = ud.parm.get("destsuffix", def_destsuffix)
        destdir = os.path.join(destdir, destsuffix)
        if os.path.exists(destdir):
            bb.utils.prunedir(destdir)

        cloneflags = "-s -n"
        if ud.bareclone:
            cloneflags += " --mirror"

        runfetchcmd("git clone %s %s/ %s" % (cloneflags, ud.clonedir, destdir), d)
        if not ud.nocheckout:
            os.chdir(destdir)
            if subdir != "":
                runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d)
                runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d)
            else:
                runfetchcmd("%s checkout %s" % (ud.basecmd, ud.revisions[ud.names[0]]), d)
        return True

    def clean(self, ud, d):
        """ clean the git directory """

        bb.utils.remove(ud.localpath, True)
        bb.utils.remove(ud.fullmirror)

    def supports_srcrev(self):
        return True

    def _contains_ref(self, tag, d):
        basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
        cmd = "%s log --pretty=oneline -n 1 %s -- 2> /dev/null | wc -l" % (basecmd, tag)
        output = runfetchcmd(cmd, d, quiet=True)
        if len(output.split()) > 1:
            raise bb.fetch2.FetchError("The command '%s' gave output with more than 1 line unexpectedly, output: '%s'" % (cmd, output))
        return output.split()[0] != "0"

    def _revision_key(self, url, ud, d, name):
        """
        Return a unique key for the url
        """
        return "git:" + ud.host + ud.path.replace('/', '.') + ud.branches[name]

    def _latest_revision(self, url, ud, d, name):
        """
        Compute the HEAD revision for the url
        """
        if ud.user:
            username = ud.user + '@'
        else:
            username = ""

        basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
        cmd = "%s ls-remote %s://%s%s%s %s" % \
              (basecmd, ud.proto, username, ud.host, ud.path, ud.branches[name])
        bb.fetch2.check_network_access(d, cmd)
        output = runfetchcmd(cmd, d, True)
        if not output:
            raise bb.fetch2.FetchError("The command %s gave empty output unexpectedly" % cmd, url)
        return output.split()[0]

    def _build_revision(self, url, ud, d, name):
        return ud.revisions[name]

    def _sortable_buildindex_disabled(self, url, ud, d, rev):
        """
        Return a suitable buildindex for the revision specified. This is done by counting revisions
        using "git rev-list" which may or may not work in different circumstances.
        """

        cwd = os.getcwd()

        # Check if we have the rev already

        if not os.path.exists(ud.clonedir):
            logger.debug(1, "GIT repository for %s does not exist in %s. \
                            Downloading.", url, ud.clonedir)
            self.download(None, ud, d)
            if not os.path.exists(ud.clonedir):
                logger.error("GIT repository for %s does not exist in %s after \
                              download. Cannot get sortable buildnumber, using \
                              old value", url, ud.clonedir)
                return None


        os.chdir(ud.clonedir)
        if not self._contains_ref(rev, d):
            self.download(None, ud, d)

        output = runfetchcmd("%s rev-list %s -- 2> /dev/null | wc -l" % (ud.basecmd, rev), d, quiet=True)
        os.chdir(cwd)

        buildindex = "%s" % output.split()[0]
        logger.debug(1, "GIT repository for %s in %s is returning %s revisions in rev-list before %s", url, ud.clonedir, buildindex, rev)
        return buildindex

    def checkstatus(self, uri, ud, d):
        fetchcmd = "%s ls-remote %s" % (ud.basecmd, uri)
        try:
            runfetchcmd(fetchcmd, d, quiet=True)
            return True
        except bb.fetch2.FetchError:
            return False
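Building on the docstring above, a combined example (repository URL and revision are placeholders, and the SRCREV shown is a dummy 40-character hex string, the form urldata_init() checks for):

SRC_URI = "git://git.example.com/project.git;protocol=http;branch=stable;rebaseable=1"
SRCREV = "0123456789abcdef0123456789abcdef01234567"

Anything in SRCREV that is not a 40-character hex string is treated as a ref name and resolved through latest_revision().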
@@ -1,176 +0,0 @@
|
||||
# ex:ts=4:sw=4:sts=4:et
|
||||
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
|
||||
"""
|
||||
BitBake 'Fetch' implementation for mercurial DRCS (hg).
|
||||
|
||||
"""
|
||||
|
||||
# Copyright (C) 2003, 2004 Chris Larson
|
||||
# Copyright (C) 2004 Marcin Juszkiewicz
|
||||
# Copyright (C) 2007 Robert Schuster
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License version 2 as
|
||||
# published by the Free Software Foundation.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License along
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
#
|
||||
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
|
||||
|
||||
import os
|
||||
import sys
|
||||
import logging
|
||||
import bb
|
||||
from bb import data
|
||||
from bb.fetch2 import FetchMethod
|
||||
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger

class Hg(FetchMethod):
    """Class to fetch from mercurial repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with mercurial.
        """
        return ud.type in ['hg']

    def urldata_init(self, ud, d):
        """
        init hg specific variable within url data
        """
        if not "module" in ud.parm:
            raise MissingParameterError('module', ud.url)

        ud.module = ud.parm["module"]

        # Create paths to mercurial checkouts
        relpath = self._strip_leading_slashes(ud.path)
        ud.pkgdir = os.path.join(data.expand('${HGDIR}', d), ud.host, relpath)
        ud.moddir = os.path.join(ud.pkgdir, ud.module)

        ud.setup_revisons(d)

        if 'rev' in ud.parm:
            ud.revision = ud.parm['rev']
        elif not ud.revision:
            ud.revision = self.latest_revision(ud.url, ud, d)

        ud.localfile = data.expand('%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)

    def need_update(self, url, ud, d):
        revTag = ud.parm.get('rev', 'tip')
        if revTag == "tip":
            return True
        if not os.path.exists(ud.localpath):
            return True
        return False

    def _buildhgcommand(self, ud, d, command):
        """
        Build up an hg commandline based on ud
        command is "fetch", "update", "info"
        """

        basecmd = data.expand('${FETCHCMD_hg}', d)

        proto = ud.parm.get('proto', 'http')

        host = ud.host
        if proto == "file":
            host = "/"
            ud.host = "localhost"

        if not ud.user:
            hgroot = host + ud.path
        else:
            hgroot = ud.user + "@" + host + ud.path

        if command == "info":
            return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, ud.module)

        options = []
        if ud.revision:
            options.append("-r %s" % ud.revision)

        if command == "fetch":
            cmd = "%s clone %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
        elif command == "pull":
            # do not pass the options list; limiting pull to rev causes the local
            # repo not to contain it and the immediately following "update"
            # command will crash
            cmd = "%s pull" % (basecmd)
        elif command == "update":
            cmd = "%s update -C %s" % (basecmd, " ".join(options))
        else:
            raise FetchError("Invalid hg command %s" % command, ud.url)

        return cmd

    def download(self, loc, ud, d):
        """Fetch url"""

        logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")

        if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
            updatecmd = self._buildhgcommand(ud, d, "pull")
            logger.info("Update " + loc)
            # update sources there
            os.chdir(ud.moddir)
            logger.debug(1, "Running %s", updatecmd)
            bb.fetch2.check_network_access(d, updatecmd, ud.url)
            runfetchcmd(updatecmd, d)

        else:
            fetchcmd = self._buildhgcommand(ud, d, "fetch")
            logger.info("Fetch " + loc)
            # check out sources there
            bb.utils.mkdirhier(ud.pkgdir)
            os.chdir(ud.pkgdir)
            logger.debug(1, "Running %s", fetchcmd)
            bb.fetch2.check_network_access(d, fetchcmd, ud.url)
            runfetchcmd(fetchcmd, d)

        # Even when we clone (fetch), we still need to update as hg's clone
        # won't checkout the specified revision if it's on a branch
        updatecmd = self._buildhgcommand(ud, d, "update")
        os.chdir(ud.moddir)
        logger.debug(1, "Running %s", updatecmd)
        runfetchcmd(updatecmd, d)

        scmdata = ud.parm.get("scmdata", "")
        if scmdata == "keep":
            tar_flags = ""
        else:
            tar_flags = "--exclude '.hg' --exclude '.hgtags'"

        os.chdir(ud.pkgdir)
        runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d, cleanup = [ud.localpath])

    def supports_srcrev(self):
        return True

    def _latest_revision(self, url, ud, d, name):
        """
        Compute tip revision for the url
        """
        bb.fetch2.check_network_access(d, self._buildhgcommand(ud, d, "info"))
        output = runfetchcmd(self._buildhgcommand(ud, d, "info"), d)
        return output.strip()

    def _build_revision(self, url, ud, d, name):
        return ud.revision

    def _revision_key(self, url, ud, d, name):
        """
        Return a unique key for the url
        """
        return "hg:" + ud.moddir
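
For orientation before the next file, a sketch of the URL shape the Hg class consumes. The host, path and parameter values below are hypothetical, and such a line only has effect inside a BitBake recipe with this fetcher configured:

    # Hypothetical recipe fragment; "module" is mandatory (urldata_init raises
    # MissingParameterError without it), "rev" is optional and is otherwise
    # resolved through _latest_revision(), i.e. "hg identify -i" on the remote.
    SRC_URI = "hg://hg.example.com/projects;module=myproject;rev=1a2b3c4d5e6f"
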
@@ -1,91 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

Classes for obtaining upstream sources for the
BitBake build tools.

"""

# Copyright (C) 2003, 2004  Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import bb
import bb.utils
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import logger  # used by checkstatus()

class Local(FetchMethod):
    def supports(self, url, urldata, d):
        """
        Check to see if a given url represents a local fetch.
        """
        return urldata.type in ['file']

    def urldata_init(self, ud, d):
        # We don't set localfile as for this fetcher the file is already local!
        ud.basename = os.path.basename(ud.url.split("://")[1].split(";")[0])
        return

    def localpath(self, url, urldata, d):
        """
        Return the local filename of a given url assuming a successful fetch.
        """
        path = url.split("://")[1]
        path = path.split(";")[0]
        newpath = path
        if path[0] != "/":
            filespath = data.getVar('FILESPATH', d, True)
            if filespath:
                newpath = bb.utils.which(filespath, path)
            if not newpath:
                filesdir = data.getVar('FILESDIR', d, True)
                if filesdir:
                    newpath = os.path.join(filesdir, path)
        if not os.path.exists(newpath) and path.find("*") == -1:
            dldirfile = os.path.join(data.getVar("DL_DIR", d, True), os.path.basename(path))
            return dldirfile
        return newpath

    def need_update(self, url, ud, d):
        if url.find("*") != -1:
            return False
        if os.path.exists(ud.localpath):
            return False
        return True

    def download(self, url, urldata, d):
        """Fetch urls (no-op for Local method)"""
        # no need to fetch local files, we'll deal with them in place.
        return 1

    def checkstatus(self, url, urldata, d):
        """
        Check the status of the url
        """
        if urldata.localpath.find("*") != -1:
            logger.info("URL %s looks like a glob and was therefore not checked.", url)
            return True
        if os.path.exists(urldata.localpath):
            return True
        return False

    def clean(self, urldata, d):
        return
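
A sketch of the two file:// forms this class resolves; the paths are made up. Relative paths are searched through FILESPATH and then FILESDIR, absolute paths are used as-is, and a missing non-glob file falls back to DL_DIR:

    SRC_URI = "file://defconfig"                # relative: resolved via FILESPATH/FILESDIR
    SRC_URI = "file:///opt/src/app-1.0.tar.gz"  # absolute: returned unchanged by localpath()
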
@@ -1,135 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
Bitbake "Fetch" implementation for osc (Opensuse build service client).
Based on the svn "Fetch" implementation.

"""

import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger  # used by download()

class Osc(FetchMethod):
    """Class to fetch a module or modules from Opensuse build server
       repositories."""

    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with osc.
        """
        return ud.type in ['osc']

    def urldata_init(self, ud, d):
        if not "module" in ud.parm:
            raise MissingParameterError('module', ud.url)

        ud.module = ud.parm["module"]

        # Create paths to osc checkouts
        relpath = self._strip_leading_slashes(ud.path)
        ud.pkgdir = os.path.join(data.expand('${OSCDIR}', d), ud.host)
        ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)

        if 'rev' in ud.parm:
            ud.revision = ud.parm['rev']
        else:
            pv = data.getVar("PV", d, 0)
            rev = bb.fetch2.srcrev_internal_helper(ud, d)
            if rev and rev != True:
                ud.revision = rev
            else:
                ud.revision = ""

        ud.localfile = data.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision), d)

    def _buildosccommand(self, ud, d, command):
        """
        Build up an osc commandline based on ud
        command is "fetch", "update", "info"
        """

        basecmd = data.expand('${FETCHCMD_osc}', d)

        proto = ud.parm.get('proto', 'osc')

        options = []

        config = "-c %s" % self.generate_config(ud, d)

        if ud.revision:
            options.append("-r %s" % ud.revision)

        coroot = self._strip_leading_slashes(ud.path)

        if command == "fetch":
            osccmd = "%s %s co %s/%s %s" % (basecmd, config, coroot, ud.module, " ".join(options))
        elif command == "update":
            osccmd = "%s %s up %s" % (basecmd, config, " ".join(options))
        else:
            raise FetchError("Invalid osc command %s" % command, ud.url)

        return osccmd

    def download(self, loc, ud, d):
        """
        Fetch url
        """

        logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")

        if os.access(os.path.join(data.expand('${OSCDIR}', d), ud.path, ud.module), os.R_OK):
            oscupdatecmd = self._buildosccommand(ud, d, "update")
            logger.info("Update " + loc)
            # update sources there
            os.chdir(ud.moddir)
            logger.debug(1, "Running %s", oscupdatecmd)
            bb.fetch2.check_network_access(d, oscupdatecmd, ud.url)
            runfetchcmd(oscupdatecmd, d)
        else:
            oscfetchcmd = self._buildosccommand(ud, d, "fetch")
            logger.info("Fetch " + loc)
            # check out sources there
            bb.utils.mkdirhier(ud.pkgdir)
            os.chdir(ud.pkgdir)
            logger.debug(1, "Running %s", oscfetchcmd)
            bb.fetch2.check_network_access(d, oscfetchcmd, ud.url)
            runfetchcmd(oscfetchcmd, d)

        os.chdir(os.path.join(ud.pkgdir + ud.path))
        # tar them up to a defined filename
        runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d, cleanup = [ud.localpath])

    def supports_srcrev(self):
        return False

    def generate_config(self, ud, d):
        """
        Generate a .oscrc to be used for this run.
        """

        config_path = os.path.join(data.expand('${OSCDIR}', d), "oscrc")
        if os.path.exists(config_path):
            os.remove(config_path)

        f = open(config_path, 'w')
        f.write("[general]\n")
        f.write("apisrv = %s\n" % ud.host)
        f.write("scheme = http\n")
        f.write("su-wrapper = su -c\n")
        f.write("build-root = %s\n" % data.expand('${WORKDIR}', d))
        f.write("urllist = http://moblin-obs.jf.intel.com:8888/build/%(project)s/%(repository)s/%(buildarch)s/:full/%(name)s.rpm\n")
        f.write("extra-pkgs = gzip\n")
        f.write("\n")
        f.write("[%s]\n" % ud.host)
        f.write("user = %s\n" % ud.parm["user"])
        f.write("pass = %s\n" % ud.parm["pswd"])
        f.close()

        return config_path
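
A hypothetical osc URL for reference; besides the mandatory "module", generate_config() also expects "user" and "pswd" parameters when it writes the temporary oscrc:

    SRC_URI = "osc://api.example.org/Project:Example;module=mypkg;rev=7;user=me;pswd=secret"
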
@@ -1,196 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

Classes for obtaining upstream sources for the
BitBake build tools.

"""

# Copyright (C) 2003, 2004  Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

from future_builtins import zip
import os
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd

class Perforce(FetchMethod):
    def supports(self, url, ud, d):
        return ud.type in ['p4']

    def doparse(url, d):
        parm = {}
        path = url.split("://")[1]
        delim = path.find("@")
        if delim != -1:
            (user, pswd, host, port) = path.split('@')[0].split(":")
            path = path.split('@')[1]
        else:
            (host, port) = data.getVar('P4PORT', d).split(':')
            user = ""
            pswd = ""

        if path.find(";") != -1:
            keys = []
            values = []
            plist = path.split(';')
            for item in plist:
                if item.count('='):
                    (key, value) = item.split('=')
                    keys.append(key)
                    values.append(value)

            parm = dict(zip(keys, values))
        path = "//" + path.split(';')[0]
        host += ":%s" % (port)
        parm["cset"] = Perforce.getcset(d, path, host, user, pswd, parm)

        return host, path, user, pswd, parm
    doparse = staticmethod(doparse)

    def getcset(d, depot, host, user, pswd, parm):
        p4opt = ""
        if "cset" in parm:
            return parm["cset"]
        if user:
            p4opt += " -u %s" % (user)
        if pswd:
            p4opt += " -P %s" % (pswd)
        if host:
            p4opt += " -p %s" % (host)

        p4date = data.getVar("P4DATE", d, True)
        if "revision" in parm:
            depot += "#%s" % (parm["revision"])
        elif "label" in parm:
            depot += "@%s" % (parm["label"])
        elif p4date:
            depot += "@%s" % (p4date)

        p4cmd = data.getVar('FETCHCOMMAND_p4', d, True)
        logger.debug(1, "Running %s%s changes -m 1 %s", p4cmd, p4opt, depot)
        p4file = os.popen("%s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
        cset = p4file.readline().strip()
        logger.debug(1, "READ %s", cset)
        if not cset:
            return -1

        return cset.split(' ')[1]
    getcset = staticmethod(getcset)

    def urldata_init(self, ud, d):
        (host, path, user, pswd, parm) = Perforce.doparse(ud.url, d)

        # If a label is specified, we use that as our filename

        if "label" in parm:
            ud.localfile = "%s.tar.gz" % (parm["label"])
            return

        base = path
        which = path.find('/...')
        if which != -1:
            base = path[:which]

        base = self._strip_leading_slashes(base)

        cset = Perforce.getcset(d, path, host, user, pswd, parm)

        ud.localfile = data.expand('%s+%s+%s.tar.gz' % (host, base.replace('/', '.'), cset), d)

    def download(self, loc, ud, d):
        """
        Fetch urls
        """

        (host, depot, user, pswd, parm) = Perforce.doparse(loc, d)

        if depot.find('/...') != -1:
            path = depot[:depot.find('/...')]
        else:
            path = depot

        module = parm.get('module', os.path.basename(path))

        localdata = data.createCopy(d)
        data.setVar('OVERRIDES', "p4:%s" % data.getVar('OVERRIDES', localdata), localdata)
        data.update_data(localdata)

        # Get the p4 command
        p4opt = ""
        if user:
            p4opt += " -u %s" % (user)

        if pswd:
            p4opt += " -P %s" % (pswd)

        if host:
            p4opt += " -p %s" % (host)

        p4cmd = data.getVar('FETCHCOMMAND', localdata, True)

        # create temp directory
        logger.debug(2, "Fetch: creating temporary directory")
        bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
        data.setVar('TMPBASE', data.expand('${WORKDIR}/oep4.XXXXXX', localdata), localdata)
        tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
        tmpfile = tmppipe.readline().strip()
        if not tmpfile:
            raise FetchError("Fetch: unable to create temporary directory; make sure 'mktemp' is in the PATH.", loc)

        if "label" in parm:
            depot = "%s@%s" % (depot, parm["label"])
        else:
            cset = Perforce.getcset(d, depot, host, user, pswd, parm)
            depot = "%s@%s" % (depot, cset)

        os.chdir(tmpfile)
        logger.info("Fetch " + loc)
        logger.info("%s%s files %s", p4cmd, p4opt, depot)
        p4file = os.popen("%s%s files %s" % (p4cmd, p4opt, depot))

        if not p4file:
            raise FetchError("Fetch: unable to get the P4 files from %s" % depot, loc)

        count = 0

        for line in p4file:
            fields = line.split()

            if fields[2] == "delete":
                continue

            dest = fields[0][len(path)+1:]
            where = dest.find("#")

            os.system("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module, dest[:where], fields[0]))
            count = count + 1

        if count == 0:
            logger.error("Fetch: No files gathered from the P4 fetch")
            raise FetchError("Fetch: No files gathered from the P4 fetch", loc)

        runfetchcmd("tar -czf %s %s" % (ud.localpath, module), d, cleanup = [ud.localpath])
        # cleanup
        bb.utils.prunedir(tmpfile)
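
A hypothetical p4 URL of the shape doparse() accepts: an optional "user:pswd:host:port" block before the "@", the depot path after it, and optional parameters selecting a changeset, label or module:

    SRC_URI = "p4://me:secret:p4.example.com:1666@depot/project/...;revision=42"
    # Without the "@" block, host and port come from the P4PORT variable instead.
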
@@ -1,98 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake "Fetch" repo (git) implementation

"""

# Copyright (C) 2009 Tom Rini <trini@embeddedalley.com>
#
# Based on git.py which is:
# Copyright (C) 2005 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger  # used by download()

class Repo(FetchMethod):
    """Class to fetch a module or modules from repo (git) repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with repo.
        """
        return ud.type in ["repo"]

    def urldata_init(self, ud, d):
        """
        We don't care about the git rev of the manifests repository, but
        we do care about the manifest to use.  The default is "default".
        We also care about the branch or tag to be used.  The default is
        "master".
        """

        ud.proto = ud.parm.get('protocol', 'git')
        ud.branch = ud.parm.get('branch', 'master')
        ud.manifest = ud.parm.get('manifest', 'default.xml')
        if not ud.manifest.endswith('.xml'):
            ud.manifest += '.xml'

        ud.localfile = data.expand("repo_%s%s_%s_%s.tar.gz" % (ud.host, ud.path.replace("/", "."), ud.manifest, ud.branch), d)

    def download(self, loc, ud, d):
        """Fetch url"""

        if os.access(os.path.join(data.getVar("DL_DIR", d, True), ud.localfile), os.R_OK):
            logger.debug(1, "%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
            return

        gitsrcname = "%s%s" % (ud.host, ud.path.replace("/", "."))
        repodir = data.getVar("REPODIR", d, True) or os.path.join(data.getVar("DL_DIR", d, True), "repo")
        codir = os.path.join(repodir, gitsrcname, ud.manifest)

        if ud.user:
            username = ud.user + "@"
        else:
            username = ""

        bb.utils.mkdirhier(os.path.join(codir, "repo"))
        os.chdir(os.path.join(codir, "repo"))
        if not os.path.exists(os.path.join(codir, "repo", ".repo")):
            bb.fetch2.check_network_access(d, "repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), ud.url)
            runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d)

        bb.fetch2.check_network_access(d, "repo sync %s" % ud.url, ud.url)
        runfetchcmd("repo sync", d)
        os.chdir(codir)

        scmdata = ud.parm.get("scmdata", "")
        if scmdata == "keep":
            tar_flags = ""
        else:
            tar_flags = "--exclude '.repo' --exclude '.git'"

        # Create a cache
        runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.join(".", "*")), d)

    def supports_srcrev(self):
        return False

    def _build_revision(self, url, ud, d):
        return ud.manifest

    def _want_sortable_revision(self, url, ud, d):
        return False
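
A hypothetical repo URL; as the docstring above notes, protocol, branch and manifest fall back to "git", "master" and "default.xml":

    SRC_URI = "repo://git.example.com/platform/manifest;protocol=https;branch=release;manifest=minimal.xml"
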
@@ -1,97 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

This implementation is for svk. It is based on the svn implementation

"""

# Copyright (C) 2006 Holger Hans Peter Freyther
# Copyright (C) 2003, 2004  Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd

class Svk(FetchMethod):
    """Class to fetch a module or modules from svk repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with svk.
        """
        return ud.type in ['svk']

    def urldata_init(self, ud, d):

        if not "module" in ud.parm:
            raise MissingParameterError('module', ud.url)
        else:
            ud.module = ud.parm["module"]

        ud.revision = ud.parm.get('rev', "")

        ud.localfile = data.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ud.date), d)

    def need_update(self, url, ud, d):
        if ud.date == "now":
            return True
        if not os.path.exists(ud.localpath):
            return True
        return False

    def download(self, loc, ud, d):
        """Fetch urls"""

        svkroot = ud.host + ud.path

        svkcmd = "svk co -r {%s} %s/%s" % (ud.date, svkroot, ud.module)

        if ud.revision:
            svkcmd = "svk co -r %s %s/%s" % (ud.revision, svkroot, ud.module)

        # create temp directory
        localdata = data.createCopy(d)
        data.update_data(localdata)
        logger.debug(2, "Fetch: creating temporary directory")
        bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
        data.setVar('TMPBASE', data.expand('${WORKDIR}/oesvk.XXXXXX', localdata), localdata)
        tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
        tmpfile = tmppipe.readline().strip()
        if not tmpfile:
            logger.error("Fetch: unable to create temporary directory")
            raise FetchError("Fetch: unable to create temporary directory; make sure 'mktemp' is in the PATH.", loc)

        # check out sources there
        os.chdir(tmpfile)
        logger.info("Fetch " + loc)
        logger.debug(1, "Running %s", svkcmd)
        runfetchcmd(svkcmd, d, cleanup = [tmpfile])

        os.chdir(os.path.join(tmpfile, os.path.dirname(ud.module)))
        # tar them up to a defined filename
        runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.basename(ud.module)), d, cleanup = [ud.localpath])

        # cleanup
        bb.utils.prunedir(tmpfile)
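
A hypothetical svk URL; "module" is mandatory and "rev" optional, with ud.date driving need_update() when no revision is pinned:

    SRC_URI = "svk://svk.example.com/local;module=trunk;rev=1234"
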
@@ -1,182 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for svn.

"""

# Copyright (C) 2003, 2004  Chris Larson
# Copyright (C) 2004        Marcin Juszkiewicz
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger

class Svn(FetchMethod):
    """Class to fetch a module or modules from svn repositories"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with svn.
        """
        return ud.type in ['svn']

    def urldata_init(self, ud, d):
        """
        init svn specific variable within url data
        """
        if not "module" in ud.parm:
            raise MissingParameterError('module', ud.url)

        ud.module = ud.parm["module"]

        # Create paths to svn checkouts
        relpath = self._strip_leading_slashes(ud.path)
        ud.pkgdir = os.path.join(data.expand('${SVNDIR}', d), ud.host, relpath)
        ud.moddir = os.path.join(ud.pkgdir, ud.module)

        ud.setup_revisons(d)

        if 'rev' in ud.parm:
            ud.revision = ud.parm['rev']

        ud.localfile = data.expand('%s_%s_%s_%s_.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)

    def _buildsvncommand(self, ud, d, command):
        """
        Build up an svn commandline based on ud
        command is "fetch", "update", "info"
        """

        basecmd = data.expand('${FETCHCMD_svn}', d)

        proto = ud.parm.get('proto', 'svn')

        svn_rsh = None
        if proto == "svn+ssh" and "rsh" in ud.parm:
            svn_rsh = ud.parm["rsh"]

        svnroot = ud.host + ud.path

        options = []

        if ud.user:
            options.append("--username %s" % ud.user)

        if ud.pswd:
            options.append("--password %s" % ud.pswd)

        if command == "info":
            svncmd = "%s info %s %s://%s/%s/" % (basecmd, " ".join(options), proto, svnroot, ud.module)
        else:
            suffix = ""
            if ud.revision:
                options.append("-r %s" % ud.revision)
                suffix = "@%s" % (ud.revision)

            if command == "fetch":
                svncmd = "%s co %s %s://%s/%s%s %s" % (basecmd, " ".join(options), proto, svnroot, ud.module, suffix, ud.module)
            elif command == "update":
                svncmd = "%s update %s" % (basecmd, " ".join(options))
            else:
                raise FetchError("Invalid svn command %s" % command, ud.url)

        if svn_rsh:
            svncmd = "svn_RSH=\"%s\" %s" % (svn_rsh, svncmd)

        return svncmd

    def download(self, loc, ud, d):
        """Fetch url"""

        logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")

        if os.access(os.path.join(ud.moddir, '.svn'), os.R_OK):
            svnupdatecmd = self._buildsvncommand(ud, d, "update")
            logger.info("Update " + loc)
            # update sources there
            os.chdir(ud.moddir)
            logger.debug(1, "Running %s", svnupdatecmd)
            bb.fetch2.check_network_access(d, svnupdatecmd, ud.url)
            runfetchcmd(svnupdatecmd, d)
        else:
            svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
            logger.info("Fetch " + loc)
            # check out sources there
            bb.utils.mkdirhier(ud.pkgdir)
            os.chdir(ud.pkgdir)
            logger.debug(1, "Running %s", svnfetchcmd)
            bb.fetch2.check_network_access(d, svnfetchcmd, ud.url)
            runfetchcmd(svnfetchcmd, d)

        scmdata = ud.parm.get("scmdata", "")
        if scmdata == "keep":
            tar_flags = ""
        else:
            tar_flags = "--exclude '.svn'"

        os.chdir(ud.pkgdir)
        # tar them up to a defined filename
        runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d, cleanup = [ud.localpath])

    def clean(self, ud, d):
        """ Clean SVN specific files and dirs """

        bb.utils.remove(ud.localpath)
        bb.utils.remove(ud.moddir, True)


    def supports_srcrev(self):
        return True

    def _revision_key(self, url, ud, d, name):
        """
        Return a unique key for the url
        """
        return "svn:" + ud.moddir

    def _latest_revision(self, url, ud, d, name):
        """
        Return the latest upstream revision number
        """
        bb.fetch2.check_network_access(d, self._buildsvncommand(ud, d, "info"))

        output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "info"), d, True)

        revision = None
        for line in output.splitlines():
            if "Last Changed Rev" in line:
                revision = line.split(":")[1].strip()

        return revision

    def _sortable_revision(self, url, ud, d):
        """
        Return a sortable revision number which in our case is the revision number
        """

        return self._build_revision(url, ud, d)

    def _build_revision(self, url, ud, d):
        return ud.revision
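
A hypothetical svn URL; "module" is mandatory, "proto" defaults to "svn", and an "rsh" parameter is honoured only together with proto=svn+ssh:

    SRC_URI = "svn://svn.example.com/repos/project;module=trunk;proto=http;rev=1500"
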
@@ -1,92 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations

Classes for obtaining upstream sources for the
BitBake build tools.

"""

# Copyright (C) 2003, 2004  Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import os
import logging
import bb
import urllib
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import encodeurl
from bb.fetch2 import decodeurl
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd

class Wget(FetchMethod):
    """Class to fetch urls via 'wget'"""
    def supports(self, url, ud, d):
        """
        Check to see if a given url can be fetched with wget.
        """
        return ud.type in ['http', 'https', 'ftp']

    def urldata_init(self, ud, d):

        ud.basename = os.path.basename(ud.path)
        ud.localfile = data.expand(urllib.unquote(ud.basename), d)

    def download(self, uri, ud, d, checkonly = False):
        """Fetch urls"""

        def fetch_uri(uri, ud, d):
            if checkonly:
                fetchcmd = data.getVar("CHECKCOMMAND", d, True)
            elif os.path.exists(ud.localpath):
                # file exists, but we didn't complete it; trying again
                fetchcmd = data.getVar("RESUMECOMMAND", d, True)
            else:
                fetchcmd = data.getVar("FETCHCOMMAND", d, True)

            uri = uri.split(";")[0]
            uri_decoded = list(decodeurl(uri))
            uri_type = uri_decoded[0]
            uri_host = uri_decoded[1]

            fetchcmd = fetchcmd.replace("${URI}", uri.split(";")[0])
            fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
            if not checkonly:
                logger.info("fetch " + uri)
                logger.debug(2, "executing " + fetchcmd)
            bb.fetch2.check_network_access(d, fetchcmd)
            runfetchcmd(fetchcmd, d, quiet=checkonly)

            # Sanity check since wget can pretend it succeeded when it didn't
            # Also, this used to happen if sourceforge sent us to the mirror page
            if not os.path.exists(ud.localpath) and not checkonly:
                raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, ud.localpath), uri)

        localdata = data.createCopy(d)
        data.setVar('OVERRIDES', "wget:" + data.getVar('OVERRIDES', localdata), localdata)
        data.update_data(localdata)

        fetch_uri(uri, ud, localdata)

        return True

    def checkstatus(self, uri, ud, d):
        return self.download(uri, ud, d, True)
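
fetch_uri() fills command templates taken from the datastore, substituting ${URI} and ${FILE} with plain str.replace(). One plausible, purely illustrative set of values that a distro configuration might provide:

    FETCHCOMMAND = "wget -t 2 -T 30 --passive-ftp -O ${FILE} '${URI}'"
    RESUMECOMMAND = "wget -c -t 2 -T 30 --passive-ftp -O ${FILE} '${URI}'"
    CHECKCOMMAND = "wget --spider -t 2 -T 30 --passive-ftp '${URI}'"
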
@@ -27,7 +27,7 @@
    a method pool to do this task.

    This pool will be used to compile and execute the functions. It
    will be smart enough to
"""

from bb.utils import better_compile, better_exec
@@ -43,8 +43,8 @@ def insert_method(modulename, code, fn):
    Add the code of a module. The methods will simply be added,
    no checking will be done
    """
    comp = better_compile(code, modulename, fn)
    better_exec(comp, None, code, fn)
    comp = better_compile(code, "<bb>", fn)
    better_exec(comp, __builtins__, code, fn)

    # now some instrumentation
    code = comp.co_names
@@ -59,7 +59,7 @@ def insert_method(modulename, code, fn):
def check_insert_method(modulename, code, fn):
    """
    Add the code if it wasn't added before. The module
    name will be used for that

    Variables:
    @modulename a short name e.g. base.bbclass
@@ -81,4 +81,4 @@ def get_parsed_dict():
    """
    shortcut
    """
    return _parsed_methods

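
A minimal sketch of driving the pool; the module name and code string are hypothetical and assume a working bb installation:

    from bb import methodpool

    # Compile and register the code once; a second call with the same
    # module name is a no-op.
    methodpool.check_insert_method("base.bbclass",
                                   "def example_hook():\n    return 1\n",
                                   "base.bbclass")
    funcs = methodpool.get_parsed_dict()  # shortcut to the pool's bookkeeping dict
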
@@ -1,244 +0,0 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2012 Robert Yang
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os, logging, re, sys
import bb
logger = logging.getLogger("BitBake.Monitor")

def printErr(info):
    logger.error("%s\n Disk space monitor will NOT be enabled" % info)

def convertGMK(unit):

    """ Convert the space unit G, M, K, the unit is case-insensitive """

    unitG = re.match('([1-9][0-9]*)[gG]\s?$', unit)
    if unitG:
        return int(unitG.group(1)) * (1024 ** 3)
    unitM = re.match('([1-9][0-9]*)[mM]\s?$', unit)
    if unitM:
        return int(unitM.group(1)) * (1024 ** 2)
    unitK = re.match('([1-9][0-9]*)[kK]\s?$', unit)
    if unitK:
        return int(unitK.group(1)) * 1024
    unitN = re.match('([1-9][0-9]*)\s?$', unit)
    if unitN:
        return int(unitN.group(1))
    else:
        return None

def getMountedDev(path):

    """ Get the device mounted at the path, uses /proc/mounts """

    # Get the mount point of the filesystem containing path
    # st_dev is the ID of device containing file
    parentDev = os.stat(path).st_dev
    currentDev = parentDev
    # When the current directory's device is different from the
    # parent's, then the current directory is a mount point
    while parentDev == currentDev:
        mountPoint = path
        # Use dirname to get the parent's directory
        path = os.path.dirname(path)
        # Reached "/"
        if path == mountPoint:
            break
        parentDev = os.stat(path).st_dev

    try:
        with open("/proc/mounts", "r") as ifp:
            for line in ifp:
                procLines = line.rstrip('\n').split()
                if procLines[1] == mountPoint:
                    return procLines[0]
    except EnvironmentError:
        pass
    return None

def getDiskData(BBDirs, configuration):

    """Prepare disk data for disk space monitor"""

    # Save the device IDs; the ID needs to be unique (the dictionary's key is
    # unique), so that when more than one directory is located on the same
    # device, we just monitor it once
    devDict = {}
    for pathSpaceInode in BBDirs.split():
        # The input format is: "action,dir,space,inode"; action and dir are
        # required, space and inode are optional
        pathSpaceInodeRe = re.match('([^,]*),([^,]*),([^,]*),?(.*)', pathSpaceInode)
        if not pathSpaceInodeRe:
            printErr("Invalid value in BB_DISKMON_DIRS: %s" % pathSpaceInode)
            return None

        action = pathSpaceInodeRe.group(1)
        if action not in ("ABORT", "STOPTASKS", "WARN"):
            printErr("Unknown disk space monitor action: %s" % action)
            return None

        path = os.path.realpath(pathSpaceInodeRe.group(2))
        if not path:
            printErr("Invalid path value in BB_DISKMON_DIRS: %s" % pathSpaceInode)
            return None

        # The disk space or inode is optional, but it should have a correct
        # value once it is specified
        minSpace = pathSpaceInodeRe.group(3)
        if minSpace:
            minSpace = convertGMK(minSpace)
            if not minSpace:
                printErr("Invalid disk space value in BB_DISKMON_DIRS: %s" % pathSpaceInodeRe.group(3))
                return None
        else:
            # None means that it is not specified
            minSpace = None

        minInode = pathSpaceInodeRe.group(4)
        if minInode:
            minInode = convertGMK(minInode)
            if not minInode:
                printErr("Invalid inode value in BB_DISKMON_DIRS: %s" % pathSpaceInodeRe.group(4))
                return None
        else:
            # None means that it is not specified
            minInode = None

        if minSpace is None and minInode is None:
            printErr("No disk space or inode value found in BB_DISKMON_DIRS: %s" % pathSpaceInode)
            return None
        # mkdir for the directory since it may not exist, for example the
        # DL_DIR may not exist at the very beginning
        if not os.path.exists(path):
            bb.utils.mkdirhier(path)
        mountedDev = getMountedDev(path)
        devDict[mountedDev] = action, path, minSpace, minInode

    return devDict

def getInterval(configuration):

    """ Get the disk space interval """

    # The default values are 50M and 5K.
    spaceDefault = 50 * 1024 * 1024
    inodeDefault = 5 * 1024

    interval = configuration.getVar("BB_DISKMON_WARNINTERVAL", True)
    if not interval:
        return spaceDefault, inodeDefault
    else:
        # The disk space or inode interval is optional, but it should
        # have a correct value once it is specified
        intervalRe = re.match('([^,]*),?\s*(.*)', interval)
        if intervalRe:
            intervalSpace = intervalRe.group(1)
            if intervalSpace:
                intervalSpace = convertGMK(intervalSpace)
                if not intervalSpace:
                    printErr("Invalid disk space interval value in BB_DISKMON_WARNINTERVAL: %s" % intervalRe.group(1))
                    return None, None
            else:
                intervalSpace = spaceDefault
            intervalInode = intervalRe.group(2)
            if intervalInode:
                intervalInode = convertGMK(intervalInode)
                if not intervalInode:
                    printErr("Invalid disk inode interval value in BB_DISKMON_WARNINTERVAL: %s" % intervalRe.group(2))
                    return None, None
            else:
                intervalInode = inodeDefault
            return intervalSpace, intervalInode
        else:
            printErr("Invalid interval value in BB_DISKMON_WARNINTERVAL: %s" % interval)
            return None, None

class diskMonitor:

    """Prepare the disk space monitor data"""

    def __init__(self, configuration):

        self.enableMonitor = False

        BBDirs = configuration.getVar("BB_DISKMON_DIRS", True) or None
        if BBDirs:
            self.devDict = getDiskData(BBDirs, configuration)
            if self.devDict:
                self.spaceInterval, self.inodeInterval = getInterval(configuration)
                if self.spaceInterval and self.inodeInterval:
                    self.enableMonitor = True
                    # These save the previous free disk space and inode counts;
                    # we use them to avoid printing too many warning messages
                    self.preFreeS = {}
                    self.preFreeI = {}
                    # This is for STOPTASKS and ABORT, to avoid printing the
                    # message repeatedly while waiting for the tasks to finish
                    self.checked = {}
                    for dev in self.devDict:
                        self.preFreeS[dev] = 0
                        self.preFreeI[dev] = 0
                        self.checked[dev] = False
                if self.spaceInterval is None and self.inodeInterval is None:
                    self.enableMonitor = False

    def check(self, rq):

        """ Take action for the monitor """

        if self.enableMonitor:
            for dev in self.devDict:
                st = os.statvfs(self.devDict[dev][1])

                # The free space, in bytes
                freeSpace = st.f_bavail * st.f_frsize

                if self.devDict[dev][2] and freeSpace < self.devDict[dev][2]:
                    # Always show the warning; self.checked stays False when the action is WARN
                    if self.preFreeS[dev] == 0 or self.preFreeS[dev] - freeSpace > self.spaceInterval and not self.checked[dev]:
                        logger.warn("The free space of %s is running low (%.3fGB left)" % (dev, freeSpace / 1024 / 1024 / 1024.0))
                        self.preFreeS[dev] = freeSpace

                    if self.devDict[dev][0] == "STOPTASKS" and not self.checked[dev]:
                        logger.error("No new tasks can be executed since the disk space monitor action is \"STOPTASKS\"!")
                        self.checked[dev] = True
                        rq.finish_runqueue(False)
                    elif self.devDict[dev][0] == "ABORT" and not self.checked[dev]:
                        logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
                        self.checked[dev] = True
                        rq.finish_runqueue(True)

                # The free inodes
                freeInode = st.f_favail

                if self.devDict[dev][3] and freeInode < self.devDict[dev][3]:
                    # Always show the warning; self.checked stays False when the action is WARN
                    if self.preFreeI[dev] == 0 or self.preFreeI[dev] - freeInode > self.inodeInterval and not self.checked[dev]:
                        logger.warn("The free inode of %s is running low (%.3fK left)" % (dev, freeInode / 1024.0))
                        self.preFreeI[dev] = freeInode

                    if self.devDict[dev][0] == "STOPTASKS" and not self.checked[dev]:
                        logger.error("No new tasks can be executed since the disk space monitor action is \"STOPTASKS\"!")
                        self.checked[dev] = True
                        rq.finish_runqueue(False)
                    elif self.devDict[dev][0] == "ABORT" and not self.checked[dev]:
                        logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
                        self.checked[dev] = True
                        rq.finish_runqueue(True)
        return
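
For reference, hypothetical configuration values in the format getDiskData() and getInterval() parse: space-separated "action,dir,space,inode" tuples, and a "space,inode" warning interval:

    BB_DISKMON_DIRS = "ABORT,${TMPDIR},1G,100K WARN,${SSTATE_DIR},1G,100K"
    BB_DISKMON_WARNINTERVAL = "50M,5K"
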
@@ -22,125 +22,104 @@ Message handling infrastructure for bitbake
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import sys
import logging
import collections
from itertools import groupby
import warnings
import bb
import bb.event
import sys, bb
from bb import event

class BBLogFormatter(logging.Formatter):
    """Formatter which ensures that our 'plain' messages (logging.INFO + 1) are used as is"""
    debug_level = {}

    DEBUG3 = logging.DEBUG - 2
    DEBUG2 = logging.DEBUG - 1
    DEBUG = logging.DEBUG
    VERBOSE = logging.INFO - 1
    NOTE = logging.INFO
    PLAIN = logging.INFO + 1
    ERROR = logging.ERROR
    WARNING = logging.WARNING
    CRITICAL = logging.CRITICAL
    verbose = False

    levelnames = {
        DEBUG3 : 'DEBUG',
        DEBUG2 : 'DEBUG',
        DEBUG : 'DEBUG',
        VERBOSE: 'NOTE',
        NOTE : 'NOTE',
        PLAIN : '',
        WARNING : 'WARNING',
        ERROR : 'ERROR',
        CRITICAL: 'ERROR',
    }

    def getLevelName(self, levelno):
        try:
            return self.levelnames[levelno]
        except KeyError:
            self.levelnames[levelno] = value = 'Level %d' % levelno
            return value

    def format(self, record):
        record.levelname = self.getLevelName(record.levelno)
        if record.levelno == self.PLAIN:
            msg = record.getMessage()
        else:
            msg = logging.Formatter.format(self, record)

        if hasattr(record, 'bb_exc_info'):
            etype, value, tb = record.bb_exc_info
            formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
            msg += '\n' + ''.join(formatted)
        return msg

class BBLogFilter(object):
    def __init__(self, handler, level, debug_domains):
        self.stdlevel = level
        self.debug_domains = debug_domains
        loglevel = level
        for domain in debug_domains:
            if debug_domains[domain] < loglevel:
                loglevel = debug_domains[domain]
        handler.setLevel(loglevel)
        handler.addFilter(self)

    def filter(self, record):
        if record.levelno >= self.stdlevel:
            return True
        if record.name in self.debug_domains and record.levelno >= self.debug_domains[record.name]:
            return True
        return False
domain = bb.utils.Enum(
    'Build',
    'Cache',
    'Collection',
    'Data',
    'Depends',
    'Fetcher',
    'Parsing',
    'PersistData',
    'Provider',
    'RunQueue',
    'TaskData',
    'Util')


class MsgBase(bb.event.Event):
    """Base class for messages"""

    def __init__(self, msg):
        self._message = msg
        event.Event.__init__(self)

class MsgDebug(MsgBase):
    """Debug Message"""

class MsgNote(MsgBase):
    """Note Message"""

class MsgWarn(MsgBase):
    """Warning Message"""

class MsgError(MsgBase):
    """Error Message"""

class MsgFatal(MsgBase):
    """Fatal Message"""

class MsgPlain(MsgBase):
    """General output"""

#
# Message control functions
#

loggerDefaultDebugLevel = 0
loggerDefaultVerbose = False
loggerVerboseLogs = False
loggerDefaultDomains = []
def set_debug_level(level):
    bb.msg.debug_level = {}
    for domain in bb.msg.domain:
        bb.msg.debug_level[domain] = level
    bb.msg.debug_level['default'] = level

def init_msgconfig(verbose, debug, debug_domains = []):
    """
    Set the default verbosity and debug levels and configure the logger
    """
    bb.msg.loggerDefaultDebugLevel = debug
    bb.msg.loggerDefaultVerbose = verbose
    if verbose:
        bb.msg.loggerVerboseLogs = True
    bb.msg.loggerDefaultDomains = debug_domains
def set_verbose(level):
    bb.msg.verbose = level

def addDefaultlogFilter(handler):

    debug = loggerDefaultDebugLevel
    verbose = loggerDefaultVerbose
    domains = loggerDefaultDomains

    if debug:
        level = BBLogFormatter.DEBUG - debug + 1
    elif verbose:
        level = BBLogFormatter.VERBOSE
    else:
        level = BBLogFormatter.NOTE

    debug_domains = {}
    for (domainarg, iterator) in groupby(domains):
        dlevel = len(tuple(iterator))
        debug_domains["BitBake.%s" % domainarg] = logging.DEBUG - dlevel + 1

    BBLogFilter(handler, level, debug_domains)
def set_debug_domains(domains):
    for domain in domains:
        found = False
        for ddomain in bb.msg.domain:
            if domain == str(ddomain):
                bb.msg.debug_level[ddomain] = bb.msg.debug_level[ddomain] + 1
                found = True
        if not found:
            bb.msg.warn(None, "Logging domain %s is not valid, ignoring" % domain)

#
# Message handling functions
#

def fatal(msgdomain, msg):
    if msgdomain:
        logger = logging.getLogger("BitBake.%s" % msgdomain)
    else:
        logger = logging.getLogger("BitBake")
    logger.critical(msg)
def debug(level, domain, msg, fn = None):
    if not domain:
        domain = 'default'
    if debug_level[domain] >= level:
        bb.event.fire(MsgDebug(msg), None)

def note(level, domain, msg, fn = None):
    if not domain:
        domain = 'default'
    if level == 1 or verbose or debug_level[domain] >= 1:
        bb.event.fire(MsgNote(msg), None)

def warn(domain, msg, fn = None):
    bb.event.fire(MsgWarn(msg), None)

def error(domain, msg, fn = None):
    bb.event.fire(MsgError(msg), None)
    print 'ERROR: ' + msg

def fatal(domain, msg, fn = None):
    bb.event.fire(MsgFatal(msg), None)
    print 'FATAL: ' + msg
    sys.exit(1)

def plain(msg, fn = None):
    bb.event.fire(MsgPlain(msg), None)

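
A minimal sketch of wiring the formatter and filter to a stream handler, as a front-end might; the format string and output stream are illustrative choices:

    import sys
    import logging
    import bb.msg

    console = logging.StreamHandler(sys.stdout)
    console.setFormatter(bb.msg.BBLogFormatter("%(levelname)s: %(message)s"))
    bb.msg.addDefaultlogFilter(console)  # attaches a BBLogFilter and sets the level
    logging.getLogger("BitBake").addHandler(console)
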
@@ -1,255 +0,0 @@
# http://code.activestate.com/recipes/577629-namedtupleabc-abstract-base-class-mix-in-for-named/
#!/usr/bin/env python
# Copyright (c) 2011 Jan Kaliszewski (zuo). Available under the MIT License.

"""
namedtuple_with_abc.py:
* named tuple mix-in + ABC (abstract base class) recipe,
* works under Python 2.6, 2.7 as well as 3.x.

Import this module to patch collections.namedtuple() factory function
-- enriching it with the 'abc' attribute (an abstract base class + mix-in
for named tuples) and decorating it with a wrapper that registers each
newly created named tuple as a subclass of namedtuple.abc.

How to import:
    import collections, namedtuple_with_abc
or:
    import namedtuple_with_abc
    from collections import namedtuple
    # ^ in this variant you must import namedtuple function
    #   *after* importing namedtuple_with_abc module
or simply:
    from namedtuple_with_abc import namedtuple

Simple usage example:
    class Credentials(namedtuple.abc):
        _fields = 'username password'
        def __str__(self):
            return ('{0.__class__.__name__}'
                    '(username={0.username}, password=...)'.format(self))
    print(Credentials("alice", "Alice's password"))

For more advanced examples -- see below the "if __name__ == '__main__':".
"""

import collections
from abc import ABCMeta, abstractproperty
from functools import wraps
from sys import version_info

__all__ = ('namedtuple',)
_namedtuple = collections.namedtuple


class _NamedTupleABCMeta(ABCMeta):
    '''The metaclass for the abstract base class + mix-in for named tuples.'''
    def __new__(mcls, name, bases, namespace):
        fields = namespace.get('_fields')
        for base in bases:
            if fields is not None:
                break
            fields = getattr(base, '_fields', None)
        if not isinstance(fields, abstractproperty):
            basetuple = _namedtuple(name, fields)
            bases = (basetuple,) + bases
            namespace.pop('_fields', None)
            namespace.setdefault('__doc__', basetuple.__doc__)
            namespace.setdefault('__slots__', ())
        return ABCMeta.__new__(mcls, name, bases, namespace)


exec(
    # Python 2.x metaclass declaration syntax
    """class _NamedTupleABC(object):
        '''The abstract base class + mix-in for named tuples.'''
        __metaclass__ = _NamedTupleABCMeta
        _fields = abstractproperty()""" if version_info[0] < 3 else
    # Python 3.x metaclass declaration syntax
    """class _NamedTupleABC(metaclass=_NamedTupleABCMeta):
        '''The abstract base class + mix-in for named tuples.'''
        _fields = abstractproperty()"""
)


_namedtuple.abc = _NamedTupleABC
#_NamedTupleABC.register(type(version_info))  # (and similar, in the future...)

@wraps(_namedtuple)
def namedtuple(*args, **kwargs):
    '''Named tuple factory with namedtuple.abc subclass registration.'''
    cls = _namedtuple(*args, **kwargs)
    _NamedTupleABC.register(cls)
    return cls

collections.namedtuple = namedtuple




if __name__ == '__main__':

    '''Examples and explanations'''

    # Simple usage

    class MyRecord(namedtuple.abc):
        _fields = 'x y z'  # such form will be transformed into ('x', 'y', 'z')
        def _my_custom_method(self):
            return list(self._asdict().items())
    # (the '_fields' attribute belongs to the named tuple public API anyway)

    rec = MyRecord(1, 2, 3)
    print(rec)
    print(rec._my_custom_method())
    print(rec._replace(y=222))
    print(rec._replace(y=222)._my_custom_method())

    # Custom abstract classes...

    class MyAbstractRecord(namedtuple.abc):
        def _my_custom_method(self):
            return list(self._asdict().items())

    try:
        MyAbstractRecord()  # (abstract classes cannot be instantiated)
    except TypeError as exc:
        print(exc)

    class AnotherAbstractRecord(MyAbstractRecord):
        def __str__(self):
            return '<<<{0}>>>'.format(super(AnotherAbstractRecord,
                                            self).__str__())

    # ...and their non-abstract subclasses

    class MyRecord2(MyAbstractRecord):
        _fields = 'a, b'

    class MyRecord3(AnotherAbstractRecord):
        _fields = 'p', 'q', 'r'

    rec2 = MyRecord2('foo', 'bar')
    print(rec2)
    print(rec2._my_custom_method())
    print(rec2._replace(b=222))
    print(rec2._replace(b=222)._my_custom_method())

    rec3 = MyRecord3('foo', 'bar', 'baz')
    print(rec3)
    print(rec3._my_custom_method())
    print(rec3._replace(q=222))
    print(rec3._replace(q=222)._my_custom_method())

    # You can also subclass non-abstract ones...

    class MyRecord33(MyRecord3):
        def __str__(self):
            return '< {0!r}, ..., {1!r} >'.format(self.p, self.r)

    rec33 = MyRecord33('foo', 'bar', 'baz')
    print(rec33)
    print(rec33._my_custom_method())
    print(rec33._replace(q=222))
    print(rec33._replace(q=222)._my_custom_method())

    # ...and even override the magic '_fields' attribute again

    class MyRecord345(MyRecord3):
        _fields = 'e f g h i j k'

    rec345 = MyRecord345(1, 2, 3, 4, 3, 2, 1)
    print(rec345)
    print(rec345._my_custom_method())
    print(rec345._replace(f=222))
    print(rec345._replace(f=222)._my_custom_method())

    # Mixing-in some other classes is also possible:

    class MyMixIn(object):
        def method(self):
            return "MyMixIn.method() called"
        def _my_custom_method(self):
            return "MyMixIn._my_custom_method() called"
        def count(self, item):
            return "MyMixIn.count({0}) called".format(item)
        def _asdict(self):  # (cannot override a namedtuple method, see below)
            return "MyMixIn._asdict() called"

    class MyRecord4(MyRecord33, MyMixIn):  # mix-in on the right
        _fields = 'j k l x'

    class MyRecord5(MyMixIn, MyRecord33):  # mix-in on the left
        _fields = 'j k l x y'

    rec4 = MyRecord4(1, 2, 3, 2)
    print(rec4)
    print(rec4.method())
    print(rec4._my_custom_method())  # MyRecord33's
    print(rec4.count(2))  # tuple's
    print(rec4._replace(k=222))
    print(rec4._replace(k=222).method())
    print(rec4._replace(k=222)._my_custom_method())  # MyRecord33's
    print(rec4._replace(k=222).count(8))  # tuple's

    rec5 = MyRecord5(1, 2, 3, 2, 1)
    print(rec5)
    print(rec5.method())
    print(rec5._my_custom_method())  # MyMixIn's
    print(rec5.count(2))  # MyMixIn's
    print(rec5._replace(k=222))
    print(rec5._replace(k=222).method())
    print(rec5._replace(k=222)._my_custom_method())  # MyMixIn's
    print(rec5._replace(k=222).count(2))  # MyMixIn's

    # Note that the standard namedtuple methods cannot be
    # overridden by a foreign mix-in -- even if the mix-in is declared
    # as the leftmost base class (but, obviously, you can override them
    # in the defined class or its subclasses):

    print(rec4._asdict())  # (returns a dict, not "MyMixIn._asdict() called")
    print(rec5._asdict())  # (returns a dict, not "MyMixIn._asdict() called")

    class MyRecord6(MyRecord33):
        _fields = 'j k l x y z'
        def _asdict(self):
            return "MyRecord6._asdict() called"
    rec6 = MyRecord6(1, 2, 3, 1, 2, 3)
    print(rec6._asdict())  # (this returns "MyRecord6._asdict() called")

    # All that record classes are real subclasses of namedtuple.abc:

    assert issubclass(MyRecord, namedtuple.abc)
    assert issubclass(MyAbstractRecord, namedtuple.abc)
    assert issubclass(AnotherAbstractRecord, namedtuple.abc)
    assert issubclass(MyRecord2, namedtuple.abc)
    assert issubclass(MyRecord3, namedtuple.abc)
    assert issubclass(MyRecord33, namedtuple.abc)
    assert issubclass(MyRecord345, namedtuple.abc)
    assert issubclass(MyRecord4, namedtuple.abc)
    assert issubclass(MyRecord5, namedtuple.abc)
    assert issubclass(MyRecord6, namedtuple.abc)

    # ...but abstract ones are not subclasses of tuple
    # (and this is what you probably want):

    assert not issubclass(MyAbstractRecord, tuple)
    assert not issubclass(AnotherAbstractRecord, tuple)

    assert issubclass(MyRecord, tuple)
    assert issubclass(MyRecord2, tuple)
    assert issubclass(MyRecord3, tuple)
    assert issubclass(MyRecord33, tuple)
    assert issubclass(MyRecord345, tuple)
    assert issubclass(MyRecord4, tuple)
    assert issubclass(MyRecord5, tuple)
    assert issubclass(MyRecord6, tuple)

    # Named tuple classes created with namedtuple() factory function
    # (in the "traditional" way) are registered as "virtual" subclasses
    # of namedtuple.abc:

    MyTuple = namedtuple('MyTuple', 'a b c')
    mt = MyTuple(1, 2, 3)
    assert issubclass(MyTuple, namedtuple.abc)
    assert isinstance(mt, namedtuple.abc)
@@ -24,58 +24,42 @@ File parsers for the BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

__all__ = [ 'ParseError', 'SkipPackage', 'cached_mtime', 'mark_dependency',
            'supports', 'handle', 'init' ]
handlers = []

import os
import stat
import logging
import bb
import bb.utils
import bb.siggen

logger = logging.getLogger("BitBake.Parsing")
import bb, os

class ParseError(Exception):
    """Exception raised when parsing fails"""
    def __init__(self, msg, filename, lineno=0):
        self.msg = msg
        self.filename = filename
        self.lineno = lineno
        Exception.__init__(self, msg, filename, lineno)

    def __str__(self):
        if self.lineno:
            return "ParseError at %s:%d: %s" % (self.filename, self.lineno, self.msg)
        else:
            return "ParseError in %s: %s" % (self.filename, self.msg)

class SkipPackage(Exception):
    """Exception raised to skip this package"""

__mtime_cache = {}
def cached_mtime(f):
    if f not in __mtime_cache:
        __mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
    if not __mtime_cache.has_key(f):
        __mtime_cache[f] = os.stat(f)[8]
    return __mtime_cache[f]

def cached_mtime_noerror(f):
    if f not in __mtime_cache:
    if not __mtime_cache.has_key(f):
        try:
            __mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
            __mtime_cache[f] = os.stat(f)[8]
        except OSError:
            return 0
    return __mtime_cache[f]

def update_mtime(f):
    __mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
    __mtime_cache[f] = os.stat(f)[8]
    return __mtime_cache[f]

def mark_dependency(d, f):
    if f.startswith('./'):
        f = "%s/%s" % (os.getcwd(), f[2:])
    deps = d.getVar('__depends') or set()
    deps.update([(f, cached_mtime(f))])
    d.setVar('__depends', deps)
    deps = bb.data.getVar('__depends', d) or []
    deps.append( (f, cached_mtime(f)) )
    bb.data.setVar('__depends', deps, d)

def supports(fn, data):
    """Returns true if we have a handler for this file, false otherwise"""
@@ -89,25 +73,20 @@ def handle(fn, data, include = 0):
    for h in handlers:
        if h['supports'](fn, data):
            return h['handle'](fn, data, include)
    raise ParseError("not a BitBake file", fn)
    raise ParseError("%s is not a BitBake file" % fn)

def init(fn, data):
    for h in handlers:
        if h['supports'](fn):
            return h['init'](data)

def init_parser(d):
    bb.parse.siggen = bb.siggen.init(d)

def resolve_file(fn, d):
    if not os.path.isabs(fn):
        bbpath = d.getVar("BBPATH", True)
        newfn = bb.utils.which(bbpath, fn)
        if not newfn:
            raise IOError("file %s not found in %s" % (fn, bbpath))
        fn = newfn
        fn = bb.which(bb.data.getVar("BBPATH", d, 1), fn)
        if not fn:
            raise IOError("file %s not found" % fn)

    logger.debug(2, "LOAD %s", fn)
    bb.msg.debug(2, bb.msg.domain.Parsing, "LOAD %s" % fn)
    return fn

# Used by OpenEmbedded metadata
@@ -122,7 +101,7 @@ def vars_from_file(mypkg, d):
        parts = myfile[0].split('_')
        __pkgsplit_cache__[mypkg] = parts
        if len(parts) > 3:
            raise ParseError("Unable to generate default variables from filename (too many underscores)", mypkg)
            raise ParseError("Unable to generate default variables from the filename: %s (too many underscores)" % mypkg)
        exp = 3 - len(parts)
        tmplist = []
        while exp != 0:
@@ -131,13 +110,4 @@ def vars_from_file(mypkg, d):
    parts.extend(tmplist)
    return parts

def get_file_depends(d):
    '''Return the dependent files'''
    dep_files = []
    depends = d.getVar('__depends', True) or set()
    depends = depends.union(d.getVar('__base_depends', True) or set())
    for (fn, _) in depends:
        dep_files.append(os.path.abspath(fn))
    return " ".join(dep_files)

from bb.parse.parse_py import __version__, ConfHandler, BBHandler
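The vars_from_file() hunks above encode BitBake's recipe filename convention: the basename splits on underscores into at most three fields (name, version, revision), padded with None. A standalone restatement of just that split/pad logic, with made-up example filenames (an illustration, not the BitBake function itself):

import os

def split_recipe_name(path):
    # mirrors the split/pad in vars_from_file(): name_version_revision
    base = os.path.splitext(os.path.basename(path))[0]
    parts = base.split('_')
    if len(parts) > 3:
        raise ValueError("too many underscores in %s" % path)
    return parts + [None] * (3 - len(parts))

print(split_recipe_name("ncurses_5.7.bb"))        # ['ncurses', '5.7', None]
print(split_recipe_name("busybox_1.18.4_r0.bb"))  # ['busybox', '1.18.4', 'r0']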
@@ -21,55 +21,46 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

from __future__ import absolute_import
from future_builtins import filter
import re
import string
import logging
import bb
import itertools
from bb import methodpool
from bb.parse import logger
import bb, re, string
from itertools import chain

__word__ = re.compile(r"\S+")
__parsed_methods__ = bb.methodpool.get_parsed_dict()
_bbversions_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")

class StatementGroup(list):
    def eval(self, data):
        for statement in self:
            statement.eval(data)
        map(lambda x: x.eval(data), self)

class AstNode(object):
    def __init__(self, filename, lineno):
        self.filename = filename
        self.lineno = lineno
    pass

class IncludeNode(AstNode):
    def __init__(self, filename, lineno, what_file, force):
        AstNode.__init__(self, filename, lineno)
    def __init__(self, what_file, fn, lineno, force):
        self.what_file = what_file
        self.from_fn = fn
        self.from_lineno = lineno
        self.force = force

    def eval(self, data):
        """
        Include the file and evaluate the statements
        """
        s = data.expand(self.what_file)
        logger.debug(2, "CONF %s:%s: including %s", self.filename, self.lineno, s)
        s = bb.data.expand(self.what_file, data)
        bb.msg.debug(3, bb.msg.domain.Parsing, "CONF %s:%d: including %s" % (self.from_fn, self.from_lineno, s))

        # TODO: Cache those includes... maybe not here though
        if self.force:
            bb.parse.ConfHandler.include(self.filename, s, self.lineno, data, "include required")
            bb.parse.ConfHandler.include(self.from_fn, s, data, "include required")
        else:
            bb.parse.ConfHandler.include(self.filename, s, self.lineno, data, False)
            bb.parse.ConfHandler.include(self.from_fn, s, data, False)

class ExportNode(AstNode):
    def __init__(self, filename, lineno, var):
        AstNode.__init__(self, filename, lineno)
    def __init__(self, var):
        self.var = var

    def eval(self, data):
        data.setVarFlag(self.var, "export", 1)
        bb.data.setVarFlag(self.var, "export", 1, data)

class DataNode(AstNode):
    """
@@ -78,21 +69,20 @@ class DataNode(AstNode):
    this need to be re-evaluated... we might be able to do
    that faster with multiple classes.
    """
    def __init__(self, filename, lineno, groupd):
        AstNode.__init__(self, filename, lineno)
    def __init__(self, groupd):
        self.groupd = groupd

    def getFunc(self, key, data):
        if 'flag' in self.groupd and self.groupd['flag'] != None:
            return data.getVarFlag(key, self.groupd['flag'], noweakdefault=True)
            return bb.data.getVarFlag(key, self.groupd['flag'], data)
        else:
            return data.getVar(key, noweakdefault=True)
            return bb.data.getVar(key, data)

    def eval(self, data):
        groupd = self.groupd
        key = groupd["var"]
        if "exp" in groupd and groupd["exp"] != None:
            data.setVarFlag(key, "export", 1)
            bb.data.setVarFlag(key, "export", 1, data)
        if "ques" in groupd and groupd["ques"] != None:
            val = self.getFunc(key, data)
            if val == None:
@@ -100,7 +90,7 @@ class DataNode(AstNode):
        elif "colon" in groupd and groupd["colon"] != None:
            e = data.createCopy()
            bb.data.update_data(e)
            val = e.expand(groupd["value"], key + "[:=]")
            val = bb.data.expand(groupd["value"], e)
        elif "append" in groupd and groupd["append"] != None:
            val = "%s %s" % ((self.getFunc(key, data) or ""), groupd["value"])
        elif "prepend" in groupd and groupd["prepend"] != None:
@@ -113,74 +103,73 @@ class DataNode(AstNode):
            val = groupd["value"]

        if 'flag' in groupd and groupd['flag'] != None:
            data.setVarFlag(key, groupd['flag'], val)
            bb.msg.debug(3, bb.msg.domain.Parsing, "setVarFlag(%s, %s, %s, data)" % (key, groupd['flag'], val))
            bb.data.setVarFlag(key, groupd['flag'], val, data)
        elif groupd["lazyques"]:
            data.setVarFlag(key, "defaultval", val)
            assigned = bb.data.getVar("__lazy_assigned", data) or []
            assigned.append(key)
            bb.data.setVar("__lazy_assigned", assigned, data)
            bb.data.setVarFlag(key, "defaultval", val, data)
        else:
            data.setVar(key, val)
            bb.data.setVar(key, val, data)

class MethodNode(AstNode):
    def __init__(self, filename, lineno, func_name, body):
        AstNode.__init__(self, filename, lineno)
class MethodNode:
    def __init__(self, func_name, body, lineno, fn):
        self.func_name = func_name
        self.body = body
        self.fn = fn
        self.lineno = lineno

    def eval(self, data):
        if self.func_name == "__anonymous":
            funcname = ("__anon_%s_%s" % (self.lineno, self.filename.translate(string.maketrans('/.+-', '____'))))
            funcname = ("__anon_%s_%s" % (self.lineno, self.fn.translate(string.maketrans('/.+-', '____'))))
            if not funcname in bb.methodpool._parsed_fns:
                text = "def %s(d):\n" % (funcname) + '\n'.join(self.body)
                bb.methodpool.insert_method(funcname, text, self.filename)
            anonfuncs = data.getVar('__BBANONFUNCS') or []
                bb.methodpool.insert_method(funcname, text, self.fn)
            anonfuncs = bb.data.getVar('__BBANONFUNCS', data) or []
            anonfuncs.append(funcname)
            data.setVar('__BBANONFUNCS', anonfuncs)
            bb.data.setVar('__BBANONFUNCS', anonfuncs, data)
        else:
            data.setVarFlag(self.func_name, "func", 1)
            data.setVar(self.func_name, '\n'.join(self.body))
            bb.data.setVarFlag(self.func_name, "func", 1, data)
            bb.data.setVar(self.func_name, '\n'.join(self.body), data)

class PythonMethodNode(AstNode):
    def __init__(self, filename, lineno, function, define, body):
        AstNode.__init__(self, filename, lineno)
        self.function = function
        self.define = define
    def __init__(self, root, body, fn):
        self.root = root
        self.body = body
        self.fn = fn

    def eval(self, data):
        # Note we will add root to parsedmethods after having parse
        # 'this' file. This means we will not parse methods from
        # bb classes twice
        text = '\n'.join(self.body)
        if not bb.methodpool.parsed_module(self.define):
            bb.methodpool.insert_method(self.define, text, self.filename)
        data.setVarFlag(self.function, "func", 1)
        data.setVarFlag(self.function, "python", 1)
        data.setVar(self.function, text)
        if not self.root in __parsed_methods__:
            text = '\n'.join(self.body)
            bb.methodpool.insert_method(self.root, text, self.fn)

class MethodFlagsNode(AstNode):
    def __init__(self, filename, lineno, key, m):
        AstNode.__init__(self, filename, lineno)
    def __init__(self, key, m):
        self.key = key
        self.m = m

    def eval(self, data):
        if data.getVar(self.key):
        if bb.data.getVar(self.key, data):
            # clean up old version of this piece of metadata, as its
            # flags could cause problems
            data.setVarFlag(self.key, 'python', None)
            data.setVarFlag(self.key, 'fakeroot', None)
            bb.data.setVarFlag(self.key, 'python', None, data)
            bb.data.setVarFlag(self.key, 'fakeroot', None, data)
        if self.m.group("py") is not None:
            data.setVarFlag(self.key, "python", "1")
            bb.data.setVarFlag(self.key, "python", "1", data)
        else:
            data.delVarFlag(self.key, "python")
            bb.data.delVarFlag(self.key, "python", data)
        if self.m.group("fr") is not None:
            data.setVarFlag(self.key, "fakeroot", "1")
            bb.data.setVarFlag(self.key, "fakeroot", "1", data)
        else:
            data.delVarFlag(self.key, "fakeroot")
            bb.data.delVarFlag(self.key, "fakeroot", data)

class ExportFuncsNode(AstNode):
    def __init__(self, filename, lineno, fns, classes):
        AstNode.__init__(self, filename, lineno)
        self.n = fns.split()
    def __init__(self, fns, classes):
        self.n = __word__.findall(fns)
        self.classes = classes

    def eval(self, data):
@@ -197,29 +186,28 @@ class ExportFuncsNode(AstNode):
                vars.append([allvars[0], allvars[2]])

        for (var, calledvar) in vars:
            if data.getVar(var) and not data.getVarFlag(var, 'export_func'):
            if bb.data.getVar(var, data) and not bb.data.getVarFlag(var, 'export_func', data):
                continue

            if data.getVar(var):
                data.setVarFlag(var, 'python', None)
                data.setVarFlag(var, 'func', None)
            if bb.data.getVar(var, data):
                bb.data.setVarFlag(var, 'python', None, data)
                bb.data.setVarFlag(var, 'func', None, data)

            for flag in [ "func", "python" ]:
                if data.getVarFlag(calledvar, flag):
                    data.setVarFlag(var, flag, data.getVarFlag(calledvar, flag))
                if bb.data.getVarFlag(calledvar, flag, data):
                    bb.data.setVarFlag(var, flag, bb.data.getVarFlag(calledvar, flag, data), data)
            for flag in [ "dirs" ]:
                if data.getVarFlag(var, flag):
                    data.setVarFlag(calledvar, flag, data.getVarFlag(var, flag))
                if bb.data.getVarFlag(var, flag, data):
                    bb.data.setVarFlag(calledvar, flag, bb.data.getVarFlag(var, flag, data), data)

            if data.getVarFlag(calledvar, "python"):
                data.setVar(var, "\tbb.build.exec_func('" + calledvar + "', d)\n")
            if bb.data.getVarFlag(calledvar, "python", data):
                bb.data.setVar(var, "\tbb.build.exec_func('" + calledvar + "', d)\n", data)
            else:
                data.setVar(var, "\t" + calledvar + "\n")
            data.setVarFlag(var, 'export_func', '1')
                bb.data.setVar(var, "\t" + calledvar + "\n", data)
            bb.data.setVarFlag(var, 'export_func', '1', data)

class AddTaskNode(AstNode):
    def __init__(self, filename, lineno, func, before, after):
        AstNode.__init__(self, filename, lineno)
    def __init__(self, func, before, after):
        self.func = func
        self.before = before
        self.after = after
@@ -229,107 +217,124 @@ class AddTaskNode(AstNode):
        if self.func[:3] != "do_":
            var = "do_" + self.func

        data.setVarFlag(var, "task", 1)
        bbtasks = data.getVar('__BBTASKS') or []
        bb.data.setVarFlag(var, "task", 1, data)
        bbtasks = bb.data.getVar('__BBTASKS', data) or []
        if not var in bbtasks:
            bbtasks.append(var)
        data.setVar('__BBTASKS', bbtasks)
        bb.data.setVar('__BBTASKS', bbtasks, data)

        existing = data.getVarFlag(var, "deps") or []
        existing = bb.data.getVarFlag(var, "deps", data) or []
        if self.after is not None:
            # set up deps for function
            for entry in self.after.split():
                if entry not in existing:
                    existing.append(entry)
            data.setVarFlag(var, "deps", existing)
            bb.data.setVarFlag(var, "deps", existing, data)
        if self.before is not None:
            # set up things that depend on this func
            for entry in self.before.split():
                existing = data.getVarFlag(entry, "deps") or []
                existing = bb.data.getVarFlag(entry, "deps", data) or []
                if var not in existing:
                    data.setVarFlag(entry, "deps", [var] + existing)
                    bb.data.setVarFlag(entry, "deps", [var] + existing, data)

class BBHandlerNode(AstNode):
    def __init__(self, filename, lineno, fns):
        AstNode.__init__(self, filename, lineno)
        self.hs = fns.split()
    def __init__(self, fns):
        self.hs = __word__.findall(fns)

    def eval(self, data):
        bbhands = data.getVar('__BBHANDLERS') or []
        bbhands = bb.data.getVar('__BBHANDLERS', data) or []
        for h in self.hs:
            bbhands.append(h)
            data.setVarFlag(h, "handler", 1)
        data.setVar('__BBHANDLERS', bbhands)
            bb.data.setVarFlag(h, "handler", 1, data)
        bb.data.setVar('__BBHANDLERS', bbhands, data)

class InheritNode(AstNode):
    def __init__(self, filename, lineno, classes):
        AstNode.__init__(self, filename, lineno)
        self.classes = classes
    def __init__(self, files):
        self.n = __word__.findall(files)

    def eval(self, data):
        bb.parse.BBHandler.inherit(self.classes, self.filename, self.lineno, data)
        bb.parse.BBHandler.inherit(self.n, data)

def handleInclude(statements, m, fn, lineno, force):
    statements.append(IncludeNode(m.group(1), fn, lineno, force))

def handleInclude(statements, filename, lineno, m, force):
    statements.append(IncludeNode(filename, lineno, m.group(1), force))
def handleExport(statements, m):
    statements.append(ExportNode(m.group(1)))

def handleExport(statements, filename, lineno, m):
    statements.append(ExportNode(filename, lineno, m.group(1)))
def handleData(statements, groupd):
    statements.append(DataNode(groupd))

def handleData(statements, filename, lineno, groupd):
    statements.append(DataNode(filename, lineno, groupd))
def handleMethod(statements, func_name, lineno, fn, body):
    statements.append(MethodNode(func_name, body, lineno, fn))

def handleMethod(statements, filename, lineno, func_name, body):
    statements.append(MethodNode(filename, lineno, func_name, body))
def handlePythonMethod(statements, root, body, fn):
    statements.append(PythonMethodNode(root, body, fn))

def handlePythonMethod(statements, filename, lineno, funcname, root, body):
    statements.append(PythonMethodNode(filename, lineno, funcname, root, body))
def handleMethodFlags(statements, key, m):
    statements.append(MethodFlagsNode(key, m))

def handleMethodFlags(statements, filename, lineno, key, m):
    statements.append(MethodFlagsNode(filename, lineno, key, m))
def handleExportFuncs(statements, m, classes):
    statements.append(ExportFuncsNode(m.group(1), classes))

def handleExportFuncs(statements, filename, lineno, m, classes):
    statements.append(ExportFuncsNode(filename, lineno, m.group(1), classes))

def handleAddTask(statements, filename, lineno, m):
def handleAddTask(statements, m):
    func = m.group("func")
    before = m.group("before")
    after = m.group("after")
    if func is None:
        return

    statements.append(AddTaskNode(filename, lineno, func, before, after))
    statements.append(AddTaskNode(func, before, after))

def handleBBHandlers(statements, filename, lineno, m):
    statements.append(BBHandlerNode(filename, lineno, m.group(1)))
def handleBBHandlers(statements, m):
    statements.append(BBHandlerNode(m.group(1)))

def handleInherit(statements, filename, lineno, m):
    classes = m.group(1)
    statements.append(InheritNode(filename, lineno, classes))
def handleInherit(statements, m):
    files = m.group(1)
    n = __word__.findall(files)
    statements.append(InheritNode(m.group(1)))

def finalize(fn, d, variant = None):
    all_handlers = {}
    for var in d.getVar('__BBHANDLERS') or []:
        # try to add the handler
        handler = d.getVar(var)
        bb.event.register(var, handler)

    bb.event.fire(bb.event.RecipePreFinalise(fn), d)
def finalise(fn, d):
    for lazykey in bb.data.getVar("__lazy_assigned", d) or ():
        if bb.data.getVar(lazykey, d) is None:
            val = bb.data.getVarFlag(lazykey, "defaultval", d)
            bb.data.setVar(lazykey, val, d)

    bb.data.expandKeys(d)
    bb.data.update_data(d)
    code = []
    for funcname in d.getVar("__BBANONFUNCS") or []:
        code.append("%s(d)" % funcname)
    bb.utils.simple_exec("\n".join(code), {"d": d})
    anonqueue = bb.data.getVar("__anonqueue", d, 1) or []
    body = [x['content'] for x in anonqueue]
    flag = { 'python' : 1, 'func' : 1 }
    bb.data.setVar("__anonfunc", "\n".join(body), d)
    bb.data.setVarFlags("__anonfunc", flag, d)
    from bb import build
    try:
        t = bb.data.getVar('T', d)
        bb.data.setVar('T', '${TMPDIR}/anonfunc/', d)
        anonfuncs = bb.data.getVar('__BBANONFUNCS', d) or []
        code = ""
        for f in anonfuncs:
            code = code + " %s(d)\n" % f
        bb.data.setVar("__anonfunc", code, d)
        build.exec_func("__anonfunc", d)
        bb.data.delVar('T', d)
        if t:
            bb.data.setVar('T', t, d)
    except Exception, e:
        bb.msg.debug(1, bb.msg.domain.Parsing, "Exception when executing anonymous function: %s" % e)
        raise
    bb.data.delVar("__anonqueue", d)
    bb.data.delVar("__anonfunc", d)
    bb.data.update_data(d)

    tasklist = d.getVar('__BBTASKS') or []
    all_handlers = {}
    for var in bb.data.getVar('__BBHANDLERS', d) or []:
        # try to add the handler
        handler = bb.data.getVar(var,d)
        bb.event.register(var, handler)

    tasklist = bb.data.getVar('__BBTASKS', d) or []
    bb.build.add_tasks(tasklist, d)

    bb.parse.siggen.finalise(fn, d, variant)

    d.setVar('BBINCLUDED', bb.parse.get_file_depends(d))

    bb.event.fire(bb.event.RecipeParsed(fn), d)

def _create_variants(datastores, names, function):
@@ -355,7 +360,7 @@ def _expand_versions(versions):
    versions = iter(versions)
    while True:
        try:
            version = next(versions)
            version = versions.next()
        except StopIteration:
            break

@@ -365,22 +370,16 @@ def _expand_versions(versions):
        else:
            newversions = expand_one(version, int(range_ver.group("from")),
                                     int(range_ver.group("to")))
            versions = itertools.chain(newversions, versions)
            versions = chain(newversions, versions)
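Both versions of _expand_versions() above implement the BBVERSIONS range syntax matched by _bbversions_re: an entry such as "2.6.3[2-4]" expands into one version string per integer in the bracketed range. An eager standalone sketch of that expansion, handling one bracketed range per entry (the in-tree version is lazy, via the chain() calls above; the sample values here are assumed):

import re

_bbversions_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")

def expand_versions(versions):
    out = []
    for version in versions:
        m = _bbversions_re.search(version)
        if not m:
            out.append(version)
            continue
        # substitute each integer in the [from-to] range for the bracket
        for i in range(int(m.group("from")), int(m.group("to")) + 1):
            out.append(version[:m.start()] + str(i) + version[m.end():])
    return out

print(expand_versions(["2.6.31", "2.6.3[2-4]"]))
# ['2.6.31', '2.6.32', '2.6.33', '2.6.34']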
def multi_finalize(fn, d):
    appends = (d.getVar("__BBAPPEND", True) or "").split()
    for append in appends:
        logger.debug(2, "Appending .bbappend file %s to %s", append, fn)
        bb.parse.BBHandler.handle(append, d, True)

    onlyfinalise = d.getVar("__ONLYFINALISE", False)

    safe_d = d

    d = bb.data.createCopy(safe_d)
    try:
        finalize(fn, d)
    except bb.parse.SkipPackage as e:
        d.setVar("__SKIPPED", e.args[0])
        finalise(fn, d)
    except bb.parse.SkipPackage:
        bb.data.setVar("__SKIPPED", True, d)
    datastores = {"": safe_d}

    versions = (d.getVar("BBVERSIONS", True) or "").split()
@@ -421,52 +420,31 @@ def multi_finalize(fn, d):
        d = bb.data.createCopy(safe_d)
        verfunc(pv, d, safe_d)
        try:
            finalize(fn, d)
        except bb.parse.SkipPackage as e:
            d.setVar("__SKIPPED", e.args[0])
            finalise(fn, d)
        except bb.parse.SkipPackage:
            bb.data.setVar("__SKIPPED", True, d)

    _create_variants(datastores, versions, verfunc)

    extended = d.getVar("BBCLASSEXTEND", True) or ""
    if extended:
        # the following is to support bbextends with arguments, for e.g. multilib
        # an example is as follows:
        #   BBCLASSEXTEND = "multilib:lib32"
        # it will create foo-lib32, inheriting multilib.bbclass and set
        # BBEXTENDCURR to "multilib" and BBEXTENDVARIANT to "lib32"
        extendedmap = {}
        variantmap = {}

        for ext in extended.split():
            eext = ext.split(':', 2)
            if len(eext) > 1:
                extendedmap[ext] = eext[0]
                variantmap[ext] = eext[1]
            else:
                extendedmap[ext] = ext

        pn = d.getVar("PN", True)
        def extendfunc(name, d):
            if name != extendedmap[name]:
                d.setVar("BBEXTENDCURR", extendedmap[name])
                d.setVar("BBEXTENDVARIANT", variantmap[name])
            else:
                d.setVar("PN", "%s-%s" % (pn, name))
            bb.parse.BBHandler.inherit(extendedmap[name], fn, 0, d)
            d.setVar("PN", "%s-%s" % (pn, name))
            bb.parse.BBHandler.inherit([name], d)

        safe_d.setVar("BBCLASSEXTEND", extended)
        _create_variants(datastores, extendedmap.keys(), extendfunc)
        _create_variants(datastores, extended.split(), extendfunc)

    for variant, variant_d in datastores.iteritems():
    for variant, variant_d in datastores.items():
        if variant:
            try:
                if not onlyfinalise or variant in onlyfinalise:
                    finalize(fn, variant_d, variant)
            except bb.parse.SkipPackage as e:
                variant_d.setVar("__SKIPPED", e.args[0])
                finalise(fn, variant_d)
            except bb.parse.SkipPackage:
                bb.data.setVar("__SKIPPED", True, variant_d)

    if len(datastores) > 1:
        variants = filter(None, datastores.iterkeys())
        variants = filter(None, datastores.keys())
        safe_d.setVar("__VARIANTS", " ".join(variants))

    datastores[""] = d
@@ -11,7 +7,7 @@

# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2003, 2004 Phil Blundell
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
@@ -25,18 +25,15 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

from __future__ import absolute_import
import re, bb, os
import logging
import bb.build, bb.utils
from bb import data
import re, bb, os, sys, time, string
import bb.fetch, bb.build, bb.utils
from bb import data, fetch

from . import ConfHandler
from .. import resolve_file, ast, logger
from .ConfHandler import include, init
from ConfHandler import include, init
from bb.parse import ParseError, resolve_file, ast

# For compatibility
bb.deprecate_import(__name__, "bb.parse", ["vars_from_file"])
from bb.parse import vars_from_file

__func_start_regexp__ = re.compile( r"(((?P<py>python)|(?P<fr>fakeroot))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$" )
__inherit_regexp__ = re.compile( r"inherit\s+(.+)" )
@@ -65,34 +62,35 @@ IN_PYTHON_EOF = -9999999999999


def supports(fn, d):
    """Return True if fn has a supported extension"""
    return os.path.splitext(fn)[-1] in [".bb", ".bbclass", ".inc"]
    return fn[-3:] == ".bb" or fn[-8:] == ".bbclass" or fn[-4:] == ".inc"

def inherit(files, fn, lineno, d):
def inherit(files, d):
    __inherit_cache = data.getVar('__inherit_cache', d) or []
    files = d.expand(files).split()
    fn = ""
    lineno = 0
    files = data.expand(files, d)
    for file in files:
        if not os.path.isabs(file) and not file.endswith(".bbclass"):
        if file[0] != "/" and file[-8:] != ".bbclass":
            file = os.path.join('classes', '%s.bbclass' % file)

        if not file in __inherit_cache:
            logger.log(logging.DEBUG - 1, "BB %s:%d: inheriting %s", fn, lineno, file)
            bb.msg.debug(2, bb.msg.domain.Parsing, "BB %s:%d: inheriting %s" % (fn, lineno, file))
            __inherit_cache.append( file )
            data.setVar('__inherit_cache', __inherit_cache, d)
            include(fn, file, lineno, d, "inherit")
            include(fn, file, d, "inherit")
            __inherit_cache = data.getVar('__inherit_cache', d) or []

def get_statements(filename, absolute_filename, base_name):
def get_statements(filename, absolsute_filename, base_name):
    global cached_statements

    try:
        return cached_statements[absolute_filename]
        return cached_statements[absolsute_filename]
    except KeyError:
        file = open(absolute_filename, 'r')
        file = open(absolsute_filename, 'r')
        statements = ast.StatementGroup()

        lineno = 0
        while True:
        while 1:
            lineno = lineno + 1
            s = file.readline()
            if not s: break
@@ -103,7 +101,7 @@ def get_statements(filename, absolute_filename, base_name):
            feeder(IN_PYTHON_EOF, "", filename, base_name, statements)

        if filename.endswith(".bbclass") or filename.endswith(".inc"):
            cached_statements[absolute_filename] = statements
            cached_statements[absolsute_filename] = statements
        return statements

def handle(fn, d, include):
@@ -115,12 +113,12 @@ def handle(fn, d, include):


    if include == 0:
        logger.debug(2, "BB %s: handle(data)", fn)
        bb.msg.debug(2, bb.msg.domain.Parsing, "BB " + fn + ": handle(data)")
    else:
        logger.debug(2, "BB %s: handle(data, include)", fn)
        bb.msg.debug(2, bb.msg.domain.Parsing, "BB " + fn + ": handle(data, include)")

    base_name = os.path.basename(fn)
    (root, ext) = os.path.splitext(base_name)
    (root, ext) = os.path.splitext(os.path.basename(fn))
    base_name = "%s%s" % (root,ext)
    init(d)

    if ext == ".bbclass":
@@ -146,7 +144,7 @@ def handle(fn, d, include):

    # DONE WITH PARSING... time to evaluate
    if ext != ".bbclass":
        data.setVar('FILE', abs_fn, d)
        data.setVar('FILE', fn, d)

    statements.eval(d)

@@ -157,7 +155,7 @@ def handle(fn, d, include):
        return ast.multi_finalize(fn, d)

    if oldfile:
        d.setVar("FILE", oldfile)
        bb.data.setVar("FILE", oldfile, d)

    # we have parsed the bb class now
    if ext == ".bbclass" or ext == ".inc":
@@ -166,11 +164,11 @@ def handle(fn, d, include):
    return d

def feeder(lineno, s, fn, root, statements):
    global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__, __infunc__, __body__, classes, bb, __residue__
    global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__,__infunc__, __body__, classes, bb, __residue__
    if __infunc__:
        if s == '}':
            __body__.append('')
            ast.handleMethod(statements, fn, lineno, __infunc__, __body__)
            ast.handleMethod(statements, __infunc__, lineno, fn, __body__)
            __infunc__ = ""
            __body__ = []
        else:
@@ -183,69 +181,60 @@ def feeder(lineno, s, fn, root, statements):
            __body__.append(s)
            return
        else:
            ast.handlePythonMethod(statements, fn, lineno, __inpython__,
                                   root, __body__)
            ast.handlePythonMethod(statements, root, __body__, fn)
            __body__ = []
            __inpython__ = False

            if lineno == IN_PYTHON_EOF:
                return

    if s and s[0] == '#':
        if len(__residue__) != 0 and __residue__[0][0] != "#":
            bb.error("There is a comment on line %s of file %s (%s) which is in the middle of a multiline expression.\nBitbake used to ignore these but no longer does so, please fix your metadata as errors are likely as a result of this change." % (lineno, fn, s))
    # fall through

    if s and s[-1] == '\\':
    if s == '' or s[0] == '#': return # skip comments and empty lines

    if s[-1] == '\\':
        __residue__.append(s[:-1])
        return

    s = "".join(__residue__) + s
    __residue__ = []

    # Skip empty lines
    if s == '':
        return

    # Skip comments
    if s[0] == '#':
        return

    m = __func_start_regexp__.match(s)
    if m:
        __infunc__ = m.group("func") or "__anonymous"
        ast.handleMethodFlags(statements, fn, lineno, __infunc__, m)
        ast.handleMethodFlags(statements, __infunc__, m)
        return

    m = __def_regexp__.match(s)
    if m:
        __body__.append(s)
        __inpython__ = m.group(1)

        __inpython__ = True
        return

    m = __export_func_regexp__.match(s)
    if m:
        ast.handleExportFuncs(statements, fn, lineno, m, classes)
        ast.handleExportFuncs(statements, m, classes)
        return

    m = __addtask_regexp__.match(s)
    if m:
        ast.handleAddTask(statements, fn, lineno, m)
        ast.handleAddTask(statements, m)
        return

    m = __addhandler_regexp__.match(s)
    if m:
        ast.handleBBHandlers(statements, fn, lineno, m)
        ast.handleBBHandlers(statements, m)
        return

    m = __inherit_regexp__.match(s)
    if m:
        ast.handleInherit(statements, fn, lineno, m)
        ast.handleInherit(statements, m)
        return

    from bb.parse import ConfHandler
    return ConfHandler.feeder(lineno, s, fn, statements)

# Add us to the handlers list
from .. import handlers
from bb.parse import handlers
handlers.append({'supports': supports, 'handle': handle, 'init': init})
del handlers
@@ -10,7 +7,7 @@

# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2003, 2004 Phil Blundell
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
@@ -24,42 +24,43 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import re, os
import logging
import bb.utils
from bb.parse import ParseError, resolve_file, ast, logger
import re, bb.data, os, sys
from bb.parse import ParseError, resolve_file, ast

__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<lazyques>\?\?=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?P<apo>['\"])(?P<value>.*)(?P=apo)$")
#__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}]+)\s*(?P<colon>:)?(?P<ques>\?)?=\s*(?P<apo>['\"]?)(?P<value>.*)(?P=apo)$")
__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<lazyques>\?\?=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?P<apo>['\"]?)(?P<value>.*)(?P=apo)$")
__include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/]+)$" )
__export_regexp__ = re.compile( r"export\s+(.+)" )

def init(data):
    topdir = data.getVar('TOPDIR')
    topdir = bb.data.getVar('TOPDIR', data)
    if not topdir:
        data.setVar('TOPDIR', os.getcwd())
        topdir = os.getcwd()
        bb.data.setVar('TOPDIR', topdir, data)
    if not bb.data.getVar('BBPATH', data):
        bb.fatal("The BBPATH environment variable must be set")


def supports(fn, d):
    return fn[-5:] == ".conf"

def include(oldfn, fn, lineno, data, error_out):
def include(oldfn, fn, data, error_out):
    """
    error_out: A string indicating the verb (e.g. "include", "inherit") to be
    used in a ParseError that will be raised if the file to be included could
    not be included. Specify False to avoid raising an error in this case.

    error_out If True a ParseError will be reaised if the to be included
    """
    if oldfn == fn: # prevent infinite recursion
    if oldfn == fn: # prevent infinate recursion
        return None

    import bb
    fn = data.expand(fn)
    oldfn = data.expand(oldfn)
    fn = bb.data.expand(fn, data)
    oldfn = bb.data.expand(oldfn, data)

    if not os.path.isabs(fn):
        dname = os.path.dirname(oldfn)
        bbpath = "%s:%s" % (dname, data.getVar("BBPATH", True))
        abs_fn = bb.utils.which(bbpath, fn)
        bbpath = "%s:%s" % (dname, bb.data.getVar("BBPATH", data, 1))
        abs_fn = bb.which(bbpath, fn)
        if abs_fn:
            fn = abs_fn

@@ -68,8 +69,8 @@ def include(oldfn, fn, lineno, data, error_out):
        ret = handle(fn, data, True)
    except IOError:
        if error_out:
            raise ParseError("Could not %(error_out)s file %(fn)s" % vars(), oldfn, lineno)
        logger.debug(2, "CONF file '%s' not found", fn)
            raise ParseError("Could not %(error_out)s file %(fn)s" % vars() )
        bb.msg.debug(2, bb.msg.domain.Parsing, "CONF file '%s' not found" % fn)

def handle(fn, data, include):
    init(data)
@@ -77,7 +78,7 @@ def handle(fn, data, include):
    if include == 0:
        oldfile = None
    else:
        oldfile = data.getVar('FILE')
        oldfile = bb.data.getVar('FILE', data)

    abs_fn = resolve_file(fn, data)
    f = open(abs_fn, 'r')
@@ -87,7 +88,7 @@ def handle(fn, data, include):

    statements = ast.StatementGroup()
    lineno = 0
    while True:
    while 1:
        lineno = lineno + 1
        s = f.readline()
        if not s: break
@@ -96,16 +97,16 @@ def handle(fn, data, include):
        s = s.rstrip()
        if s[0] == '#': continue # skip comments
        while s[-1] == '\\':
            s2 = f.readline().strip()
            s2 = f.readline()[:-1].strip()
            lineno = lineno + 1
            s = s[:-1] + s2
        feeder(lineno, s, fn, statements)

    # DONE WITH PARSING... time to evaluate
    data.setVar('FILE', abs_fn)
    bb.data.setVar('FILE', fn, data)
    statements.eval(data)
    if oldfile:
        data.setVar('FILE', oldfile)
        bb.data.setVar('FILE', oldfile, data)

    return data

@@ -113,25 +114,25 @@ def feeder(lineno, s, fn, statements):
    m = __config_regexp__.match(s)
    if m:
        groupd = m.groupdict()
        ast.handleData(statements, fn, lineno, groupd)
        ast.handleData(statements, groupd)
        return

    m = __include_regexp__.match(s)
    if m:
        ast.handleInclude(statements, fn, lineno, m, False)
        ast.handleInclude(statements, m, fn, lineno, False)
        return

    m = __require_regexp__.match(s)
    if m:
        ast.handleInclude(statements, fn, lineno, m, True)
        ast.handleInclude(statements, m, fn, lineno, True)
        return

    m = __export_regexp__.match(s)
    if m:
        ast.handleExport(statements, fn, lineno, m)
        ast.handleExport(statements, m)
        return

    raise ParseError("unparsed line: '%s'" % s, fn, lineno);
    raise ParseError("%s:%d: unparsed line: '%s'" % (fn, lineno, s));

# Add us to the handlers list
from bb.parse import handlers
@@ -25,9 +25,9 @@ File parsers for the BitBake build tools.
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

from __future__ import absolute_import
from . import ConfHandler
from . import BBHandler

__version__ = '1.0'

__all__ = [ 'ConfHandler', 'BBHandler']

import ConfHandler
import BBHandler
@@ -1,12 +1,6 @@
"""BitBake Persistent Data Store

Used to store data in a central location such that other threads/tasks can
access them at some future date. Acts as a convenience wrapper around sqlite,
currently, providing a key/value store accessed by 'domain'.
"""

# BitBake Persistent Data Store
#
# Copyright (C) 2007        Richard Purdie
# Copyright (C) 2010        Chris Larson <chris_larson@mentor.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -21,190 +15,107 @@ currently, providing a key/value store accessed by 'domain'.
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import collections
import logging
import os.path
import sys
import warnings
from bb.compat import total_ordering
from collections import Mapping
import bb, os

try:
    import sqlite3
except ImportError:
    from pysqlite2 import dbapi2 as sqlite3
    try:
        from pysqlite2 import dbapi2 as sqlite3
    except ImportError:
        bb.msg.fatal(bb.msg.domain.PersistData, "Importing sqlite3 and pysqlite2 failed, please install one of them. Python 2.5 or a 'python-pysqlite2' like package is likely to be what you need.")

sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
    raise Exception("sqlite3 version 3.3.0 or later is required.")
    bb.msg.fatal(bb.msg.domain.PersistData, "sqlite3 version 3.3.0 or later is required.")

class PersistData:
    """
    BitBake Persistent Data Store

logger = logging.getLogger("BitBake.PersistData")
if hasattr(sqlite3, 'enable_shared_cache'):
    try:
        sqlite3.enable_shared_cache(True)
    except sqlite3.OperationalError:
        pass
    Used to store data in a central location such that other threads/tasks can
    access them at some future date.

    The "domain" is used as a key to isolate each data pool and in this
    implementation corresponds to an SQL table. The SQL table consists of a
    simple key and value pair.

@total_ordering
class SQLTable(collections.MutableMapping):
    """Object representing a table/domain in the database"""
    def __init__(self, cachefile, table):
        self.cachefile = cachefile
        self.table = table
        self.cursor = connect(self.cachefile)

        self._execute("CREATE TABLE IF NOT EXISTS %s(key TEXT, value TEXT);"
                      % table)

    def _execute(self, *query):
        """Execute a query, waiting to acquire a lock if necessary"""
        count = 0
        while True:
            try:
                return self.cursor.execute(*query)
            except sqlite3.OperationalError as exc:
                if 'database is locked' in str(exc) and count < 500:
                    count = count + 1
                    self.cursor.close()
                    self.cursor = connect(self.cachefile)
                    continue
                raise

    def __enter__(self):
        self.cursor.__enter__()
        return self

    def __exit__(self, *excinfo):
        self.cursor.__exit__(*excinfo)

    def __getitem__(self, key):
        data = self._execute("SELECT * from %s where key=?;" %
                             self.table, [key])
        for row in data:
            return row[1]
        raise KeyError(key)

    def __delitem__(self, key):
        if key not in self:
            raise KeyError(key)
        self._execute("DELETE from %s where key=?;" % self.table, [key])

    def __setitem__(self, key, value):
        if not isinstance(key, basestring):
            raise TypeError('Only string keys are supported')
        elif not isinstance(value, basestring):
            raise TypeError('Only string values are supported')

        data = self._execute("SELECT * from %s where key=?;" %
                             self.table, [key])
        exists = len(list(data))
        if exists:
            self._execute("UPDATE %s SET value=? WHERE key=?;" % self.table,
                          [value, key])
        else:
            self._execute("INSERT into %s(key, value) values (?, ?);" %
                          self.table, [key, value])

    def __contains__(self, key):
        return key in set(self)

    def __len__(self):
        data = self._execute("SELECT COUNT(key) FROM %s;" % self.table)
        for row in data:
            return row[0]

    def __iter__(self):
        data = self._execute("SELECT key FROM %s;" % self.table)
        return (row[0] for row in data)

    def __lt__(self, other):
        if not isinstance(other, Mapping):
            raise NotImplemented

        return len(self) < len(other)

    def values(self):
        return list(self.itervalues())

    def itervalues(self):
        data = self._execute("SELECT value FROM %s;" % self.table)
        return (row[0] for row in data)

    def items(self):
        return list(self.iteritems())

    def iteritems(self):
        return self._execute("SELECT * FROM %s;" % self.table)

    def clear(self):
        self._execute("DELETE FROM %s;" % self.table)

    def has_key(self, key):
        return key in self


class PersistData(object):
    """Deprecated representation of the bitbake persistent data store"""
    Why sqlite? It handles all the locking issues for us.
    """
    def __init__(self, d):
        warnings.warn("Use of PersistData is deprecated.  Please use "
                      "persist(domain, d) instead.",
                      category=DeprecationWarning,
                      stacklevel=2)
        self.cachedir = bb.data.getVar("PERSISTENT_DIR", d, True) or bb.data.getVar("CACHE", d, True)
        if self.cachedir in [None, '']:
            bb.msg.fatal(bb.msg.domain.PersistData, "Please set the 'PERSISTENT_DIR' or 'CACHE' variable.")
        try:
            os.stat(self.cachedir)
        except OSError:
            bb.mkdirhier(self.cachedir)

        self.data = persist(d)
        logger.debug(1, "Using '%s' as the persistent data cache",
                     self.data.filename)
        self.cachefile = os.path.join(self.cachedir,"bb_persist_data.sqlite3")
        bb.msg.debug(1, bb.msg.domain.PersistData, "Using '%s' as the persistent data cache" % self.cachefile)

        self.connection = sqlite3.connect(self.cachefile, timeout=5, isolation_level=None)

    def addDomain(self, domain):
        """
        Add a domain (pending deprecation)
        Should be called before any domain is used
        Creates it if it doesn't exist.
        """
        return self.data[domain]
        self.connection.execute("CREATE TABLE IF NOT EXISTS %s(key TEXT, value TEXT);" % domain)

    def delDomain(self, domain):
        """
        Removes a domain and all the data it contains
        """
        del self.data[domain]
        self.connection.execute("DROP TABLE IF EXISTS %s;" % domain)

    def getKeyValues(self, domain):
        """
        Return a list of key + value pairs for a domain
        """
        return self.data[domain].items()
        ret = {}
        data = self.connection.execute("SELECT key, value from %s;" % domain)
        for row in data:
            ret[str(row[0])] = str(row[1])

        return ret

    def getValue(self, domain, key):
        """
        Return the value of a key for a domain
        """
        return self.data[domain][key]
        data = self.connection.execute("SELECT * from %s where key=?;" % domain, [key])
        for row in data:
            return row[1]

    def setValue(self, domain, key, value):
        """
        Sets the value of a key for a domain
        """
        self.data[domain][key] = value
        data = self.connection.execute("SELECT * from %s where key=?;" % domain, [key])
        rows = 0
        for row in data:
            rows = rows + 1
        if rows:
            self._execute("UPDATE %s SET value=? WHERE key=?;" % domain, [value, key])
        else:
            self._execute("INSERT into %s(key, value) values (?, ?);" % domain, [key, value])

    def delValue(self, domain, key):
        """
        Deletes a key/value pair
        """
        del self.data[domain][key]
        self._execute("DELETE from %s where key=?;" % domain, [key])

def connect(database):
    return sqlite3.connect(database, timeout=5, isolation_level=None)
    def _execute(self, *query):
        while True:
            try:
                self.connection.execute(*query)
                return
            except sqlite3.OperationalError, e:
                if 'database is locked' in str(e):
                    continue
                raise


def persist(domain, d):
    """Convenience factory for SQLTable objects based upon metadata"""
    import bb.utils
    cachedir = (d.getVar("PERSISTENT_DIR", True) or
                d.getVar("CACHE", True))
    if not cachedir:
        logger.critical("Please set the 'PERSISTENT_DIR' or 'CACHE' variable")
        sys.exit(1)

    bb.utils.mkdirhier(cachedir)
    cachefile = os.path.join(cachedir, "bb_persist_data.sqlite3")
    return SQLTable(cachefile, domain)
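As the module docstring and SQLTable code above show, each persistence "domain" is one SQL table of simple key/value TEXT pairs. A minimal standalone sketch of that storage scheme using plain sqlite3 (illustrative table name and values; not the BitBake class itself):

import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE IF NOT EXISTS mydomain(key TEXT, value TEXT);")
conn.execute("INSERT into mydomain(key, value) values (?, ?);", ["k1", "v1"])
conn.execute("UPDATE mydomain SET value=? WHERE key=?;", ["v2", "k1"])  # __setitem__ path
for key, value in conn.execute("SELECT * FROM mydomain;"):              # iteritems() path
    print(key, value)  # prints: k1 v2

In the real module, persist(domain, d) wraps exactly this behind a dict-like SQLTable stored in bb_persist_data.sqlite3 under PERSISTENT_DIR or CACHE.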
@@ -1,109 +0,0 @@
import logging
import signal
import subprocess

logger = logging.getLogger('BitBake.Process')

def subprocess_setup():
    # Python installs a SIGPIPE handler by default. This is usually not what
    # non-Python subprocesses expect.
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

class CmdError(RuntimeError):
    def __init__(self, command, msg=None):
        self.command = command
        self.msg = msg

    def __str__(self):
        if not isinstance(self.command, basestring):
            cmd = subprocess.list2cmdline(self.command)
        else:
            cmd = self.command

        msg = "Execution of '%s' failed" % cmd
        if self.msg:
            msg += ': %s' % self.msg
        return msg

class NotFoundError(CmdError):
    def __str__(self):
        return CmdError.__str__(self) + ": command not found"

class ExecutionError(CmdError):
    def __init__(self, command, exitcode, stdout = None, stderr = None):
        CmdError.__init__(self, command)
        self.exitcode = exitcode
        self.stdout = stdout
        self.stderr = stderr

    def __str__(self):
        message = ""
        if self.stderr:
            message += self.stderr
        if self.stdout:
            message += self.stdout
        if message:
            message = ":\n" + message
        return (CmdError.__str__(self) +
                " with exit code %s" % self.exitcode + message)

class Popen(subprocess.Popen):
    defaults = {
        "close_fds": True,
        "preexec_fn": subprocess_setup,
        "stdout": subprocess.PIPE,
        "stderr": subprocess.STDOUT,
        "stdin": subprocess.PIPE,
        "shell": False,
    }

    def __init__(self, *args, **kwargs):
        options = dict(self.defaults)
        options.update(kwargs)
        subprocess.Popen.__init__(self, *args, **options)

def _logged_communicate(pipe, log, input):
    if pipe.stdin:
        if input is not None:
            pipe.stdin.write(input)
        pipe.stdin.close()

    bufsize = 512
    outdata, errdata = [], []
    while pipe.poll() is None:
        if pipe.stdout is not None:
            data = pipe.stdout.read(bufsize)
            if data is not None:
                outdata.append(data)
                log.write(data)

        if pipe.stderr is not None:
            data = pipe.stderr.read(bufsize)
            if data is not None:
                errdata.append(data)
                log.write(data)
    return ''.join(outdata), ''.join(errdata)

def run(cmd, input=None, log=None, **options):
    """Convenience function to run a command and return its output, raising an
    exception when the command fails"""

    if isinstance(cmd, basestring) and not "shell" in options:
        options["shell"] = True

    try:
        pipe = Popen(cmd, **options)
    except OSError as exc:
        if exc.errno == 2:
            raise NotFoundError(cmd)
        else:
            raise CmdError(cmd, exc)

    if log:
        stdout, stderr = _logged_communicate(pipe, log, input)
    else:
        stdout, stderr = pipe.communicate(input)

    if pipe.returncode != 0:
        raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
    return stdout, stderr
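The run() helper above wraps the Popen subclass (stderr merged into stdout by default) and raises instead of returning a failure code. A hypothetical usage sketch, assuming BitBake's lib/ directory is on sys.path; the commands themselves are arbitrary examples:

import bb.process

stdout, stderr = bb.process.run("echo hello")  # a plain string implies shell=True
print(stdout)

try:
    bb.process.run(["/bin/false"])             # argv-list form runs with shell=False
except bb.process.ExecutionError as exc:
    print("exit code %s" % exc.exitcode)       # any non-zero exit raises
except bb.process.NotFoundError:
    print("command not found")                 # raised when Popen fails with errno 2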
@@ -22,55 +22,16 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import re
import logging
from bb import data, utils
from collections import defaultdict
import bb

logger = logging.getLogger("BitBake.Provider")

class NoProvider(bb.BBHandledException):
class NoProvider(Exception):
    """Exception raised when no provider of a build dependency can be found"""

class NoRProvider(bb.BBHandledException):
class NoRProvider(Exception):
    """Exception raised when no provider of a runtime dependency can be found"""


def findProviders(cfgData, dataCache, pkg_pn = None):
    """
    Convenience function to get latest and preferred providers in pkg_pn
    """

    if not pkg_pn:
        pkg_pn = dataCache.pkg_pn

    # Need to ensure data store is expanded
    localdata = data.createCopy(cfgData)
    bb.data.update_data(localdata)
    bb.data.expandKeys(localdata)

    preferred_versions = {}
    latest_versions = {}

    for pn in pkg_pn:
        (last_ver, last_file, pref_ver, pref_file) = findBestProvider(pn, localdata, dataCache, pkg_pn)
        preferred_versions[pn] = (pref_ver, pref_file)
        latest_versions[pn] = (last_ver, last_file)

    return (latest_versions, preferred_versions)


def allProviders(dataCache):
    """
    Find all providers for each pn
    """
    all_providers = defaultdict(list)
    for (fn, pn) in dataCache.pkg_fn.items():
        ver = dataCache.pkg_pepvpr[fn]
        all_providers[pn].append((ver, fn))
    return all_providers


def sortPriorities(pn, dataCache, pkg_pn = None):
    """
    Reorder pkg_pn by file priority and default preference
@@ -101,7 +62,7 @@ def sortPriorities(pn, dataCache, pkg_pn = None):
def preferredVersionMatch(pe, pv, pr, preferred_e, preferred_v, preferred_r):
    """
    Check if the version pe,pv,pr is the preferred one.
    If there is preferred version defined and ends with '%', then pv has to start with that version after removing the '%'
    If there is preferred version defined and ends with '%', then pv has to start with that version after removing the '%'
    """
    if (pr == preferred_r or preferred_r == None):
        if (pe == preferred_e or preferred_e == None):
@@ -120,10 +81,10 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
|
||||
preferred_ver = None
|
||||
|
||||
localdata = data.createCopy(cfgData)
|
||||
localdata.setVar('OVERRIDES', "%s:pn-%s:%s" % (data.getVar('OVERRIDES', localdata), pn, pn))
|
||||
bb.data.setVar('OVERRIDES', "pn-%s:%s:%s" % (pn, pn, data.getVar('OVERRIDES', localdata)), localdata)
|
||||
bb.data.update_data(localdata)
|
||||
|
||||
preferred_v = localdata.getVar('PREFERRED_VERSION', True)
|
||||
preferred_v = bb.data.getVar('PREFERRED_VERSION_%s' % pn, localdata, True)
|
||||
if preferred_v:
|
||||
m = re.match('(\d+:)*(.*)(_.*)*', preferred_v)
|
||||
if m:
|
||||
@@ -142,7 +103,7 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
|
||||
|
||||
for file_set in pkg_pn:
|
||||
for f in file_set:
|
||||
pe, pv, pr = dataCache.pkg_pepvpr[f]
|
||||
pe,pv,pr = dataCache.pkg_pepvpr[f]
|
||||
if preferredVersionMatch(pe, pv, pr, preferred_e, preferred_v, preferred_r):
|
||||
preferred_file = f
|
||||
preferred_ver = (pe, pv, pr)
|
||||
@@ -159,21 +120,9 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
|
||||
if item:
|
||||
itemstr = " (for item %s)" % item
|
||||
if preferred_file is None:
|
||||
logger.info("preferred version %s of %s not available%s", pv_str, pn, itemstr)
|
||||
available_vers = []
|
||||
for file_set in pkg_pn:
|
||||
for f in file_set:
|
||||
pe, pv, pr = dataCache.pkg_pepvpr[f]
|
||||
ver_str = pv
|
||||
if pe:
|
||||
ver_str = "%s:%s" % (pe, ver_str)
|
||||
if not ver_str in available_vers:
|
||||
available_vers.append(ver_str)
|
||||
if available_vers:
|
||||
available_vers.sort()
|
||||
logger.info("versions of %s available: %s", pn, ' '.join(available_vers))
|
||||
bb.msg.note(1, bb.msg.domain.Provider, "preferred version %s of %s not available%s" % (pv_str, pn, itemstr))
|
||||
else:
|
||||
logger.debug(1, "selecting %s as PREFERRED_VERSION %s of package %s%s", preferred_file, pv_str, pn, itemstr)
|
||||
bb.msg.debug(1, bb.msg.domain.Provider, "selecting %s as PREFERRED_VERSION %s of package %s%s" % (preferred_file, pv_str, pn, itemstr))
|
||||
|
||||
return (preferred_ver, preferred_file)
|
||||
|
||||
@@ -187,7 +136,7 @@ def findLatestProvider(pn, cfgData, dataCache, file_set):
|
||||
latest_p = 0
|
||||
latest_f = None
|
||||
for file_name in file_set:
|
||||
pe, pv, pr = dataCache.pkg_pepvpr[file_name]
|
||||
pe,pv,pr = dataCache.pkg_pepvpr[file_name]
|
||||
dp = dataCache.pkg_dp[file_name]
|
||||
|
||||
if (latest is None) or ((latest_p == dp) and (utils.vercmp(latest, (pe, pv, pr)) < 0)) or (dp > latest_p):
|
||||
@@ -220,14 +169,14 @@ def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
|
||||
|
||||
def _filterProviders(providers, item, cfgData, dataCache):
|
||||
"""
|
||||
Take a list of providers and filter/reorder according to the
|
||||
Take a list of providers and filter/reorder according to the
|
||||
environment variables and previous build results
|
||||
"""
|
||||
eligible = []
|
||||
preferred_versions = {}
|
||||
sortpkg_pn = {}
|
||||
|
||||
# The order of providers depends on the order of the files on the disk
|
||||
# The order of providers depends on the order of the files on the disk
|
||||
# up to here. Sort pkg_pn to make dependency issues reproducible rather
|
||||
# than effectively random.
|
||||
providers.sort()
|
||||
@@ -240,7 +189,7 @@ def _filterProviders(providers, item, cfgData, dataCache):
|
||||
pkg_pn[pn] = []
|
||||
pkg_pn[pn].append(p)
|
||||
|
||||
logger.debug(1, "providers for %s are: %s", item, pkg_pn.keys())
|
||||
bb.msg.debug(1, bb.msg.domain.Provider, "providers for %s are: %s" % (item, pkg_pn.keys()))
|
||||
|
||||
# First add PREFERRED_VERSIONS
|
||||
for pn in pkg_pn:
|
||||
@@ -249,7 +198,7 @@ def _filterProviders(providers, item, cfgData, dataCache):
|
||||
if preferred_versions[pn][1]:
|
||||
eligible.append(preferred_versions[pn][1])
|
||||
|
||||
# Now add latest versions
|
||||
# Now add latest verisons
|
||||
for pn in sortpkg_pn:
|
||||
if pn in preferred_versions and preferred_versions[pn][1]:
|
||||
continue
|
||||
@@ -257,7 +206,7 @@ def _filterProviders(providers, item, cfgData, dataCache):
|
||||
eligible.append(preferred_versions[pn][1])
|
||||
|
||||
if len(eligible) == 0:
|
||||
logger.error("no eligible providers for %s", item)
|
||||
bb.msg.error(bb.msg.domain.Provider, "no eligible providers for %s" % item)
|
||||
return 0
|
||||
|
||||
# If pn == item, give it a slight default preference
|
||||
@@ -277,14 +226,14 @@ def _filterProviders(providers, item, cfgData, dataCache):
|
||||
|
||||
def filterProviders(providers, item, cfgData, dataCache):
|
||||
"""
|
||||
Take a list of providers and filter/reorder according to the
|
||||
Take a list of providers and filter/reorder according to the
|
||||
environment variables and previous build results
|
||||
Takes a "normal" target item
|
||||
"""
|
||||
|
||||
eligible = _filterProviders(providers, item, cfgData, dataCache)
|
||||
|
||||
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % item, True)
|
||||
prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % item, cfgData, 1)
|
||||
if prefervar:
|
||||
dataCache.preferred[item] = prefervar
|
||||
|
||||
@@ -293,19 +242,19 @@ def filterProviders(providers, item, cfgData, dataCache):
|
||||
for p in eligible:
|
||||
pn = dataCache.pkg_fn[p]
|
||||
if dataCache.preferred[item] == pn:
|
||||
logger.verbose("selecting %s to satisfy %s due to PREFERRED_PROVIDERS", pn, item)
|
||||
bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy %s due to PREFERRED_PROVIDERS" % (pn, item))
|
||||
eligible.remove(p)
|
||||
eligible = [p] + eligible
|
||||
foundUnique = True
|
||||
break
|
||||
|
||||
logger.debug(1, "sorted providers for %s are: %s", item, eligible)
|
||||
bb.msg.debug(1, bb.msg.domain.Provider, "sorted providers for %s are: %s" % (item, eligible))
|
||||
|
||||
return eligible, foundUnique
|
||||
|
||||
def filterProvidersRunTime(providers, item, cfgData, dataCache):
|
||||
"""
|
||||
Take a list of providers and filter/reorder according to the
|
||||
Take a list of providers and filter/reorder according to the
|
||||
environment variables and previous build results
|
||||
Takes a "runtime" target item
|
||||
"""
|
||||
@@ -315,31 +264,27 @@ def filterProvidersRunTime(providers, item, cfgData, dataCache):
|
||||
# Should use dataCache.preferred here?
|
||||
preferred = []
|
||||
preferred_vars = []
|
||||
pns = {}
|
||||
for p in eligible:
|
||||
pns[dataCache.pkg_fn[p]] = p
|
||||
for p in eligible:
|
||||
pn = dataCache.pkg_fn[p]
|
||||
provides = dataCache.pn_provides[pn]
|
||||
for provide in provides:
|
||||
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % provide, True)
|
||||
logger.debug(1, "checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
|
||||
if prefervar in pns and pns[prefervar] not in preferred:
|
||||
bb.msg.note(2, bb.msg.domain.Provider, "checking PREFERRED_PROVIDER_%s" % (provide))
|
||||
prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % provide, cfgData, 1)
|
||||
if prefervar == pn:
|
||||
var = "PREFERRED_PROVIDER_%s = %s" % (provide, prefervar)
|
||||
logger.verbose("selecting %s to satisfy runtime %s due to %s", prefervar, item, var)
|
||||
bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy runtime %s due to %s" % (pn, item, var))
|
||||
preferred_vars.append(var)
|
||||
pref = pns[prefervar]
|
||||
eligible.remove(pref)
|
||||
eligible = [pref] + eligible
|
||||
preferred.append(pref)
|
||||
eligible.remove(p)
|
||||
eligible = [p] + eligible
|
||||
preferred.append(p)
|
||||
break
|
||||
|
||||
numberPreferred = len(preferred)
|
||||
|
||||
if numberPreferred > 1:
|
||||
logger.error("Trying to resolve runtime dependency %s resulted in conflicting PREFERRED_PROVIDER entries being found.\nThe providers found were: %s\nThe PREFERRED_PROVIDER entries resulting in this conflict were: %s", item, preferred, preferred_vars)
|
||||
bb.msg.error(bb.msg.domain.Provider, "Conflicting PREFERRED_PROVIDER entries were found which resulted in an attempt to select multiple providers (%s) for runtime dependecy %s\nThe entries resulting in this conflict were: %s" % (preferred, item, preferred_vars))
|
||||
|
||||
logger.debug(1, "sorted providers for %s are: %s", item, eligible)
|
||||
bb.msg.debug(1, bb.msg.domain.Provider, "sorted providers for %s are: %s" % (item, eligible))
|
||||
|
||||
return eligible, numberPreferred
|
||||
|
||||
@@ -352,7 +297,7 @@ def getRuntimeProviders(dataCache, rdepend):
|
||||
rproviders = []
|
||||
|
||||
if rdepend in dataCache.rproviders:
|
||||
rproviders += dataCache.rproviders[rdepend]
|
||||
rproviders += dataCache.rproviders[rdepend]
|
||||
|
||||
if rdepend in dataCache.packages:
|
||||
rproviders += dataCache.packages[rdepend]
|
||||
@@ -369,7 +314,7 @@ def getRuntimeProviders(dataCache, rdepend):
|
||||
try:
|
||||
regexp = re.compile(pattern)
|
||||
except:
|
||||
logger.error("Error parsing regular expression '%s'", pattern)
|
||||
bb.msg.error(bb.msg.domain.Provider, "Error parsing re expression: %s" % pattern)
|
||||
raise
|
||||
regexp_cache[pattern] = regexp
|
||||
if regexp.match(rdepend):
|
||||
|
||||
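
As an aside, the '%' rule described in preferredVersionMatch's docstring amounts to a plain prefix match on PV once the epoch and revision agree; a standalone sketch of just that rule, not the BitBake implementation itself:

    def pv_matches(pv, preferred_v):
        # PREFERRED_VERSION "1.2.%" matches any PV beginning with "1.2."
        if preferred_v.endswith('%'):
            return pv.startswith(preferred_v[:-1])
        return pv == preferred_v

    assert pv_matches("1.2.3", "1.2.%")
    assert not pv_matches("1.3.0", "1.2.%")
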
@@ -1,710 +0,0 @@
# builtin.py - builtins and utilities definitions for pysh.
#
# Copyright 2007 Patrick Mezard
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

"""Builtin and internal utilities implementations.

- Beware not to use python interpreter environment as if it were the shell
environment. For instance, commands working directory must be explicitly handled
through env['PWD'] instead of relying on python working directory.
"""
import errno
import optparse
import os
import re
import subprocess
import sys
import time

def has_subprocess_bug():
    return getattr(subprocess, 'list2cmdline') and \
        (subprocess.list2cmdline(['']) == '' or \
         subprocess.list2cmdline(['foo|bar']) == 'foo|bar')

# Detect python bug 1634343: "subprocess swallows empty arguments under win32"
# <http://sourceforge.net/tracker/index.php?func=detail&aid=1634343&group_id=5470&atid=105470>
# Also detect: "[ 1710802 ] subprocess must escape redirection characters under win32"
# <http://sourceforge.net/tracker/index.php?func=detail&aid=1710802&group_id=5470&atid=105470>
if has_subprocess_bug():
    import subprocess_fix
    subprocess.list2cmdline = subprocess_fix.list2cmdline

from sherrors import *

class NonExitingParser(optparse.OptionParser):
    """OptionParser default behaviour upon error is to print the error message and
    exit. Raise a utility error instead.
    """
    def error(self, msg):
        raise UtilityError(msg)

#-------------------------------------------------------------------------------
# set special builtin
#-------------------------------------------------------------------------------
OPT_SET = NonExitingParser(usage="set - set or unset options and positional parameters")
OPT_SET.add_option( '-f', action='store_true', dest='has_f', default=False,
    help='The shell shall disable pathname expansion.')
OPT_SET.add_option('-e', action='store_true', dest='has_e', default=False,
    help="""When this option is on, if a simple command fails for any of the \
reasons listed in Consequences of Shell Errors or returns an exit status \
value >0, and is not part of the compound list following a while, until, \
or if keyword, and is not a part of an AND or OR list, and is not a \
pipeline preceded by the ! reserved word, then the shell shall immediately \
exit.""")
OPT_SET.add_option('-x', action='store_true', dest='has_x', default=False,
    help="""The shell shall write to standard error a trace for each command \
after it expands the command and before it executes it. It is unspecified \
whether the command that turns tracing off is traced.""")

def builtin_set(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    option, args = OPT_SET.parse_args(args)
    env = interp.get_env()

    if option.has_f:
        env.set_opt('-f')
    if option.has_e:
        env.set_opt('-e')
    if option.has_x:
        env.set_opt('-x')
    return 0

#-------------------------------------------------------------------------------
# shift special builtin
#-------------------------------------------------------------------------------
def builtin_shift(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    params = interp.get_env().get_positional_args()
    if args:
        try:
            n = int(args[0])
            if n > len(params):
                raise ValueError()
        except ValueError:
            return 1
    else:
        n = 1

    params[:n] = []
    interp.get_env().set_positional_args(params)
    return 0

#-------------------------------------------------------------------------------
# export special builtin
#-------------------------------------------------------------------------------
OPT_EXPORT = NonExitingParser(usage="set - set or unset options and positional parameters")
OPT_EXPORT.add_option('-p', action='store_true', dest='has_p', default=False)

def builtin_export(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    option, args = OPT_EXPORT.parse_args(args)
    if option.has_p:
        raise NotImplementedError()

    for arg in args:
        try:
            name, value = arg.split('=', 1)
        except ValueError:
            name, value = arg, None
        env = interp.get_env().export(name, value)

    return 0

#-------------------------------------------------------------------------------
# return special builtin
#-------------------------------------------------------------------------------
def builtin_return(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
    res = 0
    if args:
        try:
            res = int(args[0])
        except ValueError:
            res = 0
        if not 0<=res<=255:
            res = 0

    # BUG: should be last executed command exit code
    raise ReturnSignal(res)

#-------------------------------------------------------------------------------
# trap special builtin
#-------------------------------------------------------------------------------
def builtin_trap(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
    if len(args) < 2:
        stderr.write('trap: usage: trap [[arg] signal_spec ...]\n')
        return 2

    action = args[0]
    for sig in args[1:]:
        try:
            env.traps[sig] = action
        except Exception as e:
            stderr.write('trap: %s\n' % str(e))
    return 0

#-------------------------------------------------------------------------------
# unset special builtin
#-------------------------------------------------------------------------------
OPT_UNSET = NonExitingParser("unset - unset values and attributes of variables and functions")
OPT_UNSET.add_option( '-f', action='store_true', dest='has_f', default=False)
OPT_UNSET.add_option( '-v', action='store_true', dest='has_v', default=False)

def builtin_unset(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    option, args = OPT_UNSET.parse_args(args)

    status = 0
    env = interp.get_env()
    for arg in args:
        try:
            if option.has_f:
                env.remove_function(arg)
            else:
                del env[arg]
        except KeyError:
            pass
        except VarAssignmentError:
            status = 1

    return status

#-------------------------------------------------------------------------------
# wait special builtin
#-------------------------------------------------------------------------------
def builtin_wait(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    return interp.wait([int(arg) for arg in args])

#-------------------------------------------------------------------------------
# cat utility
#-------------------------------------------------------------------------------
def utility_cat(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    if not args:
        args = ['-']

    status = 0
    for arg in args:
        if arg == '-':
            data = stdin.read()
        else:
            path = os.path.join(env['PWD'], arg)
            try:
                f = file(path, 'rb')
                try:
                    data = f.read()
                finally:
                    f.close()
            except IOError as e:
                if e.errno != errno.ENOENT:
                    raise
                status = 1
                continue
        stdout.write(data)
        stdout.flush()
    return status

#-------------------------------------------------------------------------------
# cd utility
#-------------------------------------------------------------------------------
OPT_CD = NonExitingParser("cd - change the working directory")

def utility_cd(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    option, args = OPT_CD.parse_args(args)
    env = interp.get_env()

    directory = None
    printdir = False
    if not args:
        home = env.get('HOME')
        if home:
            # Unspecified, do nothing
            return 0
        else:
            directory = home
    elif len(args)==1:
        directory = args[0]
        if directory=='-':
            if 'OLDPWD' not in env:
                raise UtilityError("OLDPWD not set")
            printdir = True
            directory = env['OLDPWD']
    else:
        raise UtilityError("too many arguments")

    curpath = None
    # Absolute directories will be handled correctly by the os.path.join call.
    if not directory.startswith('.') and not directory.startswith('..'):
        cdpaths = env.get('CDPATH', '.').split(';')
        for cdpath in cdpaths:
            p = os.path.join(cdpath, directory)
            if os.path.isdir(p):
                curpath = p
                break

    if curpath is None:
        curpath = directory
    curpath = os.path.join(env['PWD'], directory)

    env['OLDPWD'] = env['PWD']
    env['PWD'] = curpath
    if printdir:
        stdout.write('%s\n' % curpath)
    return 0

#-------------------------------------------------------------------------------
# colon utility
#-------------------------------------------------------------------------------
def utility_colon(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
    return 0

#-------------------------------------------------------------------------------
# echo utility
#-------------------------------------------------------------------------------
def utility_echo(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    # Echo only takes arguments, no options. Use printf if you need fancy stuff.
    output = ' '.join(args) + '\n'
    stdout.write(output)
    stdout.flush()
    return 0

#-------------------------------------------------------------------------------
# egrep utility
#-------------------------------------------------------------------------------
# egrep is usually a shell script.
# Unfortunately, pysh does not support shell scripts *with arguments* right now,
# so the redirection is implemented here, assuming grep is available.
def utility_egrep(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    return run_command('grep', ['-E'] + args, interp, env, stdin, stdout,
                       stderr, debugflags)

#-------------------------------------------------------------------------------
# env utility
#-------------------------------------------------------------------------------
def utility_env(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    if args and args[0]=='-i':
        raise NotImplementedError('env: -i option is not implemented')

    i = 0
    for arg in args:
        if '=' not in arg:
            break
        # Update the current environment
        name, value = arg.split('=', 1)
        env[name] = value
        i += 1

    if args[i:]:
        # Find then execute the specified interpreter
        utility = env.find_in_path(args[i])
        if not utility:
            return 127
        args[i:i+1] = utility
        name = args[i]
        args = args[i+1:]
        try:
            return run_command(name, args, interp, env, stdin, stdout, stderr,
                               debugflags)
        except UtilityError:
            stderr.write('env: failed to execute %s' % ' '.join([name]+args))
            return 126
    else:
        for pair in env.get_variables().iteritems():
            stdout.write('%s=%s\n' % pair)
        return 0

#-------------------------------------------------------------------------------
# exit utility
#-------------------------------------------------------------------------------
def utility_exit(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    res = None
    if args:
        try:
            res = int(args[0])
        except ValueError:
            res = None
        if not 0<=res<=255:
            res = None

    if res is None:
        # BUG: should be last executed command exit code
        res = 0

    raise ExitSignal(res)

#-------------------------------------------------------------------------------
# fgrep utility
#-------------------------------------------------------------------------------
# see egrep
def utility_fgrep(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    return run_command('grep', ['-F'] + args, interp, env, stdin, stdout,
                       stderr, debugflags)

#-------------------------------------------------------------------------------
# gunzip utility
#-------------------------------------------------------------------------------
# see egrep
def utility_gunzip(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    return run_command('gzip', ['-d'] + args, interp, env, stdin, stdout,
                       stderr, debugflags)

#-------------------------------------------------------------------------------
# kill utility
#-------------------------------------------------------------------------------
def utility_kill(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    for arg in args:
        pid = int(arg)
        status = subprocess.call(['pskill', '/T', str(pid)],
                                 shell=True,
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.PIPE)
        # pskill is asynchronous, hence the stupid polling loop
        while 1:
            p = subprocess.Popen(['pslist', str(pid)],
                                 shell=True,
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.STDOUT)
            output = p.communicate()[0]
            if ('process %d was not' % pid) in output:
                break
            time.sleep(1)
    return status

#-------------------------------------------------------------------------------
# mkdir utility
#-------------------------------------------------------------------------------
OPT_MKDIR = NonExitingParser("mkdir - make directories.")
OPT_MKDIR.add_option('-p', action='store_true', dest='has_p', default=False)

def utility_mkdir(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    # TODO: implement umask
    # TODO: implement proper utility error report
    option, args = OPT_MKDIR.parse_args(args)
    for arg in args:
        path = os.path.join(env['PWD'], arg)
        if option.has_p:
            try:
                os.makedirs(path)
            except IOError as e:
                if e.errno != errno.EEXIST:
                    raise
        else:
            os.mkdir(path)
    return 0

#-------------------------------------------------------------------------------
# netstat utility
#-------------------------------------------------------------------------------
def utility_netstat(name, args, interp, env, stdin, stdout, stderr, debugflags):
    # Do you really expect me to implement netstat ?
    # This empty form is enough for Mercurial tests since it's
    # supposed to generate nothing upon success. Faking this test
    # is not a big deal either.
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
    return 0

#-------------------------------------------------------------------------------
# pwd utility
#-------------------------------------------------------------------------------
OPT_PWD = NonExitingParser("pwd - return working directory name")
OPT_PWD.add_option('-L', action='store_true', dest='has_L', default=True,
    help="""If the PWD environment variable contains an absolute pathname of \
the current directory that does not contain the filenames dot or dot-dot, \
pwd shall write this pathname to standard output. Otherwise, the -L option \
shall behave as the -P option.""")
OPT_PWD.add_option('-P', action='store_true', dest='has_L', default=False,
    help="""The absolute pathname written shall not contain filenames that, in \
the context of the pathname, refer to files of type symbolic link.""")

def utility_pwd(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    option, args = OPT_PWD.parse_args(args)
    stdout.write('%s\n' % env['PWD'])
    return 0

#-------------------------------------------------------------------------------
# printf utility
#-------------------------------------------------------------------------------
RE_UNESCAPE = re.compile(r'(\\x[a-zA-Z0-9]{2}|\\[0-7]{1,3}|\\.)')

def utility_printf(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    def replace(m):
        assert m.group()
        g = m.group()[1:]
        if g.startswith('x'):
            return chr(int(g[1:], 16))
        if len(g) <= 3 and len([c for c in g if c in '01234567']) == len(g):
            # Yay, an octal number
            return chr(int(g, 8))
        return {
            'a': '\a',
            'b': '\b',
            'f': '\f',
            'n': '\n',
            'r': '\r',
            't': '\t',
            'v': '\v',
            '\\': '\\',
        }.get(g)

    # Convert escape sequences
    format = re.sub(RE_UNESCAPE, replace, args[0])
    stdout.write(format % tuple(args[1:]))
    return 0

#-------------------------------------------------------------------------------
# true utility
#-------------------------------------------------------------------------------
def utility_true(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
    return 0

#-------------------------------------------------------------------------------
# sed utility
#-------------------------------------------------------------------------------
RE_SED = re.compile(r'^s(.).*\1[a-zA-Z]*$')

# cygwin sed fails with some expressions when they do not end with a single space.
# see unit tests for details. Interestingly, the same expressions works perfectly
# in cygwin shell.
def utility_sed(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    # Scan pattern arguments and append a space if necessary
    for i in xrange(len(args)):
        if not RE_SED.search(args[i]):
            continue
        args[i] = args[i] + ' '

    return run_command(name, args, interp, env, stdin, stdout,
                       stderr, debugflags)

#-------------------------------------------------------------------------------
# sleep utility
#-------------------------------------------------------------------------------
def utility_sleep(name, args, interp, env, stdin, stdout, stderr, debugflags):
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
    time.sleep(int(args[0]))
    return 0

#-------------------------------------------------------------------------------
# sort utility
#-------------------------------------------------------------------------------
OPT_SORT = NonExitingParser("sort - sort, merge, or sequence check text files")

def utility_sort(name, args, interp, env, stdin, stdout, stderr, debugflags):

    def sort(path):
        if path == '-':
            lines = stdin.readlines()
        else:
            try:
                f = file(path)
                try:
                    lines = f.readlines()
                finally:
                    f.close()
            except IOError as e:
                stderr.write(str(e) + '\n')
                return 1

        if lines and lines[-1][-1]!='\n':
            lines[-1] = lines[-1] + '\n'
        return lines

    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    option, args = OPT_SORT.parse_args(args)
    alllines = []

    if len(args)<=0:
        args += ['-']

    # Load all files lines
    curdir = os.getcwd()
    try:
        os.chdir(env['PWD'])
        for path in args:
            alllines += sort(path)
    finally:
        os.chdir(curdir)

    alllines.sort()
    for line in alllines:
        stdout.write(line)
    return 0

#-------------------------------------------------------------------------------
# hg utility
#-------------------------------------------------------------------------------

hgcommands = [
    'add',
    'addremove',
    'commit', 'ci',
    'debugrename',
    'debugwalk',
    'falabala', # Dummy command used in a mercurial test
    'incoming',
    'locate',
    'pull',
    'push',
    'qinit',
    'remove', 'rm',
    'rename', 'mv',
    'revert',
    'showconfig',
    'status', 'st',
    'strip',
    ]

def rewriteslashes(name, args):
    # Several hg commands output file paths, rewrite the separators
    if len(args) > 1 and name.lower().endswith('python') \
       and args[0].endswith('hg'):
        for cmd in hgcommands:
            if cmd in args[1:]:
                return True

    # svn output contains many paths with OS specific separators.
    # Normalize these to unix paths.
    base = os.path.basename(name)
    if base.startswith('svn'):
        return True

    return False

def rewritehg(output):
    if not output:
        return output
    # Rewrite os specific messages
    output = output.replace(': The system cannot find the file specified',
                            ': No such file or directory')
    output = re.sub(': Access is denied.*$', ': Permission denied', output)
    output = output.replace(': No connection could be made because the target machine actively refused it',
                            ': Connection refused')
    return output


def run_command(name, args, interp, env, stdin, stdout,
                stderr, debugflags):
    # Execute the command
    if 'debug-utility' in debugflags:
        print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

    hgbin = interp.options().hgbinary
    ishg = hgbin and ('hg' in name or args and 'hg' in args[0])
    unixoutput = 'cygwin' in name or ishg

    exec_env = env.get_variables()
    try:
        # BUG: comparing file descriptor is clearly not a reliable way to tell
        # whether they point on the same underlying object. But in pysh limited
        # scope this is usually right, we do not expect complicated redirections
        # besides usual 2>&1.
        # Still there is one case we have but cannot deal with is when stdout
        # and stderr are redirected *by pysh caller*. This is the reason for the
        # --redirect pysh() option.
        # Now, we want to know they are the same because we sometimes need to
        # transform the command output, mostly remove CR-LF to ensure that
        # command output is unix-like. Cygwin utilities are a special case because
        # they explicitly set their output streams to binary mode, so we have
        # nothing to do. For all others commands, we have to guess whether they
        # are sending text data, in which case the transformation must be done.
        # Again, the NUL character test is unreliable but should be enough for
        # hg tests.
        redirected = stdout.fileno()==stderr.fileno()
        if not redirected:
            p = subprocess.Popen([name] + args, cwd=env['PWD'], env=exec_env,
                                 stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        else:
            p = subprocess.Popen([name] + args, cwd=env['PWD'], env=exec_env,
                                 stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        out, err = p.communicate()
    except WindowsError as e:
        raise UtilityError(str(e))

    if not unixoutput:
        def encode(s):
            if '\0' in s:
                return s
            return s.replace('\r\n', '\n')
    else:
        encode = lambda s: s

    if rewriteslashes(name, args):
        encode1_ = encode
        def encode(s):
            s = encode1_(s)
            s = s.replace('\\\\', '\\')
            s = s.replace('\\', '/')
            return s

    if ishg:
        encode2_ = encode
        def encode(s):
            return rewritehg(encode2_(s))

    stdout.write(encode(out))
    if not redirected:
        stderr.write(encode(err))
    return p.returncode

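Every builtin and utility above shares the same calling convention, so extending the set is mechanical; a hedged sketch of one more utility in the same style (utility_basename is invented here for illustration and was not part of the original file):

    def utility_basename(name, args, interp, env, stdin, stdout, stderr, debugflags):
        if 'debug-utility' in debugflags:
            print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')

        if not args:
            stderr.write('basename: missing operand\n')
            return 1
        stdout.write(os.path.basename(args[0]) + '\n')
        stdout.flush()
        return 0
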
File diff suppressed because it is too large
@@ -1,116 +0,0 @@
#! /usr/bin/env python

import sys
from _lsprof import Profiler, profiler_entry

__all__ = ['profile', 'Stats']

def profile(f, *args, **kwds):
    """XXX docstring"""
    p = Profiler()
    p.enable(subcalls=True, builtins=True)
    try:
        f(*args, **kwds)
    finally:
        p.disable()
    return Stats(p.getstats())


class Stats(object):
    """XXX docstring"""

    def __init__(self, data):
        self.data = data

    def sort(self, crit="inlinetime"):
        """XXX docstring"""
        if crit not in profiler_entry.__dict__:
            raise ValueError("Can't sort by %s" % crit)
        self.data.sort(lambda b, a: cmp(getattr(a, crit),
                                        getattr(b, crit)))
        for e in self.data:
            if e.calls:
                e.calls.sort(lambda b, a: cmp(getattr(a, crit),
                                              getattr(b, crit)))

    def pprint(self, top=None, file=None, limit=None, climit=None):
        """XXX docstring"""
        if file is None:
            file = sys.stdout
        d = self.data
        if top is not None:
            d = d[:top]
        cols = "% 12s %12s %11.4f %11.4f %s\n"
        hcols = "% 12s %12s %12s %12s %s\n"
        cols2 = "+%12s %12s %11.4f %11.4f + %s\n"
        file.write(hcols % ("CallCount", "Recursive", "Total(ms)",
                            "Inline(ms)", "module:lineno(function)"))
        count = 0
        for e in d:
            file.write(cols % (e.callcount, e.reccallcount, e.totaltime,
                               e.inlinetime, label(e.code)))
            count += 1
            if limit is not None and count == limit:
                return
            ccount = 0
            if e.calls:
                for se in e.calls:
                    file.write(cols % ("+%s" % se.callcount, se.reccallcount,
                                       se.totaltime, se.inlinetime,
                                       "+%s" % label(se.code)))
                    count += 1
                    ccount += 1
                    if limit is not None and count == limit:
                        return
                    if climit is not None and ccount == climit:
                        break

    def freeze(self):
        """Replace all references to code objects with string
        descriptions; this makes it possible to pickle the instance."""

        # this code is probably rather ickier than it needs to be!
        for i in range(len(self.data)):
            e = self.data[i]
            if not isinstance(e.code, str):
                self.data[i] = type(e)((label(e.code),) + e[1:])
            if e.calls:
                for j in range(len(e.calls)):
                    se = e.calls[j]
                    if not isinstance(se.code, str):
                        e.calls[j] = type(se)((label(se.code),) + se[1:])

_fn2mod = {}

def label(code):
    if isinstance(code, str):
        return code
    try:
        mname = _fn2mod[code.co_filename]
    except KeyError:
        for k, v in sys.modules.items():
            if v is None:
                continue
            if not hasattr(v, '__file__'):
                continue
            if not isinstance(v.__file__, str):
                continue
            if v.__file__.startswith(code.co_filename):
                mname = _fn2mod[code.co_filename] = k
                break
        else:
            mname = _fn2mod[code.co_filename] = '<%s>'%code.co_filename

    return '%s:%d(%s)' % (mname, code.co_firstlineno, code.co_name)


if __name__ == '__main__':
    import os
    sys.argv = sys.argv[1:]
    if not sys.argv:
        print >> sys.stderr, "usage: lsprof.py <script> <arguments...>"
        sys.exit(2)
    sys.path.insert(0, os.path.abspath(os.path.dirname(sys.argv[0])))
    stats = profile(execfile, sys.argv[0], globals(), locals())
    stats.sort()
    stats.pprint()
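
Typical use of the lsprof module above, assuming the _lsprof extension is importable; the profiled fib function is illustrative only:

    from lsprof import profile

    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    stats = profile(fib, 20)      # run fib(20) under the profiler
    stats.sort("totaltime")       # any profiler_entry field works as a sort key
    stats.pprint(top=5)           # print the five most expensive entries
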
@@ -1,167 +0,0 @@
# pysh.py - command processing for pysh.
#
# Copyright 2007 Patrick Mezard
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

import optparse
import os
import sys

import interp

SH_OPT = optparse.OptionParser(prog='pysh', usage="%prog [OPTIONS]", version='0.1')
SH_OPT.add_option('-c', action='store_true', dest='command_string', default=None,
    help='A string that shall be interpreted by the shell as one or more commands')
SH_OPT.add_option('--redirect-to', dest='redirect_to', default=None,
    help='Redirect script commands stdout and stderr to the specified file')
# See utility_command in builtin.py about the reason for this flag.
SH_OPT.add_option('--redirected', dest='redirected', action='store_true', default=False,
    help='Tell the interpreter that stdout and stderr are actually the same objects, which is really stdout')
SH_OPT.add_option('--debug-parsing', action='store_true', dest='debug_parsing', default=False,
    help='Trace PLY execution')
SH_OPT.add_option('--debug-tree', action='store_true', dest='debug_tree', default=False,
    help='Display the generated syntax tree.')
SH_OPT.add_option('--debug-cmd', action='store_true', dest='debug_cmd', default=False,
    help='Trace command execution before parameters expansion and exit status.')
SH_OPT.add_option('--debug-utility', action='store_true', dest='debug_utility', default=False,
    help='Trace utility calls, after parameters expansions')
SH_OPT.add_option('--ast', action='store_true', dest='ast', default=False,
    help='Encoded commands to execute in a subprocess')
SH_OPT.add_option('--profile', action='store_true', default=False,
    help='Profile pysh run')


def split_args(args):
    # Separate shell arguments from command ones
    # Just stop at the first argument not starting with a dash. I know, this is completely broken,
    # it ignores files starting with a dash or may take option values for command file. This is not
    # supposed to happen for now
    command_index = len(args)
    for i,arg in enumerate(args):
        if not arg.startswith('-'):
            command_index = i
            break

    return args[:command_index], args[command_index:]


def fixenv(env):
    path = env.get('PATH')
    if path is not None:
        parts = path.split(os.pathsep)
        # Remove Windows utilities from PATH, they are useless at best and
        # some of them (find) may be confused with other utilities.
        parts = [p for p in parts if 'system32' not in p.lower()]
        env['PATH'] = os.pathsep.join(parts)
    if env.get('HOME') is None:
        # Several utilities, including cvsps, cannot work without
        # a defined HOME directory.
        env['HOME'] = os.path.expanduser('~')
    return env

def _sh(cwd, shargs, cmdargs, options, debugflags=None, env=None):
    if os.environ.get('PYSH_TEXT') != '1':
        import msvcrt
        for fp in (sys.stdin, sys.stdout, sys.stderr):
            msvcrt.setmode(fp.fileno(), os.O_BINARY)

    hgbin = os.environ.get('PYSH_HGTEXT') != '1'

    if debugflags is None:
        debugflags = []
        if options.debug_parsing: debugflags.append('debug-parsing')
        if options.debug_utility: debugflags.append('debug-utility')
        if options.debug_cmd: debugflags.append('debug-cmd')
        if options.debug_tree: debugflags.append('debug-tree')

    if env is None:
        env = fixenv(dict(os.environ))
    if cwd is None:
        cwd = os.getcwd()

    if not cmdargs:
        # Nothing to do
        return 0

    ast = None
    command_file = None
    if options.command_string:
        input = cmdargs[0]
        if not options.ast:
            input += '\n'
        else:
            args, input = interp.decodeargs(input), None
            env, ast = args
            cwd = env.get('PWD', cwd)
    else:
        command_file = cmdargs[0]
        arguments = cmdargs[1:]

        prefix = interp.resolve_shebang(command_file, ignoreshell=True)
        if prefix:
            input = ' '.join(prefix + [command_file] + arguments)
        else:
            # Read commands from file
            f = file(command_file)
            try:
                # Trailing newline to help the parser
                input = f.read() + '\n'
            finally:
                f.close()

    redirect = None
    try:
        if options.redirected:
            stdout = sys.stdout
            stderr = stdout
        elif options.redirect_to:
            redirect = open(options.redirect_to, 'wb')
            stdout = redirect
            stderr = redirect
        else:
            stdout = sys.stdout
            stderr = sys.stderr

        # TODO: set arguments to environment variables
        opts = interp.Options()
        opts.hgbinary = hgbin
        ip = interp.Interpreter(cwd, debugflags, stdout=stdout, stderr=stderr,
                                opts=opts)
        try:
            # Export given environment in shell object
            for k,v in env.iteritems():
                ip.get_env().export(k,v)
            return ip.execute_script(input, ast, scriptpath=command_file)
        finally:
            ip.close()
    finally:
        if redirect is not None:
            redirect.close()

def sh(cwd=None, args=None, debugflags=None, env=None):
    if args is None:
        args = sys.argv[1:]
    shargs, cmdargs = split_args(args)
    options, shargs = SH_OPT.parse_args(shargs)

    if options.profile:
        import lsprof
        p = lsprof.Profiler()
        p.enable(subcalls=True)
        try:
            return _sh(cwd, shargs, cmdargs, options, debugflags, env)
        finally:
            p.disable()
            stats = lsprof.Stats(p.getstats())
            stats.sort()
            stats.pprint(top=10, file=sys.stderr, climit=5)
    else:
        return _sh(cwd, shargs, cmdargs, options, debugflags, env)

def main():
    sys.exit(sh())

if __name__=='__main__':
    main()
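
The sh() entry point can be driven programmatically as well as from the command line; a minimal sketch, assuming pysh and its interp module are on sys.path:

    import pysh

    # Equivalent to running: pysh -c 'echo hello'
    status = pysh.sh(args=['-c', 'echo hello'])
    print status   # exit status of the executed script

split_args() treats the leading dash options as shell arguments and everything after as the command, which is why the '-c' flag and the command string can be passed in one list.
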
@@ -1,888 +0,0 @@
|
||||
# pyshlex.py - PLY compatible lexer for pysh.
|
||||
#
|
||||
# Copyright 2007 Patrick Mezard
|
||||
#
|
||||
# This software may be used and distributed according to the terms
|
||||
# of the GNU General Public License, incorporated herein by reference.
|
||||
|
||||
# TODO:
|
||||
# - review all "char in 'abc'" snippets: the empty string can be matched
|
||||
# - test line continuations within quoted/expansion strings
|
||||
# - eof is buggy wrt sublexers
|
||||
# - the lexer cannot really work in pull mode as it would be required to run
|
||||
# PLY in pull mode. It was designed to work incrementally and it would not be
|
||||
# that hard to enable pull mode.
|
||||
import re
|
||||
try:
|
||||
s = set()
|
||||
del s
|
||||
except NameError:
|
||||
from Set import Set as set
|
||||
|
||||
from ply import lex
|
||||
from sherrors import *
|
||||
|
||||
class NeedMore(Exception):
|
||||
pass
|
||||
|
||||
def is_blank(c):
|
||||
return c in (' ', '\t')
|
||||
|
||||
_RE_DIGITS = re.compile(r'^\d+$')
|
||||
|
||||
def are_digits(s):
|
||||
return _RE_DIGITS.search(s) is not None
|
||||
|
||||
_OPERATORS = dict([
|
||||
('&&', 'AND_IF'),
|
||||
('||', 'OR_IF'),
|
||||
(';;', 'DSEMI'),
|
||||
('<<', 'DLESS'),
|
||||
('>>', 'DGREAT'),
|
||||
('<&', 'LESSAND'),
|
||||
('>&', 'GREATAND'),
|
||||
('<>', 'LESSGREAT'),
|
||||
('<<-', 'DLESSDASH'),
|
||||
('>|', 'CLOBBER'),
|
||||
('&', 'AMP'),
|
||||
(';', 'COMMA'),
|
||||
('<', 'LESS'),
|
||||
('>', 'GREATER'),
|
||||
('(', 'LPARENS'),
|
||||
(')', 'RPARENS'),
|
||||
])
|
||||
|
||||
#Make a function to silence pychecker "Local variable shadows global"
|
||||
def make_partial_ops():
|
||||
partials = {}
|
||||
for k in _OPERATORS:
|
||||
for i in range(1, len(k)+1):
|
||||
partials[k[:i]] = None
|
||||
return partials
|
||||
|
||||
_PARTIAL_OPERATORS = make_partial_ops()
|
||||
|
||||
def is_partial_op(s):
|
||||
"""Return True if s matches a non-empty subpart of an operator starting
|
||||
at its first character.
|
||||
"""
|
||||
return s in _PARTIAL_OPERATORS
|
||||
|
||||
def is_op(s):
|
||||
"""If s matches an operator, returns the operator identifier. Return None
|
||||
otherwise.
|
||||
"""
|
||||
return _OPERATORS.get(s)
|
||||
|
||||
_RESERVEDS = dict([
|
||||
('if', 'If'),
|
||||
('then', 'Then'),
|
||||
('else', 'Else'),
|
||||
('elif', 'Elif'),
|
||||
('fi', 'Fi'),
|
||||
('do', 'Do'),
|
||||
('done', 'Done'),
|
||||
('case', 'Case'),
|
||||
('esac', 'Esac'),
|
||||
('while', 'While'),
|
||||
('until', 'Until'),
|
||||
('for', 'For'),
|
||||
('{', 'Lbrace'),
|
||||
('}', 'Rbrace'),
|
||||
('!', 'Bang'),
|
||||
('in', 'In'),
|
||||
('|', 'PIPE'),
|
||||
])
|
||||
|
||||
def get_reserved(s):
|
||||
return _RESERVEDS.get(s)
|
||||
|
||||
_RE_NAME = re.compile(r'^[0-9a-zA-Z_]+$')
|
||||
|
||||
def is_name(s):
|
||||
return _RE_NAME.search(s) is not None
|
||||
|
||||
def find_chars(seq, chars):
|
||||
for i,v in enumerate(seq):
|
||||
if v in chars:
|
||||
return i,v
|
||||
return -1, None
|
||||
|
||||
class WordLexer:
|
||||
"""WordLexer parse quoted or expansion expressions and return an expression
|
||||
tree. The input string can be any well formed sequence beginning with quoting
|
||||
or expansion character. Embedded expressions are handled recursively. The
|
||||
resulting tree is made of lists and strings. Lists represent quoted or
|
||||
expansion expressions. Each list first element is the opening separator,
|
||||
the last one the closing separator. In-between can be any number of strings
|
||||
or lists for sub-expressions. Non quoted/expansion expression can written as
|
||||
strings or as lists with empty strings as starting and ending delimiters.
|
||||
"""
|
||||
|
||||
NAME_CHARSET = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_'
|
||||
NAME_CHARSET = dict(zip(NAME_CHARSET, NAME_CHARSET))
|
||||
|
||||
SPECIAL_CHARSET = '@*#?-$!0'
|
||||
|
||||
#Characters which can be escaped depends on the current delimiters
|
||||
ESCAPABLE = {
|
||||
'`': set(['$', '\\', '`']),
|
||||
'"': set(['$', '\\', '`', '"']),
|
||||
"'": set(),
|
||||
}
|
||||
|
||||
def __init__(self, heredoc = False):
|
||||
# _buffer is the unprocessed input characters buffer
|
||||
self._buffer = []
|
||||
# _stack is empty or contains a quoted list being processed
|
||||
# (this is the DFS path to the quoted expression being evaluated).
|
||||
self._stack = []
|
||||
self._escapable = None
|
||||
# True when parsing unquoted here documents
|
||||
self._heredoc = heredoc
|
||||
|
||||
def add(self, data, eof=False):
|
||||
"""Feed the lexer with more data. If the quoted expression can be
|
||||
delimited, return a tuple (expr, remaining) containing the expression
|
||||
tree and the unconsumed data.
|
||||
Otherwise, raise NeedMore.
|
||||
"""
|
||||
self._buffer += list(data)
|
||||
self._parse(eof)
|
||||
|
||||
result = self._stack[0]
|
||||
remaining = ''.join(self._buffer)
|
||||
self._stack = []
|
||||
self._buffer = []
|
||||
return result, remaining
|
||||
|
||||
def _is_escapable(self, c, delim=None):
|
||||
if delim is None:
|
||||
if self._heredoc:
|
||||
# Backslashes works as if they were double quoted in unquoted
|
||||
# here-documents
|
||||
delim = '"'
|
||||
else:
|
||||
if len(self._stack)<=1:
|
||||
return True
|
||||
delim = self._stack[-2][0]
|
||||
|
||||
        escapables = self.ESCAPABLE.get(delim, None)
        return escapables is None or c in escapables

    def _parse_squote(self, buf, result, eof):
        if not buf:
            raise NeedMore()
        try:
            pos = buf.index("'")
        except ValueError:
            raise NeedMore()
        result[-1] += ''.join(buf[:pos])
        result += ["'"]
        return pos+1, True

    def _parse_bquote(self, buf, result, eof):
        if not buf:
            raise NeedMore()

        if buf[0]=='\n':
            #Remove line continuations
            result[:] = ['', '', '']
        elif self._is_escapable(buf[0]):
            result[-1] += buf[0]
            result += ['']
        else:
            #Keep as such
            result[:] = ['', '\\'+buf[0], '']

        return 1, True

    def _parse_dquote(self, buf, result, eof):
        if not buf:
            raise NeedMore()
        pos, sep = find_chars(buf, '$\\`"')
        if pos==-1:
            raise NeedMore()

        result[-1] += ''.join(buf[:pos])
        if sep=='"':
            result += ['"']
            return pos+1, True
        else:
            #Keep everything until the separator and defer processing
            return pos, False

    def _parse_command(self, buf, result, eof):
        if not buf:
            raise NeedMore()

        chars = '$\\`"\''
        if result[0] == '$(':
            chars += ')'
        pos, sep = find_chars(buf, chars)
        if pos == -1:
            raise NeedMore()

        result[-1] += ''.join(buf[:pos])
        if (result[0]=='$(' and sep==')') or (result[0]=='`' and sep=='`'):
            result += [sep]
            return pos+1, True
        else:
            return pos, False

    def _parse_parameter(self, buf, result, eof):
        if not buf:
            raise NeedMore()

        pos, sep = find_chars(buf, '$\\`"\'}')
        if pos==-1:
            raise NeedMore()

        result[-1] += ''.join(buf[:pos])
        if sep=='}':
            result += [sep]
            return pos+1, True
        else:
            return pos, False

    def _parse_dollar(self, buf, result, eof):
        sep = result[0]
        if sep=='$':
            if not buf:
                #TODO: handle empty $
                raise NeedMore()
            if buf[0]=='(':
                if len(buf)==1:
                    raise NeedMore()

                if buf[1]=='(':
                    result[0] = '$(('
                    buf[:2] = []
                else:
                    result[0] = '$('
                    buf[:1] = []

            elif buf[0]=='{':
                result[0] = '${'
                buf[:1] = []
            else:
                if buf[0] in self.SPECIAL_CHARSET:
                    result[-1] = buf[0]
                    read = 1
                else:
                    for read,c in enumerate(buf):
                        if c not in self.NAME_CHARSET:
                            break
                    else:
                        if not eof:
                            raise NeedMore()
                        read += 1

                    result[-1] += ''.join(buf[0:read])

                if not result[-1]:
                    result[:] = ['', result[0], '']
                else:
                    result += ['']
                return read,True

        sep = result[0]
        if sep=='$(':
            parsefunc = self._parse_command
        elif sep=='${':
            parsefunc = self._parse_parameter
        else:
            raise NotImplementedError()

        pos, closed = parsefunc(buf, result, eof)
        return pos, closed

    def _parse(self, eof):
        buf = self._buffer
        stack = self._stack
        recurse = False

        while 1:
            if not stack or recurse:
                if not buf:
                    raise NeedMore()
                if buf[0] not in ('"\\`$\''):
                    raise ShellSyntaxError('Invalid quoted string sequence')
                stack.append([buf[0], ''])
                buf[:1] = []
                recurse = False

            result = stack[-1]
            if result[0]=="'":
                parsefunc = self._parse_squote
            elif result[0]=='\\':
                parsefunc = self._parse_bquote
            elif result[0]=='"':
                parsefunc = self._parse_dquote
            elif result[0]=='`':
                parsefunc = self._parse_command
            elif result[0][0]=='$':
                parsefunc = self._parse_dollar
            else:
                raise NotImplementedError()

            read, closed = parsefunc(buf, result, eof)

            buf[:read] = []
            if closed:
                if len(stack)>1:
                    #Merge in parent expression
                    parsed = stack.pop()
                    stack[-1] += [parsed]
                    stack[-1] += ['']
                else:
                    break
            else:
                recurse = True
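
# A minimal usage sketch (the _example_ helper is hypothetical and not used
# elsewhere; everything it calls is defined in this file): WordLexer.add()
# consumes a leading quoted or expansion span and hands back the word tree
# plus whatever input it did not consume once the span is closed.
def _example_wordlexer():
    lexer = WordLexer()
    # The double-quoted field is parsed into a tree; ' tail' should come back
    # unconsumed. An unclosed quote would raise NeedMore instead.
    return lexer.add('"hello $USER" tail', True)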
def normalize_wordtree(wtree):
    """Fold back every literal sequence (delimited with empty strings) into
    the parent sequence.
    """
    def normalize(wtree):
        result = []
        for part in wtree[1:-1]:
            if isinstance(part, list):
                part = normalize(part)
                if part[0]=='':
                    #Move the part content back at current level
                    result += part[1:-1]
                    continue
            elif not part:
                #Remove empty strings
                continue
            result.append(part)
        if not result:
            result = ['']
        return [wtree[0]] + result + [wtree[-1]]

    return normalize(wtree)


def make_wordtree(token, here_document=False):
    """Parse a delimited token and return a tree similar to the ones returned by
    WordLexer. token may contain any combination of expansion/quoted fields and
    non-ones.
    """
    tree = ['']
    remaining = token
    delimiters = '\\$`'
    if not here_document:
        delimiters += '\'"'

    while 1:
        pos, sep = find_chars(remaining, delimiters)
        if pos==-1:
            tree += [remaining, '']
            return normalize_wordtree(tree)
        tree.append(remaining[:pos])
        remaining = remaining[pos:]

        try:
            result, remaining = WordLexer(heredoc = here_document).add(remaining, True)
        except NeedMore:
            raise ShellSyntaxError('Invalid token "%s"' % token)
        tree.append(result)


def wordtree_as_string(wtree):
    """Rewrite an expression tree generated by make_wordtree as a string."""
    def visit(node, output):
        for child in node:
            if isinstance(child, list):
                visit(child, output)
            else:
                output.append(child)

    output = []
    visit(wtree, output)
    return ''.join(output)


def unquote_wordtree(wtree):
    """Fold the word tree while removing quotes everywhere. Other expansion
    sequences are joined as such.
    """
    def unquote(wtree):
        unquoted = []
        if wtree[0] in ('', "'", '"', '\\'):
            wtree = wtree[1:-1]

        for part in wtree:
            if isinstance(part, list):
                part = unquote(part)
            unquoted.append(part)
        return ''.join(unquoted)

    return unquote(wtree)
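
# A small illustration (hypothetical helper): make_wordtree() splits a token
# into literal and quoted/expansion fields, and unquote_wordtree() folds the
# tree back with the quoting characters stripped.
def _example_unquote():
    wtree = make_wordtree("'a b'c")
    # Expected to give back the unquoted string "a bc".
    return unquote_wordtree(wtree)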
class HereDocLexer:
    """HereDocLexer delimits whatever comes from the here-document starting newline
    not included to the closing delimiter line included.
    """
    def __init__(self, op, delim):
        assert op in ('<<', '<<-')
        if not delim:
            raise ShellSyntaxError('invalid here document delimiter %s' % str(delim))

        self._op = op
        self._delim = delim
        self._buffer = []
        self._token = []

    def add(self, data, eof):
        """If the here-document was delimited, return a tuple (content, remaining).
        Raise NeedMore() otherwise.
        """
        self._buffer += list(data)
        self._parse(eof)
        token = ''.join(self._token)
        remaining = ''.join(self._buffer)
        self._token, self._remaining = [], []
        return token, remaining

    def _parse(self, eof):
        while 1:
            #Look for first unescaped newline. Quotes may be ignored
            escaped = False
            for i,c in enumerate(self._buffer):
                if escaped:
                    escaped = False
                elif c=='\\':
                    escaped = True
                elif c=='\n':
                    break
            else:
                i = -1

            if i==-1 or self._buffer[i]!='\n':
                if not eof:
                    raise NeedMore()
                #No more data, maybe the last line is closing delimiter
                line = ''.join(self._buffer)
                eol = ''
                self._buffer[:] = []
            else:
                line = ''.join(self._buffer[:i])
                eol = self._buffer[i]
                self._buffer[:i+1] = []

            if self._op=='<<-':
                line = line.lstrip('\t')

            if line==self._delim:
                break

            self._token += [line, eol]
            if i==-1:
                break
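
# A short sketch (hypothetical helper): the here-document body runs up to,
# but not including, the line holding the delimiter; the rest of the input
# is handed back untouched.
def _example_heredoc():
    lexer = HereDocLexer('<<', 'EOF')
    # content should be 'line1\nline2\n' and remaining 'after'.
    content, remaining = lexer.add('line1\nline2\nEOF\nafter', True)
    return content, remaining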
class Token:
    #TODO: check this is still in use
    OPERATOR = 'OPERATOR'
    WORD = 'WORD'

    def __init__(self):
        self.value = ''
        self.type = None

    def __getitem__(self, key):
        #Behave like a two-element tuple
        if key==0:
            return self.type
        if key==1:
            return self.value
        raise IndexError(key)


class HereDoc:
    def __init__(self, op, name=None):
        self.op = op
        self.name = name
        self.pendings = []

TK_COMMA = 'COMMA'
TK_AMPERSAND = 'AMP'
TK_OP = 'OP'
TK_TOKEN = 'TOKEN'
TK_COMMENT = 'COMMENT'
TK_NEWLINE = 'NEWLINE'
TK_IONUMBER = 'IO_NUMBER'
TK_ASSIGNMENT = 'ASSIGNMENT_WORD'
TK_HERENAME = 'HERENAME'

class Lexer:
    """Main lexer.

    Call add() until the script AST is returned.
    """
    # Here-document handling makes the whole thing more complex because they basically
    # force tokens to be reordered: here-content must come right after the operator
    # and the here-document name, while some other tokens might be following the
    # here-document expression on the same line.
    #
    # So, here-doc states are basically:
    #   *self._state==ST_NORMAL
    #       - self._heredoc.op is None: no here-document
    #       - self._heredoc.op is not None but name is: here-document operator matched,
    #         waiting for the document name/delimiter
    #       - self._heredoc.op and name are not None: here-document is ready, following
    #         tokens are being stored and will be pushed again when the document is
    #         completely parsed.
    #   *self._state==ST_HEREDOC
    #       - The here-document is being delimited by self._herelexer. Once it is done
    #         the content is pushed in front of the pending token list then all these
    #         tokens are pushed once again.
    ST_NORMAL = 'ST_NORMAL'
    ST_OP = 'ST_OP'
    ST_BACKSLASH = 'ST_BACKSLASH'
    ST_QUOTED = 'ST_QUOTED'
    ST_COMMENT = 'ST_COMMENT'
    ST_HEREDOC = 'ST_HEREDOC'

    #Match end of backquote strings
    RE_BACKQUOTE_END = re.compile(r'(?<!\\)(`)')

    def __init__(self, parent_state = None):
        self._input = []
        self._pos = 0

        self._token = ''
        self._type = TK_TOKEN

        self._state = self.ST_NORMAL
        self._parent_state = parent_state
        self._wordlexer = None

        self._heredoc = HereDoc(None)
        self._herelexer = None

        ### Following attributes are not used for delimiting tokens and can safely
        ### be changed after here-document detection (see _push_token)

        # Count the number of tokens following a 'For' reserved word. Needed to
        # return an 'In' reserved word if it comes in third place.
        self._for_count = None

    def add(self, data, eof=False):
        """Feed the lexer with data.

        When eof is set to True, returns unconsumed data or raises if the lexer
        is in the middle of a delimiting operation.
        Raises NeedMore otherwise.
        """
        self._input += list(data)
        self._parse(eof)
        self._input[:self._pos] = []
        return ''.join(self._input)

    def _parse(self, eof):
        while self._state:
            if self._pos>=len(self._input):
                if not eof:
                    raise NeedMore()
                elif self._state not in (self.ST_OP, self.ST_QUOTED, self.ST_HEREDOC):
                    #Delimit the current token and leave cleanly
                    self._push_token('')
                    break
                else:
                    #Let the sublexers handle the eof themselves
                    pass

            if self._state==self.ST_NORMAL:
                self._parse_normal()
            elif self._state==self.ST_COMMENT:
                self._parse_comment()
            elif self._state==self.ST_OP:
                self._parse_op(eof)
            elif self._state==self.ST_QUOTED:
                self._parse_quoted(eof)
            elif self._state==self.ST_HEREDOC:
                self._parse_heredoc(eof)
            else:
                assert False, "Unknown state " + str(self._state)

        if self._heredoc.op is not None:
            raise ShellSyntaxError('missing here-document delimiter')

    def _parse_normal(self):
        c = self._input[self._pos]
        if c=='\n':
            self._push_token(c)
            self._token = c
            self._type = TK_NEWLINE
            self._push_token('')
            self._pos += 1
        elif c in ('\\', '\'', '"', '`', '$'):
            self._state = self.ST_QUOTED
        elif is_partial_op(c):
            self._push_token(c)

            self._type = TK_OP
            self._token += c
            self._pos += 1
            self._state = self.ST_OP
        elif is_blank(c):
            self._push_token(c)

            #Discard blanks
            self._pos += 1
        elif self._token:
            self._token += c
            self._pos += 1
        elif c=='#':
            self._state = self.ST_COMMENT
            self._type = TK_COMMENT
            self._pos += 1
        else:
            self._pos += 1
            self._token += c

    def _parse_op(self, eof):
        assert self._token

        while 1:
            if self._pos>=len(self._input):
                if not eof:
                    raise NeedMore()
                c = ''
            else:
                c = self._input[self._pos]

            op = self._token + c
            if c and is_partial_op(op):
                #Still parsing an operator
                self._token = op
                self._pos += 1
            else:
                #End of operator
                self._push_token(c)
                self._state = self.ST_NORMAL
                break

    def _parse_comment(self):
        while 1:
            if self._pos>=len(self._input):
                raise NeedMore()

            c = self._input[self._pos]
            if c=='\n':
                #End of comment, do not consume the end of line
                self._state = self.ST_NORMAL
                break
            else:
                self._token += c
                self._pos += 1

    def _parse_quoted(self, eof):
        """Precondition: the starting backquote/dollar is still in the input queue."""
        if not self._wordlexer:
            self._wordlexer = WordLexer()

        if self._pos<len(self._input):
            #Transfer input queue characters into the subparser
            input = self._input[self._pos:]
            self._pos += len(input)

        wtree, remaining = self._wordlexer.add(input, eof)
        self._wordlexer = None
        self._token += wordtree_as_string(wtree)

        #Put unparsed characters back in the input queue
        if remaining:
            self._input[self._pos:self._pos] = list(remaining)
        self._state = self.ST_NORMAL

    def _parse_heredoc(self, eof):
        assert not self._token

        if self._herelexer is None:
            self._herelexer = HereDocLexer(self._heredoc.op, self._heredoc.name)

        if self._pos<len(self._input):
            #Transfer input queue characters into the subparser
            input = self._input[self._pos:]
            self._pos += len(input)

        self._token, remaining = self._herelexer.add(input, eof)

        #Reset here-document state
        self._herelexer = None
        heredoc, self._heredoc = self._heredoc, HereDoc(None)
        if remaining:
            self._input[self._pos:self._pos] = list(remaining)
        self._state = self.ST_NORMAL

        #Push pending tokens
        heredoc.pendings[:0] = [(self._token, self._type, heredoc.name)]
        for token, type, delim in heredoc.pendings:
            self._token = token
            self._type = type
            self._push_token(delim)

    def _push_token(self, delim):
        if not self._token:
            return 0

        if self._heredoc.op is not None:
            if self._heredoc.name is None:
                #Here-document name
                if self._type!=TK_TOKEN:
                    raise ShellSyntaxError("expecting here-document name, got '%s'" % self._token)
                self._heredoc.name = unquote_wordtree(make_wordtree(self._token))
                self._type = TK_HERENAME
            else:
                #Capture all tokens until the newline starting the here-document
                if self._type==TK_NEWLINE:
                    assert self._state==self.ST_NORMAL
                    self._state = self.ST_HEREDOC

                self._heredoc.pendings.append((self._token, self._type, delim))

            self._token = ''
            self._type = TK_TOKEN
            return 1

        # BEWARE: do not change parser state from here to the end of the function:
        # when parsing between a here-document operator and the end of the line,
        # tokens are stored in self._heredoc.pendings. Therefore, they will not
        # reach the section below.

        #Check operators
        if self._type==TK_OP:
            #False positive because of partial op matching
            op = is_op(self._token)
            if not op:
                self._type = TK_TOKEN
            else:
                #Map to the specific operator
                self._type = op
                if self._token in ('<<', '<<-'):
                    #Done here rather than in _parse_op because there is no need
                    #to change the parser state since we are still waiting for
                    #the here-document name
                    if self._heredoc.op is not None:
                        raise ShellSyntaxError("syntax error near token '%s'" % self._token)
                    assert self._heredoc.op is None
                    self._heredoc.op = self._token

        if self._type==TK_TOKEN:
            if '=' in self._token and not delim:
                if self._token.startswith('='):
                    #Token is a WORD... a TOKEN that is.
                    pass
                else:
                    prev = self._token[:self._token.find('=')]
                    if is_name(prev):
                        self._type = TK_ASSIGNMENT
                    else:
                        #Just a token (unspecified)
                        pass
            else:
                reserved = get_reserved(self._token)
                if reserved is not None:
                    if reserved=='In' and self._for_count!=2:
                        #Sorry, not a reserved word after all
                        pass
                    else:
                        self._type = reserved
                        if reserved in ('For', 'Case'):
                            self._for_count = 0
                elif are_digits(self._token) and delim in ('<', '>'):
                    #Detect IO_NUMBER
                    self._type = TK_IONUMBER
                elif self._token==';':
                    self._type = TK_COMMA
                elif self._token=='&':
                    self._type = TK_AMPERSAND
        elif self._type==TK_COMMENT:
            #Comments are not part of the sh grammar, ignore them
            self._token = ''
            self._type = TK_TOKEN
            return 0

        if self._for_count is not None:
            #Track token count in 'For' expression to detect 'In' reserved words.
            #Can only be in third position, no need to go beyond
            self._for_count += 1
            if self._for_count==3:
                self._for_count = None

        self.on_token((self._token, self._type))
        self._token = ''
        self._type = TK_TOKEN
        return 1

    def on_token(self, token):
        raise NotImplementedError

tokens = [
    TK_TOKEN,
#   To silence yacc unused token warnings
#   TK_COMMENT,
    TK_NEWLINE,
    TK_IONUMBER,
    TK_ASSIGNMENT,
    TK_HERENAME,
]

#Add specific operators
tokens += _OPERATORS.values()
#Add reserved words
tokens += _RESERVEDS.values()

class PLYLexer(Lexer):
    """Bridge Lexer and PLY lexer interface."""
    def __init__(self):
        Lexer.__init__(self)
        self._tokens = []
        self._current = 0
        self.lineno = 0

    def on_token(self, token):
        value, type = token

        self.lineno = 0
        t = lex.LexToken()
        t.value = value
        t.type = type
        t.lexer = self
        t.lexpos = 0
        t.lineno = 0

        self._tokens.append(t)

    def is_empty(self):
        return not bool(self._tokens)

    #PLY compliant interface
    def token(self):
        if self._current>=len(self._tokens):
            return None
        t = self._tokens[self._current]
        self._current += 1
        return t


def get_tokens(s):
    """Parse the input string and return a tuple (tokens, unprocessed) where
    tokens is a list of parsed tokens and unprocessed is the part of the input
    string left untouched by the lexer.
    """
    lexer = PLYLexer()
    untouched = lexer.add(s, True)
    tokens = []
    while 1:
        token = lexer.token()
        if token is None:
            break
        tokens.append(token)

    tokens = [(t.value, t.type) for t in tokens]
    return tokens, untouched
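
# A quick sketch (hypothetical helper) of the entry point above: _push_token()
# classifies 'FOO=bar' as an ASSIGNMENT_WORD and the '2' before '>' as an
# IO_NUMBER; the trailing newline lets the final token be delimited.
def _example_get_tokens():
    return get_tokens('FOO=bar echo $FOO 2> log\n')
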
@@ -1,779 +0,0 @@
# pyshyacc.py - PLY grammar definition for pysh
#
# Copyright 2007 Patrick Mezard
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

"""PLY grammar file.
"""
import os.path
import sys

import pyshlex
tokens = pyshlex.tokens

from ply import yacc
import sherrors

class IORedirect:
    def __init__(self, op, filename, io_number=None):
        self.op = op
        self.filename = filename
        self.io_number = io_number

class HereDocument:
    def __init__(self, op, name, content, io_number=None):
        self.op = op
        self.name = name
        self.content = content
        self.io_number = io_number

def make_io_redirect(p):
    """Make an IORedirect instance from the input 'io_redirect' production."""
    name, io_number, io_target = p
    assert name=='io_redirect'

    if io_target[0]=='io_file':
        io_type, io_op, io_file = io_target
        return IORedirect(io_op, io_file, io_number)
    elif io_target[0]=='io_here':
        io_type, io_op, io_name, io_content = io_target
        return HereDocument(io_op, io_name, io_content, io_number)
    else:
        assert False, "Invalid IO redirection token %s" % repr(io_target[0])

class SimpleCommand:
    """
    assigns contains (name, value) pairs.
    """
    def __init__(self, words, redirs, assigns):
        self.words = list(words)
        self.redirs = list(redirs)
        self.assigns = list(assigns)

class Pipeline:
    def __init__(self, commands, reverse_status=False):
        self.commands = list(commands)
        assert self.commands    #Grammar forbids this
        self.reverse_status = reverse_status

class AndOr:
    def __init__(self, op, left, right):
        self.op = str(op)
        self.left = left
        self.right = right

class ForLoop:
    def __init__(self, name, items, cmds):
        self.name = str(name)
        self.items = list(items)
        self.cmds = list(cmds)

class WhileLoop:
    def __init__(self, condition, cmds):
        self.condition = list(condition)
        self.cmds = list(cmds)

class UntilLoop:
    def __init__(self, condition, cmds):
        self.condition = list(condition)
        self.cmds = list(cmds)

class FunDef:
    def __init__(self, name, body):
        self.name = str(name)
        self.body = body

class BraceGroup:
    def __init__(self, cmds):
        self.cmds = list(cmds)

class IfCond:
    def __init__(self, cond, if_cmds, else_cmds):
        self.cond = list(cond)
        self.if_cmds = if_cmds
        self.else_cmds = else_cmds

class Case:
    def __init__(self, name, items):
        self.name = name
        self.items = items

class SubShell:
    def __init__(self, cmds):
        self.cmds = cmds

class RedirectList:
    def __init__(self, cmd, redirs):
        self.cmd = cmd
        self.redirs = list(redirs)

def get_production(productions, ptype):
    """productions must be a list of production tuples like (name, obj) where
    name is the production string identifier.
    Return the first production named 'ptype'. Raise KeyError if none can be
    found.
    """
    for production in productions:
        if production is not None and production[0]==ptype:
            return production
    raise KeyError(ptype)

#-------------------------------------------------------------------------------
# PLY grammar definition
#-------------------------------------------------------------------------------

def p_multiple_commands(p):
    """multiple_commands : newline_sequence
                         | complete_command
                         | multiple_commands complete_command"""
    if len(p)==2:
        if p[1] is not None:
            p[0] = [p[1]]
        else:
            p[0] = []
    else:
        p[0] = p[1] + [p[2]]

def p_complete_command(p):
    """complete_command : list separator
                        | list"""
    if len(p)==3 and p[2] and p[2][1] == '&':
        p[0] = ('async', p[1])
    else:
        p[0] = p[1]

def p_list(p):
    """list : list separator_op and_or
            | and_or"""
    if len(p)==2:
        p[0] = [p[1]]
    else:
        #if p[2]!=';':
        #    raise NotImplementedError('AND-OR list asynchronous execution is not implemented')
        p[0] = p[1] + [p[3]]

def p_and_or(p):
    """and_or : pipeline
              | and_or AND_IF linebreak pipeline
              | and_or OR_IF linebreak pipeline"""
    if len(p)==2:
        p[0] = p[1]
    else:
        p[0] = ('and_or', AndOr(p[2], p[1], p[4]))

def p_maybe_bang_word(p):
    """maybe_bang_word : Bang"""
    p[0] = ('maybe_bang_word', p[1])

def p_pipeline(p):
    """pipeline : pipe_sequence
                | bang_word pipe_sequence"""
    if len(p)==3:
        p[0] = ('pipeline', Pipeline(p[2][1:], True))
    else:
        p[0] = ('pipeline', Pipeline(p[1][1:]))

def p_pipe_sequence(p):
    """pipe_sequence : command
                     | pipe_sequence PIPE linebreak command"""
    if len(p)==2:
        p[0] = ['pipe_sequence', p[1]]
    else:
        p[0] = p[1] + [p[4]]

def p_command(p):
    """command : simple_command
               | compound_command
               | compound_command redirect_list
               | function_definition"""

    if p[1][0] in ( 'simple_command',
                    'for_clause',
                    'while_clause',
                    'until_clause',
                    'case_clause',
                    'if_clause',
                    'function_definition',
                    'subshell',
                    'brace_group',):
        if len(p) == 2:
            p[0] = p[1]
        else:
            p[0] = ('redirect_list', RedirectList(p[1], p[2][1:]))
    else:
        raise NotImplementedError('%s command is not implemented' % repr(p[1][0]))

def p_compound_command(p):
    """compound_command : brace_group
                        | subshell
                        | for_clause
                        | case_clause
                        | if_clause
                        | while_clause
                        | until_clause"""
    p[0] = p[1]

def p_subshell(p):
    """subshell : LPARENS compound_list RPARENS"""
    p[0] = ('subshell', SubShell(p[2][1:]))

def p_compound_list(p):
    """compound_list : term
                     | newline_list term
                     | term separator
                     | newline_list term separator"""
    productions = p[1:]
    try:
        sep = get_production(productions, 'separator')
        if sep[1]!=';':
            raise NotImplementedError()
    except KeyError:
        pass
    term = get_production(productions, 'term')
    p[0] = ['compound_list'] + term[1:]

def p_term(p):
    """term : term separator and_or
            | and_or"""
    if len(p)==2:
        p[0] = ['term', p[1]]
    else:
        if p[2] is not None and p[2][1] == '&':
            p[0] = ['term', ('async', p[1][1:])] + [p[3]]
        else:
            p[0] = p[1] + [p[3]]

def p_maybe_for_word(p):
    # Rearrange 'For' priority wrt TOKEN. See p_for_word
    """maybe_for_word : For"""
    p[0] = ('maybe_for_word', p[1])

def p_for_clause(p):
    """for_clause : for_word name linebreak do_group
                  | for_word name linebreak in sequential_sep do_group
                  | for_word name linebreak in wordlist sequential_sep do_group"""
    productions = p[1:]
    do_group = get_production(productions, 'do_group')
    try:
        items = get_production(productions, 'in')[1:]
    except KeyError:
        raise NotImplementedError('"in" omission is not implemented')

    try:
        items = get_production(productions, 'wordlist')[1:]
    except KeyError:
        items = []

    name = p[2]
    p[0] = ('for_clause', ForLoop(name, items, do_group[1:]))

def p_name(p):
    """name : token""" #Was NAME instead of token
    p[0] = p[1]

def p_in(p):
    """in : In"""
    p[0] = ('in', p[1])

def p_wordlist(p):
    """wordlist : wordlist token
                | token"""
    if len(p)==2:
        p[0] = ['wordlist', ('TOKEN', p[1])]
    else:
        p[0] = p[1] + [('TOKEN', p[2])]

def p_case_clause(p):
    """case_clause : Case token linebreak in linebreak case_list Esac
                   | Case token linebreak in linebreak case_list_ns Esac
                   | Case token linebreak in linebreak Esac"""
    if len(p) < 8:
        items = []
    else:
        items = p[6][1:]
    name = p[2]
    p[0] = ('case_clause', Case(name, [c[1] for c in items]))

def p_case_list_ns(p):
    """case_list_ns : case_list case_item_ns
                    | case_item_ns"""
    p_case_list(p)

def p_case_list(p):
    """case_list : case_list case_item
                 | case_item"""
    if len(p)==2:
        p[0] = ['case_list', p[1]]
    else:
        p[0] = p[1] + [p[2]]

def p_case_item_ns(p):
    """case_item_ns : pattern RPARENS linebreak
                    | pattern RPARENS compound_list linebreak
                    | LPARENS pattern RPARENS linebreak
                    | LPARENS pattern RPARENS compound_list linebreak"""
    p_case_item(p)

def p_case_item(p):
    """case_item : pattern RPARENS linebreak DSEMI linebreak
                 | pattern RPARENS compound_list DSEMI linebreak
                 | LPARENS pattern RPARENS linebreak DSEMI linebreak
                 | LPARENS pattern RPARENS compound_list DSEMI linebreak"""
    if len(p) < 7:
        name = p[1][1:]
    else:
        name = p[2][1:]

    try:
        cmds = get_production(p[1:], "compound_list")[1:]
    except KeyError:
        cmds = []

    p[0] = ('case_item', (name, cmds))

def p_pattern(p):
    """pattern : token
               | pattern PIPE token"""
    if len(p)==2:
        p[0] = ['pattern', ('TOKEN', p[1])]
    else:
        p[0] = p[1] + [('TOKEN', p[2])]

def p_maybe_if_word(p):
    # Rearrange 'If' priority wrt TOKEN. See p_if_word
    """maybe_if_word : If"""
    p[0] = ('maybe_if_word', p[1])

def p_maybe_then_word(p):
    # Rearrange 'Then' priority wrt TOKEN. See p_then_word
    """maybe_then_word : Then"""
    p[0] = ('maybe_then_word', p[1])

def p_if_clause(p):
    """if_clause : if_word compound_list then_word compound_list else_part Fi
                 | if_word compound_list then_word compound_list Fi"""
    else_part = []
    if len(p)==7:
        else_part = p[5]
    p[0] = ('if_clause', IfCond(p[2][1:], p[4][1:], else_part))

def p_else_part(p):
    """else_part : Elif compound_list then_word compound_list else_part
                 | Elif compound_list then_word compound_list
                 | Else compound_list"""
    if len(p)==3:
        p[0] = p[2][1:]
    else:
        else_part = []
        if len(p)==6:
            else_part = p[5]
        p[0] = ('elif', IfCond(p[2][1:], p[4][1:], else_part))

def p_while_clause(p):
    """while_clause : While compound_list do_group"""
    p[0] = ('while_clause', WhileLoop(p[2][1:], p[3][1:]))

def p_maybe_until_word(p):
    # Rearrange 'Until' priority wrt TOKEN. See p_until_word
    """maybe_until_word : Until"""
    p[0] = ('maybe_until_word', p[1])

def p_until_clause(p):
    """until_clause : until_word compound_list do_group"""
    p[0] = ('until_clause', UntilLoop(p[2][1:], p[3][1:]))

def p_function_definition(p):
    """function_definition : fname LPARENS RPARENS linebreak function_body"""
    p[0] = ('function_definition', FunDef(p[1], p[5]))

def p_function_body(p):
    """function_body : compound_command
                     | compound_command redirect_list"""
    if len(p)!=2:
        raise NotImplementedError('function redirection lists are not implemented')
    p[0] = p[1]

def p_fname(p):
    """fname : TOKEN""" #Was NAME instead of token
    p[0] = p[1]

def p_brace_group(p):
    """brace_group : Lbrace compound_list Rbrace"""
    p[0] = ('brace_group', BraceGroup(p[2][1:]))

def p_maybe_done_word(p):
    #See p_assignment_word for details.
    """maybe_done_word : Done"""
    p[0] = ('maybe_done_word', p[1])

def p_maybe_do_word(p):
    """maybe_do_word : Do"""
    p[0] = ('maybe_do_word', p[1])

def p_do_group(p):
    """do_group : do_word compound_list done_word"""
    #Do group contains a list of AndOr
    p[0] = ['do_group'] + p[2][1:]

def p_simple_command(p):
    """simple_command : cmd_prefix cmd_word cmd_suffix
                      | cmd_prefix cmd_word
                      | cmd_prefix
                      | cmd_name cmd_suffix
                      | cmd_name"""
    words, redirs, assigns = [], [], []
    for e in p[1:]:
        name = e[0]
        if name in ('cmd_prefix', 'cmd_suffix'):
            for sube in e[1:]:
                subname = sube[0]
                if subname=='io_redirect':
                    redirs.append(make_io_redirect(sube))
                elif subname=='ASSIGNMENT_WORD':
                    assigns.append(sube)
                else:
                    words.append(sube)
        elif name in ('cmd_word', 'cmd_name'):
            words.append(e)

    cmd = SimpleCommand(words, redirs, assigns)
    p[0] = ('simple_command', cmd)

def p_cmd_name(p):
    """cmd_name : TOKEN"""
    p[0] = ('cmd_name', p[1])

def p_cmd_word(p):
    """cmd_word : token"""
    p[0] = ('cmd_word', p[1])

def p_maybe_assignment_word(p):
    #See p_assignment_word for details.
    """maybe_assignment_word : ASSIGNMENT_WORD"""
    p[0] = ('maybe_assignment_word', p[1])

def p_cmd_prefix(p):
    """cmd_prefix : io_redirect
                  | cmd_prefix io_redirect
                  | assignment_word
                  | cmd_prefix assignment_word"""
    try:
        prefix = get_production(p[1:], 'cmd_prefix')
    except KeyError:
        prefix = ['cmd_prefix']

    try:
        value = get_production(p[1:], 'assignment_word')[1]
        value = ('ASSIGNMENT_WORD', value.split('=', 1))
    except KeyError:
        value = get_production(p[1:], 'io_redirect')
    p[0] = prefix + [value]

def p_cmd_suffix(p):
    """cmd_suffix : io_redirect
                  | cmd_suffix io_redirect
                  | token
                  | cmd_suffix token
                  | maybe_for_word
                  | cmd_suffix maybe_for_word
                  | maybe_done_word
                  | cmd_suffix maybe_done_word
                  | maybe_do_word
                  | cmd_suffix maybe_do_word
                  | maybe_until_word
                  | cmd_suffix maybe_until_word
                  | maybe_assignment_word
                  | cmd_suffix maybe_assignment_word
                  | maybe_if_word
                  | cmd_suffix maybe_if_word
                  | maybe_then_word
                  | cmd_suffix maybe_then_word
                  | maybe_bang_word
                  | cmd_suffix maybe_bang_word"""
    try:
        suffix = get_production(p[1:], 'cmd_suffix')
        token = p[2]
    except KeyError:
        suffix = ['cmd_suffix']
        token = p[1]

    if isinstance(token, tuple):
        if token[0]=='io_redirect':
            p[0] = suffix + [token]
        else:
            #Convert maybe_* to TOKEN if necessary
            p[0] = suffix + [('TOKEN', token[1])]
    else:
        p[0] = suffix + [('TOKEN', token)]

def p_redirect_list(p):
    """redirect_list : io_redirect
                     | redirect_list io_redirect"""
    if len(p) == 2:
        p[0] = ['redirect_list', make_io_redirect(p[1])]
    else:
        p[0] = p[1] + [make_io_redirect(p[2])]

def p_io_redirect(p):
    """io_redirect : io_file
                   | IO_NUMBER io_file
                   | io_here
                   | IO_NUMBER io_here"""
    if len(p)==3:
        p[0] = ('io_redirect', p[1], p[2])
    else:
        p[0] = ('io_redirect', None, p[1])

def p_io_file(p):
    #Return the tuple (operator, filename)
    """io_file : LESS filename
               | LESSAND filename
               | GREATER filename
               | GREATAND filename
               | DGREAT filename
               | LESSGREAT filename
               | CLOBBER filename"""
    #Extract the filename from the file
    p[0] = ('io_file', p[1], p[2][1])

def p_filename(p):
    #Return the filename
    """filename : TOKEN"""
    p[0] = ('filename', p[1])

def p_io_here(p):
    """io_here : DLESS here_end
               | DLESSDASH here_end"""
    p[0] = ('io_here', p[1], p[2][1], p[2][2])

def p_here_end(p):
    """here_end : HERENAME TOKEN"""
    p[0] = ('here_document', p[1], p[2])

def p_newline_sequence(p):
    # Nothing in the grammar can handle leading NEWLINE productions, so add
    # this one with the lowest possible priority relative to newline_list.
    """newline_sequence : newline_list"""
    p[0] = None

def p_newline_list(p):
    """newline_list : NEWLINE
                    | newline_list NEWLINE"""
    p[0] = None

def p_linebreak(p):
    """linebreak : newline_list
                 | empty"""
    p[0] = None

def p_separator_op(p):
    """separator_op : COMMA
                    | AMP"""
    p[0] = p[1]

def p_separator(p):
    """separator : separator_op linebreak
                 | newline_list"""
    if len(p)==2:
        #Ignore newlines
        p[0] = None
    else:
        #Keep the separator operator
        p[0] = ('separator', p[1])

def p_sequential_sep(p):
    """sequential_sep : COMMA linebreak
                      | newline_list"""
    p[0] = None

# Low priority TOKEN => for_word conversion.
# Let maybe_for_word be used as a token when necessary in higher priority
# rules.
def p_for_word(p):
    """for_word : maybe_for_word"""
    p[0] = p[1]

def p_if_word(p):
    """if_word : maybe_if_word"""
    p[0] = p[1]

def p_then_word(p):
    """then_word : maybe_then_word"""
    p[0] = p[1]

def p_done_word(p):
    """done_word : maybe_done_word"""
    p[0] = p[1]

def p_do_word(p):
    """do_word : maybe_do_word"""
    p[0] = p[1]

def p_until_word(p):
    """until_word : maybe_until_word"""
    p[0] = p[1]

def p_assignment_word(p):
    """assignment_word : maybe_assignment_word"""
    p[0] = ('assignment_word', p[1][1])

def p_bang_word(p):
    """bang_word : maybe_bang_word"""
    p[0] = ('bang_word', p[1][1])

def p_token(p):
    """token : TOKEN
             | Fi"""
    p[0] = p[1]

def p_empty(p):
    'empty :'
    p[0] = None

# Error rule for syntax errors
def p_error(p):
    msg = []
    w = msg.append
    w('%r\n' % p)
    w('followed by:\n')
    for i in range(5):
        n = yacc.token()
        if not n:
            break
        w('  %r\n' % n)
    raise sherrors.ShellSyntaxError(''.join(msg))

# Build the parser
try:
    import pyshtables
except ImportError:
    outputdir = os.path.dirname(__file__)
    if not os.access(outputdir, os.W_OK):
        outputdir = ''
    yacc.yacc(tabmodule = 'pyshtables', outputdir = outputdir, debug = 0)
else:
    yacc.yacc(tabmodule = 'pysh.pyshtables', write_tables = 0, debug = 0)


def parse(input, eof=False, debug=False):
    """Parse a whole script at once and return the generated AST and unconsumed
    data in a tuple.

    NOTE: eof is probably meaningless for now, the parser being unable to work
    in pull mode. It should be set to True.
    """
    lexer = pyshlex.PLYLexer()
    remaining = lexer.add(input, eof)
    if lexer.is_empty():
        return [], remaining
    if debug:
        debug = 2
    return yacc.parse(lexer=lexer, debug=debug), remaining
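
# A minimal sketch (hypothetical helper): parse a complete script in one call.
# As the docstring above notes, eof should be True since the parser cannot
# work in pull mode.
def _example_parse():
    ast, remaining = parse('echo hello && echo world\n', eof=True)
    return ast, remaining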
#-------------------------------------------------------------------------------
# AST rendering helpers
#-------------------------------------------------------------------------------

def format_commands(v):
    """Return a tree made of strings and lists. Makes command trees easier to
    display.
    """
    if isinstance(v, list):
        return [format_commands(c) for c in v]
    if isinstance(v, tuple):
        if len(v)==2 and isinstance(v[0], str) and not isinstance(v[1], str):
            if v[0] == 'async':
                return ['AsyncList', map(format_commands, v[1])]
            else:
                #Avoid decomposing tuples like ('pipeline', Pipeline(...))
                return format_commands(v[1])
        return format_commands(list(v))
    elif isinstance(v, IfCond):
        name = ['IfCond']
        name += ['if', map(format_commands, v.cond)]
        name += ['then', map(format_commands, v.if_cmds)]
        name += ['else', map(format_commands, v.else_cmds)]
        return name
    elif isinstance(v, ForLoop):
        name = ['ForLoop']
        name += [repr(v.name)+' in ', map(str, v.items)]
        name += ['commands', map(format_commands, v.cmds)]
        return name
    elif isinstance(v, AndOr):
        return [v.op, format_commands(v.left), format_commands(v.right)]
    elif isinstance(v, Pipeline):
        name = 'Pipeline'
        if v.reverse_status:
            name = '!' + name
        return [name, format_commands(v.commands)]
    elif isinstance(v, Case):
        name = ['Case']
        name += [v.name, format_commands(v.items)]
        return name
    elif isinstance(v, SimpleCommand):
        name = ['SimpleCommand']
        if v.words:
            name += ['words', map(str, v.words)]
        if v.assigns:
            assigns = [tuple(a[1]) for a in v.assigns]
            name += ['assigns', map(str, assigns)]
        if v.redirs:
            name += ['redirs', map(format_commands, v.redirs)]
        return name
    elif isinstance(v, RedirectList):
        name = ['RedirectList']
        if v.redirs:
            name += ['redirs', map(format_commands, v.redirs)]
        name += ['command', format_commands(v.cmd)]
        return name
    elif isinstance(v, IORedirect):
        return ' '.join(map(str, (v.io_number, v.op, v.filename)))
    elif isinstance(v, HereDocument):
        return ' '.join(map(str, (v.io_number, v.op, repr(v.name), repr(v.content))))
    elif isinstance(v, SubShell):
        return ['SubShell', map(format_commands, v.cmds)]
    else:
        return repr(v)

def print_commands(cmds, output=sys.stdout):
    """Pretty print a command tree."""
    def print_tree(cmd, spaces, output):
        if isinstance(cmd, list):
            for c in cmd:
                print_tree(c, spaces + 3, output)
        else:
            print >>output, ' '*spaces + str(cmd)

    formatted = format_commands(cmds)
    print_tree(formatted, 0, output)


def stringify_commands(cmds):
    """Serialize a command tree as a string.

    Returned string is not pretty and is currently used for unit tests only.
    """
    def stringify(value):
        output = []
        if isinstance(value, list):
            formatted = []
            for v in value:
                formatted.append(stringify(v))
            formatted = ' '.join(formatted)
            output.append(''.join(['<', formatted, '>']))
        else:
            output.append(value)
        return ' '.join(output)

    return stringify(format_commands(cmds))


def visit_commands(cmds, callable):
    """Visit the command tree and execute callable on every Pipeline and
    SimpleCommand instances.
    """
    if isinstance(cmds, (tuple, list)):
        map(lambda c: visit_commands(c,callable), cmds)
    elif isinstance(cmds, (Pipeline, SimpleCommand)):
        callable(cmds)
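
# A small illustration (hypothetical helper) tying the helpers above together:
# parse a pipeline and pretty print the resulting command tree to stdout.
def _example_render():
    ast, _ = parse('ls | wc -l\n', eof=True)
    print_commands(ast)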
@@ -1,41 +0,0 @@
# sherrors.py - shell errors and signals
#
# Copyright 2007 Patrick Mezard
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

"""Define shell exceptions and error codes.
"""

class ShellError(Exception):
    pass

class ShellSyntaxError(ShellError):
    pass

class UtilityError(ShellError):
    """Raised upon utility syntax error (option or operand error)."""
    pass

class ExpansionError(ShellError):
    pass

class CommandNotFound(ShellError):
    """Specified command was not found."""
    pass

class RedirectionError(ShellError):
    pass

class VarAssignmentError(ShellError):
    """Variable assignment error."""
    pass

class ExitSignal(ShellError):
    """Exit signal."""
    pass

class ReturnSignal(ShellError):
    """Return signal."""
    pass
@@ -1,77 +0,0 @@
# subprocess - Subprocesses with accessible I/O streams
#
# For more information about this module, see PEP 324.
#
# This module should remain compatible with Python 2.2, see PEP 291.
#
# Copyright (c) 2003-2005 by Peter Astrand <astrand@lysator.liu.se>
#
# Licensed to PSF under a Contributor Agreement.
# See http://www.python.org/2.4/license for licensing details.

def list2cmdline(seq):
    """
    Translate a sequence of arguments into a command line
    string, using the same rules as the MS C runtime:

    1) Arguments are delimited by white space, which is either a
       space or a tab.

    2) A string surrounded by double quotation marks is
       interpreted as a single argument, regardless of white space
       contained within.  A quoted string can be embedded in an
       argument.

    3) A double quotation mark preceded by a backslash is
       interpreted as a literal double quotation mark.

    4) Backslashes are interpreted literally, unless they
       immediately precede a double quotation mark.

    5) If backslashes immediately precede a double quotation mark,
       every pair of backslashes is interpreted as a literal
       backslash.  If the number of backslashes is odd, the last
       backslash escapes the next double quotation mark as
       described in rule 3.
    """

    # See
    # http://msdn.microsoft.com/library/en-us/vccelng/htm/progs_12.asp
    result = []
    needquote = False
    for arg in seq:
        bs_buf = []

        # Add a space to separate this argument from the others
        if result:
            result.append(' ')

        needquote = (" " in arg) or ("\t" in arg) or ("|" in arg) or arg == ""
        if needquote:
            result.append('"')

        for c in arg:
            if c == '\\':
                # Don't know if we need to double yet.
                bs_buf.append(c)
            elif c == '"':
                # Double backslashes.
                result.append('\\' * len(bs_buf)*2)
                bs_buf = []
                result.append('\\"')
            else:
                # Normal char
                if bs_buf:
                    result.extend(bs_buf)
                    bs_buf = []
                result.append(c)

        # Add remaining backslashes, if any.
        if bs_buf:
            result.extend(bs_buf)

        if needquote:
            result.extend(bs_buf)
            result.append('"')

    return ''.join(result)
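
# A brief illustration (hypothetical helper) of the quoting rules described in
# the docstring: embedded whitespace forces double quotes and a literal quote
# is backslash-escaped.
def _example_list2cmdline():
    # Expected to give roughly: copy "my file.txt" "say \"hi\""
    return list2cmdline(['copy', 'my file.txt', 'say "hi"'])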
File diff suppressed because it is too large
@@ -18,24 +18,33 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

"""
This module implements a passthrough server for BitBake.
This module implements an xmlrpc server for BitBake.

Use register_idle_function() to add a function which the server
calls from within idle_commands when no requests are pending. Make sure
Use this by deriving a class from BitBakeXMLRPCServer and then adding
methods which you want to "export" via XMLRPC. If the methods have the
prefix xmlrpc_, then registering those functions will happen automatically,
if not, you need to call register_function.

Use register_idle_function() to add a function which the xmlrpc server
calls from within serve_forever when no requests are pending. Make sure
that those functions are non-blocking or else you will introduce latency
in the server's main loop.
"""

import time
import bb
import signal
from bb.ui import uievent
import xmlrpclib
import pickle

DEBUG = False

from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import inspect, select

class BitBakeServerCommands():
    def __init__(self, server):
    def __init__(self, server, cooker):
        self.cooker = cooker
        self.server = server

    def runCommand(self, command):
@@ -67,7 +76,7 @@ class BBUIEventQueue:
        self.parent = parent
    @staticmethod
    def send(event):
        bb.server.none.eventQueue.append(event)
        bb.server.none.eventQueue.append(pickle.loads(event))
    @staticmethod
    def quit():
        return
@@ -77,48 +86,36 @@ class BBUIEventQueue:
        self.BBServer = BBServer
        self.EventHandle = bb.event.register_UIHhandler(self)

    def __popEvent(self):
        if len(self.eventQueue) == 0:
            return None
        return self.eventQueue.pop(0)

    def getEvent(self):
        if len(self.eventQueue) == 0:
            self.BBServer.idle_commands(0)
        return self.__popEvent()
            return None

        return self.eventQueue.pop(0)

    def waitEvent(self, delay):
        event = self.__popEvent()
        event = self.getEvent()
        if event:
            return event
        self.BBServer.idle_commands(delay)
        return self.__popEvent()
        return self.getEvent()

    def queue_event(self, event):
        self.eventQueue.append(event)

    def system_quit( self ):
    def system_quit( self ):
        bb.event.unregister_UIHhandler(self.EventHandle)

# Dummy signal handler to ensure we break out of sleep upon SIGCHLD
def chldhandler(signum, stackframe):
    pass

class BitBakeNoneServer():
class BitBakeServer():
    # remove this when you're done with debugging
    # allow_reuse_address = True

    def __init__(self):
    def __init__(self, cooker):
        self._idlefuns = {}
        self.commands = BitBakeServerCommands(self)

    def addcooker(self, cooker):
        self.cooker = cooker
        self.commands.cooker = cooker
        self.commands = BitBakeServerCommands(self, cooker)

    def register_idle_function(self, function, data):
        """Register a function to be called while the server is idle"""
        assert hasattr(function, '__call__')
        assert callable(function)
        self._idlefuns[function] = data

    def idle_commands(self, delay):
@@ -143,13 +140,10 @@ class BitBakeNoneServer():
        except:
            import traceback
            traceback.print_exc()
            self.commands.runCommand(["stateShutdown"])
            pass
        if nextsleep is not None:
            #print "Sleeping for %s (%s)" % (nextsleep, delay)
            signal.signal(signal.SIGCHLD, chldhandler)
            time.sleep(nextsleep)
            signal.signal(signal.SIGCHLD, signal.SIG_DFL)

    def server_exit(self):
        # Tell idle functions we're exiting
@@ -159,13 +153,21 @@ class BitBakeNoneServer():
        except:
            pass

class BitBakeServerConnection():
class BitbakeServerInfo():
    def __init__(self, server):
        self.server = server.server
        self.connection = self.server.commands
        self.server = server
        self.commands = server.commands

class BitBakeServerFork():
    def __init__(self, serverinfo, command, logfile):
        serverinfo.forkCommand = command
        serverinfo.logfile = logfile

class BitBakeServerConnection():
    def __init__(self, serverinfo):
        self.server = serverinfo.server
        self.connection = serverinfo.commands
        self.events = bb.server.none.BBUIEventQueue(self.server)
        for event in bb.event.ui_queue:
            self.events.queue_event(event)

    def terminate(self):
        try:
@@ -177,27 +179,3 @@ class BitBakeServerConnection():
        except:
            pass

class BitBakeServer(object):
    def initServer(self):
        self.server = BitBakeNoneServer()

    def addcooker(self, cooker):
        self.cooker = cooker
        self.server.addcooker(cooker)

    def getServerIdleCB(self):
        return self.server.register_idle_function

    def saveConnectionDetails(self):
        return

    def detach(self, cooker_logfile):
        self.logfile = cooker_logfile

    def establishConnection(self):
        self.connection = BitBakeServerConnection(self)
        return self.connection

    def launchUI(self, uifunc, *args):
        return bb.cooker.server_main(self.cooker, uifunc, *args)

|
#
# BitBake Process based server.
#
# Copyright (C) 2010 Bob Foerster <robert@erafx.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

"""
This module implements a multiprocessing.Process based server for bitbake.
"""

import bb
import bb.event
import itertools
import logging
import multiprocessing
import os
import signal
import sys
import time
from Queue import Empty
from multiprocessing import Event, Process, util, Queue, Pipe, queues

logger = logging.getLogger('BitBake')

class ServerCommunicator():
    def __init__(self, connection):
        self.connection = connection

    def runCommand(self, command):
        # @todo try/except
        self.connection.send(command)

        while True:
            # don't let the user ctrl-c while we're waiting for a response
            try:
                if self.connection.poll(.5):
                    return self.connection.recv()
                else:
                    return None
            except KeyboardInterrupt:
                pass


class EventAdapter():
    """
    Adapter to wrap our event queue since the caller (bb.event) expects to
    call a send() method, but our actual queue only has put()
    """
    def __init__(self, queue):
        self.queue = queue

    def send(self, event):
        try:
            self.queue.put(event)
        except Exception as err:
            print("EventAdapter puked: %s" % str(err))

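
# A tiny sketch (hypothetical helper): EventAdapter gives a multiprocessing
# queue the send() interface that bb.event expects from a UI event handler.
def _example_event_adapter():
    queue = Queue()
    adapter = EventAdapter(queue)
    adapter.send('some event')
    return queue.get()
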
class ProcessServer(Process):
|
||||
profile_filename = "profile.log"
|
||||
profile_processed_filename = "profile.log.processed"
|
||||
|
||||
def __init__(self, command_channel, event_queue):
|
||||
Process.__init__(self)
|
||||
self.command_channel = command_channel
|
||||
self.event_queue = event_queue
|
||||
self.event = EventAdapter(event_queue)
|
||||
self._idlefunctions = {}
|
||||
self.quit = False
|
||||
|
||||
self.keep_running = Event()
|
||||
self.keep_running.set()
|
||||
|
||||
def register_idle_function(self, function, data):
|
||||
"""Register a function to be called while the server is idle"""
|
||||
assert hasattr(function, '__call__')
|
||||
self._idlefunctions[function] = data
|
||||
|
||||
def run(self):
|
||||
for event in bb.event.ui_queue:
|
||||
self.event_queue.put(event)
|
||||
self.event_handle = bb.event.register_UIHhandler(self)
|
||||
bb.cooker.server_main(self.cooker, self.main)
|
||||
|
||||
def main(self):
|
||||
# Ignore SIGINT within the server, as all SIGINT handling is done by
|
||||
# the UI and communicated to us
|
||||
signal.signal(signal.SIGINT, signal.SIG_IGN)
|
||||
while self.keep_running.is_set():
|
||||
try:
|
||||
if self.command_channel.poll():
|
||||
command = self.command_channel.recv()
|
||||
self.runCommand(command)
|
||||
|
||||
self.idle_commands(.1)
|
||||
except Exception:
|
||||
logger.exception('Running command %s', command)
|
||||
|
||||
self.event_queue.cancel_join_thread()
|
||||
bb.event.unregister_UIHhandler(self.event_handle)
|
||||
self.command_channel.close()
|
||||
self.cooker.stop()
|
||||
self.idle_commands(.1)
|
||||
|
||||
def idle_commands(self, delay):
|
||||
nextsleep = delay
|
||||
|
||||
for function, data in self._idlefunctions.items():
|
||||
try:
|
||||
retval = function(self, data, False)
|
||||
if retval is False:
|
||||
del self._idlefunctions[function]
|
||||
elif retval is True:
|
||||
nextsleep = None
|
||||
elif nextsleep is None:
|
||||
continue
|
||||
elif retval < nextsleep:
|
||||
nextsleep = retval
|
||||
except SystemExit:
|
||||
raise
|
||||
except Exception:
|
||||
logger.exception('Running idle function')
|
||||
|
||||
if nextsleep is not None:
|
||||
time.sleep(nextsleep)
|
||||
|
||||
def runCommand(self, command):
|
||||
"""
|
||||
Run a cooker command on the server
|
||||
"""
|
||||
self.command_channel.send(self.cooker.command.runCommand(command))
|
||||
|
||||
def stop(self):
|
||||
self.keep_running.clear()
|
||||
|
||||
def bootstrap_2_6_6(self):
|
||||
"""Pulled from python 2.6.6. Needed to ensure we have the fix from
|
||||
http://bugs.python.org/issue5313 when running on python version 2.6.2
|
||||
or lower."""
|
||||
|
||||
try:
|
||||
self._children = set()
|
||||
self._counter = itertools.count(1)
|
||||
try:
|
||||
sys.stdin.close()
|
||||
sys.stdin = open(os.devnull)
|
||||
except (OSError, ValueError):
|
||||
pass
|
||||
multiprocessing._current_process = self
|
||||
util._finalizer_registry.clear()
|
||||
util._run_after_forkers()
|
||||
util.info('child process calling self.run()')
|
||||
try:
|
||||
self.run()
|
||||
exitcode = 0
|
||||
finally:
|
||||
util._exit_function()
|
||||
except SystemExit as e:
|
||||
if not e.args:
|
||||
exitcode = 1
|
||||
elif type(e.args[0]) is int:
|
||||
exitcode = e.args[0]
|
||||
else:
|
||||
sys.stderr.write(e.args[0] + '\n')
|
||||
sys.stderr.flush()
|
||||
exitcode = 1
|
||||
except:
|
||||
exitcode = 1
|
||||
import traceback
|
||||
sys.stderr.write('Process %s:\n' % self.name)
|
||||
sys.stderr.flush()
|
||||
traceback.print_exc()
|
||||
|
||||
util.info('process exiting with exitcode %d' % exitcode)
|
||||
return exitcode
|
||||
|
||||
# Python versions 2.6.0 through 2.6.2 suffer from a multiprocessing bug
|
||||
# which can result in a bitbake server hang during the parsing process
|
||||
if (2, 6, 0) <= sys.version_info < (2, 6, 3):
|
||||
_bootstrap = bootstrap_2_6_6

class BitBakeServerConnection():
    def __init__(self, server):
        self.server = server
        self.procserver = server.server
        self.connection = ServerCommunicator(server.ui_channel)
        self.events = server.event_queue

    def terminate(self, force = False):
        signal.signal(signal.SIGINT, signal.SIG_IGN)
        self.procserver.stop()
        if force:
            self.procserver.join(0.5)
            if self.procserver.is_alive():
                self.procserver.terminate()
                self.procserver.join()
        else:
            self.procserver.join()
        while True:
            try:
                event = self.server.event_queue.get(block=False)
            except (Empty, IOError):
                break
            if isinstance(event, logging.LogRecord):
                logger.handle(event)
        self.server.ui_channel.close()
        self.server.event_queue.close()
        if force:
            sys.exit(1)

# Wrap Queue to provide API which isn't server implementation specific
class ProcessEventQueue(multiprocessing.queues.Queue):
    def waitEvent(self, timeout):
        try:
            return self.get(True, timeout)
        except Empty:
            return None

    def getEvent(self):
        try:
            return self.get(False)
        except Empty:
            return None
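
    # Illustrative sketch (not from the original source) of how a UI process
    # can drain this queue through the implementation-neutral API above:
    # block briefly for the next event, then take whatever else is pending
    # without blocking ('dispatch' is a hypothetical handler).
    #
    #     event = queue.waitEvent(0.25)    # block up to 250 ms
    #     while event is not None:
    #         dispatch(event)
    #         event = queue.getEvent()     # non-blocking; None when empty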


class BitBakeServer(object):
    def initServer(self):
        # establish communication channels. We use bidirectional pipes for
        # ui <--> server command/response pairs
        # and a queue for server -> ui event notifications
        #
        self.ui_channel, self.server_channel = Pipe()
        self.event_queue = ProcessEventQueue(0)

        self.server = ProcessServer(self.server_channel, self.event_queue)

    def addcooker(self, cooker):
        self.cooker = cooker
        self.server.cooker = cooker

    def getServerIdleCB(self):
        return self.server.register_idle_function

    def saveConnectionDetails(self):
        return

    def detach(self, cooker_logfile):
        self.server.start()
        return

    def establishConnection(self):
        self.connection = BitBakeServerConnection(self)
        signal.signal(signal.SIGTERM, lambda i, s: self.connection.terminate(force=True))
        return self.connection

    def launchUI(self, uifunc, *args):
        return bb.cooker.server_main(self.cooker, uifunc, *args)
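
    # A sketch of the calling sequence the methods above imply, under the
    # assumption that the caller constructs the cooker itself ('make_cooker'
    # and 'my_ui_main' are hypothetical stand-ins, not names from the source):
    #
    #     server = BitBakeServer()
    #     server.initServer()                        # pipes + event queue + ProcessServer
    #     cooker = make_cooker(server.getServerIdleCB())
    #     server.addcooker(cooker)
    #     server.saveConnectionDetails()             # no-op for this backend
    #     server.detach("cooker.log")                # starts the server process
    #     connection = server.establishConnection()
    #     server.launchUI(my_ui_main, connection)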

@@ -42,95 +42,19 @@ from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import inspect, select

if sys.hexversion < 0x020600F0:
    print("Sorry, python 2.6 or later is required for bitbake's XMLRPC mode")
    print "Sorry, python 2.6 or later is required for bitbake's XMLRPC mode"
    sys.exit(1)

##
# The xmlrpclib.Transport class has undergone various changes in Python 2.7
# which break BitBake's XMLRPC implementation.
# To work around this we subclass Transport and have a copy/paste of method
# implementations from Python 2.6.6's xmlrpclib.
#
# Upstream Python bug is #8194 (http://bugs.python.org/issue8194)
# This bug is relevant for Python 2.7.0 and 2.7.1 and was fixed in
# Python 2.7.2
##

class BBTransport(xmlrpclib.Transport):
    def request(self, host, handler, request_body, verbose=0):
        h = self.make_connection(host)
        if verbose:
            h.set_debuglevel(1)

        self.send_request(h, handler, request_body)
        self.send_host(h, host)
        self.send_user_agent(h)
        self.send_content(h, request_body)

        errcode, errmsg, headers = h.getreply()

        if errcode != 200:
            raise ProtocolError(
                host + handler,
                errcode, errmsg,
                headers
                )

        self.verbose = verbose

        try:
            sock = h._conn.sock
        except AttributeError:
            sock = None

        return self._parse_response(h.getfile(), sock)

    def make_connection(self, host):
        import httplib
        host, extra_headers, x509 = self.get_host_info(host)
        return httplib.HTTP(host)

    def _parse_response(self, file, sock):
        p, u = self.getparser()

        while 1:
            if sock:
                response = sock.recv(1024)
            else:
                response = file.read(1024)
            if not response:
                break
            if self.verbose:
                print "body:", repr(response)
            p.feed(response)

        file.close()
        p.close()

        return u.close()

def _create_server(host, port):
    # Python 2.7.0 and 2.7.1 have a buggy Transport implementation
    # For those versions of Python, and only those versions, use our
    # own copy/paste BBTransport class.
    if (2, 7, 0) <= sys.version_info < (2, 7, 2):
        t = BBTransport()
        s = xmlrpclib.Server("http://%s:%d/" % (host, port), transport=t, allow_none=True)
    else:
        s = xmlrpclib.Server("http://%s:%d/" % (host, port), allow_none=True)

    return s
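
# Illustrative use of _create_server() (the host and port are hypothetical):
# callers receive an xmlrpclib proxy either way, so the version-specific
# Transport workaround above stays invisible to them.
#
#     server = _create_server("localhost", 8019)
#     server.ping()        # -> True when the cooker server is reachable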

class BitBakeServerCommands():
    def __init__(self, server):
    def __init__(self, server, cooker):
        self.cooker = cooker
        self.server = server

    def registerEventHandler(self, host, port):
        """
        Register a remote UI Event Handler
        """
        s = _create_server(host, port)

        s = xmlrpclib.Server("http://%s:%d" % (host, port), allow_none=True)
        return bb.event.register_UIHhandler(s)

    def unregisterEventHandler(self, handlerNum):
@@ -150,7 +74,7 @@ class BitBakeServerCommands():
        Trigger the server to quit
        """
        self.server.quit = True
        print("Server (cooker) exiting")
        print "Server (cooker) exitting"
        return

    def ping(self):
@@ -159,26 +83,22 @@ class BitBakeServerCommands():
        """
        return True

class BitBakeXMLRPCServer(SimpleXMLRPCServer):
class BitBakeServer(SimpleXMLRPCServer):
    # remove this when you're done with debugging
    # allow_reuse_address = True

    def __init__(self, interface):
        """
        Constructor
    def __init__(self, cooker, interface = ("localhost", 0)):
        """
        Constructor
        """
        SimpleXMLRPCServer.__init__(self, interface,
                                    requestHandler=SimpleXMLRPCRequestHandler,
                                    logRequests=False, allow_none=True)
        self._idlefuns = {}
        self.host, self.port = self.socket.getsockname()
        #self.register_introspection_functions()
        self.commands = BitBakeServerCommands(self)
        self.autoregister_all_functions(self.commands, "")

    def addcooker(self, cooker):
        self.cooker = cooker
        self.commands.cooker = cooker
        commands = BitBakeServerCommands(self, cooker)
        self.autoregister_all_functions(commands, "")

    def autoregister_all_functions(self, context, prefix):
        """
@@ -192,13 +112,10 @@ class BitBakeXMLRPCServer(SimpleXMLRPCServer):

    def register_idle_function(self, function, data):
        """Register a function to be called while the server is idle"""
        assert hasattr(function, '__call__')
        assert callable(function)
        self._idlefuns[function] = data

    def serve_forever(self):
        bb.cooker.server_main(self.cooker, self._serve_forever)

    def _serve_forever(self):
        """
        Serve Requests. Overloaded to honor a quit command
        """
@@ -229,7 +146,7 @@ class BitBakeXMLRPCServer(SimpleXMLRPCServer):
                    traceback.print_exc()
                    pass
            if nextsleep is None and len(self._idlefuns) > 0:
                nextsleep = 0
                nextsleep = 0
            self.timeout = nextsleep
        # Tell idle functions we're exiting
        for function, data in self._idlefuns.items():
@@ -242,21 +159,23 @@ class BitBakeXMLRPCServer(SimpleXMLRPCServer):
        return

class BitbakeServerInfo():
    def __init__(self, host, port):
        self.host = host
        self.port = port
    def __init__(self, server):
        self.host = server.host
        self.port = server.port

class BitBakeServerFork():
    def __init__(self, serverinfo, command, logfile):
        daemonize.createDaemon(command, logfile)

class BitBakeServerConnection():
    def __init__(self, serverinfo, clientinfo=("localhost", 0)):
        self.connection = _create_server(serverinfo.host, serverinfo.port)
        self.events = uievent.BBUIEventQueue(self.connection, clientinfo)
        for event in bb.event.ui_queue:
            self.events.queue_event(event)
    def __init__(self, serverinfo):
        self.connection = xmlrpclib.Server("http://%s:%s" % (serverinfo.host, serverinfo.port), allow_none=True)
        self.events = uievent.BBUIEventQueue(self.connection)

    def terminate(self):
        # Don't wait for server indefinitely
        import socket
        socket.setdefaulttimeout(2)
        socket.setdefaulttimeout(2)
        try:
            self.events.system_quit()
        except:
@@ -266,30 +185,3 @@ class BitBakeServerConnection():
        except:
            pass

class BitBakeServer(object):
    def initServer(self, interface = ("localhost", 0)):
        self.server = BitBakeXMLRPCServer(interface)

    def addcooker(self, cooker):
        self.cooker = cooker
        self.server.addcooker(cooker)

    def getServerIdleCB(self):
        return self.server.register_idle_function

    def saveConnectionDetails(self):
        self.serverinfo = BitbakeServerInfo(self.server.host, self.server.port)

    def detach(self, cooker_logfile):
        daemonize.createDaemon(self.server.serve_forever, cooker_logfile)
        del self.cooker
        del self.server

    def establishConnection(self):
        self.connection = BitBakeServerConnection(self.serverinfo)
        return self.connection

    def launchUI(self, uifunc, *args):
        return uifunc(*args)



@@ -52,14 +52,12 @@ PROBLEMS:
# Import and setup global variables
##########################################################################

from __future__ import print_function
from functools import reduce
try:
    set
except NameError:
    from sets import Set as set
import sys, os, readline, socket, httplib, urllib, commands, popen2, shlex, Queue, fnmatch
from bb import data, parse, build, cache, taskdata, runqueue, providers as Providers
import sys, os, readline, socket, httplib, urllib, commands, popen2, copy, shlex, Queue, fnmatch
from bb import data, parse, build, fatal, cache, taskdata, runqueue, providers as Providers

__version__ = "0.5.3.1"
__credits__ = """BitBake Shell Version %s (C) 2005 Michael 'Mickey' Lauer <mickey@Vanille.de>
@@ -100,7 +98,7 @@ class BitBakeShellCommands:

    def _checkParsed( self ):
        if not parsed:
            print("SHELL: This command needs to parse bbfiles...")
            print "SHELL: This command needs to parse bbfiles..."
            self.parse( None )

    def _findProvider( self, item ):
@@ -121,28 +119,28 @@ class BitBakeShellCommands:
        """Register a new name for a command"""
        new, old = params
        if not old in cmds:
            print("ERROR: Command '%s' not known" % old)
            print "ERROR: Command '%s' not known" % old
        else:
            cmds[new] = cmds[old]
            print("OK")
            print "OK"
    alias.usage = "<alias> <command>"

    def buffer( self, params ):
        """Dump specified output buffer"""
        index = params[0]
        print(self._shell.myout.buffer( int( index ) ))
        print self._shell.myout.buffer( int( index ) )
    buffer.usage = "<index>"

    def buffers( self, params ):
        """Show the available output buffers"""
        commands = self._shell.myout.bufferedCommands()
        if not commands:
            print("SHELL: No buffered commands available yet. Start doing something.")
            print "SHELL: No buffered commands available yet. Start doing something."
        else:
            print("="*35, "Available Output Buffers", "="*27)
            print "="*35, "Available Output Buffers", "="*27
            for index, cmd in enumerate( commands ):
                print("| %s %s" % ( str( index ).ljust( 3 ), cmd ))
            print("="*88)
                print "| %s %s" % ( str( index ).ljust( 3 ), cmd )
            print "="*88

    def build( self, params, cmd = "build" ):
        """Build a providee"""
@@ -151,7 +149,7 @@ class BitBakeShellCommands:
        self._checkParsed()
        names = globfilter( cooker.status.pkg_pn, globexpr )
        if len( names ) == 0: names = [ globexpr ]
        print("SHELL: Building %s" % ' '.join( names ))
        print "SHELL: Building %s" % ' '.join( names )

        td = taskdata.TaskData(cooker.configuration.abort)
        localdata = data.createCopy(cooker.configuration.data)
@@ -170,20 +168,22 @@ class BitBakeShellCommands:
                tasks.append([name, "do_%s" % cmd])

            td.add_unresolved(localdata, cooker.status)


            rq = runqueue.RunQueue(cooker, localdata, cooker.status, td, tasks)
            rq.prepare_runqueue()
            rq.execute_runqueue()

        except Providers.NoProvider:
            print("ERROR: No Provider")
            print "ERROR: No Provider"
            last_exception = Providers.NoProvider

        except runqueue.TaskFailure as fnids:
        except runqueue.TaskFailure, fnids:
            for fnid in fnids:
                print "ERROR: '%s' failed" % td.fn_index[fnid]
            last_exception = runqueue.TaskFailure

        except build.FuncFailed as e:
            print("ERROR: Couldn't build '%s'" % names)
        except build.EventException, e:
            print "ERROR: Couldn't build '%s'" % names
            last_exception = e


@@ -216,7 +216,7 @@ class BitBakeShellCommands:
        if bbfile is not None:
            os.system( "%s %s" % ( os.environ.get( "EDITOR", "vi" ), bbfile ) )
        else:
            print("ERROR: Nothing provides '%s'" % name)
            print "ERROR: Nothing provides '%s'" % name
    edit.usage = "<providee>"

    def environment( self, params ):
@@ -239,14 +239,14 @@ class BitBakeShellCommands:
        global last_exception
        name = params[0]
        bf = completeFilePath( name )
        print("SHELL: Calling '%s' on '%s'" % ( cmd, bf ))
        print "SHELL: Calling '%s' on '%s'" % ( cmd, bf )

        try:
            cooker.buildFile(bf, cmd)
        except parse.ParseError:
            print("ERROR: Unable to open or parse '%s'" % bf)
        except build.FuncFailed as e:
            print("ERROR: Couldn't build '%s'" % name)
            print "ERROR: Unable to open or parse '%s'" % bf
        except build.EventException, e:
            print "ERROR: Couldn't build '%s'" % name
            last_exception = e

    fileBuild.usage = "<bbfile>"
@@ -270,60 +270,62 @@ class BitBakeShellCommands:
    def fileReparse( self, params ):
        """(re)Parse a bb file"""
        bbfile = params[0]
        print("SHELL: Parsing '%s'" % bbfile)
        print "SHELL: Parsing '%s'" % bbfile
        parse.update_mtime( bbfile )
        cooker.parser.reparse(bbfile)
        cooker.bb_cache.cacheValidUpdate(bbfile)
        fromCache = cooker.bb_cache.loadData(bbfile, cooker.configuration.data, cooker.status)
        cooker.bb_cache.sync()
        if False: #fromCache:
            print("SHELL: File has not been updated, not reparsing")
            print "SHELL: File has not been updated, not reparsing"
        else:
            print("SHELL: Parsed")
            print "SHELL: Parsed"
    fileReparse.usage = "<bbfile>"

    def abort( self, params ):
        """Toggle abort task execution flag (see bitbake -k)"""
        cooker.configuration.abort = not cooker.configuration.abort
        print("SHELL: Abort Flag is now '%s'" % repr( cooker.configuration.abort ))
        print "SHELL: Abort Flag is now '%s'" % repr( cooker.configuration.abort )

    def force( self, params ):
        """Toggle force task execution flag (see bitbake -f)"""
        cooker.configuration.force = not cooker.configuration.force
        print("SHELL: Force Flag is now '%s'" % repr( cooker.configuration.force ))
        print "SHELL: Force Flag is now '%s'" % repr( cooker.configuration.force )

    def help( self, params ):
        """Show a comprehensive list of commands and their purpose"""
        print("="*30, "Available Commands", "="*30)
        print "="*30, "Available Commands", "="*30
        for cmd in sorted(cmds):
            function, numparams, usage, helptext = cmds[cmd]
            print("| %s | %s" % (usage.ljust(30), helptext))
        print("="*78)
            function,numparams,usage,helptext = cmds[cmd]
            print "| %s | %s" % (usage.ljust(30), helptext)
        print "="*78

    def lastError( self, params ):
        """Show the reason or log that was produced by the last BitBake event exception"""
        if last_exception is None:
            print("SHELL: No Errors yet (Phew)...")
            print "SHELL: No Errors yet (Phew)..."
        else:
            reason, event = last_exception.args
            print("SHELL: Reason for the last error: '%s'" % reason)
            print "SHELL: Reason for the last error: '%s'" % reason
            if ':' in reason:
                msg, filename = reason.split( ':' )
                filename = filename.strip()
                print("SHELL: Dumping log file for last error:")
                print "SHELL: Dumping log file for last error:"
                try:
                    print(open( filename ).read())
                    print open( filename ).read()
                except IOError:
                    print("ERROR: Couldn't open '%s'" % filename)
                    print "ERROR: Couldn't open '%s'" % filename

    def match( self, params ):
        """Dump all files or providers matching a glob expression"""
        what, globexpr = params
        if what == "files":
            self._checkParsed()
            for key in globfilter( cooker.status.pkg_fn, globexpr ): print(key)
            for key in globfilter( cooker.status.pkg_fn, globexpr ): print key
        elif what == "providers":
            self._checkParsed()
            for key in globfilter( cooker.status.pkg_pn, globexpr ): print(key)
            for key in globfilter( cooker.status.pkg_pn, globexpr ): print key
        else:
            print("Usage: match %s" % self.print_.usage)
            print "Usage: match %s" % self.print_.usage
    match.usage = "<files|providers> <glob>"

    def new( self, params ):
@@ -333,15 +335,15 @@ class BitBakeShellCommands:
        fulldirname = "%s/%s" % ( packages, dirname )

        if not os.path.exists( fulldirname ):
            print("SHELL: Creating '%s'" % fulldirname)
            print "SHELL: Creating '%s'" % fulldirname
            os.mkdir( fulldirname )
        if os.path.exists( fulldirname ) and os.path.isdir( fulldirname ):
            if os.path.exists( "%s/%s" % ( fulldirname, filename ) ):
                print("SHELL: ERROR: %s/%s already exists" % ( fulldirname, filename ))
                print "SHELL: ERROR: %s/%s already exists" % ( fulldirname, filename )
                return False
            print("SHELL: Creating '%s/%s'" % ( fulldirname, filename ))
            print "SHELL: Creating '%s/%s'" % ( fulldirname, filename )
            newpackage = open( "%s/%s" % ( fulldirname, filename ), "w" )
            print("""DESCRIPTION = ""
            print >>newpackage,"""DESCRIPTION = ""
SECTION = ""
AUTHOR = ""
HOMEPAGE = ""
@@ -368,7 +370,7 @@ SRC_URI = ""
#do_install() {
#
#}
""", file=newpackage)
"""
            newpackage.close()
            os.system( "%s %s/%s" % ( os.environ.get( "EDITOR" ), fulldirname, filename ) )
    new.usage = "<directory> <filename>"
@@ -388,14 +390,14 @@ SRC_URI = ""
    def pasteLog( self, params ):
        """Send the last event exception error log (if there is one) to http://rafb.net/paste"""
        if last_exception is None:
            print("SHELL: No Errors yet (Phew)...")
            print "SHELL: No Errors yet (Phew)..."
        else:
            reason, event = last_exception.args
            print("SHELL: Reason for the last error: '%s'" % reason)
            print "SHELL: Reason for the last error: '%s'" % reason
            if ':' in reason:
                msg, filename = reason.split( ':' )
                filename = filename.strip()
                print("SHELL: Pasting log file to pastebin...")
                print "SHELL: Pasting log file to pastebin..."

                file = open( filename ).read()
                sendToPastebin( "contents of " + filename, file )
@@ -407,7 +409,7 @@ SRC_URI = ""

    def parse( self, params ):
        """(Re-)parse .bb files and calculate the dependency graph"""
        cooker.status = cache.CacheData(cooker.caches_array)
        cooker.status = cache.CacheData()
        ignore = data.getVar("ASSUME_PROVIDED", cooker.configuration.data, 1) or ""
        cooker.status.ignored_dependencies = set( ignore.split() )
        cooker.handleCollections( data.getVar("BBFILE_COLLECTIONS", cooker.configuration.data, 1) )
@@ -417,23 +419,23 @@ SRC_URI = ""
        cooker.buildDepgraph()
        global parsed
        parsed = True
        print()
        print

    def reparse( self, params ):
        """(re)Parse a providee's bb file"""
        bbfile = self._findProvider( params[0] )
        if bbfile is not None:
            print("SHELL: Found bbfile '%s' for '%s'" % ( bbfile, params[0] ))
            print "SHELL: Found bbfile '%s' for '%s'" % ( bbfile, params[0] )
            self.fileReparse( [ bbfile ] )
        else:
            print("ERROR: Nothing provides '%s'" % params[0])
            print "ERROR: Nothing provides '%s'" % params[0]
    reparse.usage = "<providee>"

    def getvar( self, params ):
        """Dump the contents of an outer BitBake environment variable"""
        var = params[0]
        value = data.getVar( var, cooker.configuration.data, 1 )
        print(value)
        print value
    getvar.usage = "<variable>"

    def peek( self, params ):
@@ -441,11 +443,11 @@ SRC_URI = ""
        name, var = params
        bbfile = self._findProvider( name )
        if bbfile is not None:
            the_data = cache.Cache.loadDataFull(bbfile, cooker.configuration.data)
            the_data = cooker.bb_cache.loadDataFull(bbfile, cooker.configuration.data)
            value = the_data.getVar( var, 1 )
            print(value)
            print value
        else:
            print("ERROR: Nothing provides '%s'" % name)
            print "ERROR: Nothing provides '%s'" % name
    peek.usage = "<providee> <variable>"

    def poke( self, params ):
@@ -453,7 +455,7 @@ SRC_URI = ""
        name, var, value = params
        bbfile = self._findProvider( name )
        if bbfile is not None:
            print("ERROR: Sorry, this functionality is currently broken")
            print "ERROR: Sorry, this functionality is currently broken"
            #d = cooker.pkgdata[bbfile]
            #data.setVar( var, value, d )

@@ -461,7 +463,7 @@ SRC_URI = ""
            #cooker.pkgdata.setDirty(bbfile, d)
            #print "OK"
        else:
            print("ERROR: Nothing provides '%s'" % name)
            print "ERROR: Nothing provides '%s'" % name
    poke.usage = "<providee> <variable> <value>"

    def print_( self, params ):
@@ -469,12 +471,12 @@ SRC_URI = ""
        what = params[0]
        if what == "files":
            self._checkParsed()
            for key in cooker.status.pkg_fn: print(key)
            for key in cooker.status.pkg_fn: print key
        elif what == "providers":
            self._checkParsed()
            for key in cooker.status.providers: print(key)
            for key in cooker.status.providers: print key
        else:
            print("Usage: print %s" % self.print_.usage)
            print "Usage: print %s" % self.print_.usage
    print_.usage = "<files|providers>"

    def python( self, params ):
@@ -494,7 +496,7 @@ SRC_URI = ""
        """Set an outer BitBake environment variable"""
        var, value = params
        data.setVar( var, value, cooker.configuration.data )
        print("OK")
        print "OK"
    setVar.usage = "<variable> <value>"

    def rebuild( self, params ):
@@ -506,7 +508,7 @@ SRC_URI = ""
    def shell( self, params ):
        """Execute a shell command and dump the output"""
        if params != "":
            print(commands.getoutput( " ".join( params ) ))
            print commands.getoutput( " ".join( params ) )
    shell.usage = "<...>"

    def stage( self, params ):
@@ -516,17 +518,17 @@ SRC_URI = ""

    def status( self, params ):
        """<just for testing>"""
        print("-" * 78)
        print("building list = '%s'" % cooker.building_list)
        print("build path = '%s'" % cooker.build_path)
        print("consider_msgs_cache = '%s'" % cooker.consider_msgs_cache)
        print("build stats = '%s'" % cooker.stats)
        if last_exception is not None: print("last_exception = '%s'" % repr( last_exception.args ))
        print("memory output contents = '%s'" % self._shell.myout._buffer)
        print "-" * 78
        print "building list = '%s'" % cooker.building_list
        print "build path = '%s'" % cooker.build_path
        print "consider_msgs_cache = '%s'" % cooker.consider_msgs_cache
        print "build stats = '%s'" % cooker.stats
        if last_exception is not None: print "last_exception = '%s'" % repr( last_exception.args )
        print "memory output contents = '%s'" % self._shell.myout._buffer

    def test( self, params ):
        """<just for testing>"""
        print("testCommand called with '%s'" % params)
        print "testCommand called with '%s'" % params

    def unpack( self, params ):
        """Execute 'unpack' on a providee"""
@@ -551,12 +553,12 @@ SRC_URI = ""
        try:
            providers = cooker.status.providers[item]
        except KeyError:
            print("SHELL: ERROR: Nothing provides", preferred)
            print "SHELL: ERROR: Nothing provides", preferred
        else:
            for provider in providers:
                if provider == pf: provider = " (***) %s" % provider
                else: provider = "       %s" % provider
                print(provider)
                print provider
    which.usage = "<providee>"

##########################################################################
@@ -581,7 +583,7 @@ def sendToPastebin( desc, content ):
    mydata["nick"] = "%s@%s" % ( os.environ.get( "USER", "unknown" ), socket.gethostname() or "unknown" )
    mydata["text"] = content
    params = urllib.urlencode( mydata )
    headers = {"Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain"}
    headers = {"Content-type": "application/x-www-form-urlencoded","Accept": "text/plain"}

    host = "rafb.net"
    conn = httplib.HTTPConnection( "%s:80" % host )
@@ -592,9 +594,9 @@ def sendToPastebin( desc, content ):

    if response.status == 302:
        location = response.getheader( "location" ) or "unknown"
        print("SHELL: Pasted to http://%s%s" % ( host, location ))
        print "SHELL: Pasted to http://%s%s" % ( host, location )
    else:
        print("ERROR: %s %s" % ( response.status, response.reason ))
        print "ERROR: %s %s" % ( response.status, response.reason )

def completer( text, state ):
    """Return a possible readline completion"""
@@ -641,7 +643,7 @@ def columnize( alist, width = 80 ):
    return reduce(lambda line, word, width=width: '%s%s%s' %
                  (line,
                   ' \n'[(len(line[line.rfind('\n')+1:])
                         + len(word.split('\n', 1)[0]
                         + len(word.split('\n',1)[0]
                         ) >= width)],
                   word),
                  alist
@@ -716,7 +718,7 @@ class BitBakeShell:
        except IOError:
            pass  # It doesn't exist yet.

        print(__credits__)
        print __credits__

    def cleanup( self ):
        """Write readline history and clean up resources"""
@@ -724,7 +726,7 @@ class BitBakeShell:
        try:
            readline.write_history_file( self.historyfilename )
        except:
            print("SHELL: Unable to save command history")
            print "SHELL: Unable to save command history"

    def registerCommand( self, command, function, numparams = 0, usage = "", helptext = "" ):
        """Register a command"""
@@ -738,11 +740,11 @@ class BitBakeShell:
        try:
            function, numparams, usage, helptext = cmds[command]
        except KeyError:
            print("SHELL: ERROR: '%s' command is not a valid command." % command)
            print "SHELL: ERROR: '%s' command is not a valid command." % command
            self.myout.removeLast()
        else:
            if (numparams != -1) and (not len( params ) == numparams):
                print("Usage: '%s'" % usage)
                print "Usage: '%s'" % usage
                return

            result = function( self.commands, params )
@@ -757,7 +759,7 @@ class BitBakeShell:
            if not cmdline:
                continue
            if "|" in cmdline:
                print("ERROR: '|' in startup file is not allowed. Ignoring line")
                print "ERROR: '|' in startup file is not allowed. Ignoring line"
                continue
            self.commandQ.put( cmdline.strip() )

@@ -799,10 +801,10 @@ class BitBakeShell:
                    sys.stdout.write( pipe.fromchild.read() )
                #
            except EOFError:
                print()
                print
                return
            except KeyboardInterrupt:
                print()
                print

##########################################################################
# Start function - called from the BitBake command line utility
@@ -817,4 +819,4 @@ def start( aCooker ):
    bbshell.cleanup()

if __name__ == "__main__":
    print("SHELL: Sorry, this program should only be called by BitBake.")
    print "SHELL: Sorry, this program should only be called by BitBake."

@@ -1,356 +0,0 @@
import hashlib
import logging
import os
import re
import bb.data

logger = logging.getLogger('BitBake.SigGen')

try:
    import cPickle as pickle
except ImportError:
    import pickle
    logger.info('Importing cPickle failed. Falling back to a very slow implementation.')

def init(d):
    siggens = [obj for obj in globals().itervalues()
               if type(obj) is type and issubclass(obj, SignatureGenerator)]

    desired = d.getVar("BB_SIGNATURE_HANDLER", True) or "noop"
    for sg in siggens:
        if desired == sg.name:
            return sg(d)
            break
    else:
        logger.error("Invalid signature generator '%s', using default 'noop'\n"
                     "Available generators: %s",
                     desired, ', '.join(obj.name for obj in siggens))
        return SignatureGenerator(d)

class SignatureGenerator(object):
    """
    """
    name = "noop"

    def __init__(self, data):
        return

    def finalise(self, fn, d, variant):
        return

    def get_taskhash(self, fn, task, deps, dataCache):
        return "0"

    def set_taskdata(self, hashes, deps):
        return

    def stampfile(self, stampbase, file_name, taskname, extrainfo):
        return ("%s.%s.%s" % (stampbase, taskname, extrainfo)).rstrip('.')

    def dump_sigtask(self, fn, task, stampbase, runtime):
        return

class SignatureGeneratorBasic(SignatureGenerator):
    """
    """
    name = "basic"

    def __init__(self, data):
        self.basehash = {}
        self.taskhash = {}
        self.taskdeps = {}
        self.runtaskdeps = {}
        self.gendeps = {}
        self.lookupcache = {}
        self.pkgnameextract = re.compile("(?P<fn>.*)\..*")
        self.basewhitelist = set((data.getVar("BB_HASHBASE_WHITELIST", True) or "").split())
        self.taskwhitelist = None
        self.init_rundepcheck(data)

    def init_rundepcheck(self, data):
        self.taskwhitelist = data.getVar("BB_HASHTASK_WHITELIST", True) or None
        if self.taskwhitelist:
            self.twl = re.compile(self.taskwhitelist)
        else:
            self.twl = None

    def _build_data(self, fn, d):

        tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d)

        taskdeps = {}
        basehash = {}

        for task in tasklist:
            data = d.getVar(task, False)
            lookupcache[task] = data

            if data is None:
                bb.error("Task %s from %s seems to be empty?!" % (task, fn))
                data = ''

            newdeps = gendeps[task]
            seen = set()
            while newdeps:
                nextdeps = newdeps
                seen |= nextdeps
                newdeps = set()
                for dep in nextdeps:
                    if dep in self.basewhitelist:
                        continue
                    newdeps |= gendeps[dep]
                newdeps -= seen

            alldeps = seen - self.basewhitelist

            for dep in sorted(alldeps):
                data = data + dep
                if dep in lookupcache:
                    var = lookupcache[dep]
                else:
                    var = d.getVar(dep, False)
                    lookupcache[dep] = var
                if var:
                    data = data + str(var)
            self.basehash[fn + "." + task] = hashlib.md5(data).hexdigest()
            taskdeps[task] = sorted(alldeps)

        self.taskdeps[fn] = taskdeps
        self.gendeps[fn] = gendeps
        self.lookupcache[fn] = lookupcache

        return taskdeps

    def finalise(self, fn, d, variant):

        if variant:
            fn = "virtual:" + variant + ":" + fn

        try:
            taskdeps = self._build_data(fn, d)
        except:
            bb.note("Error during finalise of %s" % fn)
            raise

        #Slow but can be useful for debugging mismatched basehashes
        #for task in self.taskdeps[fn]:
        #    self.dump_sigtask(fn, task, d.getVar("STAMP", True), False)

        for task in taskdeps:
            d.setVar("BB_BASEHASH_task-%s" % task, self.basehash[fn + "." + task])

    def rundep_check(self, fn, recipename, task, dep, depname, dataCache):
        # Return True if we should keep the dependency, False to drop it
        # We only manipulate the dependencies for packages not in the whitelist
        if self.twl and not self.twl.search(recipename):
            # then process the actual dependencies
            if self.twl.search(depname):
                return False
        return True
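
    # Illustrative sketch (hypothetical names, not from the source) of the
    # rule rundep_check() implements: for recipes that do NOT match
    # BB_HASHTASK_WHITELIST, any dependency whose recipe name does match the
    # whitelist is dropped from the task hash.
    #
    #     twl = re.compile("quilt-native|autoconf-native")
    #     rundep_check(..., recipename="zlib", depname="quilt-native", ...)
    #         -> False  (whitelisted dep of a non-whitelisted recipe: dropped)
    #     rundep_check(..., recipename="zlib", depname="openssl", ...)
    #         -> True   (kept)
    #     rundep_check(..., recipename="quilt-native", depname="autoconf-native", ...)
    #         -> True   (whitelisted recipes keep all their dependencies)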

    def get_taskhash(self, fn, task, deps, dataCache):
        k = fn + "." + task
        data = dataCache.basetaskhash[k]
        self.runtaskdeps[k] = []
        recipename = dataCache.pkg_fn[fn]
        for dep in sorted(deps, key=clean_basepath):
            depname = dataCache.pkg_fn[self.pkgnameextract.search(dep).group('fn')]
            if not self.rundep_check(fn, recipename, task, dep, depname, dataCache):
                continue
            if dep not in self.taskhash:
                bb.fatal("%s is not in taskhash, caller isn't calling in dependency order?", dep)
            data = data + self.taskhash[dep]
            self.runtaskdeps[k].append(dep)
        h = hashlib.md5(data).hexdigest()
        self.taskhash[k] = h
        #d.setVar("BB_TASKHASH_task-%s" % task, taskhash[task])
        return h
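
    # A minimal sketch of the hashing scheme above: the task's base hash is
    # concatenated with the hashes of its kept runtime dependencies in a
    # stable order and re-hashed. All values here are hypothetical.
    #
    #     data = "d41d8cd98f00b204e9800998ecf8427e"     # base task hash
    #     for dephash in ["aaa111...", "bbb222..."]:    # sorted dep hashes
    #         data = data + dephash
    #     taskhash = hashlib.md5(data).hexdigest()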

    def set_taskdata(self, hashes, deps):
        self.runtaskdeps = deps
        self.taskhash = hashes

    def dump_sigtask(self, fn, task, stampbase, runtime):
        k = fn + "." + task
        if runtime == "customfile":
            sigfile = stampbase
        elif runtime and k in self.taskhash:
            sigfile = stampbase + "." + task + ".sigdata" + "." + self.taskhash[k]
        else:
            sigfile = stampbase + "." + task + ".sigbasedata" + "." + self.basehash[k]

        bb.utils.mkdirhier(os.path.dirname(sigfile))

        data = {}
        data['basewhitelist'] = self.basewhitelist
        data['taskwhitelist'] = self.taskwhitelist
        data['taskdeps'] = self.taskdeps[fn][task]
        data['basehash'] = self.basehash[k]
        data['gendeps'] = {}
        data['varvals'] = {}
        data['varvals'][task] = self.lookupcache[fn][task]
        for dep in self.taskdeps[fn][task]:
            if dep in self.basewhitelist:
                continue
            data['gendeps'][dep] = self.gendeps[fn][dep]
            data['varvals'][dep] = self.lookupcache[fn][dep]

        if runtime and k in self.taskhash:
            data['runtaskdeps'] = self.runtaskdeps[k]
            data['runtaskhashes'] = {}
            for dep in data['runtaskdeps']:
                data['runtaskhashes'][dep] = self.taskhash[dep]

        p = pickle.Pickler(file(sigfile, "wb"), -1)
        p.dump(data)

    def dump_sigs(self, dataCache):
        for fn in self.taskdeps:
            for task in self.taskdeps[fn]:
                k = fn + "." + task
                if k not in self.taskhash:
                    continue
                if dataCache.basetaskhash[k] != self.basehash[k]:
                    bb.error("Bitbake's cached basehash does not match the one we just generated (%s)!" % k)
                    bb.error("The mismatched hashes were %s and %s" % (dataCache.basetaskhash[k], self.basehash[k]))
                self.dump_sigtask(fn, task, dataCache.stamp[fn], True)

class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
    name = "basichash"

    def stampfile(self, stampbase, fn, taskname, extrainfo):
        if taskname != "do_setscene" and taskname.endswith("_setscene"):
            k = fn + "." + taskname[:-9]
        else:
            k = fn + "." + taskname
        if k in self.taskhash:
            h = self.taskhash[k]
        else:
            # If k is not in basehash, then error
            h = self.basehash[k]
        return ("%s.%s.%s.%s" % (stampbase, taskname, h, extrainfo)).rstrip('.')

def dump_this_task(outfile, d):
    import bb.parse
    fn = d.getVar("BB_FILENAME", True)
    task = "do_" + d.getVar("BB_CURRENTTASK", True)
    bb.parse.siggen.dump_sigtask(fn, task, outfile, "customfile")

def clean_basepath(a):
    if a.startswith("virtual:"):
        b = a.rsplit(":", 1)[0] + a.rsplit("/", 1)[1]
    else:
        b = a.rsplit("/", 1)[1]
    return b
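
# Worked examples (hypothetical paths) for clean_basepath(): the directory
# part is dropped, and for "virtual:..." entries the text up to the last
# colon is kept and concatenated (with no separator) with the file name:
#
#     clean_basepath("/meta/recipes/zlib_1.2.bb.do_compile")
#         -> "zlib_1.2.bb.do_compile"
#     clean_basepath("virtual:native:/meta/recipes/zlib_1.2.bb.do_compile")
#         -> "virtual:nativezlib_1.2.bb.do_compile"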

def clean_basepaths(a):
    b = {}
    for x in a:
        b[clean_basepath(x)] = a[x]
    return b

def compare_sigfiles(a, b):
    p1 = pickle.Unpickler(file(a, "rb"))
    a_data = p1.load()
    p2 = pickle.Unpickler(file(b, "rb"))
    b_data = p2.load()

    def dict_diff(a, b, whitelist=set()):
        sa = set(a.keys())
        sb = set(b.keys())
        common = sa & sb
        changed = set()
        for i in common:
            if a[i] != b[i] and i not in whitelist:
                changed.add(i)
        added = sa - sb
        removed = sb - sa
        return changed, added, removed
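
    # Worked example (toy data) for dict_diff() above; note that "added" and
    # "removed" are relative to the first signature:
    #
    #     a = {'x': 1, 'y': 2, 'z': 3}
    #     b = {'x': 1, 'y': 9, 'w': 4}
    #     dict_diff(a, b) -> ({'y'}, {'z'}, {'w'})
    #                        changed / only in a / only in b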

    if 'basewhitelist' in a_data and a_data['basewhitelist'] != b_data['basewhitelist']:
        print "basewhitelist changed from %s to %s" % (a_data['basewhitelist'], b_data['basewhitelist'])
        if a_data['basewhitelist'] and b_data['basewhitelist']:
            print "changed items: %s" % a_data['basewhitelist'].symmetric_difference(b_data['basewhitelist'])

    if 'taskwhitelist' in a_data and a_data['taskwhitelist'] != b_data['taskwhitelist']:
        print "taskwhitelist changed from %s to %s" % (a_data['taskwhitelist'], b_data['taskwhitelist'])
        if a_data['taskwhitelist'] and b_data['taskwhitelist']:
            print "changed items: %s" % a_data['taskwhitelist'].symmetric_difference(b_data['taskwhitelist'])

    if a_data['taskdeps'] != b_data['taskdeps']:
        print "Task dependencies changed from:\n%s\nto:\n%s" % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps']))

    if a_data['basehash'] != b_data['basehash']:
        print "basehash changed from %s to %s" % (a_data['basehash'], b_data['basehash'])

    changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basewhitelist'] & b_data['basewhitelist'])
    if changed:
        for dep in changed:
            print "List of dependencies for variable %s changed from %s to %s" % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep])
            if a_data['gendeps'][dep] and b_data['gendeps'][dep]:
                print "changed items: %s" % a_data['gendeps'][dep].symmetric_difference(b_data['gendeps'][dep])
    if added:
        for dep in added:
            print "Dependency on variable %s was added" % (dep)
    if removed:
        for dep in removed:
            print "Dependency on variable %s was removed" % (dep)


    changed, added, removed = dict_diff(a_data['varvals'], b_data['varvals'])
    if changed:
        for dep in changed:
            print "Variable %s value changed from %s to %s" % (dep, a_data['varvals'][dep], b_data['varvals'][dep])

    if 'runtaskhashes' in a_data and 'runtaskhashes' in b_data:
        a = clean_basepaths(a_data['runtaskhashes'])
        b = clean_basepaths(b_data['runtaskhashes'])
        changed, added, removed = dict_diff(a, b)
        if added:
            for dep in added:
                bdep_found = False
                if removed:
                    for bdep in removed:
                        if a[dep] == b[bdep]:
                            #print "Dependency on task %s was replaced by %s with same hash" % (dep, bdep)
                            bdep_found = True
                if not bdep_found:
                    print "Dependency on task %s was added with hash %s" % (dep, a[dep])
        if removed:
            for dep in removed:
                adep_found = False
                if added:
                    for adep in added:
                        if a[adep] == b[dep]:
                            #print "Dependency on task %s was replaced by %s with same hash" % (adep, dep)
                            adep_found = True
                if not adep_found:
                    print "Dependency on task %s was removed with hash %s" % (dep, b[dep])
        if changed:
            for dep in changed:
                print "Hash for dependent task %s changed from %s to %s" % (dep, a[dep], b[dep])

def dump_sigfile(a):
    p1 = pickle.Unpickler(file(a, "rb"))
    a_data = p1.load()

    print "basewhitelist: %s" % (a_data['basewhitelist'])

    print "taskwhitelist: %s" % (a_data['taskwhitelist'])

    print "Task dependencies: %s" % (sorted(a_data['taskdeps']))

    print "basehash: %s" % (a_data['basehash'])

    for dep in a_data['gendeps']:
        print "List of dependencies for variable %s is %s" % (dep, a_data['gendeps'][dep])

    for dep in a_data['varvals']:
        print "Variable %s value is %s" % (dep, a_data['varvals'][dep])

    if 'runtaskdeps' in a_data:
        print "Tasks this task depends on: %s" % (a_data['runtaskdeps'])

    if 'runtaskhashes' in a_data:
        for dep in a_data['runtaskhashes']:
            print "Hash for dependent task %s is %s" % (dep, a_data['runtaskhashes'][dep])
@@ -23,25 +23,26 @@ Task data collection and handling
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import logging
import re
import bb

logger = logging.getLogger("BitBake.TaskData")

def re_match_strings(target, strings):
    """
    Return whether the string 'target' matches any one of the given
    strings, each of which may be a regular expression.
    """
    return any(name == target or re.match(name, target)
               for name in strings)
    import re

    for name in strings:
        if (name == target or
            re.search(name, target) != None):
            return True
    return False
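
# Illustration (hypothetical inputs) of re_match_strings() using the newer
# implementation above, where each entry is matched as an exact name or as
# a regular expression anchored at the start of the target (re.match):
#
#     re_match_strings("virtual/libc", ["virtual/.*"])      # -> True
#     re_match_strings("zlib", ["bzip2", "zlib"])           # -> True
#     re_match_strings("openssl", ["virtual/.*", "bzip2"])  # -> False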

class TaskData:
    """
    BitBake Task Data implementation
    """
    def __init__(self, abort = True, tryaltconfigs = False, skiplist = None):
    def __init__(self, abort = True, tryaltconfigs = False):
        self.build_names_index = []
        self.run_names_index = []
        self.fn_index = []
@@ -70,8 +71,6 @@ class TaskData:
        self.abort = abort
        self.tryaltconfigs = tryaltconfigs

        self.skiplist = skiplist

    def getbuild_id(self, name):
        """
        Return an ID number for the build target name.
@@ -85,7 +84,7 @@ class TaskData:

    def getrun_id(self, name):
        """
        Return an ID number for the run target name.
        Return an ID number for the run target name.
        If it doesn't exist, create one.
        """
        if not name in self.run_names_index:
@@ -96,7 +95,7 @@ class TaskData:

    def getfn_id(self, name):
        """
        Return an ID number for the filename.
        Return an ID number for the filename.
        If it doesn't exist, create one.
        """
        if not name in self.fn_index:
@@ -153,7 +152,7 @@ class TaskData:
        fnid = self.getfn_id(fn)

        if fnid in self.failed_fnids:
            bb.msg.fatal("TaskData", "Trying to re-add a failed file? Something is broken...")
            bb.msg.fatal(bb.msg.domain.TaskData, "Trying to re-add a failed file? Something is broken...")

        # Check if we've already seen this fn
        if fnid in self.tasks_fnid:
@@ -175,7 +174,7 @@ class TaskData:
                for dep in task_deps['depends'][task].split():
                    if dep:
                        if ":" not in dep:
                            bb.msg.fatal("TaskData", "Error for %s, dependency %s does not contain ':' character\n. Task 'depends' should be specified in the form 'packagename:task'" % (fn, dep))
                            bb.msg.fatal(bb.msg.domain.TaskData, "Error, dependency %s does not contain ':' character\n. Task 'depends' should be specified in the form 'packagename:task'" % (dep, fn))
                        ids.append(((self.getbuild_id(dep.split(":")[0])), dep.split(":")[1]))
                self.tasks_idepends[taskid].extend(ids)

@@ -183,7 +182,7 @@ class TaskData:
        if not fnid in self.depids:
            dependids = {}
            for depend in dataCache.deps[fn]:
                logger.debug(2, "Added dependency %s for %s", depend, fn)
                bb.msg.debug(2, bb.msg.domain.TaskData, "Added dependency %s for %s" % (depend, fn))
                dependids[self.getbuild_id(depend)] = None
            self.depids[fnid] = dependids.keys()

@@ -193,12 +192,12 @@ class TaskData:
            rdepends = dataCache.rundeps[fn]
            rrecs = dataCache.runrecs[fn]
            for package in rdepends:
                for rdepend in rdepends[package]:
                    logger.debug(2, "Added runtime dependency %s for %s", rdepend, fn)
                for rdepend in bb.utils.explode_deps(rdepends[package]):
                    bb.msg.debug(2, bb.msg.domain.TaskData, "Added runtime dependency %s for %s" % (rdepend, fn))
                    rdependids[self.getrun_id(rdepend)] = None
            for package in rrecs:
                for rdepend in rrecs[package]:
                    logger.debug(2, "Added runtime recommendation %s for %s", rdepend, fn)
                for rdepend in bb.utils.explode_deps(rrecs[package]):
                    bb.msg.debug(2, bb.msg.domain.TaskData, "Added runtime recommendation %s for %s" % (rdepend, fn))
                    rdependids[self.getrun_id(rdepend)] = None
            self.rdepids[fnid] = rdependids.keys()

@@ -272,7 +271,7 @@ class TaskData:

    def get_unresolved_build_targets(self, dataCache):
        """
        Return a list of build targets whose providers
        Return a list of build targets whose providers
        are unknown.
        """
        unresolved = []
@@ -287,7 +286,7 @@ class TaskData:

    def get_unresolved_run_targets(self, dataCache):
        """
        Return a list of runtime targets whose providers
        Return a list of runtime targets whose providers
        are unknown.
        """
        unresolved = []
@@ -305,7 +304,7 @@ class TaskData:
        Return a list of providers of item
        """
        targetid = self.getbuild_id(item)


        return self.build_targets[targetid]

    def get_dependees(self, itemid):
@@ -350,36 +349,25 @@ class TaskData:
            dependees.append(self.fn_index[fnid])
        return dependees

    def get_reasons(self, item, runtime=False):
        """
        Get the reason(s) for an item not being provided, if any
        """
        reasons = []
        if self.skiplist:
            for fn in self.skiplist:
                skipitem = self.skiplist[fn]
                if skipitem.pn == item:
                    reasons.append("%s was skipped: %s" % (skipitem.pn, skipitem.skipreason))
                elif runtime and item in skipitem.rprovides:
                    reasons.append("%s RPROVIDES %s but was skipped: %s" % (skipitem.pn, item, skipitem.skipreason))
                elif not runtime and item in skipitem.provides:
                    reasons.append("%s PROVIDES %s but was skipped: %s" % (skipitem.pn, item, skipitem.skipreason))
        return reasons

    def add_provider(self, cfgData, dataCache, item):
        try:
            self.add_provider_internal(cfgData, dataCache, item)
        except bb.providers.NoProvider:
            if self.abort:
                if self.get_rdependees_str(item):
                    bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (item, self.get_dependees_str(item)))
                else:
                    bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s'" % (item))
                raise
            self.remove_buildtarget(self.getbuild_id(item))
            targetid = self.getbuild_id(item)
            self.remove_buildtarget(targetid)

        self.mark_external_target(item)

    def add_provider_internal(self, cfgData, dataCache, item):
        """
        Add the providers of item to the task data
        Mark entries which were specifically added externally as against dependencies
        Mark entries which were specifically added externally as against dependencies
        added internally during dependency resolution
        """

@@ -387,7 +375,11 @@ class TaskData:
            return

        if not item in dataCache.providers:
            bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees_str(item), reasons=self.get_reasons(item)), cfgData)
            if self.get_rdependees_str(item):
                bb.msg.note(2, bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (item, self.get_dependees_str(item)))
            else:
                bb.msg.note(2, bb.msg.domain.Provider, "Nothing PROVIDES '%s'" % (item))
            bb.event.fire(bb.event.NoProvider(item), cfgData)
            raise bb.providers.NoProvider(item)

        if self.have_build_target(item):
@@ -399,7 +391,8 @@ class TaskData:
        eligible = [p for p in eligible if not self.getfn_id(p) in self.failed_fnids]

        if not eligible:
            bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees_str(item), reasons=["No eligible PROVIDERs exist for '%s'" % item]), cfgData)
            bb.msg.note(2, bb.msg.domain.Provider, "No buildable provider PROVIDES '%s' but '%s' DEPENDS on or otherwise requires it. Enable debugging and see earlier logs to find unbuildable providers." % (item, self.get_dependees_str(item)))
            bb.event.fire(bb.event.NoProvider(item), cfgData)
            raise bb.providers.NoProvider(item)

        if len(eligible) > 1 and foundUnique == False:
@@ -407,6 +400,8 @@ class TaskData:
            providers_list = []
            for fn in eligible:
                providers_list.append(dataCache.pkg_fn[fn])
            bb.msg.note(1, bb.msg.domain.Provider, "multiple providers are available for %s (%s);" % (item, ", ".join(providers_list)))
            bb.msg.note(1, bb.msg.domain.Provider, "consider defining PREFERRED_PROVIDER_%s" % item)
            bb.event.fire(bb.event.MultipleProviders(item, providers_list), cfgData)
            self.consider_msgs_cache.append(item)

@@ -414,7 +409,7 @@ class TaskData:
            fnid = self.getfn_id(fn)
            if fnid in self.failed_fnids:
                continue
            logger.debug(2, "adding %s to satisfy %s", fn, item)
            bb.msg.debug(2, bb.msg.domain.Provider, "adding %s to satisfy %s" % (fn, item))
            self.add_build_target(fn, item)
            self.add_tasks(fn, dataCache)

@@ -436,14 +431,16 @@ class TaskData:
        all_p = bb.providers.getRuntimeProviders(dataCache, item)

        if not all_p:
            bb.event.fire(bb.event.NoProvider(item, runtime=True, dependees=self.get_rdependees_str(item), reasons=self.get_reasons(item, True)), cfgData)
            bb.msg.error(bb.msg.domain.Provider, "'%s' RDEPENDS/RRECOMMENDS or otherwise requires the runtime entity '%s' but it wasn't found in any PACKAGE or RPROVIDES variables" % (self.get_rdependees_str(item), item))
            bb.event.fire(bb.event.NoProvider(item, runtime=True), cfgData)
            raise bb.providers.NoRProvider(item)

        eligible, numberPreferred = bb.providers.filterProvidersRunTime(all_p, item, cfgData, dataCache)
        eligible = [p for p in eligible if not self.getfn_id(p) in self.failed_fnids]

        if not eligible:
            bb.event.fire(bb.event.NoProvider(item, runtime=True, dependees=self.get_rdependees_str(item), reasons=["No eligible RPROVIDERs exist for '%s'" % item]), cfgData)
            bb.msg.error(bb.msg.domain.Provider, "'%s' RDEPENDS/RRECOMMENDS or otherwise requires the runtime entity '%s' but it wasn't found in any PACKAGE or RPROVIDES variables of any buildable targets.\nEnable debugging and see earlier logs to find unbuildable targets." % (self.get_rdependees_str(item), item))
            bb.event.fire(bb.event.NoProvider(item, runtime=True), cfgData)
            raise bb.providers.NoRProvider(item)

        if len(eligible) > 1 and numberPreferred == 0:
@@ -451,7 +448,9 @@ class TaskData:
            providers_list = []
            for fn in eligible:
                providers_list.append(dataCache.pkg_fn[fn])
            bb.event.fire(bb.event.MultipleProviders(item, providers_list, runtime=True), cfgData)
            bb.msg.note(2, bb.msg.domain.Provider, "multiple providers are available for runtime %s (%s);" % (item, ", ".join(providers_list)))
            bb.msg.note(2, bb.msg.domain.Provider, "consider defining a PREFERRED_PROVIDER entry to match runtime %s" % item)
            bb.event.fire(bb.event.MultipleProviders(item,providers_list, runtime=True), cfgData)
            self.consider_msgs_cache.append(item)

        if numberPreferred > 1:
@@ -459,7 +458,9 @@ class TaskData:
            providers_list = []
            for fn in eligible:
                providers_list.append(dataCache.pkg_fn[fn])
            bb.event.fire(bb.event.MultipleProviders(item, providers_list, runtime=True), cfgData)
            bb.msg.note(2, bb.msg.domain.Provider, "multiple providers are available for runtime %s (top %s entries preferred) (%s);" % (item, numberPreferred, ", ".join(providers_list)))
            bb.msg.note(2, bb.msg.domain.Provider, "consider defining only one PREFERRED_PROVIDER entry to match runtime %s" % item)
            bb.event.fire(bb.event.MultipleProviders(item,providers_list, runtime=True), cfgData)
            self.consider_msgs_cache.append(item)

        # run through the list until we find one that we can build
@@ -467,7 +468,7 @@ class TaskData:
            fnid = self.getfn_id(fn)
            if fnid in self.failed_fnids:
                continue
            logger.debug(2, "adding '%s' to satisfy runtime '%s'", fn, item)
            bb.msg.debug(2, bb.msg.domain.Provider, "adding '%s' to satisfy runtime '%s'" % (fn, item))
            self.add_runtime_target(fn, item)
            self.add_tasks(fn, dataCache)

@@ -480,7 +481,7 @@ class TaskData:
        """
        if fnid in self.failed_fnids:
            return
        logger.debug(1, "File '%s' is unbuildable, removing...", self.fn_index[fnid])
        bb.msg.debug(1, bb.msg.domain.Provider, "File '%s' is unbuildable, removing..." % self.fn_index[fnid])
        self.failed_fnids.append(fnid)
        for target in self.build_targets:
            if fnid in self.build_targets[target]:
@@ -502,21 +503,20 @@ class TaskData:
            missing_list = [self.build_names_index[targetid]]
        else:
            missing_list = [self.build_names_index[targetid]] + missing_list
        logger.verbose("Target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s", self.build_names_index[targetid], missing_list)
        bb.msg.note(2, bb.msg.domain.Provider, "Target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s" % (self.build_names_index[targetid], missing_list))
        self.failed_deps.append(targetid)
        dependees = self.get_dependees(targetid)
        for fnid in dependees:
            self.fail_fnid(fnid, missing_list)
        for taskid in xrange(len(self.tasks_idepends)):
        for taskid in range(len(self.tasks_idepends)):
            idepends = self.tasks_idepends[taskid]
            for (idependid, idependtask) in idepends:
                if idependid == targetid:
                    self.fail_fnid(self.tasks_fnid[taskid], missing_list)

        if self.abort and targetid in self.external_targets:
            target = self.build_names_index[targetid]
            logger.error("Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s", target, missing_list)
            raise bb.providers.NoProvider(target)
            bb.msg.error(bb.msg.domain.Provider, "Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s" % (self.build_names_index[targetid], missing_list))
            raise bb.providers.NoProvider

    def remove_runtarget(self, targetid, missing_list = []):
        """
@@ -528,7 +528,7 @@ class TaskData:
        else:
            missing_list = [self.run_names_index[targetid]] + missing_list

        logger.info("Runtime target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s", self.run_names_index[targetid], missing_list)
        bb.msg.note(1, bb.msg.domain.Provider, "Runtime target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s" % (self.run_names_index[targetid], missing_list))
        self.failed_rdeps.append(targetid)
        dependees = self.get_rdependees(targetid)
        for fnid in dependees:
@@ -538,8 +538,8 @@ class TaskData:
        """
        Resolve all unresolved build and runtime targets
        """
        logger.info("Resolving any missing task queue dependencies")
        while True:
        bb.msg.note(1, bb.msg.domain.TaskData, "Resolving any missing task queue dependencies")
        while 1:
            added = 0
            for target in self.get_unresolved_build_targets(dataCache):
                try:
@@ -548,6 +548,10 @@ class TaskData:
                except bb.providers.NoProvider:
                    targetid = self.getbuild_id(target)
                    if self.abort and targetid in self.external_targets:
                        if self.get_rdependees_str(target):
                            bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (target, self.get_dependees_str(target)))
                        else:
                            bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s'" % (target))
                        raise
                    self.remove_buildtarget(targetid)
            for target in self.get_unresolved_run_targets(dataCache):
@@ -556,7 +560,7 @@ class TaskData:
                    added = added + 1
                except bb.providers.NoRProvider:
                    self.remove_runtarget(self.getrun_id(target))
            logger.debug(1, "Resolved " + str(added) + " extra dependencies")
            bb.msg.debug(1, bb.msg.domain.TaskData, "Resolved " + str(added) + " extra dependencies")
            if added == 0:
                break
|
||||
# self.dump_data()
|
||||
@@ -565,40 +569,42 @@ class TaskData:
|
||||
"""
|
||||
Dump some debug information on the internal data structures
|
||||
"""
|
||||
logger.debug(3, "build_names:")
|
||||
logger.debug(3, ", ".join(self.build_names_index))
|
||||
bb.msg.debug(3, bb.msg.domain.TaskData, "build_names:")
|
||||
bb.msg.debug(3, bb.msg.domain.TaskData, ", ".join(self.build_names_index))
|
||||
|
||||
logger.debug(3, "run_names:")
|
||||
logger.debug(3, ", ".join(self.run_names_index))
|
||||
bb.msg.debug(3, bb.msg.domain.TaskData, "run_names:")
|
||||
bb.msg.debug(3, bb.msg.domain.TaskData, ", ".join(self.run_names_index))
|
||||
|
||||
logger.debug(3, "build_targets:")
|
||||
for buildid in xrange(len(self.build_names_index)):
|
||||
bb.msg.debug(3, bb.msg.domain.TaskData, "build_targets:")
|
||||
for buildid in range(len(self.build_names_index)):
|
||||
target = self.build_names_index[buildid]
|
||||
targets = "None"
|
||||
if buildid in self.build_targets:
|
||||
targets = self.build_targets[buildid]
|
||||
logger.debug(3, " (%s)%s: %s", buildid, target, targets)
|
||||
bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s: %s" % (buildid, target, targets))
|
||||
|
||||
logger.debug(3, "run_targets:")
|
||||
for runid in xrange(len(self.run_names_index)):
|
||||
bb.msg.debug(3, bb.msg.domain.TaskData, "run_targets:")
|
||||
for runid in range(len(self.run_names_index)):
|
||||
target = self.run_names_index[runid]
|
||||
targets = "None"
|
||||
if runid in self.run_targets:
|
||||
targets = self.run_targets[runid]
|
||||
logger.debug(3, " (%s)%s: %s", runid, target, targets)
|
||||
bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s: %s" % (runid, target, targets))
|
||||
|
||||
logger.debug(3, "tasks:")
|
||||
for task in xrange(len(self.tasks_name)):
|
||||
logger.debug(3, " (%s)%s - %s: %s",
|
||||
task,
|
||||
self.fn_index[self.tasks_fnid[task]],
|
||||
self.tasks_name[task],
|
||||
self.tasks_tdepends[task])
|
||||
bb.msg.debug(3, bb.msg.domain.TaskData, "tasks:")
|
||||
for task in range(len(self.tasks_name)):
|
||||
bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s - %s: %s" % (
|
||||
task,
|
||||
self.fn_index[self.tasks_fnid[task]],
|
||||
self.tasks_name[task],
|
||||
self.tasks_tdepends[task]))
|
||||
|
||||
logger.debug(3, "dependency ids (per fn):")
|
||||
bb.msg.debug(3, bb.msg.domain.TaskData, "dependency ids (per fn):")
|
||||
for fnid in self.depids:
|
||||
logger.debug(3, " %s %s: %s", fnid, self.fn_index[fnid], self.depids[fnid])
|
||||
bb.msg.debug(3, bb.msg.domain.TaskData, " %s %s: %s" % (fnid, self.fn_index[fnid], self.depids[fnid]))
|
||||
|
||||
logger.debug(3, "runtime dependency ids (per fn):")
|
||||
bb.msg.debug(3, bb.msg.domain.TaskData, "runtime dependency ids (per fn):")
|
||||
for fnid in self.rdepids:
|
||||
logger.debug(3, " %s %s: %s", fnid, self.fn_index[fnid], self.rdepids[fnid])
|
||||
bb.msg.debug(3, bb.msg.domain.TaskData, " %s %s: %s" % (fnid, self.fn_index[fnid], self.rdepids[fnid]))
|
||||
|
||||
|
||||
|
||||
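The hunks above only change how the multiple-provider diagnostics are emitted (logger calls on one side, bb.msg calls on the other); the provider-selection behaviour itself is identical on both sides. For readers unfamiliar with that behaviour, here is a minimal, self-contained sketch of what PREFERRED_PROVIDER resolution amounts to; the names are hypothetical, not the real taskdata.py API:

    # Sketch: pick one provider for a target, preferring an explicit
    # PREFERRED_PROVIDER entry and warning when the choice is ambiguous.
    def pick_provider(target, providers, preferred_providers):
        # preferred_providers maps target name -> recipe name, as a
        # PREFERRED_PROVIDER_<target> assignment would in a conf file.
        preferred = preferred_providers.get(target)
        if preferred and preferred in providers:
            return preferred
        if len(providers) > 1:
            print("multiple providers are available for %s (%s); "
                  "consider defining a PREFERRED_PROVIDER entry" %
                  (target, ", ".join(providers)))
        # Fall back to a deterministic choice so builds stay reproducible.
        return sorted(providers)[0] if providers else None

    print(pick_provider("virtual/kernel",
                        ["linux-yocto", "linux-dummy"],
                        {"virtual/kernel": "linux-yocto"}))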
@@ -15,3 +15,4 @@
 # You should have received a copy of the GNU General Public License along
 # with this program; if not, write to the Free Software Foundation, Inc.,
 # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+

@@ -1,5 +1,5 @@
 #
-# Gtk+ UI pieces for BitBake
+# BitBake UI Implementation
 #
 # Copyright (C) 2006-2007 Richard Purdie
 #
@@ -15,3 +15,4 @@
 # You should have received a copy of the GNU General Public License along
 # with this program; if not, write to the Free Software Foundation, Inc.,
 # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
@@ -1,252 +0,0 @@
-#!/usr/bin/env python
-#
-# BitBake Graphical GTK User Interface
-#
-# Copyright (C) 2012 Intel Corporation
-#
-# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
-# Authored by Shane Wang <shane.wang@intel.com>
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License version 2 as
-# published by the Free Software Foundation.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License along
-# with this program; if not, write to the Free Software Foundation, Inc.,
-# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-
-import gtk
-import pango
-import gobject
-from bb.ui.crumbs.progressbar import HobProgressBar
-from bb.ui.crumbs.hobwidget import hic, HobNotebook, HobAltButton, HobWarpCellRendererText
-from bb.ui.crumbs.runningbuild import RunningBuildTreeView
-from bb.ui.crumbs.runningbuild import BuildFailureTreeView
-from bb.ui.crumbs.hobpages import HobPage
-
-class BuildConfigurationTreeView(gtk.TreeView):
-    def __init__ (self):
-        gtk.TreeView.__init__(self)
-        self.set_rules_hint(False)
-        self.set_headers_visible(False)
-        self.set_property("hover-expand", True)
-        self.get_selection().set_mode(gtk.SELECTION_SINGLE)
-
-        # The icon that indicates whether we're building or failed.
-        renderer0 = gtk.CellRendererText()
-        renderer0.set_property('font-desc', pango.FontDescription('courier bold 12'))
-        col0 = gtk.TreeViewColumn ("Name", renderer0, text=0)
-        self.append_column (col0)
-
-        # The message of configuration.
-        renderer1 = HobWarpCellRendererText(col_number=1)
-        col1 = gtk.TreeViewColumn ("Values", renderer1, text=1)
-        self.append_column (col1)
-
-    def set_vars(self, key="", var=[""]):
-        d = {}
-        if type(var) == str:
-            d = {key: [var]}
-        elif type(var) == list and len(var) > 1:
-            #create the sub item line
-            l = []
-            text = ""
-            for item in var:
-                text = " - " + item
-                l.append(text)
-            d = {key: var}
-
-        return d
-
-    def set_config_model(self, show_vars):
-        listmodel = gtk.TreeStore(gobject.TYPE_STRING, gobject.TYPE_STRING)
-        parent = None
-        for var in show_vars:
-            for subitem in var.items():
-                name = subitem[0]
-                is_parent = True
-                for value in subitem[1]:
-                    if is_parent:
-                        parent = listmodel.append(parent, (name, value))
-                        is_parent = False
-                    else:
-                        listmodel.append(parent, (None, value))
-                name = " - "
-            parent = None
-        # renew the tree model after get the configuration messages
-        self.set_model(listmodel)
-
-    def show(self, src_config_info):
-        vars = []
-        vars.append(self.set_vars("BB version:", src_config_info.bb_version))
-        vars.append(self.set_vars("Target arch:", src_config_info.target_arch))
-        vars.append(self.set_vars("Target OS:", src_config_info.target_os))
-        vars.append(self.set_vars("Machine:", src_config_info.curr_mach))
-        vars.append(self.set_vars("Distro:", src_config_info.curr_distro))
-        vars.append(self.set_vars("Distro version:", src_config_info.distro_version))
-        vars.append(self.set_vars("SDK machine:", src_config_info.curr_sdk_machine))
-        vars.append(self.set_vars("Tune feature:", src_config_info.tune_pkgarch))
-        vars.append(self.set_vars("Layers:", src_config_info.layers))
-
-        for path in src_config_info.layers:
-            import os, os.path
-            if os.path.exists(path):
-                f = os.popen('cd %s; git branch 2>&1 | grep "^* " | tr -d "* "' % path)
-                if f:
-                    branch = f.readline().lstrip('\n').rstrip('\n')
-                    vars.append(self.set_vars("Branch:", branch))
-                    f.close()
-                break
-
-        self.set_config_model(vars)
-
-    def reset(self):
-        self.set_model(None)
-
-#
-# BuildDetailsPage
-#
-
-class BuildDetailsPage (HobPage):
-
-    def __init__(self, builder):
-        super(BuildDetailsPage, self).__init__(builder, "Building ...")
-
-        self.num_of_issues = 0
-        self.endpath = (0,)
-        # create visual elements
-        self.create_visual_elements()
-
-    def create_visual_elements(self):
-        # create visual elements
-        self.vbox = gtk.VBox(False, 12)
-
-        self.progress_box = gtk.VBox(False, 12)
-        self.task_status = gtk.Label("\n") # to ensure layout is correct
-        self.task_status.set_alignment(0.0, 0.5)
-        self.progress_box.pack_start(self.task_status, expand=False, fill=False)
-        self.progress_hbox = gtk.HBox(False, 6)
-        self.progress_box.pack_end(self.progress_hbox, expand=True, fill=True)
-        self.progress_bar = HobProgressBar()
-        self.progress_hbox.pack_start(self.progress_bar, expand=True, fill=True)
-        self.stop_button = HobAltButton("Stop")
-        self.stop_button.connect("clicked", self.stop_button_clicked_cb)
-        self.stop_button.set_sensitive(False)
-        self.progress_hbox.pack_end(self.stop_button, expand=False, fill=False)
-
-        self.notebook = HobNotebook()
-        self.config_tv = BuildConfigurationTreeView()
-        self.scrolled_view_config = gtk.ScrolledWindow ()
-        self.scrolled_view_config.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
-        self.scrolled_view_config.add(self.config_tv)
-        self.notebook.append_page(self.scrolled_view_config, gtk.Label("Build configuration"))
-
-        self.failure_tv = BuildFailureTreeView()
-        self.failure_model = self.builder.handler.build.model.failure_model()
-        self.failure_tv.set_model(self.failure_model)
-        self.scrolled_view_failure = gtk.ScrolledWindow ()
-        self.scrolled_view_failure.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
-        self.scrolled_view_failure.add(self.failure_tv)
-        self.notebook.append_page(self.scrolled_view_failure, gtk.Label("Issues"))
-
-        self.build_tv = RunningBuildTreeView(readonly=True, hob=True)
-        self.build_tv.set_model(self.builder.handler.build.model)
-        self.scrolled_view_build = gtk.ScrolledWindow ()
-        self.scrolled_view_build.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
-        self.scrolled_view_build.add(self.build_tv)
-        self.notebook.append_page(self.scrolled_view_build, gtk.Label("Log"))
-
-        self.builder.handler.build.model.connect_after("row-changed", self.scroll_to_present_row, self.scrolled_view_build.get_vadjustment(), self.build_tv)
-
-        self.button_box = gtk.HBox(False, 6)
-        self.back_button = HobAltButton("<< Back to image configuration")
-        self.back_button.connect("clicked", self.back_button_clicked_cb)
-        self.button_box.pack_start(self.back_button, expand=False, fill=False)
-
-    def update_build_status(self, current, total, task):
-        recipe_path, recipe_task = task.split(", ")
-        recipe = os.path.basename(recipe_path).rstrip(".bb")
-        tsk_msg = "<b>Running task %s of %s:</b> %s\n<b>Recipe:</b> %s" % (current, total, recipe_task, recipe)
-        self.task_status.set_markup(tsk_msg)
-        self.stop_button.set_sensitive(True)
-
-    def reset_build_status(self):
-        self.task_status.set_markup("\n") # to ensure layout is correct
-        self.endpath = (0,)
-
-    def show_issues(self):
-        self.num_of_issues += 1
-        self.notebook.show_indicator_icon("Issues", self.num_of_issues)
-
-    def reset_issues(self):
-        self.num_of_issues = 0
-        self.notebook.hide_indicator_icon("Issues")
-
-    def _remove_all_widget(self):
-        children = self.vbox.get_children() or []
-        for child in children:
-            self.vbox.remove(child)
-        children = self.box_group_area.get_children() or []
-        for child in children:
-            self.box_group_area.remove(child)
-        children = self.get_children() or []
-        for child in children:
-            self.remove(child)
-
-    def show_page(self, step):
-        self._remove_all_widget()
-        if step == self.builder.PACKAGE_GENERATING or step == self.builder.FAST_IMAGE_GENERATING:
-            self.title = "Building packages ..."
-        else:
-            self.title = "Building image ..."
-        self.build_details_top = self.add_onto_top_bar(None)
-        self.pack_start(self.build_details_top, expand=False, fill=False)
-        self.pack_start(self.group_align, expand=True, fill=True)
-
-        self.box_group_area.pack_start(self.vbox, expand=True, fill=True)
-
-        self.progress_bar.reset()
-        self.config_tv.reset()
-        self.vbox.pack_start(self.progress_box, expand=False, fill=False)
-
-        self.vbox.pack_start(self.notebook, expand=True, fill=True)
-
-        self.box_group_area.pack_end(self.button_box, expand=False, fill=False)
-        self.show_all()
-        self.back_button.hide()
-
-        self.reset_build_status()
-        self.reset_issues()
-
-    def update_progress_bar(self, title, fraction, status=None):
-        self.progress_bar.update(fraction)
-        self.progress_bar.set_title(title)
-        self.progress_bar.set_rcstyle(status)
-
-    def back_button_clicked_cb(self, button):
-        self.builder.show_configuration()
-
-    def show_back_button(self):
-        self.back_button.show()
-
-    def stop_button_clicked_cb(self, button):
-        self.builder.stop_build()
-
-    def hide_stop_button(self):
-        self.stop_button.hide()
-
-    def scroll_to_present_row(self, model, path, iter, v_adj, treeview):
-        if treeview and v_adj:
-            if path[0] > self.endpath[0]: # check the event is a new row append or not
-                self.endpath = path
-                # check the gtk.adjustment position is at end boundary or not
-                if (v_adj.upper <= v_adj.page_size) or (v_adj.value == v_adj.upper - v_adj.page_size):
-                    treeview.scroll_to_cell(path)
-
-    def show_configurations(self, configurations):
-        self.config_tv.show(configurations)
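The scroll_to_present_row() handler in the deleted page above only follows the log when the view is already at the bottom; that "stick to the end unless the user scrolled away" test is a generic GTK adjustment idiom worth seeing on its own. A dependency-free sketch of the same check, with the gtk.Adjustment fields passed in as plain numbers (invented helper name, not part of Hob):

    # Sketch: decide whether a scrolled view should auto-follow new rows.
    def should_autoscroll(value, upper, page_size):
        # Either everything fits without scrolling, or the current position
        # equals the maximum scroll offset (upper - page_size).
        return upper <= page_size or value == upper - page_size

    assert should_autoscroll(value=70, upper=100, page_size=30)   # at bottom
    assert not should_autoscroll(value=10, upper=100, page_size=30)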
File diff suppressed because it is too large
@@ -28,7 +28,7 @@ import time
 class BuildConfiguration:
     """ Represents a potential *or* historic *or* concrete build. It
     encompasses all the things that we need to tell bitbake to do to make it
-    build what we want it to build.
+    build what we want it to build.

     It also stored the metadata URL and the set of possible machines (and the
     distros / images / uris for these. Apart from the metdata URL these are
@@ -73,33 +73,34 @@ class BuildConfiguration:
         return self.urls

     # It might be a lot lot better if we stored these in like, bitbake conf
-    # file format.
-    @staticmethod
+    # file format.
+    @staticmethod
     def load_from_file (filename):
+        f = open (filename, "r")

         conf = BuildConfiguration()
-        with open(filename, "r") as f:
-            for line in f:
-                data = line.split (";")[1]
-                if (line.startswith ("metadata-url;")):
-                    conf.metadata_url = data.strip()
-                    continue
-                if (line.startswith ("url;")):
-                    conf.urls += [data.strip()]
-                    continue
-                if (line.startswith ("extra-url;")):
-                    conf.extra_urls += [data.strip()]
-                    continue
-                if (line.startswith ("machine;")):
-                    conf.machine = data.strip()
-                    continue
-                if (line.startswith ("distribution;")):
-                    conf.distro = data.strip()
-                    continue
-                if (line.startswith ("image;")):
-                    conf.image = data.strip()
-                    continue
+        for line in f.readlines():
+            data = line.split (";")[1]
+            if (line.startswith ("metadata-url;")):
+                conf.metadata_url = data.strip()
+                continue
+            if (line.startswith ("url;")):
+                conf.urls += [data.strip()]
+                continue
+            if (line.startswith ("extra-url;")):
+                conf.extra_urls += [data.strip()]
+                continue
+            if (line.startswith ("machine;")):
+                conf.machine = data.strip()
+                continue
+            if (line.startswith ("distribution;")):
+                conf.distro = data.strip()
+                continue
+            if (line.startswith ("image;")):
+                conf.image = data.strip()
+                continue

+        f.close ()
         return conf

     # Serialise to a file. This is part of the build process and we use this
@@ -139,13 +140,13 @@ class BuildResult(gobject.GObject):
     ".conf" in the directory for the build.

     This is GObject so that it can be included in the TreeStore."""
-
+
     (STATE_COMPLETE, STATE_FAILED, STATE_ONGOING) = \
         (0, 1, 2)

     def __init__ (self, parent, identifier):
         gobject.GObject.__init__ (self)
-        self.date = None
+        self.date = None

         self.files = []
         self.status = None
@@ -156,8 +157,8 @@ class BuildResult(gobject.GObject):
         # format build-<year><month><day>-<ordinal> we can easily
         # pull it out.
         # TODO: Better to stat a file?
-        (_, date, revision) = identifier.split ("-")
-        print(date)
+        (_ , date, revision) = identifier.split ("-")
+        print date

         year = int (date[0:4])
         month = int (date[4:6])
@@ -180,7 +181,7 @@ class BuildResult(gobject.GObject):
             self.add_file (file)

     def add_file (self, file):
-        # Just add the file for now. Don't care about the type.
+        # Just add the file for now. Don't care about the type.
         self.files += [(file, None)]

 class BuildManagerModel (gtk.TreeStore):
@@ -193,7 +194,7 @@ class BuildManagerModel (gtk.TreeStore):

     def __init__ (self):
         gtk.TreeStore.__init__ (self,
-                                gobject.TYPE_STRING,
+                                gobject.TYPE_STRING,
                                 gobject.TYPE_STRING,
                                 gobject.TYPE_STRING,
                                 gobject.TYPE_STRING,
@@ -206,7 +207,7 @@ class BuildManager (gobject.GObject):
     "results" directory but is also used for starting a new build."""

     __gsignals__ = {
-        'population-finished' : (gobject.SIGNAL_RUN_LAST,
+        'population-finished' : (gobject.SIGNAL_RUN_LAST,
                                  gobject.TYPE_NONE,
                                  ()),
         'populate-error' : (gobject.SIGNAL_RUN_LAST,
@@ -219,13 +220,13 @@ class BuildManager (gobject.GObject):
         date = long (time.mktime (result.date.timetuple()))

         # Add a top level entry for the build
-
-        self.model.set (iter,
+
+        self.model.set (iter,
                         BuildManagerModel.COL_IDENT, result.identifier,
                         BuildManagerModel.COL_DESC, result.conf.image,
-                        BuildManagerModel.COL_MACHINE, result.conf.machine,
-                        BuildManagerModel.COL_DISTRO, result.conf.distro,
-                        BuildManagerModel.COL_BUILD_RESULT, result,
+                        BuildManagerModel.COL_MACHINE, result.conf.machine,
+                        BuildManagerModel.COL_DISTRO, result.conf.distro,
+                        BuildManagerModel.COL_BUILD_RESULT, result,
                         BuildManagerModel.COL_DATE, date,
                         BuildManagerModel.COL_STATE, result.state)

@@ -256,7 +257,7 @@ class BuildManager (gobject.GObject):

         while (iter):
             (ident, state) = self.model.get(iter,
-                                            BuildManagerModel.COL_IDENT,
+                                            BuildManagerModel.COL_IDENT,
                                             BuildManagerModel.COL_STATE)

             if state == BuildResult.STATE_ONGOING:
@@ -384,8 +385,8 @@ class BuildManager (gobject.GObject):
                                    build_directory])
             server.runCommand(["buildTargets", [conf.image], "rootfs"])

-        except Exception as e:
-            print(e)
+        except Exception, e:
+            print e

 class BuildManagerTreeView (gtk.TreeView):
     """ The tree view for the build manager. This shows the historic builds
@@ -421,29 +422,29 @@ class BuildManagerTreeView (gtk.TreeView):

         # Misc descriptiony thing
         renderer = gtk.CellRendererText ()
-        col = gtk.TreeViewColumn (None, renderer,
+        col = gtk.TreeViewColumn (None, renderer,
                                   text=BuildManagerModel.COL_DESC)
         self.append_column (col)

         # Machine
         renderer = gtk.CellRendererText ()
-        col = gtk.TreeViewColumn ("Machine", renderer,
+        col = gtk.TreeViewColumn ("Machine", renderer,
                                   text=BuildManagerModel.COL_MACHINE)
         self.append_column (col)

         # distro
         renderer = gtk.CellRendererText ()
-        col = gtk.TreeViewColumn ("Distribution", renderer,
+        col = gtk.TreeViewColumn ("Distribution", renderer,
                                   text=BuildManagerModel.COL_DISTRO)
         self.append_column (col)

         # date (using a custom function for formatting the cell contents it
         # takes epoch -> human readable string)
         renderer = gtk.CellRendererText ()
-        col = gtk.TreeViewColumn ("Date", renderer,
+        col = gtk.TreeViewColumn ("Date", renderer,
                                   text=BuildManagerModel.COL_DATE)
         self.append_column (col)
-        col.set_cell_data_func (renderer,
+        col.set_cell_data_func (renderer,
                                 self.date_format_custom_cell_data_func)

         # For status.
@@ -453,3 +454,4 @@ class BuildManagerTreeView (gtk.TreeView):
         self.append_column (col)
         col.set_cell_data_func (renderer,
                                 self.state_format_custom_cell_data_fun)
+
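The load_from_file() hunk above is essentially the classic open()/readlines()/close() to context-manager conversion; the "key;value" parsing itself is unchanged on both sides. The with-form closes the file even if parsing raises, which the explicit close() version can miss. The same pattern can be shown standalone (a sketch with an invented field table, not the BuildConfiguration API):

    # Sketch: parse "key;value" lines with a context manager, mirroring
    # the with-open form of BuildConfiguration.load_from_file().
    def load_conf(filename):
        conf = {"urls": []}
        with open(filename, "r") as f:
            for line in f:
                if ";" not in line:
                    continue
                key, _, data = line.partition(";")
                if key == "url":
                    conf["urls"].append(data.strip())
                else:
                    conf[key] = data.strip()
        return conf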
File diff suppressed because it is too large
@@ -1,37 +0,0 @@
|
||||
#
|
||||
# BitBake Graphical GTK User Interface
|
||||
#
|
||||
# Copyright (C) 2012 Intel Corporation
|
||||
#
|
||||
# Authored by Shane Wang <shane.wang@intel.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License version 2 as
|
||||
# published by the Free Software Foundation.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License along
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
|
||||
class HobColors:
|
||||
WHITE = "#ffffff"
|
||||
PALE_GREEN = "#aaffaa"
|
||||
ORANGE = "#eb8e68"
|
||||
PALE_RED = "#ffaaaa"
|
||||
GRAY = "#aaaaaa"
|
||||
LIGHT_GRAY = "#dddddd"
|
||||
SLIGHT_DARK = "#5f5f5f"
|
||||
DARK = "#3c3b37"
|
||||
BLACK = "#000000"
|
||||
PALE_BLUE = "#53b8ff"
|
||||
DEEP_RED = "#aa3e3e"
|
||||
|
||||
OK = WHITE
|
||||
RUNNING = PALE_GREEN
|
||||
WARNING = ORANGE
|
||||
ERROR = PALE_RED
|
||||
@@ -1,506 +0,0 @@
-#
-# BitBake Graphical GTK User Interface
-#
-# Copyright (C) 2011 Intel Corporation
-#
-# Authored by Joshua Lock <josh@linux.intel.com>
-# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License version 2 as
-# published by the Free Software Foundation.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License along
-# with this program; if not, write to the Free Software Foundation, Inc.,
-# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-
-import gobject
-import logging
-from bb.ui.crumbs.runningbuild import RunningBuild
-from bb.ui.crumbs.hobwidget import hcc
-
-class HobHandler(gobject.GObject):
-
-    """
-    This object does BitBake event handling for the hob gui.
-    """
-    __gsignals__ = {
-         "package-formats-updated" : (gobject.SIGNAL_RUN_LAST,
-                                      gobject.TYPE_NONE,
-                                      (gobject.TYPE_PYOBJECT,)),
-         "config-updated"          : (gobject.SIGNAL_RUN_LAST,
-                                      gobject.TYPE_NONE,
-                                      (gobject.TYPE_STRING, gobject.TYPE_PYOBJECT,)),
-         "command-succeeded"       : (gobject.SIGNAL_RUN_LAST,
-                                      gobject.TYPE_NONE,
-                                      (gobject.TYPE_INT,)),
-         "command-failed"          : (gobject.SIGNAL_RUN_LAST,
-                                      gobject.TYPE_NONE,
-                                      (gobject.TYPE_STRING,)),
-         "generating-data"         : (gobject.SIGNAL_RUN_LAST,
-                                      gobject.TYPE_NONE,
-                                      ()),
-         "data-generated"          : (gobject.SIGNAL_RUN_LAST,
-                                      gobject.TYPE_NONE,
-                                      ()),
-         "parsing-started"         : (gobject.SIGNAL_RUN_LAST,
-                                      gobject.TYPE_NONE,
-                                      (gobject.TYPE_PYOBJECT,)),
-         "parsing"                 : (gobject.SIGNAL_RUN_LAST,
-                                      gobject.TYPE_NONE,
-                                      (gobject.TYPE_PYOBJECT,)),
-         "parsing-completed"       : (gobject.SIGNAL_RUN_LAST,
-                                      gobject.TYPE_NONE,
-                                      (gobject.TYPE_PYOBJECT,)),
-    }
-
-    (PARSE_CONFIG, GENERATE_CONFIGURATION, GENERATE_RECIPES, GENERATE_PACKAGES, GENERATE_IMAGE, POPULATE_PACKAGEINFO) = range(6)
-    (SUB_PATH_LAYERS, SUB_FILES_DISTRO, SUB_FILES_MACH, SUB_FILES_SDKMACH, SUB_MATCH_CLASS, SUB_PARSE_CONFIG, SUB_GNERATE_TGTS, SUB_GENERATE_PKGINFO, SUB_BUILD_RECIPES, SUB_BUILD_IMAGE) = range(10)
-
-    def __init__(self, server, recipe_model, package_model):
-        super(HobHandler, self).__init__()
-
-        self.build = RunningBuild(sequential=True)
-
-        self.recipe_model = recipe_model
-        self.package_model = package_model
-
-        self.commands_async = []
-        self.generating = False
-        self.current_phase = None
-        self.building = False
-        self.recipe_queue = []
-        self.package_queue = []
-
-        self.server = server
-        self.error_msg = ""
-        self.initcmd = None
-
-    def set_busy(self):
-        if not self.generating:
-            self.emit("generating-data")
-            self.generating = True
-
-    def clear_busy(self):
-        if self.generating:
-            self.emit("data-generated")
-            self.generating = False
-
-    def run_next_command(self, initcmd=None):
-        if initcmd != None:
-            self.initcmd = initcmd
-
-        if self.commands_async:
-            self.set_busy()
-            next_command = self.commands_async.pop(0)
-        else:
-            self.clear_busy()
-            if self.initcmd != None:
-                self.emit("command-succeeded", self.initcmd)
-            return
-
-        if next_command == self.SUB_PATH_LAYERS:
-            self.server.runCommand(["findConfigFilePath", "bblayers.conf"])
-        elif next_command == self.SUB_FILES_DISTRO:
-            self.server.runCommand(["findConfigFiles", "DISTRO"])
-        elif next_command == self.SUB_FILES_MACH:
-            self.server.runCommand(["findConfigFiles", "MACHINE"])
-        elif next_command == self.SUB_FILES_SDKMACH:
-            self.server.runCommand(["findConfigFiles", "MACHINE-SDK"])
-        elif next_command == self.SUB_MATCH_CLASS:
-            self.server.runCommand(["findFilesMatchingInDir", "rootfs_", "classes"])
-        elif next_command == self.SUB_PARSE_CONFIG:
-            self.server.runCommand(["parseConfigurationFiles", "", ""])
-        elif next_command == self.SUB_GNERATE_TGTS:
-            self.server.runCommand(["generateTargetsTree", "classes/image.bbclass", []])
-        elif next_command == self.SUB_GENERATE_PKGINFO:
-            self.server.runCommand(["triggerEvent", "bb.event.RequestPackageInfo()"])
-        elif next_command == self.SUB_BUILD_RECIPES:
-            self.clear_busy()
-            self.building = True
-            self.server.runCommand(["buildTargets", self.recipe_queue, "build"])
-            self.recipe_queue = []
-        elif next_command == self.SUB_BUILD_IMAGE:
-            self.clear_busy()
-            self.building = True
-            targets = [self.hob_image]
-            self.server.runCommand(["setVariable", "LINGUAS_INSTALL", ""])
-            self.server.runCommand(["setVariable", "PACKAGE_INSTALL", " ".join(self.package_queue)])
-            if self.toolchain_packages:
-                self.server.runCommand(["setVariable", "TOOLCHAIN_TARGET_TASK", " ".join(self.toolchain_packages)])
-                targets.append(self.hob_toolchain)
-            self.server.runCommand(["buildTargets", targets, "build"])
-
-    def handle_event(self, event):
-        if not event:
-            return
-
-        if self.building:
-            self.current_phase = "building"
-            self.build.handle_event(event)
-
-        if isinstance(event, bb.event.PackageInfo):
-            self.package_model.populate(event._pkginfolist)
-            self.run_next_command()
-
-        elif isinstance(event, logging.LogRecord):
-            if event.levelno >= logging.ERROR:
-                self.error_msg += event.msg + '\n'
-
-        elif isinstance(event, bb.event.TargetsTreeGenerated):
-            self.current_phase = "data generation"
-            if event._model:
-                self.recipe_model.populate(event._model)
-        elif isinstance(event, bb.event.ConfigFilesFound):
-            self.current_phase = "configuration lookup"
-            var = event._variable
-            values = event._values
-            values.sort()
-            self.emit("config-updated", var, values)
-        elif isinstance(event, bb.event.ConfigFilePathFound):
-            self.current_phase = "configuration lookup"
-        elif isinstance(event, bb.event.FilesMatchingFound):
-            self.current_phase = "configuration lookup"
-            # FIXME: hard coding, should at least be a variable shared between
-            # here and the caller
-            if event._pattern == "rootfs_":
-                formats = []
-                for match in event._matches:
-                    classname, sep, cls = match.rpartition(".")
-                    fs, sep, format = classname.rpartition("_")
-                    formats.append(format)
-                formats.sort()
-                self.emit("package-formats-updated", formats)
-        elif isinstance(event, bb.command.CommandCompleted):
-            self.current_phase = None
-            self.run_next_command()
-        # TODO: Currently there are NoProvider issues when generate
-        # universe tree dependency for non-x86 architecture.
-        # Comment the follow code to enable the build of non-x86
-        # architectures in Hob.
-        #elif isinstance(event, bb.event.NoProvider):
-        #    if event._runtime:
-        #        r = "R"
-        #    else:
-        #        r = ""
-        #    if event._dependees:
-        #        self.error_msg += " Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)" % (r, event._item, ", ".join(event._dependees), r)
-        #    else:
-        #        self.error_msg += " Nothing %sPROVIDES '%s'" % (r, event._item)
-        #    if event._reasons:
-        #        for reason in event._reasons:
-        #            self.error_msg += " %s" % reason
-
-        #    self.commands_async = []
-        #    self.emit("command-failed", self.error_msg)
-        #    self.error_msg = ""
-
-        elif isinstance(event, bb.command.CommandFailed):
-            self.commands_async = []
-            self.clear_busy()
-            self.emit("command-failed", self.error_msg)
-            self.error_msg = ""
-        elif isinstance(event, (bb.event.ParseStarted,
-                                bb.event.CacheLoadStarted,
-                                bb.event.TreeDataPreparationStarted,
-                               )):
-            message = {}
-            message["eventname"] = bb.event.getName(event)
-            message["current"] = 0
-            message["total"] = None
-            message["title"] = "Parsing recipes: "
-            self.emit("parsing-started", message)
-        elif isinstance(event, (bb.event.ParseProgress,
-                                bb.event.CacheLoadProgress,
-                                bb.event.TreeDataPreparationProgress)):
-            message = {}
-            message["eventname"] = bb.event.getName(event)
-            message["current"] = event.current
-            message["total"] = event.total
-            message["title"] = "Parsing recipes: "
-            self.emit("parsing", message)
-        elif isinstance(event, (bb.event.ParseCompleted,
-                                bb.event.CacheLoadCompleted,
-                                bb.event.TreeDataPreparationCompleted)):
-            message = {}
-            message["eventname"] = bb.event.getName(event)
-            message["current"] = event.total
-            message["total"] = event.total
-            message["title"] = "Parsing recipes: "
-            self.emit("parsing-completed", message)
-
-        return
-
-    def init_cooker(self):
-        self.server.runCommand(["initCooker"])
-
-    def parse_config(self):
-        self.commands_async.append(self.SUB_PARSE_CONFIG)
-        self.run_next_command(self.PARSE_CONFIG)
-
-    def parse_generate_configuration(self):
-        self.commands_async.append(self.SUB_PARSE_CONFIG)
-        self.generate_configuration()
-
-    def set_extra_inherit(self, bbclass):
-        inherits = self.server.runCommand(["getVariable", "INHERIT"]) or ""
-        inherits = inherits + " " + bbclass
-        self.server.runCommand(["setVariable", "INHERIT", inherits])
-
-    def set_bblayers(self, bblayers):
-        self.server.runCommand(["setVariable", "BBLAYERS", " ".join(bblayers)])
-
-    def set_machine(self, machine):
-        if machine:
-            self.server.runCommand(["setVariable", "MACHINE", machine])
-
-    def set_sdk_machine(self, sdk_machine):
-        self.server.runCommand(["setVariable", "SDKMACHINE", sdk_machine])
-
-    def set_image_fstypes(self, image_fstypes):
-        self.server.runCommand(["setVariable", "IMAGE_FSTYPES", image_fstypes])
-
-    def set_distro(self, distro):
-        if distro != "defaultsetup":
-            self.server.runCommand(["setVariable", "DISTRO", distro])
-
-    def set_package_format(self, format):
-        package_classes = ""
-        for pkgfmt in format.split():
-            package_classes += ("package_%s" % pkgfmt + " ")
-        self.server.runCommand(["setVariable", "PACKAGE_CLASSES", package_classes])
-
-    def set_bbthreads(self, threads):
-        self.server.runCommand(["setVariable", "BB_NUMBER_THREADS", threads])
-
-    def set_pmake(self, threads):
-        pmake = "-j %s" % threads
-        self.server.runCommand(["setVariable", "PARALLEL_MAKE", pmake])
-
-    def set_dl_dir(self, directory):
-        self.server.runCommand(["setVariable", "DL_DIR", directory])
-
-    def set_sstate_dir(self, directory):
-        self.server.runCommand(["setVariable", "SSTATE_DIR", directory])
-
-    def set_sstate_mirror(self, url):
-        self.server.runCommand(["setVariable", "SSTATE_MIRROR", url])
-
-    def set_extra_size(self, image_extra_size):
-        self.server.runCommand(["setVariable", "IMAGE_ROOTFS_EXTRA_SPACE", str(image_extra_size)])
-
-    def set_rootfs_size(self, image_rootfs_size):
-        self.server.runCommand(["setVariable", "IMAGE_ROOTFS_SIZE", str(image_rootfs_size)])
-
-    def set_incompatible_license(self, incompat_license):
-        self.server.runCommand(["setVariable", "INCOMPATIBLE_LICENSE", incompat_license])
-
-    def set_extra_config(self, extra_setting):
-        for key in extra_setting.keys():
-            value = extra_setting[key]
-            self.server.runCommand(["setVariable", key, value])
-
-    def set_http_proxy(self, http_proxy):
-        self.server.runCommand(["setVariable", "http_proxy", http_proxy])
-
-    def set_https_proxy(self, https_proxy):
-        self.server.runCommand(["setVariable", "https_proxy", https_proxy])
-
-    def set_ftp_proxy(self, ftp_proxy):
-        self.server.runCommand(["setVariable", "ftp_proxy", ftp_proxy])
-
-    def set_all_proxy(self, all_proxy):
-        self.server.runCommand(["setVariable", "all_proxy", all_proxy])
-
-    def set_git_proxy(self, host, port):
-        self.server.runCommand(["setVariable", "GIT_PROXY_HOST", host])
-        self.server.runCommand(["setVariable", "GIT_PROXY_PORT", port])
-
-    def set_cvs_proxy(self, host, port):
-        self.server.runCommand(["setVariable", "CVS_PROXY_HOST", host])
-        self.server.runCommand(["setVariable", "CVS_PROXY_PORT", port])
-
-    def request_package_info(self):
-        self.commands_async.append(self.SUB_GENERATE_PKGINFO)
-        self.run_next_command(self.POPULATE_PACKAGEINFO)
-
-    def generate_configuration(self):
-        self.commands_async.append(self.SUB_PATH_LAYERS)
-        self.commands_async.append(self.SUB_FILES_DISTRO)
-        self.commands_async.append(self.SUB_FILES_MACH)
-        self.commands_async.append(self.SUB_FILES_SDKMACH)
-        self.commands_async.append(self.SUB_MATCH_CLASS)
-        self.run_next_command(self.GENERATE_CONFIGURATION)
-
-    def generate_recipes(self):
-        self.commands_async.append(self.SUB_PARSE_CONFIG)
-        self.commands_async.append(self.SUB_GNERATE_TGTS)
-        self.run_next_command(self.GENERATE_RECIPES)
-
-    def generate_packages(self, tgts):
-        targets = []
-        targets.extend(tgts)
-        self.recipe_queue = targets
-        self.commands_async.append(self.SUB_PARSE_CONFIG)
-        self.commands_async.append(self.SUB_BUILD_RECIPES)
-        self.run_next_command(self.GENERATE_PACKAGES)
-
-    def generate_image(self, tgts, hob_image, hob_toolchain, toolchain_packages=[]):
-        self.package_queue = tgts
-        self.hob_image = hob_image
-        self.hob_toolchain = hob_toolchain
-        self.toolchain_packages = toolchain_packages
-        self.commands_async.append(self.SUB_PARSE_CONFIG)
-        self.commands_async.append(self.SUB_BUILD_IMAGE)
-        self.run_next_command(self.GENERATE_IMAGE)
-
-    def build_succeeded_async(self):
-        self.building = False
-
-    def build_failed_async(self):
-        self.initcmd = None
-        self.commands_async = []
-        self.building = False
-
-    def cancel_parse(self):
-        self.server.runCommand(["stateStop"])
-
-    def cancel_build(self, force=False):
-        if force:
-            # Force the cooker to stop as quickly as possible
-            self.server.runCommand(["stateStop"])
-        else:
-            # Wait for tasks to complete before shutting down, this helps
-            # leave the workdir in a usable state
-            self.server.runCommand(["stateShutdown"])
-
-    def reset_build(self):
-        self.build.reset()
-
-    def _remove_redundant(self, string):
-        ret = []
-        for i in string.split():
-            if i not in ret:
-                ret.append(i)
-        return " ".join(ret)
-
-    def get_parameters(self):
-        # retrieve the parameters from bitbake
-        params = {}
-        params["core_base"] = self.server.runCommand(["getVariable", "COREBASE"]) or ""
-        hob_layer = params["core_base"] + "/meta-hob"
-        params["layer"] = self.server.runCommand(["getVariable", "BBLAYERS"]) or ""
-        if hob_layer not in params["layer"].split():
-            params["layer"] += (" " + hob_layer)
-        params["dldir"] = self.server.runCommand(["getVariable", "DL_DIR"]) or ""
-        params["machine"] = self.server.runCommand(["getVariable", "MACHINE"]) or ""
-        params["distro"] = self.server.runCommand(["getVariable", "DISTRO"]) or "defaultsetup"
-        params["pclass"] = self.server.runCommand(["getVariable", "PACKAGE_CLASSES"]) or ""
-        params["sstatedir"] = self.server.runCommand(["getVariable", "SSTATE_DIR"]) or ""
-        params["sstatemirror"] = self.server.runCommand(["getVariable", "SSTATE_MIRROR"]) or ""
-
-        num_threads = self.server.runCommand(["getCpuCount"])
-        if not num_threads:
-            num_threads = 1
-            max_threads = 65536
-        else:
-            try:
-                num_threads = int(num_threads)
-                max_threads = 16 * num_threads
-            except:
-                num_threads = 1
-                max_threads = 65536
-        params["max_threads"] = max_threads
-
-        bbthread = self.server.runCommand(["getVariable", "BB_NUMBER_THREADS"])
-        if not bbthread:
-            bbthread = num_threads
-        else:
-            try:
-                bbthread = int(bbthread)
-            except:
-                bbthread = num_threads
-        params["bbthread"] = bbthread
-
-        pmake = self.server.runCommand(["getVariable", "PARALLEL_MAKE"])
-        if not pmake:
-            pmake = num_threads
-        elif isinstance(pmake, int):
-            pass
-        else:
-            try:
-                pmake = int(pmake.lstrip("-j "))
-            except:
-                pmake = num_threads
-        params["pmake"] = "-j %s" % pmake
-
-        params["image_addr"] = self.server.runCommand(["getVariable", "DEPLOY_DIR_IMAGE"]) or ""
-
-        image_extra_size = self.server.runCommand(["getVariable", "IMAGE_ROOTFS_EXTRA_SPACE"])
-        if not image_extra_size:
-            image_extra_size = 0
-        else:
-            try:
-                image_extra_size = int(image_extra_size)
-            except:
-                image_extra_size = 0
-        params["image_extra_size"] = image_extra_size
-
-        image_rootfs_size = self.server.runCommand(["getVariable", "IMAGE_ROOTFS_SIZE"])
-        if not image_rootfs_size:
-            image_rootfs_size = 0
-        else:
-            try:
-                image_rootfs_size = int(image_rootfs_size)
-            except:
-                image_rootfs_size = 0
-        params["image_rootfs_size"] = image_rootfs_size
-
-        image_overhead_factor = self.server.runCommand(["getVariable", "IMAGE_OVERHEAD_FACTOR"])
-        if not image_overhead_factor:
-            image_overhead_factor = 1
-        else:
-            try:
-                image_overhead_factor = float(image_overhead_factor)
-            except:
-                image_overhead_factor = 1
-        params['image_overhead_factor'] = image_overhead_factor
-
-        params["incompat_license"] = self._remove_redundant(self.server.runCommand(["getVariable", "INCOMPATIBLE_LICENSE"]) or "")
-        params["sdk_machine"] = self.server.runCommand(["getVariable", "SDKMACHINE"]) or self.server.runCommand(["getVariable", "SDK_ARCH"]) or ""
-
-        params["image_fstypes"] = self._remove_redundant(self.server.runCommand(["getVariable", "IMAGE_FSTYPES"]) or "")
-
-        params["image_types"] = self._remove_redundant(self.server.runCommand(["getVariable", "IMAGE_TYPES"]) or "")
-
-        params["conf_version"] = self.server.runCommand(["getVariable", "CONF_VERSION"]) or ""
-        params["lconf_version"] = self.server.runCommand(["getVariable", "LCONF_VERSION"]) or ""
-
-        params["runnable_image_types"] = self._remove_redundant(self.server.runCommand(["getVariable", "RUNNABLE_IMAGE_TYPES"]) or "")
-        params["runnable_machine_patterns"] = self._remove_redundant(self.server.runCommand(["getVariable", "RUNNABLE_MACHINE_PATTERNS"]) or "")
-        params["deployable_image_types"] = self._remove_redundant(self.server.runCommand(["getVariable", "DEPLOYABLE_IMAGE_TYPES"]) or "")
-        params["tmpdir"] = self.server.runCommand(["getVariable", "TMPDIR"]) or ""
-        params["distro_version"] = self.server.runCommand(["getVariable", "DISTRO_VERSION"]) or ""
-        params["target_os"] = self.server.runCommand(["getVariable", "TARGET_OS"]) or ""
-        params["target_arch"] = self.server.runCommand(["getVariable", "TARGET_ARCH"]) or ""
-        params["tune_pkgarch"] = self.server.runCommand(["getVariable", "TUNE_PKGARCH"]) or ""
-        params["bb_version"] = self.server.runCommand(["getVariable", "BB_MIN_VERSION"]) or ""
-        params["tune_arch"] = self.server.runCommand(["getVariable", "TUNE_ARCH"]) or ""
-
-        params["git_proxy_host"] = self.server.runCommand(["getVariable", "GIT_PROXY_HOST"]) or ""
-        params["git_proxy_port"] = self.server.runCommand(["getVariable", "GIT_PROXY_PORT"]) or ""
-
-        params["http_proxy"] = self.server.runCommand(["getVariable", "http_proxy"]) or ""
-        params["ftp_proxy"] = self.server.runCommand(["getVariable", "ftp_proxy"]) or ""
-        params["https_proxy"] = self.server.runCommand(["getVariable", "https_proxy"]) or ""
-        params["all_proxy"] = self.server.runCommand(["getVariable", "all_proxy"]) or ""
-
-        params["cvs_proxy_host"] = self.server.runCommand(["getVariable", "CVS_PROXY_HOST"]) or ""
-        params["cvs_proxy_port"] = self.server.runCommand(["getVariable", "CVS_PROXY_PORT"]) or ""
-
-        return params
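The deleted HobHandler above drives the BitBake server by queueing sub-commands in commands_async and popping one each time a CommandCompleted event arrives. Stripped of GTK signals and the BitBake server, the control flow reduces to a simple self-advancing queue; the following is a hedged Python 3 sketch with invented names, not the Hob API:

    # Sketch: a self-advancing command queue in the style of
    # HobHandler.run_next_command(); each completion callback triggers
    # the next queued step until the queue drains.
    class CommandQueue:
        def __init__(self, runner):
            self.runner = runner      # callable that starts one async step
            self.pending = []

        def push(self, *steps):
            self.pending.extend(steps)

        def on_completed(self):
            # Called when the current step finishes (cf. CommandCompleted).
            if self.pending:
                self.runner(self.pending.pop(0))
            else:
                print("all steps done")

    q = CommandQueue(lambda step: print("running", step))
    q.push("parse-config", "generate-targets")
    q.on_completed()  # starts "parse-config"
    q.on_completed()  # starts "generate-targets"
    q.on_completed()  # "all steps done"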
@@ -1,764 +0,0 @@
|
||||
#
|
||||
# BitBake Graphical GTK User Interface
|
||||
#
|
||||
# Copyright (C) 2011 Intel Corporation
|
||||
#
|
||||
# Authored by Joshua Lock <josh@linux.intel.com>
|
||||
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
|
||||
# Authored by Shane Wang <shane.wang@intel.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License version 2 as
|
||||
# published by the Free Software Foundation.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License along
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
|
||||
import gtk
|
||||
import gobject
|
||||
from bb.ui.crumbs.hobpages import HobPage
|
||||
|
||||
#
|
||||
# PackageListModel
|
||||
#
|
||||
class PackageListModel(gtk.TreeStore):
|
||||
"""
|
||||
This class defines an gtk.TreeStore subclass which will convert the output
|
||||
of the bb.event.TargetsTreeGenerated event into a gtk.TreeStore whilst also
|
||||
providing convenience functions to access gtk.TreeModel subclasses which
|
||||
provide filtered views of the data.
|
||||
"""
|
||||
(COL_NAME, COL_VER, COL_REV, COL_RNM, COL_SEC, COL_SUM, COL_RDEP, COL_RPROV, COL_SIZE, COL_BINB, COL_INC, COL_FADE_INC) = range(12)
|
||||
|
||||
__gsignals__ = {
|
||||
"package-selection-changed" : (gobject.SIGNAL_RUN_LAST,
|
||||
gobject.TYPE_NONE,
|
||||
()),
|
||||
}
|
||||
|
||||
def __init__(self):
|
||||
|
||||
self.contents = None
|
||||
self.images = None
|
||||
self.pkgs_size = 0
|
||||
self.pn_path = {}
|
||||
self.pkg_path = {}
|
||||
self.rprov_pkg = {}
|
||||
|
||||
gtk.TreeStore.__init__ (self,
|
||||
gobject.TYPE_STRING,
|
||||
gobject.TYPE_STRING,
|
||||
gobject.TYPE_STRING,
|
||||
gobject.TYPE_STRING,
|
||||
gobject.TYPE_STRING,
|
||||
gobject.TYPE_STRING,
|
||||
gobject.TYPE_STRING,
|
||||
gobject.TYPE_STRING,
|
||||
gobject.TYPE_STRING,
|
||||
gobject.TYPE_STRING,
|
||||
gobject.TYPE_BOOLEAN,
|
||||
gobject.TYPE_BOOLEAN)
|
||||
|
||||
|
||||
"""
|
||||
Find the model path for the item_name
|
||||
Returns the path in the model or None
|
||||
"""
|
||||
def find_path_for_item(self, item_name):
|
||||
pkg = item_name
|
||||
if item_name not in self.pkg_path.keys():
|
||||
if item_name not in self.rprov_pkg.keys():
|
||||
return None
|
||||
pkg = self.rprov_pkg[item_name]
|
||||
if pkg not in self.pkg_path.keys():
|
||||
return None
|
||||
|
||||
return self.pkg_path[pkg]
|
||||
|
||||
def find_item_for_path(self, item_path):
|
||||
return self[item_path][self.COL_NAME]
|
||||
|
||||
"""
|
||||
Helper function to determine whether an item is an item specified by filter
|
||||
"""
|
||||
def tree_model_filter(self, model, it, filter):
|
||||
for key in filter.keys():
|
||||
if model.get_value(it, key) not in filter[key]:
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
"""
|
||||
Create, if required, and return a filtered gtk.TreeModelSort
|
||||
containing only the items specified by filter
|
||||
"""
|
||||
def tree_model(self, filter):
|
||||
model = self.filter_new()
|
||||
model.set_visible_func(self.tree_model_filter, filter)
|
||||
|
||||
sort = gtk.TreeModelSort(model)
|
||||
sort.set_sort_column_id(PackageListModel.COL_NAME, gtk.SORT_ASCENDING)
|
||||
sort.set_default_sort_func(None)
|
||||
return sort
|
||||
|
||||
def convert_vpath_to_path(self, view_model, view_path):
|
||||
# view_model is the model sorted
|
||||
# get the path of the model filtered
|
||||
filtered_model_path = view_model.convert_path_to_child_path(view_path)
|
||||
# get the model filtered
|
||||
filtered_model = view_model.get_model()
|
||||
# get the path of the original model
|
||||
path = filtered_model.convert_path_to_child_path(filtered_model_path)
|
||||
return path
|
||||
|
||||
def convert_path_to_vpath(self, view_model, path):
|
||||
name = self.find_item_for_path(path)
|
||||
it = view_model.get_iter_first()
|
||||
while it:
|
||||
child_it = view_model.iter_children(it)
|
||||
while child_it:
|
||||
view_name = view_model.get_value(child_it, self.COL_NAME)
|
||||
if view_name == name:
|
||||
view_path = view_model.get_path(child_it)
|
||||
return view_path
|
||||
child_it = view_model.iter_next(child_it)
|
||||
it = view_model.iter_next(it)
|
||||
return None
|
||||
|
||||
"""
|
||||
The populate() function takes as input the data from a
|
||||
bb.event.PackageInfo event and populates the package list.
|
||||
"""
|
||||
def populate(self, pkginfolist):
|
||||
self.clear()
|
||||
self.pkgs_size = 0
|
||||
self.pn_path = {}
|
||||
self.pkg_path = {}
|
||||
self.rprov_pkg = {}
|
||||
|
||||
for pkginfo in pkginfolist:
|
||||
pn = pkginfo['PN']
|
||||
pv = pkginfo['PV']
|
||||
pr = pkginfo['PR']
|
||||
if pn in self.pn_path.keys():
|
||||
pniter = self.get_iter(self.pn_path[pn])
|
||||
else:
|
||||
pniter = self.append(None)
|
||||
self.set(pniter, self.COL_NAME, pn + '-' + pv + '-' + pr,
|
||||
self.COL_INC, False)
|
||||
self.pn_path[pn] = self.get_path(pniter)
|
||||
|
||||
pkg = pkginfo['PKG']
|
||||
pkgv = pkginfo['PKGV']
|
||||
pkgr = pkginfo['PKGR']
|
||||
pkgsize = pkginfo['PKGSIZE_%s' % pkg] if 'PKGSIZE_%s' % pkg in pkginfo.keys() else "0"
|
||||
pkg_rename = pkginfo['PKG_%s' % pkg] if 'PKG_%s' % pkg in pkginfo.keys() else ""
|
||||
section = pkginfo['SECTION_%s' % pkg] if 'SECTION_%s' % pkg in pkginfo.keys() else ""
|
||||
summary = pkginfo['SUMMARY_%s' % pkg] if 'SUMMARY_%s' % pkg in pkginfo.keys() else ""
|
||||
rdep = pkginfo['RDEPENDS_%s' % pkg] if 'RDEPENDS_%s' % pkg in pkginfo.keys() else ""
|
||||
rrec = pkginfo['RRECOMMENDS_%s' % pkg] if 'RRECOMMENDS_%s' % pkg in pkginfo.keys() else ""
|
||||
rprov = pkginfo['RPROVIDES_%s' % pkg] if 'RPROVIDES_%s' % pkg in pkginfo.keys() else ""
|
||||
for i in rprov.split():
|
||||
self.rprov_pkg[i] = pkg
|
||||
|
||||
if 'ALLOW_EMPTY_%s' % pkg in pkginfo.keys():
|
||||
allow_empty = pkginfo['ALLOW_EMPTY_%s' % pkg]
|
||||
elif 'ALLOW_EMPTY' in pkginfo.keys():
|
||||
allow_empty = pkginfo['ALLOW_EMPTY']
|
||||
else:
|
||||
allow_empty = ""
|
||||
|
||||
if pkgsize == "0" and not allow_empty:
|
||||
continue
|
||||
|
||||
# pkgsize is in KB
|
||||
size = HobPage._size_to_string(HobPage._string_to_size(pkgsize + ' KB'))
|
||||
|
||||
it = self.append(pniter)
|
||||
self.pkg_path[pkg] = self.get_path(it)
|
||||
self.set(it, self.COL_NAME, pkg, self.COL_VER, pkgv,
|
||||
self.COL_REV, pkgr, self.COL_RNM, pkg_rename,
|
||||
self.COL_SEC, section, self.COL_SUM, summary,
|
||||
self.COL_RDEP, rdep + ' ' + rrec,
|
||||
self.COL_RPROV, rprov, self.COL_SIZE, size,
|
||||
self.COL_BINB, "", self.COL_INC, False)
|
||||
|
||||
"""
|
||||
Check whether the item at item_path is included or not
|
||||
"""
|
||||
def path_included(self, item_path):
|
||||
return self[item_path][self.COL_INC]
|
||||
|
||||
"""
|
||||
Update the model, send out the notification.
|
||||
"""
|
||||
def selection_change_notification(self):
|
||||
self.emit("package-selection-changed")
|
||||
|
||||
"""
|
||||
Mark a certain package as selected.
|
||||
All its dependencies are marked as selected.
|
||||
The recipe provides the package is marked as selected.
|
||||
If user explicitly selects a recipe, all its providing packages are selected
|
||||
"""
|
||||
def include_item(self, item_path, binb=""):
|
||||
if self.path_included(item_path):
|
||||
return
|
||||
|
||||
item_name = self[item_path][self.COL_NAME]
|
||||
item_rdep = self[item_path][self.COL_RDEP]
|
||||
|
||||
self[item_path][self.COL_INC] = True
|
||||
|
||||
it = self.get_iter(item_path)
|
||||
|
||||
# If user explicitly selects a recipe, all its providing packages are selected.
|
||||
if not self[item_path][self.COL_VER] and binb == "User Selected":
|
||||
child_it = self.iter_children(it)
|
||||
while child_it:
|
||||
child_path = self.get_path(child_it)
|
||||
child_included = self.path_included(child_path)
|
||||
if not child_included:
|
||||
self.include_item(child_path, binb="User Selected")
|
||||
child_it = self.iter_next(child_it)
|
||||
return
|
||||
|
||||
# The recipe provides the package is also marked as selected
|
||||
parent_it = self.iter_parent(it)
|
||||
if parent_it:
|
||||
parent_path = self.get_path(parent_it)
|
||||
self[parent_path][self.COL_INC] = True
|
||||
|
||||
item_bin = self[item_path][self.COL_BINB].split(', ')
|
||||
if binb and not binb in item_bin:
|
||||
item_bin.append(binb)
|
||||
self[item_path][self.COL_BINB] = ', '.join(item_bin).lstrip(', ')
|
||||
|
||||
if item_rdep:
|
||||
# Ensure all of the items deps are included and, where appropriate,
|
||||
# add this item to their COL_BINB
|
||||
for dep in item_rdep.split(" "):
|
||||
if dep.startswith('('):
|
||||
continue
|
||||
# If the contents model doesn't already contain dep, add it
|
||||
dep_path = self.find_path_for_item(dep)
|
||||
if not dep_path:
|
||||
continue
|
||||
dep_included = self.path_included(dep_path)
|
||||
|
||||
if dep_included and not dep in item_bin:
|
||||
# don't set the COL_BINB to this item if the target is an
|
||||
# item in our own COL_BINB
|
||||
dep_bin = self[dep_path][self.COL_BINB].split(', ')
|
||||
if not item_name in dep_bin:
|
||||
dep_bin.append(item_name)
|
||||
self[dep_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
|
||||
elif not dep_included:
|
||||
self.include_item(dep_path, binb=item_name)
|
||||
|
||||
"""
|
||||
Mark a certain package as de-selected.
|
||||
All other packages that depends on this package are marked as de-selected.
|
||||
If none of the packages provided by the recipe, the recipe should be marked as de-selected.
|
||||
If user explicitly de-select a recipe, all its providing packages are de-selected.
|
||||
"""
|
||||
def exclude_item(self, item_path):
|
||||
if not self.path_included(item_path):
|
||||
return
|
||||
|
||||
self[item_path][self.COL_INC] = False
|
||||
|
||||
item_name = self[item_path][self.COL_NAME]
|
||||
item_rdep = self[item_path][self.COL_RDEP]
|
||||
it = self.get_iter(item_path)
|
||||
|
||||
# If user explicitly de-select a recipe, all its providing packages are de-selected.
|
||||
if not self[item_path][self.COL_VER]:
|
||||
child_it = self.iter_children(it)
|
||||
while child_it:
|
||||
child_path = self.get_path(child_it)
|
||||
child_included = self[child_path][self.COL_INC]
|
||||
if child_included:
|
||||
self.exclude_item(child_path)
|
||||
child_it = self.iter_next(child_it)
|
||||
return
|
||||
|
||||
# If none of the packages provided by the recipe, the recipe should be marked as de-selected.
|
||||
parent_it = self.iter_parent(it)
|
||||
peer_iter = self.iter_children(parent_it)
|
||||
enabled = 0
|
||||
while peer_iter:
|
||||
peer_path = self.get_path(peer_iter)
|
||||
if self[peer_path][self.COL_INC]:
|
||||
enabled = 1
|
||||
break
|
||||
peer_iter = self.iter_next(peer_iter)
|
||||
if not enabled:
|
||||
parent_path = self.get_path(parent_it)
|
||||
self[parent_path][self.COL_INC] = False
|
||||
|
||||
# All packages that depends on this package are de-selected.
|
||||
if item_rdep:
|
||||
for dep in item_rdep.split(" "):
|
||||
if dep.startswith('('):
|
||||
continue
|
||||
dep_path = self.find_path_for_item(dep)
|
||||
if not dep_path:
|
||||
continue
|
||||
dep_bin = self[dep_path][self.COL_BINB].split(', ')
|
||||
if item_name in dep_bin:
|
||||
dep_bin.remove(item_name)
|
||||
self[dep_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
|
||||
|
||||
item_bin = self[item_path][self.COL_BINB].split(', ')
|
||||
if item_bin:
|
||||
for binb in item_bin:
|
||||
binb_path = self.find_path_for_item(binb)
|
||||
if not binb_path:
|
||||
continue
|
||||
self.exclude_item(binb_path)
|
||||
|
||||
"""
|
||||
Package model may be incomplete, therefore when calling the
|
||||
set_selected_packages(), some packages will not be set included.
|
||||
Return the un-set packages list.
|
||||
"""
|
||||
def set_selected_packages(self, packagelist):
|
||||
left = []
|
||||
for pn in packagelist:
|
||||
if pn in self.pkg_path.keys():
|
||||
path = self.pkg_path[pn]
|
||||
self.include_item(item_path=path,
|
||||
binb="User Selected")
|
||||
else:
|
||||
left.append(pn)
|
||||
|
||||
self.selection_change_notification()
|
||||
return left
|
||||
|
||||
def get_user_selected_packages(self):
|
||||
packagelist = []
|
||||
|
||||
it = self.get_iter_first()
|
||||
while it:
|
||||
child_it = self.iter_children(it)
|
||||
while child_it:
|
||||
if self.get_value(child_it, self.COL_INC):
|
||||
binb = self.get_value(child_it, self.COL_BINB)
|
||||
if not binb or binb == "User Selected":
|
||||
name = self.get_value(child_it, self.COL_NAME)
|
||||
packagelist.append(name)
|
||||
child_it = self.iter_next(child_it)
|
||||
it = self.iter_next(it)
|
||||
|
||||
return packagelist
|
||||
|
||||
def get_selected_packages(self):
|
||||
packagelist = []
|
||||
|
||||
it = self.get_iter_first()
|
||||
while it:
|
||||
child_it = self.iter_children(it)
|
||||
while child_it:
|
||||
if self.get_value(child_it, self.COL_INC):
|
||||
name = self.get_value(child_it, self.COL_NAME)
|
||||
packagelist.append(name)
|
||||
child_it = self.iter_next(child_it)
|
||||
it = self.iter_next(it)
|
||||
|
||||
return packagelist
|
||||
|
||||
def get_selected_packages_toolchain(self):
|
||||
packagelist = []
|
||||
|
||||
it = self.get_iter_first()
|
||||
while it:
|
||||
if self.get_value(it, self.COL_INC):
|
||||
child_it = self.iter_children(it)
|
||||
while child_it:
|
||||
name = self.get_value(child_it, self.COL_NAME)
|
||||
inc = self.get_value(child_it, self.COL_INC)
|
||||
if inc or name.endswith("-dev") or name.endswith("-dbg"):
|
||||
packagelist.append(name)
|
||||
child_it = self.iter_next(child_it)
|
||||
it = self.iter_next(it)
|
||||
|
||||
return packagelist
|
||||
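    # Illustrative note (package names are hypothetical examples): for a
    # toolchain, every recipe with any included package also contributes
    # its -dev and -dbg packages, so a selected "zlib" yields "zlib",
    # "zlib-dev" and "zlib-dbg" in the returned list.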
"""
|
||||
Return the selected package size, unit is B.
|
||||
"""
|
||||
def get_packages_size(self):
|
||||
packages_size = 0
|
||||
it = self.get_iter_first()
|
||||
while it:
|
||||
child_it = self.iter_children(it)
|
||||
while child_it:
|
||||
if self.get_value(child_it, self.COL_INC):
|
||||
str_size = self.get_value(child_it, self.COL_SIZE)
|
||||
if not str_size:
|
||||
continue
|
||||
|
||||
packages_size += HobPage._string_to_size(str_size)
|
||||
|
||||
child_it = self.iter_next(child_it)
|
||||
it = self.iter_next(it)
|
||||
return packages_size
|
||||
|
||||
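    # Illustrative only: COL_SIZE holds display strings such as "1.5 KB",
    # so the total above relies on HobPage._string_to_size() (defined
    # later in this diff) to convert back to bytes, e.g.
    #     HobPage._string_to_size("1.5 KB")  # -> 1536.0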
"""
|
||||
Empty self.contents by setting the include of each entry to None
|
||||
"""
|
||||
def reset(self):
|
||||
self.pkgs_size = 0
|
||||
it = self.get_iter_first()
|
||||
while it:
|
||||
self.set(it, self.COL_INC, False)
|
||||
child_it = self.iter_children(it)
|
||||
while child_it:
|
||||
self.set(child_it,
|
||||
self.COL_INC, False,
|
||||
self.COL_BINB, "")
|
||||
child_it = self.iter_next(child_it)
|
||||
it = self.iter_next(it)
|
||||
|
||||
self.selection_change_notification()
|
||||
|
||||
"""
|
||||
Resync the state of included items to a backup column before performing the fadeout visible effect
|
||||
"""
|
||||
def resync_fadeout_column(self, model_first_iter=None):
|
||||
it = model_first_iter
|
||||
while it:
|
||||
active = self.get_value(it, self.COL_INC)
|
||||
self.set(it, self.COL_FADE_INC, active)
|
||||
if self.iter_has_child(it):
|
||||
self.resync_fadeout_column(self.iter_children(it))
|
||||
|
||||
it = self.iter_next(it)
|
||||
|
||||
#
|
||||
# RecipeListModel
|
||||
#
|
||||
class RecipeListModel(gtk.ListStore):
    """
    This class defines a gtk.ListStore subclass which will convert the output
    of the bb.event.TargetsTreeGenerated event into a gtk.ListStore whilst also
    providing convenience functions to access gtk.TreeModel subclasses which
    provide filtered views of the data.
    """
    (COL_NAME, COL_DESC, COL_LIC, COL_GROUP, COL_DEPS, COL_BINB, COL_TYPE, COL_INC, COL_IMG, COL_INSTALL, COL_PN, COL_FADE_INC) = range(12)

    __dummy_image__ = "Create your own image"

    __gsignals__ = {
        "recipe-selection-changed" : (gobject.SIGNAL_RUN_LAST,
                                      gobject.TYPE_NONE,
                                      ()),
    }

    def __init__(self):
        gtk.ListStore.__init__ (self,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_BOOLEAN,
                                gobject.TYPE_BOOLEAN,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_BOOLEAN)

    """
    Find the model path for the item_name
    Returns the path in the model or None
    """
    def find_path_for_item(self, item_name):
        if self.non_target_name(item_name) or item_name not in self.pn_path.keys():
            return None
        else:
            return self.pn_path[item_name]

    def find_item_for_path(self, item_path):
        return self[item_path][self.COL_NAME]

    """
    Helper method to determine whether name is a target pn
    """
    def non_target_name(self, name):
        if name and ('-native' in name):
            return True
        return False

    """
    Helper function to determine whether an item matches the filter
    """
    def tree_model_filter(self, model, it, filter):
        name = model.get_value(it, self.COL_NAME)
        if self.non_target_name(name):
            return False

        for key in filter.keys():
            if model.get_value(it, key) not in filter[key]:
                return False

        return True

    def exclude_item_sort_func(self, model, iter1, iter2):
        val1 = model.get_value(iter1, RecipeListModel.COL_FADE_INC)
        val2 = model.get_value(iter2, RecipeListModel.COL_INC)
        return ((val1 == True) and (val2 == False))

    """
    Create, if required, and return a filtered gtk.TreeModelSort
    containing only the items permitted by the filter
    """
    def tree_model(self, filter, excluded_items_ahead=False):
        model = self.filter_new()
        model.set_visible_func(self.tree_model_filter, filter)

        sort = gtk.TreeModelSort(model)
        if excluded_items_ahead:
            sort.set_default_sort_func(self.exclude_item_sort_func)
        else:
            sort.set_sort_column_id(RecipeListModel.COL_NAME, gtk.SORT_ASCENDING)
            sort.set_default_sort_func(None)
        return sort
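    # Illustrative usage (taken from ImageConfigurationPage later in this
    # diff): a filter maps a column id to its allowed values, e.g.
    #     filter = {RecipeListModel.COL_TYPE: ['image']}
    #     image_model = recipe_model.tree_model(filter)
    # hides every row whose COL_TYPE is not 'image'.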
    def convert_vpath_to_path(self, view_model, view_path):
        filtered_model_path = view_model.convert_path_to_child_path(view_path)
        filtered_model = view_model.get_model()

        # get the path of the original model
        path = filtered_model.convert_path_to_child_path(filtered_model_path)
        return path

    def convert_path_to_vpath(self, view_model, path):
        it = view_model.get_iter_first()
        while it:
            name = self.find_item_for_path(path)
            view_name = view_model.get_value(it, RecipeListModel.COL_NAME)
            if view_name == name:
                view_path = view_model.get_path(it)
                return view_path
            it = view_model.iter_next(it)
        return None

    """
    The populate() function takes as input the data from a
    bb.event.TargetsTreeGenerated event and populates the RecipeList.
    """
    def populate(self, event_model):
        # First clear the model, in case repopulating
        self.clear()

        # dummy image for prompt
        self.set(self.append(), self.COL_NAME, self.__dummy_image__,
                 self.COL_DESC, "",
                 self.COL_LIC, "", self.COL_GROUP, "",
                 self.COL_DEPS, "", self.COL_BINB, "",
                 self.COL_TYPE, "image", self.COL_INC, False,
                 self.COL_IMG, False, self.COL_INSTALL, "", self.COL_PN, self.__dummy_image__)

        for item in event_model["pn"]:
            name = item
            desc = event_model["pn"][item]["description"]
            lic = event_model["pn"][item]["license"]
            group = event_model["pn"][item]["section"]
            inherits = event_model["pn"][item]["inherits"]
            install = []

            depends = event_model["depends"].get(item, []) + event_model["rdepends-pn"].get(item, [])

            if ('task-' in name):
                atype = 'task'
            elif ('image.bbclass' in " ".join(inherits)):
                if name != "hob-image":
                    atype = 'image'
                    install = event_model["rdepends-pkg"].get(item, []) + event_model["rrecs-pkg"].get(item, [])
                else:
                    # type the internal hob-image as a plain recipe so atype
                    # is never left unbound and it stays out of the image combo
                    atype = 'recipe'
            elif ('meta-' in name):
                atype = 'toolchain'
            elif (name == 'dummy-image' or name == 'dummy-toolchain'):
                atype = 'dummy'
            else:
                atype = 'recipe'

            self.set(self.append(), self.COL_NAME, item, self.COL_DESC, desc,
                     self.COL_LIC, lic, self.COL_GROUP, group,
                     self.COL_DEPS, " ".join(depends), self.COL_BINB, "",
                     self.COL_TYPE, atype, self.COL_INC, False,
                     self.COL_IMG, False, self.COL_INSTALL, " ".join(install), self.COL_PN, item)

        self.pn_path = {}
        it = self.get_iter_first()
        while it:
            pn = self.get_value(it, self.COL_NAME)
            path = self.get_path(it)
            self.pn_path[pn] = path
            it = self.iter_next(it)
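    # Illustrative only: hypothetical shape of event_model, inferred from
    # the key lookups in populate() above (not a verbatim BitBake dump):
    #     {"pn": {"busybox": {"description": "...", "license": "GPLv2",
    #                         "section": "base", "inherits": []}},
    #      "depends": {"busybox": ["virtual/libc"]},
    #      "rdepends-pn": {}, "rdepends-pkg": {}, "rrecs-pkg": {}}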
"""
|
||||
Update the model, send out the notification.
|
||||
"""
|
||||
def selection_change_notification(self):
|
||||
self.emit("recipe-selection-changed")
|
||||
|
||||
def path_included(self, item_path):
|
||||
return self[item_path][self.COL_INC]
|
||||
|
||||
"""
|
||||
Append a certain image into the combobox
|
||||
"""
|
||||
def image_list_append(self, name, deps, install):
|
||||
# check whether a certain image is there
|
||||
if not name or self.find_path_for_item(name):
|
||||
return
|
||||
it = self.append()
|
||||
self.set(it, self.COL_NAME, name, self.COL_DESC, "",
|
||||
self.COL_LIC, "", self.COL_GROUP, "",
|
||||
self.COL_DEPS, deps, self.COL_BINB, "",
|
||||
self.COL_TYPE, "image", self.COL_INC, False,
|
||||
self.COL_IMG, False, self.COL_INSTALL, install,
|
||||
self.COL_PN, name)
|
||||
self.pn_path[name] = self.get_path(it)
|
||||
|
||||
"""
|
||||
Add this item, and any of its dependencies, to the image contents
|
||||
"""
|
||||
def include_item(self, item_path, binb="", image_contents=False):
|
||||
if self.path_included(item_path):
|
||||
return
|
||||
|
||||
item_name = self[item_path][self.COL_NAME]
|
||||
item_deps = self[item_path][self.COL_DEPS]
|
||||
|
||||
self[item_path][self.COL_INC] = True
|
||||
|
||||
item_bin = self[item_path][self.COL_BINB].split(', ')
|
||||
if binb and not binb in item_bin:
|
||||
item_bin.append(binb)
|
||||
self[item_path][self.COL_BINB] = ', '.join(item_bin).lstrip(', ')
|
||||
|
||||
# We want to do some magic with things which are brought in by the
|
||||
# base image so tag them as so
|
||||
if image_contents:
|
||||
self[item_path][self.COL_IMG] = True
|
||||
|
||||
if item_deps:
|
||||
# Ensure all of the items deps are included and, where appropriate,
|
||||
# add this item to their COL_BINB
|
||||
for dep in item_deps.split(" "):
|
||||
# If the contents model doesn't already contain dep, add it
|
||||
dep_path = self.find_path_for_item(dep)
|
||||
if not dep_path:
|
||||
continue
|
||||
dep_included = self.path_included(dep_path)
|
||||
|
||||
if dep_included and not dep in item_bin:
|
||||
# don't set the COL_BINB to this item if the target is an
|
||||
# item in our own COL_BINB
|
||||
dep_bin = self[dep_path][self.COL_BINB].split(', ')
|
||||
if not item_name in dep_bin:
|
||||
dep_bin.append(item_name)
|
||||
self[dep_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
|
||||
elif not dep_included:
|
||||
self.include_item(dep_path, binb=item_name, image_contents=image_contents)
|
||||
|
||||
def exclude_item(self, item_path):
|
||||
if not self.path_included(item_path):
|
||||
return
|
||||
|
||||
self[item_path][self.COL_INC] = False
|
||||
|
||||
item_name = self[item_path][self.COL_NAME]
|
||||
item_deps = self[item_path][self.COL_DEPS]
|
||||
if item_deps:
|
||||
for dep in item_deps.split(" "):
|
||||
dep_path = self.find_path_for_item(dep)
|
||||
if not dep_path:
|
||||
continue
|
||||
dep_bin = self[dep_path][self.COL_BINB].split(', ')
|
||||
if item_name in dep_bin:
|
||||
dep_bin.remove(item_name)
|
||||
self[dep_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
|
||||
|
||||
item_bin = self[item_path][self.COL_BINB].split(', ')
|
||||
if item_bin:
|
||||
for binb in item_bin:
|
||||
binb_path = self.find_path_for_item(binb)
|
||||
if not binb_path:
|
||||
continue
|
||||
self.exclude_item(binb_path)
|
||||
|
||||
def reset(self):
|
||||
it = self.get_iter_first()
|
||||
while it:
|
||||
self.set(it,
|
||||
self.COL_INC, False,
|
||||
self.COL_BINB, "",
|
||||
self.COL_IMG, False)
|
||||
it = self.iter_next(it)
|
||||
|
||||
self.selection_change_notification()
|
||||
|
||||
"""
|
||||
Returns two lists. One of user selected recipes and the other containing
|
||||
all selected recipes
|
||||
"""
|
||||
def get_selected_recipes(self):
|
||||
allrecipes = []
|
||||
userrecipes = []
|
||||
|
||||
it = self.get_iter_first()
|
||||
while it:
|
||||
if self.get_value(it, self.COL_INC):
|
||||
name = self.get_value(it, self.COL_PN)
|
||||
type = self.get_value(it, self.COL_TYPE)
|
||||
if type != "image":
|
||||
allrecipes.append(name)
|
||||
sel = "User Selected" in self.get_value(it, self.COL_BINB)
|
||||
if sel:
|
||||
userrecipes.append(name)
|
||||
it = self.iter_next(it)
|
||||
|
||||
return list(set(userrecipes)), list(set(allrecipes))
|
||||
|
||||
def set_selected_recipes(self, recipelist):
|
||||
for pn in recipelist:
|
||||
if pn in self.pn_path.keys():
|
||||
path = self.pn_path[pn]
|
||||
self.include_item(item_path=path,
|
||||
binb="User Selected")
|
||||
self.selection_change_notification()
|
||||
|
||||
def get_selected_image(self):
|
||||
it = self.get_iter_first()
|
||||
while it:
|
||||
if self.get_value(it, self.COL_INC):
|
||||
name = self.get_value(it, self.COL_PN)
|
||||
type = self.get_value(it, self.COL_TYPE)
|
||||
if type == "image":
|
||||
sel = "User Selected" in self.get_value(it, self.COL_BINB)
|
||||
if sel:
|
||||
return name
|
||||
it = self.iter_next(it)
|
||||
return None
|
||||
|
||||
def set_selected_image(self, img):
|
||||
if not img:
|
||||
return
|
||||
self.reset()
|
||||
path = self.find_path_for_item(img)
|
||||
self.include_item(item_path=path,
|
||||
binb="User Selected",
|
||||
image_contents=True)
|
||||
self.selection_change_notification()
|
||||
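    # Illustrative only: typical selection flow driven by the UI pages
    # later in this diff ("core-image-minimal" is a hypothetical name):
    #     recipe_model.set_selected_image("core-image-minimal")
    #     userrecipes, allrecipes = recipe_model.get_selected_recipes()
    #     base = recipe_model.get_selected_image()  # "core-image-minimal"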
@@ -1,124 +0,0 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import gtk
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import hwc

#
# HobPage: the super class for all Hob-related pages
#
class HobPage (gtk.VBox):

    def __init__(self, builder, title = None):
        super(HobPage, self).__init__(False, 0)
        self.builder = builder
        self.builder_width, self.builder_height = self.builder.size_request()

        if not title:
            self.title = "Hob -- Image Creator"
        else:
            self.title = title

        self.box_group_area = gtk.VBox(False, 12)
        self.box_group_area.set_size_request(self.builder_width - 73 - 73, self.builder_height - 88 - 15 - 15)
        self.group_align = gtk.Alignment(xalign = 0, yalign=0.5, xscale=1, yscale=1)
        self.group_align.set_padding(15, 15, 73, 73)
        self.group_align.add(self.box_group_area)
        self.box_group_area.set_homogeneous(False)

    def add_onto_top_bar(self, widget = None, padding = 0):
        # the top bar occupies 1/7 of the page height
        # set up an event box
        eventbox = gtk.EventBox()
        style = eventbox.get_style().copy()
        style.bg[gtk.STATE_NORMAL] = eventbox.get_colormap().alloc_color(HobColors.LIGHT_GRAY, False, False)
        eventbox.set_style(style)
        eventbox.set_size_request(-1, 88)

        hbox = gtk.HBox()

        label = gtk.Label()
        label.set_markup("<span size='x-large'>%s</span>" % self.title)
        hbox.pack_start(label, expand=False, fill=False, padding=20)

        if widget:
            # add the widget to the event box
            hbox.pack_end(widget, expand=False, fill=False, padding=padding)
        eventbox.add(hbox)

        return eventbox

    def span_tag(self, size="medium", weight="normal", foreground="#1c1c1c"):
        span_tag = "weight='%s' foreground='%s' size='%s'" % (weight, foreground, size)
        return span_tag

    def append_toolbar_button(self, toolbar, buttonname, icon_disp, icon_hover, tip, cb):
        # Create a button and append it to the toolbar according to the button name
        icon = gtk.Image()
        pix_buffer = gtk.gdk.pixbuf_new_from_file(icon_disp)
        icon.set_from_pixbuf(pix_buffer)
        button = toolbar.append_item(buttonname, tip, None, icon, cb)
        return button

    @staticmethod
    def _size_to_string(size):
        try:
            if not size:
                size_str = "0 B"
            else:
                if len(str(int(size))) > 6:
                    size_str = '%.1f' % (size*1.0/(1024*1024)) + ' MB'
                elif len(str(int(size))) > 3:
                    size_str = '%.1f' % (size*1.0/1024) + ' KB'
                else:
                    size_str = str(size) + ' B'
        except:
            size_str = "0 B"
        return size_str

    @staticmethod
    def _string_to_size(str_size):
        try:
            if not str_size:
                size = 0
            else:
                unit = str_size.split()
                if len(unit) > 1:
                    if unit[1] == 'MB':
                        size = float(unit[0])*1024*1024
                    elif unit[1] == 'KB':
                        size = float(unit[0])*1024
                    elif unit[1] == 'B':
                        size = float(unit[0])
                    else:
                        size = 0
                else:
                    size = float(unit[0])
        except:
            size = 0
        return size
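    # Illustrative only: the two helpers are intended to round-trip, e.g.
    #     HobPage._size_to_string(1536)      # -> '1.5 KB'
    #     HobPage._string_to_size('1.5 KB')  # -> 1536.0
    # Sizes of seven or more digits render as MB; there is no GB tier.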
File diff suppressed because it is too large
@@ -1,409 +0,0 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import gtk
import glib
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import hic, HobImageButton, HobInfoButton, HobAltButton, HobButton
from bb.ui.crumbs.hoblistmodel import RecipeListModel
from bb.ui.crumbs.hobpages import HobPage

#
# ImageConfigurationPage
#
class ImageConfigurationPage (HobPage):

    def __init__(self, builder):
        super(ImageConfigurationPage, self).__init__(builder, "Image configuration")

        self.image_combo_id = None
        # machine_combo_changed_by_manual records whether the machine was
        # changed by code or by hand. If changed by hand, all of the user's
        # recipe and package selections are cleared.
        self.machine_combo_changed_by_manual = True
        self.create_visual_elements()

    def create_visual_elements(self):
        # create visual elements
        self.toolbar = gtk.Toolbar()
        self.toolbar.set_orientation(gtk.ORIENTATION_HORIZONTAL)
        self.toolbar.set_style(gtk.TOOLBAR_BOTH)

        template_button = self.append_toolbar_button(self.toolbar,
                                                     "Templates",
                                                     hic.ICON_TEMPLATES_DISPLAY_FILE,
                                                     hic.ICON_TEMPLATES_HOVER_FILE,
                                                     "Load a previously saved template",
                                                     self.template_button_clicked_cb)
        my_images_button = self.append_toolbar_button(self.toolbar,
                                                      "Images",
                                                      hic.ICON_IMAGES_DISPLAY_FILE,
                                                      hic.ICON_IMAGES_HOVER_FILE,
                                                      "Open previously built images",
                                                      self.my_images_button_clicked_cb)
        settings_button = self.append_toolbar_button(self.toolbar,
                                                     "Settings",
                                                     hic.ICON_SETTINGS_DISPLAY_FILE,
                                                     hic.ICON_SETTINGS_HOVER_FILE,
                                                     "View additional build settings",
                                                     self.settings_button_clicked_cb)

        self.config_top_button = self.add_onto_top_bar(self.toolbar)

        self.gtable = gtk.Table(40, 40, True)
        self.create_config_machine()
        self.create_config_baseimg()
        self.config_build_button = self.create_config_build_button()

    def _remove_all_widget(self):
        children = self.gtable.get_children() or []
        for child in children:
            self.gtable.remove(child)
        children = self.box_group_area.get_children() or []
        for child in children:
            self.box_group_area.remove(child)
        children = self.get_children() or []
        for child in children:
            self.remove(child)

    def _pack_components(self, pack_config_build_button = False):
        self._remove_all_widget()
        self.pack_start(self.config_top_button, expand=False, fill=False)
        self.pack_start(self.group_align, expand=True, fill=True)

        self.box_group_area.pack_start(self.gtable, expand=True, fill=True)
        if pack_config_build_button:
            self.box_group_area.pack_end(self.config_build_button, expand=False, fill=False)
        else:
            box = gtk.HBox(False, 6)
            box.show()
            subbox = gtk.HBox(False, 0)
            subbox.set_size_request(205, 49)
            subbox.show()
            box.add(subbox)
            self.box_group_area.pack_end(box, False, False)

    def show_machine(self):
        self.progress_bar.reset()
        self._pack_components(pack_config_build_button = False)
        self.set_config_machine_layout(show_progress_bar = False)
        self.show_all()

    def update_progress_bar(self, title, fraction, status=None):
        self.progress_bar.update(fraction)
        self.progress_bar.set_title(title)
        self.progress_bar.set_rcstyle(status)

    def show_info_populating(self):
        self._pack_components(pack_config_build_button = False)
        self.set_config_machine_layout(show_progress_bar = True)
        self.show_all()

    def show_info_populated(self):
        self.progress_bar.reset()
        self._pack_components(pack_config_build_button = False)
        self.set_config_machine_layout(show_progress_bar = False)
        self.set_config_baseimg_layout()
        self.show_all()

    def show_baseimg_selected(self):
        self.progress_bar.reset()
        self._pack_components(pack_config_build_button = True)
        self.set_config_machine_layout(show_progress_bar = False)
        self.set_config_baseimg_layout()
        self.set_rcppkg_layout()
        self.show_all()

    def create_config_machine(self):
        self.machine_title = gtk.Label()
        self.machine_title.set_alignment(0.0, 0.5)
        mark = "<span %s>Select a machine</span>" % self.span_tag('x-large', 'bold')
        self.machine_title.set_markup(mark)

        self.machine_title_desc = gtk.Label()
        self.machine_title_desc.set_alignment(0.0, 0.5)
        mark = ("<span %s>Your selection is the profile of the target machine for which you"
                " are building the image.\n</span>") % (self.span_tag('medium'))
        self.machine_title_desc.set_markup(mark)

        self.machine_combo = gtk.combo_box_new_text()
        self.machine_combo.set_wrap_width(1)
        self.machine_combo.connect("changed", self.machine_combo_changed_cb)

        icon_file = hic.ICON_LAYERS_DISPLAY_FILE
        hover_file = hic.ICON_LAYERS_HOVER_FILE
        self.layer_button = HobImageButton("Layers", "Add support for machines, software, etc.",
                                           icon_file, hover_file)
        self.layer_button.connect("clicked", self.layer_button_clicked_cb)

        markup = "Layers are a powerful mechanism to extend the Yocto Project "
        markup += "with your own functionality.\n"
        markup += "For more on layers, check the <a href=\""
        markup += "http://www.yoctoproject.org/docs/current/dev-manual/"
        markup += "dev-manual.html#understanding-and-using-layers\">reference manual</a>."
        self.layer_info_icon = HobInfoButton(markup, self.get_parent())

        self.progress_box = gtk.HBox(False, 6)
        self.progress_bar = HobProgressBar()
        self.progress_box.pack_start(self.progress_bar, expand=True, fill=True)
        self.stop_button = HobAltButton("Stop")
        self.stop_button.connect("clicked", self.stop_button_clicked_cb)
        self.progress_box.pack_end(self.stop_button, expand=False, fill=False)

        self.machine_separator = gtk.HSeparator()

    def set_config_machine_layout(self, show_progress_bar = False):
        self.gtable.attach(self.machine_title, 0, 40, 0, 4)
        self.gtable.attach(self.machine_title_desc, 0, 40, 4, 6)
        self.gtable.attach(self.machine_combo, 0, 12, 7, 10)
        self.gtable.attach(self.layer_button, 14, 36, 7, 12)
        self.gtable.attach(self.layer_info_icon, 36, 40, 7, 11)
        if show_progress_bar:
            self.gtable.attach(self.progress_box, 0, 40, 15, 19)
        self.gtable.attach(self.machine_separator, 0, 40, 13, 14)

    def create_config_baseimg(self):
        self.image_title = gtk.Label()
        self.image_title.set_alignment(0, 1.0)
        mark = "<span %s>Select a base image</span>" % self.span_tag('x-large', 'bold')
        self.image_title.set_markup(mark)

        self.image_title_desc = gtk.Label()
        self.image_title_desc.set_alignment(0, 0.5)
        mark = ("<span %s>Base images are a starting point for the type of image you want. "
                "You can build them as \n"
                "they are or customize them to your specific needs.\n</span>") % self.span_tag('medium')
        self.image_title_desc.set_markup(mark)

        self.image_combo = gtk.combo_box_new_text()
        self.image_combo.set_wrap_width(1)
        self.image_combo_id = self.image_combo.connect("changed", self.image_combo_changed_cb)

        self.image_desc = gtk.Label()
        self.image_desc.set_alignment(0.0, 0.5)
        self.image_desc.set_line_wrap(True)

        # button to view recipes
        icon_file = hic.ICON_RCIPE_DISPLAY_FILE
        hover_file = hic.ICON_RCIPE_HOVER_FILE
        self.view_recipes_button = HobImageButton("View recipes",
                                                  "Add/remove recipes and tasks",
                                                  icon_file, hover_file)
        self.view_recipes_button.connect("clicked", self.view_recipes_button_clicked_cb)

        # button to view packages
        icon_file = hic.ICON_PACKAGES_DISPLAY_FILE
        hover_file = hic.ICON_PACKAGES_HOVER_FILE
        self.view_packages_button = HobImageButton("View packages",
                                                   "Add/remove previously built packages",
                                                   icon_file, hover_file)
        self.view_packages_button.connect("clicked", self.view_packages_button_clicked_cb)

        self.image_separator = gtk.HSeparator()

    def set_config_baseimg_layout(self):
        self.gtable.attach(self.image_title, 0, 40, 15, 17)
        self.gtable.attach(self.image_title_desc, 0, 40, 18, 22)
        self.gtable.attach(self.image_combo, 0, 12, 23, 26)
        self.gtable.attach(self.image_desc, 13, 38, 23, 28)
        self.gtable.attach(self.image_separator, 0, 40, 35, 36)

    def set_rcppkg_layout(self):
        self.gtable.attach(self.view_recipes_button, 0, 20, 28, 33)
        self.gtable.attach(self.view_packages_button, 20, 40, 28, 33)

    def create_config_build_button(self):
        # Create the "Build packages" and "Build image" buttons at the bottom
        button_box = gtk.HBox(False, 6)

        # create button "Build image"
        just_bake_button = HobButton("Build image")
        just_bake_button.set_size_request(205, 49)
        just_bake_button.set_tooltip_text("Build target image")
        just_bake_button.connect("clicked", self.just_bake_button_clicked_cb)
        button_box.pack_end(just_bake_button, expand=False, fill=False)

        label = gtk.Label(" or ")
        button_box.pack_end(label, expand=False, fill=False)

        # create button "Build packages"
        build_packages_button = HobAltButton("Build packages")
        build_packages_button.connect("clicked", self.build_packages_button_clicked_cb)
        build_packages_button.set_tooltip_text("Build recipes into packages")
        button_box.pack_end(build_packages_button, expand=False, fill=False)

        return button_box

    def stop_button_clicked_cb(self, button):
        self.builder.cancel_parse_sync()

    def machine_combo_changed_cb(self, machine_combo):
        combo_item = machine_combo.get_active_text()
        if not combo_item:
            return

        self.builder.configuration.curr_mach = combo_item
        if self.machine_combo_changed_by_manual:
            self.builder.configuration.selected_image = None
            self.builder.configuration.selected_recipes = []
            self.builder.configuration.selected_packages = []
        # reset machine_combo_changed_by_manual
        self.machine_combo_changed_by_manual = True

        # Reparse the recipes
        self.builder.populate_recipe_package_info_async()
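    # Illustrative only: switch_machine_combo() below clears
    # machine_combo_changed_by_manual before poking the combo, so a
    # programmatic machine change keeps the user's recipe and package
    # selections, while a change made by hand resets them (see the
    # callback above).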
    def update_machine_combo(self):
        all_machines = self.builder.parameters.all_machines

        model = self.machine_combo.get_model()
        model.clear()
        for machine in all_machines:
            self.machine_combo.append_text(machine)
        self.machine_combo.set_active(-1)

    def switch_machine_combo(self):
        self.machine_combo_changed_by_manual = False
        model = self.machine_combo.get_model()
        active = 0
        while active < len(model):
            if model[active][0] == self.builder.configuration.curr_mach:
                self.machine_combo.set_active(active)
                return
            active += 1
        self.machine_combo.set_active(-1)

    def update_image_desc(self, selected_image):
        desc = ""
        if selected_image and selected_image in self.builder.recipe_model.pn_path.keys():
            image_path = self.builder.recipe_model.pn_path[selected_image]
            image_iter = self.builder.recipe_model.get_iter(image_path)
            desc = self.builder.recipe_model.get_value(image_iter, self.builder.recipe_model.COL_DESC)

        mark = ("<span %s>%s</span>\n") % (self.span_tag('small'), desc)
        self.image_desc.set_markup(mark)

    def image_combo_changed_idle_cb(self, selected_image, selected_recipes, selected_packages):
        self.builder.update_recipe_model(selected_image, selected_recipes)
        self.builder.update_package_model(selected_packages)
        self.builder.window_sensitive(True)

    def image_combo_changed_cb(self, combo):
        self.builder.window_sensitive(False)
        selected_image = self.image_combo.get_active_text()
        if not selected_image:
            return

        self.builder.customized = False

        selected_recipes = []

        image_path = self.builder.recipe_model.pn_path[selected_image]
        image_iter = self.builder.recipe_model.get_iter(image_path)
        selected_packages = self.builder.recipe_model.get_value(image_iter, self.builder.recipe_model.COL_INSTALL).split()
        self.update_image_desc(selected_image)

        self.builder.recipe_model.reset()
        self.builder.package_model.reset()

        self.show_baseimg_selected()

        glib.idle_add(self.image_combo_changed_idle_cb, selected_image, selected_recipes, selected_packages)

    def _image_combo_connect_signal(self):
        if not self.image_combo_id:
            self.image_combo_id = self.image_combo.connect("changed", self.image_combo_changed_cb)

    def _image_combo_disconnect_signal(self):
        if self.image_combo_id:
            self.image_combo.disconnect(self.image_combo_id)
            self.image_combo_id = None

    def update_image_combo(self, recipe_model, selected_image):
        # Update the image combo according to the images in the recipe_model
        # populate image combo
        filter = {RecipeListModel.COL_TYPE : ['image']}
        image_model = recipe_model.tree_model(filter)
        active = -1
        cnt = 0

        it = image_model.get_iter_first()
        self._image_combo_disconnect_signal()
        model = self.image_combo.get_model()
        model.clear()
        # append and set active
        while it:
            path = image_model.get_path(it)
            it = image_model.iter_next(it)
            image_name = image_model[path][recipe_model.COL_NAME]
            if image_name == self.builder.recipe_model.__dummy_image__:
                continue
            self.image_combo.append_text(image_name)
            if image_name == selected_image:
                active = cnt
            cnt = cnt + 1
        self.image_combo.append_text(self.builder.recipe_model.__dummy_image__)
        if selected_image == self.builder.recipe_model.__dummy_image__:
            active = cnt

        self.image_combo.set_active(-1)
        self.image_combo.set_active(active)

        if active != -1:
            self.show_baseimg_selected()

        self._image_combo_connect_signal()

    def layer_button_clicked_cb(self, button):
        # Create a layer selection dialog
        self.builder.show_layer_selection_dialog()

    def view_recipes_button_clicked_cb(self, button):
        self.builder.show_recipes()

    def view_packages_button_clicked_cb(self, button):
        self.builder.show_packages()

    def just_bake_button_clicked_cb(self, button):
        self.builder.just_bake()

    def build_packages_button_clicked_cb(self, button):
        self.builder.build_packages()

    def template_button_clicked_cb(self, button):
        response, path = self.builder.show_load_template_dialog()
        if not response:
            return
        if path:
            self.builder.load_template(path)

    def my_images_button_clicked_cb(self, button):
        self.builder.show_load_my_images_dialog()

    def settings_button_clicked_cb(self, button):
        # Create an advanced settings dialog
        response, settings_changed = self.builder.show_adv_settings_dialog()
        if not response:
            return
        if settings_changed:
            self.builder.reparse_post_adv_settings()
@@ -1,459 +0,0 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import os
import gobject
import gtk
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import hic, HobViewTable, HobAltButton, HobButton
from bb.ui.crumbs.hobpages import HobPage

#
# ImageDetailsPage
#
class ImageDetailsPage (HobPage):

    __columns__ = [{
        'col_name' : 'Image name',
        'col_id'   : 0,
        'col_style': 'text',
        'col_min'  : 500,
        'col_max'  : 500
    }, {
        'col_name' : 'Image size',
        'col_id'   : 1,
        'col_style': 'text',
        'col_min'  : 100,
        'col_max'  : 100
    }, {
        'col_name' : 'Select',
        'col_id'   : 2,
        'col_style': 'radio toggle',
        'col_min'  : 100,
        'col_max'  : 100
    }]

    class DetailBox (gtk.EventBox):
        def __init__(self, widget = None, varlist = None, vallist = None, icon = None, button = None, color = HobColors.LIGHT_GRAY):
            gtk.EventBox.__init__(self)

            # set color
            style = self.get_style().copy()
            style.bg[gtk.STATE_NORMAL] = self.get_colormap().alloc_color(color, False, False)
            self.set_style(style)

            self.hbox = gtk.HBox()
            self.hbox.set_border_width(15)
            self.add(self.hbox)

            if widget:
                row = 1
            elif varlist and vallist:
                # pack the icon and the text on the left
                row = len(varlist)
            self.table = gtk.Table(row, 20, True)
            self.table.set_size_request(100, -1)
            self.hbox.pack_start(self.table, expand=True, fill=True, padding=15)

            colid = 0
            self.line_widgets = {}
            if icon:
                self.table.attach(icon, colid, colid + 2, 0, 1)
                colid = colid + 2
            if widget:
                self.table.attach(widget, colid, 20, 0, 1)
            elif varlist and vallist:
                for line in range(0, row):
                    self.line_widgets[varlist[line]] = self.text2label(varlist[line], vallist[line])
                    self.table.attach(self.line_widgets[varlist[line]], colid, 20, line, line + 1)

            # pack the button on the right
            if button:
                self.hbox.pack_end(button, expand=False, fill=False)

        def update_line_widgets(self, variable, value):
            if len(self.line_widgets) == 0:
                return
            if not isinstance(self.line_widgets[variable], gtk.Label):
                return
            self.line_widgets[variable].set_markup(self.format_line(variable, value))

        def format_line(self, variable, value):
            markup = "<span weight=\'bold\'>%s</span>" % variable
            markup += "<span weight=\'normal\' foreground=\'#1c1c1c\' font_desc=\'14px\'>%s</span>" % value
            return markup

        def text2label(self, variable, value):
            # append the name:value pair to the left box,
            # e.g. "Name: hob-core-minimal-variant-2011-12-15-beagleboard"
            label = gtk.Label()
            label.set_alignment(0.0, 0.5)
            label.set_markup(self.format_line(variable, value))
            return label

    def __init__(self, builder):
        super(ImageDetailsPage, self).__init__(builder, "Image details")

        self.image_store = gtk.ListStore(gobject.TYPE_STRING, gobject.TYPE_STRING, gobject.TYPE_BOOLEAN)
        self.button_ids = {}
        self.details_bottom_buttons = gtk.HBox(False, 6)
        self.create_visual_elements()

    def create_visual_elements(self):
        # create the toolbar
        self.toolbar = gtk.Toolbar()
        self.toolbar.set_orientation(gtk.ORIENTATION_HORIZONTAL)
        self.toolbar.set_style(gtk.TOOLBAR_BOTH)

        template_button = self.append_toolbar_button(self.toolbar,
                                                     "Templates",
                                                     hic.ICON_TEMPLATES_DISPLAY_FILE,
                                                     hic.ICON_TEMPLATES_HOVER_FILE,
                                                     "Load a previously saved template",
                                                     self.template_button_clicked_cb)
        my_images_button = self.append_toolbar_button(self.toolbar,
                                                      "Images",
                                                      hic.ICON_IMAGES_DISPLAY_FILE,
                                                      hic.ICON_IMAGES_HOVER_FILE,
                                                      "Open previously built images",
                                                      self.my_images_button_clicked_cb)
        settings_button = self.append_toolbar_button(self.toolbar,
                                                     "Settings",
                                                     hic.ICON_SETTINGS_DISPLAY_FILE,
                                                     hic.ICON_SETTINGS_HOVER_FILE,
                                                     "View additional build settings",
                                                     self.settings_button_clicked_cb)

        self.details_top_buttons = self.add_onto_top_bar(self.toolbar)

    def _remove_all_widget(self):
        children = self.get_children() or []
        for child in children:
            self.remove(child)
        children = self.box_group_area.get_children() or []
        for child in children:
            self.box_group_area.remove(child)
        children = self.details_bottom_buttons.get_children() or []
        for child in children:
            self.details_bottom_buttons.remove(child)

    def show_page(self, step):
        build_succeeded = (step == self.builder.IMAGE_GENERATED)
        image_addr = self.builder.parameters.image_addr
        image_names = self.builder.parameters.image_names
        if build_succeeded:
            machine = self.builder.configuration.curr_mach
            base_image = self.builder.recipe_model.get_selected_image()
            layers = self.builder.configuration.layers
            pkg_num = "%s" % len(self.builder.package_model.get_selected_packages())
        else:
            pkg_num = "N/A"

        # remove the old button connections and widgets
        for button_id, button in self.button_ids.items():
            button.disconnect(button_id)
        self._remove_all_widget()
        # repack
        self.pack_start(self.details_top_buttons, expand=False, fill=False)
        self.pack_start(self.group_align, expand=True, fill=True)

        self.build_result = None
        if build_succeeded:
            # building is the previous step
            icon = gtk.Image()
            pixmap_path = hic.ICON_INDI_CONFIRM_FILE
            color = HobColors.RUNNING
            pix_buffer = gtk.gdk.pixbuf_new_from_file(pixmap_path)
            icon.set_from_pixbuf(pix_buffer)
            varlist = [""]
            vallist = ["Your image is ready"]
            self.build_result = self.DetailBox(varlist=varlist, vallist=vallist, icon=icon, color=color)
            self.box_group_area.pack_start(self.build_result, expand=False, fill=False)

        # decide the bottom button list first because it is used in
        # create_bottom_buttons() inside the loop below
        if build_succeeded:
            self.buttonlist = ["Build new image", "Save as template", "Run image", "Deploy image"]
        else: # reached from "My images"
            self.buttonlist = ["Build new image", "Run image", "Deploy image"]

        # Name
        self.image_store.clear()
        default_toggled = False
        default_image_size = 0
        i = 0
        for image_name in image_names:
            image_size = HobPage._size_to_string(os.stat(os.path.join(image_addr, image_name)).st_size)
            if not default_toggled:
                default_toggled = (self.test_type_runnable(image_name) and self.test_mach_runnable(image_name)) \
                    or self.test_deployable(image_name)
                if i == (len(image_names) - 1):
                    default_toggled = True
                self.image_store.set(self.image_store.append(), 0, image_name, 1, image_size, 2, default_toggled)
                if default_toggled:
                    default_image_size = image_size
                    self.create_bottom_buttons(self.buttonlist, image_name)
            else:
                self.image_store.set(self.image_store.append(), 0, image_name, 1, image_size, 2, False)
            i = i + 1
        image_table = HobViewTable(self.__columns__)
        image_table.set_model(self.image_store)
        image_table.connect("toggled", self.toggled_cb)
        view_files_button = HobAltButton("View files")
        view_files_button.connect("clicked", self.view_files_clicked_cb, image_addr)
        view_files_button.set_tooltip_text("Open the directory containing the image files")
        self.image_detail = self.DetailBox(widget=image_table, button=view_files_button)
        self.box_group_area.pack_start(self.image_detail, expand=True, fill=True)

        # Machine, Base image and Layers
        layer_num_limit = 15
        varlist = ["Machine: ", "Base image: ", "Layers: "]
        vallist = []
        self.setting_detail = None
        if build_succeeded:
            vallist.append(machine)
            vallist.append(base_image)
            i = 0
            for layer in layers:
                varlist.append(" - ")
                if i > layer_num_limit:
                    break
                i += 1
            vallist.append("")
            i = 0
            for layer in layers:
                if i > layer_num_limit:
                    break
                elif i == layer_num_limit:
                    vallist.append("and more...")
                else:
                    vallist.append(layer)
                i += 1

            edit_config_button = HobAltButton("Edit configuration")
            edit_config_button.set_tooltip_text("Edit machine, base image and recipes")
            edit_config_button.connect("clicked", self.edit_config_button_clicked_cb)
            self.setting_detail = self.DetailBox(varlist=varlist, vallist=vallist, button=edit_config_button)
            self.box_group_area.pack_start(self.setting_detail, expand=False, fill=False)

        # Packages included, and Total image size
        varlist = ["Packages included: ", "Total image size: "]
        vallist = []
        vallist.append(pkg_num)
        vallist.append(default_image_size)
        if build_succeeded:
            edit_packages_button = HobAltButton("Edit packages")
            edit_packages_button.set_tooltip_text("Edit the packages included in your image")
            edit_packages_button.connect("clicked", self.edit_packages_button_clicked_cb)
        else: # reached from "My images"
            edit_packages_button = None
        self.package_detail = self.DetailBox(varlist=varlist, vallist=vallist, button=edit_packages_button)
        self.box_group_area.pack_start(self.package_detail, expand=False, fill=False)

        # pack the buttons at the bottom; at this point they are already created.
        self.box_group_area.pack_end(self.details_bottom_buttons, expand=False, fill=False)

        self.show_all()

    def view_files_clicked_cb(self, button, image_addr):
        os.system("xdg-open /%s" % image_addr)

    def refresh_package_detail_box(self, image_size):
        self.package_detail.update_line_widgets("Total image size: ", image_size)

    def test_type_runnable(self, image_name):
        type_runnable = False
        for t in self.builder.parameters.runnable_image_types:
            if image_name.endswith(t):
                type_runnable = True
                break
        return type_runnable

    def test_mach_runnable(self, image_name):
        mach_runnable = False
        for t in self.builder.parameters.runnable_machine_patterns:
            if t in image_name:
                mach_runnable = True
                break
        return mach_runnable

    def test_deployable(self, image_name):
        deployable = False
        for t in self.builder.parameters.deployable_image_types:
            if image_name.endswith(t):
                deployable = True
                break
        return deployable
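    # Illustrative sketch (hypothetical inputs): the default-toggle rule
    # used in show_page() combines the three tests above into one predicate:
    #     def is_default_candidate(name, params):
    #         runnable = any(name.endswith(t) for t in params.runnable_image_types) \
    #                    and any(p in name for p in params.runnable_machine_patterns)
    #         deployable = any(name.endswith(t) for t in params.deployable_image_types)
    #         return runnable or deployable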
    def toggled_cb(self, table, cell, path, columnid, tree):
        model = tree.get_model()
        if not model:
            return
        iter = model.get_iter_first()
        while iter:
            rowpath = model.get_path(iter)
            model[rowpath][columnid] = False
            iter = model.iter_next(iter)

        model[path][columnid] = True
        self.refresh_package_detail_box(model[path][1])

        image_name = model[path][0]

        # remove the old button connections and widgets
        for button_id, button in self.button_ids.items():
            button.disconnect(button_id)
        self._remove_all_widget()
        # repack
        self.pack_start(self.details_top_buttons, expand=False, fill=False)
        self.pack_start(self.group_align, expand=True, fill=True)
        if self.build_result:
            self.box_group_area.pack_start(self.build_result, expand=False, fill=False)
        self.box_group_area.pack_start(self.image_detail, expand=True, fill=True)
        if self.setting_detail:
            self.box_group_area.pack_start(self.setting_detail, expand=False, fill=False)
        self.box_group_area.pack_start(self.package_detail, expand=False, fill=False)
        self.create_bottom_buttons(self.buttonlist, image_name)
        self.box_group_area.pack_end(self.details_bottom_buttons, expand=False, fill=False)
        self.show_all()

    def create_bottom_buttons(self, buttonlist, image_name):
        # Create the buttons at the bottom
        created = False
        packed = False
        self.button_ids = {}

        # create button "Deploy image"
        name = "Deploy image"
        if name in buttonlist and self.test_deployable(image_name):
            deploy_button = HobButton('Deploy image')
            deploy_button.set_size_request(205, 49)
            deploy_button.set_tooltip_text("Burn a live image to a USB drive or flash memory")
            deploy_button.set_flags(gtk.CAN_DEFAULT)
            button_id = deploy_button.connect("clicked", self.deploy_button_clicked_cb)
            self.button_ids[button_id] = deploy_button
            self.details_bottom_buttons.pack_end(deploy_button, expand=False, fill=False)
            created = True
            packed = True

        name = "Run image"
        if name in buttonlist and self.test_type_runnable(image_name) and self.test_mach_runnable(image_name):
            if created == True:
                # separator
                label = gtk.Label(" or ")
                self.details_bottom_buttons.pack_end(label, expand=False, fill=False)

                # create button "Run image"
                run_button = HobAltButton("Run image")
            else:
                # create button "Run image" as the primary button
                run_button = HobButton("Run image")
                run_button.set_size_request(205, 49)
                run_button.set_flags(gtk.CAN_DEFAULT)
                packed = True
            run_button.set_tooltip_text("Start up an image with qemu emulator")
            button_id = run_button.connect("clicked", self.run_button_clicked_cb)
            self.button_ids[button_id] = run_button
            self.details_bottom_buttons.pack_end(run_button, expand=False, fill=False)
            created = True

        if not packed:
            box = gtk.HBox(False, 6)
            box.show()
            subbox = gtk.HBox(False, 0)
            subbox.set_size_request(205, 49)
            subbox.show()
            box.add(subbox)
            self.details_bottom_buttons.pack_end(box, False, False)

        name = "Save as template"
        if name in buttonlist:
            if created == True:
                # separator
                label = gtk.Label(" or ")
                self.details_bottom_buttons.pack_end(label, expand=False, fill=False)

            # create button "Save as template"
            save_button = HobAltButton("Save as template")
            save_button.set_tooltip_text("Save the image configuration for reuse")
            button_id = save_button.connect("clicked", self.save_button_clicked_cb)
            self.button_ids[button_id] = save_button
            self.details_bottom_buttons.pack_end(save_button, expand=False, fill=False)
            created = True

        name = "Build new image"
        if name in buttonlist:
            # create button "Build new image"
            build_new_button = HobAltButton("Build new image")
            build_new_button.set_tooltip_text("Create a new image from scratch")
            button_id = build_new_button.connect("clicked", self.build_new_button_clicked_cb)
            self.button_ids[button_id] = build_new_button
            self.details_bottom_buttons.pack_start(build_new_button, expand=False, fill=False)

    def _get_selected_image(self):
        image_name = ""
        iter = self.image_store.get_iter_first()
        while iter:
            path = self.image_store.get_path(iter)
            if self.image_store[path][2]:
                image_name = self.image_store[path][0]
                break
            iter = self.image_store.iter_next(iter)

        return image_name

    def save_button_clicked_cb(self, button):
        self.builder.show_save_template_dialog()

    def deploy_button_clicked_cb(self, button):
        image_name = self._get_selected_image()
        self.builder.deploy_image(image_name)

    def run_button_clicked_cb(self, button):
        image_name = self._get_selected_image()
        self.builder.runqemu_image(image_name)

    def build_new_button_clicked_cb(self, button):
        self.builder.initiate_new_build_async()

    def edit_config_button_clicked_cb(self, button):
        self.builder.show_configuration()

    def edit_packages_button_clicked_cb(self, button):
        self.builder.show_packages(ask=False)

    def template_button_clicked_cb(self, button):
        response, path = self.builder.show_load_template_dialog()
        if not response:
            return
        if path:
            self.builder.load_template(path)

    def my_images_button_clicked_cb(self, button):
        self.builder.show_load_my_images_dialog()

    def settings_button_clicked_cb(self, button):
        # Create an advanced settings dialog
        response, settings_changed = self.builder.show_adv_settings_dialog()
        if not response:
            return
        if settings_changed:
            self.builder.reparse_post_adv_settings()
@@ -1,249 +0,0 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import gtk
import glib
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import HobViewTable, HobNotebook, HobAltButton, HobButton
from bb.ui.crumbs.hoblistmodel import PackageListModel
from bb.ui.crumbs.hobpages import HobPage

#
# PackageSelectionPage
#
class PackageSelectionPage (HobPage):

    pages = [
        {
            'name'    : 'Included',
            'filter'  : { PackageListModel.COL_INC : [True] },
            'columns' : [{
                'col_name' : 'Package name',
                'col_id'   : PackageListModel.COL_NAME,
                'col_style': 'text',
                'col_min'  : 100,
                'col_max'  : 300,
                'expand'   : 'True'
            }, {
                'col_name' : 'Brought in by',
                'col_id'   : PackageListModel.COL_BINB,
                'col_style': 'binb',
                'col_min'  : 100,
                'col_max'  : 350,
                'expand'   : 'True'
            }, {
                'col_name' : 'Size',
                'col_id'   : PackageListModel.COL_SIZE,
                'col_style': 'text',
                'col_min'  : 100,
                'col_max'  : 300,
                'expand'   : 'True'
            }, {
                'col_name' : 'Included',
                'col_id'   : PackageListModel.COL_INC,
                'col_style': 'check toggle',
                'col_min'  : 100,
                'col_max'  : 100
            }]
        }, {
            'name'    : 'All packages',
            'filter'  : {},
            'columns' : [{
                'col_name' : 'Package name',
                'col_id'   : PackageListModel.COL_NAME,
                'col_style': 'text',
                'col_min'  : 100,
                'col_max'  : 400,
                'expand'   : 'True'
            }, {
                'col_name' : 'Size',
                'col_id'   : PackageListModel.COL_SIZE,
                'col_style': 'text',
                'col_min'  : 100,
                'col_max'  : 500,
                'expand'   : 'True'
            }, {
                'col_name' : 'Included',
                'col_id'   : PackageListModel.COL_INC,
                'col_style': 'check toggle',
                'col_min'  : 100,
                'col_max'  : 100
            }]
        }
    ]
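
Each entry in `pages` is a declarative column specification that HobViewTable turns into gtk.TreeView columns. A rough sketch of how such a spec can drive column construction (simplified to text renderers only; the real HobViewTable also implements the 'binb' and 'check toggle' styles):

import gtk

def build_columns(tree, columns):
    # tree: a gtk.TreeView; columns: one 'columns' list from the spec above
    for spec in columns:
        renderer = gtk.CellRendererText()
        col = gtk.TreeViewColumn(spec['col_name'], renderer, text=spec['col_id'])
        col.set_min_width(spec['col_min'])
        col.set_max_width(spec['col_max'])
        col.set_expand(spec.get('expand') == 'True')
        tree.append_column(col)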

    def __init__(self, builder):
        super(PackageSelectionPage, self).__init__(builder, "Packages")

        # set invisible members
        self.recipe_model = self.builder.recipe_model
        self.package_model = self.builder.package_model

        # create visual elements
        self.create_visual_elements()

    def create_visual_elements(self):
        self.label = gtk.Label("Packages included: 0\nSelected packages size: 0 MB")
        self.eventbox = self.add_onto_top_bar(self.label, 73)
        self.pack_start(self.eventbox, expand=False, fill=False)
        self.pack_start(self.group_align, expand=True, fill=True)

        # set visible members
        self.ins = HobNotebook()
        self.tables = [] # we need to modify the tables when the dialog is shown
        # append the tabs
        for page in self.pages:
            columns = page['columns']
            tab = HobViewTable(columns)
            filter = page['filter']
            tab.set_model(self.package_model.tree_model(filter))
            tab.connect("toggled", self.table_toggled_cb, page['name'])
            if page['name'] == "Included":
                tab.connect("button-release-event", self.button_click_cb)
                tab.connect("cell-fadeinout-stopped", self.after_fadeout_checkin_include)
            label = gtk.Label(page['name'])
            self.ins.append_page(tab, label)
            self.tables.append(tab)

        self.ins.set_entry("Search packages:")
        # set the search entry for each table
        for tab in self.tables:
            tab.set_search_entry(0, self.ins.search)

        # add all into the dialog
        self.box_group_area.pack_start(self.ins, expand=True, fill=True)

        button_box = gtk.HBox(False, 6)
        self.box_group_area.pack_start(button_box, expand=False, fill=False)

        self.build_image_button = HobButton('Build image')
        self.build_image_button.set_size_request(205, 49)
        self.build_image_button.set_tooltip_text("Build target image")
        self.build_image_button.set_flags(gtk.CAN_DEFAULT)
        self.build_image_button.grab_default()
        self.build_image_button.connect("clicked", self.build_image_clicked_cb)
        button_box.pack_end(self.build_image_button, expand=False, fill=False)

        self.back_button = HobAltButton("<< Back to image configuration")
        self.back_button.connect("clicked", self.back_button_clicked_cb)
        button_box.pack_start(self.back_button, expand=False, fill=False)

    def button_click_cb(self, widget, event):
        path, col = widget.table_tree.get_cursor()
        tree_model = widget.table_tree.get_model()
        if path: # else activation is likely a removal
            binb = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_BINB)
            if binb:
                self.builder.show_binb_dialog(binb)

    def build_image_clicked_cb(self, button):
        self.builder.build_image()

    def back_button_clicked_cb(self, button):
        self.builder.show_configuration()

    def _expand_all(self):
        for tab in self.tables:
            tab.table_tree.expand_all()

    def refresh_selection(self):
        self._expand_all()

        self.builder.configuration.selected_packages = self.package_model.get_selected_packages()
        self.builder.configuration.user_selected_packages = self.package_model.get_user_selected_packages()
        selected_packages_num = len(self.builder.configuration.selected_packages)
        selected_packages_size = self.package_model.get_packages_size()
        selected_packages_size_str = HobPage._size_to_string(selected_packages_size)

        image_overhead_factor = self.builder.configuration.image_overhead_factor
        image_rootfs_size = self.builder.configuration.image_rootfs_size * 1024 # image_rootfs_size is in KB
        image_extra_size = self.builder.configuration.image_extra_size * 1024 # image_extra_size is in KB
        base_size = image_overhead_factor * selected_packages_size
        image_total_size = max(base_size, image_rootfs_size) + image_extra_size
        if "zypper" in self.builder.configuration.selected_packages:
            image_total_size += (51200 * 1024)
        image_total_size_str = HobPage._size_to_string(image_total_size)

        self.label.set_text("Packages included: %s\nSelected packages size: %s\nTotal image size: %s" %
                            (selected_packages_num, selected_packages_size_str, image_total_size_str))
        self.ins.show_indicator_icon("Included", selected_packages_num)
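
The arithmetic in refresh_selection reduces to a small formula: scale the selected packages by the overhead factor, never drop below the configured rootfs floor, then add the flat extra allowance (plus a fixed surcharge when zypper is selected). A standalone sketch of the same calculation (the sample numbers are illustrative, not Hob defaults):

def estimate_image_size(packages_size, overhead_factor, rootfs_size_kb, extra_size_kb, has_zypper=False):
    # Returns the estimated image size in bytes
    base_size = overhead_factor * packages_size       # scaled package payload
    total = max(base_size, rootfs_size_kb * 1024)     # respect the configured floor
    total += extra_size_kb * 1024                     # flat extra allowance
    if has_zypper:
        total += 51200 * 1024                         # same constant surcharge as above
    return total

# e.g. 120 MB of packages, 1.3x overhead, a 64 MB floor and 10 MB extra
print(estimate_image_size(120 * 1024 * 1024, 1.3, 65536, 10240))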

    def toggle_item_idle_cb(self, path, view_tree, cell, pagename):
        if not self.package_model.path_included(path):
            self.package_model.include_item(item_path=path, binb="User Selected")
        else:
            if pagename == "Included":
                self.pre_fadeout_checkout_include(view_tree)
                self.package_model.exclude_item(item_path=path)
                self.render_fadeout(view_tree, cell)
            else:
                self.package_model.exclude_item(item_path=path)

        self.refresh_selection()
        if not self.builder.customized:
            self.builder.customized = True
            self.builder.configuration.selected_image = self.recipe_model.__dummy_image__
            self.builder.rcppkglist_populated()

        self.builder.window_sensitive(True)

    def table_toggled_cb(self, table, cell, view_path, toggled_columnid, view_tree, pagename):
        # Click to include a package
        self.builder.window_sensitive(False)
        view_model = view_tree.get_model()
        path = self.package_model.convert_vpath_to_path(view_model, view_path)
        glib.idle_add(self.toggle_item_idle_cb, path, view_tree, cell, pagename)
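
table_toggled_cb defers the real work with glib.idle_add: the window is desensitised immediately for feedback, the include/exclude update runs on the next main-loop idle iteration, and toggle_item_idle_cb re-sensitises the window when it finishes. The same pattern in miniature (slow_work is an illustrative stand-in for the model update):

import glib

def slow_work(window):
    # ... expensive update happens here, off the signal handler ...
    window.set_sensitive(True)  # re-enable input once done
    return False                # False removes the idle handler after one run

def on_clicked(button, window):
    window.set_sensitive(False)       # immediate feedback to the user
    glib.idle_add(slow_work, window)  # schedule the work for the next idle slot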

    def pre_fadeout_checkout_include(self, tree):
        self.package_model.resync_fadeout_column(self.package_model.get_iter_first())
        # Check out a model based on the COL_FADE_INC column, which preserves
        # the state of COL_INC from before exclude_item is called
        filter = { PackageListModel.COL_FADE_INC : [True] }
        new_model = self.package_model.tree_model(filter)
        tree.set_model(new_model)
        tree.expand_all()

    def get_excluded_rows(self, to_render_cells, model, it):
        while it:
            path = model.get_path(it)
            prev_cell_is_active = model.get_value(it, PackageListModel.COL_FADE_INC)
            curr_cell_is_active = model.get_value(it, PackageListModel.COL_INC)
            if (prev_cell_is_active == True) and (curr_cell_is_active == False):
                to_render_cells.append(path)
            if model.iter_has_child(it):
                self.get_excluded_rows(to_render_cells, model, model.iter_children(it))
            it = model.iter_next(it)

        return to_render_cells
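
get_excluded_rows is the standard gtk.TreeModel walk: iterate siblings with iter_next() and recurse into children with iter_children(). A generic version that collects the path of every row matching a predicate (the function and argument names are illustrative):

def collect_paths(model, it, predicate, acc):
    # Walk all siblings at this level, descending into children as we go
    while it:
        if predicate(model, it):
            acc.append(model.get_path(it))
        if model.iter_has_child(it):
            collect_paths(model, model.iter_children(it), predicate, acc)
        it = model.iter_next(it)
    return acc

# usage: collect rows whose column 0 holds a true value
# collect_paths(model, model.get_iter_first(), lambda m, i: m.get_value(i, 0), [])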

    def render_fadeout(self, tree, cell):
        if (not cell) or (not tree):
            return
        to_render_cells = []
        view_model = tree.get_model()
        self.get_excluded_rows(to_render_cells, view_model, view_model.get_iter_first())

        cell.fadeout(tree, 1000, to_render_cells)

    def after_fadeout_checkin_include(self, table, ctrl, cell, tree):
        tree.set_model(self.package_model.tree_model(self.pages[0]['filter']))
        tree.expand_all()
@@ -1,163 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import gobject
import gtk
try:
    import gconf
except ImportError:
    pass

class PersistentTooltip(gtk.Window):
    """
    A tooltip which persists once shown until the user dismisses it with the Esc
    key or by clicking the close button.

    # FIXME: the PersistentTooltip should be disabled when the user clicks anywhere off
    # it. We can't do this with focus-out-event because modal ensures we have focus?

    markup: some Pango text markup to display in the tooltip
    """
    def __init__(self, markup):
        gtk.Window.__init__(self, gtk.WINDOW_POPUP)

        # Inherit the system theme for a tooltip
        style = gtk.rc_get_style_by_paths(gtk.settings_get_default(),
                                          'gtk-tooltip', 'gtk-tooltip', gobject.TYPE_NONE)
        self.set_style(style)

        # The placement of the close button on the tip should reflect how the
        # window manager of the user's system places close buttons. Try to read
        # the metacity gconf key to determine whether the close button is on the
        # left or the right.
        # In the case that we can't determine the user's configuration we default
        # to close buttons being on the right.
        __button_right = True
        try:
            client = gconf.client_get_default()
            order = client.get_string("/apps/metacity/general/button_layout")
            if order and order.endswith(":"):
                __button_right = False
        except NameError:
            pass
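
For reference, metacity's button_layout value names the left-side buttons, then a colon, then the right-side buttons, so a trailing colon means every button, including close, sits on the left. Two illustrative values run through the same check as above:

for layout in ("menu:minimize,maximize,close", "close,minimize,maximize:"):
    button_right = not layout.endswith(":")
    print("%s -> close on the %s" % (layout, "right" if button_right else "left"))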

        # We need to ensure we're only shown once
        self.shown = False

        # We don't want any WM decorations
        self.set_decorated(False)
        # We don't want to show in the taskbar or window switcher
        self.set_skip_pager_hint(True)
        self.set_skip_taskbar_hint(True)
        # We must be modal to ensure we grab focus when presented from a gtk.Dialog
        self.set_modal(True)

        self.set_border_width(0)
        self.set_position(gtk.WIN_POS_MOUSE)
        self.set_opacity(0.95)

        # Ensure a reasonable minimum size
        self.set_geometry_hints(self, 100, 50)

        # Draw our label and close buttons
        hbox = gtk.HBox(False, 0)
        hbox.show()
        self.add(hbox)

        img = gtk.Image()
        img.set_from_stock(gtk.STOCK_CLOSE, gtk.ICON_SIZE_BUTTON)

        self.button = gtk.Button()
        self.button.set_image(img)
        self.button.connect("clicked", self._dismiss_cb)
        self.button.set_flags(gtk.CAN_DEFAULT)
        self.button.grab_focus()
        self.button.show()
        vbox = gtk.VBox(False, 0)
        vbox.show()
        vbox.pack_start(self.button, False, False, 0)
        if __button_right:
            hbox.pack_end(vbox, True, True, 0)
        else:
            hbox.pack_start(vbox, True, True, 0)

        self.set_default(self.button)

        bin = gtk.HBox(True, 6)
        bin.set_border_width(6)
        bin.show()
        self.label = gtk.Label()
        self.label.set_line_wrap(True)
        # We want to match the colours of the normal tooltips, as dictated by
        # the user's gtk+-2.0 theme, wherever possible - on some systems this
        # requires explicitly setting a fg_color for the label which matches the
        # tooltip_fg_color
        settings = gtk.settings_get_default()
        colours = settings.get_property('gtk-color-scheme').split('\n')
        # remove any empty lines, there's likely to be a trailing one after
        # calling split on a dictionary-like string
        colours = filter(None, colours)
        for col in colours:
            item, val = col.split(': ')
            if item == 'tooltip_fg_color':
                style = self.label.get_style()
                style.fg[gtk.STATE_NORMAL] = gtk.gdk.color_parse(val)
                self.label.set_style(style)
                break # we only care about the tooltip_fg_color
        self.label.set_markup(markup)
        self.label.show()
        bin.add(self.label)
        hbox.pack_end(bin, True, True, 6)

        self.connect("key-press-event", self._catch_esc_cb)
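
The gtk-color-scheme property parsed above is a newline-separated string of 'name: colour' pairs. The same parsing in isolation (the sample value is illustrative, not a real theme):

scheme = "tooltip_fg_color: #000000\ntooltip_bg_color: #f5f5b5\n"
colours = dict(line.split(': ') for line in filter(None, scheme.split('\n')))
print(colours.get('tooltip_fg_color'))  # -> #000000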
"""
|
||||
Callback when the PersistentTooltip's close button is clicked.
|
||||
Hides the PersistentTooltip.
|
||||
"""
|
||||
def _dismiss_cb(self, button):
|
||||
self.hide()
|
||||
return True
|
||||
|
||||
"""
|
||||
Callback when the Esc key is detected. Hides the PersistentTooltip.
|
||||
"""
|
||||
def _catch_esc_cb(self, widget, event):
|
||||
keyname = gtk.gdk.keyval_name(event.keyval)
|
||||
if keyname == "Escape":
|
||||
self.hide()
|
||||
return True
|
||||
|
||||
"""
|
||||
Called to present the PersistentTooltip.
|
||||
Overrides the superclasses show() method to include state tracking.
|
||||
"""
|
||||
def show(self):
|
||||
if not self.shown:
|
||||
self.shown = True
|
||||
gtk.Window.show(self)
|
||||
|
||||
"""
|
||||
Called to hide the PersistentTooltip.
|
||||
Overrides the superclasses hide() method to include state tracking.
|
||||
"""
|
||||
def hide(self):
|
||||
self.shown = False
|
||||
gtk.Window.hide(self)
|
||||
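
A minimal usage sketch for the class above (PyGTK 2 with a running X display is assumed; the markup string is illustrative):

tip = PersistentTooltip("<b>Image size</b>\nEstimated from the selected packages.")
tip.show()  # stays visible until Esc or the close button, per the overrides above
gtk.main()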
Some files were not shown because too many files have changed in this diff.