Compare commits


2 commits: elroy ... clyde

Marcin Juszkiewicz | b9065372f4 | db: fix SRC_URI | 2008-03-28 13:14:58 +00:00
  git-svn-id: https://svn.o-hand.com/repos/poky/branches/clyde@4130 311d38ba-8fff-0310-9ca6-ca027cbcb966

Richard Purdie | 0e09f04573 | Branch for clyde | 2007-01-19 10:14:05 +00:00
  git-svn-id: https://svn.o-hand.com/repos/poky/branches/clyde@1165 311d38ba-8fff-0310-9ca6-ca027cbcb966
3662 changed files with 251975 additions and 1182265 deletions

LICENSE (deleted, 11 lines)

@@ -1,11 +0,0 @@
Different components of Poky are under different licenses (a mix of
MIT and GPLv2). Please see:
bitbake/COPYING (GPLv2)
meta/COPYING.MIT (MIT)
meta-extras/COPYING.MIT (MIT)
which cover the components in those subdirectories.
License information for any other files is either explicitly stated
or defaults to GPL version 2.

README (75 lines changed)

@@ -1,15 +1,66 @@
Poky
====
Introduction
============
Poky platform builder is a combined cross build system and development
environment. It features support for building X11/Matchbox/GTK based
filesystem images for various embedded devices and boards. It also
supports cross-architecture application development using QEMU emulation
and a standalone toolchain and SDK with IDE integration.

'Poky' is a combined cross build system and Linux distribution based
upon OpenEmbedded. It features support for building X11/Matchbox/GTK
based filesystem images for various embedded devices and boards.
Poky has an extensive handbook, the source of which is contained in
the handbook directory. For compiled HTML or pdf versions of this,
see the Poky website http://pokylinux.org.
Additional information on the specifics of hardware that Poky supports
is available in README.hardware.
Required Packages
=================

Running Poky on Debian based distributions requires the following
extra packages to be installed (a one-line install command is shown
after the list):
build-essential
diffstat
texinfo
texi2html
cvs
subversion
gawk
bochsbios (to run qemux86 images)
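
On Debian based systems the list above can be installed in one step, for
example (bochsbios is only needed to run qemux86 images):

  % sudo apt-get install build-essential diffstat texinfo texi2html \
        cvs subversion gawk bochsbios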
You also need to install QEMU from http://debian.o-hand.com/. A
poky-depends deb is also available from this source which will install
all the dependencies mentioned above for you.
Alternatively, Poky can build QEMU itself, but for this you need the
following packages installed:
gcc-3.4
libsdl1.2-dev
zlib1g-dev
You will also need to comment out ASSUME_PROVIDED += "qemu-native" in
build/conf/local.conf.
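
For example, the relevant line in build/conf/local.conf would then read:

  # ASSUME_PROVIDED += "qemu-native"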
Building under other distros such as Fedora is known to work. Use the
above package names as a guide for dependencies.
Building An Image
=================

Simply run:
% source poky-init-build-env
% bitbake oh-image-pda
This will result in an ext2 image and kernel for QEMU ARM (see the scripts dir).
To build for other machine types, see MACHINE in build/conf/local.conf.
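
For example, to build images for the emulated x86 machine instead, set the
following in build/conf/local.conf before running bitbake:

  MACHINE = "qemux86"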
Notes
=====

Useful Links:
OpenedHand
http://openedhand.com
Poky Homepage
http://projects.o-hand.com/poky
OE Homepage and wiki
http://openembedded.org

README.hardware (deleted, 436 lines)

@@ -1,436 +0,0 @@
Poky Hardware Reference Guide
=============================
This file gives details about using Poky with different hardware reference
boards and consumer devices. A full list of target machines can be found by
looking in the meta/conf/machine/ directory. If in doubt about using Poky with
your hardware, consult the documentation for your board/device. To discuss
support for further hardware reference boards/devices please contact OpenedHand.
QEMU Emulation Images (qemuarm and qemux86)
===========================================
To simplify development Poky supports building images to work with the QEMU
emulator in system emulation mode. Two architectures are currently supported,
ARM (via qemuarm) and x86 (via qemux86). Use of the QEMU images is covered
in the Poky Handbook.
Hardware Reference Boards
=========================
The following boards are supported by Poky:
* Compulab CM-X270 (cm-x270)
* Compulab EM-X270 (em-x270)
* FreeScale iMX31ADS (mx31ads)
* Marvell PXA3xx Zylonite (zylonite)
* Logic iMX31 Lite Kit (mx31litekit)
* Phytec phyCORE-iMX31 (mx31phy)
For more information see the relevant board's section below. The Poky MACHINE
setting corresponding to each board is given in brackets.
Consumer Devices
================
The following consumer devices are supported by Poky:
* FIC Neo1973 GTA01 smartphone (fic-gta01)
* HTC Universal (htcuniversal)
* Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)
* Sharp Zaurus SL-C7x0 series (c7x0)
* Sharp Zaurus SL-C1000 (akita)
* Sharp Zaurus SL-C3x00 series (spitz)
For more information see the relevant device's section below. The Poky MACHINE
setting corresponding to each device is given in brackets.
Poky Boot CD (bootcdx86)
========================
The Poky boot CD ISO images are designed as a demonstration of the Poky
environment and to show the versatile image formats Poky can generate. They
will run on Pentium II or greater PC-style computers. The ISO image can be
burnt to CD and then booted from.
Hardware Reference Boards
=========================
Compulab CM-X270 (cm-x270)
==========================
The bootloader on this board doesn't support writing jffs2 images directly to
NAND and normally uses a proprietary kernel flash driver. To allow the use of
jffs2 images, a two stage updating procedure is needed. Firstly, an initramfs
is booted which contains mtd utilities and this is then used to write the main
filesystem.
It is assumed the board is connected to a network where a TFTP server is
available and that a serial terminal is available to communicate with the
bootloader (38400, 8N1). If a DHCP server is available the device will use it
to obtain an IP address. If not, run:
ARMmon > setip dhcp off
ARMmon > setip ip 192.168.1.203
ARMmon > setip mask 255.255.255.0
To reflash the kernel:
ARMmon > download kernel tftp zimage 192.168.1.202
ARMmon > flash kernel
where zimage is the name of the kernel on the TFTP server and its IP address is
192.168.1.202. The names of the files must be all lowercase.
To reflash the initrd/initramfs:
ARMmon > download ramdisk tftp diskimage 192.168.1.202
ARMmon > flash ramdisk
where diskimage is the name of the initramfs image (a cpio.gz file).
To boot the initramfs:
ARMmon > ramdisk on
ARMmon > bootos "console=ttyS0,38400 rdinit=/sbin/init"
To reflash the main image login to the system as user "root", then run:
# ifconfig eth0 192.168.1.203
# tftp -g -r mainimage 192.168.1.202
# flash_eraseall /dev/mtd1
# nandwrite /dev/mtd1 mainimage
which configures the network interface with the IP address 192.168.1.203,
downloads the "mainimage" file from the TFTP server at 192.168.1.202, erases
the flash and then writes the new image to the flash.
The main image can then be booted with:
ARMmon > bootos "console=ttyS0,38400 root=/dev/mtdblock1 rootfstype=jffs2"
Note that the initramfs image is built by Poky in a slightly different mode
to normal since it uses uClibc. To generate this, use a command like:
IMAGE_FSTYPES=cpio.gz MACHINE=cm-x270 POKYLIBC=uclibc bitbake poky-image-minimal-mtdutils
Compulab EM-X270 (em-x270)
==========================
Fetch the "Linux - kernel and run-time image (Angstrom)" ZIP file from the
Compulab website. Inside the images directory of this ZIP file is another ZIP
file called 'LiveDisk.zip'. Extract this over a cleanly formatted vfat USB flash
drive. Replace the 'em_x270.img' file with the 'updater-em-x270.ext2' file.
Insert this USB disk into the supplied adapter and connect this to the
board. Whilst holding down the suspend button, press the reset button. The
board will now boot off the USB key and into a version of Angstrom. On the
desktop is an icon labelled "Updater". Run this program to launch the updater,
which will flash the Poky kernel and rootfs to the board.
FreeScale iMX31ADS (mx31ads)
============================
The correct serial port is the top-most female connector to the right of the
ethernet socket.
For uploading data to RedBoot we are going to use TFTP. In this example we
assume that the TFTP server is on 192.168.9.1 and the board is on 192.168.9.2.
To set the IP address, run:
ip_address -l 192.168.9.2/24 -h 192.168.9.1
To download a kernel called "zimage" from the TFTP server, run:
load -r -b 0x100000 zimage
To write the kernel to flash run:
fis create kernel
To download a rootfs jffs2 image "rootfs" from the TFTP server, run:
load -r -b 0x100000 rootfs
To write the root filesystem to flash run:
fis create root
To load and boot a kernel and rootfs from flash:
fis load kernel
exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/mtdblock2 rootfstype=jffs2 init=linuxrc ip=none"
To load and boot a kernel from a TFTP server with the rootfs over NFS:
load -r -b 0x100000 zimage
exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/nfs nfsroot=192.168.9.1:/mnt/nfsmx31 rw ip=192.168.9.2::192.168.9.1:255.255.255.0"
The instructions above are for using the (default) NOR flash on the board;
there is also 128MB of NAND flash. It is possible to install Poky to the NAND
flash, which gives more space for the rootfs; instructions for using this are
given below. To switch to the NAND flash:
factive NAND
This will then restart RedBoot using the NAND rather than the NOR. If you
have not used the NAND before then it is unlikely that there will be a
partition table yet. You can get the list of partitions with 'fis list'.
If this shows no partitions then you can create them with:
fis init
The output of 'fis list' should now show:
Name FLASH addr Mem addr Length Entry point
RedBoot 0xE0000000 0xE0000000 0x00040000 0x00000000
FIS directory 0xE7FF4000 0xE7FF4000 0x00003000 0x00000000
RedBoot config 0xE7FF7000 0xE7FF7000 0x00001000 0x00000000
Partitions for the kernel and rootfs need to be created:
fis create -l 0x1A0000 -e 0x00100000 kernel
fis create -l 0x5000000 -e 0x00100000 root
You may now use the instructions above for flashing. However, it is important
to note that the erase block size for the NAND is different from that of the
NOR, so the JFFS2 erase size will need to be changed to 0x4000. Standard images
are built for NOR and you will need to build custom images for NAND.
You will also need to update the kernel command line to use the correct root
filesystem. This should be '/dev/mtdblock7' if you adhere to the partitioning
scheme shown above. If this fails, you can double-check against the output
from the kernel when it evaluates the available mtd partitions.
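
Putting this together, booting from NAND would use the NOR command shown
earlier with only the root device changed:

  exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/mtdblock7 rootfstype=jffs2 init=linuxrc ip=none"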
Marvell PXA3xx Zylonite (zylonite)
==================================
These instructions assume the Zylonite is connected to a machine running a TFTP
server at address 192.168.123.5 and that a serial link (38400 8N1) is available
to access the blob bootloader. The kernel is on the TFTP server as
"zylonite-kernel" and the root filesystem jffs2 file is "zylonite-rootfs" and
the images are to be saved in NAND flash.
The following commands setup blob:
blob> setip client 192.168.123.4
blob> setip server 192.168.123.5
To flash the kernel:
blob> tftp zylonite-kernel
blob> nandwrite -j 0x80800000 0x60000 0x200000
To flash the rootfs:
blob> tftp zylonite-rootfs
blob> nanderase -j 0x260000 0x5000000
blob> nandwrite -j 0x80800000 0x260000 <length>
(where <length> is the rootfs size which will be printed by the tftp step)
To boot the board:
blob> nkernel
blob> boot
Logic iMX31 Lite Kit (mx31litekit)
==================================
The easiest method to boot this board is to take an MMC/SD card, format
the first partition as ext2, then extract the Poky image onto it as root.
Assuming the board is network connected, a TFTP server is available at
192.168.1.33 and a serial terminal is available (115200 8N1), the following
commands will boot a kernel called "mx31kern" from the TFTP server:
losh> ifconfig sm0 192.168.1.203 255.255.255.0 192.168.1.33
losh> load raw 0x80100000 0x200000 /tftp/192.168.1.33:mx31kern
losh> exec 0x80100000 -
Phytec phyCORE-iMX31 (mx31phy)
==============================
Support for this board is currently being developed. Experimental jffs2
images and a suitable kernel are available and are known to work with the
board.
Consumer Devices
================
FIC Neo1973 GTA01 smartphone (fic-gta01)
========================================
To install Poky on a GTA01 smartphone you will need the "dfu-util" tool,
which you can build with the "bitbake dfu-util-native" command.
Flashing requires these steps:
1. Power down the device.
2. Connect the device to the host machine via USB.
3. Hold the AUX key and press the Power key. A boot menu should appear
on screen.
4. Run "dfu-util -l" to check if the phone is visible on the USB bus.
The output should look like this:
dfu-util - (C) 2007 by OpenMoko Inc.
This program is Free Software and has ABSOLUTELY NO WARRANTY
Found Runtime: [0x1457:0x5119] devnum=19, cfg=0, intf=2, alt=0, name="USB Device Firmware Upgrade"
5. Flash the kernel with "dfu-util -a kernel -D uImage-2.6.21.6-moko11-r2-fic-gta01.bin"
6. Flash rootfs with "dfu-util -a rootfs -D <image>", where <image> is the
jffs2 image file to use as the root filesystem
(e.g. ./tmp/deploy/images/poky-image-sato-fic-gta01.jffs2)
HTC Universal (htcuniversal)
============================
Note: HTC Universal support is highly experimental.
On the HTC Universal, entirely replacing the Windows installation is not
supported; instead, Poky is booted from an MMC/SD card from within Windows.
Once Poky has booted, Windows is no longer in memory or active, but when power
is removed the user will be returned to Windows and will need to return to
Linux from there.
Once an MMC/SD card is available, it is suggested it be split into two
partitions: one for a program called HaRET, which lets you boot Linux from
within Windows, and the second for the rootfs. The HaRET partition should be
the first partition on the card and be vfat formatted. It doesn't need to be
large, just enough for HaRET and a kernel (say 5MB max). The rootfs should be
ext2 and is usually the second partition. The first partition should be vfat
so Windows recognises it; if it doesn't, Windows has been known to reformat
cards.
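
A minimal sketch of preparing such a card on a Linux host, assuming the card
appears as /dev/mmcblk0 (the device name is an assumption, and these commands
destroy any existing data on the card):

  # fdisk /dev/mmcblk0          (create a small first partition, then one for the rootfs)
  # mkfs.vfat /dev/mmcblk0p1
  # mkfs.ext2 /dev/mmcblk0p2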
On the first partition you need three files:
* a HaRET binary (version 0.5.1 works well and a working version
should be part of the last Poky release)
* a kernel renamed to "zImage"
* a default.txt which contains:
set kernel "zImage"
set mtype "855"
set cmdline "root=/dev/mmcblk0p2 rw console=ttyS0,115200n8 console=tty0 rootdelay=5 fbcon=rotate:1"
boot2
On the second partition the root file system is extracted as root. A different
partition layout or other kernel options can be changed in the default.txt file.
When inserted into the device, Windows should see the card and let you browse
its contents using File Explorer. Running the HaRET binary will present a dialog
box (possibly after warnings about running unsigned binaries); select OK and you
should then see Poky boot. Kernel messages can be seen by adding psplash=false
to the kernel command line.
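
For example, the cmdline entry of the default.txt shown above would become:

  set cmdline "root=/dev/mmcblk0p2 rw console=ttyS0,115200n8 console=tty0 rootdelay=5 fbcon=rotate:1 psplash=false"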
Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)
============================================================
Note: Nokia tablet support is highly experimental.
The Nokia internet tablet devices are OMAP based, tablet form-factor devices
with large screens (800x480), wifi and touchscreen.
To flash images to these devices you need the "flasher" utility, which can be
downloaded from http://tablets-dev.nokia.com/d3.php?f=flasher-3.0. This
utility needs to be run as root, and the usb filesystem needs to be mounted
(most distributions will have done this for you). Once you have this, follow
these steps:
1. Power down the device.
2. Connect the device to the host machine via USB
(connecting power to the device doesn't hurt either).
3. Run "flasher -i"
4. Power on the device.
5. The program should give an indication it's found
a tablet device. If not, recheck the cables, make sure you're
root and usbfs/usbdevfs is mounted.
6. Run "flasher -r <image> -k <kernel> -f", where <image> is the
jffs2 image file to use as the root filesystem
(e.g. ./tmp/deploy/images/poky-image-sato-nokia800.jffs2)
and <kernel> is the kernel to use
(e.g. ./tmp/deploy/images/zImage-nokia800.bin).
7. Run "flasher -R" to reboot the device.
8. The device should boot into Poky.
The nokia800 images and kernel will run on both the N800 and N810.
Sharp Zaurus SL-C7x0 series (c7x0)
==================================
The Sharp Zaurus c7x0 series (SL-C700, SL-C750, SL-C760, SL-C860, SL-7500)
are PXA25x based handheld PDAs with VGA screens. To install Poky images on
these devices follow these steps:
1. Obtain an SD/MMC or CF card with a vfat or ext2 filesystem.
2. Copy a jffs2 image file (e.g. poky-image-sato-c7x0.jffs2) onto the
card as "initrd.bin":
$ cp ./tmp/deploy/images/poky-image-sato-c7x0.jffs2 /path/to/my-cf-card/initrd.bin
3. Copy a Linux kernel file (zImage-c7x0.bin) onto the card as
"zImage.bin":
$ cp ./tmp/deploy/images/zImage-c7x0.bin /path/to/my-cf-card/zImage.bin
4. Copy an updater script (updater.sh.c7x0) onto the card
as "updater.sh":
$ cp ./tmp/deploy/images/updater.sh.c7x0 /path/to/my-cf-card/updater.sh
5. Power down the Zaurus.
6. Hold "OK" key and power on the device. An update menu should appear
(in Japanese).
7. Choose "Update" (item 4).
8. The next screen will ask for the source, choose the appropriate
card (CF or SD).
9. Make sure AC power is connected.
10. The next screen asks for confirmation, choose "Yes" (the left button).
11. The update process will start, flash the files on the card onto
the device and the device will then reboot into Poky.
Sharp Zaurus SL-C1000 (akita)
=============================
The Sharp Zaurus SL-C1000 is a PXA270 based device otherwise similar to the
c7x0. To install Poky images on this device follow the instructions for
the c7x0 but replace "c7x0" with "akita" where appropriate.
Sharp Zaurus SL-C3x00 series (spitz)
====================================
The Sharp Zaurus SL-C3x00 devices are PXA270 based devices similar
to akita but with an internal microdrive. The installation procedure
assumes a standard microdrive based device where the root (first)
partition has been enlarged to fit the image (at least 100MB,
400MB for the SDK).
The procedure is the same as for the c7x0 and akita models with the
following differences:
1. Instead of a jffs2 image you need to copy a compressed tarball of the
root filesystem (e.g. poky-image-sato-spitz.tar.gz) onto the
card as "hdimage1.tgz":
$ cp ./tmp/deploy/images/poky-image-sato-spitz.tar.gz /path/to/my-cf-card/hdimage1.tgz
2. You additionally need to copy a special tar utility (gnu-tar) onto
the card as "gnu-tar":
$ cp ./tmp/deploy/images/gnu-tar /path/to/my-cf-card/gnu-tar

bitbake/ChangeLog

@@ -1,193 +1,10 @@
Changes in BitBake 1.8.x:
- Fix -f (force) in conjunction with -b
- Fix exit code for build failures in --continue mode
- Fix git branch tags fetching
- Change parseConfigurationFile so it works on real data, not a copy
- Handle 'base' inherit and all other INHERITs from parseConfigurationFile
instead of BBHandler
- Fix getVarFlags bug in data_smart
- Optimise cache handling by more quickly detecting an invalid cache, only
  saving the cache when it has changed, moving the cache validity check into
  the parsing loop and factoring some getVar calls outside a for loop
- Cooker: Remove a debug message from the parsing loop to lower overhead
- Convert build.py exec_task to use getVarFlags
- Update shell to use cooker.buildFile
- Add StampUpdate event
- Convert -b option to use taskdata/runqueue
- Remove digraph and switch to new stamp checking code. exec_task no longer
honours dependencies
- Make fetcher timestamp updating non-fatal when permissions don't allow
updates
- Add BB_SCHEDULER variable/option ("completion" or "speed") controlling
the way bitbake schedules tasks
- Add BB_STAMP_POLICY variable/option ("perfile" or "full") controlling
how extensively stamps are looked at for validity
- When handling build target failures make sure idepends are checked and
failed where needed. Fixes --continue mode crashes.
- Fix problems with recrdeptask handling where some idepends weren't handled
correctly.
- Work around refs/HEAD issues with git over http (#3410)
- Add proxy support to the CVS fetcher (from Cyril Chemparathy)
- Improve runfetchcmd so errors are seen and various GIT variables are exported
- Add ability to fetchers to check URL validity without downloading
- Improve runtime PREFERRED_PROVIDERS warning message
- Add BB_STAMP_WHITELIST option which contains a list of stamps to ignore when
checking stamp dependencies and using a BB_STAMP_POLICY of "whitelist"
- No longer weight providers on the basis of a package being "already staged". This
leads to builds being non-deterministic.
- Flush stdout/stderr before forking to fix duplicate console output
- Make sure recrdeps tasks include all inter-task dependencies of a given fn
- Add bb.runqueue.check_stamp_fn() for use by packaged-staging
- Add PERSISTENT_DIR to store the PersistData in a persistent
directory != the cache dir.
- Add md5 and sha256 checksum generation functions to utils.py
- Revert the '-' character fix in class names since it breaks things
Changes in BitBake 1.7.3:
Changes in BitBake 1.8.10:
- Psyco is available only for x86 - do not use it on other architectures.
- Fix a bug in bb.decodeurl where http://some.where.com/somefile.tgz decoded to host="" (#1530)
- Warn about malformed PREFERRED_PROVIDERS (#1072)
- Add support for BB_NICE_LEVEL option (#1627)
- Sort initial providers list by default preference (#1145, #2024)
- Improve provider sorting so preferred versions have preference over latest versions (#768)
- Detect builds of tasks with overlapping providers and warn (will become a fatal error) (#1359)
- Add MULTI_PROVIDER_WHITELIST variable to allow known safe multiple providers to be listed
- Handle paths in svn fetcher module parameter
- Support the syntax "export VARIABLE"
- Add bzr fetcher
- Add support for cleaning directories before a task in the form:
do_taskname[cleandirs] = "dir"
- bzr fetcher tweaks from Robert Schuster (#2913)
- Add mercurial (hg) fetcher from Robert Schuster (#2913)
- Fix bogus preferred_version return values
- Fix 'depends' flag splitting
- Fix unexport handling (#3135)
- Add bb.copyfile function similar to bb.movefile (and improve movefile error reporting)
- Allow multiple options for deptask flag
- Use git-fetch instead of git-pull removing any need for merges when
fetching (we don't care about the index). Fixes fetch errors.
- Add BB_GENERATE_MIRROR_TARBALLS option, set to 0 to make git fetches
faster at the expense of not creating mirror tarballs.
- SRCREV handling updates, improvements and fixes from Poky
- Add bb.utils.lockfile() and bb.utils.unlockfile() from Poky
- Add support for task selfstamp and lockfiles flags
- Disable task number acceleration since it can allow the tasks to run
out of sequence
- Improve runqueue code comments
- Add task scheduler abstraction and some example schedulers
- Improve circular dependency chain debugging code and user feedback
- Don't give a stacktrace for invalid tasks, have a user friendly message (#3431)
- Add support for "-e target" (#3432)
- Fix shell showdata command (#3259)
- Fix shell data updating problems (#1880)
- Properly raise errors for invalid source URI protocols
- Change the wget fetcher failure handling to avoid lockfile problems
- Add git branch support
- Add support for branches in git fetcher (Otavio Salvador, Michael Lauer)
- Make taskdata and runqueue errors more user friendly
- Add norecurse and fullpath options to cvs fetcher
Changes in Bitbake 1.8.8:
- Rewrite svn fetcher to make adding extra operations easier
as part of future SRCDATE="now" fixes
(requires new FETCHCMD_svn definition in bitbake.conf)
- Change SVNDIR layout to be more unique (fixes #2644 and #2624)
- Import persistent data store from trunk
- Sync fetcher code with that in trunk, adding SRCREV support for svn
- Add ConfigParsed Event after configuration parsing is complete
- data.emit_var() - only call getVar if we need the variable
- Stop generating the A variable (seems to be legacy code)
- Make sure intertask depends get processed correctly in recursive depends
- Add pn-PN to overrides when evaluating PREFERRED_VERSION
- Improve the progress indicator by skipping tasks that have
already run before starting the build rather than during it
- Add profiling option (-P)
- Add BB_SRCREV_POLICY variable (clear or cache) to control SRCREV cache
- Add SRCREV_FORMAT support
- Fix local fetcher's localpath return values
- Apply OVERRIDES before performing immediate expansions
- Allow the -b -e option combination to take regular expressions
- Add plain message function to bb.msg
- Sort the list of providers before processing so dependency problems are
reproducible rather than effectively random
- Add locking for fetchers so only one tries to fetch a given file at a given time
- Fix int(0)/None confusion in runqueue.py which causes random gaps in dependency chains
- Fix handling of variables with expansion in the name using _append/_prepend
e.g. RRECOMMENDS_${PN}_append_xyz = "abc"
- Expand data in addtasks
- Print the list of missing DEPENDS,RDEPENDS for the "No buildable providers available for required...."
error message.
- Rework add_task to be more efficient (6% speedup, 7% number of function calls reduction)
- Sort digraph output to make builds more reproducible
- Split expandKeys into two for loops to benefit from the expand_cache (12% speedup)
- runqueue.py: Fix idepends handling to avoid dependency errors
- Clear the terminal TOSTOP flag if set (and warn the user)
- Fix regression from r653 and make SRCDATE/CVSDATE work for packages again
Changes in Bitbake 1.8.6:
- Correctly redirect stdin when forking
- If parsing errors are found, exit, too many users miss the errors
- Remove spurious PREFERRED_PROVIDER warnings
Changes in Bitbake 1.8.4:
- Make sure __inherit_cache is updated before calling include() (from Michael Krelin)
- Fix bug when target was in ASSUME_PROVIDED (#2236)
- Raise ParseError for filenames with multiple underscores instead of infinitely looping (#2062)
- Fix invalid regexp in BBMASK error handling (missing import) (#1124)
- Don't run build sanity checks on incomplete builds
- Promote certain warnings from debug to note 2 level
- Update manual
Changes in Bitbake 1.8.2:
- Catch truncated cache file errors
- Add PE (Package Epoch) support from Philipp Zabel (pH5)
- Add code to handle inter-task dependencies
- Allow operations other than assignment on flag variables
- Fix cache errors when generating dotGraphs
Changes in Bitbake 1.8.0:
- Release 1.7.x as a stable series
Changes in BitBake 1.7.x:
- Major updates of the dependency handling and execution
of tasks. Code from bin/bitbake replaced with runqueue.py
and taskdata.py
- New task execution code supports multithreading with a simplistic
threading algorithm controlled by BB_NUMBER_THREADS
- Change of the SVN Fetcher to keep the checkout around
  courtesy of Paul Sokolovsky (#1367)
- PATH fix to bbimage (#1108)
- Allow debug domains to be specified on the commandline (-l)
- Allow 'interactive' tasks
- Logging message improvements
- Drop now unneeded BUILD_ALL_DEPS variable
- Add support for wildcards to -b option
- Major overhaul of the fetchers making a large amount of code common
including mirroring code
- Fetchers now touch md5 stamps upon access (to show activity)
- Fix -f force option when used without -b (long standing bug)
- Add expand_cache to data_cache.py, caching expanded data (speedup)
- Allow version field in DEPENDS (ignored for now)
- Add abort flag support to the shell
- Make inherit fail if the class doesn't exist (#1478)
- Fix data.emit_env() to expand keynames as well as values
- Add ssh fetcher
- Add perforce fetcher
- Make PREFERRED_PROVIDER_foobar default to foobar if available
- Share the parser's mtime_cache, reducing the number of stat syscalls
- Compile all anonfuncs at once!
*** Anonfuncs must now use common spacing format ***
- Memorise the list of handlers in __BBHANDLERS and tasks in __BBTASKS
This removes 2 million function calls resulting in a 5-10% speedup
- Add manpage
- Update generateDotGraph to use taskData/runQueue improving accuracy
and also adding a task dependency graph
- Fix/standardise on GPLv2 licence
- Move most functionality from bin/bitbake to cooker.py and split into
  separate functions
- CVS fetcher: Added support for non-default port
- Add BBINCLUDELOGS_LINES, the number of lines to read from any logfile
- Drop shebangs from lib/bb scripts
Changes in BitBake 1.7.1:
- Major updates of the dependency handling and execution
of tasks
- Change of the SVN Fetcher to keep the checkout around
  courtesy of Paul Sokolovsky (#1367)
Changes in Bitbake 1.6.0:
- Better msg handling

bitbake/MANIFEST

@@ -1,52 +1,45 @@
AUTHORS
COPYING
ChangeLog
MANIFEST
setup.py
bin/bitdoc
bin/bbimage
bin/bitbake
lib/bb/COW.py
lib/bb/__init__.py
lib/bb/build.py
lib/bb/cache.py
lib/bb/cooker.py
lib/bb/COW.py
lib/bb/data.py
lib/bb/data_smart.py
lib/bb/event.py
lib/bb/fetch/__init__.py
lib/bb/fetch/bzr.py
lib/bb/manifest.py
lib/bb/methodpool.py
lib/bb/msg.py
lib/bb/providers.py
lib/bb/runqueue.py
lib/bb/shell.py
lib/bb/taskdata.py
lib/bb/utils.py
lib/bb/fetch/cvs.py
lib/bb/fetch/git.py
lib/bb/fetch/hg.py
lib/bb/fetch/__init__.py
lib/bb/fetch/local.py
lib/bb/fetch/perforce.py
lib/bb/fetch/ssh.py
lib/bb/fetch/svk.py
lib/bb/fetch/svn.py
lib/bb/fetch/wget.py
lib/bb/manifest.py
lib/bb/methodpool.py
lib/bb/msg.py
lib/bb/parse/__init__.py
lib/bb/parse/parse_py/__init__.py
lib/bb/parse/parse_py/BBHandler.py
lib/bb/parse/parse_py/ConfHandler.py
lib/bb/persist_data.py
lib/bb/providers.py
lib/bb/runqueue.py
lib/bb/shell.py
lib/bb/taskdata.py
lib/bb/utils.py
setup.py
lib/bb/parse/parse_py/__init__.py
doc/COPYING.GPL
doc/COPYING.MIT
doc/bitbake.1
doc/manual/html.css
doc/manual/Makefile
doc/manual/usermanual.xml
contrib/bbdev.sh
contrib/vim/syntax/bitbake.vim
contrib/vim/ftdetect/bitbake.vim
conf/bitbake.conf
classes/base.bbclass

bitbake/bin/bbimage (new executable file, 155 lines)

@@ -0,0 +1,155 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2003 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import sys, os
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))

import bb
from bb import *

__version__ = 1.1
type = "jffs2"
cfg_bb = data.init()
cfg_oespawn = data.init()

bb.msg.set_debug_level(0)

def usage():
    print "Usage: bbimage [options ...]"
    print "Creates an image for a target device from a root filesystem,"
    print "obeying configuration parameters from the BitBake"
    print "configuration files, thereby easing handling of deviceisms."
    print ""
    print " %s\t\t%s" % ("-r [arg], --root [arg]", "root directory (default=${IMAGE_ROOTFS})")
    print " %s\t\t%s" % ("-t [arg], --type [arg]", "image type (jffs2[default], cramfs)")
    print " %s\t\t%s" % ("-n [arg], --name [arg]", "image name (override IMAGE_NAME variable)")
    print " %s\t\t%s" % ("-v, --version", "output version information and exit")
    sys.exit(0)

def version():
    print "BitBake Build Tool Core version %s" % bb.__version__
    print "BBImage version %s" % __version__

def emit_bb(d, base_d = {}):
    for v in d.keys():
        if d[v] != base_d[v]:
            data.emit_var(v, d)

def getopthash(l):
    h = {}
    for (opt, val) in l:
        h[opt] = val
    return h

import getopt
try:
    (opts, args) = getopt.getopt(sys.argv[1:], 'vr:t:e:n:', [ 'version', 'root=', 'type=', 'bbfile=', 'name=' ])
except getopt.GetoptError:
    usage()

# handle opts
opthash = getopthash(opts)

if '--version' in opthash or '-v' in opthash:
    version()
    sys.exit(0)

try:
    cfg_bb = parse.handle(os.path.join('conf', 'bitbake.conf'), cfg_bb)
except IOError:
    fatal("Unable to open bitbake.conf")

# sanity check
if cfg_bb is None:
    fatal("Unable to open/parse %s" % os.path.join('conf', 'bitbake.conf'))
    usage()

rootfs = None
extra_files = []

if '--root' in opthash:
    rootfs = opthash['--root']
if '-r' in opthash:
    rootfs = opthash['-r']

if '--type' in opthash:
    type = opthash['--type']
if '-t' in opthash:
    type = opthash['-t']

if '--bbfile' in opthash:
    extra_files.append(opthash['--bbfile'])
if '-e' in opthash:
    extra_files.append(opthash['-e'])

for f in extra_files:
    try:
        cfg_bb = parse.handle(f, cfg_bb)
    except IOError:
        print "unable to open %s" % f

if not rootfs:
    rootfs = data.getVar('IMAGE_ROOTFS', cfg_bb, 1)

if not rootfs:
    bb.fatal("IMAGE_ROOTFS not defined")

data.setVar('IMAGE_ROOTFS', rootfs, cfg_bb)

from copy import copy, deepcopy
localdata = data.createCopy(cfg_bb)

# temporarily add the image type to OVERRIDES so type-specific variable
# versions (e.g. IMAGE_CMD_jffs2) are applied, then restore OVERRIDES
overrides = data.getVar('OVERRIDES', localdata)
if not overrides:
    bb.fatal("OVERRIDES not defined.")
data.setVar('OVERRIDES', '%s:%s' % (overrides, type), localdata)
data.update_data(localdata)
data.setVar('OVERRIDES', overrides, localdata)

if '-n' in opthash:
    data.setVar('IMAGE_NAME', opthash['-n'], localdata)
if '--name' in opthash:
    data.setVar('IMAGE_NAME', opthash['--name'], localdata)

topdir = data.getVar('TOPDIR', localdata, 1) or os.getcwd()

cmd = data.getVar('IMAGE_CMD', localdata, 1)
if not cmd:
    bb.fatal("IMAGE_CMD not defined")

outdir = data.getVar('DEPLOY_DIR_IMAGE', localdata, 1)
if not outdir:
    bb.fatal('DEPLOY_DIR_IMAGE not defined')
mkdirhier(outdir)

#depends = data.getVar('IMAGE_DEPENDS', localdata, 1) or ""
#if depends:
#    bb.note("Spawning bbmake to satisfy dependencies: %s" % depends)
#    ret = os.system('bbmake %s' % depends)
#    if ret != 0:
#        bb.error("executing bbmake to satisfy dependencies")

bb.note("Executing %s" % cmd)
data.setVar('image_cmd', cmd, localdata)
data.setVarFlag('image_cmd', 'func', 1, localdata)
try:
    bb.build.exec_func('image_cmd', localdata)
except bb.build.FuncFailed:
    sys.exit(1)

#ret = os.system(cmd)
#sys.exit(ret)
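
A typical invocation, following the usage text above (the rootfs path and
image name are illustrative), would be:

  % bbimage -t jffs2 -r tmp/rootfs -n my-image

This appends the image type to OVERRIDES so that a type-specific IMAGE_CMD
(e.g. IMAGE_CMD_jffs2) is selected, then executes it with the output written
to DEPLOY_DIR_IMAGE.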

bitbake/bin/bitbake

@@ -27,7 +27,7 @@ sys.path.insert(0,os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'l
import bb
from bb import cooker
__version__ = "1.8.11"
__version__ = "1.7.4"
#============================================================================#
# BBOptions
@@ -50,7 +50,7 @@ def main():
usage = """%prog [options] [package ...]
Executes the specified task (default is 'build') for a given set of BitBake files.
It expects that BBFILES is defined, which is a space separated list of files to
It expects that BBFILES is defined, which is a space seperated list of files to
be executed. BBFILES does support wildcards.
Default BBFILES are the .bb files in the current directory.""" )
@@ -102,8 +102,6 @@ Default BBFILES are the .bb files in the current directory.""" )
parser.add_option( "-l", "--log-domains", help = """Show debug logging for the specified logging domains""",
action = "append", dest = "debug_domains", default = [] )
parser.add_option( "-P", "--profile", help = "profile the command and print a report",
action = "store_true", dest = "profile", default = False )
options, args = parser.parse_args(sys.argv)
@@ -111,24 +109,15 @@ Default BBFILES are the .bb files in the current directory.""" )
configuration.pkgs_to_build = []
configuration.pkgs_to_build.extend(args[1:])
    cooker = bb.cooker.BBCooker(configuration)

    if configuration.profile:
        try:
            import cProfile as profile
        except:
            import profile

        profile.runctx("cooker.cook()", globals(), locals(), "profile.log")

        import pstats
        p = pstats.Stats('profile.log')
        p.sort_stats('time')
        p.print_stats()
        p.print_callers()
        p.sort_stats('cumulative')
        p.print_stats()
    else:
        cooker.cook()

    bb.cooker.BBCooker().cook(configuration)

if __name__ == "__main__":
    main()
    sys.exit(0)

import profile
profile.run('main()', "profile.log")
import pstats
p = pstats.Stats('profile.log')
p.sort_stats('time')
p.print_stats()
p.print_callers()

bitbake/classes/base.bbclass (new file, 79 lines)

@@ -0,0 +1,79 @@
# Copyright (C) 2003 Chris Larson
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
die() {
	bbfatal "$*"
}

bbnote() {
	echo "NOTE:" "$*"
}

bbwarn() {
	echo "WARNING:" "$*"
}

bbfatal() {
	echo "FATAL:" "$*"
	exit 1
}

bbdebug() {
	test $# -ge 2 || {
		echo "Usage: bbdebug level \"message\""
		exit 1
	}

	test ${@bb.msg.debug_level} -ge $1 && {
		shift
		echo "DEBUG:" $*
	}
}

addtask showdata
do_showdata[nostamp] = "1"
python do_showdata() {
	import sys
	# emit variables and shell functions
	bb.data.emit_env(sys.__stdout__, d, True)
	# emit the metadata which isn't valid shell
	for e in bb.data.keys(d):
		if bb.data.getVarFlag(e, 'python', d):
			sys.__stdout__.write("\npython %s () {\n%s}\n" % (e, bb.data.getVar(e, d, 1)))
}

addtask listtasks
do_listtasks[nostamp] = "1"
python do_listtasks() {
	import sys
	for e in bb.data.keys(d):
		if bb.data.getVarFlag(e, 'task', d):
			sys.__stdout__.write("%s\n" % e)
}

addtask build
do_build[dirs] = "${TOPDIR}"
do_build[nostamp] = "1"
python base_do_build () {
	bb.note("The included, default BB base.bbclass does not define a useful default task.")
	bb.note("Try running the 'listtasks' task against a .bb to see what tasks are defined.")
}

EXPORT_FUNCTIONS do_clean do_mrproper do_build
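
For example, the listtasks task defined above can be run against a single
recipe with bitbake's -b and -c options (the recipe name is hypothetical):

  % bitbake -b mypackage_1.0.bb -c listtasks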

bitbake/conf/bitbake.conf (new file, 58 lines)

@@ -0,0 +1,58 @@
# Copyright (C) 2003 Chris Larson
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
B = "${S}"
CVSDIR = "${DL_DIR}/cvs"
DEPENDS = ""
DEPLOY_DIR = "${TMPDIR}/deploy"
DEPLOY_DIR_IMAGE = "${DEPLOY_DIR}/images"
DL_DIR = "${TMPDIR}/downloads"
FETCHCOMMAND = ""
FETCHCOMMAND_cvs = "/usr/bin/env cvs -d${CVSROOT} co ${CVSCOOPTS} ${CVSMODULE}"
FETCHCOMMAND_svn = "/usr/bin/env svn co ${SVNCOOPTS} ${SVNROOT} ${SVNMODULE}"
FETCHCOMMAND_wget = "/usr/bin/env wget -t 5 --passive-ftp -P ${DL_DIR} ${URI}"
FILESDIR = "${@bb.which(bb.data.getVar('FILESPATH', d, 1), '.')}"
FILESPATH = "${FILE_DIRNAME}/${PF}:${FILE_DIRNAME}/${P}:${FILE_DIRNAME}/${PN}:${FILE_DIRNAME}/files:${FILE_DIRNAME}"
FILE_DIRNAME = "${@os.path.dirname(bb.data.getVar('FILE', d))}"
GITDIR = "${DL_DIR}/git"
IMAGE_CMD = "_NO_DEFINED_IMAGE_TYPES_"
IMAGE_ROOTFS = "${TMPDIR}/rootfs"
MKTEMPCMD = "mktemp -q ${TMPBASE}"
MKTEMPDIRCMD = "mktemp -d -q ${TMPBASE}"
OVERRIDES = "local:${MACHINE}:${TARGET_OS}:${TARGET_ARCH}"
P = "${PN}-${PV}"
PF = "${PN}-${PV}-${PR}"
PN = "${@bb.parse.BBHandler.vars_from_file(bb.data.getVar('FILE',d),d)[0] or 'defaultpkgname'}"
PR = "${@bb.parse.BBHandler.vars_from_file(bb.data.getVar('FILE',d),d)[2] or 'r0'}"
PROVIDES = ""
PV = "${@bb.parse.BBHandler.vars_from_file(bb.data.getVar('FILE',d),d)[1] or '1.0'}"
RESUMECOMMAND = ""
RESUMECOMMAND_wget = "/usr/bin/env wget -c -t 5 --passive-ftp -P ${DL_DIR} ${URI}"
S = "${WORKDIR}/${P}"
SRC_URI = "file://${FILE}"
STAMP = "${TMPDIR}/stamps/${PF}"
SVNDIR = "${DL_DIR}/svn"
T = "${WORKDIR}/temp"
TARGET_ARCH = "${BUILD_ARCH}"
TMPDIR = "${TOPDIR}/tmp"
UPDATECOMMAND = ""
UPDATECOMMAND_cvs = "/usr/bin/env cvs -d${CVSROOT} update ${CVSCOOPTS}"
UPDATECOMMAND_svn = "/usr/bin/env svn update ${SVNCOOPTS}"
WORKDIR = "${TMPDIR}/work/${PF}"
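
IMAGE_CMD above is only a placeholder. Because bbimage temporarily appends
the image type to OVERRIDES, a type-specific definition is selected when an
image is built. A minimal sketch of such a definition (the mkfs.jffs2
arguments are illustrative, not taken from this file):

IMAGE_CMD_jffs2 = "mkfs.jffs2 --root=${IMAGE_ROOTFS} --output=${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.jffs2"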

bitbake/doc/manual/usermanual.xml

@@ -175,12 +175,6 @@ include</literal> directive.</para>
<varname>DEPENDS</varname> = "${@get_depends(bb, d)}"</screen></para>
<para>This would result in <varname>DEPENDS</varname> containing <literal>dependencywithcond</literal>.</para>
</section>
<section>
<title>Variable Flags</title>
<para>Variables can have associated flags which provide a way of tagging extra information onto a variable. Several flags are used internally by bitbake but they can be used externally too if needed. The standard operations mentioned above also work on flags.</para>
<para><screen><varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] = "value"</screen></para>
<para>In this example, <varname>VARIABLE</varname> has a flag, <varname>SOMEFLAG</varname> which is set to <literal>value</literal>.</para>
</section>
<section>
<title>Inheritance</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
@@ -218,42 +212,6 @@ method one can get the name of the triggered event.</para><para>The above event
of the event and the content of the <varname>FILE</varname> variable.</para>
</section>
</section>
<section>
<title>Dependency Handling</title>
<para>Bitbake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifying task dependencies is therefore needed.</para>
<section>
<title>Dependencies internal to the .bb file</title>
<para>Where the dependencies are internal to a given .bb file, the dependencies are handled by the previously detailed addtask directive.</para>
</section>
<section>
<title>DEPENDS</title>
<para>DEPENDS is taken to specify build time dependencies. The 'deptask' flag for tasks is used to signify the task of each DEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_configure[deptask] = "do_populate_staging"</screen></para>
<para>means the do_populate_staging task of each item in DEPENDS must have completed before do_configure can execute.</para>
</section>
<section>
<title>RDEPENDS</title>
<para>RDEPENDS is taken to specify runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each RDEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_package_write[rdeptask] = "do_package"</screen></para>
<para>means the do_package task of each item in RDEPENDS must have completed before do_package_write can execute.</para>
</section>
<section>
<title>Recursive DEPENDS</title>
<para>These are specified with the 'recdeptask' flag and are used to signify the task(s) of each DEPENDS which must have completed before that task can be executed. It applies recursively, so the DEPENDS of each item in the original DEPENDS must also be met, and so on.</para>
</section>
<section>
<title>Recursive RDEPENDS</title>
<para>These are specified with the 'recrdeptask' flag and are used to signify the task(s) of each RDEPENDS which must have completed before that task can be executed. It applies recursively, so the RDEPENDS of each item in the original RDEPENDS must also be met, and so on. It also runs all DEPENDS first.</para>
</section>
<section>
<title>Inter Task</title>
<para>The 'depends' flag for tasks is a more generic form which allows an interdependency on specific tasks rather than specifying the data in DEPENDS or RDEPENDS.</para>
<para><screen>do_patch[depends] = "quilt-native:do_populate_staging"</screen></para>
<para>means the do_populate_staging task of the target quilt-native must have completed before the do_patch can execute.</para>
</section>
</section>
<section>
<title>Parsing</title>
<section>
@@ -413,8 +371,6 @@ options:
Stop processing at the given list of dependencies when
generating dependency graphs. This can help to make
the graph more appealing
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
</screen>
</para>
@@ -445,20 +401,12 @@ options:
<title>Generating dependency graphs</title>
<para>BitBake is able to generate dependency graphs using the dot syntax. These graphs can be converted
to images using the <application>dot</application> application from <ulink url="http://www.graphviz.org">graphviz</ulink>.
Two files will be written into the current working directory, <emphasis>depends.dot</emphasis> containing dependency information at the package level and <emphasis>task-depends.dot</emphasis> containing a breakdown of the dependencies at the task level. To stop depending on common depends one can use the <prompt>-I depend</prompt> to omit these from the graph. This can lead to more readable graphs. E.g. this way <varname>DEPENDS</varname> from inherited classes, e.g. base.bbclass, can be removed from the graph.</para>
Three files will be written into the current working directory, <emphasis>depends.dot</emphasis> containing <varname>DEPENDS</varname> variables, <emphasis>rdepends.dot</emphasis> and <emphasis>alldepends.dot</emphasis> containing both <varname>DEPENDS</varname> and <varname>RDEPENDS</varname>. To stop depending on common depends one can use the <prompt>-I depend</prompt> to omit these from the graph. This can lead to more readable graphs. E.g. this way <varname>DEPENDS</varname> from inherited classes, e.g. base.bbclass, can be removed from the graph.</para>
<screen><prompt>$ </prompt>bitbake -g blah</screen>
<screen><prompt>$ </prompt>bitbake -g -I virtual/whatever -I bloom blah</screen>
</example>
</para>
</section>
<section>
<title>Special variables</title>
<para>Certain variables affect bitbake operation:</para>
<section>
<title><varname>BB_NUMBER_THREADS</varname></title>
<para> The number of threads bitbake should run at once (default: 1).</para>
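<para>For example, to allow up to four tasks to run at once (an illustrative value), one could add the following to a configuration file: <screen>BB_NUMBER_THREADS = "4"</screen></para>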
</section>
</section>
<section>
<title>Metadata</title>
<para>As you may have seen in the usage information, or in the information about .bb files, the BBFILES variable is how the bitbake tool locates its files. This variable is a space separated list of files that are available, and supports wildcards.

bitbake/lib/bb/__init__.py

@@ -21,7 +21,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
__version__ = "1.8.11"
__version__ = "1.7.4"
__all__ = [
@@ -46,6 +46,7 @@ __all__ = [
"pkgcmp",
"dep_parenreduce",
"dep_opconvert",
"digraph",
# fetch
"decodeurl",
@@ -96,23 +97,17 @@ class MalformedUrl(Exception):
#######################################################################
#######################################################################
def plain(*args):
bb.msg.warn(''.join(args))
def debug(lvl, *args):
bb.msg.debug(lvl, None, ''.join(args))
bb.msg.std_debug(lvl, ''.join(args))
def note(*args):
bb.msg.note(1, None, ''.join(args))
def warn(*args):
bb.msg.warn(1, None, ''.join(args))
bb.msg.std_note(''.join(args))
def error(*args):
bb.msg.error(None, ''.join(args))
bb.msg.std_error(''.join(args))
def fatal(*args):
bb.msg.fatal(None, ''.join(args))
bb.msg.std_fatal(''.join(args))
#######################################################################
@@ -154,7 +149,8 @@ def movefile(src,dest,newmtime=None,sstat=None):
if not sstat:
sstat=os.lstat(src)
except Exception, e:
print "movefile: Stating source file failed...", e
print "!!! Stating source file failed... movefile()"
print "!!!",e
return None
destexists=1
@@ -178,11 +174,13 @@ def movefile(src,dest,newmtime=None,sstat=None):
if destexists and not stat.S_ISDIR(dstat[stat.ST_MODE]):
os.unlink(dest)
os.symlink(target,dest)
#os.lchown(dest,sstat[stat.ST_UID],sstat[stat.ST_GID])
# os.lchown(dest,sstat[stat.ST_UID],sstat[stat.ST_GID])
os.unlink(src)
return os.lstat(dest)
except Exception, e:
print "movefile: failed to properly create symlink:", dest, "->", target, e
print "!!! failed to properly create symlink:"
print "!!!",dest,"->",target
print "!!!",e
return None
renamefailed=1
@@ -194,7 +192,8 @@ def movefile(src,dest,newmtime=None,sstat=None):
import errno
if e[0]!=errno.EXDEV:
# Some random error.
print "movefile: Failed to move", src, "to", dest, e
print "!!! Failed to move",src,"to",dest
print "!!!",e
return None
# Invalid cross-device-link 'bind' mounted or actually Cross-Device
@@ -206,13 +205,16 @@ def movefile(src,dest,newmtime=None,sstat=None):
os.rename(dest+"#new",dest)
didcopy=1
except Exception, e:
print 'movefile: copy', src, '->', dest, 'failed.', e
print '!!! copy',src,'->',dest,'failed.'
print "!!!",e
return None
else:
#we don't yet handle special, so we need to fall back to /bin/mv
a=getstatusoutput("/bin/mv -f "+"'"+src+"' '"+dest+"'")
if a[0]!=0:
print "movefile: Failed to move special file:" + src + "' to '" + dest + "'", a
print "!!! Failed to move special file:"
print "!!! '"+src+"' to '"+dest+"'"
print "!!!",a
return None # failure
try:
if didcopy:
@@ -220,7 +222,9 @@ def movefile(src,dest,newmtime=None,sstat=None):
os.chmod(dest, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
os.unlink(src)
except Exception, e:
print "movefile: Failed to chown/chmod/unlink", dest, e
print "!!! Failed to chown/chmod/unlink in movefile()"
print "!!!",dest
print "!!!",e
return None
if newmtime:
@@ -230,75 +234,7 @@ def movefile(src,dest,newmtime=None,sstat=None):
newmtime=sstat[stat.ST_MTIME]
return newmtime
def copyfile(src,dest,newmtime=None,sstat=None):
"""
Copies a file from src to dest, preserving all permissions and
attributes; mtime will be preserved even when moving across
filesystems. Returns true on success and false on failure.
"""
import os, stat, shutil
#print "copyfile("+src+","+dest+","+str(newmtime)+","+str(sstat)+")"
try:
if not sstat:
sstat=os.lstat(src)
except Exception, e:
print "copyfile: Stating source file failed...", e
return False
destexists=1
try:
dstat=os.lstat(dest)
except:
dstat=os.lstat(os.path.dirname(dest))
destexists=0
if destexists:
if stat.S_ISLNK(dstat[stat.ST_MODE]):
try:
os.unlink(dest)
destexists=0
except Exception, e:
pass
if stat.S_ISLNK(sstat[stat.ST_MODE]):
try:
target=os.readlink(src)
if destexists and not stat.S_ISDIR(dstat[stat.ST_MODE]):
os.unlink(dest)
os.symlink(target,dest)
#os.lchown(dest,sstat[stat.ST_UID],sstat[stat.ST_GID])
return os.lstat(dest)
except Exception, e:
print "copyfile: failed to properly create symlink:", dest, "->", target, e
return False
if stat.S_ISREG(sstat[stat.ST_MODE]):
try: # For safety copy then move it over.
shutil.copyfile(src,dest+"#new")
os.rename(dest+"#new",dest)
except Exception, e:
print 'copyfile: copy', src, '->', dest, 'failed.', e
return False
else:
#we don't yet handle special, so we need to fall back to /bin/mv
a=getstatusoutput("/bin/cp -f "+"'"+src+"' '"+dest+"'")
if a[0]!=0:
print "copyfile: Failed to copy special file:" + src + "' to '" + dest + "'", a
return False # failure
try:
os.lchown(dest,sstat[stat.ST_UID],sstat[stat.ST_GID])
os.chmod(dest, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
except Exception, e:
print "copyfile: Failed to chown/chmod/unlink", dest, e
return False
if newmtime:
os.utime(dest,(newmtime,newmtime))
else:
os.utime(dest, (sstat[stat.ST_ATIME], sstat[stat.ST_MTIME]))
newmtime=sstat[stat.ST_MTIME]
return newmtime
#######################################################################
#######################################################################
@@ -341,11 +277,10 @@ def decodeurl(url):
raise MalformedUrl(url)
user = m.group('user')
parm = m.group('parm')
locidx = location.find('/')
if locidx != -1:
host = location[:locidx]
path = location[locidx:]
m = re.compile('(?P<host>[^/;]+)(?P<path>/[^;]+)').match(location)
if m:
host = m.group('host')
path = m.group('path')
else:
host = ""
path = location
@@ -410,20 +345,14 @@ def encodeurl(decoded):
#######################################################################
def which(path, item, direction = 0):
"""
Locate a file in a PATH
"""
paths = (path or "").split(':')
if direction != 0:
paths.reverse()
"""Useful function for locating a file in a PATH"""
found = ""
for p in (path or "").split(':'):
next = os.path.join(p, item)
if os.path.exists(next):
return next
return ""
if os.path.exists(os.path.join(p, item)):
found = os.path.join(p, item)
if direction == 0:
break
return found
#######################################################################
@@ -1127,6 +1056,174 @@ def dep_opconvert(mysplit, myuse):
mypos += 1
return newsplit
class digraph:
"""beautiful directed graph object"""
def __init__(self):
self.dict={}
#okeys = keys, in order they were added (to optimize firstzero() ordering)
self.okeys=[]
self.__callback_cache=[]
def __str__(self):
str = ""
for key in self.okeys:
str += "%s:\t%s\n" % (key, self.dict[key][1])
return str
def addnode(self,mykey,myparent):
if not mykey in self.dict:
self.okeys.append(mykey)
if myparent==None:
self.dict[mykey]=[0,[]]
else:
self.dict[mykey]=[0,[myparent]]
self.dict[myparent][0]=self.dict[myparent][0]+1
return
if myparent and (not myparent in self.dict[mykey][1]):
self.dict[mykey][1].append(myparent)
self.dict[myparent][0]=self.dict[myparent][0]+1
def delnode(self,mykey, ref = 1):
"""Delete a node
If ref is 1, remove references to this node from other nodes.
If ref is 2, remove nodes that reference this node."""
if not mykey in self.dict:
return
for x in self.dict[mykey][1]:
self.dict[x][0]=self.dict[x][0]-1
del self.dict[mykey]
while 1:
try:
self.okeys.remove(mykey)
except ValueError:
break
if ref:
__kill = []
for k in self.okeys:
if mykey in self.dict[k][1]:
if ref == 1 or ref == 2:
self.dict[k][1].remove(mykey)
if ref == 2:
__kill.append(k)
for l in __kill:
self.delnode(l, ref)
def allnodes(self):
"returns all nodes in the dictionary"
return self.dict.keys()
def firstzero(self):
"returns first node with zero references, or NULL if no such node exists"
for x in self.okeys:
if self.dict[x][0]==0:
return x
return None
def firstnonzero(self):
"returns first node with nonzero references, or NULL if no such node exists"
for x in self.okeys:
if self.dict[x][0]!=0:
return x
return None
def allzeros(self):
"returns all nodes with zero references, or NULL if no such node exists"
zerolist = []
for x in self.dict.keys():
if self.dict[x][0]==0:
zerolist.append(x)
return zerolist
def hasallzeros(self):
"returns 0/1, Are all nodes zeros? 1 : 0"
zerolist = []
for x in self.dict.keys():
if self.dict[x][0]!=0:
return 0
return 1
def empty(self):
if len(self.dict)==0:
return 1
return 0
def hasnode(self,mynode):
return mynode in self.dict
def getparents(self, item):
if not self.hasnode(item):
return []
return self.dict[item][1]
def getchildren(self, item):
if not self.hasnode(item):
return []
children = [i for i in self.okeys if item in self.getparents(i)]
return children
def walkdown(self, item, callback, debug = None, usecache = False):
if not self.hasnode(item):
return 0
if usecache:
if self.__callback_cache.count(item):
if debug:
print "hit cache for item: %s" % item
return 1
parents = self.getparents(item)
children = self.getchildren(item)
for p in parents:
if p in children:
# print "%s is both parent and child of %s" % (p, item)
if usecache:
self.__callback_cache.append(p)
ret = callback(self, p)
if ret == 0:
return 0
continue
if item == p:
print "eek, i'm my own parent!"
return 0
if debug:
print "item: %s, p: %s" % (item, p)
ret = self.walkdown(p, callback, debug, usecache)
if ret == 0:
return 0
if usecache:
self.__callback_cache.append(item)
return callback(self, item)
def walkup(self, item, callback):
if not self.hasnode(item):
return 0
parents = self.getparents(item)
children = self.getchildren(item)
for c in children:
if c in parents:
ret = callback(self, item)
if ret == 0:
return 0
continue
if item == c:
print "eek, i'm my own child!"
return 0
ret = self.walkup(c, callback)
if ret == 0:
return 0
return callback(self, item)
def copy(self):
mygraph=digraph()
for x in self.dict.keys():
mygraph.dict[x]=self.dict[x][:]
mygraph.okeys=self.okeys[:]
return mygraph
if __name__ == "__main__":
import doctest, bb
doctest.testmod(bb)
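
For orientation, a minimal sketch of how the digraph class above is used by the task code later in this diff (a hypothetical example, not part of the change): each node stores a [refcount, parents] pair, and walkdown() fires the callback on a node's dependencies before the node itself.

import bb

g = bb.digraph()
g.addnode("do_configure", None)          # root node, no parent
g.addnode("do_compile", "do_configure")  # do_compile depends on do_configure

def visit(graph, item):
    print "visiting %s" % item           # Python 2, as in the surrounding code
    return 1                             # returning 0 would abort the walk

g.walkdown("do_compile", visit)          # visits do_configure, then do_compile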


@@ -74,22 +74,10 @@ def exec_func(func, d, dirs = None):
if not body:
return
flags = data.getVarFlags(func, d)
for item in ['deps', 'check', 'interactive', 'python', 'cleandirs', 'dirs', 'lockfiles', 'fakeroot']:
if not item in flags:
flags[item] = None
ispython = flags['python']
cleandirs = (data.expand(flags['cleandirs'], d) or "").split()
for cdir in cleandirs:
os.system("rm -rf %s" % cdir)
if dirs:
dirs = data.expand(dirs, d)
else:
dirs = (data.expand(flags['dirs'], d) or "").split()
if not dirs:
dirs = (data.getVarFlag(func, 'dirs', d) or "").split()
for adir in dirs:
adir = data.expand(adir, d)
mkdirhier(adir)
if len(dirs) > 0:
@@ -97,25 +85,19 @@ def exec_func(func, d, dirs = None):
else:
adir = data.getVar('B', d, 1)
adir = data.expand(adir, d)
try:
prevdir = os.getcwd()
except OSError:
prevdir = data.getVar('TOPDIR', d, True)
prevdir = data.expand('${TOPDIR}', d)
if adir and os.access(adir, os.F_OK):
os.chdir(adir)
locks = []
lockfiles = (data.expand(flags['lockfiles'], d) or "").split()
for lock in lockfiles:
locks.append(bb.utils.lockfile(lock))
if flags['python']:
if data.getVarFlag(func, "python", d):
exec_func_python(func, d)
else:
exec_func_shell(func, d, flags)
for lock in locks:
bb.utils.unlockfile(lock)
exec_func_shell(func, d)
if os.path.exists(prevdir):
os.chdir(prevdir)
@@ -124,20 +106,19 @@ def exec_func_python(func, d):
"""Execute a python BB 'function'"""
import re, os
bbfile = bb.data.getVar('FILE', d, 1)
tmp = "def " + func + "():\n%s" % data.getVar(func, d)
tmp += '\n' + func + '()'
comp = utils.better_compile(tmp, func, bbfile)
comp = utils.better_compile(tmp, func, bb.data.getVar('FILE', d, 1) )
prevdir = os.getcwd()
g = {} # globals
g['bb'] = bb
g['os'] = os
g['d'] = d
utils.better_exec(comp, g, tmp, bbfile)
utils.better_exec(comp,g,tmp, bb.data.getVar('FILE',d,1))
if os.path.exists(prevdir):
os.chdir(prevdir)
def exec_func_shell(func, d, flags):
def exec_func_shell(func, d):
"""Execute a shell BB 'function' Returns true if execution was successful.
For this, it creates a bash shell script in the tmp dectory, writes the local
@@ -149,9 +130,9 @@ def exec_func_shell(func, d, flags):
"""
import sys
deps = flags['deps']
check = flags['check']
interact = flags['interactive']
deps = data.getVarFlag(func, 'deps', d)
check = data.getVarFlag(func, 'check', d)
interact = data.getVarFlag(func, 'interactive', d)
if check in globals():
if globals()[check](func, deps):
return
@@ -203,12 +184,11 @@ def exec_func_shell(func, d, flags):
# execute function
prevdir = os.getcwd()
if flags['fakeroot']:
if data.getVarFlag(func, "fakeroot", d):
maybe_fakeroot = "PATH=\"%s\" fakeroot " % bb.data.getVar("PATH", d, 1)
else:
maybe_fakeroot = ''
lang_environment = "LC_ALL=C "
ret = os.system('%s%ssh -e %s' % (lang_environment, maybe_fakeroot, runfile))
ret = os.system('%ssh -e %s' % (maybe_fakeroot, runfile))
try:
os.chdir(prevdir)
except:
@@ -263,30 +243,72 @@ def exec_task(task, d):
a function is that a task exists in the task digraph, and therefore
has dependencies amongst other tasks."""
# Check whether this is a valid task
if not data.getVarFlag(task, 'task', d):
raise EventException("No such task", InvalidTask(task, d))
# check if the task is in the graph..
task_graph = data.getVar('_task_graph', d)
if not task_graph:
task_graph = bb.digraph()
data.setVar('_task_graph', task_graph, d)
task_cache = data.getVar('_task_cache', d)
if not task_cache:
task_cache = []
data.setVar('_task_cache', task_cache, d)
if not task_graph.hasnode(task):
raise EventException("Missing node in task graph", InvalidTask(task, d))
try:
bb.msg.debug(1, bb.msg.domain.Build, "Executing task %s" % task)
old_overrides = data.getVar('OVERRIDES', d, 0)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', 'task-%s:%s' % (task[3:], old_overrides), localdata)
data.update_data(localdata)
data.expandKeys(localdata)
event.fire(TaskStarted(task, localdata))
exec_func(task, localdata)
event.fire(TaskSucceeded(task, localdata))
except FuncFailed, reason:
bb.msg.note(1, bb.msg.domain.Build, "Task failed: %s" % reason )
failedevent = TaskFailed(task, d)
event.fire(failedevent)
raise EventException("Function failed in task: %s" % reason, failedevent)
# check whether this task needs executing..
if stamp_is_current(task, d):
return 1
# follow digraph path up, then execute our way back down
def execute(graph, item):
if data.getVarFlag(item, 'task', d):
if item in task_cache:
return 1
if task != item:
# deeper than toplevel, exec w/ deps
exec_task(item, d)
return 1
try:
bb.msg.debug(1, bb.msg.domain.Build, "Executing task %s" % item)
old_overrides = data.getVar('OVERRIDES', d, 0)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', 'task_%s:%s' % (item, old_overrides), localdata)
data.update_data(localdata)
event.fire(TaskStarted(item, localdata))
exec_func(item, localdata)
event.fire(TaskSucceeded(item, localdata))
task_cache.append(item)
data.setVar('_task_cache', task_cache, d)
except FuncFailed, reason:
bb.msg.note(1, bb.msg.domain.Build, "Task failed: %s" % reason )
failedevent = TaskFailed(item, d)
event.fire(failedevent)
raise EventException("Function failed in task: %s" % reason, failedevent)
if data.getVarFlag(task, 'dontrundeps', d):
execute(None, task)
else:
task_graph.walkdown(task, execute)
# make stamp, or cause event and raise exception
if not data.getVarFlag(task, 'nostamp', d) and not data.getVarFlag(task, 'selfstamp', d):
if not data.getVarFlag(task, 'nostamp', d):
make_stamp(task, d)
def extract_stamp_data(d, fn):
"""
Extracts stamp data from d which is either a data dictionary (fn unset)
or a dataCache entry (fn set).
"""
if fn:
return (d.task_queues[fn], d.stamp[fn], d.task_deps[fn])
task_graph = data.getVar('_task_graph', d)
if not task_graph:
task_graph = bb.digraph()
data.setVar('_task_graph', task_graph, d)
return (task_graph, data.getVar('STAMP', d, 1), None)
def extract_stamp(d, fn):
"""
Extracts stamp format which is either a data dictionary (fn unset)
@@ -296,6 +318,49 @@ def extract_stamp(d, fn):
return d.stamp[fn]
return data.getVar('STAMP', d, 1)
def stamp_is_current(task, d, file_name = None, checkdeps = 1):
"""
Check status of a given task's stamp.
Returns 0 if it is not current and needs updating.
(d can be a data dict or dataCache)
"""
(task_graph, stampfn, taskdep) = extract_stamp_data(d, file_name)
if not stampfn:
return 0
stampfile = "%s.%s" % (stampfn, task)
if not os.access(stampfile, os.F_OK):
return 0
if checkdeps == 0:
return 1
import stat
tasktime = os.stat(stampfile)[stat.ST_MTIME]
_deps = []
def checkStamp(graph, task):
# check for existence
if file_name:
if 'nostamp' in taskdep and task in taskdep['nostamp']:
return 1
else:
if data.getVarFlag(task, 'nostamp', d):
return 1
if not stamp_is_current(task, d, file_name, 0 ):
return 0
depfile = "%s.%s" % (stampfn, task)
deptime = os.stat(depfile)[stat.ST_MTIME]
if deptime > tasktime:
return 0
return 1
return task_graph.walkdown(task, checkStamp)
def stamp_internal(task, d, file_name):
"""
Internal stamp helper function
@@ -331,40 +396,33 @@ def del_stamp(task, d, file_name = None):
"""
stamp_internal(task, d, file_name)
def add_tasks(tasklist, d):
def add_task(task, deps, d):
task_graph = data.getVar('_task_graph', d)
if not task_graph:
task_graph = bb.digraph()
data.setVarFlag(task, 'task', 1, d)
task_graph.addnode(task, None)
for dep in deps:
if not task_graph.hasnode(dep):
task_graph.addnode(dep, None)
task_graph.addnode(task, dep)
# don't assume holding a reference
data.setVar('_task_graph', task_graph, d)
task_deps = data.getVar('_task_deps', d)
if not task_deps:
task_deps = {}
if not 'tasks' in task_deps:
task_deps['tasks'] = []
if not 'parents' in task_deps:
task_deps['parents'] = {}
for task in tasklist:
task = data.expand(task, d)
data.setVarFlag(task, 'task', 1, d)
if not task in task_deps['tasks']:
task_deps['tasks'].append(task)
flags = data.getVarFlags(task, d)
def getTask(name):
def getTask(name):
deptask = data.getVarFlag(task, name, d)
if deptask:
if not name in task_deps:
task_deps[name] = {}
if name in flags:
deptask = data.expand(flags[name], d)
task_deps[name][task] = deptask
getTask('depends')
getTask('deptask')
getTask('rdeptask')
getTask('recrdeptask')
getTask('nostamp')
task_deps['parents'][task] = []
for dep in flags['deps']:
dep = data.expand(dep, d)
task_deps['parents'][task].append(dep)
task_deps[name][task] = deptask
getTask('deptask')
getTask('rdeptask')
getTask('recrdeptask')
getTask('nostamp')
# don't assume holding a reference
data.setVar('_task_deps', task_deps, d)
def remove_task(task, kill, d):
@@ -372,5 +430,22 @@ def remove_task(task, kill, d):
If kill is 1, also remove tasks that depend on this task."""
data.delVarFlag(task, 'task', d)
task_graph = data.getVar('_task_graph', d)
if not task_graph:
task_graph = bb.digraph()
if not task_graph.hasnode(task):
return
data.delVarFlag(task, 'task', d)
ref = 1
if kill == 1:
ref = 2
task_graph.delnode(task, ref)
data.setVar('_task_graph', task_graph, d)
def task_exists(task, d):
task_graph = data.getVar('_task_graph', d)
if not task_graph:
task_graph = bb.digraph()
data.setVar('_task_graph', task_graph, d)
return task_graph.hasnode(task)
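
As a rough illustration of the stamp handling above (a sketch under assumed naming, not part of the diff): a task is current when its <STAMP>.<taskname> file exists and is at least as new as the stamps of every task it depends on, which is what stamp_is_current() checks via walkdown().

import os, stat

# Hypothetical helper mirroring the mtime lookup in stamp_is_current() above:
# returns the stamp mtime for a task, or 0 when the stamp is missing.
def stamp_mtime(stampfn, task):
    stampfile = "%s.%s" % (stampfn, task)   # e.g. <STAMP>.do_compile
    if not os.access(stampfile, os.F_OK):
        return 0
    return os.stat(stampfile)[stat.ST_MTIME]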


@@ -39,7 +39,7 @@ except ImportError:
import pickle
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")
__cache_version__ = "129"
__cache_version__ = "125"
class Cache:
"""
@@ -50,11 +50,9 @@ class Cache:
self.cachedir = bb.data.getVar("CACHE", cooker.configuration.data, True)
self.clean = {}
self.checked = {}
self.depends_cache = {}
self.data = None
self.data_fn = None
self.cacheclean = True
if self.cachedir in [None, '']:
self.has_cache = False
@@ -69,33 +67,22 @@ class Cache:
except OSError:
bb.mkdirhier( self.cachedir )
if not self.has_cache:
return
# If any of configuration.data's dependencies are newer than the
# cache there isn't even any point in loading it...
newest_mtime = 0
deps = bb.data.getVar("__depends", cooker.configuration.data, True)
for f,old_mtime in deps:
if old_mtime > newest_mtime:
newest_mtime = old_mtime
if bb.parse.cached_mtime_noerror(self.cachefile) >= newest_mtime:
if self.has_cache and (self.mtime(self.cachefile)):
try:
p = pickle.Unpickler(file(self.cachefile, "rb"))
p = pickle.Unpickler( file(self.cachefile,"rb"))
self.depends_cache, version_data = p.load()
if version_data['CACHE_VER'] != __cache_version__:
raise ValueError, 'Cache Version Mismatch'
if version_data['BITBAKE_VER'] != bb.__version__:
raise ValueError, 'Bitbake Version Mismatch'
except EOFError:
bb.msg.note(1, bb.msg.domain.Cache, "Truncated cache found, rebuilding...")
self.depends_cache = {}
except:
except (ValueError, KeyError):
bb.msg.note(1, bb.msg.domain.Cache, "Invalid cache found, rebuilding...")
self.depends_cache = {}
else:
bb.msg.note(1, bb.msg.domain.Cache, "Out of date cache found, rebuilding...")
if self.depends_cache:
for fn in self.depends_cache.keys():
self.clean[fn] = ""
self.cacheValidUpdate(fn)
def getVar(self, var, fn, exp = 0):
"""
@@ -107,6 +94,7 @@ class Cache:
2. We're learning what data to cache - serve from data
backend but add a copy of the data to the cache.
"""
if fn in self.clean:
return self.depends_cache[fn][var]
@@ -118,7 +106,6 @@ class Cache:
# yet setData hasn't been called to setup the right access. Very bad.
bb.msg.error(bb.msg.domain.Cache, "Parsing error data_fn %s and fn %s don't match" % (self.data_fn, fn))
self.cacheclean = False
result = bb.data.getVar(var, self.data, exp)
self.depends_cache[fn][var] = result
return result
@@ -141,8 +128,6 @@ class Cache:
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
bb.msg.debug(1, bb.msg.domain.Cache, "Parsing %s (full)" % fn)
bb_data, skipped = self.load_bbfile(fn, cfgData)
return bb_data
@@ -154,15 +139,11 @@ class Cache:
to record the variables accessed.
Return the cache status and whether the file was skipped when parsed
"""
if fn not in self.checked:
self.cacheValidUpdate(fn)
if self.cacheValid(fn):
if "SKIPPED" in self.depends_cache[fn]:
return True, True
return True, False
bb.msg.debug(1, bb.msg.domain.Cache, "Parsing %s" % fn)
bb_data, skipped = self.load_bbfile(fn, cfgData)
self.setData(fn, bb_data)
return False, skipped
@@ -188,10 +169,11 @@ class Cache:
if not self.has_cache:
return False
self.checked[fn] = ""
# Pretend we're clean so getVar works
self.clean[fn] = ""
# Check file still exists
if self.mtime(fn) == 0:
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s no longer exists" % fn)
self.remove(fn)
return False
# File isn't in depends_cache
if not fn in self.depends_cache:
@@ -199,36 +181,26 @@ class Cache:
self.remove(fn)
return False
mtime = bb.parse.cached_mtime_noerror(fn)
# Check file still exists
if mtime == 0:
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s no longer exists" % fn)
self.remove(fn)
return False
# Check the file's timestamp
if mtime != self.getVar("CACHETIMESTAMP", fn, True):
if bb.parse.cached_mtime(fn) > self.getVar("CACHETIMESTAMP", fn, True):
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s changed" % fn)
self.remove(fn)
return False
# Check dependencies are still valid
depends = self.getVar("__depends", fn, True)
if depends:
for f,old_mtime in depends:
fmtime = bb.parse.cached_mtime_noerror(f)
# Check if file still exists
if fmtime == 0:
self.remove(fn)
return False
for f,old_mtime in depends:
# Check if file still exists
if self.mtime(f) == 0:
return False
if (fmtime != old_mtime):
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s's dependency %s changed" % (fn, f))
self.remove(fn)
return False
new_mtime = bb.parse.cached_mtime(f)
if (new_mtime > old_mtime):
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s's dependency %s changed" % (fn, f))
self.remove(fn)
return False
#bb.msg.debug(2, bb.msg.domain.Cache, "Depends Cache: %s is clean" % fn)
bb.msg.debug(2, bb.msg.domain.Cache, "Depends Cache: %s is clean" % fn)
if not fn in self.clean:
self.clean[fn] = ""
@@ -263,10 +235,6 @@ class Cache:
if not self.has_cache:
return
if self.cacheclean:
bb.msg.note(1, bb.msg.domain.Cache, "Cache is clean, not saving.")
return
version_data = {}
version_data['CACHE_VER'] = __cache_version__
version_data['BITBAKE_VER'] = bb.__version__
@@ -283,15 +251,16 @@ class Cache:
"""
pn = self.getVar('PN', file_name, True)
pe = self.getVar('PE', file_name, True) or "0"
pv = self.getVar('PV', file_name, True)
pr = self.getVar('PR', file_name, True)
dp = int(self.getVar('DEFAULT_PREFERENCE', file_name, True) or "0")
provides = Set([pn] + (self.getVar("PROVIDES", file_name, True) or "").split())
depends = bb.utils.explode_deps(self.getVar("DEPENDS", file_name, True) or "")
packages = (self.getVar('PACKAGES', file_name, True) or "").split()
packages_dynamic = (self.getVar('PACKAGES_DYNAMIC', file_name, True) or "").split()
rprovides = (self.getVar("RPROVIDES", file_name, True) or "").split()
cacheData.task_queues[file_name] = self.getVar("_task_graph", file_name, True)
cacheData.task_deps[file_name] = self.getVar("_task_deps", file_name, True)
# build PackageName to FileName lookup table
@@ -303,34 +272,25 @@ class Cache:
# build FileName to PackageName lookup table
cacheData.pkg_fn[file_name] = pn
cacheData.pkg_pepvpr[file_name] = (pe,pv,pr)
cacheData.pkg_pvpr[file_name] = (pv,pr)
cacheData.pkg_dp[file_name] = dp
provides = [pn]
for provide in (self.getVar("PROVIDES", file_name, True) or "").split():
if provide not in provides:
provides.append(provide)
# Build forward and reverse provider hashes
# Forward: virtual -> [filenames]
# Reverse: PN -> [virtuals]
if pn not in cacheData.pn_provides:
cacheData.pn_provides[pn] = []
cacheData.pn_provides[pn] = Set()
cacheData.pn_provides[pn] |= provides
cacheData.fn_provides[file_name] = provides
for provide in provides:
if provide not in cacheData.providers:
cacheData.providers[provide] = []
cacheData.providers[provide].append(file_name)
if not provide in cacheData.pn_provides[pn]:
cacheData.pn_provides[pn].append(provide)
cacheData.deps[file_name] = []
cacheData.deps[file_name] = Set()
for dep in depends:
if not dep in cacheData.deps[file_name]:
cacheData.deps[file_name].append(dep)
if not dep in cacheData.all_depends:
cacheData.all_depends.append(dep)
cacheData.all_depends.add(dep)
cacheData.deps[file_name].add(dep)
# Build reverse hash for PACKAGES, so runtime dependencies
# can be resolved (RDEPENDS, RRECOMMENDS etc.)
@@ -352,21 +312,26 @@ class Cache:
# Build hash of runtime depends and recommends
def add_dep(deplist, deps):
for dep in deps:
if not dep in deplist:
deplist[dep] = ""
if not file_name in cacheData.rundeps:
cacheData.rundeps[file_name] = {}
if not file_name in cacheData.runrecs:
cacheData.runrecs[file_name] = {}
rdepends = self.getVar('RDEPENDS', file_name, True) or ""
rrecommends = self.getVar('RRECOMMENDS', file_name, True) or ""
for package in packages + [pn]:
if not package in cacheData.rundeps[file_name]:
cacheData.rundeps[file_name][package] = []
cacheData.rundeps[file_name][package] = {}
if not package in cacheData.runrecs[file_name]:
cacheData.runrecs[file_name][package] = []
cacheData.runrecs[file_name][package] = {}
cacheData.rundeps[file_name][package] = rdepends + " " + (self.getVar("RDEPENDS_%s" % package, file_name, True) or "")
cacheData.runrecs[file_name][package] = rrecommends + " " + (self.getVar("RRECOMMENDS_%s" % package, file_name, True) or "")
add_dep(cacheData.rundeps[file_name][package], bb.utils.explode_deps(self.getVar('RDEPENDS', file_name, True) or ""))
add_dep(cacheData.runrecs[file_name][package], bb.utils.explode_deps(self.getVar('RRECOMMENDS', file_name, True) or ""))
add_dep(cacheData.rundeps[file_name][package], bb.utils.explode_deps(self.getVar("RDEPENDS_%s" % package, file_name, True) or ""))
add_dep(cacheData.runrecs[file_name][package], bb.utils.explode_deps(self.getVar("RRECOMMENDS_%s" % package, file_name, True) or ""))
# Collect files we may need for possible world-dep
# calculations
@@ -387,7 +352,7 @@ class Cache:
data.setVar('TMPDIR', data.getVar('TMPDIR', config, 1) or "", config)
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
if bb.parse.cached_mtime_noerror(bbfile_loc):
if self.mtime(bbfile_loc):
os.chdir(bbfile_loc)
bb_data = data.init_db(config)
try:
@@ -442,11 +407,10 @@ class CacheData:
self.possible_world = []
self.pkg_pn = {}
self.pkg_fn = {}
self.pkg_pepvpr = {}
self.pkg_pvpr = {}
self.pkg_dp = {}
self.pn_provides = {}
self.fn_provides = {}
self.all_depends = []
self.all_depends = Set()
self.deps = {}
self.rundeps = {}
self.runrecs = {}
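
For reference, a minimal sketch of the on-disk layout implied by the cache load code above: one pickle holding the (depends_cache, version_data) pair, with both version fields checked before the cache is trusted.

import pickle

# Sketch only; mirrors the Unpickler and ValueError handling shown above.
def load_depends_cache(cachefile, cache_ver, bitbake_ver):
    p = pickle.Unpickler(file(cachefile, "rb"))    # Python 2 file()
    depends_cache, version_data = p.load()
    if version_data['CACHE_VER'] != cache_ver:
        raise ValueError('Cache Version Mismatch')
    if version_data['BITBAKE_VER'] != bitbake_ver:
        raise ValueError('Bitbake Version Mismatch')
    return depends_cache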


@@ -26,10 +26,33 @@ import sys, os, getopt, glob, copy, os.path, re, time
import bb
from bb import utils, data, parse, event, cache, providers, taskdata, runqueue
from sets import Set
import itertools, sre_constants
import itertools
parsespin = itertools.cycle( r'|/-\\' )
#============================================================================#
# BBStatistics
#============================================================================#
class BBStatistics:
"""
Manage build statistics for one run
"""
def __init__(self ):
self.attempt = 0
self.success = 0
self.fail = 0
self.deps = 0
def show( self ):
print "Build statistics:"
print " Attempted builds: %d" % self.attempt
if self.fail:
print " Failed builds: %d" % self.fail
if self.deps:
print " Dependencies not satisfied: %d" % self.deps
if self.fail or self.deps: return 1
else: return 0
#============================================================================#
# BBCooker
#============================================================================#
@@ -38,101 +61,63 @@ class BBCooker:
Manages one bitbake build run
"""
def __init__(self, configuration):
Statistics = BBStatistics # make it visible from the shell
def __init__( self ):
self.build_cache_fail = []
self.build_cache = []
self.stats = BBStatistics()
self.status = None
self.cache = None
self.bb_cache = None
self.configuration = configuration
if self.configuration.verbose:
bb.msg.set_verbose(True)
if self.configuration.debug:
bb.msg.set_debug_level(self.configuration.debug)
else:
bb.msg.set_debug_level(0)
if self.configuration.debug_domains:
bb.msg.set_debug_domains(self.configuration.debug_domains)
self.configuration.data = bb.data.init()
for f in self.configuration.file:
self.parseConfigurationFile( f )
self.parseConfigurationFile( os.path.join( "conf", "bitbake.conf" ) )
if not self.configuration.cmd:
self.configuration.cmd = bb.data.getVar("BB_DEFAULT_TASK", self.configuration.data) or "build"
bbpkgs = bb.data.getVar('BBPKGS', self.configuration.data, True)
if bbpkgs:
self.configuration.pkgs_to_build.extend(bbpkgs.split())
#
# Special updated configuration we use for firing events
#
self.configuration.event_data = bb.data.createCopy(self.configuration.data)
bb.data.update_data(self.configuration.event_data)
#
# TOSTOP must not be set or our children will hang when they output
#
fd = sys.stdout.fileno()
if os.isatty(fd):
import termios
tcattr = termios.tcgetattr(fd)
if tcattr[3] & termios.TOSTOP:
bb.msg.note(1, bb.msg.domain.Build, "The terminal had the TOSTOP bit set, clearing...")
tcattr[3] = tcattr[3] & ~termios.TOSTOP
termios.tcsetattr(fd, termios.TCSANOW, tcattr)
# Change nice level if we're asked to
nice = bb.data.getVar("BB_NICE_LEVEL", self.configuration.data, True)
if nice:
curnice = os.nice(0)
nice = int(nice) - curnice
bb.msg.note(2, bb.msg.domain.Build, "Renice to %s " % os.nice(nice))
def tryBuildPackage(self, fn, item, task, the_data):
def tryBuildPackage(self, fn, item, task, the_data, build_depends):
"""
Build one task of a package, optionally build following task depends
"""
bb.event.fire(bb.event.PkgStarted(item, the_data))
try:
self.stats.attempt += 1
if not build_depends:
bb.data.setVarFlag('do_%s' % task, 'dontrundeps', 1, the_data)
if not self.configuration.dry_run:
bb.build.exec_task('do_%s' % task, the_data)
bb.event.fire(bb.event.PkgSucceeded(item, the_data))
self.build_cache.append(fn)
return True
except bb.build.FuncFailed:
self.stats.fail += 1
bb.msg.error(bb.msg.domain.Build, "task stack execution failed")
bb.event.fire(bb.event.PkgFailed(item, the_data))
self.build_cache_fail.append(fn)
raise
except bb.build.EventException, e:
self.stats.fail += 1
event = e.args[1]
bb.msg.error(bb.msg.domain.Build, "%s event exception, aborting" % bb.event.getName(event))
bb.event.fire(bb.event.PkgFailed(item, the_data))
self.build_cache_fail.append(fn)
raise
def tryBuild(self, fn):
def tryBuild( self, fn, build_depends):
"""
Build a provider and its dependencies.
build_depends is a list of previous build dependencies (not runtime)
If build_depends is empty, we're dealing with a runtime depends
"""
the_data = self.bb_cache.loadDataFull(fn, self.configuration.data)
item = self.status.pkg_fn[fn]
#if bb.build.stamp_is_current('do_%s' % self.configuration.cmd, the_data):
# return True
if bb.build.stamp_is_current('do_%s' % self.configuration.cmd, the_data):
self.build_cache.append(fn)
return True
return self.tryBuildPackage(fn, item, self.configuration.cmd, the_data)
return self.tryBuildPackage(fn, item, self.configuration.cmd, the_data, build_depends)
def showVersions(self):
def showVersions( self ):
pkg_pn = self.status.pkg_pn
preferred_versions = {}
latest_versions = {}
@@ -151,78 +136,36 @@ class BBCooker:
latest = latest_versions[p]
if pref != latest:
prefstr = pref[0][0] + ":" + pref[0][1] + '-' + pref[0][2]
prefstr = pref[0][0] + "-" + pref[0][1]
else:
prefstr = ""
print "%-30s %20s %20s" % (p, latest[0][0] + ":" + latest[0][1] + "-" + latest[0][2],
print "%-30s %20s %20s" % (p, latest[0][0] + "-" + latest[0][1],
prefstr)
def showEnvironment(self , buildfile = None, pkgs_to_build = []):
"""
Show the outer or per-package environment
"""
fn = None
envdata = None
if 'world' in pkgs_to_build:
print "'world' is not a valid target for --environment."
sys.exit(1)
if len(pkgs_to_build) > 1:
print "Only one target can be used with the --environment option."
sys.exit(1)
if buildfile:
if len(pkgs_to_build) > 0:
print "No target should be used with the --environment and --buildfile options."
sys.exit(1)
def showEnvironment( self ):
"""Show the outer or per-package environment"""
if self.configuration.buildfile:
self.cb = None
self.bb_cache = bb.cache.init(self)
fn = self.matchFile(buildfile)
if not fn:
sys.exit(1)
elif len(pkgs_to_build) == 1:
self.updateCache()
localdata = data.createCopy(self.configuration.data)
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
taskdata = bb.taskdata.TaskData(self.configuration.abort)
try:
taskdata.add_provider(localdata, self.status, pkgs_to_build[0])
taskdata.add_unresolved(localdata, self.status)
except bb.providers.NoProvider:
sys.exit(1)
targetid = taskdata.getbuild_id(pkgs_to_build[0])
fnid = taskdata.build_targets[targetid][0]
fn = taskdata.fn_index[fnid]
else:
envdata = self.configuration.data
if fn:
try:
envdata = self.bb_cache.loadDataFull(fn, self.configuration.data)
self.configuration.data = self.bb_cache.loadDataFull(self.configuration.buildfile, self.configuration.data)
except IOError, e:
bb.msg.fatal(bb.msg.domain.Parsing, "Unable to read %s: %s" % (fn, e))
bb.msg.fatal(bb.msg.domain.Parsing, "Unable to read %s: %s" % ( self.configuration.buildfile, e ))
except Exception, e:
bb.msg.fatal(bb.msg.domain.Parsing, "%s" % e)
# emit variables and shell functions
try:
data.update_data( envdata )
data.emit_env(sys.__stdout__, envdata, True)
data.update_data( self.configuration.data )
data.emit_env(sys.__stdout__, self.configuration.data, True)
except Exception, e:
bb.msg.fatal(bb.msg.domain.Parsing, "%s" % e)
# emit the metadata which isn't valid shell
data.expandKeys( envdata )
for e in envdata.keys():
if data.getVarFlag( e, 'python', envdata ):
sys.__stdout__.write("\npython %s () {\n%s}\n" % (e, data.getVar(e, envdata, 1)))
data.expandKeys( self.configuration.data )
for e in self.configuration.data.keys():
if data.getVarFlag( e, 'python', self.configuration.data ):
sys.__stdout__.write("\npython %s () {\n%s}\n" % (e, data.getVar(e, self.configuration.data, 1)))
def generateDotGraph( self, pkgs_to_build, ignore_deps ):
"""
@@ -249,21 +192,22 @@ class BBCooker:
taskdata.add_unresolved(localdata, self.status)
except bb.providers.NoProvider:
sys.exit(1)
rq = bb.runqueue.RunQueue(self, self.configuration.data, self.status, taskdata, runlist)
rq.prepare_runqueue()
rq = bb.runqueue.RunQueue()
rq.prepare_runqueue(self, self.configuration.data, self.status, taskdata, runlist)
seen_fnids = []
depends_file = file('depends.dot', 'w' )
tdepends_file = file('task-depends.dot', 'w' )
print >> depends_file, "digraph depends {"
print >> tdepends_file, "digraph depends {"
for task in range(len(rq.runq_fnid)):
rq.prio_map.reverse()
for task1 in range(len(rq.runq_fnid)):
task = rq.prio_map[task1]
taskname = rq.runq_task[task]
fnid = rq.runq_fnid[task]
fn = taskdata.fn_index[fnid]
pn = self.status.pkg_fn[fn]
version = "%s:%s-%s" % self.status.pkg_pepvpr[fn]
version = self.bb_cache.getVar('PV', fn, True ) + '-' + self.bb_cache.getVar('PR', fn, True)
print >> tdepends_file, '"%s.%s" [label="%s %s\\n%s\\n%s"]' % (pn, taskname, pn, taskname, version, fn)
for dep in rq.runq_depends[task]:
depfn = taskdata.fn_index[rq.runq_fnid[dep]]
@@ -272,7 +216,7 @@ class BBCooker:
if fnid not in seen_fnids:
seen_fnids.append(fnid)
packages = []
print >> depends_file, '"%s" [label="%s %s\\n%s"]' % (pn, pn, version, fn)
print >> depends_file, '"%s" [label="%s %s\\n%s"]' % (pn, pn, version, fn)
for depend in self.status.deps[fn]:
print >> depends_file, '"%s" -> "%s"' % (pn, depend)
rdepends = self.status.rundeps[fn]
@@ -318,11 +262,7 @@ class BBCooker:
# Handle PREFERRED_PROVIDERS
for p in (bb.data.getVar('PREFERRED_PROVIDERS', localdata, 1) or "").split():
try:
(providee, provider) = p.split(':')
except:
bb.msg.error(bb.msg.domain.Provider, "Malformed option in PREFERRED_PROVIDERS variable: %s" % p)
continue
(providee, provider) = p.split(':')
if providee in self.status.preferred and self.status.preferred[providee] != provider:
bb.msg.error(bb.msg.domain.Provider, "conflicting preferences for %s: both %s and %s specified" % (providee, provider, self.status.preferred[providee]))
self.status.preferred[providee] = provider
@@ -379,6 +319,8 @@ class BBCooker:
except ImportError, details:
bb.msg.fatal(bb.msg.domain.Parsing, "Sorry, shell not available (%s)" % details )
else:
bb.data.update_data( self.configuration.data )
bb.data.expandKeys( self.configuration.data )
shell.start( self )
sys.exit( 0 )
@@ -386,19 +328,19 @@ class BBCooker:
try:
self.configuration.data = bb.parse.handle( afile, self.configuration.data )
# Handle any INHERITs and inherit the base class
inherits = ["base"] + (bb.data.getVar('INHERIT', self.configuration.data, True ) or "").split()
# Add the handlers we inherited by INHERIT
# we need to do this manually as it is not guaranteed
# we will pick up these classes... as we only INHERIT
# on .inc and .bb files but not on .conf
data = bb.data.createCopy( self.configuration.data )
inherits = ["base"] + (bb.data.getVar('INHERIT', data, True ) or "").split()
for inherit in inherits:
self.configuration.data = bb.parse.handle(os.path.join('classes', '%s.bbclass' % inherit), self.configuration.data, True )
data = bb.parse.handle( os.path.join('classes', '%s.bbclass' % inherit ), data, True )
# Normally we only register event handlers at the end of parsing .bb files
# We register any handlers we've found so far here...
for var in data.getVar('__BBHANDLERS', self.configuration.data) or []:
bb.event.register(var,bb.data.getVar(var, self.configuration.data))
bb.fetch.fetcher_init(self.configuration.data)
bb.event.fire(bb.event.ConfigParsed(self.configuration.data))
# FIXME: This assumes that we included at least one .inc file
for var in bb.data.keys(data):
if bb.data.getVarFlag(var, 'handler', data):
bb.event.register(var,bb.data.getVar(var, data))
except IOError:
bb.msg.fatal(bb.msg.domain.Parsing, "Unable to open %s" % afile )
@@ -429,147 +371,92 @@ class BBCooker:
except ValueError:
bb.msg.error(bb.msg.domain.Parsing, "invalid value for BBFILE_PRIORITY_%s: \"%s\"" % (c, priority))
def buildSetVars(self):
def cook(self, configuration):
"""
Setup any variables needed before starting a build
We are building stuff here. We do the building
from here. By default we try to execute task
build.
"""
self.configuration = configuration
if self.configuration.verbose:
bb.msg.set_verbose(True)
if self.configuration.debug:
bb.msg.set_debug_level(self.configuration.debug)
else:
bb.msg.set_debug_level(0)
if self.configuration.debug_domains:
bb.msg.set_debug_domains(self.configuration.debug_domains)
self.configuration.data = bb.data.init()
for f in self.configuration.file:
self.parseConfigurationFile( f )
self.parseConfigurationFile( os.path.join( "conf", "bitbake.conf" ) )
if not self.configuration.cmd:
self.configuration.cmd = bb.data.getVar("BB_DEFAULT_TASK", self.configuration.data) or "build"
#
# Special updated configuration we use for firing events
#
self.configuration.event_data = bb.data.createCopy(self.configuration.data)
bb.data.update_data(self.configuration.event_data)
if self.configuration.show_environment:
self.showEnvironment()
sys.exit( 0 )
# inject custom variables
if not bb.data.getVar("BUILDNAME", self.configuration.data):
bb.data.setVar("BUILDNAME", os.popen('date +%Y%m%d%H%M').readline().strip(), self.configuration.data)
bb.data.setVar("BUILDSTART", time.strftime('%m/%d/%Y %H:%M:%S',time.gmtime()),self.configuration.data)
def matchFile(self, buildfile):
"""
Convert the fragment buildfile into a real file
Error if there are too many matches
"""
bf = os.path.abspath(buildfile)
try:
os.stat(bf)
return bf
except OSError:
(filelist, masked) = self.collect_bbfiles()
regexp = re.compile(buildfile)
matches = []
for f in filelist:
if regexp.search(f) and os.path.isfile(f):
bf = f
matches.append(f)
if len(matches) != 1:
bb.msg.error(bb.msg.domain.Parsing, "Unable to match %s (%s matches found):" % (buildfile, len(matches)))
for f in matches:
bb.msg.error(bb.msg.domain.Parsing, " %s" % f)
return False
return matches[0]
def buildFile(self, buildfile):
"""
Build the file matching regexp buildfile
"""
# Make sure our target is a fully qualified filename
fn = self.matchFile(buildfile)
if not fn:
return False
# Load data into the cache for fn
self.bb_cache = bb.cache.init(self)
self.bb_cache.loadData(fn, self.configuration.data)
# Parse the loaded cache data
self.status = bb.cache.CacheData()
self.bb_cache.handle_data(fn, self.status)
# Tweak some variables
item = self.bb_cache.getVar('PN', fn, True)
self.status.ignored_dependencies = Set()
self.status.bbfile_priority[fn] = 1
# Remove external dependencies
self.status.task_deps[fn]['depends'] = {}
self.status.deps[fn] = []
self.status.rundeps[fn] = []
self.status.runrecs[fn] = []
# Remove stamp for target if force mode active
if self.configuration.force:
bb.msg.note(2, bb.msg.domain.RunQueue, "Remove stamp %s, %s" % (self.configuration.cmd, fn))
bb.build.del_stamp('do_%s' % self.configuration.cmd, self.configuration.data)
# Setup taskdata structure
taskdata = bb.taskdata.TaskData(self.configuration.abort)
taskdata.add_provider(self.configuration.data, self.status, item)
buildname = bb.data.getVar("BUILDNAME", self.configuration.data)
bb.event.fire(bb.event.BuildStarted(buildname, [item], self.configuration.event_data))
# Execute the runqueue
runlist = [[item, "do_%s" % self.configuration.cmd]]
rq = bb.runqueue.RunQueue(self, self.configuration.data, self.status, taskdata, runlist)
rq.prepare_runqueue()
try:
failures = rq.execute_runqueue()
except runqueue.TaskFailure, fnids:
failures = 0
for fnid in fnids:
bb.msg.error(bb.msg.domain.Build, "'%s' failed" % taskdata.fn_index[fnid])
failures = failures + 1
bb.event.fire(bb.event.BuildCompleted(buildname, [item], self.configuration.event_data, failures))
return False
bb.event.fire(bb.event.BuildCompleted(buildname, [item], self.configuration.event_data, failures))
return True
if self.configuration.interactive:
self.interactiveMode()
def buildTargets(self, targets):
"""
Attempt to build the targets specified
"""
if self.configuration.buildfile is not None:
bf = os.path.abspath( self.configuration.buildfile )
try:
os.stat(bf)
except OSError:
(filelist, masked) = self.collect_bbfiles()
regexp = re.compile(self.configuration.buildfile)
matches = []
for f in filelist:
if regexp.search(f) and os.path.isfile(f):
bf = f
matches.append(f)
if len(matches) != 1:
bb.msg.error(bb.msg.domain.Parsing, "Unable to match %s (%s matches found):" % (self.configuration.buildfile, len(matches)))
for f in matches:
bb.msg.error(bb.msg.domain.Parsing, " %s" % f)
sys.exit(1)
bf = matches[0]
buildname = bb.data.getVar("BUILDNAME", self.configuration.data)
bb.event.fire(bb.event.BuildStarted(buildname, targets, self.configuration.event_data))
bbfile_data = bb.parse.handle(bf, self.configuration.data)
localdata = data.createCopy(self.configuration.data)
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
# Remove stamp for target if force mode active
if self.configuration.force:
bb.msg.note(2, bb.msg.domain.RunQueue, "Remove stamp %s, %s" % (self.configuration.cmd, bf))
bb.build.del_stamp('do_%s' % self.configuration.cmd, bbfile_data)
taskdata = bb.taskdata.TaskData(self.configuration.abort)
item = bb.data.getVar('PN', bbfile_data, 1)
try:
self.tryBuildPackage(bf, item, self.configuration.cmd, bbfile_data, True)
except bb.build.EventException:
bb.msg.error(bb.msg.domain.Build, "Build of '%s' failed" % item )
runlist = []
try:
for k in targets:
taskdata.add_provider(localdata, self.status, k)
runlist.append([k, "do_%s" % self.configuration.cmd])
taskdata.add_unresolved(localdata, self.status)
except bb.providers.NoProvider:
sys.exit(1)
rq = bb.runqueue.RunQueue(self, self.configuration.data, self.status, taskdata, runlist)
rq.prepare_runqueue()
try:
failures = rq.execute_runqueue()
except runqueue.TaskFailure, fnids:
failures = 0
for fnid in fnids:
bb.msg.error(bb.msg.domain.Build, "'%s' failed" % taskdata.fn_index[fnid])
failures = failures + 1
bb.event.fire(bb.event.BuildCompleted(buildname, targets, self.configuration.event_data, failures))
sys.exit(1)
bb.event.fire(bb.event.BuildCompleted(buildname, targets, self.configuration.event_data, failures))
sys.exit(0)
def updateCache(self):
# Import Psyco if available and not disabled
import platform
if platform.machine() in ['i386', 'i486', 'i586', 'i686']:
if not self.configuration.disable_psyco:
try:
import psyco
except ImportError:
bb.msg.note(1, bb.msg.domain.Collection, "Psyco JIT Compiler (http://psyco.sf.net) not available. Install it to increase performance.")
else:
psyco.bind( self.parse_bbfiles )
else:
bb.msg.note(1, bb.msg.domain.Collection, "You have disabled Psyco. This decreases performance.")
sys.exit( self.stats.show() )
# initialise the parsing status now we know we will need deps
self.status = bb.cache.CacheData()
ignore = bb.data.getVar("ASSUME_PROVIDED", self.configuration.data, 1) or ""
@@ -577,50 +464,41 @@ class BBCooker:
self.handleCollections( bb.data.getVar("BBFILE_COLLECTIONS", self.configuration.data, 1) )
bb.msg.debug(1, bb.msg.domain.Collection, "collecting .bb files")
(filelist, masked) = self.collect_bbfiles()
bb.data.renameVar("__depends", "__base_depends", self.configuration.data)
self.parse_bbfiles(filelist, masked, self.myProgressCallback)
bb.msg.debug(1, bb.msg.domain.Collection, "parsing complete")
self.buildDepgraph()
def cook(self):
"""
We are building stuff here. We do the building
from here. By default we try to execute task
build.
"""
if self.configuration.show_environment:
self.showEnvironment(self.configuration.buildfile, self.configuration.pkgs_to_build)
sys.exit( 0 )
self.buildSetVars()
if self.configuration.interactive:
self.interactiveMode()
if self.configuration.buildfile is not None:
if not self.buildFile(self.configuration.buildfile):
sys.exit(1)
sys.exit(0)
# initialise the parsing status now we know we will need deps
self.updateCache()
if self.configuration.parse_only:
bb.msg.note(1, bb.msg.domain.Collection, "Requested parsing .bb files only. Exiting.")
return 0
pkgs_to_build = self.configuration.pkgs_to_build
if len(pkgs_to_build) == 0 and not self.configuration.show_versions:
bbpkgs = bb.data.getVar('BBPKGS', self.configuration.data, 1)
if bbpkgs:
pkgs_to_build.extend(bbpkgs.split())
if len(pkgs_to_build) == 0 and not self.configuration.show_versions \
and not self.configuration.show_environment:
print "Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help'"
print "for usage information."
sys.exit(0)
# Import Psyco if available and not disabled
if not self.configuration.disable_psyco:
try:
import psyco
except ImportError:
bb.msg.note(1, bb.msg.domain.Collection, "Psyco JIT Compiler (http://psyco.sf.net) not available. Install it to increase performance.")
else:
psyco.bind( self.parse_bbfiles )
else:
bb.msg.note(1, bb.msg.domain.Collection, "You have disabled Psyco. This decreases performance.")
try:
bb.msg.debug(1, bb.msg.domain.Collection, "collecting .bb files")
(filelist, masked) = self.collect_bbfiles()
self.parse_bbfiles(filelist, masked, self.myProgressCallback)
bb.msg.debug(1, bb.msg.domain.Collection, "parsing complete")
print
if self.configuration.parse_only:
bb.msg.note(1, bb.msg.domain.Collection, "Requested parsing .bb files only. Exiting.")
return
self.buildDepgraph()
if self.configuration.show_versions:
self.showVersions()
sys.exit( 0 )
@@ -634,7 +512,34 @@ class BBCooker:
self.generateDotGraph( pkgs_to_build, self.configuration.ignored_dot_deps )
sys.exit( 0 )
return self.buildTargets(pkgs_to_build)
bb.event.fire(bb.event.BuildStarted(buildname, pkgs_to_build, self.configuration.event_data))
localdata = data.createCopy(self.configuration.data)
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
taskdata = bb.taskdata.TaskData(self.configuration.abort)
runlist = []
try:
for k in pkgs_to_build:
taskdata.add_provider(localdata, self.status, k)
runlist.append([k, "do_%s" % self.configuration.cmd])
taskdata.add_unresolved(localdata, self.status)
except bb.providers.NoProvider:
sys.exit(1)
rq = bb.runqueue.RunQueue()
rq.prepare_runqueue(self, self.configuration.data, self.status, taskdata, runlist)
try:
failures = rq.execute_runqueue(self, self.configuration.data, self.status, taskdata, runlist)
except runqueue.TaskFailure, fnids:
for fnid in fnids:
bb.msg.error(bb.msg.domain.Build, "'%s' failed" % taskdata.fn_index[fnid])
sys.exit(1)
bb.event.fire(bb.event.BuildCompleted(buildname, pkgs_to_build, self.configuration.event_data, failures))
sys.exit( self.stats.show() )
except KeyboardInterrupt:
bb.msg.note(1, bb.msg.domain.Collection, "KeyboardInterrupt - Build not completed.")
@@ -651,17 +556,13 @@ class BBCooker:
return bbfiles
def find_bbfiles( self, path ):
"""Find all the .bb files in a directory"""
from os.path import join
found = []
for dir, dirs, files in os.walk(path):
for ignored in ('SCCS', 'CVS', '.svn'):
if ignored in dirs:
dirs.remove(ignored)
found += [join(dir,f) for f in files if f.endswith('.bb')]
return found
"""Find all the .bb files in a directory (uses find)"""
findcmd = 'find ' + path + ' -name *.bb | grep -v SCCS/'
try:
finddata = os.popen(findcmd)
except OSError:
return []
return finddata.readlines()
def collect_bbfiles( self ):
"""Collect all available .bb build files"""
@@ -708,11 +609,11 @@ class BBCooker:
return (finalfiles, masked)
def parse_bbfiles(self, filelist, masked, progressCallback = None):
parsed, cached, skipped, error = 0, 0, 0, 0
parsed, cached, skipped = 0, 0, 0
for i in xrange( len( filelist ) ):
f = filelist[i]
#bb.msg.debug(1, bb.msg.domain.Collection, "parsing %s" % f)
bb.msg.debug(1, bb.msg.domain.Collection, "parsing %s" % f)
# read a file's metadata
try:
@@ -751,7 +652,6 @@ class BBCooker:
self.bb_cache.sync()
raise
except Exception, e:
error += 1
self.bb_cache.remove(f)
bb.msg.error(bb.msg.domain.Collection, "%s while parsing %s" % (e, f))
except:
@@ -763,6 +663,3 @@ class BBCooker:
bb.msg.note(1, bb.msg.domain.Collection, "Parsing finished. %d cached, %d parsed, %d skipped, %d masked." % ( cached, parsed, skipped, masked ))
self.bb_cache.sync()
if error > 0:
bb.msg.fatal(bb.msg.domain.Collection, "Parsing errors found, exiting...")


@@ -96,19 +96,6 @@ def getVar(var, d, exp = 0):
"""
return d.getVar(var,exp)
def renameVar(key, newkey, d):
"""Renames a variable from key to newkey
Example:
>>> d = init()
>>> setVar('TEST', 'testcontents', d)
>>> renameVar('TEST', 'TEST2', d)
>>> print getVar('TEST2', d)
testcontents
"""
d.renameVar(key, newkey)
def delVar(var, d):
"""Removes a variable from the data set
@@ -282,7 +269,6 @@ def expandKeys(alterdata, readdata = None):
if readdata == None:
readdata = alterdata
todolist = {}
for key in keys(alterdata):
if not '${' in key:
continue
@@ -290,14 +276,20 @@ def expandKeys(alterdata, readdata = None):
ekey = expand(key, readdata)
if key == ekey:
continue
todolist[key] = ekey
val = getVar(key, alterdata)
if val is None:
continue
# import copy
# setVarFlags(ekey, copy.copy(getVarFlags(key, readdata)), alterdata)
setVar(ekey, val, alterdata)
# These two for loops are split for performance to maximise the
# usefulness of the expand cache
for i in ('_append', '_prepend'):
dest = getVarFlag(ekey, i, alterdata) or []
src = getVarFlag(key, i, readdata) or []
dest.extend(src)
setVarFlag(ekey, i, dest, alterdata)
for key in todolist:
ekey = todolist[key]
renameVar(key, ekey, alterdata)
delVar(key, alterdata)
def expandData(alterdata, readdata = None):
"""For each variable in alterdata, expand it, and update the var contents.
@@ -345,12 +337,6 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
if getVarFlag(var, "python", d):
return 0
export = getVarFlag(var, "export", d)
unexport = getVarFlag(var, "unexport", d)
func = getVarFlag(var, "func", d)
if not all and not export and not unexport and not func:
return 0
try:
if all:
oval = getVar(var, d, 0)
@@ -370,34 +356,34 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
if type(val) is not types.StringType:
return 0
if (var.find("-") != -1 or var.find(".") != -1 or var.find('{') != -1 or var.find('}') != -1 or var.find('+') != -1) and not all:
if getVarFlag(var, 'matchesenv', d):
return 0
varExpanded = expand(var, d)
if unexport:
o.write('unset %s\n' % varExpanded)
return 1
if getVarFlag(var, 'matchesenv', d):
if (var.find("-") != -1 or var.find(".") != -1 or var.find('{') != -1 or var.find('}') != -1 or var.find('+') != -1) and not all:
return 0
val.rstrip()
if not val:
return 0
varExpanded = expand(var, d)
if func:
# NOTE: should probably check for unbalanced {} within the var
if getVarFlag(var, "func", d):
# NOTE: should probably check for unbalanced {} within the var
o.write("%s() {\n%s\n}\n" % (varExpanded, val))
return 1
if export:
o.write('export ')
# if we're going to output this within doublequotes,
# to a shell, we need to escape the quotes in the var
alter = re.sub('"', '\\"', val.strip())
o.write('%s="%s"\n' % (varExpanded, alter))
else:
if getVarFlag(var, "unexport", d):
o.write('unset %s\n' % varExpanded)
return 1
if getVarFlag(var, "export", d):
o.write('export ')
else:
if not all:
return 0
# if we're going to output this within doublequotes,
# to a shell, we need to escape the quotes in the var
alter = re.sub('"', '\\"', val.strip())
o.write('%s="%s"\n' % (varExpanded, alter))
return 1
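
To make the expandKeys() change above concrete, a small hypothetical session (variable names invented): a key containing a ${...} reference is expanded and renamed in place via renameVar(), which also carries the _append/_prepend flags across.

import bb
from bb import data

d = data.init()
data.setVar('PN', 'foo', d)
data.setVar('RDEPENDS_${PN}', 'bar', d)
data.expandKeys(d)
print data.getVar('RDEPENDS_foo', d)     # -> bar
print data.getVar('RDEPENDS_${PN}', d)   # -> None, the old key was renamed away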


@@ -170,28 +170,6 @@ class DataSmart:
return self.expand(value,var)
return value
def renameVar(self, key, newkey):
"""
Rename the variable key to newkey
"""
val = self.getVar(key, 0)
if val is None:
return
self.setVar(newkey, val)
for i in ('_append', '_prepend'):
dest = self.getVarFlag(newkey, i) or []
src = self.getVarFlag(key, i) or []
dest.extend(src)
self.setVarFlag(newkey, i, dest)
if self._special_values.has_key(i) and key in self._special_values[i]:
self._special_values[i].remove(key)
self._special_values[i].add(newkey)
self.delVar(key)
def delVar(self,var):
self.expand_cache = {}
self.dict[var] = {}
@@ -232,10 +210,10 @@ class DataSmart:
flags = {}
if local_var:
for i in local_var.keys():
for i in self.dict[var].keys():
if i == "content":
continue
flags[i] = local_var[i]
flags[i] = self.dict[var][i]
if len(flags) == 0:
return None


@@ -23,13 +23,14 @@ BitBake build tools.
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os, re
import bb.data
import bb.utils
class Event:
"""Base class for events"""
type = "Event"
def __init__(self, d):
def __init__(self, d = bb.data.init()):
self._data = d
def getData(self):
@@ -124,30 +125,11 @@ def getName(e):
else:
return e.__name__
class ConfigParsed(Event):
"""Configuration Parsing Complete"""
class StampUpdate(Event):
"""Trigger for any adjustment of the stamp files to happen"""
def __init__(self, targets, stampfns, d):
self._targets = targets
self._stampfns = stampfns
Event.__init__(self, d)
def getStampPrefix(self):
return self._stampfns
def getTargets(self):
return self._targets
stampPrefix = property(getStampPrefix)
targets = property(getTargets)
class PkgBase(Event):
"""Base class for package events"""
def __init__(self, t, d):
def __init__(self, t, d = bb.data.init()):
self._pkg = t
Event.__init__(self, d)
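
For clarity, a minimal sketch of the StampUpdate event defined in the hunk above (target and stamp values invented): it simply carries the target list and the stamp filename prefixes as read-only properties.

import bb.data, bb.event

d = bb.data.init()
e = bb.event.StampUpdate(["busybox"], ["tmp/stamps/busybox-1.2.1-r0"], d)
print e.targets          # ['busybox']
print e.stampPrefix      # ['tmp/stamps/busybox-1.2.1-r0']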


@@ -24,15 +24,9 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os, re, fcntl
import os, re
import bb
from bb import data
from bb import persist_data
try:
import cPickle as pickle
except ImportError:
import pickle
class FetchError(Exception):
"""Exception raised when a download fails"""
@@ -49,9 +43,6 @@ class ParameterError(Exception):
class MD5SumError(Exception):
"""Exception raised when a MD5SUM of a file does not match the expected one"""
class InvalidSRCREV(Exception):
"""Exception raised when an invalid SRCREV is encountered"""
def uri_replace(uri, uri_find, uri_replace, d):
# bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: operating on %s" % uri)
if not uri or not uri_find or not uri_replace:
@@ -83,257 +74,78 @@ def uri_replace(uri, uri_find, uri_replace, d):
return bb.encodeurl(result_decoded)
methods = []
urldata_cache = {}
urldata = {}
def fetcher_init(d):
"""
Called to initialize the fetchers once the configuration data is known
Calls before this must not hit the cache.
"""
pd = persist_data.PersistData(d)
# When to drop SCM head revisions controlled by user policy
srcrev_policy = bb.data.getVar('BB_SRCREV_POLICY', d, 1) or "clear"
if srcrev_policy == "cache":
bb.msg.debug(1, bb.msg.domain.Fetcher, "Keeping SRCREV cache due to cache policy of: %s" % srcrev_policy)
elif srcrev_policy == "clear":
bb.msg.debug(1, bb.msg.domain.Fetcher, "Clearing SRCREV cache due to cache policy of: %s" % srcrev_policy)
pd.delDomain("BB_URI_HEADREVS")
else:
bb.msg.fatal(bb.msg.domain.Fetcher, "Invalid SRCREV cache policy of: %s" % srcrev_policy)
# Make sure our domains exist
pd.addDomain("BB_URI_HEADREVS")
pd.addDomain("BB_URI_LOCALCOUNT")
def init(urls = [], d = None):
if d == None:
bb.msg.debug(2, bb.msg.domain.Fetcher, "BUG init called with None as data object!!!")
return
# Function call order is usually:
# 1. init
# 2. go
# 3. localpaths
# localpath can be called at any time
for m in methods:
m.urls = []
def init(urls, d, setup = True):
urldata = {}
for u in urls:
ud = initdata(u, d)
if ud.method:
ud.method.urls.append(u)
def initdata(url, d):
fn = bb.data.getVar('FILE', d, 1)
if fn in urldata_cache:
urldata = urldata_cache[fn]
for url in urls:
if url not in urldata:
urldata[url] = FetchData(url, d)
if setup:
for url in urldata:
if not urldata[url].setup:
urldata[url].setup_localpath(d)
urldata_cache[fn] = urldata
return urldata
if fn not in urldata:
urldata[fn] = {}
if url not in urldata[fn]:
ud = FetchData()
(ud.type, ud.host, ud.path, ud.user, ud.pswd, ud.parm) = bb.decodeurl(data.expand(url, d))
ud.date = Fetch.getSRCDate(ud, d)
for m in methods:
if m.supports(url, ud, d):
ud.localpath = m.localpath(url, ud, d)
ud.md5 = ud.localpath + '.md5'
# if user sets localpath for file, use it instead.
if "localpath" in ud.parm:
ud.localpath = ud.parm["localpath"]
ud.method = m
break
urldata[fn][url] = ud
return urldata[fn][url]
def go(d):
"""
Fetch all urls
init must have previously been called
"""
urldata = init([], d, True)
for u in urldata:
ud = urldata[u]
m = ud.method
if ud.localfile:
if not m.forcefetch(u, ud, d) and os.path.exists(ud.md5):
"""Fetch all urls"""
fn = bb.data.getVar('FILE', d, 1)
for m in methods:
for u in m.urls:
ud = urldata[fn][u]
if ud.localfile and not m.forcefetch(u, ud, d) and os.path.exists(urldata[fn][u].md5):
# File already present along with md5 stamp file
# Touch md5 file to show activity
try:
os.utime(ud.md5, None)
except:
# Errors aren't fatal here
pass
os.utime(ud.md5, None)
continue
lf = bb.utils.lockfile(ud.lockfile)
if not m.forcefetch(u, ud, d) and os.path.exists(ud.md5):
# If someone else fetched this before we got the lock,
# notice and don't try again
try:
os.utime(ud.md5, None)
except:
# Errors aren't fatal here
pass
bb.utils.unlockfile(lf)
continue
m.go(u, ud, d)
if ud.localfile:
if not m.forcefetch(u, ud, d):
# RP - is olddir needed?
# olddir = os.path.abspath(os.getcwd())
m.go(u, ud , d)
# os.chdir(olddir)
if ud.localfile and not m.forcefetch(u, ud, d):
Fetch.write_md5sum(u, ud, d)
bb.utils.unlockfile(lf)
def checkstatus(d):
"""
Check all urls exist upstream
init must have previously been called
"""
urldata = init([], d, True)
for u in urldata:
ud = urldata[u]
m = ud.method
bb.msg.note(1, bb.msg.domain.Fetcher, "Testing URL %s" % u)
ret = m.checkstatus(u, ud, d)
if not ret:
bb.msg.fatal(bb.msg.domain.Fetcher, "URL %s doesn't work" % u)
def localpaths(d):
"""
Return a list of the local filenames, assuming successful fetch
"""
"""Return a list of the local filenames, assuming successful fetch"""
local = []
urldata = init([], d, True)
for u in urldata:
ud = urldata[u]
local.append(ud.localpath)
fn = bb.data.getVar('FILE', d, 1)
for m in methods:
for u in m.urls:
local.append(urldata[fn][u].localpath)
return local
srcrev_internal_call = False
def get_srcrev(d):
"""
Return the version string for the current package
(usually to be used as PV)
Most packages usually only have one SCM so we just pass on the call.
In the multi SCM case, we build a value based on SRCREV_FORMAT which must
have been set.
"""
#
# Ugly code alert. localpath in the fetchers will try to evaluate SRCREV which
# could translate into a call to here. If it does, we need to catch this
# and provide some way so it knows get_srcrev is active instead of being
# some number etc. hence the srcrev_internal_call tracking and the magic
# "SRCREVINACTION" return value.
#
# Neater solutions welcome!
#
if bb.fetch.srcrev_internal_call:
return "SRCREVINACTION"
scms = []
# Only call setup_localpath on URIs which suppports_srcrev()
urldata = init(bb.data.getVar('SRC_URI', d, 1).split(), d, False)
for u in urldata:
ud = urldata[u]
if ud.method.suppports_srcrev():
if not ud.setup:
ud.setup_localpath(d)
scms.append(u)
if len(scms) == 0:
bb.msg.error(bb.msg.domain.Fetcher, "SRCREV was used yet no valid SCM was found in SRC_URI")
raise ParameterError
if len(scms) == 1:
return urldata[scms[0]].method.sortable_revision(scms[0], urldata[scms[0]], d)
#
# Multiple SCMs are in SRC_URI so we resort to SRCREV_FORMAT
#
format = bb.data.getVar('SRCREV_FORMAT', d, 1)
if not format:
bb.msg.error(bb.msg.domain.Fetcher, "The SRCREV_FORMAT variable must be set when multiple SCMs are used.")
raise ParameterError
for scm in scms:
if 'name' in urldata[scm].parm:
name = urldata[scm].parm["name"]
rev = urldata[scm].method.sortable_revision(scm, urldata[scm], d)
format = format.replace(name, rev)
return format
def localpath(url, d, cache = True):
"""
Called from the parser with cache=False since the cache isn't ready
at this point. Also called from classes in OE e.g. patch.bbclass
"""
ud = init([url], d)
if ud[url].method:
return ud[url].localpath
def localpath(url, d):
ud = initdata(url, d)
if ud.method:
return ud.localpath
return url
def runfetchcmd(cmd, d, quiet = False):
"""
Run cmd returning the command output
Raise an error if interrupted or cmd fails
Optionally echo command output to stdout
"""
# Need to export PATH as binary could be in metadata paths
# rather than host provided
# Also include some other variables.
# FIXME: Should really include all export variables?
exportvars = ['PATH', 'GIT_PROXY_HOST', 'GIT_PROXY_PORT', 'GIT_PROXY_COMMAND']
for var in exportvars:
val = data.getVar(var, d, True)
if val:
cmd = 'export ' + var + '=%s; %s' % (val, cmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % cmd)
# redirect stderr to stdout
stdout_handle = os.popen(cmd + " 2>&1", "r")
output = ""
while 1:
line = stdout_handle.readline()
if not line:
break
if not quiet:
print line,
output += line
status = stdout_handle.close() or 0
signal = status >> 8
exitstatus = status & 0xff
if signal:
raise FetchError("Fetch command %s failed with signal %s, output:\n%s" % (cmd, signal, output))
elif status != 0:
raise FetchError("Fetch command %s failed with exit code %s, output:\n%s" % (cmd, status, output))
return output
class FetchData(object):
"""
A class which represents the fetcher state for a given URI.
"""
def __init__(self, url, d):
"""Class for fetcher variable store"""
def __init__(self):
self.localfile = ""
(self.type, self.host, self.path, self.user, self.pswd, self.parm) = bb.decodeurl(data.expand(url, d))
self.date = Fetch.getSRCDate(self, d)
self.url = url
self.setup = False
for m in methods:
if m.supports(url, self, d):
self.method = m
return
raise NoMethodError("Missing implementation for url %s" % url)
def setup_localpath(self, d):
self.setup = True
if "localpath" in self.parm:
# if user sets localpath for file, use it instead.
self.localpath = self.parm["localpath"]
else:
bb.fetch.srcrev_internal_call = True
self.localpath = self.method.localpath(self.url, self, d)
bb.fetch.srcrev_internal_call = False
# We have to clear data's internal caches since the cached value of SRCREV is now wrong.
# Horrible...
bb.data.delVar("ISHOULDNEVEREXIST", d)
self.md5 = self.localpath + '.md5'
self.lockfile = self.localpath + '.lock'
class Fetch(object):
@@ -370,12 +182,6 @@ class Fetch(object):
"""
return False
def suppports_srcrev(self):
"""
The fetcher supports auto source revisions (SRCREV)
"""
return False
def go(self, url, urldata, d):
"""
Fetch urls
@@ -383,14 +189,6 @@ class Fetch(object):
"""
raise NoMethodError("Missing implementation for url")
def checkstatus(self, url, urldata, d):
"""
Check the status of a URL
Assumes localpath was called first
"""
bb.msg.note(1, bb.msg.domain.Fetcher, "URL %s could not be checked for status since no method exists." % url)
return True
def getSRCDate(urldata, d):
"""
Return the SRC Date for the component
@@ -403,41 +201,11 @@ class Fetch(object):
pn = data.getVar("PN", d, 1)
if pn:
return data.getVar("SRCDATE_%s" % pn, d, 1) or data.getVar("CVSDATE_%s" % pn, d, 1) or data.getVar("SRCDATE", d, 1) or data.getVar("CVSDATE", d, 1) or data.getVar("DATE", d, 1)
return data.getVar("SRCDATE_%s" % pn, d, 1) or data.getVar("CVSDATE_%s" % pn, d, 1) or data.getVar("DATE", d, 1)
return data.getVar("SRCDATE", d, 1) or data.getVar("CVSDATE", d, 1) or data.getVar("DATE", d, 1)
getSRCDate = staticmethod(getSRCDate)
def srcrev_internal_helper(ud, d):
"""
Return:
a) a source revision if specified
b) True if auto srcrev is in action
c) False otherwise
"""
if 'rev' in ud.parm:
return ud.parm['rev']
if 'tag' in ud.parm:
return ud.parm['tag']
rev = None
if 'name' in ud.parm:
pn = data.getVar("PN", d, 1)
rev = data.getVar("SRCREV_pn-" + pn + "_" + ud.parm['name'], d, 1)
if not rev:
rev = data.getVar("SRCREV", d, 1)
if rev == "INVALID":
raise InvalidSRCREV("Please set SRCREV to a valid value")
if not rev:
return False
if rev == "SRCREVINACTION":
return True
return rev
srcrev_internal_helper = staticmethod(srcrev_internal_helper)
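In practice the helper above gives the following lookup order; the recipe name 'matchbox' and the ;name=web parameter are invented:

# 1. a ;rev= or ;tag= parameter on the URL itself wins outright
# 2. SRCREV_pn-matchbox_web       (per-recipe, per-name override)
# 3. SRCREV                       (recipe-wide default)
# "SRCREVINACTION" yields True, meaning "ask the SCM for its head";
# "INVALID" raises InvalidSRCREV; no value at all yields False.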
def try_mirror(d, tarfn):
"""
Try to use a mirrored version of the sources. We do this
@@ -484,7 +252,14 @@ class Fetch(object):
verify_md5sum = staticmethod(verify_md5sum)
def write_md5sum(url, ud, d):
md5data = bb.utils.md5_file(ud.localpath)
if bb.which(data.getVar('PATH', d), 'md5sum'):
try:
md5pipe = os.popen('md5sum ' + ud.localpath)
md5data = (md5pipe.readline().split() or [ "" ])[0]
md5pipe.close()
except OSError:
md5data = ""
# verify the md5sum
if not Fetch.verify_md5sum(ud, md5data):
raise MD5SumError(url)
@@ -494,50 +269,6 @@ class Fetch(object):
md5out.close()
write_md5sum = staticmethod(write_md5sum)
def latest_revision(self, url, ud, d):
"""
Look in the cache for the latest revision; if not present, ask the SCM.
"""
if not hasattr(self, "_latest_revision"):
raise ParameterError
pd = persist_data.PersistData(d)
key = self._revision_key(url, ud, d)
rev = pd.getValue("BB_URI_HEADREVS", key)
if rev != None:
return str(rev)
rev = self._latest_revision(url, ud, d)
pd.setValue("BB_URI_HEADREVS", key, rev)
return rev
def sortable_revision(self, url, ud, d):
"""
"""
if hasattr(self, "_sortable_revision"):
return self._sortable_revision(url, ud, d)
pd = persist_data.PersistData(d)
key = self._revision_key(url, ud, d)
latest_rev = self._build_revision(url, ud, d)
last_rev = pd.getValue("BB_URI_LOCALCOUNT", key + "_rev")
count = pd.getValue("BB_URI_LOCALCOUNT", key + "_count")
if last_rev == latest_rev:
return str(count + "+" + latest_rev)
if count is None:
count = "0"
else:
count = str(int(count) + 1)
pd.setValue("BB_URI_LOCALCOUNT", key + "_rev", latest_rev)
pd.setValue("BB_URI_LOCALCOUNT", key + "_count", count)
return str(count + "+" + latest_rev)
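A runnable simulation of the local-count scheme above, with the persist_data store replaced by a plain dict (key and revisions invented):

cache = {}   # stands in for the BB_URI_LOCALCOUNT persistence domain

def sortable(key, latest_rev):
    last_rev, count = cache.get(key, (None, None))
    if last_rev == latest_rev:
        return count + "+" + latest_rev
    if count is None:
        count = "0"
    else:
        count = str(int(count) + 1)
    cache[key] = (latest_rev, count)
    return count + "+" + latest_rev

print sortable("git:git.example.org.repo", "a1b2c3")   # 0+a1b2c3
print sortable("git:git.example.org.repo", "a1b2c3")   # 0+a1b2c3 (cache hit)
print sortable("git:git.example.org.repo", "d4e5f6")   # 1+d4e5f6 (head moved)

The count only ever grows, so package versions built from SCM heads stay sortable even when the underlying revision identifiers (e.g. git hashes) are not.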
import cvs
import git
import local
@@ -546,16 +277,12 @@ import wget
import svk
import ssh
import perforce
import bzr
import hg
methods.append(local.Local())
methods.append(wget.Wget())
methods.append(svn.Svn())
methods.append(git.Git())
methods.append(cvs.Cvs())
methods.append(git.Git())
methods.append(local.Local())
methods.append(svn.Svn())
methods.append(wget.Wget())
methods.append(svk.Svk())
methods.append(ssh.SSH())
methods.append(perforce.Perforce())
methods.append(bzr.Bzr())
methods.append(hg.Hg())


@@ -1,154 +0,0 @@
"""
BitBake 'Fetch' implementation for bzr.
"""
# Copyright (C) 2007 Ross Burton
# Copyright (C) 2007 Richard Purdie
#
# Classes for obtaining upstream sources for the
# BitBake build tools.
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd
class Bzr(Fetch):
def supports(self, url, ud, d):
return ud.type in ['bzr']
def localpath (self, url, ud, d):
# Create paths to bzr checkouts
relpath = ud.path
if relpath.startswith('/'):
# Remove leading slash as os.path.join can't cope
relpath = relpath[1:]
ud.pkgdir = os.path.join(data.expand('${BZRDIR}', d), ud.host, relpath)
revision = Fetch.srcrev_internal_helper(ud, d)
if revision is True:
ud.revision = self.latest_revision(url, ud, d)
elif revision:
ud.revision = revision
if not ud.revision:
ud.revision = self.latest_revision(url, ud, d)
ud.localfile = data.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision), d)
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def _buildbzrcommand(self, ud, d, command):
"""
Build up a bzr command line based on ud
command is "fetch", "update", "revno"
"""
basecmd = data.expand('${FETCHCMD_bzr}', d)
proto = "http"
if "proto" in ud.parm:
proto = ud.parm["proto"]
bzrroot = ud.host + ud.path
options = []
if command is "revno":
bzrcmd = "%s revno %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
else:
if ud.revision:
options.append("-r %s" % ud.revision)
if command is "fetch":
bzrcmd = "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
elif command is "update":
bzrcmd = "%s pull %s --overwrite" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid bzr command %s" % command)
return bzrcmd
def go(self, loc, ud, d):
"""Fetch url"""
# try to use the tarball stash
if Fetch.try_mirror(d, ud.localfile):
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping bzr checkout." % ud.localpath)
return
if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
bzrcmd = self._buildbzrcommand(ud, d, "update")
bb.msg.debug(1, bb.msg.domain.Fetcher, "BZR Update %s" % loc)
os.chdir(os.path.join (ud.pkgdir, os.path.basename(ud.path)))
runfetchcmd(bzrcmd, d)
else:
os.system("rm -rf %s" % os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)))
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
bb.msg.debug(1, bb.msg.domain.Fetcher, "BZR Checkout %s" % loc)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % bzrcmd)
runfetchcmd(bzrcmd, d)
os.chdir(ud.pkgdir)
# tar them up to a defined filename
try:
runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.basename(ud.pkgdir)), d)
except:
t, v, tb = sys.exc_info()
try:
os.unlink(ud.localpath)
except OSError:
pass
raise t, v, tb
def supports_srcrev(self):
return True
def _revision_key(self, url, ud, d):
"""
Return a unique key for the url
"""
return "bzr:" + ud.pkgdir
def _latest_revision(self, url, ud, d):
"""
Return the latest upstream revision number
"""
bb.msg.debug(2, bb.msg.domain.Fetcher, "BZR fetcher hitting network for %s" % url)
output = runfetchcmd(self._buildbzrcommand(ud, d, "revno"), d, True)
return output.strip()
def _sortable_revision(self, url, ud, d):
"""
Return a sortable revision number which in our case is the revision number
"""
return self._build_revision(url, ud, d)
def _build_revision(self, url, ud, d):
return ud.revision
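For orientation, the download-cache name derived in localpath above comes out like this (host, path and revision invented):

host, path, revision = "bzr.example.org", "/project/trunk", "1204"
localfile = 'bzr_%s_%s_%s.tar.gz' % (host, path.replace('/', '.'), revision)
print localfile   # bzr_bzr.example.org_.project.trunk_1204.tar.gz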


@@ -58,15 +58,7 @@ class Cvs(Fetch):
elif ud.tag:
ud.date = ""
norecurse = ''
if 'norecurse' in ud.parm:
norecurse = '_norecurse'
fullpath = ''
if 'fullpath' in ud.parm:
fullpath = '_fullpath'
ud.localfile = data.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath), d)
ud.localfile = data.expand('%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date), d)
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
@@ -102,23 +94,14 @@ class Cvs(Fetch):
if method == "dir":
cvsroot = ud.path
else:
cvsroot = ":" + method
cvsproxyhost = data.getVar('CVS_PROXY_HOST', d, True)
if cvsproxyhost:
cvsroot += ";proxy=" + cvsproxyhost
cvsproxyport = data.getVar('CVS_PROXY_PORT', d, True)
if cvsproxyport:
cvsroot += ";proxyport=" + cvsproxyport
cvsroot += ":" + ud.user
cvsroot = ":" + method + ":" + ud.user
if ud.pswd:
cvsroot += ":" + ud.pswd
cvsroot += "@" + ud.host + ":" + cvs_port + ud.path
options = []
if 'norecurse' in ud.parm:
options.append("-l")
if ud.date:
options.append("-D \"%s UTC\"" % ud.date)
options.append("-D %s" % ud.date)
if ud.tag:
options.append("-r %s" % ud.tag)
@@ -161,15 +144,10 @@ class Cvs(Fetch):
pass
raise FetchError(ud.module)
os.chdir(moddir)
os.chdir('..')
# tar them up to a defined filename
if 'fullpath' in ud.parm:
os.chdir(pkgdir)
myret = os.system("tar -czf %s %s" % (ud.localpath, localdir))
else:
os.chdir(moddir)
os.chdir('..')
myret = os.system("tar -czf %s %s" % (ud.localpath, os.path.basename(moddir)))
myret = os.system("tar -czf %s %s" % (ud.localpath, os.path.basename(moddir)))
if myret != 0:
try:
os.unlink(ud.localpath)


@@ -25,7 +25,28 @@ import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import runfetchcmd
def prunedir(topdir):
# Delete everything reachable from the directory named in 'topdir'.
# CAUTION: This is dangerous!
for root, dirs, files in os.walk(topdir, topdown=False):
for name in files:
os.remove(os.path.join(root, name))
for name in dirs:
os.rmdir(os.path.join(root, name))
def rungitcmd(cmd,d):
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % cmd)
# Need to export PATH as git is likely to be in metadata paths
# rather than host provided
pathcmd = 'export PATH=%s; %s' % (data.expand('${PATH}', d), cmd)
myret = os.system(pathcmd)
if myret != 0:
raise FetchError("Git: %s failed" % pathcmd)
class Git(Fetch):
"""Class to fetch a module or modules from git repositories"""
@@ -41,28 +62,24 @@ class Git(Fetch):
if 'protocol' in ud.parm:
ud.proto = ud.parm['protocol']
ud.branch = ud.parm.get("branch", "master")
tag = Fetch.srcrev_internal_helper(ud, d)
if tag is True:
ud.tag = self.latest_revision(url, ud, d)
elif tag:
ud.tag = tag
if not ud.tag:
ud.tag = self.latest_revision(url, ud, d)
if ud.tag == "master":
ud.tag = self.latest_revision(url, ud, d)
ud.tag = "master"
if 'tag' in ud.parm:
ud.tag = ud.parm['tag']
ud.localfile = data.expand('git_%s%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.tag), d)
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def forcefetch(self, url, ud, d):
# tag=="master" must always update
if (ud.tag == "master"):
return True
return False
def go(self, loc, ud, d):
"""Fetch url"""
if Fetch.try_mirror(d, ud.localfile):
if not self.forcefetch(loc, ud, d) and Fetch.try_mirror(d, ud.localfile):
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists (or was stashed). Skipping git checkout." % ud.localpath)
return
@@ -79,55 +96,32 @@ class Git(Fetch):
if Fetch.try_mirror(d, repofilename):
bb.mkdirhier(repodir)
os.chdir(repodir)
runfetchcmd("tar -xzf %s" % (repofile), d)
rungitcmd("tar -xzf %s" % (repofile),d)
else:
runfetchcmd("git clone -n %s://%s%s %s" % (ud.proto, ud.host, ud.path, repodir), d)
rungitcmd("git clone -n %s://%s%s %s" % (ud.proto, ud.host, ud.path, repodir),d)
os.chdir(repodir)
rungitcmd("git pull %s://%s%s" % (ud.proto, ud.host, ud.path),d)
rungitcmd("git pull --tags %s://%s%s" % (ud.proto, ud.host, ud.path),d)
rungitcmd("git prune-packed", d)
rungitcmd("git pack-redundant --all | xargs -r rm", d)
# Remove all but the .git directory
runfetchcmd("rm * -Rf", d)
runfetchcmd("git fetch %s://%s%s %s" % (ud.proto, ud.host, ud.path, ud.branch), d)
runfetchcmd("git fetch --tags %s://%s%s" % (ud.proto, ud.host, ud.path), d)
runfetchcmd("git prune-packed", d)
runfetchcmd("git pack-redundant --all | xargs -r rm", d)
rungitcmd("rm * -Rf", d)
# old method of downloading tags
#rungitcmd("rsync -a --verbose --stats --progress rsync://%s%s/ %s" % (ud.host, ud.path, os.path.join(repodir, ".git", "")),d)
os.chdir(repodir)
mirror_tarballs = data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True)
if mirror_tarballs != "0":
bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git repository")
runfetchcmd("tar -czf %s %s" % (repofile, os.path.join(".", ".git", "*") ), d)
bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git repository")
rungitcmd("tar -czf %s %s" % (repofile, os.path.join(".", ".git", "*") ),d)
if os.path.exists(codir):
bb.utils.prunedir(codir)
prunedir(codir)
bb.mkdirhier(codir)
os.chdir(repodir)
runfetchcmd("git read-tree %s" % (ud.tag), d)
runfetchcmd("git checkout-index -q -f --prefix=%s -a" % (os.path.join(codir, "git", "")), d)
rungitcmd("git read-tree %s" % (ud.tag),d)
rungitcmd("git checkout-index -q -f --prefix=%s -a" % (os.path.join(codir, "git", "")),d)
os.chdir(codir)
bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git checkout")
runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.join(".", "*") ), d)
os.chdir(repodir)
bb.utils.prunedir(codir)
def suppports_srcrev(self):
return True
def _revision_key(self, url, ud, d):
"""
Return a unique key for the url
"""
return "git:" + ud.host + ud.path.replace('/', '.')
def _latest_revision(self, url, ud, d):
"""
Compute the HEAD revision for the url
"""
output = runfetchcmd("git ls-remote %s://%s%s %s" % (ud.proto, ud.host, ud.path, ud.branch), d, True)
return output.split()[0]
def _build_revision(self, url, ud, d):
return ud.tag
rungitcmd("tar -czf %s %s" % (ud.localpath, os.path.join(".", "*") ),d)


@@ -1,147 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for mercurial DRCS (hg).
"""
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2004 Marcin Juszkiewicz
# Copyright (C) 2007 Robert Schuster
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os, re
import sys
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd
class Hg(Fetch):
"""Class to fetch a from mercurial repositories"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with mercurial.
"""
return ud.type in ['hg']
def localpath(self, url, ud, d):
if not "module" in ud.parm:
raise MissingParameterError("hg method needs a 'module' parameter")
ud.module = ud.parm["module"]
# Create paths to mercurial checkouts
relpath = ud.path
if relpath.startswith('/'):
# Remove leading slash as os.path.join can't cope
relpath = relpath[1:]
ud.pkgdir = os.path.join(data.expand('${HGDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
ud.localfile = data.expand('%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def _buildhgcommand(self, ud, d, command):
"""
Build up an hg commandline based on ud
command is "fetch", "update", "info"
"""
basecmd = data.expand('${FETCHCMD_hg}', d)
proto = "http"
if "proto" in ud.parm:
proto = ud.parm["proto"]
host = ud.host
if proto == "file":
host = "/"
ud.host = "localhost"
if ud.user == None:
hgroot = host + ud.path
else:
hgroot = ud.user + "@" + host + ud.path
if command is "info":
return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, ud.module)
options = [];
if ud.revision:
options.append("-r %s" % ud.revision)
if command is "fetch":
cmd = "%s clone %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
elif command is "pull":
# do not pass options list; limiting pull to rev causes the local
# repo not to contain it and immediately following "update" command
# will crash
cmd = "%s pull" % (basecmd)
elif command is "update":
cmd = "%s update -C %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid hg command %s" % command)
return cmd
def go(self, loc, ud, d):
"""Fetch url"""
# try to use the tarball stash
if Fetch.try_mirror(d, ud.localfile):
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping hg checkout." % ud.localpath)
return
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
updatecmd = self._buildhgcommand(ud, d, "pull")
bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
# update sources there
os.chdir(ud.moddir)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % updatecmd)
runfetchcmd(updatecmd, d)
updatecmd = self._buildhgcommand(ud, d, "update")
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % updatecmd)
runfetchcmd(updatecmd, d)
else:
fetchcmd = self._buildhgcommand(ud, d, "fetch")
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
# check out sources there
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % fetchcmd)
runfetchcmd(fetchcmd, d)
os.chdir(ud.pkgdir)
try:
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
except:
t, v, tb = sys.exc_info()
try:
os.unlink(ud.localpath)
except OSError:
pass
raise t, v, tb


@@ -38,11 +38,9 @@ class Local(Fetch):
return urldata.type in ['file','patch']
def localpath(self, url, urldata, d):
"""
Return the local filename of a given url assuming a successful fetch.
"""Return the local filename of a given url assuming a successful fetch.
"""
path = url.split("://")[1]
path = path.split(";")[0]
newpath = path
if path[0] != "/":
filespath = data.getVar('FILESPATH', d, 1)
@@ -59,14 +57,3 @@ class Local(Fetch):
"""Fetch urls (no-op for Local method)"""
# no need to fetch local files, we'll deal with them in place.
return 1
def checkstatus(self, url, urldata, d):
"""
Check the status of the url
"""
if urldata.localpath.find("*") != -1:
bb.msg.note(1, bb.msg.domain.Fetcher, "URL %s looks like a glob and was therefore not checked." % url)
return True
if os.path.exists(urldata.localpath):
return True
return False
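The FILESPATH search that localpath performs for relative file:// URLs behaves roughly like this hedged sketch (directories and file name invented):

import os

filespath = "/meta/packages/db/files:/meta/packages/db"   # colon-separated
path = "fix-srcuri.patch"
newpath = path
for p in filespath.split(":"):
    candidate = os.path.join(p, path)
    if os.path.exists(candidate):
        newpath = candidate
        break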


@@ -37,7 +37,7 @@ class Perforce(Fetch):
return ud.type in ['p4']
def doparse(url,d):
parm = {}
parm=[]
path = url.split("://")[1]
delim = path.find("@");
if delim != -1:
@@ -125,7 +125,7 @@ class Perforce(Fetch):
"""
# try to use the tarball stash
if Fetch.try_mirror(d, ud.localfile):
if not self.forcefetch(loc, ud, d) and Fetch.try_mirror(d, ud.localfile):
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping perforce checkout." % ud.localpath)
return


@@ -1,12 +1,17 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for svn.
BitBake 'Fetch' implementations
This implementation is for svn. It is based on the cvs implementation.
"""
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2004 Marcin Juszkiewicz
# Copyright (C) 2004 Marcin Juszkiewicz
#
# Classes for obtaining upstream sources for the
# BitBake build tools.
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -30,7 +35,6 @@ from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd
class Svn(Fetch):
"""Class to fetch a module or modules from svn repositories"""
@@ -43,54 +47,32 @@ class Svn(Fetch):
def localpath(self, url, ud, d):
if not "module" in ud.parm:
raise MissingParameterError("svn method needs a 'module' parameter")
ud.module = ud.parm["module"]
# Create paths to svn checkouts
relpath = ud.path
if relpath.startswith('/'):
# Remove leading slash as os.path.join can't cope
relpath = relpath[1:]
ud.pkgdir = os.path.join(data.expand('${SVNDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
if 'rev' in ud.parm:
ud.date = ""
ud.revision = ud.parm['rev']
elif 'date' in ud.parm:
ud.date = ud.parm['date']
ud.revision = ""
else:
#
# ***Nasty hack***
# If DATE in unexpanded PV, use ud.date (which is set from SRCDATE)
# Should warn people to switch to SRCREV here
#
pv = data.getVar("PV", d, 0)
if "DATE" in pv:
ud.revision = ""
else:
rev = Fetch.srcrev_internal_helper(ud, d)
if rev is True:
ud.revision = self.latest_revision(url, ud, d)
ud.date = ""
elif rev:
ud.revision = rev
ud.date = ""
else:
ud.revision = ""
ud.module = ud.parm["module"]
ud.revision = ""
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
if ud.revision:
ud.date = ""
ud.localfile = data.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ud.date), d)
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def _buildsvncommand(self, ud, d, command):
"""
Build up an svn commandline based on ud
command is "fetch", "update", "info"
"""
def forcefetch(self, url, ud, d):
if (ud.date == "now"):
return True
return False
basecmd = data.expand('${FETCHCMD_svn}', d)
def go(self, loc, ud, d):
"""Fetch url"""
# try to use the tarball stash
if not self.forcefetch(loc, ud, d) and Fetch.try_mirror(d, ud.localfile):
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping svn checkout." % ud.localpath)
return
proto = "svn"
if "proto" in ud.parm:
@@ -102,103 +84,55 @@ class Svn(Fetch):
svnroot = ud.host + ud.path
# either use the revision, or SRCDATE in braces,
# either use the revision, or SRCDATE in braces, or nothing for SRCDATE = "now"
options = []
if ud.revision:
options.append("-r %s" % ud.revision)
elif ud.date != "now":
options.append("-r {%s}" % ud.date)
if ud.user:
options.append("--username %s" % ud.user)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "svn:%s" % data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
if ud.pswd:
options.append("--password %s" % ud.pswd)
if command is "info":
svncmd = "%s info %s %s://%s/%s/" % (basecmd, " ".join(options), proto, svnroot, ud.module)
else:
if ud.revision:
options.append("-r %s" % ud.revision)
elif ud.date:
options.append("-r {%s}" % ud.date)
if command is "fetch":
svncmd = "%s co %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, svnroot, ud.module, ud.module)
elif command is "update":
svncmd = "%s update %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid svn command %s" % command)
data.setVar('SVNROOT', "%s://%s/%s" % (proto, svnroot, ud.module), localdata)
data.setVar('SVNCOOPTS', " ".join(options), localdata)
data.setVar('SVNMODULE', ud.module, localdata)
svncmd = data.getVar('FETCHCOMMAND', localdata, 1)
svnupcmd = data.getVar('UPDATECOMMAND', localdata, 1)
if svn_rsh:
svncmd = "svn_RSH=\"%s\" %s" % (svn_rsh, svncmd)
svnupcmd = "svn_RSH=\"%s\" %s" % (svn_rsh, svnupcmd)
return svncmd
pkg = data.expand('${PN}', d)
pkgdir = os.path.join(data.expand('${SVNDIR}', localdata), pkg)
moddir = os.path.join(pkgdir, ud.module)
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + moddir + "'")
def go(self, loc, ud, d):
"""Fetch url"""
# try to use the tarball stash
if Fetch.try_mirror(d, ud.localfile):
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping svn checkout." % ud.localpath)
return
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(ud.moddir, '.svn'), os.R_OK):
svnupdatecmd = self._buildsvncommand(ud, d, "update")
if os.access(os.path.join(moddir, '.svn'), os.R_OK):
bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
# update sources there
os.chdir(ud.moddir)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svnupdatecmd)
runfetchcmd(svnupdatecmd, d)
os.chdir(moddir)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svnupcmd)
myret = os.system(svnupcmd)
else:
svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
# check out sources there
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svnfetchcmd)
runfetchcmd(svnfetchcmd, d)
bb.mkdirhier(pkgdir)
os.chdir(pkgdir)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svncmd)
myret = os.system(svncmd)
os.chdir(ud.pkgdir)
if myret != 0:
raise FetchError(ud.module)
os.chdir(pkgdir)
# tar them up to a defined filename
try:
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
except:
t, v, tb = sys.exc_info()
myret = os.system("tar -czf %s %s" % (ud.localpath, os.path.basename(ud.module)))
if myret != 0:
try:
os.unlink(ud.localpath)
except OSError:
pass
raise t, v, tb
def supports_srcrev(self):
return True
def _revision_key(self, url, ud, d):
"""
Return a unique key for the url
"""
return "svn:" + ud.moddir
def _latest_revision(self, url, ud, d):
"""
Return the latest upstream revision number
"""
bb.msg.debug(2, bb.msg.domain.Fetcher, "SVN fetcher hitting network for %s" % url)
output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "info"), d, True)
revision = None
for line in output.splitlines():
if "Last Changed Rev" in line:
revision = line.split(":")[1].strip()
return revision
def _sortable_revision(self, url, ud, d):
"""
Return a sortable revision number which in our case is the revision number
"""
return self._build_revision(url, ud, d)
def _build_revision(self, url, ud, d):
return ud.revision
raise FetchError(ud.module)
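The `svn info` scrape in _latest_revision above works like this on typical output (the sample output is invented):

output = """Path: trunk
URL: http://svn.o-hand.com/repos/poky/trunk
Last Changed Rev: 4130
Last Changed Date: 2008-03-28 13:14:58 +0000
"""
revision = None
for line in output.splitlines():
    if "Last Changed Rev" in line:
        revision = line.split(":")[1].strip()
print revision   # 4130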


@@ -48,13 +48,11 @@ class Wget(Fetch):
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def go(self, uri, ud, d, checkonly = False):
def go(self, uri, ud, d):
"""Fetch urls"""
def fetch_uri(uri, ud, d):
if checkonly:
fetchcmd = data.getVar("CHECKCOMMAND", d, 1)
elif os.path.exists(ud.localpath):
if os.path.exists(ud.localpath):
# file exists, but we didn't complete it... trying again
fetchcmd = data.getVar("RESUMECOMMAND", d, 1)
else:
@@ -68,10 +66,10 @@ class Wget(Fetch):
if ret != 0:
return False
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
# check if sourceforge did send us to the mirror page
if not os.path.exists(ud.localpath):
bb.msg.debug(2, bb.msg.domain.Fetcher, "The fetch command for %s returned success but %s doesn't exist?..." % (uri, ud.localpath))
os.system("rm %s*" % ud.localpath) # FIXME shell quote it
bb.msg.debug(2, bb.msg.domain.Fetcher, "sourceforge.net send us to the mirror on %s" % ud.basename)
return False
return True
@@ -85,10 +83,10 @@ class Wget(Fetch):
newuri = uri_replace(uri, find, replace, d)
if newuri != uri:
if fetch_uri(newuri, ud, localdata):
return True
return
if fetch_uri(uri, ud, localdata):
return True
return
# try mirrors
mirrors = [ i.split() for i in (data.getVar('MIRRORS', localdata, 1) or "").split('\n') if i ]
@@ -96,10 +94,6 @@ class Wget(Fetch):
newuri = uri_replace(uri, find, replace, d)
if newuri != uri:
if fetch_uri(newuri, ud, localdata):
return True
return
raise FetchError(uri)
def checkstatus(self, uri, ud, d):
return self.go(uri, ud, d, True)
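The MIRRORS variable consumed above holds newline-separated (match, replacement) pairs; parsed as in the code, a hypothetical value yields:

MIRRORS = """\
ftp://ftp.example.org/pub/  http://mirror.example.net/pub/
http://downloads.example.org/  http://mirror.example.net/cache/
"""
mirrors = [ i.split() for i in MIRRORS.split('\n') if i ]
print mirrors[0]   # ['ftp://ftp.example.org/pub/', 'http://mirror.example.net/pub/']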


@@ -23,7 +23,7 @@ Message handling infrastructure for bitbake
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import sys, os, re, bb
from bb import utils, event
from bb import utils
debug_level = {}
@@ -37,38 +37,11 @@ domain = bb.utils.Enum(
'Depends',
'Fetcher',
'Parsing',
'PersistData',
'Provider',
'RunQueue',
'TaskData',
'Util')
class MsgBase(bb.event.Event):
"""Base class for messages"""
def __init__(self, msg, d ):
self._message = msg
event.Event.__init__(self, d)
class MsgDebug(MsgBase):
"""Debug Message"""
class MsgNote(MsgBase):
"""Note Message"""
class MsgWarn(MsgBase):
"""Warning Message"""
class MsgError(MsgBase):
"""Error Message"""
class MsgFatal(MsgBase):
"""Fatal Message"""
class MsgPlain(MsgBase):
"""General output"""
#
# Message control functions
#
@@ -90,40 +63,45 @@ def set_debug_domains(domains):
bb.msg.debug_level[ddomain] = bb.msg.debug_level[ddomain] + 1
found = True
if not found:
bb.msg.warn(None, "Logging domain %s is not valid, ignoring" % domain)
std_warn("Logging domain %s is not valid, ignoring" % domain)
#
# Message handling functions
#
def debug(level, domain, msg, fn = None):
bb.event.fire(MsgDebug(msg, None))
if not domain:
domain = 'default'
if debug_level[domain] >= level:
print 'DEBUG: ' + msg
def note(level, domain, msg, fn = None):
bb.event.fire(MsgNote(msg, None))
if not domain:
domain = 'default'
if level == 1 or verbose or debug_level[domain] >= 1:
print 'NOTE: ' + msg
std_note(msg)
def warn(domain, msg, fn = None):
bb.event.fire(MsgWarn(msg, None))
print 'WARNING: ' + msg
std_warn(msg)
def error(domain, msg, fn = None):
bb.event.fire(MsgError(msg, None))
print 'ERROR: ' + msg
std_error(msg)
def fatal(domain, msg, fn = None):
bb.event.fire(MsgFatal(msg, None))
std_fatal(msg)
#
# Compatibility functions for the original message interface
#
def std_debug(lvl, msg):
if debug_level['default'] >= lvl:
print 'DEBUG: ' + msg
def std_note(msg):
print 'NOTE: ' + msg
def std_warn(msg):
print 'WARNING: ' + msg
def std_error(msg):
print 'ERROR: ' + msg
def std_fatal(msg):
print 'ERROR: ' + msg
sys.exit(1)
def plain(msg, fn = None):
bb.event.fire(MsgPlain(msg, None))
print msg
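Call sites elsewhere in this diff use the API above like so (the messages are invented):

bb.msg.debug(2, bb.msg.domain.Fetcher, "checking for a mirror tarball")
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch http://example.org/src.tar.gz")
bb.msg.warn(bb.msg.domain.Fetcher, "md5sum tool missing, using bb.utils")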


@@ -50,10 +50,6 @@ def cached_mtime_noerror(f):
return 0
return __mtime_cache[f]
def update_mtime(f):
__mtime_cache[f] = os.stat(f)[8]
return __mtime_cache[f]
def mark_dependency(d, f):
if f.startswith('./'):
f = "%s/%s" % (os.getcwd(), f[2:])


@@ -0,0 +1,188 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""class for handling .bb files (using a C++ parser)
Reads a .bb file and obtains its metadata (using a C++ parser)
Copyright (C) 2006 Tim Robert Ansell
Copyright (C) 2006 Holger Hans Peter Freyther
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; either version 2 of the License, or (at your option) any later
version.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
import os, sys
# The Module we will use here
import bb
from bitbakec import parsefile
#
# This is the Python Part of the Native Parser Implementation.
# We parse .bb, .bbclass, .inc and .conf files, deferring the
# actual parsing to the C++ extension.
# supports, init and handle are the public methods used by
# parser module
#
# The rest of the methods are internal implementation details.
def _init(fn, d):
"""
Initialize the data implementation with values of
the environment and data from the file.
"""
pass
#
# public
#
def supports(fn, data):
return fn[-3:] == ".bb" or fn[-8:] == ".bbclass" or fn[-4:] == ".inc" or fn[-5:] == ".conf"
def init(fn, data):
if not bb.data.getVar('TOPDIR', data):
bb.data.setVar('TOPDIR', os.getcwd(), data)
if not bb.data.getVar('BBPATH', data):
bb.data.setVar('BBPATH', os.path.join(sys.prefix, 'share', 'bitbake'), data)
def handle_inherit(d):
"""
Handle inheriting of classes. This will load all default classes.
It could be faster, it could detect infinite loops but this is todo
Also this delayed loading of bb.parse could impose a penalty
"""
from bb.parse import handle
files = (data.getVar('INHERIT', d, True) or "").split()
if not "base" in i:
files[0:0] = ["base"]
__inherit_cache = data.getVar('__inherit_cache', d) or []
for f in files:
file = data.expand(f, d)
if file[0] != "/" and file[-8:] != ".bbclass":
file = os.path.join('classes', '%s.bbclass' % file)
if not file in __inherit_cache:
debug(2, "BB %s:%d: inheriting %s" % (fn, lineno, file))
__inherit_cache.append( file )
try:
handle(file, d, True)
except IOError:
print "Failed to inherit %s" % file
data.setVar('__inherit_cache', __inherit_cache, d)
def handle(fn, d, include):
from bb import data, parse
(root, ext) = os.path.splitext(os.path.basename(fn))
base_name = "%s%s" % (root,ext)
# initialize with some data
init(fn,d)
# check if we include or are the beginning
oldfile = None
if include:
oldfile = d.getVar('FILE', False)
is_conf = False
elif ext == ".conf":
is_conf = True
data.inheritFromOS(d)
# find the file
if not os.path.isabs(fn):
abs_fn = bb.which(d.getVar('BBPATH', True), fn)
else:
abs_fn = fn
# check if the file exists
if not os.path.exists(abs_fn):
raise IOError("file '%(fn)s' not found" % locals() )
# now we know the file is around mark it as dep
if include:
parse.mark_dependency(d, abs_fn)
# manipulate the bbpath
if ext != ".bbclass" and ext != ".conf":
old_bb_path = data.getVar('BBPATH', d)
data.setVar('BBPATH', os.path.dirname(abs_fn) + (":%s" %old_bb_path) , d)
# handle INHERITS and base inherit
if ext != ".bbclass" and ext != ".conf":
data.setVar('FILE', fn, d)
handle_inherit(d)
# now parse this file - by deferring it to C++
parsefile(abs_fn, d, is_conf)
# Finish it up
if include == 0:
data.expandKeys(d)
data.update_data(d)
#### !!! XXX Finish it up by executing the anonfunc
# restore the original FILE
if oldfile:
d.setVar('FILE', oldfile)
# restore bbpath
if ext != ".bbclass" and ext != ".conf":
data.setVar('BBPATH', old_bb_path, d )
return d
# Needed for BitBake files...
__pkgsplit_cache__={}
def vars_from_file(mypkg, d):
if not mypkg:
return (None, None, None)
if mypkg in __pkgsplit_cache__:
return __pkgsplit_cache__[mypkg]
myfile = os.path.splitext(os.path.basename(mypkg))
parts = myfile[0].split('_')
__pkgsplit_cache__[mypkg] = parts
exp = 3 - len(parts)
tmplist = []
while exp != 0:
exp -= 1
tmplist.append(None)
parts.extend(tmplist)
return parts
# Inform bitbake that we are a parser
# We need to define all three
from bb.parse import handlers
handlers.append( {'supports' : supports, 'handle': handle, 'init' : init})
del handlers
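vars_from_file above pads the usual name_version_revision filename split out to three fields; a standalone mirror without the cache behaves like this:

import os

def split_pkg(mypkg):
    parts = os.path.splitext(os.path.basename(mypkg))[0].split('_')
    parts.extend([None] * (3 - len(parts)))
    return parts

print split_pkg("db_4.3.29_r1.bb")   # ['db', '4.3.29', 'r1']
print split_pkg("db.bb")             # ['db', None, None]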


@@ -0,0 +1,36 @@
build: bitbakec.so
echo "Done"
bitbakescanner.cc: bitbakescanner.l
flex -t bitbakescanner.l > bitbakescanner.cc
bitbakeparser.cc: bitbakeparser.y python_output.h
lemon bitbakeparser.y
mv bitbakeparser.c bitbakeparser.cc
bitbakec.c: bitbakec.pyx
pyrexc bitbakec.pyx
bitbakec-processed.c: bitbakec.c
cat bitbakec.c | sed -e"s/__pyx_f_8bitbakec_//" > bitbakec-processed.c
bitbakec.o: bitbakec-processed.c
gcc -c bitbakec-processed.c -o bitbakec.o -fPIC -I/usr/include/python2.4
bitbakeparser.o: bitbakeparser.cc
g++ -c bitbakeparser.cc -fPIC -I/usr/include/python2.4
bitbakescanner.o: bitbakescanner.cc
g++ -c bitbakescanner.cc -fPIC -I/usr/include/python2.4
bitbakec.so: bitbakec.o bitbakeparser.o bitbakescanner.o
g++ -shared -fPIC bitbakeparser.o bitbakescanner.o bitbakec.o -o bitbakec.so
clean:
rm -f *.out
rm -f *.cc
rm -f bitbakec.c
rm -f bitbakec-processed.c
rm -f *.o
rm -f *.so


@@ -0,0 +1,12 @@
To ease portability (lemon, flex, etc.) we keep the
output of flex and lemon in the source tree. We agree
not to change the generated scanner and parser by hand.
How we create the files:
flex -t bitbakescanner.l > bitbakescanner.cc
lemon bitbakeparser.y
mv bitbakeparser.c bitbakeparser.cc
Now manually create two files


@@ -0,0 +1,28 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2006 Holger Hans Peter Freyther
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
# THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
__version__ = '0.1'
__all__ = [ 'BBHandler' ]
import BBHandler


@@ -0,0 +1,253 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
cdef extern from "stdio.h":
ctypedef int FILE
FILE *fopen(char*, char*)
int fclose(FILE *fp)
cdef extern from "string.h":
int strlen(char*)
cdef extern from "lexerc.h":
ctypedef struct lex_t:
void* parser
void* scanner
char* name
FILE* file
int config
void* data
int lineError
int errorParse
cdef extern int parse(FILE*, char*, object, int)
def parsefile(object file, object data, object config):
#print "parsefile: 1", file, data
# Open the file
cdef FILE* f
f = fopen(file, "r")
#print "parsefile: 2 opening file"
if (f == NULL):
raise IOError("No such file %s." % file)
#print "parsefile: 3 parse"
parse(f, file, data, config)
# Close the file
fclose(f)
cdef public void e_assign(lex_t* container, char* key, char* what):
#print "e_assign", key, what
if what == NULL:
print "FUTURE Warning empty string: use \"\""
what = ""
d = <object>container.data
d.setVar(key, what)
cdef public void e_export(lex_t* c, char* what):
#print "e_export", what
#exp:
# bb.data.setVarFlag(key, "export", 1, data)
d = <object>c.data
d.setVarFlag(what, "export", 1)
cdef public void e_immediate(lex_t* c, char* key, char* what):
#print "e_immediate", key, what
#colon:
# val = bb.data.expand(groupd["value"], data)
d = <object>c.data
d.setVar(key, d.expand(what,d))
cdef public void e_cond(lex_t* c, char* key, char* what):
#print "e_cond", key, what
#ques:
# val = bb.data.getVar(key, data)
# if val == None:
# val = groupd["value"]
if what == NULL:
print "FUTURE warning: Use \"\" for", key
what = ""
d = <object>c.data
d.setVar(key, (d.getVar(key,False) or what))
cdef public void e_prepend(lex_t* c, char* key, char* what):
#print "e_prepend", key, what
#prepend:
# val = "%s %s" % (groupd["value"], (bb.data.getVar(key, data) or ""))
d = <object>c.data
d.setVar(key, what + " " + (d.getVar(key,0) or ""))
cdef public void e_append(lex_t* c, char* key, char* what):
#print "e_append", key, what
#append:
# val = "%s %s" % ((bb.data.getVar(key, data) or ""), groupd["value"])
d = <object>c.data
d.setVar(key, (d.getVar(key,0) or "") + " " + what)
cdef public void e_precat(lex_t* c, char* key, char* what):
#print "e_precat", key, what
#predot:
# val = "%s%s" % (groupd["value"], (bb.data.getVar(key, data) or ""))
d = <object>c.data
d.setVar(key, what + (d.getVar(key,0) or ""))
cdef public void e_postcat(lex_t* c, char* key, char* what):
#print "e_postcat", key, what
#postdot:
# val = "%s%s" % ((bb.data.getVar(key, data) or ""), groupd["value"])
d = <object>c.data
d.setVar(key, (d.getVar(key,0) or "") + what)
cdef public int e_addtask(lex_t* c, char* name, char* before, char* after) except -1:
#print "e_addtask", name
# func = m.group("func")
# before = m.group("before")
# after = m.group("after")
# if func is None:
# return
# var = "do_" + func
#
# data.setVarFlag(var, "task", 1, d)
#
# if after is not None:
# # set up deps for function
# data.setVarFlag(var, "deps", after.split(), d)
# if before is not None:
# # set up things that depend on this func
# data.setVarFlag(var, "postdeps", before.split(), d)
# return
if c.config == 1:
from bb.parse import ParseError
raise ParseError("No tasks allowed in config files")
return -1
d = <object>c.data
do = "do_%s" % name
d.setVarFlag(do, "task", 1)
if before != NULL and strlen(before) > 0:
#print "Before", before
d.setVarFlag(do, "postdeps", ("%s" % before).split())
if after != NULL and strlen(after) > 0:
#print "After", after
d.setVarFlag(do, "deps", ("%s" % after).split())
return 0
cdef public int e_addhandler(lex_t* c, char* h) except -1:
#print "e_addhandler", h
# data.setVarFlag(h, "handler", 1, d)
if c.config == 1:
from bb.parse import ParseError
raise ParseError("No handlers allowed in config files")
return -1
d = <object>c.data
d.setVarFlag(h, "handler", 1)
return 0
cdef public int e_export_func(lex_t* c, char* function) except -1:
#print "e_export_func", function
if c.config == 1:
from bb.parse import ParseError
raise ParseError("No functions allowed in config files")
return -1
return 0
cdef public int e_inherit(lex_t* c, char* file) except -1:
#print "e_inherit", file
if c.config == 1:
from bb.parse import ParseError
raise ParseError("No inherits allowed in config files")
return -1
return 0
cdef public void e_include(lex_t* c, char* file):
from bb.parse import handle
d = <object>c.data
try:
handle(d.expand(file,d), d, True)
except IOError:
print "Could not include file", file
cdef public int e_require(lex_t* c, char* file) except -1:
#print "e_require", file
from bb.parse import handle
d = <object>c.data
try:
handle(d.expand(file,d), d, True)
except IOError:
print "ParseError", file
from bb.parse import ParseError
raise ParseError("Could not include required file %s" % file)
return -1
return 0
cdef public int e_proc(lex_t* c, char* key, char* what) except -1:
#print "e_proc", key, what
if c.config == 1:
from bb.parse import ParseError
raise ParseError("No inherits allowed in config files")
return -1
return 0
cdef public int e_proc_python(lex_t* c, char* key, char* what) except -1:
#print "e_proc_python"
if c.config == 1:
from bb.parse import ParseError
raise ParseError("No pythin allowed in config files")
return -1
if key != NULL:
pass
#print "Key", key
if what != NULL:
pass
#print "What", what
return 0
cdef public int e_proc_fakeroot(lex_t* c, char* key, char* what) except -1:
#print "e_fakeroot", key, what
if c.config == 1:
from bb.parse import ParseError
raise ParseError("No fakeroot allowed in config files")
return -1
return 0
cdef public int e_def(lex_t* c, char* a, char* b, char* d) except -1:
#print "e_def", a, b, d
if c.config == 1:
from bb.parse import ParseError
raise ParseError("No defs allowed in config files")
return -1
return 0
cdef public int e_parse_error(lex_t* c) except -1:
print "e_parse_error", c.name, "line:", lineError, "parse:", errorParse
from bb.parse import ParseError
raise ParseError("There was an parse error, sorry unable to give more information at the current time. File: %s Line: %d" % (c.name,lineError) )
return -1
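A hypothetical driver for the extension above; it assumes a built bitbakec.so and the bb.data API used elsewhere in this tree:

from bitbakec import parsefile
from bb import data

d = data.init()                             # fresh datastore
parsefile("conf/bitbake.conf", d, 1)        # config=1: tasks/functions rejected
parsefile("packages/db/db_4.3.29.bb", d, 0) # recipe mode
print data.getVar("PN", d, 1)               # assuming the conf files set it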

File diff suppressed because it is too large


@@ -0,0 +1,29 @@
#define T_SYMBOL 1
#define T_VARIABLE 2
#define T_EXPORT 3
#define T_OP_ASSIGN 4
#define T_STRING 5
#define T_OP_PREDOT 6
#define T_OP_POSTDOT 7
#define T_OP_IMMEDIATE 8
#define T_OP_COND 9
#define T_OP_PREPEND 10
#define T_OP_APPEND 11
#define T_TSYMBOL 12
#define T_BEFORE 13
#define T_AFTER 14
#define T_ADDTASK 15
#define T_ADDHANDLER 16
#define T_FSYMBOL 17
#define T_EXPORT_FUNC 18
#define T_ISYMBOL 19
#define T_INHERIT 20
#define T_INCLUDE 21
#define T_REQUIRE 22
#define T_PROC_BODY 23
#define T_PROC_OPEN 24
#define T_PROC_CLOSE 25
#define T_PYTHON 26
#define T_FAKEROOT 27
#define T_DEF_BODY 28
#define T_DEF_ARGS 29


@@ -0,0 +1,179 @@
/* bbp.lemon
written by Marc Singer
6 January 2005
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
USA.
DESCRIPTION
-----------
lemon parser specification file for a BitBake input file parser.
Most of the interesting shenanigans are done in the lexer. The
BitBake grammar is not regular. In order to emit tokens that
the parser can properly interpret in LALR fashion, the lexer
manages the interpretation state. This is why there are ISYMBOLs,
SYMBOLS, and TSYMBOLS.
This parser was developed by reading the limited available
documentation for BitBake and by analyzing the available BB files.
There is no assertion of correctness to be made about this parser.
*/
%token_type {token_t}
%name bbparse
%token_prefix T_
%extra_argument {lex_t* lex}
%include {
#include "token.h"
#include "lexer.h"
#include "python_output.h"
}
%token_destructor { $$.release_this (); }
%syntax_error { e_parse_error( lex ); }
program ::= statements.
statements ::= statements statement.
statements ::= .
variable(r) ::= SYMBOL(s).
{ r.assignString( (char*)s.string() );
s.assignString( 0 );
s.release_this(); }
variable(r) ::= VARIABLE(v).
{
r.assignString( (char*)v.string() );
v.assignString( 0 );
v.release_this(); }
statement ::= EXPORT variable(s) OP_ASSIGN STRING(v).
{ e_assign( lex, s.string(), v.string() );
e_export( lex, s.string() );
s.release_this(); v.release_this(); }
statement ::= EXPORT variable(s) OP_PREDOT STRING(v).
{ e_precat( lex, s.string(), v.string() );
e_export( lex, s.string() );
s.release_this(); v.release_this(); }
statement ::= EXPORT variable(s) OP_POSTDOT STRING(v).
{ e_postcat( lex, s.string(), v.string() );
e_export( lex, s.string() );
s.release_this(); v.release_this(); }
statement ::= EXPORT variable(s) OP_IMMEDIATE STRING(v).
{ e_immediate ( lex, s.string(), v.string() );
e_export( lex, s.string() );
s.release_this(); v.release_this(); }
statement ::= EXPORT variable(s) OP_COND STRING(v).
{ e_cond( lex, s.string(), v.string() );
s.release_this(); v.release_this(); }
statement ::= variable(s) OP_ASSIGN STRING(v).
{ e_assign( lex, s.string(), v.string() );
s.release_this(); v.release_this(); }
statement ::= variable(s) OP_PREDOT STRING(v).
{ e_precat( lex, s.string(), v.string() );
s.release_this(); v.release_this(); }
statement ::= variable(s) OP_POSTDOT STRING(v).
{ e_postcat( lex, s.string(), v.string() );
s.release_this(); v.release_this(); }
statement ::= variable(s) OP_PREPEND STRING(v).
{ e_prepend( lex, s.string(), v.string() );
s.release_this(); v.release_this(); }
statement ::= variable(s) OP_APPEND STRING(v).
{ e_append( lex, s.string() , v.string() );
s.release_this(); v.release_this(); }
statement ::= variable(s) OP_IMMEDIATE STRING(v).
{ e_immediate( lex, s.string(), v.string() );
s.release_this(); v.release_this(); }
statement ::= variable(s) OP_COND STRING(v).
{ e_cond( lex, s.string(), v.string() );
s.release_this(); v.release_this(); }
task ::= TSYMBOL(t) BEFORE TSYMBOL(b) AFTER TSYMBOL(a).
{ e_addtask( lex, t.string(), b.string(), a.string() );
t.release_this(); b.release_this(); a.release_this(); }
task ::= TSYMBOL(t) AFTER TSYMBOL(a) BEFORE TSYMBOL(b).
{ e_addtask( lex, t.string(), b.string(), a.string());
t.release_this(); a.release_this(); b.release_this(); }
task ::= TSYMBOL(t).
{ e_addtask( lex, t.string(), NULL, NULL);
t.release_this();}
task ::= TSYMBOL(t) BEFORE TSYMBOL(b).
{ e_addtask( lex, t.string(), b.string(), NULL);
t.release_this(); b.release_this(); }
task ::= TSYMBOL(t) AFTER TSYMBOL(a).
{ e_addtask( lex, t.string(), NULL, a.string());
t.release_this(); a.release_this(); }
tasks ::= tasks task.
tasks ::= task.
statement ::= ADDTASK tasks.
statement ::= ADDHANDLER SYMBOL(s).
{ e_addhandler( lex, s.string()); s.release_this (); }
func ::= FSYMBOL(f). { e_export_func( lex, f.string()); f.release_this(); }
funcs ::= funcs func.
funcs ::= func.
statement ::= EXPORT_FUNC funcs.
inherit ::= ISYMBOL(i). { e_inherit( lex, i.string() ); i.release_this (); }
inherits ::= inherits inherit.
inherits ::= inherit.
statement ::= INHERIT inherits.
statement ::= INCLUDE ISYMBOL(i).
{ e_include( lex, i.string() ); i.release_this(); }
statement ::= REQUIRE ISYMBOL(i).
{ e_require( lex, i.string() ); i.release_this(); }
proc_body(r) ::= proc_body(l) PROC_BODY(b).
{ /* concatenate body lines */
r.assignString( token_t::concatString(l.string(), b.string()) );
l.release_this ();
b.release_this ();
}
proc_body(b) ::= . { b.assignString(0); }
statement ::= variable(p) PROC_OPEN proc_body(b) PROC_CLOSE.
{ e_proc( lex, p.string(), b.string() );
p.release_this(); b.release_this(); }
statement ::= PYTHON SYMBOL(p) PROC_OPEN proc_body(b) PROC_CLOSE.
{ e_proc_python ( lex, p.string(), b.string() );
p.release_this(); b.release_this(); }
statement ::= PYTHON PROC_OPEN proc_body(b) PROC_CLOSE.
{ e_proc_python( lex, NULL, b.string());
b.release_this (); }
statement ::= FAKEROOT SYMBOL(p) PROC_OPEN proc_body(b) PROC_CLOSE.
{ e_proc_fakeroot( lex, p.string(), b.string() );
p.release_this (); b.release_this (); }
def_body(r) ::= def_body(l) DEF_BODY(b).
{ /* concatenate body lines */
r.assignString( token_t::concatString(l.string(), b.string()) );
l.release_this (); b.release_this ();
}
def_body(b) ::= . { b.assignString( 0 ); }
statement ::= SYMBOL(p) DEF_ARGS(a) def_body(b).
{ e_def( lex, p.string(), a.string(), b.string());
p.release_this(); a.release_this(); b.release_this(); }
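For reference, input like the following exercises most of the productions above (content invented; shown as comments, one construct per group):

# assignments, with and without export:
#   export CFLAGS = "-O2"
#   SRC_URI += "file://fix.patch"
#   PR ?= "r0"
# tasks, handlers, exported functions, inheritance, includes:
#   addtask compile after do_configure before do_install
#   addhandler my_event_handler
#   EXPORT_FUNCTIONS do_compile
#   inherit autotools
#   require conf/common.conf
# shell and python procs:
#   do_install() {
#       install -d ${D}${bindir}
#   }
#   python do_notify() {
#       bb.note("done")
#   }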

File diff suppressed because it is too large


@@ -0,0 +1,319 @@
/* bbf.flex
written by Marc Singer
6 January 2005
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
USA.
DESCRIPTION
-----------
flex lexer specification for a BitBake input file parser.
Unfortunately, flex doesn't welcome comments within the rule sets.
I say unfortunately because this lexer is unreasonably complex and
comments would make the code much easier to comprehend.
The BitBake grammar is not regular. In order to interpret all
of the available input files, the lexer maintains much state as it
parses. There are places where this lexer will emit tokens that
are invalid. The parser will tend to catch these.
The lexer requires C++ at the moment. The only reason for this has
to do with a very small amount of managed state. Producing a C
lexer should be a reasonably easy task as long as the %reentrant
option is used.
NOTES
-----
o RVALUES. There are three kinds of RVALUES. There are unquoted
values, double quote enclosed strings, and single quote
strings. Quoted strings may contain unescaped quotes (of either
type), *and* any type may span more than one line by using a
continuation '\' at the end of the line. This requires us to
recognize all types of values with a single expression.
Moreover, the only reason to quote a value is to include
trailing or leading whitespace. Whitespace within a value is
preserved, ugh.
o CLASSES. C_ patterns define classes. Classes ought not include
a repetition operator, instead letting the reference to the class
define the repetition count.
C_SS - symbol start
C_SB - symbol body
C_SP - whitespace
*/
%option never-interactive
%option yylineno
%option noyywrap
%option reentrant stack
%{
#include "token.h"
#include "lexer.h"
#include "bitbakeparser.h"
#include <ctype.h>
extern void *bbparseAlloc(void *(*mallocProc)(size_t));
extern void bbparseFree(void *p, void (*freeProc)(void*));
extern void *bbparseAlloc(void *(*mallocProc)(size_t));
extern void *bbparse(void*, int, token_t, lex_t*);
extern void bbparseTrace(FILE *TraceFILE, char *zTracePrompt);
//static const char* rgbInput;
//static size_t cbInput;
extern "C" {
int lineError;
int errorParse;
enum {
errorNone = 0,
errorUnexpectedInput,
errorUnsupportedFeature,
};
}
#define YY_EXTRA_TYPE lex_t*
/* Read from buffer */
#define YY_INPUT(buf,result,max_size) \
{ yyextra->input(buf, &result, max_size); }
//#define YY_DECL static size_t yylex ()
#define ERROR(e) \
do { lineError = yylineno; errorParse = e; yyterminate (); } while (0)
static const char* fixup_escapes (const char* sz);
%}
C_SP [ \t]
COMMENT #.*\n
OP_ASSIGN "="
OP_PREDOT ".="
OP_POSTDOT "=."
OP_IMMEDIATE ":="
OP_PREPEND "=+"
OP_APPEND "+="
OP_COND "?="
B_OPEN "{"
B_CLOSE "}"
K_ADDTASK "addtask"
K_ADDHANDLER "addhandler"
K_AFTER "after"
K_BEFORE "before"
K_DEF "def"
K_INCLUDE "include"
K_REQUIRE "require"
K_INHERIT "inherit"
K_PYTHON "python"
K_FAKEROOT "fakeroot"
K_EXPORT "export"
K_EXPORT_FUNC "EXPORT_FUNCTIONS"
STRING \"([^\n\r]|"\\\n")*\"
SSTRING \'([^\n\r]|"\\\n")*\'
VALUE ([^'" \t\n])|([^'" \t\n]([^\n]|(\\\n))*[^'" \t\n])
C_SS [a-zA-Z_]
C_SB [a-zA-Z0-9_+-./]
REF $\{{C_SS}{C_SB}*\}
SYMBOL {C_SS}{C_SB}*
VARIABLE $?{C_SS}({C_SB}*|{REF})*(\[[a-zA-Z0-9_]*\])?
FILENAME ([a-zA-Z_./]|{REF})(([-+a-zA-Z0-9_./]*)|{REF})*
PROC \({C_SP}*\)
%s S_DEF
%s S_DEF_ARGS
%s S_DEF_BODY
%s S_FUNC
%s S_INCLUDE
%s S_INHERIT
%s S_REQUIRE
%s S_PROC
%s S_RVALUE
%s S_TASK
%%
{OP_APPEND} { BEGIN S_RVALUE;
yyextra->accept (T_OP_APPEND); }
{OP_PREPEND} { BEGIN S_RVALUE;
yyextra->accept (T_OP_PREPEND); }
{OP_IMMEDIATE} { BEGIN S_RVALUE;
yyextra->accept (T_OP_IMMEDIATE); }
{OP_ASSIGN} { BEGIN S_RVALUE;
yyextra->accept (T_OP_ASSIGN); }
{OP_PREDOT} { BEGIN S_RVALUE;
yyextra->accept (T_OP_PREDOT); }
{OP_POSTDOT} { BEGIN S_RVALUE;
yyextra->accept (T_OP_POSTDOT); }
{OP_COND} { BEGIN S_RVALUE;
yyextra->accept (T_OP_COND); }
<S_RVALUE>\\\n{C_SP}* { }
<S_RVALUE>{STRING} { BEGIN INITIAL;
size_t cb = yyleng;
while (cb && isspace (yytext[cb - 1]))
--cb;
yytext[cb - 1] = 0;
yyextra->accept (T_STRING, yytext + 1); }
<S_RVALUE>{SSTRING} { BEGIN INITIAL;
size_t cb = yyleng;
while (cb && isspace (yytext[cb - 1]))
--cb;
yytext[cb - 1] = 0;
yyextra->accept (T_STRING, yytext + 1); }
<S_RVALUE>{VALUE} { ERROR (errorUnexpectedInput); }
<S_RVALUE>{C_SP}*\n+ { BEGIN INITIAL;
yyextra->accept (T_STRING, NULL); }
{K_INCLUDE} { BEGIN S_INCLUDE;
yyextra->accept (T_INCLUDE); }
{K_REQUIRE} { BEGIN S_REQUIRE;
yyextra->accept (T_REQUIRE); }
{K_INHERIT} { BEGIN S_INHERIT;
yyextra->accept (T_INHERIT); }
{K_ADDTASK} { BEGIN S_TASK;
yyextra->accept (T_ADDTASK); }
{K_ADDHANDLER} { yyextra->accept (T_ADDHANDLER); }
{K_EXPORT_FUNC} { BEGIN S_FUNC;
yyextra->accept (T_EXPORT_FUNC); }
<S_TASK>{K_BEFORE} { yyextra->accept (T_BEFORE); }
<S_TASK>{K_AFTER} { yyextra->accept (T_AFTER); }
<INITIAL>{K_EXPORT} { yyextra->accept (T_EXPORT); }
<INITIAL>{K_FAKEROOT} { yyextra->accept (T_FAKEROOT); }
<INITIAL>{K_PYTHON} { yyextra->accept (T_PYTHON); }
{PROC}{C_SP}*{B_OPEN}{C_SP}*\n* { BEGIN S_PROC;
yyextra->accept (T_PROC_OPEN); }
<S_PROC>{B_CLOSE}{C_SP}*\n* { BEGIN INITIAL;
yyextra->accept (T_PROC_CLOSE); }
<S_PROC>([^}][^\n]*)?\n* { yyextra->accept (T_PROC_BODY, yytext); }
{K_DEF} { BEGIN S_DEF; }
<S_DEF>{SYMBOL} { BEGIN S_DEF_ARGS;
yyextra->accept (T_SYMBOL, yytext); }
<S_DEF_ARGS>[^\n:]*: { yyextra->accept (T_DEF_ARGS, yytext); }
<S_DEF_ARGS>{C_SP}*\n { BEGIN S_DEF_BODY; }
<S_DEF_BODY>{C_SP}+[^\n]*\n { yyextra->accept (T_DEF_BODY, yytext); }
<S_DEF_BODY>\n { yyextra->accept (T_DEF_BODY, yytext); }
<S_DEF_BODY>. { BEGIN INITIAL; unput (yytext[0]); }
{COMMENT} { }
<INITIAL>{SYMBOL} { yyextra->accept (T_SYMBOL, yytext); }
<INITIAL>{VARIABLE} { yyextra->accept (T_VARIABLE, yytext); }
<S_TASK>{SYMBOL} { yyextra->accept (T_TSYMBOL, yytext); }
<S_FUNC>{SYMBOL} { yyextra->accept (T_FSYMBOL, yytext); }
<S_INHERIT>{SYMBOL} { yyextra->accept (T_ISYMBOL, yytext); }
<S_INCLUDE>{FILENAME} { BEGIN INITIAL;
yyextra->accept (T_ISYMBOL, yytext); }
<S_REQUIRE>{FILENAME} { BEGIN INITIAL;
yyextra->accept (T_ISYMBOL, yytext); }
<S_TASK>\n { BEGIN INITIAL; }
<S_FUNC>\n { BEGIN INITIAL; }
<S_INHERIT>\n { BEGIN INITIAL; }
[ \t\r\n] /* Insignificant whitespace */
. { ERROR (errorUnexpectedInput); }
/* Check for premature termination */
<<EOF>> { return T_EOF; }
%%
void lex_t::accept (int token, const char* sz)
{
token_t t;
memset (&t, 0, sizeof (t));
t.copyString(sz);
/* tell lemon to parse the token */
parse (parser, token, t, this);
}
void lex_t::input (char *buf, int *result, int max_size)
{
/* printf("lex_t::input %p %d\n", buf, max_size); */
*result = fread(buf, 1, max_size, file);
/* printf("lex_t::input result %d\n", *result); */
}
int lex_t::line ()const
{
/* printf("lex_t::line\n"); */
return yyget_lineno (scanner);
}
extern "C" {
void parse (FILE* file, char* name, PyObject* data, int config)
{
/* printf("parse bbparseAlloc\n"); */
void* parser = bbparseAlloc (malloc);
yyscan_t scanner;
lex_t lex;
/* printf("parse yylex_init\n"); */
yylex_init (&scanner);
lex.parser = parser;
lex.scanner = scanner;
lex.file = file;
lex.name = name;
lex.data = data;
lex.config = config;
lex.parse = bbparse;
/*printf("parse yyset_extra\n"); */
yyset_extra (&lex, scanner);
/* printf("parse yylex\n"); */
int result = yylex (scanner);
/* printf("parse result %d\n", result); */
lex.accept (0);
/* printf("parse lex.accept\n"); */
bbparseTrace (NULL, NULL);
/* printf("parse bbparseTrace\n"); */
if (result != T_EOF)
printf ("premature end of file\n");
yylex_destroy (scanner);
bbparseFree (parser, free);
}
}

@@ -0,0 +1,48 @@
/*
Copyright (C) 2005 Holger Hans Peter Freyther
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef LEXER_H
#define LEXER_H
#include "Python.h"
extern "C" {
struct lex_t {
void* parser;
void* scanner;
FILE* file;
char *name;
PyObject *data;
int config;
void* (*parse)(void*, int, token_t, lex_t*);
void accept(int token, const char* sz = NULL);
void input(char *buf, int *result, int max_size);
int line()const;
};
}
#endif

@@ -0,0 +1,19 @@
#ifndef LEXERC_H
#define LEXERC_H
#include <stdio.h>
extern int lineError;
extern int errorParse;
typedef struct {
void *parser;
void *scanner;
FILE *file;
char *name;
PyObject *data;
int config;
} lex_t;
#endif

@@ -0,0 +1,56 @@
#ifndef PYTHON_OUTPUT_H
#define PYTHON_OUTPUT_H
/*
Copyright (C) 2006 Holger Hans Peter Freyther
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
THE USE OR OTHER DEALINGS IN THE SOFTWARE.
This is the glue:
It will be called from the lemon grammar and will call into
python to set certain things.
*/
extern "C" {
struct lex_t;
extern void e_assign(lex_t*, const char*, const char*);
extern void e_export(lex_t*, const char*);
extern void e_immediate(lex_t*, const char*, const char*);
extern void e_cond(lex_t*, const char*, const char*);
extern void e_prepend(lex_t*, const char*, const char*);
extern void e_append(lex_t*, const char*, const char*);
extern void e_precat(lex_t*, const char*, const char*);
extern void e_postcat(lex_t*, const char*, const char*);
extern void e_addtask(lex_t*, const char*, const char*, const char*);
extern void e_addhandler(lex_t*,const char*);
extern void e_export_func(lex_t*, const char*);
extern void e_inherit(lex_t*, const char*);
extern void e_include(lex_t*, const char*);
extern void e_require(lex_t*, const char*);
extern void e_proc(lex_t*, const char*, const char*);
extern void e_proc_python(lex_t*, const char*, const char*);
extern void e_proc_fakeroot(lex_t*, const char*, const char*);
extern void e_def(lex_t*, const char*, const char*, const char*);
extern void e_parse_error(lex_t*);
}
#endif // PYTHON_OUTPUT_H

@@ -0,0 +1,96 @@
/*
Copyright (C) 2005 Holger Hans Peter Freyther
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef TOKEN_H
#define TOKEN_H
#include <ctype.h>
#include <string.h>
#define PURE_METHOD
/**
* Special Value for End Of File Handling. We set it to
* 1001 so we can have up to 1000 Terminal Symbols in the
* grammar. Currently we have around 20.
*/
#define T_EOF 1001
struct token_t {
const char* string()const PURE_METHOD;
static char* concatString(const char* l, const char* r);
void assignString(char* str);
void copyString(const char* str);
void release_this();
private:
char *m_string;
size_t m_stringLen;
};
inline const char* token_t::string()const
{
return m_string;
}
/*
* Concatenate l and r into a newly allocated string; the caller owns the result.
*/
inline char* token_t::concatString(const char* l, const char* r)
{
size_t cb = (l ? strlen (l) : 0) + strlen (r) + 1;
char *r_sz = new char[cb];
*r_sz = 0;
if (l)
strcat (r_sz, l);
strcat (r_sz, r);
return r_sz;
}
inline void token_t::assignString(char* str)
{
m_string = str;
m_stringLen = str ? strlen(str) : 0;
}
inline void token_t::copyString(const char* str)
{
if( str ) {
m_stringLen = strlen(str);
m_string = new char[m_stringLen+1];
strcpy(m_string, str);
}
}
inline void token_t::release_this()
{
delete [] m_string;
m_string = 0;
}
#endif

@@ -72,9 +72,9 @@ def inherit(files, d):
if not file in __inherit_cache:
bb.msg.debug(2, bb.msg.domain.Parsing, "BB %s:%d: inheriting %s" % (fn, lineno, file))
__inherit_cache.append( file )
data.setVar('__inherit_cache', __inherit_cache, d)
include(fn, file, d, "inherit")
__inherit_cache = data.getVar('__inherit_cache', d) or []
data.setVar('__inherit_cache', __inherit_cache, d)
def handle(fn, d, include = 0):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __infunc__, __body__, __residue__
@@ -95,10 +95,6 @@ def handle(fn, d, include = 0):
if ext == ".bbclass":
__classname__ = root
classes.append(__classname__)
__inherit_cache = data.getVar('__inherit_cache', d) or []
if not fn in __inherit_cache:
__inherit_cache.append(fn)
data.setVar('__inherit_cache', __inherit_cache, d)
if include != 0:
oldfile = data.getVar('FILE', d)
@@ -122,16 +118,18 @@ def handle(fn, d, include = 0):
abs_fn = fn
if ext != ".bbclass":
dname = os.path.dirname(abs_fn)
if bbpath[0] != dname:
bbpath.insert(0, dname)
data.setVar('BBPATH', ":".join(bbpath), d)
bbpath.insert(0, os.path.dirname(abs_fn))
data.setVar('BBPATH', ":".join(bbpath), d)
if include:
bb.parse.mark_dependency(d, abs_fn)
if ext != ".bbclass":
data.setVar('FILE', fn, d)
i = (data.getVar("INHERIT", d, 1) or "").split()
if not "base" in i and __classname__ != "base":
i[0:0] = ["base"]
inherit(i, d)
lineno = 0
while 1:
@@ -163,7 +161,7 @@ def handle(fn, d, include = 0):
if t:
data.setVar('T', t, d)
except Exception, e:
bb.msg.debug(1, bb.msg.domain.Parsing, "Exception when executing anonymous function: %s" % e)
bb.msg.debug(1, bb.msg.domain.Parsing, "executing anonymous function: %s" % e)
raise
data.delVar("__anonqueue", d)
data.delVar("__anonfunc", d)
@@ -173,11 +171,24 @@ def handle(fn, d, include = 0):
all_handlers = {}
for var in data.getVar('__BBHANDLERS', d) or []:
# try to add the handler
# if we added it remember the choice
handler = data.getVar(var,d)
bb.event.register(var, handler)
if bb.event.register(var,handler) == bb.event.Registered:
all_handlers[var] = handler
tasklist = data.getVar('__BBTASKS', d) or []
bb.build.add_tasks(tasklist, d)
for var in data.getVar('__BBTASKS', d) or []:
deps = data.getVarFlag(var, 'deps', d) or []
postdeps = data.getVarFlag(var, 'postdeps', d) or []
bb.build.add_task(var, deps, d)
for p in postdeps:
pdeps = data.getVarFlag(p, 'deps', d) or []
pdeps.append(var)
data.setVarFlag(p, 'deps', pdeps, d)
bb.build.add_task(p, pdeps, d)
# now add the handlers
if not len(all_handlers) == 0:
data.setVar('__all_handlers__', all_handlers, d)
bbpath.pop(0)
if oldfile:
@@ -323,23 +334,15 @@ def feeder(lineno, s, fn, root, d):
data.setVarFlag(var, "task", 1, d)
bbtasks = data.getVar('__BBTASKS', d) or []
if not var in bbtasks:
bbtasks.append(var)
bbtasks.append(var)
data.setVar('__BBTASKS', bbtasks, d)
existing = data.getVarFlag(var, "deps", d) or []
if after is not None:
# set up deps for function
for entry in after.split():
if entry not in existing:
existing.append(entry)
data.setVarFlag(var, "deps", existing, d)
# set up deps for function
data.setVarFlag(var, "deps", after.split(), d)
if before is not None:
# set up things that depend on this func
for entry in before.split():
existing = data.getVarFlag(entry, "deps", d) or []
if var not in existing:
data.setVarFlag(entry, "deps", [var] + existing, d)
# set up things that depend on this func
data.setVarFlag(var, "postdeps", before.split(), d)
return
m = __addhandler_regexp__.match(s)
@@ -374,8 +377,6 @@ def vars_from_file(mypkg, d):
myfile = os.path.splitext(os.path.basename(mypkg))
parts = myfile[0].split('_')
__pkgsplit_cache__[mypkg] = parts
if len(parts) > 3:
raise ParseError("Unable to generate default variables from the filename: %s (too many underscores)" % mypkg)
exp = 3 - len(parts)
tmplist = []
while exp != 0:
@@ -387,27 +388,25 @@ def vars_from_file(mypkg, d):
def set_additional_vars(file, d, include):
"""Deduce rest of variables, e.g. ${A} out of ${SRC_URI}"""
return
# Nothing seems to use this variable
#bb.msg.debug(2, bb.msg.domain.Parsing, "BB %s: set_additional_vars" % file)
bb.msg.debug(2, bb.msg.domain.Parsing, "BB %s: set_additional_vars" % file)
#src_uri = data.getVar('SRC_URI', d, 1)
#if not src_uri:
# return
src_uri = data.getVar('SRC_URI', d, 1)
if not src_uri:
return
#a = (data.getVar('A', d, 1) or '').split()
a = (data.getVar('A', d, 1) or '').split()
#from bb import fetch
#try:
# ud = fetch.init(src_uri.split(), d)
# a += fetch.localpaths(d, ud)
#except fetch.NoMethodError:
# pass
#except bb.MalformedUrl,e:
# raise ParseError("Unable to generate local paths for SRC_URI due to malformed uri: %s" % e)
#del fetch
from bb import fetch
try:
fetch.init(src_uri.split(), d)
except fetch.NoMethodError:
pass
except bb.MalformedUrl,e:
raise ParseError("Unable to generate local paths for SRC_URI due to malformed uri: %s" % e)
#data.setVar('A', " ".join(a), d)
a += fetch.localpaths(d)
del fetch
data.setVar('A', " ".join(a), d)
# Add us to the handlers list

@@ -31,7 +31,6 @@ from bb.parse import ParseError
__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?P<apo>['\"]?)(?P<value>.*)(?P=apo)$")
__include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+(.+)" )
def init(data):
if not bb.data.getVar('TOPDIR', data):
@@ -46,17 +45,14 @@ def localpath(fn, d):
if os.path.exists(fn):
return fn
if "://" not in fn:
return fn
localfn = None
try:
localfn = bb.fetch.localpath(fn, d, False)
localfn = bb.fetch.localpath(fn, d)
except bb.MalformedUrl:
pass
if not localfn:
return fn
localfn = fn
return localfn
def obtain(fn, data):
@@ -71,7 +67,7 @@ def obtain(fn, data):
return localfn
bb.mkdirhier(dldir)
try:
bb.fetch.init([fn], data)
bb.fetch.init([fn])
except bb.fetch.NoMethodError:
(type, value, traceback) = sys.exc_info()
bb.msg.debug(1, bb.msg.domain.Parsing, "obtain: no method: %s" % value)
@@ -165,12 +161,6 @@ def handle(fn, data, include = 0):
return data
def feeder(lineno, s, fn, data):
def getFunc(groupd, key, data):
if 'flag' in groupd and groupd['flag'] != None:
return bb.data.getVarFlag(key, groupd['flag'], data)
else:
return bb.data.getVar(key, data)
m = __config_regexp__.match(s)
if m:
groupd = m.groupdict()
@@ -178,21 +168,19 @@ def feeder(lineno, s, fn, data):
if "exp" in groupd and groupd["exp"] != None:
bb.data.setVarFlag(key, "export", 1, data)
if "ques" in groupd and groupd["ques"] != None:
val = getFunc(groupd, key, data)
val = bb.data.getVar(key, data)
if val == None:
val = groupd["value"]
elif "colon" in groupd and groupd["colon"] != None:
e = data.createCopy()
bb.data.update_data(e)
val = bb.data.expand(groupd["value"], e)
val = bb.data.expand(groupd["value"], data)
elif "append" in groupd and groupd["append"] != None:
val = "%s %s" % ((getFunc(groupd, key, data) or ""), groupd["value"])
val = "%s %s" % ((bb.data.getVar(key, data) or ""), groupd["value"])
elif "prepend" in groupd and groupd["prepend"] != None:
val = "%s %s" % (groupd["value"], (getFunc(groupd, key, data) or ""))
val = "%s %s" % (groupd["value"], (bb.data.getVar(key, data) or ""))
elif "postdot" in groupd and groupd["postdot"] != None:
val = "%s%s" % ((getFunc(groupd, key, data) or ""), groupd["value"])
val = "%s%s" % ((bb.data.getVar(key, data) or ""), groupd["value"])
elif "predot" in groupd and groupd["predot"] != None:
val = "%s%s" % (groupd["value"], (getFunc(groupd, key, data) or ""))
val = "%s%s" % (groupd["value"], (bb.data.getVar(key, data) or ""))
else:
val = groupd["value"]
if 'flag' in groupd and groupd['flag'] != None:
@@ -215,11 +203,6 @@ def feeder(lineno, s, fn, data):
include(fn, s, data, "include required")
return
m = __export_regexp__.match(s)
if m:
bb.data.setVarFlag(m.group(1), "export", 1, data)
return
raise ParseError("%s:%d: unparsed line: '%s'" % (fn, lineno, s))
# Add us to the handlers list

@@ -1,110 +0,0 @@
# BitBake Persistent Data Store
#
# Copyright (C) 2007 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import bb, os
try:
import sqlite3
except ImportError:
try:
from pysqlite2 import dbapi2 as sqlite3
except ImportError:
bb.msg.fatal(bb.msg.domain.PersistData, "Importing sqlite3 and pysqlite2 failed, please install one of them. Python 2.5 or a 'python-pysqlite2' like package is likely to be what you need.")
sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
bb.msg.fatal(bb.msg.domain.PersistData, "sqlite3 version 3.3.0 or later is required.")
class PersistData:
"""
BitBake Persistent Data Store
Used to store data in a central location such that other threads/tasks can
access them at some future date.
The "domain" is used as a key to isolate each data pool and in this
implementation corresponds to an SQL table. The SQL table consists of a
simple key and value pair.
Why sqlite? It handles all the locking issues for us.
"""
def __init__(self, d):
self.cachedir = bb.data.getVar("PERSISTENT_DIR", d, True) or bb.data.getVar("CACHE", d, True)
if self.cachedir in [None, '']:
bb.msg.fatal(bb.msg.domain.PersistData, "Please set the 'PERSISTENT_DIR' or 'CACHE' variable.")
try:
os.stat(self.cachedir)
except OSError:
bb.mkdirhier(self.cachedir)
self.cachefile = os.path.join(self.cachedir,"bb_persist_data.sqlite3")
bb.msg.debug(1, bb.msg.domain.PersistData, "Using '%s' as the persistent data cache" % self.cachefile)
self.connection = sqlite3.connect(self.cachefile, timeout=5, isolation_level=None)
def addDomain(self, domain):
"""
Should be called before any domain is used
Creates it if it doesn't exist.
"""
self.connection.execute("CREATE TABLE IF NOT EXISTS %s(key TEXT, value TEXT);" % domain)
def delDomain(self, domain):
"""
Removes a domain and all the data it contains
"""
self.connection.execute("DROP TABLE IF EXISTS %s;" % domain)
def getValue(self, domain, key):
"""
Return the value of a key for a domain
"""
data = self.connection.execute("SELECT * from %s where key=?;" % domain, [key])
for row in data:
return row[1]
def setValue(self, domain, key, value):
"""
Sets the value of a key for a domain
"""
data = self.connection.execute("SELECT * from %s where key=?;" % domain, [key])
rows = 0
for row in data:
rows = rows + 1
if rows:
self._execute("UPDATE %s SET value=? WHERE key=?;" % domain, [value, key])
else:
self._execute("INSERT into %s(key, value) values (?, ?);" % domain, [key, value])
def delValue(self, domain, key):
"""
Deletes a key/value pair
"""
self._execute("DELETE from %s where key=?;" % domain, [key])
def _execute(self, *query):
while True:
try:
self.connection.execute(*query)
return
except sqlite3.OperationalError, e:
if 'database is locked' in str(e):
continue
raise
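As a hedged aside, here is a minimal usage sketch of the PersistData store defined above. The module path bb.persist_data, the cache directory, and the domain/key names are assumptions for illustration; bb.data.init() is used to obtain a fresh datastore.

import bb.data
from bb.persist_data import PersistData   # assumed module path

d = bb.data.init()                              # fresh BitBake datastore
bb.data.setVar("CACHE", "/tmp/poky-cache", d)   # example cache location

pd = PersistData(d)                 # opens <CACHE>/bb_persist_data.sqlite3
pd.addDomain("EXAMPLEDOMAIN")       # hypothetical domain -> one SQL table
pd.setValue("EXAMPLEDOMAIN", "last-run", "success")
print pd.getValue("EXAMPLEDOMAIN", "last-run")   # -> "success"
pd.delValue("EXAMPLEDOMAIN", "last-run")
pd.delDomain("EXAMPLEDOMAIN")       # drops the table again

sqlite handles the cross-process locking here, which is exactly why the class uses it.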

@@ -31,12 +31,12 @@ class NoProvider(Exception):
class NoRProvider(Exception):
"""Exception raised when no provider of a runtime dependency can be found"""
def sortPriorities(pn, dataCache, pkg_pn = None):
def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
"""
Reorder pkg_pn by file priority and default preference
If there is a PREFERRED_VERSION, find the highest-priority bbfile
providing that version. If not, find the latest version provided by
a bbfile in the highest-priority set.
"""
if not pkg_pn:
pkg_pn = dataCache.pkg_pn
@@ -44,61 +44,36 @@ def sortPriorities(pn, dataCache, pkg_pn = None):
priorities = {}
for f in files:
priority = dataCache.bbfile_priority[f]
preference = dataCache.pkg_dp[f]
if priority not in priorities:
priorities[priority] = {}
if preference not in priorities[priority]:
priorities[priority][preference] = []
priorities[priority][preference].append(f)
pri_list = priorities.keys()
pri_list.sort(lambda a, b: a - b)
priorities[priority] = []
priorities[priority].append(f)
p_list = priorities.keys()
p_list.sort(lambda a, b: a - b)
tmp_pn = []
for pri in pri_list:
pref_list = priorities[pri].keys()
pref_list.sort(lambda a, b: b - a)
tmp_pref = []
for pref in pref_list:
tmp_pref.extend(priorities[pri][pref])
tmp_pn = [tmp_pref] + tmp_pn
return tmp_pn
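To make the ordering concrete, here is a self-contained sketch of the two-level bucketing performed by the priority/preference variant of sortPriorities above. File names and numbers are purely illustrative; the real code reads dataCache.bbfile_priority and dataCache.pkg_dp.

priorities = {}
for f, pri, pref in [("a_1.0.bb", 5, 0), ("a_1.1.bb", 5, -1), ("b_2.0.bb", 10, 0)]:
    priorities.setdefault(pri, {}).setdefault(pref, []).append(f)

tmp_pn = []
for pri in sorted(priorities.keys()):                 # ascending priority...
    tmp_pref = []
    for pref in sorted(priorities[pri].keys(), reverse=True):
        tmp_pref.extend(priorities[pri][pref])        # highest preference first
    tmp_pn = [tmp_pref] + tmp_pn                      # ...prepended, so highest priority ends up first
print tmp_pn   # [['b_2.0.bb'], ['a_1.0.bb', 'a_1.1.bb']]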
def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
"""
Find the first provider in pkg_pn with a PREFERRED_VERSION set.
"""
for p in p_list:
tmp_pn = [priorities[p]] + tmp_pn
preferred_file = None
preferred_ver = None
localdata = data.createCopy(cfgData)
bb.data.setVar('OVERRIDES', "pn-%s:%s:%s" % (pn, pn, data.getVar('OVERRIDES', localdata)), localdata)
bb.data.setVar('OVERRIDES', "%s:%s" % (pn, data.getVar('OVERRIDES', localdata)), localdata)
bb.data.update_data(localdata)
preferred_v = bb.data.getVar('PREFERRED_VERSION_%s' % pn, localdata, True)
if preferred_v:
m = re.match('(\d+:)*(.*)(_.*)*', preferred_v)
m = re.match('(.*)_(.*)', preferred_v)
if m:
if m.group(1):
preferred_e = int(m.group(1)[:-1])
else:
preferred_e = None
preferred_v = m.group(2)
if m.group(3):
preferred_r = m.group(3)[1:]
else:
preferred_r = None
preferred_v = m.group(1)
preferred_r = m.group(2)
else:
preferred_e = None
preferred_r = None
for file_set in pkg_pn:
for file_set in tmp_pn:
for f in file_set:
pe,pv,pr = dataCache.pkg_pepvpr[f]
if preferred_v == pv and (preferred_r == pr or preferred_r == None) and (preferred_e == pe or preferred_e == None):
pv,pr = dataCache.pkg_pvpr[f]
if preferred_v == pv and (preferred_r == pr or preferred_r == None):
preferred_file = f
preferred_ver = (pe, pv, pr)
preferred_ver = (pv, pr)
break
if preferred_file:
break;
@@ -106,8 +81,6 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
pv_str = '%s-%s' % (preferred_v, preferred_r)
else:
pv_str = preferred_v
if not (preferred_e is None):
pv_str = '%s:%s' % (preferred_e, pv_str)
itemstr = ""
if item:
itemstr = " (for item %s)" % item
@@ -116,62 +89,37 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
else:
bb.msg.debug(1, bb.msg.domain.Provider, "selecting %s as PREFERRED_VERSION %s of package %s%s" % (preferred_file, pv_str, pn, itemstr))
return (preferred_ver, preferred_file)
del localdata
def findLatestProvider(pn, cfgData, dataCache, file_set):
"""
Return the highest version of the providers in file_set.
Take default preferences into account.
"""
# get highest priority file set
files = tmp_pn[0]
latest = None
latest_p = 0
latest_f = None
for file_name in file_set:
pe,pv,pr = dataCache.pkg_pepvpr[file_name]
for file_name in files:
pv,pr = dataCache.pkg_pvpr[file_name]
dp = dataCache.pkg_dp[file_name]
if (latest is None) or ((latest_p == dp) and (utils.vercmp(latest, (pe, pv, pr)) < 0)) or (dp > latest_p):
latest = (pe, pv, pr)
if (latest is None) or ((latest_p == dp) and (utils.vercmp(latest, (pv, pr)) < 0)) or (dp > latest_p):
latest = (pv, pr)
latest_f = file_name
latest_p = dp
return (latest, latest_f)
def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
"""
If there is a PREFERRED_VERSION, find the highest-priority bbfile
providing that version. If not, find the latest version provided by
a bbfile in the highest-priority set.
"""
sortpkg_pn = sortPriorities(pn, dataCache, pkg_pn)
# Find the highest priority provider with a PREFERRED_VERSION set
(preferred_ver, preferred_file) = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn, item)
# Find the latest version of the highest priority provider
(latest, latest_f) = findLatestProvider(pn, cfgData, dataCache, sortpkg_pn[0])
if preferred_file is None:
preferred_file = latest_f
preferred_ver = latest
return (latest, latest_f, preferred_ver, preferred_file)
return (latest,latest_f,preferred_ver, preferred_file)
def _filterProviders(providers, item, cfgData, dataCache):
#
# RP - build_cache_fail needs to move elsewhere
#
def filterProviders(providers, item, cfgData, dataCache, build_cache_fail = {}):
"""
Take a list of providers and filter/reorder according to the
environment variables and previous build results
"""
eligible = []
preferred_versions = {}
sortpkg_pn = {}
# The order of providers depends on the order of the files on the disk
# up to here. Sort pkg_pn to make dependency issues reproducible rather
# than effectively random.
providers.sort()
# Collate providers by PN
pkg_pn = {}
@@ -183,24 +131,21 @@ def _filterProviders(providers, item, cfgData, dataCache):
bb.msg.debug(1, bb.msg.domain.Provider, "providers for %s are: %s" % (item, pkg_pn.keys()))
# First add PREFERRED_VERSIONS
for pn in pkg_pn.keys():
sortpkg_pn[pn] = sortPriorities(pn, dataCache, pkg_pn)
preferred_versions[pn] = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn[pn], item)
if preferred_versions[pn][1]:
eligible.append(preferred_versions[pn][1])
# Now add latest versions
for pn in pkg_pn.keys():
if pn in preferred_versions and preferred_versions[pn][1]:
continue
preferred_versions[pn] = findLatestProvider(pn, cfgData, dataCache, sortpkg_pn[pn][0])
preferred_versions[pn] = bb.providers.findBestProvider(pn, cfgData, dataCache, pkg_pn, item)[2:4]
eligible.append(preferred_versions[pn][1])
for p in eligible:
if p in build_cache_fail:
bb.msg.debug(1, bb.msg.domain.Provider, "rejecting already-failed %s" % p)
eligible.remove(p)
if len(eligible) == 0:
bb.msg.error(bb.msg.domain.Provider, "no eligible providers for %s" % item)
return 0
# If pn == item, give it a slight default preference
# This means PREFERRED_PROVIDER_foobar defaults to foobar if available
for p in providers:
@@ -213,71 +158,31 @@ def _filterProviders(providers, item, cfgData, dataCache):
eligible.remove(fn)
eligible = [fn] + eligible
return eligible
def filterProviders(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
environment variables and previous build results
Takes a "normal" target item
"""
eligible = _filterProviders(providers, item, cfgData, dataCache)
prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % item, cfgData, 1)
if prefervar:
dataCache.preferred[item] = prefervar
foundUnique = False
if item in dataCache.preferred:
for p in eligible:
pn = dataCache.pkg_fn[p]
if dataCache.preferred[item] == pn:
bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy %s due to PREFERRED_PROVIDERS" % (pn, item))
eligible.remove(p)
eligible = [p] + eligible
foundUnique = True
break
bb.msg.debug(1, bb.msg.domain.Provider, "sorted providers for %s are: %s" % (item, eligible))
return eligible, foundUnique
def filterProvidersRunTime(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
environment variables and previous build results
Takes a "runtime" target item
"""
eligible = _filterProviders(providers, item, cfgData, dataCache)
# Should use dataCache.preferred here?
preferred = []
preferred_vars = []
for p in eligible:
# look to see if one of them is already staged, or marked as preferred.
# if so, bump it to the head of the queue
for p in providers:
pn = dataCache.pkg_fn[p]
provides = dataCache.pn_provides[pn]
for provide in provides:
prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % provide, cfgData, 1)
if prefervar == pn:
var = "PREFERRED_PROVIDERS_%s = %s" % (provide, prefervar)
bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy runtime %s due to %s" % (pn, item, var))
preferred_vars.append(var)
eligible.remove(p)
eligible = [p] + eligible
preferred.append(p)
break
pv, pr = dataCache.pkg_pvpr[p]
numberPreferred = len(preferred)
stamp = '%s.do_populate_staging' % dataCache.stamp[p]
if os.path.exists(stamp):
(newvers, fn) = preferred_versions[pn]
if not fn in eligible:
# package was made ineligible by already-failed check
continue
oldver = "%s-%s" % (pv, pr)
newver = '-'.join(newvers)
if (newver != oldver):
extra_chat = "%s (%s) already staged but upgrading to %s to satisfy %s" % (pn, oldver, newver, item)
else:
extra_chat = "Selecting already-staged %s (%s) to satisfy %s" % (pn, oldver, item)
if numberPreferred > 1:
bb.msg.error(bb.msg.domain.Provider, "Conflicting PREFERRED_PROVIDERS entries were found which resulted in an attempt to select multiple providers (%s) for runtime dependecy %s\nThe entries resulting in this conflict were: %s" % (preferred, item, preferred_vars))
bb.msg.note(2, bb.msg.domain.Provider, "%s" % extra_chat)
eligible.remove(fn)
eligible = [fn] + eligible
break
bb.msg.debug(1, bb.msg.domain.Provider, "sorted providers for %s are: %s" % (item, eligible))
return eligible, numberPreferred
return eligible
def getRuntimeProviders(dataCache, rdepend):
"""
@@ -296,12 +201,7 @@ def getRuntimeProviders(dataCache, rdepend):
# Only search dynamic packages if we can't find anything in other variables
for pattern in dataCache.packages_dynamic:
pattern = pattern.replace('+', "\+")
try:
regexp = re.compile(pattern)
except:
bb.msg.error(bb.msg.domain.Provider, "Error parsing re expression: %s" % pattern)
raise
regexp = re.compile(pattern)
if regexp.match(rdepend):
rproviders += dataCache.packages_dynamic[pattern]

File diff suppressed because it is too large.

@@ -68,6 +68,7 @@ leave_mainloop = False
last_exception = None
cooker = None
parsed = False
initdata = None
debug = os.environ.get( "BBSHELL_DEBUG", "" )
##########################################################################
@@ -103,11 +104,10 @@ class BitBakeShellCommands:
def _findProvider( self, item ):
self._checkParsed()
# Need to use taskData for this information
preferred = data.getVar( "PREFERRED_PROVIDER_%s" % item, cooker.configuration.data, 1 )
if not preferred: preferred = item
try:
lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status)
lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status, cooker.build_cache_fail)
except KeyError:
if item in cooker.status.providers:
pf = cooker.status.providers[item][0]
@@ -144,7 +144,6 @@ class BitBakeShellCommands:
def build( self, params, cmd = "build" ):
"""Build a providee"""
global last_exception
globexpr = params[0]
self._checkParsed()
names = globfilter( cooker.status.pkg_pn.keys(), globexpr )
@@ -153,16 +152,15 @@ class BitBakeShellCommands:
oldcmd = cooker.configuration.cmd
cooker.configuration.cmd = cmd
cooker.build_cache = []
cooker.build_cache_fail = []
td = taskdata.TaskData(cooker.configuration.abort)
localdata = data.createCopy(cooker.configuration.data)
data.update_data(localdata)
data.expandKeys(localdata)
try:
tasks = []
for name in names:
td.add_provider(localdata, cooker.status, name)
td.add_provider(cooker.configuration.data, cooker.status, name)
providers = td.get_provider(name)
if len(providers) == 0:
@@ -170,23 +168,26 @@ class BitBakeShellCommands:
tasks.append([name, "do_%s" % cooker.configuration.cmd])
td.add_unresolved(localdata, cooker.status)
td.add_unresolved(cooker.configuration.data, cooker.status)
rq = runqueue.RunQueue(cooker, localdata, cooker.status, td, tasks)
rq.prepare_runqueue()
rq.execute_runqueue()
rq = runqueue.RunQueue()
rq.prepare_runqueue(cooker, cooker.configuration.data, cooker.status, td, tasks)
rq.execute_runqueue(cooker, cooker.configuration.data, cooker.status, td, tasks)
except Providers.NoProvider:
print "ERROR: No Provider"
global last_exception
last_exception = Providers.NoProvider
except runqueue.TaskFailure, fnids:
for fnid in fnids:
print "ERROR: '%s' failed" % td.fn_index[fnid]
global last_exception
last_exception = runqueue.TaskFailure
except build.EventException, e:
print "ERROR: Couldn't build '%s'" % names
global last_exception
last_exception = e
cooker.configuration.cmd = oldcmd
@@ -219,8 +220,8 @@ class BitBakeShellCommands:
edit.usage = "<providee>"
def environment( self, params ):
"""Dump out the outer BitBake environment"""
cooker.showEnvironment()
"""Dump out the outer BitBake environment (see bbread)"""
data.emit_env(sys.__stdout__, cooker.configuration.data, True)
def exit_( self, params ):
"""Leave the BitBake Shell"""
@@ -235,21 +236,38 @@ class BitBakeShellCommands:
def fileBuild( self, params, cmd = "build" ):
"""Parse and build a .bb file"""
global last_exception
name = params[0]
bf = completeFilePath( name )
print "SHELL: Calling '%s' on '%s'" % ( cmd, bf )
oldcmd = cooker.configuration.cmd
cooker.configuration.cmd = cmd
cooker.build_cache = []
cooker.build_cache_fail = []
thisdata = copy.deepcopy( initdata )
# Caution: parse.handle modifies thisdata, which would
# pollute cooker.configuration.data; that is why we work on a
# safe copy obtained from cooker right after parsing the
# initial *.conf files
try:
cooker.buildFile(bf)
bbfile_data = parse.handle( bf, thisdata )
except parse.ParseError:
print "ERROR: Unable to open or parse '%s'" % bf
except build.EventException, e:
print "ERROR: Couldn't build '%s'" % name
last_exception = e
else:
# Remove stamp for target if force mode active
if cooker.configuration.force:
bb.msg.note(2, bb.msg.domain.RunQueue, "Remove stamp %s, %s" % (cmd, bf))
bb.build.del_stamp('do_%s' % cmd, bbfile_data)
item = data.getVar('PN', bbfile_data, 1)
data.setVar( "_task_cache", [], bbfile_data ) # force
try:
cooker.tryBuildPackage( os.path.abspath( bf ), item, cmd, bbfile_data, True )
except build.EventException, e:
print "ERROR: Couldn't build '%s'" % name
global last_exception
last_exception = e
cooker.configuration.cmd = oldcmd
fileBuild.usage = "<bbfile>"
@@ -380,11 +398,6 @@ SRC_URI = ""
os.system( "%s %s/%s" % ( os.environ.get( "EDITOR" ), fulldirname, filename ) )
new.usage = "<directory> <filename>"
def package( self, params ):
"""Execute 'package' on a providee"""
self.build( params, "package" )
package.usage = "<providee>"
def pasteBin( self, params ):
"""Send a command + output buffer to the pastebin at http://rafb.net/paste"""
index = params[0]
@@ -493,8 +506,8 @@ SRC_URI = ""
interpreter.interact( "SHELL: Expert Mode - BitBake Python %s\nType 'help' for more information, press CTRL-D to switch back to BBSHELL." % sys.version )
def showdata( self, params ):
"""Show the parsed metadata for a given providee"""
cooker.showEnvironment(None, params)
"""Execute 'showdata' on a providee"""
self.build( params, "showdata" )
showdata.usage = "<providee>"
def setVar( self, params ):
@@ -524,6 +537,8 @@ SRC_URI = ""
def status( self, params ):
"""<just for testing>"""
print "-" * 78
print "build cache = '%s'" % cooker.build_cache
print "build cache fail = '%s'" % cooker.build_cache_fail
print "building list = '%s'" % cooker.building_list
print "build path = '%s'" % cooker.build_path
print "consider_msgs_cache = '%s'" % cooker.consider_msgs_cache
@@ -542,7 +557,6 @@ SRC_URI = ""
def which( self, params ):
"""Computes the providers for a given providee"""
# Need to use taskData for this information
item = params[0]
self._checkParsed()
@@ -551,7 +565,8 @@ SRC_URI = ""
if not preferred: preferred = item
try:
lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status)
lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status,
cooker.build_cache_fail)
except KeyError:
lv, lf, pv, pf = (None,)*4
@@ -572,7 +587,6 @@ SRC_URI = ""
def completeFilePath( bbfile ):
"""Get the complete bbfile path"""
if not cooker.status: return bbfile
if not cooker.status.pkg_fn: return bbfile
for key in cooker.status.pkg_fn.keys():
if key.endswith( bbfile ):
@@ -725,6 +739,10 @@ class BitBakeShell:
print __credits__
# save initial cooker configuration (will be reused in file*** commands)
global initdata
initdata = copy.deepcopy( cooker.configuration.data )
def cleanup( self ):
"""Write readline history and clean up resources"""
debugOut( "writing command history" )

@@ -23,7 +23,7 @@ Task data collection and handling
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from bb import data, event, mkdirhier, utils
from bb import data, fetch, event, mkdirhier, utils
import bb, os
class TaskData:
@@ -43,7 +43,6 @@ class TaskData:
self.tasks_fnid = []
self.tasks_name = []
self.tasks_tdepends = []
self.tasks_idepends = []
# Cache to speed up task ID lookups
self.tasks_lookup = {}
@@ -91,16 +90,6 @@ class TaskData:
return self.fn_index.index(name)
def gettask_ids(self, fnid):
"""
Return an array of the ID numbers matching a given fnid.
"""
ids = []
if fnid in self.tasks_lookup:
for task in self.tasks_lookup[fnid]:
ids.append(self.tasks_lookup[fnid][task])
return ids
def gettask_id(self, fn, task, create = True):
"""
Return an ID number for the task matching fn and task.
@@ -119,7 +108,6 @@ class TaskData:
self.tasks_name.append(task)
self.tasks_fnid.append(fnid)
self.tasks_tdepends.append([])
self.tasks_idepends.append([])
listid = len(self.tasks_name) - 1
@@ -134,6 +122,7 @@ class TaskData:
Add tasks for a given fn to the database
"""
task_graph = dataCache.task_queues[fn]
task_deps = dataCache.task_deps[fn]
fnid = self.getfn_id(fn)
@@ -145,24 +134,15 @@ class TaskData:
if fnid in self.tasks_fnid:
return
for task in task_deps['tasks']:
# Work out task dependencies
# Work out task dependencies
for task in task_graph.allnodes():
parentids = []
for dep in task_deps['parents'][task]:
for dep in task_graph.getparents(task):
parentid = self.gettask_id(fn, dep)
parentids.append(parentid)
taskid = self.gettask_id(fn, task)
self.tasks_tdepends[taskid].extend(parentids)
# Touch all intertask dependencies
if 'depends' in task_deps and task in task_deps['depends']:
ids = []
for dep in task_deps['depends'][task].split():
if dep:
ids.append(((self.getbuild_id(dep.split(":")[0])), dep.split(":")[1]))
self.tasks_idepends[taskid].extend(ids)
# Work out build dependencies
if not fnid in self.depids:
dependids = {}
@@ -177,11 +157,11 @@ class TaskData:
rdepends = dataCache.rundeps[fn]
rrecs = dataCache.runrecs[fn]
for package in rdepends:
for rdepend in bb.utils.explode_deps(rdepends[package]):
for rdepend in rdepends[package]:
bb.msg.debug(2, bb.msg.domain.TaskData, "Added runtime dependency %s for %s" % (rdepend, fn))
rdependids[self.getrun_id(rdepend)] = None
for package in rrecs:
for rdepend in bb.utils.explode_deps(rrecs[package]):
for rdepend in rrecs[package]:
bb.msg.debug(2, bb.msg.domain.TaskData, "Added runtime recommendation %s for %s" % (rdepend, fn))
rdependids[self.getrun_id(rdepend)] = None
self.rdepids[fnid] = rdependids.keys()
@@ -339,7 +319,7 @@ class TaskData:
self.add_provider_internal(cfgData, dataCache, item)
except bb.providers.NoProvider:
if self.abort:
bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (item, self.get_dependees_str(item)))
bb.msg.error(bb.msg.domain.Provider, "No providers of build target %s (for %s)" % (item, self.get_dependees_str(item)))
raise
targetid = self.getbuild_id(item)
self.remove_buildtarget(targetid)
@@ -357,7 +337,7 @@ class TaskData:
return
if not item in dataCache.providers:
bb.msg.note(2, bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (item, self.get_dependees_str(item)))
bb.msg.debug(1, bb.msg.domain.Provider, "No providers of build target %s (for %s)" % (item, self.get_dependees_str(item)))
bb.event.fire(bb.event.NoProvider(item, cfgData))
raise bb.providers.NoProvider(item)
@@ -366,7 +346,7 @@ class TaskData:
all_p = dataCache.providers[item]
eligible, foundUnique = bb.providers.filterProviders(all_p, item, cfgData, dataCache)
eligible = bb.providers.filterProviders(all_p, item, cfgData, dataCache)
for p in eligible:
fnid = self.getfn_id(p)
@@ -374,18 +354,33 @@ class TaskData:
eligible.remove(p)
if not eligible:
bb.msg.note(2, bb.msg.domain.Provider, "No buildable provider PROVIDES '%s' but '%s' DEPENDS on or otherwise requires it. Enable debugging and see earlier logs to find unbuildable providers." % (item, self.get_dependees_str(item)))
bb.msg.debug(1, bb.msg.domain.Provider, "No providers of build target %s after filtering (for %s)" % (item, self.get_dependees_str(item)))
bb.event.fire(bb.event.NoProvider(item, cfgData))
raise bb.providers.NoProvider(item)
if len(eligible) > 1 and foundUnique == False:
prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % item, cfgData, 1)
if prefervar:
dataCache.preferred[item] = prefervar
discriminated = False
if item in dataCache.preferred:
for p in eligible:
pn = dataCache.pkg_fn[p]
if dataCache.preferred[item] == pn:
bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy %s due to PREFERRED_PROVIDERS" % (pn, item))
eligible.remove(p)
eligible = [p] + eligible
discriminated = True
break
if len(eligible) > 1 and discriminated == False:
if item not in self.consider_msgs_cache:
providers_list = []
for fn in eligible:
providers_list.append(dataCache.pkg_fn[fn])
bb.msg.note(1, bb.msg.domain.Provider, "multiple providers are available for %s (%s);" % (item, ", ".join(providers_list)))
bb.msg.note(1, bb.msg.domain.Provider, "consider defining PREFERRED_PROVIDER_%s" % item)
bb.event.fire(bb.event.MultipleProviders(item, providers_list, cfgData))
bb.event.fire(bb.event.MultipleProviders(item,providers_list,cfgData))
self.consider_msgs_cache.append(item)
for fn in eligible:
@@ -414,11 +409,11 @@ class TaskData:
all_p = bb.providers.getRuntimeProviders(dataCache, item)
if not all_p:
bb.msg.error(bb.msg.domain.Provider, "'%s' RDEPENDS/RRECOMMENDS or otherwise requires the runtime entity '%s' but it wasn't found in any PACKAGE or RPROVIDES variables" % (self.get_rdependees_str(item), item))
bb.msg.error(bb.msg.domain.Provider, "No providers of runtime build target %s (for %s)" % (item, self.get_rdependees_str(item)))
bb.event.fire(bb.event.NoProvider(item, cfgData, runtime=True))
raise bb.providers.NoRProvider(item)
eligible, numberPreferred = bb.providers.filterProvidersRunTime(all_p, item, cfgData, dataCache)
eligible = bb.providers.filterProviders(all_p, item, cfgData, dataCache)
for p in eligible:
fnid = self.getfn_id(p)
@@ -426,11 +421,24 @@ class TaskData:
eligible.remove(p)
if not eligible:
bb.msg.error(bb.msg.domain.Provider, "'%s' RDEPENDS/RRECOMMENDS or otherwise requires the runtime entity '%s' but it wasn't found in any PACKAGE or RPROVIDES variables of any buildable targets.\nEnable debugging and see earlier logs to find unbuildable targets." % (self.get_rdependees_str(item), item))
bb.msg.error(bb.msg.domain.Provider, "No providers of runtime build target %s after filtering (for %s)" % (item, self.get_rdependees_str(item)))
bb.event.fire(bb.event.NoProvider(item, cfgData, runtime=True))
raise bb.providers.NoRProvider(item)
if len(eligible) > 1 and numberPreferred == 0:
# Should use dataCache.preferred here?
preferred = []
for p in eligible:
pn = dataCache.pkg_fn[p]
provides = dataCache.pn_provides[pn]
for provide in provides:
prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % provide, cfgData, 1)
if prefervar == pn:
bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy runtime %s due to PREFERRED_PROVIDERS" % (pn, item))
eligible.remove(p)
eligible = [p] + eligible
preferred.append(p)
if len(eligible) > 1 and len(preferred) == 0:
if item not in self.consider_msgs_cache:
providers_list = []
for fn in eligible:
@@ -440,12 +448,12 @@ class TaskData:
bb.event.fire(bb.event.MultipleProviders(item,providers_list, cfgData, runtime=True))
self.consider_msgs_cache.append(item)
if numberPreferred > 1:
if len(preferred) > 1:
if item not in self.consider_msgs_cache:
providers_list = []
for fn in eligible:
for fn in preferred:
providers_list.append(dataCache.pkg_fn[fn])
bb.msg.note(2, bb.msg.domain.Provider, "multiple providers are available for runtime %s (top %s entries preferred) (%s);" % (item, numberPreferred, ", ".join(providers_list)))
bb.msg.note(2, bb.msg.domain.Provider, "multiple preferred providers are available for runtime %s (%s);" % (item, ", ".join(providers_list)))
bb.msg.note(2, bb.msg.domain.Provider, "consider defining only one PREFERRED_PROVIDER entry to match runtime %s" % item)
bb.event.fire(bb.event.MultipleProviders(item,providers_list, cfgData, runtime=True))
self.consider_msgs_cache.append(item)
@@ -455,77 +463,60 @@ class TaskData:
fnid = self.getfn_id(fn)
if fnid in self.failed_fnids:
continue
bb.msg.debug(2, bb.msg.domain.Provider, "adding '%s' to satisfy runtime '%s'" % (fn, item))
bb.msg.debug(2, bb.msg.domain.Provider, "adding %s to satisfy runtime %s" % (fn, item))
self.add_runtime_target(fn, item)
self.add_tasks(fn, dataCache)
def fail_fnid(self, fnid, missing_list = []):
def fail_fnid(self, fnid):
"""
Mark a file as failed (unbuildable)
Remove any references from build and runtime provider lists
missing_list: a list of missing requirements for this target
"""
if fnid in self.failed_fnids:
return
bb.msg.debug(1, bb.msg.domain.Provider, "File '%s' is unbuildable, removing..." % self.fn_index[fnid])
bb.msg.debug(1, bb.msg.domain.Provider, "Removing failed file %s" % self.fn_index[fnid])
self.failed_fnids.append(fnid)
for target in self.build_targets:
if fnid in self.build_targets[target]:
self.build_targets[target].remove(fnid)
if len(self.build_targets[target]) == 0:
self.remove_buildtarget(target, missing_list)
self.remove_buildtarget(target)
for target in self.run_targets:
if fnid in self.run_targets[target]:
self.run_targets[target].remove(fnid)
if len(self.run_targets[target]) == 0:
self.remove_runtarget(target, missing_list)
self.remove_runtarget(target)
def remove_buildtarget(self, targetid, missing_list = []):
def remove_buildtarget(self, targetid):
"""
Mark a build target as failed (unbuildable)
Trigger removal of any files that have this as a dependency
"""
if not missing_list:
missing_list = [self.build_names_index[targetid]]
else:
missing_list = [self.build_names_index[targetid]] + missing_list
bb.msg.note(2, bb.msg.domain.Provider, "Target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s" % (self.build_names_index[targetid], missing_list))
bb.msg.debug(1, bb.msg.domain.Provider, "Removing failed build target %s" % self.build_names_index[targetid])
self.failed_deps.append(targetid)
dependees = self.get_dependees(targetid)
for fnid in dependees:
self.fail_fnid(fnid, missing_list)
for taskid in range(len(self.tasks_idepends)):
idepends = self.tasks_idepends[taskid]
for (idependid, idependtask) in idepends:
if idependid == targetid:
self.fail_fnid(self.tasks_fnid[taskid], missing_list)
self.fail_fnid(fnid)
if self.abort and targetid in self.external_targets:
bb.msg.error(bb.msg.domain.Provider, "Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s" % (self.build_names_index[targetid], missing_list))
bb.msg.error(bb.msg.domain.Provider, "No buildable providers available for required build target %s" % self.build_names_index[targetid])
raise bb.providers.NoProvider
def remove_runtarget(self, targetid, missing_list = []):
def remove_runtarget(self, targetid):
"""
Mark a run target as failed (unbuildable)
Trigger removal of any files that have this as a dependency
"""
if not missing_list:
missing_list = [self.run_names_index[targetid]]
else:
missing_list = [self.run_names_index[targetid]] + missing_list
bb.msg.note(1, bb.msg.domain.Provider, "Runtime target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s" % (self.run_names_index[targetid], missing_list))
bb.msg.note(1, bb.msg.domain.Provider, "Removing failed runtime build target %s" % self.run_names_index[targetid])
self.failed_rdeps.append(targetid)
dependees = self.get_rdependees(targetid)
for fnid in dependees:
self.fail_fnid(fnid, missing_list)
self.fail_fnid(fnid)
def add_unresolved(self, cfgData, dataCache):
"""
Resolve all unresolved build and runtime targets
"""
bb.msg.note(1, bb.msg.domain.TaskData, "Resolving any missing task queue dependencies")
bb.msg.note(1, bb.msg.domain.TaskData, "Resolving missing task queue dependencies")
while 1:
added = 0
for target in self.get_unresolved_build_targets(dataCache):
@@ -535,7 +526,6 @@ class TaskData:
except bb.providers.NoProvider:
targetid = self.getbuild_id(target)
if self.abort and targetid in self.external_targets:
bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (target, self.get_dependees_str(target)))
raise
self.remove_buildtarget(targetid)
for target in self.get_unresolved_run_targets(dataCache):
@@ -555,26 +545,14 @@ class TaskData:
"""
bb.msg.debug(3, bb.msg.domain.TaskData, "build_names:")
bb.msg.debug(3, bb.msg.domain.TaskData, ", ".join(self.build_names_index))
bb.msg.debug(3, bb.msg.domain.TaskData, "run_names:")
bb.msg.debug(3, bb.msg.domain.TaskData, ", ".join(self.run_names_index))
bb.msg.debug(3, bb.msg.domain.TaskData, "build_targets:")
for buildid in range(len(self.build_names_index)):
target = self.build_names_index[buildid]
targets = "None"
if buildid in self.build_targets:
targets = self.build_targets[buildid]
bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s: %s" % (buildid, target, targets))
for target in self.build_targets.keys():
bb.msg.debug(3, bb.msg.domain.TaskData, " %s: %s" % (self.build_names_index[target], self.build_targets[target]))
bb.msg.debug(3, bb.msg.domain.TaskData, "run_targets:")
for runid in range(len(self.run_names_index)):
target = self.run_names_index[runid]
targets = "None"
if runid in self.run_targets:
targets = self.run_targets[runid]
bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s: %s" % (runid, target, targets))
for target in self.run_targets.keys():
bb.msg.debug(3, bb.msg.domain.TaskData, " %s: %s" % (self.run_names_index[target], self.run_targets[target]))
bb.msg.debug(3, bb.msg.domain.TaskData, "tasks:")
for task in range(len(self.tasks_name)):
bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s - %s: %s" % (
@@ -582,12 +560,7 @@ class TaskData:
self.fn_index[self.tasks_fnid[task]],
self.tasks_name[task],
self.tasks_tdepends[task]))
bb.msg.debug(3, bb.msg.domain.TaskData, "dependency ids (per fn):")
for fnid in self.depids:
bb.msg.debug(3, bb.msg.domain.TaskData, " %s %s: %s" % (fnid, self.fn_index[fnid], self.depids[fnid]))
bb.msg.debug(3, bb.msg.domain.TaskData, "runtime dependency ids (per fn):")
bb.msg.debug(3, bb.msg.domain.TaskData, "runtime ids (per fn):")
for fnid in self.rdepids:
bb.msg.debug(3, bb.msg.domain.TaskData, " %s %s: %s" % (fnid, self.fn_index[fnid], self.rdepids[fnid]))

@@ -22,7 +22,7 @@ BitBake Utility Functions
digits = "0123456789"
ascii_letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
import re, fcntl, os
import re
def explode_version(s):
r = []
@@ -62,12 +62,10 @@ def vercmp_part(a, b):
return -1
def vercmp(ta, tb):
(ea, va, ra) = ta
(eb, vb, rb) = tb
(va, ra) = ta
(vb, rb) = tb
r = int(ea)-int(eb)
if (r == 0):
r = vercmp_part(va, vb)
r = vercmp_part(va, vb)
if (r == 0):
r = vercmp_part(ra, rb)
return r
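A hedged example of the two-tuple (version, revision) comparison implemented above; a negative result means the first argument is older:

from bb.utils import vercmp

print vercmp(("1.0", "r1"), ("1.0", "r2"))   # < 0: same version, older revision
print vercmp(("1.1", "r0"), ("1.0", "r5"))   # > 0: the newer version wins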
@@ -85,45 +83,18 @@ def explode_deps(s):
for i in l:
if i[0] == '(':
flag = True
#j = []
if not flag:
j = []
if flag:
j.append(i)
else:
r.append(i)
#else:
# j.append(i)
if flag and i.endswith(')'):
flag = False
# Ignore version
#r[-1] += ' ' + ' '.join(j)
return r
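For illustration, a hedged example of explode_deps on an RDEPENDS-style string; as the comments above note, parenthesised versions are skipped:

from bb.utils import explode_deps

print explode_deps("glibc (>= 2.5) update-rc.d initscripts")
# -> ['glibc', 'update-rc.d', 'initscripts']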
def explode_dep_versions(s):
"""
Take an RDEPENDS style string of format:
"DEPEND1 (optional version) DEPEND2 (optional version) ..."
and return a dictionary of dependencies and versions.
"""
r = {}
l = s.split()
lastdep = None
lastver = ""
inversion = False
for i in l:
if i[0] == '(':
inversion = True
lastver = i[1:] or ""
#j = []
elif inversion and i.endswith(')'):
inversion = False
lastver = lastver + " " + (i[:-1] or "")
r[lastdep] = lastver
elif not inversion:
r[i] = None
lastdep = i
lastver = ""
elif inversion:
lastver = lastver + " " + i
return r
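And its versioned counterpart, again as a hedged example:

from bb.utils import explode_dep_versions

print explode_dep_versions("glibc (>= 2.5) update-rc.d initscripts")
# -> {'glibc': '>= 2.5', 'update-rc.d': None, 'initscripts': None}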
def _print_trace(body, line):
"""
@@ -229,79 +200,3 @@ def Enum(*names):
constants = tuple(constants)
EnumType = EnumClass()
return EnumType
def lockfile(name):
"""
Use the file fn as a lock file, return when the lock has been acquired.
Returns a variable to pass to unlockfile().
"""
while True:
# If we leave the lockfiles lying around there is no problem
# but we should clean up after ourselves. This gives potential
# for races though. To work around this, when we acquire the lock
# we check the file we locked was still the lock file on disk.
# by comparing inode numbers. If they don't match or the lockfile
# no longer exists, we start again.
# This implementation is unfair since the last person to request the
# lock is the most likely to win it.
lf = open(name, "a+")
fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
statinfo = os.fstat(lf.fileno())
if os.path.exists(lf.name):
statinfo2 = os.stat(lf.name)
if statinfo.st_ino == statinfo2.st_ino:
return lf
# File no longer exists or changed, retry
lf.close()
def unlockfile(lf):
"""
Unlock a file locked using lockfile()
"""
os.unlink(lf.name)
fcntl.flock(lf.fileno(), fcntl.LOCK_UN)
lf.close()
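A short, hedged sketch of the intended lockfile()/unlockfile() pattern (the lock path is an example):

from bb import utils

lf = utils.lockfile("/tmp/poky-example.lock")   # blocks until the lock is held
try:
    pass   # critical section: only one process at a time gets here
finally:
    utils.unlockfile(lf)   # unlinks the lock file and releases the flock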
def md5_file(filename):
"""
Return the hex string representation of the MD5 checksum of filename.
"""
try:
import hashlib
m = hashlib.md5()
except ImportError:
import md5
m = md5.new()
for line in open(filename):
m.update(line)
return m.hexdigest()
def sha256_file(filename):
"""
Return the hex string representation of the 256-bit SHA checksum of
filename. On Python 2.4 this will return None, so callers will need to
handle that by either skipping SHA checks, or running a standalone sha256sum
binary.
"""
try:
import hashlib
except ImportError:
return None
s = hashlib.sha256()
for line in open(filename):
s.update(line)
return s.hexdigest()
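A hedged usage sketch of the checksum helpers above; note the documented None return from sha256_file on Python 2.4:

from bb import utils

print utils.md5_file("/etc/hostname")        # example path
sha = utils.sha256_file("/etc/hostname")
if sha is None:
    print "no hashlib; skipping SHA-256 check"   # Python 2.4 fallback
else:
    print sha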
def prunedir(topdir):
# Delete everything reachable from the directory named in 'topdir'.
# CAUTION: This is dangerous!
for root, dirs, files in os.walk(topdir, topdown=False):
for name in files:
os.remove(os.path.join(root, name))
for name in dirs:
os.rmdir(os.path.join(root, name))
os.rmdir(topdir)

@@ -2,77 +2,47 @@
DL_DIR ?= "${OEROOT}/sources"
BBFILES = "${OEROOT}/meta/packages/*/*.bb"
# Uncomment and set to allow bitbake to execute multiple tasks at once.
# For a quadcore, BB_NUMBER_THREADS = "4", PARALLEL_MAKE = "-j 4" would
# be appropriate.
# BB_NUMBER_THREADS = "4"
# Also, make can be passed flags so it runs parallel threads e.g.:
# PARALLEL_MAKE = "-j 4"
# To enable extra packages, uncomment the following lines:
# BBFILES := "${OEROOT}/meta/packages/*/*.bb ${OEROOT}/meta-extras/packages/*/*.bb"
# BBFILE_COLLECTIONS = "normal extras"
# BBFILE_PATTERN_normal = "^${OEROOT}/meta/"
# BBFILE_PATTERN_extras = "^${OEROOT}/meta-extras/"
# BBFILE_PRIORITY_normal = "5"
# BBFILE_PRIORITY_extras = "5"
BBMASK = ""
# The machine to target
MACHINE ?= "qemuarm"
# Other supported machines
#MACHINE ?= "cmx270"
#MACHINE ?= "qemux86"
#MACHINE ?= "c7x0"
#MACHINE ?= "akita"
#MACHINE ?= "spitz"
#MACHINE ?= "nokia770"
#MACHINE ?= "nokia800"
#MACHINE ?= "fic-gta01"
#MACHINE ?= "bootcdx86"
#MACHINE ?= "cm-x270"
#MACHINE ?= "em-x270"
#MACHINE ?= "htcuniversal"
#MACHINE ?= "mx31ads"
#MACHINE ?= "mx31litekit"
#MACHINE ?= "mx31phy"
#MACHINE ?= "zylonite"
DISTRO ?= "poky"
DISTRO = "poky"
# For bleeding edge / experimental / unstable package versions
# DISTRO ?= "poky-bleeding"
# DISTRO = "poky-bleeding"
# Poky has various extra metadata collections (openmoko, extras).
# To enable these, uncomment all (or some of) the following lines:
# BBFILES = "\
# ${OEROOT}/meta/packages/*/*.bb \
# ${OEROOT}/meta-extras/packages/*/*.bb \
# ${OEROOT}/meta-openmoko/packages/*/*.bb \
# "
# BBFILE_COLLECTIONS = "normal extras openmoko"
# BBFILE_PATTERN_normal = "^${OEROOT}/meta/"
# BBFILE_PATTERN_extras = "^${OEROOT}/meta-extras/"
# BBFILE_PATTERN_openmoko = "^${OEROOT}/meta-openmoko/"
# BBFILE_PRIORITY_normal = "5"
# BBFILE_PRIORITY_extras = "5"
# BBFILE_PRIORITY_openmoko = "5"
BBMASK = ""
# EXTRA_IMAGE_FEATURES allows extra packages to be added to the generated images
# IMAGE_FEATURES configuration of the generated images
# (Some of these are automatically added to certain image types)
# "dbg-pkgs" - add -dbg packages for all installed packages
# (adds symbol information for debugging/profiling)
# "dev-pkgs" - add -dev packages for all installed packages
# (useful if you want to develop against libs in the image)
# "tools-sdk" - add development tools (gcc, make, pkgconfig etc.)
# "tools-debug" - add debugging tools (gdb, strace)
# "tools-profile" - add profiling tools (oprofile, exmap, lttng, valgrind (x86 only))
# "tools-testapps" - add useful testing tools (ts_print, aplay, arecord etc.)
# "debug-tweaks" - make an image suitable for development
# e.g. ssh root access has a blank password
# There are other application targets too, see meta/classes/poky-image.bbclass
# and meta/packages/tasks/task-poky.bb for more details.
# "dev-pkgs" - add -dev packages for all installed packages
# (useful if you want to develop against libs in the image)
# "dbg-pkgs" - add -dbg packages for all installed packages
# (adds symbol information for debugging/profiling)
# "apps-core" - core applications
# "apps-pda" - add PDA application suite (contacts, dates, etc.)
# "dev-tools" - add development tools (gcc, make, pkgconfig etc.)
# "dbg-tools" - add debugging tools (gdb, strace, oprofile, etc.)
# "test-tools" - add useful testing tools (ts_print, aplay, arecord etc.)
# "debug-tweaks" - make an image suitable for development
# e.g. ssh root access has a blank password
EXTRA_IMAGE_FEATURES = "tools-debug tools-profile tools-testapps debug-tweaks"
# The default IMAGE_FEATURES above are too large for the mx31phy and
# c700/c750 machines which have limited space. The code below limits
# the default features for those machines.
EXTRA_IMAGE_FEATURES_c7x0 = "tools-testapps debug-tweaks"
EXTRA_IMAGE_FEATURES_mx31phy = "debug-tweaks"
EXTRA_IMAGE_FEATURES_mx31ads = "tools-testapps debug-tweaks"
IMAGE_FEATURES = "dbg-tools test-tools debug-tweaks"
# A list of packaging systems used in generated images
# The first package type listed will be used for rootfs generation
@@ -81,17 +51,12 @@ EXTRA_IMAGE_FEATURES_mx31ads = "tools-testapps debug-tweaks"
#PACKAGE_CLASSES ?= "package_deb package_ipk"
PACKAGE_CLASSES ?= "package_ipk"
# POKYMODE controls the characteristics of the generated packages/images by
# telling poky which type of toolchain to use.
#
# Options include several different EABI combinations and a compatibility
# mode for the OABI mode poky previously used.
#
# The default is "eabi"
# Use "oabi" for machines with kernels < 2.6.18 on ARM for example.
# Use "external-MODE" to use the precompiled external toolchains where MODE
# is the type of external toolchain to use e.g. eabi.
# POKYMODE = "external-eabi"
# POKYMODE controls the characteristics of the generated packages/images.
# Options include several different EABI combinations and a
# compatibility mode for the OABI mode poky used to use. Use "oabi" for machines
# with kernels < 2.6.18 for example. The default is "eabi". These changes only
# really apply for ARM machines.
# POKYMODE = "oabi"
# Uncomment this to specify where BitBake should create its temporary files.
# Note that a full build of everything in OpenEmbedded will take GigaBytes of hard
@@ -99,13 +64,13 @@ PACKAGE_CLASSES ?= "package_ipk"
# <build directory>/tmp
TMPDIR = "${OEROOT}/build/tmp"
# Uncomment and set to allow bitbake to execute multiple tasks at once.
# Note, this option is currently experimental - YMMV.
# 'quilt' is also required on the host system
# BB_NUMBER_THREADS = "1"
# Uncomment this if you are using the Openedhand provided qemu deb - see README
# ASSUME_PROVIDED += "qemu-native"
# Comment this out if you don't have a 3.x gcc version available and wish
# poky to build one for you. The 3.x gcc is required to build qemu-native.
# ASSUME_PROVIDED += "gcc3-native"
# Comment this out if you are *not* using the provided qemu deb - see README
ASSUME_PROVIDED += "qemu-native"
# Uncomment these two if you want BitBake to build images useful for debugging.
# DEBUG_BUILD = "1"
@@ -127,14 +92,7 @@ TMPDIR = "${OEROOT}/build/tmp"
BBINCLUDELOGS = "yes"
# Specifies a location to search for pre-generated tarballs when fetching
# a cvs:// or svn:// URI. Uncomment this, if you do not want to pull directly
# from CVS or Subversion
SRC_TARBALL_STASH = "http://pokylinux.org/sources/"
# Set this if you wish to make pkgconfig libraries from your system available
# for native builds. Combined with extra ASSUME_PROVIDEDs this can allow
# native builds of applications like oprofileui-native (unsupported feature).
#EXTRA_NATIVE_PKGCONFIG_PATH = ":/usr/lib/pkgconfig"
#ASSUME_PROVIDED += "gtk+-native libglade-native"
# a cvs:// URI. Uncomment this, if you do not want to pull directly from CVS.
CVS_TARBALL_STASH = "http://folks.o-hand.com/~richard/poky/sources/"
ENABLE_BINARY_LOCALE_GENERATION = "1"

View File

@@ -1,16 +0,0 @@
#
# local.conf covers user settings, site.conf covers site specific information
# such as proxy server addresses and optionally any shared download location
#
# Uncomment to cause CVS to use the proxy host specified
#CVS_PROXY_HOST = "proxy.example.com"
#CVS_PROXY_PORT = "81"
# Uncomment to cause git to use the proxy host specified
#GIT_PROXY_HOST = "proxy.example.com"
#GIT_PROXY_PORT = "81"
#export GIT_PROXY_COMMAND = "${OEROOT}/scripts/poky-git-proxy-command"
# Uncomment this to use a shared download directory
#DL_DIR = "/some/shared/download/directory/"

View File

@@ -1,38 +0,0 @@
2008-02-29 Matthew Allum <mallum@openedhand.com>
* development.xml:
Disable images too big / lack context for now.
* introduction.xml:
Remove some OH specific stuff.
* style.css:
Remove limit on image size
2008-02-15 Matthew Allum <mallum@openedhand.com>
* introduction.xml:
Minor tweaks to 'What is Poky'
2008-02-15 Matthew Allum <mallum@openedhand.com>
* poky-handbook.xml:
* poky-handbook.png
* poky-beaver.png
* poky-logo.svg:
* style.css:
Add some title images.
2008-02-14 Matthew Allum <mallum@openedhand.com>
* development.xml:
remove uri's
* style.css:
Fix glossary
2008-02-06 Matthew Allum <mallum@openedhand.com>
* Makefile:
Add various xslto options for html.
* introduction.xml:
Remove link in title.
* style.css:
Add initial version

View File

@@ -1,25 +0,0 @@
all: html pdf

pdf:
	poky-docbook-to-pdf poky-handbook.xml

# -- old way --
#	dblatex poky-handbook.xml

html:
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
	xsltproc --stringparam html.stylesheet style.css \
	--stringparam chapter.autolabel 1 \
	--stringparam appendix.autolabel 1 \
	--stringparam section.autolabel 1 \
	-o poky-handbook.html \
	--xinclude /usr/share/xml/docbook/stylesheet/nwalsh/html/docbook.xsl \
	poky-handbook.xml

# -- old way --
#	xmlto xhtml-nochunks poky-handbook.xml

tarball: html
	tar -cvzf poky-handbook.tgz poky-handbook.html style.css screenshots/ss-sato.png poky-beaver.png poky-handbook.png

validate:
	xmllint --postvalid --xinclude --noout poky-handbook.xml

View File

@@ -1,11 +0,0 @@
Handbook Todo List:
* Document adding a new IMAGE_FEATURE to the customising images section
* Add instructions about using zaurus/openmoko emulation
* Add component overview/block diagrams
* Software Development intro should mention it is software development for the
intended target, which could be a different arch etc. and is thus a special case.
* Expand insane.bbclass documentation to cover tests
* Document remaining classes (see list in ref-classes)
* Document formfactor

View File

@@ -1,30 +0,0 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<appendix id='contact'>
<title>OpenedHand Contact Information</title>
<literallayout>
OpenedHand Ltd
Unit R, Homesdale Business Center
216-218 Homesdale Rd
Bromley, BR1 2QZ
England
+44 (0) 208 819 6559
info@openedhand.com</literallayout>
<!-- Fop messes this up so we do like above
<address>
OpenedHand Ltd
Unit R, Homesdale Business Center
<street>216-218 Homesdale Rd</street>
<city>Bromley</city>, <postcode>BR1 2QZ</postcode>
<country>England</country>
<phone> +44 (0) 208 819 6559</phone>
<email>info@openedhand.com</email>
</address>
-->
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -1,853 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id="platdev">
<title>Platform Development with Poky</title>
<section id="platdev-appdev">
<title>Software development</title>
<para>
Poky supports several methods of software development. These different
forms of development are explained below and can be switched
between as needed.
</para>
<section id="platdev-appdev-external-sdk">
<title>Developing externally using the Poky SDK</title>
<para>
The meta-toolchain and meta-toolchain-sdk targets (<link linkend='ref-images'>see
the images section</link>) build tarballs which contain toolchains and
libraries suitable for application development outside Poky. These unpack into the
<filename class="directory">/usr/local/poky</filename> directory and contain
a setup script, e.g.
<filename>/usr/local/poky/eabi-glibc/arm/environment-setup</filename> which
can be sourced to initialise a suitable environment. After sourcing this, the
compiler, QEMU scripts, QEMU binary, a special version of pkgconfig and other
useful utilities are added to the PATH. Variables to assist pkgconfig and
autotools are also set so that, for example, configure can find pre-generated test
results for tests which need target hardware to run.
</para>
<para>
Using the toolchain with autotools-enabled packages is straightforward: just pass the
appropriate host option to configure, e.g. "./configure --host=arm-poky-linux-gnueabi".
For other projects it is usually a case of ensuring the cross tools are used, e.g.
CC=arm-poky-linux-gnueabi-gcc and LD=arm-poky-linux-gnueabi-ld.
</para>
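<para>
As an illustrative sketch (the SDK path shown matches the ARM EABI example
above and will differ for other toolchains), a complete session building an
autotooled project against the SDK might look like:
</para>
<para>
<literallayout class='monospaced'>
$ . /usr/local/poky/eabi-glibc/arm/environment-setup
$ ./configure --host=arm-poky-linux-gnueabi
$ make
</literallayout>
</para>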
</section>
<section id="platdev-appdev-external-anjuta">
<title>Developing externally using the Anjuta plugin</title>
<para>
An Anjuta IDE plugin exists to make developing software within the Poky framework
easier for the application developer. It presents a graphical IDE from which the
developer can cross compile an application then deploy and execute the output in a QEMU
emulation session. It also supports cross debugging and profiling.
</para>
<!-- DISABLED, TOO BIG!
<screenshot>
<mediaobject>
<imageobject>
<imagedata fileref="screenshots/ss-anjuta-poky-1.png" format="PNG"/>
</imageobject>
<caption>
<para>The Anjuta Poky SDK plugin showing an active QEMU session running Sato</para>
</caption>
</mediaobject>
</screenshot>
-->
<para>
To use the plugin, a toolchain and SDK built by Poky is required along with Anjuta and the Anjuta
plugin. The Poky Anjuta plugin is available from the OpenedHand SVN repository located at
http://svn.o-hand.com/repos/anjuta-poky/trunk/anjuta-plugin-sdk/; a web interface
to the repository can be accessed at <ulink url='http://svn.o-hand.com/view/anjuta-poky/'/>.
See the README file contained in the project for more information
about the dependencies and how to get them along with details of
the prebuilt packages.
</para>
<section id="platdev-appdev-external-anjuta-setup">
<title>Setting up the Anjuta plugin</title>
<para>Extract the tarball for the toolchain into / as root. The
toolchain will be installed into
<filename class="directory">/usr/local/poky</filename>.</para>
<para>To use the plugin, first open or create an existing
project. If creating a new project the "C GTK+" project type
will allow itself to be cross-compiled. However you should be
aware that this uses glade for the UI.</para>
<para>To activate the plugin go to
<menuchoice><guimenu>Edit</guimenu><guimenuitem>Preferences</guimenuitem></menuchoice>,
then choose <guilabel>General</guilabel> from the left hand side. Choose the
Installed plugins tab, scroll down to <guilabel>Poky
SDK</guilabel> and check the
box. The plugin is now activated but first it must be
configured.</para>
</section>
<section id="platdev-appdev-external-anjuta-configuration">
<title>Configuring the Anjuta plugin</title>
<para>The configuration options for the SDK can be found by choosing
the <guilabel>Poky SDK</guilabel> icon from the left hand side. The following options
need to be set:</para>
<itemizedlist>
<listitem><para><guilabel>SDK root</guilabel>: this is the root directory of the SDK;
for an ARM EABI SDK this will be <filename
class="directory">/usr/local/poky/eabi-glibc/arm</filename>.
This directory will contain directories named like "bin",
"include", "var", etc. With the file chooser it is important
to enter into the "arm" subdirectory for this
example.</para></listitem>
<listitem><para><guilabel>Toolchain triplet</guilabel>: this is the cross compile
triplet, e.g. "arm-poky-linux-gnueabi".</para></listitem>
<listitem><para><guilabel>Kernel</guilabel>: use the file chooser to select the kernel
to use with QEMU</para></listitem>
<listitem><para><guilabel>Root filesystem</guilabel>: use the file chooser to select
the root filesystem image, this should be an image (not a
tarball)</para></listitem>
</itemizedlist>
<!-- DISABLED, TOO BIG!
<screenshot>
<mediaobject>
<imageobject>
<imagedata fileref="screenshots/ss-anjuta-poky-2.png" format="PNG"/>
</imageobject>
<caption>
<para>Anjuta Preferences Dialog</para>
</caption>
</mediaobject>
</screenshot>
-->
</section>
<section id="platdev-appdev-external-anjuta-usage">
<title>Using the Anjuta plugin</title>
<para>As an example, this section walks through cross-compiling a project,
deploying it into QEMU, running a debugger against it and then doing a
system-wide profile.</para>
<para>Choose <menuchoice><guimenu>Build</guimenu><guimenuitem>Run
Configure</guimenuitem></menuchoice> or
<menuchoice><guimenu>Build</guimenu><guimenuitem>Run
Autogenerate</guimenuitem></menuchoice> to run "configure"
(or to run "autogen") for the project. This passes command line
arguments to instruct it to cross-compile.</para>
<para>Next do
<menuchoice><guimenu>Build</guimenu><guimenuitem>Build
Project</guimenuitem></menuchoice> to build and compile the
project. If you have previously built the project in the same
tree without using the cross-compiler you may find that your
project fails to link. Simply do
<menuchoice><guimenu>Build</guimenu><guimenuitem>Clean
Project</guimenuitem></menuchoice> to remove the old
binaries. You may then try building again.</para>
<para>Next start QEMU by using
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Start
QEMU</guimenuitem></menuchoice>, this will start QEMU and
will show any error messages in the message view. Once Poky has
fully booted within QEMU you may now deploy into it.</para>
<para>Once built and QEMU is running, choose
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Deploy</guimenuitem></menuchoice>,
this will install the package into a temporary directory and
then copy using rsync over SSH into the target. Progress and
messages will be shown in the message view.</para>
<para>To debug a program installed onto the target choose
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Debug
remote</guimenuitem></menuchoice>. This prompts for the
local binary to debug and also the command line to run on the
target. The command line to run should include the full path to
the binary installed on the target. This will start a
gdbserver over SSH on the target and also an instance of a
cross-gdb in a local terminal. This will be preloaded to connect
to the server and use the <guilabel>SDK root</guilabel> to find
symbols. This gdb will connect to the target and load in
various libraries and the target program. You should set up any
breakpoints or watchpoints now since you might not be able to
interrupt the execution later. You may stop
the debugger on the target using
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Stop
debugger</guimenuitem></menuchoice>.</para>
<para>It is also possible to execute a command on the target over
SSH; the appropriate environment will be set for the
execution. Choose
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Run
remote</guimenuitem></menuchoice> to do this. This will open
a terminal with the SSH command inside.</para>
<para>To do a system wide profile against the system running in
QEMU choose
<menuchoice><guimenu>Tools</guimenu><guimenuitem>Profile
remote</guimenuitem></menuchoice>. This will start up
OProfileUI with the appropriate parameters to connect to the
server running inside QEMU and will also supply the path to the
debug information necessary to get a useful profile.</para>
</section>
</section>
<section id="platdev-appdev-qemu">
<title>Developing externally in QEMU</title>
<para>
Running Poky QEMU images is covered in the <link
linkend='intro-quickstart-qemu'>Running an Image</link> section.
</para>
<para>
Poky's QEMU images contain a complete native toolchain. This means
that applications can be developed within QEMU in the same way as on a
normal system. Using qemux86 on an x86 machine is fast since the
guest and host architectures match, qemuarm is slower but gives
faithful emulation of ARM specific issues. To speed things up these
images support using distcc to call a cross-compiler outside the
emulated system too. If <command>runqemu</command> was used to start
QEMU, and distccd is present on the host system, any bitbake cross
compiling toolchain available from the build system will automatically
be used from within qemu simply by calling distcc
(<command>export CC="distcc"</command> can be set in the environment).
Alternatively, if a suitable SDK/toolchain is present in
<filename class="directory">/usr/local/poky</filename> it will also
automatically be used.
</para>
<para>
There are several options for connecting into the emulated system.
QEMU provides a framebuffer interface which has standard consoles
available. There is also a serial connection available which has a
console to the system running on it and IP networking as standard.
The images have a dropbear ssh server running with the root password
disabled allowing standard ssh and scp commands to work. The images
also contain an NFS server exporting the guest's root filesystem
allowing that to be made available to the host.
</para>
</section>
<section id="platdev-appdev-chroot">
<title>Developing externally in a chroot</title>
<para>
If you have a system that matches the architecture of the Poky machine you're using,
such as qemux86, you can run binaries directly from the image on the host system
using a chroot combined with tools like <ulink url='http://projects.o-hand.com/xephyr'>Xephyr</ulink>.
</para>
<para>
Poky has some scripts to make using its qemux86 images within a chroot easier. To use
these you need to install the poky-scripts package or otherwise obtain the
<filename>poky-chroot-setup</filename> and <filename>poky-chroot-run</filename> scripts.
You also need Xephyr and chrootuid binaries available. To initialize a system use the setup script:
</para>
<para>
<literallayout class='monospaced'>
# poky-chroot-setup &lt;qemux86-rootfs.tgz&gt; &lt;target-directory&gt;
</literallayout>
</para>
<para>
which will unpack the specified qemux86 rootfs tarball into the target-directory.
You can then start the system with:
</para>
<para>
<literallayout class='monospaced'>
# poky-chroot-run &lt;target-directory&gt; &lt;command&gt;
</literallayout>
</para>
<para>
where the target-directory is the place the rootfs was unpacked to and command is
an optional command to run. If no command is specified, the system will drop you
within a bash shell. A Xephyr window will be displayed containing the emulated
system and you may be asked for a password since some of the commands used for
bind mounting directories need to be run using sudo.
</para>
<para>
There are limits as to how far the realism of the chroot environment extends.
It is useful for simple development work or quick tests but full system emulation
with QEMU offers a much more realistic environment for more complex development
tasks. Note that chroot support within Poky is still experimental.
</para>
</section>
<section id="platdev-appdev-insitu">
<title>Developing in Poky directly</title>
<para>
Working directly in Poky is a fast and effective development technique.
The idea is that you can directly edit files in
<glossterm><link linkend='var-WORKDIR'>WORKDIR</link></glossterm>
or the source directory <glossterm><link linkend='var-S'>S</link></glossterm>
and then force specific tasks to rerun in order to test the changes.
An example session working on the matchbox-desktop package might
look like this:
</para>
<para>
<literallayout class='monospaced'>
$ bitbake matchbox-desktop
$ sh
$ cd tmp/work/armv5te-poky-linux-gnueabi/matchbox-desktop-2.0+svnr1708-r0/
$ cd matchbox-desktop-2
$ vi src/main.c
$ exit
$ bitbake matchbox-desktop -c compile -f
$ bitbake matchbox-desktop
</literallayout>
</para>
<para>
Here, we build the package, change into the work directory for the package,
change a file, then recompile the package. Instead of using sh like this,
you can also use two different terminals. The risk with working like this
is that a command like unpack could wipe out the changes you've made to the
work directory so you need to work carefully.
</para>
<para>
It is useful when making changes directly to the work directory files to do
so using quilt as detailed in the <link linkend='usingpoky-modifying-packages-quilt'>
modifying packages with quilt</link> section. The resulting patches can be copied
into the recipe directory and used directly in the <glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm>.
</para>
<para>
For a review of the skills used in this section see Sections <link
linkend="usingpoky-components-bitbake">2.1.1</link> and <link
linkend="usingpoky-debugging-taskrunning">2.4.2</link>.
</para>
</section>
<section id="platdev-appdev-devshell">
<title>Developing with 'devshell'</title>
<para>
When debugging certain commands or even to just edit packages, the
'devshell' can be a useful tool. To start it you run a command like:
</para>
<para>
<literallayout class='monospaced'>
$ bitbake matchbox-desktop -c devshell
</literallayout>
</para>
<para>
which will open a terminal with a shell prompt within the Poky
environment. This means PATH is setup to include the cross toolchain,
the pkgconfig variables are setup to find the right .pc files,
configure will be able to find the Poky site files etc. Within this
environment, you can run configure or compile command as if they
were being run by Poky itself. You are also changed into the
source (<glossterm><link linkend='var-S'>S</link></glossterm>)
directory automatically. When finished with the shell just exit it
or close the terminal window.
</para>
<para>
The default shell used by devshell is the gnome-terminal. Other
forms of terminal can also be used by setting the <glossterm>
<link linkend='var-TERMCMD'>TERMCMD</link></glossterm> and <glossterm>
<link linkend='var-TERMCMDRUN'>TERMCMDRUN</link></glossterm> variables
in local.conf. For examples of the other options available, see
<filename>meta/conf/bitbake.conf</filename>. An external shell is
launched rather than opening directly into the original terminal
window to make interaction with bitbake's multiple threads easier
and also allow a client/server split of bitbake in the future
(devshell will still work over X11 forwarding or similar).
</para>
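<para>
For instance, to switch devshell to xterm, something along the following
lines could be added to local.conf. This is a sketch only - the exact
variable contents and quoting expected should be checked against
<filename>meta/conf/bitbake.conf</filename> for your version of Poky:
</para>
<programlisting>
TERMCMD = "xterm -T \"$TITLE\""
TERMCMDRUN = "${TERMCMD} -e $SHELLCMDS"
</programlisting>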
<para>
It is worth remembering that inside devshell you need to use the full
compiler name such as <command>arm-poky-linux-gnueabi-gcc</command>
instead of just <command>gcc</command> and the same applies to other
applications from gcc, binutils, libtool etc. Poky will have set up
environment variables such as CC to help applications, such as make,
find the correct tools.
</para>
</section>
<section id="platdev-appdev-srcrev">
<title>Developing within Poky with an external SCM based package</title>
<para>
If you're working on a recipe which pulls from an external SCM it
is possible to have Poky notice new changes added to the
SCM and then build the latest version. This only works for SCMs
where it's possible to get a sensible revision number for changes.
Currently it works for svn, git and bzr repositories.
</para>
<para>
To enable this behaviour it is simply a case of adding <glossterm>
<link linkend='var-SRCREV'>SRCREV</link></glossterm>_pn-<glossterm>
<link linkend='var-PN'>PN</link></glossterm> = "${AUTOREV}" to
local.conf where <glossterm><link linkend='var-PN'>PN</link></glossterm>
is the name of the package for which you want to enable automatic source
revision updating.
</para>
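<para>
For example, to have Poky always build the latest revision of the
matchbox-desktop package used in the earlier examples (any package built
from an svn, git or bzr SRC_URI can be substituted), the following line
would be added to local.conf:
</para>
<programlisting>
SRCREV_pn-matchbox-desktop = "${AUTOREV}"
</programlisting>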
</section>
</section>
<section id="platdev-gdb-remotedebug">
<title>Debugging with GDB Remotely</title>
<para>
<ulink url="http://sourceware.org/gdb/">GDB</ulink> (The GNU Project Debugger)
allows you to examine running programs to understand and fix problems and
also to perform post-mortem style analysis of program crashes. It is available
as a package within poky and installed by default in sdk images. It works best
when -dbg packages for the application being debugged are installed as the
extra symbols give more meaningful output from GDB.
</para>
<para>
Sometimes, due to memory or disk space constraints, it is not possible
to use GDB directly on the remote target to debug applications. This is
due to the fact that
GDB needs to load the debugging information and the binaries of the
process being debugged. GDB then needs to perform many
computations to locate information such as function names, variable
names and values, stack traces, etc. even before starting the debugging
process. This places load on the target system and can alter the
characteristics of the program being debugged.
</para>
<para>
This is where GDBSERVER comes into play as it runs on the remote target
and does not load any debugging information from the debugged process.
Instead, the debugging information processing is done by a GDB instance
running on a distant computer - the host GDB. The host GDB then sends
control commands to GDBSERVER to make it stop or start the debugged
program, as well as read or write some memory regions of that debugged
program. All the debugging information loading and processing as well
as the heavy debugging duty is done by the host GDB, giving the
GDBSERVER running on the target a chance to remain small and fast.
</para>
<para>
As the host GDB is responsible for loading the debugging information and
doing the necessary processing to make actual debugging happen, the
user has to make sure it can access the unstripped binaries complete
with their debugging information and compiled with no optimisations. The
host GDB must also have local access to all the libraries used by the
debugged program. On the remote target the binaries can remain stripped
as GDBSERVER does not need any debugging information there. However they
must also be compiled without optimisation, to match the host's binaries.
</para>
<para>
The binary being debugged on the remote target machine is hence referred
to as the 'inferior' in keeping with GDB documentation and terminology.
Further documentation on GDB is available
<ulink url="http://sourceware.org/gdb/documentation/">on their site</ulink>.
</para>
<section id="platdev-gdb-remotedebug-launch-gdbserver">
<title>Launching GDBSERVER on the target</title>
<para>
First, make sure gdbserver is installed on the target. If not,
install the gdbserver package (which needs the libthread-db1
package).
</para>
<para>
To launch GDBSERVER on the target and make it ready to "debug" a
program located at <emphasis>/path/to/inferior</emphasis>, connect
to the target and launch:
<programlisting>$ gdbserver localhost:2345 /path/to/inferior</programlisting>
After that, gdbserver should be listening on port 2345 for debugging
commands coming from a remote GDB process running on the host computer.
Communication between the GDBSERVER and the host GDB will be done using
TCP. To use other communication protocols please refer to the
GDBSERVER documentation.
</para>
</section>
<section id="platdev-gdb-remotedebug-launch-gdb">
<title>Launching GDB on the host computer</title>
<para>
Running GDB on the host computer takes a number of stages, described in the
following sections.
</para>
<section id="platdev-gdb-remotedebug-launch-gdb-buildcross">
<title>Build the cross GDB package</title>
<para>
A suitable gdb cross binary is required which runs on your host computer but
knows about the ABI of the remote target. This can be obtained from
the Poky toolchain, e.g.
<filename>/usr/local/poky/eabi-glibc/arm/bin/arm-poky-linux-gnueabi-gdb</filename>
where "arm" is the target architecture and "linux-gnueabi" the target ABI.
</para>
<para>
Alternatively this can be built directly by Poky. To do this you would build
the gdb-cross package so for example you would run:
<programlisting>bitbake gdb-cross</programlisting>
Once built, the cross gdb binary can be found at
<programlisting>tmp/cross/bin/&lt;target-abi&gt;-gdb </programlisting>
</para>
</section>
<section id="platdev-gdb-remotedebug-launch-gdb-inferiorbins">
<title>Making the inferior binaries available</title>
<para>
The inferior binary needs to be available to GDB complete with all debugging
symbols in order to get the best possible results along with any libraries
the inferior depends on and their debugging symbols. There are a number of
ways this can be done.
</para>
<para>
Perhaps the easiest is to have an 'sdk' image corresponding to the plain
image installed on the device. In the case of 'poky-image-sato',
'poky-image-sdk' would contain suitable symbols. The sdk images already
have the debugging symbols installed so it's just a question of expanding the
archive to some location and telling GDB where this is.
</para>
<para>
Alternatively, poky can build a custom directory of files for a specific
debugging purpose by reusing its tmp/rootfs directory, on the host computer
in a slightly different way to normal. This directory contains the contents
of the last built image. This process assumes the image running on the
target was the last image to be built by Poky, and that the package <emphasis>foo</emphasis>
containing the inferior binary to be debugged has been built without
optimisation and has debugging information available.
</para>
<para>
Firstly you want to install the <emphasis>foo</emphasis> package to tmp/rootfs
by doing:
</para>
<programlisting>tmp/staging/i686-linux/usr/bin/opkg-cl -f \
tmp/work/&lt;target-abi&gt;/poky-image-sato-1.0-r0/temp/opkg.conf -o \
tmp/rootfs/ update</programlisting>
<para>
then,
</para>
<programlisting>tmp/staging/i686-linux/usr/bin/opkg-cl -f \
tmp/work/&lt;target-abi&gt;/poky-image-sato-1.0-r0/temp/opkg.conf \
-o tmp/rootfs install foo
tmp/staging/i686-linux/usr/bin/opkg-cl -f \
tmp/work/&lt;target-abi&gt;/poky-image-sato-1.0-r0/temp/opkg.conf \
-o tmp/rootfs install foo-dbg</programlisting>
<para>
which installs the debugging information too.
</para>
</section>
<section id="platdev-gdb-remotedebug-launch-gdb-launchhost">
<title>Launch the host GDB</title>
<para>
To launch the host GDB, run the cross gdb binary identified above with
the inferior binary specified on the commandline:
<programlisting>&lt;target-abi&gt;-gdb rootfs/usr/bin/foo</programlisting>
This loads the binary of program <emphasis>foo</emphasis>
as well as its debugging information. Once the gdb prompt
appears, you must instruct GDB to load all the libraries
of the inferior from tmp/rootfs:
<programlisting>set solib-absolute-prefix /path/to/tmp/rootfs</programlisting>
where <filename>/path/to/tmp/rootfs</filename> must be
the absolute path to <filename>tmp/rootfs</filename> or wherever the
binaries with debugging information are located.
</para>
<para>
Now, tell GDB to connect to the GDBSERVER running on the remote target:
<programlisting>target remote remote-target-ip-address:2345</programlisting>
Where remote-target-ip-address is the IP address of the
remote target where the GDBSERVER is running. 2345 is the
port on which the GDBSERVER is running.
</para>
</section>
<section id="platdev-gdb-remotedebug-launch-gdb-using">
<title>Using the Debugger</title>
<para>
Debugging can now proceed as normal, as if the debugging were being done on the
local machine. For example, to tell GDB to break in the <emphasis>main</emphasis>
function:
<programlisting>break main</programlisting>
and then to tell GDB to "continue" the inferior execution,
<programlisting>continue</programlisting>
</para>
<para>
For more information about using GDB please see the
project's online documentation at <ulink
url="http://sourceware.org/gdb/download/onlinedocs/"/>.
</para>
</section>
</section>
</section>
<section id="platdev-oprofile">
<title>Profiling with OProfile</title>
<para>
<ulink url="http://oprofile.sourceforge.net/">OProfile</ulink> is a
statistical profiler well suited to finding performance
bottlenecks in both userspace software and the kernel. It provides
answers to questions like "Which functions does my application spend
the most time in when doing X?". Poky is well integrated with OProfile
to make profiling applications on target hardware straightforward.
</para>
<para>
To use OProfile you need an image with OProfile installed. The easiest
way to do this is with "tools-profile" in <glossterm><link
linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm>. You also
need debugging symbols to be available on the system where the analysis
will take place. This can be achieved with "dbg-pkgs" in <glossterm><link
linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm> or by
installing the appropriate -dbg packages. For
successful call graph analysis the binaries must preserve the frame
pointer register and hence should be compiled with the
"-fno-omit-frame-pointer" flag. In Poky this can be achieved with
<glossterm><link linkend='var-SELECTED_OPTIMIZATION'>SELECTED_OPTIMIZATION
</link></glossterm> = "-fexpensive-optimizations -fno-omit-frame-pointer
-frename-registers -O2" or by setting <glossterm><link
linkend='var-DEBUG_BUILD'>DEBUG_BUILD</link></glossterm> = "1" in
local.conf (the latter will also add extra debug information making the
debug packages large).
</para>
<section id="platdev-oprofile-target">
<title>Profiling on the target</title>
<para>
All the profiling work can be performed on the target device. A
simple OProfile session might look like:
</para>
<para>
<literallayout class='monospaced'>
# opcontrol --reset
# opcontrol --start --separate=lib --no-vmlinux -c 5
[do whatever is being profiled]
# opcontrol --stop
$ opreport -cl
</literallayout>
</para>
<para>
Here, the reset command clears any previously profiled data and
OProfile is then started. The options used to start OProfile mean
dynamic library data is kept separately per application, kernel
profiling is disabled and callgraphing is enabled up to 5 levels
deep. To profile the kernel, you would specify the
<parameter>--vmlinux=/path/to/vmlinux</parameter> option (the vmlinux file is usually in
<filename class="directory">/boot/</filename> in Poky and must match the running kernel). The profile is
then stopped and the results viewed with opreport with options
to see the separate library symbols and callgraph information.
</para>
<para>
Callgraphing means OProfile not only logs information about which
functions time is being spent in but also which functions
called those functions (their parents) and which functions that
function calls (its children). The higher the callgraphing depth,
the more accurate the results but this also increases the logging
overhead so it should be used with caution. On ARM, binaries need
to have the frame pointer enabled for callgraphing to work (compile
with the gcc option -fno-omit-frame-pointer).
</para>
<para>
For more information on using OProfile please see the OProfile
online documentation at <ulink
url="http://oprofile.sourceforge.net/docs/"/>.
</para>
</section>
<section id="platdev-oprofile-oprofileui">
<title>Using OProfileUI</title>
<para>
A graphical user interface for OProfile is also available. You can
either use prebuilt Debian packages from the <ulink
url='http://debian.o-hand.com/'>OpenedHand repository</ulink> or
download and build from svn at
http://svn.o-hand.com/repos/oprofileui/trunk/. If the
"tools-profile" image feature is selected, all necessary binaries
are installed onto the target device for OProfileUI interaction.
</para>
<!-- DISABLED, need a more 'contextual' shot?
<screenshot>
<mediaobject>
<imageobject>
<imagedata fileref="screenshots/ss-oprofile-viewer.png" format="PNG"/>
</imageobject>
<caption>
<para>OProfileUI Viewer showing an application being profiled on a remote device</para>
</caption>
</mediaobject>
</screenshot>
-->
<para>
In order to convert the data in the sample format from the target
to the host the <filename>opimport</filename> program is needed.
This is not included in standard Debian OProfile packages but an
OProfile package with this addition is also available from the <ulink
url='http://debian.o-hand.com/'>OpenedHand repository</ulink>.
We recommend using OProfile 0.9.3 or greater. Other patches to
OProfile may be needed for recent OProfileUI features, but Poky
usually includes all needed patches on the target device. Please
see the <ulink
url='http://svn.o-hand.com/repos/oprofileui/trunk/README'>
OProfileUI README</ulink> for up to date information, and the
<ulink url="http://labs.o-hand.com/oprofileui">OProfileUI website
</ulink> for more information on the OProfileUI project.
</para>
<section id="platdev-oprofile-oprofileui-online">
<title>Online mode</title>
<para>
This assumes a working network connection with the target
hardware. In this case you just need to run <command>
"oprofile-server"</command> on the device. By default it listens
on port 4224. This can be changed with the <parameter>--port</parameter> command line
option.
</para>
<para>
The client program is called <command>oprofile-viewer</command>. The
UI is relatively straightforward, the key functionality is accessed
through the buttons on the toolbar (which are duplicated in the
menus.) These buttons are:
</para>
<itemizedlist>
<listitem>
<para>
Connect - connect to the remote host, the IP address or hostname for the
target can be supplied here.
</para>
</listitem>
<listitem>
<para>
Disconnect - disconnect from the target.
</para>
</listitem>
<listitem>
<para>
Start - start the profiling on the device.
</para>
</listitem>
<listitem>
<para>
Stop - stop the profiling on the device and download the data to the local
host. This will generate the profile and show it in the viewer.
</para>
</listitem>
<listitem>
<para>
Download - download the data from the target, generate the profile and show it
in the viewer.
</para>
</listitem>
<listitem>
<para>
Reset - reset the sample data on the device. This will remove the sample
information that was collected on a previous sampling run. Ensure you do this
if you do not want to include old sample information.
</para>
</listitem>
<listitem>
<para>
Save - save the data downloaded from the target to another directory for later
examination.
</para>
</listitem>
<listitem>
<para>
Open - load data that was previously saved.
</para>
</listitem>
</itemizedlist>
<para>
The behaviour of the client is to download the complete 'profile archive' from
the target to the host for processing. This archive is a directory containing
the sample data, the object files and the debug information for said object
files. This archive is then converted using a script included in this
distribution ('oparchconv') that uses 'opimport' to convert the archive from
the target to something that can be processed on the host.
</para>
<para>
Downloaded archives are kept in /tmp and cleared up when they are no longer in
use.
</para>
<para>
If you wish to profile the kernel, this is possible; you just need to ensure
a vmlinux file matching the running kernel is available. In Poky this is usually
located in /boot/vmlinux-KERNELVERSION, where KERNELVERSION is the version of
the kernel e.g. 2.6.23. Poky generates separate vmlinux packages for each kernel
it builds so it should be a question of just ensuring a matching package is
installed (<command>opkg install kernel-vmlinux</command>). These are automatically
installed into development and profiling images alongside OProfile. There is a
configuration option within the OProfileUI settings page where the location of
the vmlinux file can be entered.
</para>
<para>
Waiting for debug symbols to transfer from the device can be slow and it's not
always necessary to actually have them on device for OProfile use. All that is
needed is a copy of the filesystem with the debug symbols present on the viewer
system. The <link linkend='platdev-gdb-remotedebug-launch-gdb'>GDB remote debug
section</link> covers how to create such a directory with Poky and the location
of this directory can again be specified in the OProfileUI settings dialog. If
specified, it will be used where the file checksums match those on the system
being profiled.
</para>
</section>
<section id="platdev-oprofile-oprofileui-offline">
<title>Offline mode</title>
<para>
If no network access to the target is available, an archive for processing in
'oprofile-viewer' can be generated with the following set of commands.
</para>
<para>
<literallayout class='monospaced'>
# opcontrol --reset
# opcontrol --start --separate=lib --no-vmlinux -c 5
[do whatever is being profiled]
# opcontrol --stop
# oparchive -o my_archive
</literallayout>
</para>
<para>
Where my_archive is the name of the archive directory where you would like the
profile archive to be kept. The directory will be created for you. This can
then be copied to another host and loaded using the open functionality of
'oprofile-viewer'. The archive will be converted if necessary.
</para>
</section>
</section>
</section>
</chapter>
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -1,7 +0,0 @@
DESCRIPTION = "GNU Helloworld application"
SECTION = "examples"
LICENSE = "GPLv3"
SRC_URI = "${GNU_MIRROR}/hello/hello-${PV}.tar.bz2"
inherit autotools

View File

@@ -1,8 +0,0 @@
#include <stdio.h>
int main(void)
{
printf("Hello world!\n");
return 0;
}

View File

@@ -1,16 +0,0 @@
DESCRIPTION = "Simple helloworld application"
SECTION = "examples"
LICENSE = "MIT"
SRC_URI = "file://helloworld.c"
S = "${WORKDIR}"
do_compile() {
	${CC} helloworld.c -o helloworld
}

do_install() {
	install -d ${D}${bindir}
	install -m 0755 helloworld ${D}${bindir}
}

View File

@@ -1,13 +0,0 @@
require xorg-lib-common.inc
DESCRIPTION = "X11 Pixmap library"
LICENSE = "X-BSD"
DEPENDS += "libxext"
PR = "r2"
PE = "1"
XORG_PN = "libXpm"
PACKAGES =+ "sxpm cxpm"
FILES_cxpm = "${bindir}/cxpm"
FILES_sxpm = "${bindir}/sxpm"

View File

@@ -1,13 +0,0 @@
DESCRIPTION = "Tools for managing memory technology devices."
SECTION = "base"
DEPENDS = "zlib"
HOMEPAGE = "http://www.linux-mtd.infradead.org/"
LICENSE = "GPLv2"
SRC_URI = "ftp://ftp.infradead.org/pub/mtd-utils/mtd-utils-${PV}.tar.gz"
CFLAGS_prepend = "-I ${S}/include "
do_install() {
	oe_runmake install DESTDIR=${D}
}

View File

@@ -1,726 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id='extendpoky'>
<title>Extending Poky</title>
<para>
This section gives information about how to extend the functionality
already present in Poky, documenting standard tasks such as adding new
software packages, extending or customising images or porting poky to
new hardware (adding a new machine). It also contains advice about how
to manage the process of making changes to Poky to achieve best results.
</para>
<section id='usingpoky-extend-addpkg'>
<title>Adding a Package</title>
<para>
To add a package into Poky you need to write a recipe for it.
Writing a recipe means creating a .bb file which sets various
variables. The variables
useful for recipes are detailed in the <link linkend='ref-varlocality-recipe-required'>
recipe reference</link> section along with more detailed information
about issues such as recipe naming.
</para>
<para>
The simplest way to add a new package is to base it on a similar
pre-existing recipe. There are some examples below of how to add
standard types of packages:
</para>
<section id='usingpoky-extend-addpkg-singlec'>
<title>Single .c File Package (Hello World!)</title>
<para>
To build an application from a single file stored locally requires a
recipe which has the file listed in the <glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm> variable. In addition
the <function>do_compile</function> and <function>do_install</function>
tasks need to be manually written. The <glossterm><link linkend='var-S'>
S</link></glossterm> variable defines the directory containing the source
code which in this case is set equal to <glossterm><link linkend='var-WORKDIR'>
WORKDIR</link></glossterm>, the directory BitBake uses for the build.
</para>
<programlisting>
DESCRIPTION = "Simple helloworld application"
SECTION = "examples"
LICENSE = "MIT"
SRC_URI = "file://helloworld.c"
S = "${WORKDIR}"
do_compile() {
	${CC} helloworld.c -o helloworld
}

do_install() {
	install -d ${D}${bindir}
	install -m 0755 helloworld ${D}${bindir}
}
</programlisting>
<para>
As a result of the build process "helloworld" and "helloworld-dbg"
packages will be built.
</para>
</section>
<section id='usingpoky-extend-addpkg-autotools'>
<title>Autotooled Package</title>
<para>
Applications which use autotools (autoconf, automake)
require a recipe which has a source archive listed in
<glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm> and
<command>inherit autotools</command> to instruct BitBake to use the
<filename>autotools.bbclass</filename> which has
definitions of all the steps
needed to build an autotooled application.
The result of the build will be automatically packaged and if
the application uses NLS to localise then packages with
locale information will be generated (one package per
language).
</para>
<programlisting>
DESCRIPTION = "GNU Helloworld application"
SECTION = "examples"
LICENSE = "GPLv2"
SRC_URI = "${GNU_MIRROR}/hello/hello-${PV}.tar.bz2"
inherit autotools
</programlisting>
</section>
<section id='usingpoky-extend-addpkg-makefile'>
<title>Makefile-Based Package</title>
<para>
Applications which use GNU make require a recipe which has
the source archive listed in <glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm>.
Adding a <function>do_compile</function> step
is not needed as by default BitBake will start the "make"
command to compile the application. If there is a need for
additional options to make then they should be stored in the
<glossterm><link
linkend='var-EXTRA_OEMAKE'>EXTRA_OEMAKE</link></glossterm> variable - BitBake
will pass them into the GNU
make invocation. A <function>do_install</function> task needs to be written
- otherwise BitBake will run an empty <function>do_install</function>
task by default and nothing will be installed.
</para>
<para>
Some applications may require extra parameters to be passed to
the compiler, for example an additional header path. This can
be done by adding to the <glossterm><link
linkend='var-CFLAGS'>CFLAGS</link></glossterm> variable, as in the example below.
</para>
<programlisting>
DESCRIPTION = "Tools for managing memory technology devices."
SECTION = "base"
DEPENDS = "zlib"
HOMEPAGE = "http://www.linux-mtd.infradead.org/"
LICENSE = "GPLv2"
SRC_URI = "ftp://ftp.infradead.org/pub/mtd-utils/mtd-utils-${PV}.tar.gz"
CFLAGS_prepend = "-I ${S}/include "
do_install() {
	oe_runmake install DESTDIR=${D}
}
</programlisting>
</section>
<section id='usingpoky-extend-addpkg-files'>
<title>Controlling packages content</title>
<para>
The variables <glossterm><link
linkend='var-PACKAGES'>PACKAGES</link></glossterm> and
<glossterm><link linkend='var-FILES'>FILES</link></glossterm> are used to split an
application into multiple packages.
</para>
<para>
Below the "libXpm" recipe is used as an example. By
default the "libXpm" recipe generates one package
which contains the library
and also a few binaries. The recipe can be adapted to
split the binaries into separate packages.
</para>
<programlisting>
require xorg-lib-common.inc
DESCRIPTION = "X11 Pixmap library"
LICENSE = "X-BSD"
DEPENDS += "libxext"
PE = "1"
XORG_PN = "libXpm"
PACKAGES =+ "sxpm cxpm"
FILES_cxpm = "${bindir}/cxpm"
FILES_sxpm = "${bindir}/sxpm"
</programlisting>
<para>
In this example we want to ship the "sxpm" and "cxpm" binaries
in separate packages. Since "bindir" would be packaged into the
main <glossterm><link linkend='var-PN'>PN</link></glossterm>
package as standard we prepend the <glossterm><link
linkend='var-PACKAGES'>PACKAGES</link></glossterm> variable so
additional package names are added to the start of the list. The
extra <glossterm><link linkend='var-FILES'>FILES</link></glossterm>_*
variables then contain information to specify which files and
directories go into which package.
</para>
</section>
<section id='usingpoky-extend-addpkg-postinstalls'>
<title>Post Install Scripts</title>
<para>
To add a post-installation script to a package, add
a <function>pkg_postinst_PACKAGENAME()</function>
function to the .bb file
where PACKAGENAME is the name of the package to attach
the postinst script to. A post-installation function has the following structure:
</para>
<programlisting>
pkg_postinst_PACKAGENAME () {
	#!/bin/sh -e
	# Commands to carry out
}
</programlisting>
<para>
The script defined in the post installation function
gets called when the rootfs is made. If the script succeeds,
the package is marked as installed. If the script fails,
the package is marked as unpacked and the script will be
executed again on the first boot of the image.
</para>
<para>
Sometimes it is necessary that the execution of a post-installation
script is delayed until the first boot, because the script
needs to be executed on the device itself. To delay script execution
until boot time, the post-installation function should have the
following structure:
</para>
<programlisting>
pkg_postinst_PACKAGENAME () {
	#!/bin/sh -e
	if [ x"$D" = "x" ]; then
		# Actions to carry out on the device go here
	else
		exit 1
	fi
}
</programlisting>
<para>
The structure above delays execution until first boot
because the <glossterm><link
linkend='var-D'>D</link></glossterm> variable points
to the 'image'
directory when the rootfs is being made at build time but
is unset when executed on the first boot.
</para>
</section>
</section>
<section id='usingpoky-extend-customimage'>
<title>Customising Images</title>
<para>
Poky images can be customised to satisfy
particular requirements. Several methods are detailed below
along with guidelines of when to use them.
</para>
<section id='usingpoky-extend-customimage-custombb'>
<title>Customising Images through a custom image .bb files</title>
<para>
One way to get additional software into an image is by creating a
custom image. The recipe will contain two lines:
</para>
<programlisting>
IMAGE_INSTALL = "task-poky-x11-base package1 package2"
inherit poky-image
</programlisting>
<para>
By creating a custom image, a developer has total control
over the contents of the image. It is important to use
the correct names of packages in the <glossterm><link
linkend='var-IMAGE_INSTALL'>IMAGE_INSTALL</link></glossterm> variable.
The names must be in
the OpenEmbedded notation instead of Debian notation, for example
"glibc-dev" instead of "libc6-dev" etc.
</para>
<para>
The other method of creating a new image is by modifying
an existing image. For example if a developer wants to add
"strace" into "poky-image-sato" the following recipe can
be used:
</para>
<programlisting>
require poky-image-sato.bb
IMAGE_INSTALL += "strace"
</programlisting>
</section>
<section id='usingpoky-extend-customimage-customtasks'>
<title>Customising Images through custom tasks</title>
<para>
For complex custom images, the best approach is to create a custom
task package which is then used to build the image (or images). A good
example of a tasks package is <filename>meta/packages/tasks/task-poky.bb
</filename>. The <glossterm><link linkend='var-PACKAGES'>PACKAGES</link></glossterm>
variable lists the task packages to build (along with the complementary
-dbg and -dev packages). For each package added,
<glossterm><link linkend='var-RDEPENDS'>RDEPENDS</link></glossterm> and
<glossterm><link linkend='var-RRECOMMENDS'>RRECOMMENDS</link></glossterm>
entries can then be added each containing a list of packages the parent
task package should contain. An example would be:
</para>
<para>
<programlisting>
DESCRIPTION = "My Custom Tasks"

PACKAGES = "\
    task-custom-apps \
    task-custom-apps-dbg \
    task-custom-apps-dev \
    task-custom-tools \
    task-custom-tools-dbg \
    task-custom-tools-dev \
    "

RDEPENDS_task-custom-apps = "\
    dropbear \
    portmap \
    psplash"

RDEPENDS_task-custom-tools = "\
    oprofile \
    oprofileui-server \
    lttng-control \
    lttng-viewer"

RRECOMMENDS_task-custom-tools = "\
    kernel-module-oprofile"
</para>
<para>
In this example, two task packages are created, task-custom-apps and
task-custom-tools with the dependencies and recommended package dependencies
listed. To build an image using these task packages, you would then add
"task-custom-apps" and/or "task-custom-tools" to <glossterm><link
linkend='var-IMAGE_INSTALL'>IMAGE_INSTALL</link></glossterm> or other forms
of image dependencies as described in other areas of this section.
</para>
</section>
<section id='usingpoky-extend-customimage-imagefeatures'>
<title>Customising Images through custom <glossterm><link linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm></title>
<para>
Ultimately users may want to add extra image "features" as used by Poky with the
<glossterm><link linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm>
variable. To create these, the best reference is <filename>meta/classes/poky-image.bbclass</filename>
which illustrates how poky achieves this. In summary, the file looks at the contents of the
<glossterm><link linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm>
variable and based on this generates the <glossterm><link linkend='var-IMAGE_INSTALL'>
IMAGE_INSTALL</link></glossterm> variable automatically. Extra features can be added by
extending the class or creating a custom class for use with specialised image .bb files.
</para>
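<para>
As a rough sketch of the approach (the feature name below is illustrative
and ties back to the task package example earlier; the exact helper used
should be checked against <filename>meta/classes/poky-image.bbclass</filename>),
a custom class could map a new feature onto packages like this:
</para>
<programlisting>
IMAGE_INSTALL += '${@base_contains("IMAGE_FEATURES", "apps-custom", "task-custom-apps", "", d)}'
</programlisting>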
</section>
<section id='usingpoky-extend-customimage-localconf'>
<title>Customising Images through local.conf</title>
<para>
It is possible to customise image contents by abusing
variables used by distribution maintainers in local.conf.
This method only allows the addition of packages and
is not recommended.
</para>
<para>
To add the "strace" package into the image the following is
added to local.conf:
</para>
<programlisting>
DISTRO_EXTRA_RDEPENDS += "strace"
</programlisting>
<para>
However, since the <glossterm><link linkend='var-DISTRO_EXTRA_RDEPENDS'>
DISTRO_EXTRA_RDEPENDS</link></glossterm> variable is for
distribution maintainers this method does not make
adding packages as simple as a custom .bb file. Using
this method, a few packages will need to be recreated
and then the image rebuilt.
</para>
<programlisting>
bitbake -cclean task-boot task-base task-poky
bitbake poky-image-sato
</programlisting>
<para>
Cleaning task-* packages is required because they use the
<glossterm><link linkend='var-DISTRO_EXTRA_RDEPENDS'>
DISTRO_EXTRA_RDEPENDS</link></glossterm> variable. There is no need to
build them by hand as Poky images depend on the packages they contain so
dependencies will be built automatically. For this reason we don't use the
"rebuild" task in this case since "rebuild" does not care about
dependencies - it only rebuilds the specified package.
</para>
</section>
</section>
<section id="platdev-newmachine">
<title>Porting Poky to a new machine</title>
<para>
Adding a new machine to Poky is a straightforward process and
this section gives an idea of the changes that are needed. This guide is
meant to cover adding machines similar to those Poky already supports.
Adding a totally new architecture might require gcc/glibc changes as
well as updates to the site information and, whilst well within Poky's
capabilities, is outside the scope of this section.
</para>
<section id="platdev-newmachine-conffile">
<title>Adding the machine configuration file</title>
<para>
A .conf file needs to be added to conf/machine/ with details of the
device being added. The name of the file determines the name Poky will
use to reference this machine.
</para>
<para>
The most important variables to set in this file are <glossterm>
<link linkend='var-TARGET_ARCH'>TARGET_ARCH</link></glossterm>
(e.g. "arm"), <glossterm><link linkend='var-PREFERRED_PROVIDER'>
PREFERRED_PROVIDER</link></glossterm>_virtual/kernel (see below) and
<glossterm><link linkend='var-MACHINE_FEATURES'>MACHINE_FEATURES
</link></glossterm> (e.g. "kernel26 apm screen wifi"). Other variables
like <glossterm><link linkend='var-SERIAL_CONSOLE'>SERIAL_CONSOLE
</link></glossterm> (e.g. "115200 ttyS0"), <glossterm>
<link linkend='var-KERNEL_IMAGETYPE'>KERNEL_IMAGETYPE</link>
</glossterm> (e.g. "zImage") and <glossterm><link linkend='var-IMAGE_FSTYPES'>
IMAGE_FSTYPES</link></glossterm> (e.g. "tar.gz jffs2") might also be
needed. Full details on what these variables do and the meaning of
their contents is available through the links.
</para>
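<para>
As a minimal sketch (the machine name and kernel recipe name are
purely illustrative), such a file might contain:
</para>
<programlisting>
# conf/machine/mymachine.conf - hypothetical example
TARGET_ARCH = "arm"
PREFERRED_PROVIDER_virtual/kernel = "linux-mymachine"
MACHINE_FEATURES = "kernel26 apm screen wifi"
SERIAL_CONSOLE = "115200 ttyS0"
KERNEL_IMAGETYPE = "zImage"
IMAGE_FSTYPES = "tar.gz jffs2"
</programlisting>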
</section>
<section id="platdev-newmachine-kernel">
<title>Adding a kernel for the machine</title>
<para>
Poky needs to be able to build a kernel for the machine. You need
to either create a new kernel recipe for this machine or extend an
existing recipe. There are plenty of kernel examples in the
packages/linux directory which can be used as references.
</para>
<para>
If creating a new recipe the "normal" recipe writing rules apply
for setting up a <glossterm><link linkend='var-SRC_URI'>SRC_URI
</link></glossterm> including any patches and setting <glossterm>
<link linkend='var-S'>S</link></glossterm> to point at the source
code. You will need to create a configure task which configures the
unpacked kernel with a defconfig, be that through a "make defconfig"
command or, more usually, through copying in a suitable defconfig and
running "make oldconfig". By making use of "inherit kernel" and also
maybe some of the linux-*.inc files, most other functionality is
centralised and the defaults of the class normally work well.
</para>
<para>
If extending an existing kernel, it is usually a case of adding a
suitable defconfig file in a location similar to that used by other
machines' defconfig files in a given kernel, possibly listing it in
the SRC_URI and adding the machine to the expression in <glossterm>
<link linkend='var-COMPATIBLE_MACHINE'>COMPATIBLE_MACHINE</link>
</glossterm>.
</para>
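<para>
As a rough sketch of the new-recipe case (the file names, kernel
version and machine name are hypothetical, and KERNELORG_MIRROR is
assumed to be provided by the distro configuration), the key elements
are a SRC_URI including a defconfig, S pointing at the unpacked
source, and a configure step:
</para>
<programlisting>
# linux-mymachine_2.6.23.bb - hypothetical example
inherit kernel

COMPATIBLE_MACHINE = "mymachine"
S = "${WORKDIR}/linux-2.6.23"
SRC_URI = "${KERNELORG_MIRROR}/pub/linux/kernel/v2.6/linux-2.6.23.tar.bz2 \
           file://defconfig"

do_configure_prepend() {
    # Copy in a suitable defconfig and bring it up to date
    cp ${WORKDIR}/defconfig ${S}/.config
    yes '' | oe_runmake oldconfig
}
</programlisting>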
</section>
<section id="platdev-newmachine-formfactor">
<title>Adding a formfactor configuration file</title>
<para>
A formfactor configuration file provides information about the
target hardware on which Poky is running, and that Poky cannot
obtain from other sources such as the kernel. Some examples of
information contained in a formfactor configuration file include
framebuffer orientation, whether or not the system has a keyboard,
the positioning of the keyboard in relation to the screen, and
screen resolution.
</para>
<para>
Sane defaults should be used in most cases, but if customisation is
necessary you need to create a <filename>machconfig</filename> file
under <filename>meta/packages/formfactor/files/MACHINENAME/</filename>
where <literal>MACHINENAME</literal> is the name for which this information
applies. For information about the settings available and the defaults, please see
<filename>meta/packages/formfactor/files/config</filename>.
</para>
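<para>
As an illustrative sketch only (the values shown here are invented;
the config file above remains the authoritative list of settings and
defaults), a machconfig file could contain:
</para>
<programlisting>
# Hypothetical machconfig example
HAVE_TOUCHSCREEN=1
HAVE_KEYBOARD=0
DISPLAY_CAN_ROTATE=0
DISPLAY_ORIENTATION=0
DISPLAY_WIDTH_PIXELS=480
DISPLAY_HEIGHT_PIXELS=640
DISPLAY_BPP=16
DISPLAY_DPI=200
</programlisting>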
</section>
</section>
<section id='usingpoky-changes'>
<title>Making and Maintaining Changes</title>
<para>
We recognise that people will want to extend/configure/optimise Poky for
their specific uses, especially due to the extreme configurability and
flexibility Poky offers. To make it easy to keep pace with future
changes in Poky, we recommend making your changes in a controlled way.
</para>
<para>
Poky supports the idea of <link
linkend='usingpoky-changes-collections'>"collections"</link> which when used
properly can massively ease future upgrades and allow segregation
between the Poky core and a given developer's changes. Some other advice on
managing changes to Poky is also given in the following section.
</para>
<section id="usingpoky-changes-collections">
<title>Bitbake Collections</title>
<para>
Often, people want to extend Poky either through adding packages
or overriding files contained within Poky to add their own
functionality. Bitbake has a powerful mechanism called
collections which provides a way to handle this; it is fully
supported and actively encouraged within Poky.
</para>
<para>
In the standard tree, meta-extras is an example of how you can
do this. As standard, the data in meta-extras is not used in a
Poky build, but local.conf.sample shows how to enable it:
</para>
<para>
<literallayout class='monospaced'>
BBFILES := "${OEROOT}/meta/packages/*/*.bb ${OEROOT}/meta-extras/packages/*/*.bb"
BBFILE_COLLECTIONS = "normal extras"
BBFILE_PATTERN_normal = "^${OEROOT}/meta/"
BBFILE_PATTERN_extras = "^${OEROOT}/meta-extras/"
BBFILE_PRIORITY_normal = "5"
BBFILE_PRIORITY_extras = "5"</literallayout>
</para>
<para>
As can be seen, the extra recipes are added to BBFILES. The
BBFILE_COLLECTIONS variable is then set to contain a list of
collection names. The BBFILE_PATTERN variables are regular
expressions used to match files from BBFILES into a particular
collection, in this case by using the base pathname.
The BBFILE_PRIORITY variable then assigns the different
priorities to the files in different collections. This is useful
in situations where the same package might appear in both
repositories and allows you to choose which collection should
'win'.
</para>
<para>
This works well for recipes. For bbclasses and configuration
files, you can use the BBPATH environment variable. In this
case, the first file with the matching name found in BBPATH is
the one that is used, just like the PATH variable for binaries.
</para>
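<para>
For example, a hypothetical meta-local directory containing custom
classes and configuration files could be prepended to BBPATH before
starting a build:
</para>
<para>
<literallayout class='monospaced'>
export BBPATH="${OEROOT}/meta-local:$BBPATH"</literallayout>
</para>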
</section>
<section id='usingpoky-changes-commits'>
<title>Committing Changes</title>
<para>
Modifications to Poky are often managed under some kind of source
revision control system. The policy for committing to such systems
is important as some simple policy can significantly improve
usability. The tips below are based on the policy that OpenedHand
uses for commits to Poky.
</para>
<para>
It helps to use a consistent style for commit messages when committing
changes. We've found that a style where the first line of a commit message
summarises the change and starts with the name of any package affected
works well. Not all changes are to specific packages, so the prefix could
also be a machine name or class name instead. If a change needs a longer
description, this should follow the summary.
</para>
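<para>
For example (the package and change are invented for illustration), a
commit message following this style might read:
</para>
<para>
<literallayout class='monospaced'>
matchbox-desktop: fix icon rendering at small sizes

The fallback code assumed a fixed icon size, which broke themes
that only ship larger icons. Scale the icon down instead.</literallayout>
</para>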
<para>
Any commit should be self contained in that it should leave the
metadata in a consistent state, buildable before and after the
commit. This helps ensure the autobuilder test results are valid
but is good practice regardless.
</para>
</section>
<section id='usingpoky-changes-prbump'>
<title>Package Revision Incrementing</title>
<para>
If a committed change will result in changing the package output
then the value of the <glossterm><link linkend='var-PR'>PR</link>
</glossterm> variable needs to be increased (commonly referred to
as 'bumped') as part of that commit. Only integer values are used
and <glossterm><link linkend='var-PR'>PR</link></glossterm> =
"r0" should not be added into new recipes as this is default value.
When upgrading the version of a package (<glossterm><link
linkend='var-PV'>PV</link></glossterm>), the <glossterm><link
linkend='var-PR'>PR</link></glossterm> variable should be removed.
</para>
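<para>
For example, a commit changing the output of a recipe whose current
revision is "r1" would include a change such as (values illustrative):
</para>
<para>
<literallayout class='monospaced'>
-PR = "r1"
+PR = "r2"</literallayout>
</para>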
<para>
The aim is that the package version will only ever increase. If
for some reason <glossterm><link linkend='var-PV'>PV</link></glossterm>
will change but not increase, the <glossterm><link
linkend='var-PE'>PE</link></glossterm> (Package Epoch) can
be increased (it defaults to '0'). The version numbers aim to
follow the <ulink url='http://www.debian.org/doc/debian-policy/ch-controlfields.html'>
Debian Version Field Policy Guidelines</ulink> which define how
versions are compared and hence what "increasing" means.
</para>
<para>
There are two reasons for doing this. The first is to ensure that
when a developer updates and rebuilds, they get all the changes to
the repository and don't have to remember to rebuild any sections.
The second is to ensure that target users are able to upgrade their
devices via their package manager such as with the <command>
opkg update;opkg upgrade</command> commands (or similar for
dpkg/apt or rpm based systems). The aim is to ensure Poky has
upgradable packages in all cases.
</para>
</section>
</section>
<section id='usingpoky-modifing-packages'>
<title>Modifying Package Source Code</title>
<para>
Poky is usually used to build software rather than to modify
it. However, there are ways Poky can be used to modify software.
</para>
<para>
During building, the sources are available in <glossterm><link
linkend='var-WORKDIR'>WORKDIR</link></glossterm> directory.
Where exactly this is depends on the type of package and the
architecture of the target device. For a standard recipe not
related to <glossterm><link
linkend='var-MACHINE'>MACHINE</link></glossterm> it will be
<filename>tmp/work/PACKAGE_ARCH-poky-TARGET_OS/PN-PV-PR/</filename>.
Target device dependent packages use <glossterm><link
linkend='var-MACHINE'>MACHINE
</link></glossterm>
instead of <glossterm><link linkend='var-PACKAGE_ARCH'>PACKAGE_ARCH
</link></glossterm>
in the directory name.
</para>
<tip>
<para>
Check whether the package recipe sets the <glossterm><link
linkend='var-S'>S</link></glossterm> variable to something
other than the standard <filename>WORKDIR/PN-PV/</filename> value.
</para>
</tip>
<para>
After building a package, a user can modify the package source code
without problem. The easiest way to test changes is by calling the
"compile" task:
</para>
<programlisting>
bitbake --cmd compile --force NAME_OF_PACKAGE
</programlisting>
<para>
Other tasks may also be called this way.
</para>
<section id='usingpoky-modifying-packages-quilt'>
<title>Modifying Package Source Code with quilt</title>
<para>
By default Poky uses <ulink
url='http://savannah.nongnu.org/projects/quilt'>quilt</ulink>
to manage patches in the <function>do_patch</function> task.
It is a powerful tool which can be used to track all
modifications made to package sources.
</para>
<para>
Before modifying the source code it is important to
notify quilt so that it will track the changes in a new
patch file:
<programlisting>
quilt new NAME-OF-PATCH.patch
</programlisting>
Then add all the files which will be modified to that
patch:
<programlisting>
quilt add file1 file2 file3
</programlisting>
Now start editing. When finished, use quilt
to generate the final patch containing all the
modifications:
<programlisting>
quilt refresh
</programlisting>
The resulting patch file can be found in the
<filename class="directory">patches/</filename> subdirectory of the source
(<glossterm><link linkend='var-S'>S</link></glossterm>) directory. For future builds it
should be copied into the
Poky metadata and added to the <glossterm><link
linkend='var-SRC_URI'>SRC_URI</link></glossterm> of the recipe:
<programlisting>
SRC_URI += "file://NAME-OF-PATCH.patch;patch=1"
</programlisting>
This also requires a bump of the <glossterm><link
linkend='var-PR'>PR</link></glossterm> value in the same recipe, since the resulting packages change.
</para>
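<para>
Putting the steps together, a complete session for a hypothetical
package (the package name and paths are illustrative) might look
like:
</para>
<programlisting>
$ cd tmp/work/armv5te-poky-linux-gnueabi/mypackage-1.0-r0/mypackage-1.0
$ quilt new fix-build.patch
$ quilt add Makefile
$ vi Makefile
$ quilt refresh
$ cp patches/fix-build.patch POKY_DIR/meta/packages/mypackage/files/
</programlisting>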
</section>
</section>
</chapter>
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -1,252 +0,0 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<appendix id='faq'>
<title>FAQ</title>
<qandaset>
<qandaentry>
<question>
<para>
How does Poky differ from <ulink url='http://www.openembedded.org/'>OpenEmbedded</ulink>?
</para>
</question>
<answer>
<para>
Poky is a derivative of <ulink
url='http://www.openembedded.org/'>OpenEmbedded</ulink>, a stable,
smaller subset focused on the GNOME Mobile environment. Development
in Poky is closely tied to OpenEmbedded with features being merged
regularly between the two for mutual benefit.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
How can you claim Poky is stable?
</para>
</question>
<answer>
<para>
There are three areas that help with stability;
<itemizedlist>
<listitem>
<para>
We keep Poky small and focused - around 650 packages compared to over 5000 for full OE
</para>
</listitem>
<listitem>
<para>
We only support hardware that we have access to for testing
</para>
</listitem>
<listitem>
<para>
We have a Buildbot which provides continuous build and integration tests
</para>
</listitem>
</itemizedlist>
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
How do I get support for my board added to Poky?
</para>
</question>
<answer>
<para>
There are two main ways to get a board supported in Poky;
<itemizedlist>
<listitem>
<para>
Send us the board if we don't have it yet
</para>
</listitem>
<listitem>
<para>
Send us bitbake recipes if you have them (see the Poky handbook to find out how to create recipes)
</para>
</listitem>
</itemizedlist>
Usually if it's not a completely exotic board then adding support in Poky should be fairly straightforward.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
Are there any products running Poky?
</para>
</question>
<answer>
<para>
The <ulink url='http://vernier.com/labquest/'>Vernier Labquest</ulink> is using Poky (for more about the Labquest see the case study at OpenedHand). There are a number of pre-production devices using Poky and we will announce those as soon as they are released.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
What is the Poky output?
</para>
</question>
<answer>
<para>
The output of a Poky build will depend on how it was started, as the same set of recipes can be used to output various formats. Usually the output is a flashable image ready for the target device.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
How do I add my package to Poky?
</para>
</question>
<answer>
<para>
To add a package you need to create a bitbake recipe - see the Poky handbook to find out how to create a recipe.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
Do I have to reflash my entire board with a new Poky image when recompiling a package?
</para>
</question>
<answer>
<para>
Poky can build packages in various formats: ipk (for ipkg/opkg), Debian package (.deb), or RPM. The packages can then be upgraded using the package tools on the device, much like on a desktop distribution such as Ubuntu or Fedora.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
What is GNOME Mobile? What's the difference between GNOME Mobile and GNOME?
</para>
</question>
<answer>
<para>
<ulink url='http://www.gnome.org/mobile/'>GNOME Mobile</ulink> is a subset of the GNOME platform targeted at mobile and embedded devices. The main difference between GNOME Mobile and standard GNOME is that desktop-orientated libraries have been removed, along with deprecated libraries, creating a much smaller footprint.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
How do I make Poky work in RHEL/CentOS?
</para>
</question>
<answer>
<para>
To get Poky working under RHEL/CentOS 5.1 you first need to install some required packages. The standard CentOS packages needed are:
<itemizedlist>
<listitem>
<para>
"Development tools" (selected during installation)
</para>
</listitem>
<listitem>
<para>
texi2html
</para>
</listitem>
<listitem>
<para>
compat-gcc-34
</para>
</listitem>
</itemizedlist>
</para>
<para>
On top of those the following external packages are needed:
<itemizedlist>
<listitem>
<para>
python-sqlite2 from <ulink
url='http://dag.wieers.com/rpm/packages/python-sqlite2/'>DAG
repository</ulink>
</para>
</listitem>
<listitem>
<para>
help2man from <ulink
url='http://centos.karan.org/el5/extras/testing/i386/RPMS/help2man-1.33.1-2.noarch.rpm'>Karan
repository</ulink>
</para>
</listitem>
</itemizedlist>
</para>
<para>
Once these packages are installed, Poky will be able to build standard images; however, there
may be a problem with QEMU segfaulting. You can either disable the generation of binary
locales by setting <glossterm><link linkend='var-ENABLE_BINARY_LOCALE_GENERATION'>ENABLE_BINARY_LOCALE_GENERATION</link>
</glossterm> to "0" or remove the linux-2.6-execshield.patch from the kernel and rebuild
it, since it's that patch which causes the problems with QEMU.
</para>
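<para>
The first of those workarounds is a single line in
<filename>build/conf/local.conf</filename>:
</para>
<programlisting>
ENABLE_BINARY_LOCALE_GENERATION = "0"
</programlisting>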
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
I see lots of 404 responses for files on http://folks.o-hand.com/~richard/poky/sources/*. Is something wrong?
</para>
</question>
<answer>
<para>
Nothing is wrong; Poky will check any configured source mirrors before downloading
from the upstream sources. It does this when searching for both source archives and
pre-checked-out versions of SCM-managed software, so that in large installations
the load on the SCM servers themselves can be reduced. The address above is one of the
default mirrors configured into standard Poky, so if an upstream source disappears,
we can place sources there and builds continue to work.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
I have machine-specific data in a package for one machine only, but the package is
being marked as machine specific in all cases. How do I stop that?
</para>
</question>
<answer>
<para>
Set <glossterm><link linkend='var-SRC_URI_OVERRIDES_PACKAGE_ARCH'>SRC_URI_OVERRIDES_PACKAGE_ARCH</link>
</glossterm> = "0" in the .bb file but make sure the package is manually marked as
machine specific in the case that needs it. The code which handles <glossterm><link
linkend='var-SRC_URI_OVERRIDES_PACKAGE_ARCH'>SRC_URI_OVERRIDES_PACKAGE_ARCH</link></glossterm>
is in base.bbclass.
</para>
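<para>
In recipe terms that combination might look like this (the machine
override shown is hypothetical):
</para>
<programlisting>
SRC_URI_OVERRIDES_PACKAGE_ARCH = "0"
# Manually mark the package as machine specific for the one
# machine which needs it
PACKAGE_ARCH_mymachine = "${MACHINE_ARCH}"
</programlisting>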
</answer>
</qandaentry>
<qandaentry>
<question>
<para>
I'm behind a firewall and need to use a proxy server. How do I do that?
</para>
</question>
<answer>
<para>
Most source fetching by Poky is done by wget and you therefore need to specify the proxy
settings in a .wgetrc file in your home directory. Example settings in that file would be
'http_proxy = http://proxy.yoyodyne.com:18023/' and 'ftp_proxy = http://proxy.yoyodyne.com:18023/'.
Poky also includes a site.conf.sample file which shows how to configure cvs and git proxy servers
if needed.
</para>
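<para>
Collected into the file, those settings would look like:
</para>
<programlisting>
# ~/.wgetrc
http_proxy = http://proxy.yoyodyne.com:18023/
ftp_proxy = http://proxy.yoyodyne.com:18023/
</programlisting>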
</answer>
</qandaentry>
</qandaset>
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -1,329 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id='intro'>
<title>Introduction</title>
<section id='intro-what-is'>
<title>What is Poky?</title>
<para>
Poky is an open source platform build tool. It is a complete
software development environment for the creation of Linux
devices. It aids the design, development, building, debugging,
simulation and testing of complete modern software stacks
using Linux, the X Window System and GNOME Mobile
based application frameworks. It is based on <ulink
url='http://openembedded.org/'>OpenEmbedded</ulink> but has
been customised with a particular focus.
</para>
<para> Poky was setup to:</para>
<itemizedlist>
<listitem>
<para>Provide a full platform build and development tool based on open source Linux, X11, Matchbox, GTK+, Pimlico, Clutter, and other <ulink url='http://gnome.org/mobile'>GNOME Mobile</ulink> technologies.</para>
</listitem>
<listitem>
<para>Create a focused, stable, subset of OpenEmbedded that can be easily and reliably built and developed upon.</para>
</listitem>
<listitem>
<para>Fully support a wide range of x86 and ARM hardware and device virtualisation</para>
</listitem>
</itemizedlist>
<para>
Poky is primarily a platform builder which generates filesystem images
based on open source software such as the Kdrive X server, the Matchbox
window manager, the GTK+ toolkit and the D-Bus message bus system. Images
for many kinds of devices can be generated, however the standard example
machines target QEMU full system emulation (both x86 and ARM) and the ARM based
Sharp Zaurus series of devices. Poky's ability to boot inside a QEMU
emulator makes it particularly suitable as a test platform for development
of embedded software.
</para>
<para>
An important component integrated within Poky is Sato, a GNOME Mobile
based user interface environment.
It is designed to work well with screens at very high DPI and restricted
size, such as those often found on smartphones and PDAs. It is coded with
a focus on efficiency and speed so that it works smoothly on hand-held and
other embedded hardware. It will sit neatly on top of any device
using the GNOME Mobile stack, providing a well defined user experience.
</para>
<screenshot>
<mediaobject>
<imageobject>
<imagedata fileref="screenshots/ss-sato.png" format="PNG"/>
</imageobject>
<caption>
<para>The Sato Desktop - A screenshot from a machine running a Poky built image</para>
</caption>
</mediaobject>
</screenshot>
<para>
Poky has a growing open source community backed up by commercial support provided by the principal developer and maintainer of Poky, <ulink url="http://o-hand.com/">OpenedHand Ltd</ulink>.
</para>
</section>
<section id='intro-manualoverview'>
<title>Documentation Overview</title>
<para>
The handbook is split into sections covering different aspects of Poky.
The <link linkend='usingpoky'>'Using Poky' section</link> gives an overview
of the components that make up Poky followed by information about using and
debugging the Poky build system. The <link linkend='extendpoky'>'Extending Poky' section</link>
gives information about how to extend and customise Poky along with advice
on how to manage these changes. The <link linkend='platdev'>'Platform Development with Poky'
section</link> gives information about interaction between Poky and target
hardware for common platform development tasks such as software development,
debugging and profiling. The rest of the manual
consists of several reference sections each giving details on a specific
section of Poky functionality.
</para>
<para>
This manual applies to Poky Release 3.1 (Pinky).
</para>
</section>
<section id='intro-requirements'>
<title>System Requirements</title>
<para>
We recommend Debian-based distributions, in particular a recent Ubuntu
release (7.04 or newer), as the host system for Poky. Nothing in Poky is
distribution specific and
other distributions will most likely work as long as the appropriate
prerequisites are installed - we know of Poky being used successfully on Redhat,
SUSE, Gentoo and Slackware host systems.
</para>
<para>On a Debian-based system, you need the following packages installed:</para>
<itemizedlist>
<listitem>
<para>build-essential</para>
</listitem>
<listitem>
<para>python</para>
</listitem>
<listitem>
<para>diffstat</para>
</listitem>
<listitem>
<para>texinfo</para>
</listitem>
<listitem>
<para>texi2html</para>
</listitem>
<listitem>
<para>cvs</para>
</listitem>
<listitem>
<para>subversion</para>
</listitem>
<listitem>
<para>wget</para>
</listitem>
<listitem>
<para>gawk</para>
</listitem>
<listitem>
<para>help2man</para>
</listitem>
<listitem>
<para>bochsbios (only to run qemux86 images)</para>
</listitem>
</itemizedlist>
<para>
Debian users can add debian.o-hand.com to their APT sources (See
<ulink url='http://debian.o-hand.com'/>
for instructions on doing this) and then run <command>
"apt-get install qemu poky-depends poky-scripts"</command> which will
automatically install all these dependencies. OpenedHand can also provide
VMware images with Poky and all dependencies pre-installed if required.
</para>
<para>
Poky can use a system provided QEMU or build its own depending on how it's
configured. See the options in <filename>local.conf</filename> for more details.
</para>
</section>
<section id='intro-quickstart'>
<title>Quick Start</title>
<section id='intro-quickstart-build'>
<title>Building and Running an Image</title>
<para>
If you want to try Poky, you can do so in a few commands. The example below
checks out the Poky source code, sets up a build environment, builds an
image and then runs that image under the QEMU emulator in ARM system emulation mode:
</para>
<para>
<literallayout class='monospaced'>
$ wget http://pokylinux.org/releases/pinky-3.1.tar.gz
$ tar zxvf pinky-3.1.tar.gz
$ cd pinky-3.1/
$ source poky-init-build-env
$ bitbake poky-image-sato
$ runqemu qemuarm
</literallayout>
</para>
<note>
<para>
This process will need Internet access, about 3 GB of disk space
available, and you should expect the build to take about 4 - 5 hours since
it is building an entire Linux system from source including the toolchain!
</para>
</note>
<para>
To build for other machines see the <glossterm><link
linkend='var-MACHINE'>MACHINE</link></glossterm> variable in build/conf/local.conf.
This file contains other useful configuration information and the default version
has examples of common setup needs and is worth
reading. To take advantage of multiple processor cores to speed up builds for example, set the
<glossterm><link linkend='var-BB_NUMBER_THREADS'>BB_NUMBER_THREADS</link></glossterm>
and <glossterm><link linkend='var-PARALLEL_MAKE'>PARALLEL_MAKE</link></glossterm> variables.
The images/kernels built by Poky are placed in the <filename class="directory">tmp/deploy/images</filename>
directory.
</para>
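<para>
For example, on a build machine with four cores the following
local.conf settings (the values are illustrative) would parallelise
the build:
</para>
<para>
<literallayout class='monospaced'>
BB_NUMBER_THREADS = "4"
PARALLEL_MAKE = "-j 4"</literallayout>
</para>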
<para>
You could also run <command>"poky-qemu zImage-qemuarm.bin poky-image-sato-qemuarm.ext2"
</command> within the images directory if you have the poky-scripts Debian package
installed from debian.o-hand.com. This allows the QEMU images to be used standalone
outside the Poky build environment.
</para>
<para>
To setup networking within QEMU see the <link linkend='usingpoky-install-qemu-networking'>
QEMU/USB networking with IP masquerading</link> section.
</para>
</section>
<section id='intro-quickstart-qemu'>
<title>Downloading and Using Prebuilt Images</title>
<para>
Prebuilt images from Poky are also available if you just want to run the system
under QEMU. To use these you need to:
</para>
<itemizedlist>
<listitem>
<para>
Add debian.o-hand.com to your APT sources (See
<ulink url='http://debian.o-hand.com'/> for instructions on doing this)
</para>
</listitem>
<listitem>
<para>Install patched QEMU and poky-scripts:</para>
<para>
<literallayout class='monospaced'>
$ apt-get install qemu poky-scripts
</literallayout>
</para>
</listitem>
<listitem>
<para>
Download a Poky QEMU release kernel (*zImage*qemu*.bin) and compressed
filesystem image (poky-image-*-qemu*.ext2.bz2) which
you'll need to decompress with 'bzip2 -d'. These are available from the
<ulink url='http://pokylinux.org/releases/blinky-3.0/'>last release</ulink>
or from the <ulink url='http://pokylinux.org/autobuild/poky/'>autobuilder</ulink>.
</para>
</listitem>
<listitem>
<para>Start the image:</para>
<para>
<literallayout class='monospaced'>
$ poky-qemu &lt;kernel&gt; &lt;image&gt;
</literallayout>
</para>
</listitem>
</itemizedlist>
<note><para>
A patched version of QEMU is required at present. A suitable version is available from
<ulink url='http://debian.o-hand.com'/>; it can be built
by Poky (bitbake qemu-native) or can be downloaded/built as part of the toolchain/SDK tarballs.
</para></note>
</section>
</section>
<section id='intro-getit'>
<title>Obtaining Poky</title>
<section id='intro-getit-releases'>
<title>Releases</title>
<para>Periodically, we make releases of Poky and these are available
at <ulink url='http://pokylinux.org/releases/'/>.
These are more stable and tested than the nightly development images.</para>
</section>
<section id='intro-getit-nightly'>
<title>Nightly Builds</title>
<para>
We make nightly builds of Poky for testing purposes and to make the
latest developments available. The output from these builds is available
at <ulink url='http://pokylinux.org/autobuild/'/>
where the numbers represent the svn revision the builds were made from.
</para>
<para>
Automated builds are available for "standard" Poky and for Poky SDKs and toolchains as well
as any testing versions we might have such as poky-bleeding. The toolchains can
be used either as external standalone toolchains or can be combined with Poky as a
prebuilt toolchain to reduce build time. Using the external toolchains is simply a
case of untarring the tarball into the root of your system (it only creates files in
<filename class="directory">/usr/local/poky</filename>) and then enabling the option
in <filename>local.conf</filename>.
</para>
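<para>
Installing one of the external toolchain tarballs is then a single
command (the tarball name here is illustrative):
</para>
<para>
<literallayout class='monospaced'>
$ sudo tar xjf poky-toolchain-arm.tar.bz2 -C /</literallayout>
</para>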
</section>
<section id='intro-getit-dev'>
<title>Development Checkouts</title>
<para>
Poky is available from our SVN repository located at
http://svn.o-hand.com/repos/poky/trunk; a web interface to the repository
can be accessed at <ulink url='http://svn.o-hand.com/view/poky/'/>.
</para>
<para>
'trunk' is where the development work takes place and you should use this if you
want to work with the latest cutting-edge developments. It is possible trunk
can suffer temporary periods of instability while new features are developed and
if this is undesirable we recommend using one of the release branches.
</para>
</section>
</section>
</chapter>
<!--
vim: expandtab tw=80 ts=4
-->

Binary file not shown.

Before

Width:  |  Height:  |  Size: 26 KiB

View File

@@ -1,30 +0,0 @@
2008-02-15 Matthew Allum <mallum@openedhand.com>
* common/Makefile.am:
* common/poky-handbook.png:
Add a PNG image for the manual. Seems our logo SVG
is too complex/transparent for PDF
2008-02-14 Matthew Allum <mallum@openedhand.com>
* common/Makefile.am:
* common/fop-config.xml.in:
* common/poky-db-pdf.xsl:
* poky-docbook-to-pdf.in:
Font tweakage.
2008-01-27 Matthew Allum <mallum@openedhand.com>
* INSTALL:
* Makefile.am:
* README:
* autogen.sh:
* common/Makefile.am:
* common/fop-config.xml.in:
* common/ohand-color.svg:
* common/poky-db-pdf.xsl:
* common/poky.svg:
* common/titlepage.templates.xml:
* configure.ac:
* poky-docbook-to-pdf.in:
Initial import.

View File

@@ -1,236 +0,0 @@
Installation Instructions
*************************
Copyright (C) 1994, 1995, 1996, 1999, 2000, 2001, 2002, 2004, 2005 Free
Software Foundation, Inc.
This file is free documentation; the Free Software Foundation gives
unlimited permission to copy, distribute and modify it.
Basic Installation
==================
These are generic installation instructions.
The `configure' shell script attempts to guess correct values for
various system-dependent variables used during compilation. It uses
those values to create a `Makefile' in each directory of the package.
It may also create one or more `.h' files containing system-dependent
definitions. Finally, it creates a shell script `config.status' that
you can run in the future to recreate the current configuration, and a
file `config.log' containing compiler output (useful mainly for
debugging `configure').
It can also use an optional file (typically called `config.cache'
and enabled with `--cache-file=config.cache' or simply `-C') that saves
the results of its tests to speed up reconfiguring. (Caching is
disabled by default to prevent problems with accidental use of stale
cache files.)
If you need to do unusual things to compile the package, please try
to figure out how `configure' could check whether to do them, and mail
diffs or instructions to the address given in the `README' so they can
be considered for the next release. If you are using the cache, and at
some point `config.cache' contains results you don't want to keep, you
may remove or edit it.
The file `configure.ac' (or `configure.in') is used to create
`configure' by a program called `autoconf'. You only need
`configure.ac' if you want to change it or regenerate `configure' using
a newer version of `autoconf'.
The simplest way to compile this package is:
1. `cd' to the directory containing the package's source code and type
`./configure' to configure the package for your system. If you're
using `csh' on an old version of System V, you might need to type
`sh ./configure' instead to prevent `csh' from trying to execute
`configure' itself.
Running `configure' takes awhile. While running, it prints some
messages telling which features it is checking for.
2. Type `make' to compile the package.
3. Optionally, type `make check' to run any self-tests that come with
the package.
4. Type `make install' to install the programs and any data files and
documentation.
5. You can remove the program binaries and object files from the
source code directory by typing `make clean'. To also remove the
files that `configure' created (so you can compile the package for
a different kind of computer), type `make distclean'. There is
also a `make maintainer-clean' target, but that is intended mainly
for the package's developers. If you use it, you may have to get
all sorts of other programs in order to regenerate files that came
with the distribution.
Compilers and Options
=====================
Some systems require unusual options for compilation or linking that the
`configure' script does not know about. Run `./configure --help' for
details on some of the pertinent environment variables.
You can give `configure' initial values for configuration parameters
by setting variables in the command line or in the environment. Here
is an example:
./configure CC=c89 CFLAGS=-O2 LIBS=-lposix
*Note Defining Variables::, for more details.
Compiling For Multiple Architectures
====================================
You can compile the package for more than one kind of computer at the
same time, by placing the object files for each architecture in their
own directory. To do this, you must use a version of `make' that
supports the `VPATH' variable, such as GNU `make'. `cd' to the
directory where you want the object files and executables to go and run
the `configure' script. `configure' automatically checks for the
source code in the directory that `configure' is in and in `..'.
If you have to use a `make' that does not support the `VPATH'
variable, you have to compile the package for one architecture at a
time in the source code directory. After you have installed the
package for one architecture, use `make distclean' before reconfiguring
for another architecture.
Installation Names
==================
By default, `make install' installs the package's commands under
`/usr/local/bin', include files under `/usr/local/include', etc. You
can specify an installation prefix other than `/usr/local' by giving
`configure' the option `--prefix=PREFIX'.
You can specify separate installation prefixes for
architecture-specific files and architecture-independent files. If you
pass the option `--exec-prefix=PREFIX' to `configure', the package uses
PREFIX as the prefix for installing programs and libraries.
Documentation and other data files still use the regular prefix.
In addition, if you use an unusual directory layout you can give
options like `--bindir=DIR' to specify different values for particular
kinds of files. Run `configure --help' for a list of the directories
you can set and what kinds of files go in them.
If the package supports it, you can cause programs to be installed
with an extra prefix or suffix on their names by giving `configure' the
option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'.
Optional Features
=================
Some packages pay attention to `--enable-FEATURE' options to
`configure', where FEATURE indicates an optional part of the package.
They may also pay attention to `--with-PACKAGE' options, where PACKAGE
is something like `gnu-as' or `x' (for the X Window System). The
`README' should mention any `--enable-' and `--with-' options that the
package recognizes.
For packages that use the X Window System, `configure' can usually
find the X include and library files automatically, but if it doesn't,
you can use the `configure' options `--x-includes=DIR' and
`--x-libraries=DIR' to specify their locations.
Specifying the System Type
==========================
There may be some features `configure' cannot figure out automatically,
but needs to determine by the type of machine the package will run on.
Usually, assuming the package is built to be run on the _same_
architectures, `configure' can figure that out, but if it prints a
message saying it cannot guess the machine type, give it the
`--build=TYPE' option. TYPE can either be a short name for the system
type, such as `sun4', or a canonical name which has the form:
CPU-COMPANY-SYSTEM
where SYSTEM can have one of these forms:
OS KERNEL-OS
See the file `config.sub' for the possible values of each field. If
`config.sub' isn't included in this package, then this package doesn't
need to know the machine type.
If you are _building_ compiler tools for cross-compiling, you should
use the option `--target=TYPE' to select the type of system they will
produce code for.
If you want to _use_ a cross compiler, that generates code for a
platform different from the build platform, you should specify the
"host" platform (i.e., that on which the generated programs will
eventually be run) with `--host=TYPE'.
Sharing Defaults
================
If you want to set default values for `configure' scripts to share, you
can create a site shell script called `config.site' that gives default
values for variables like `CC', `cache_file', and `prefix'.
`configure' looks for `PREFIX/share/config.site' if it exists, then
`PREFIX/etc/config.site' if it exists. Or, you can set the
`CONFIG_SITE' environment variable to the location of the site script.
A warning: not all `configure' scripts look for a site script.
Defining Variables
==================
Variables not defined in a site shell script can be set in the
environment passed to `configure'. However, some packages may run
configure again during the build, and the customized values of these
variables may be lost. In order to avoid this problem, you should set
them in the `configure' command line, using `VAR=value'. For example:
./configure CC=/usr/local2/bin/gcc
causes the specified `gcc' to be used as the C compiler (unless it is
overridden in the site shell script). Here is another example:
/bin/bash ./configure CONFIG_SHELL=/bin/bash
Here the `CONFIG_SHELL=/bin/bash' operand causes subsequent
configuration-related scripts to be executed by `/bin/bash'.
`configure' Invocation
======================
`configure' recognizes the following options to control how it operates.
`--help'
`-h'
Print a summary of the options to `configure', and exit.
`--version'
`-V'
Print the version of Autoconf used to generate the `configure'
script, and exit.
`--cache-file=FILE'
Enable the cache: use and save the results of the tests in FILE,
traditionally `config.cache'. FILE defaults to `/dev/null' to
disable caching.
`--config-cache'
`-C'
Alias for `--cache-file=config.cache'.
`--quiet'
`--silent'
`-q'
Do not print messages saying which checks are being made. To
suppress all normal output, redirect it to `/dev/null' (any error
messages will still be shown).
`--srcdir=DIR'
Look for the package's source code in directory DIR. Usually
`configure' can determine that directory automatically.
`configure' also accepts some other, not widely useful, options. Run
`configure --help' for more details.

View File

@@ -1,18 +0,0 @@
SUBDIRS = common
EXTRA_DIST = poky-docbook-to-pdf.in
bin_SCRIPTS = poky-docbook-to-pdf
edit = sed \
-e 's,@datadir\@,$(pkgdatadir),g' \
-e 's,@prefix\@,$(prefix),g' \
-e 's,@version\@,@VERSION@,g'
poky-docbook-to-pdf: poky-docbook-to-pdf.in
rm -f poky-docbook-to-pdf
$(edit) poky-docbook-to-pdf.in > poky-docbook-to-pdf
clean-local:
rm -fr poky-docbook-to-pdf
rm -fr poky-pr-docbook-to-pdf

View File

@@ -1,24 +0,0 @@
poky-doc-tools
==============
Simple tools to wrap fop to create oh branded PDF's from docbook sources.
(based on OH doc tools)
Dependencies
============
Sun Java, make sure the java in your path is the *sun* java.
xlstproc, nwalsh style sheets.
FOP, installed - see http://www.sagehill.net/docbookxsl/InstallingAnFO.html.
Also a 'fop' binary, eg I have;
% cat ~/bin/fop
#!/bin/sh
java org.apache.fop.apps.Fop "$@"

View File

@@ -1,3 +0,0 @@
#! /bin/sh
autoreconf -v --install || exit 1
./configure --enable-maintainer-mode --enable-debug "$@"

View File

@@ -1,21 +0,0 @@
SUPPORT_FILES = VeraMoBd.ttf VeraMoBd.xml \
VeraMono.ttf VeraMono.xml \
Vera.ttf Vera.xml \
draft.png titlepage.templates.xml \
poky-db-pdf.xsl poky.svg \
ohand-color.svg poky-handbook.png
commondir = $(pkgdatadir)/common
common_DATA = $(SUPPORT_FILES) fop-config.xml
EXTRA_DIST = $(SUPPORT_FILES) fop-config.xml.in
edit = sed -e 's,@datadir\@,$(pkgdatadir),g'
fop-config.xml: fop-config.xml.in
rm -f fop-config.xml
$(edit) fop-config.xml.in > fop-config.xml
clean-local:
rm -fr fop-config.xml

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

Binary file not shown.

Before

Width:  |  Height:  |  Size: 24 KiB

View File

@@ -1,33 +0,0 @@
<configuration>
<entry>
<!--
Set the baseDir so common/openedhand.svg references in plans still
work ok. Note, relative file references to current dir should still work.
-->
<key>baseDir</key>
<value>@datadir@</value>
</entry>
<fonts>
<font metrics-file="@datadir@/common/VeraMono.xml"
kerning="yes"
embed-file="@datadir@/common/VeraMono.ttf">
<font-triplet name="veramono" style="normal" weight="normal"/>
</font>
<font metrics-file="@datadir@/common/VeraMoBd.xml"
kerning="yes"
embed-file="@datadir@/common/VeraMoBd.ttf">
<font-triplet name="veramono" style="normal" weight="bold"/>
</font>
<font metrics-file="@datadir@/common/Vera.xml"
kerning="yes"
embed-file="@datadir@/common/Vera.ttf">
<font-triplet name="verasans" style="normal" weight="normal"/>
<font-triplet name="verasans" style="normal" weight="bold"/>
<font-triplet name="verasans" style="italic" weight="normal"/>
<font-triplet name="verasans" style="italic" weight="bold"/>
</font>
</fonts>
</configuration>

View File

@@ -1,150 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://web.resource.org/cc/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="141.17999"
height="55.34"
id="svg2207"
sodipodi:version="0.32"
inkscape:version="0.45"
version="1.0"
sodipodi:docname="ohand-color.svg"
inkscape:output_extension="org.inkscape.output.svg.inkscape"
sodipodi:docbase="/home/mallum/Projects/admin/oh-doc-tools/common"
sodipodi:modified="true">
<defs
id="defs3" />
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="1.2"
inkscape:cx="160"
inkscape:cy="146.21189"
inkscape:document-units="mm"
inkscape:current-layer="layer1"
height="55.34px"
width="141.18px"
inkscape:window-width="772"
inkscape:window-height="581"
inkscape:window-x="5"
inkscape:window-y="48" />
<metadata
id="metadata2211">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1">
<g
id="g2094"
style="fill:#6d6d70;fill-opacity:1"
inkscape:export-filename="/home/mallum/Desktop/g2126.png"
inkscape:export-xdpi="312.71841"
inkscape:export-ydpi="312.71841"
transform="matrix(0.5954767,0,0,0.5954767,31.793058,-18.471052)">
<g
id="g19"
style="fill:#6d6d70;fill-opacity:1">
<path
style="fill:#6d6d70;fill-opacity:1"
id="path21"
d="M 48.693,50.633 C 40.282,50.633 33.439,57.477 33.439,65.888 L 33.439,81.142 L 41.066,81.142 L 41.066,65.888 C 41.066,61.684 44.486,58.261 48.692,58.261 C 52.897,58.261 56.32,61.684 56.32,65.888 C 56.32,70.093 52.897,73.516 48.692,73.516 C 45.677,73.516 43.065,71.756 41.828,69.211 L 41.828,79.504 C 43.892,80.549 46.224,81.142 48.692,81.142 C 57.103,81.142 63.947,74.3 63.947,65.888 C 63.948,57.477 57.104,50.633 48.693,50.633 z " />
</g>
<path
style="fill:#6d6d70;fill-opacity:1"
id="path23"
d="M 18.486,50.557 C 26.942,50.557 33.819,57.435 33.819,65.888 C 33.819,74.344 26.942,81.223 18.486,81.223 C 10.032,81.223 3.152,74.344 3.152,65.888 C 3.152,57.435 10.032,50.557 18.486,50.557 z M 18.486,73.556 C 22.713,73.556 26.153,70.118 26.153,65.888 C 26.153,61.661 22.713,58.222 18.486,58.222 C 14.258,58.222 10.819,61.661 10.819,65.888 C 10.82,70.117 14.259,73.556 18.486,73.556 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path25"
d="M 94.074,107.465 L 94.074,96.016 C 94.074,87.605 87.233,80.763 78.822,80.763 C 70.41,80.763 63.567,87.605 63.567,96.016 C 63.567,104.427 70.41,111.269 78.822,111.269 C 81.289,111.269 83.621,110.676 85.685,109.631 L 85.685,99.339 C 84.448,101.885 81.836,103.644 78.822,103.644 C 74.615,103.644 71.194,100.221 71.194,96.016 C 71.194,91.81 74.615,88.388 78.822,88.388 C 83.026,88.388 86.448,91.81 86.448,96.016 L 86.448,107.456 C 86.448,109.562 88.156,111.268 90.262,111.268 C 92.364,111.269 94.068,109.566 94.074,107.465 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path27"
d="M 124.197,95.814 C 124.088,87.496 117.293,80.762 108.949,80.762 C 100.59,80.762 93.783,87.52 93.697,95.856 L 93.693,95.856 L 93.695,107.456 C 93.695,109.562 95.402,111.268 97.509,111.268 C 99.611,111.268 101.316,109.566 101.321,107.464 L 101.321,95.994 L 101.321,95.994 C 101.333,91.798 104.747,88.388 108.948,88.388 C 113.147,88.388 116.563,91.798 116.575,95.994 L 116.575,107.456 C 116.575,109.562 118.282,111.268 120.387,111.268 C 122.492,111.268 124.201,109.562 124.201,107.456 L 124.201,95.814 L 124.197,95.814 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path29"
d="M 63.946,96.005 L 63.946,95.854 L 63.943,95.854 L 63.943,95.815 L 63.942,95.815 C 63.833,87.497 57.037,80.761 48.693,80.761 C 48.682,80.761 48.671,80.763 48.658,80.763 C 48.382,80.763 48.107,80.772 47.833,80.786 C 47.75,80.791 47.668,80.799 47.586,80.806 C 47.378,80.822 47.172,80.838 46.968,80.862 C 46.884,80.873 46.801,80.882 46.719,80.893 C 46.508,80.92 46.298,80.952 46.091,80.987 C 46.024,80.999 45.958,81.01 45.891,81.024 C 45.649,81.068 45.406,81.119 45.168,81.175 C 45.14,81.183 45.112,81.189 45.085,81.195 C 43.656,81.542 42.306,82.092 41.065,82.812 L 41.065,80.761 L 33.438,80.761 L 33.438,95.857 L 33.435,95.857 L 33.435,107.457 C 33.435,109.563 35.142,111.269 37.248,111.269 C 39.093,111.269 40.632,109.958 40.984,108.217 C 41.036,107.963 41.065,107.702 41.065,107.435 L 41.065,95.873 C 41.086,94.732 41.357,93.65 41.828,92.685 L 41.828,92.693 C 42.598,91.106 43.905,89.824 45.511,89.085 C 45.519,89.08 45.529,89.076 45.536,89.073 C 45.849,88.928 46.174,88.807 46.508,88.707 C 46.523,88.704 46.536,88.699 46.55,88.696 C 46.699,88.651 46.85,88.614 47.004,88.576 C 47.025,88.575 47.046,88.567 47.069,88.562 C 47.234,88.527 47.402,88.495 47.572,88.469 C 47.586,88.468 47.6,88.466 47.615,88.463 C 47.763,88.443 47.916,88.427 48.067,88.415 C 48.106,88.41 48.145,88.407 48.186,88.404 C 48.352,88.393 48.52,88.386 48.691,88.386 C 52.888,88.387 56.304,91.797 56.316,95.992 L 56.316,107.454 C 56.316,109.56 58.023,111.266 60.13,111.266 C 61.976,111.266 63.516,109.954 63.867,108.211 C 63.919,107.963 63.946,107.706 63.946,107.442 L 63.946,96.024 C 63.946,96.021 63.947,96.018 63.947,96.015 C 63.948,96.011 63.946,96.008 63.946,96.005 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path31"
d="M 180.644,50.633 C 178.539,50.633 176.832,52.341 176.832,54.447 L 176.832,65.887 C 176.832,70.092 173.41,73.513 169.203,73.513 C 164.998,73.513 161.576,70.092 161.576,65.887 C 161.576,61.683 164.998,58.26 169.203,58.26 C 172.219,58.26 174.83,60.019 176.068,62.565 L 176.068,52.271 C 174.004,51.225 171.673,50.632 169.203,50.632 C 160.793,50.632 153.951,57.476 153.951,65.887 C 153.951,74.298 160.793,81.141 169.203,81.141 C 177.615,81.141 184.459,74.298 184.459,65.887 L 184.459,54.447 C 184.458,52.341 182.751,50.633 180.644,50.633 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path33"
d="M 124.203,77.339 L 124.203,65.687 L 124.197,65.687 C 124.088,57.371 117.293,50.633 108.949,50.633 C 100.592,50.633 93.783,57.393 93.697,65.731 L 93.695,65.731 L 93.695,65.877 C 93.695,65.882 93.693,65.885 93.693,65.888 C 93.693,65.891 93.695,65.896 93.695,65.899 L 93.695,77.33 C 93.695,79.435 95.402,81.142 97.509,81.142 C 99.614,81.142 101.321,79.435 101.321,77.33 L 101.321,65.868 C 101.333,61.672 104.747,58.261 108.948,58.261 C 113.147,58.261 116.563,61.672 116.575,65.868 L 116.575,65.868 L 116.575,77.329 C 116.575,79.434 118.282,81.141 120.389,81.141 C 122.492,81.142 124.197,79.44 124.203,77.339 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path35"
d="M 150.517,80.761 C 148.41,80.761 146.703,82.469 146.703,84.575 L 146.703,96.015 C 146.703,100.22 143.283,103.643 139.076,103.643 C 134.871,103.643 131.449,100.22 131.449,96.015 C 131.449,91.808 134.871,88.387 139.076,88.387 C 142.092,88.387 144.703,90.145 145.941,92.692 L 145.941,82.397 C 143.875,81.353 141.545,80.76 139.076,80.76 C 130.666,80.76 123.822,87.604 123.822,96.015 C 123.822,104.426 130.666,111.268 139.076,111.268 C 147.486,111.268 154.33,104.426 154.33,96.015 L 154.33,84.575 C 154.33,82.469 152.623,80.761 150.517,80.761 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path37"
d="M 82.625,77.345 C 82.625,75.247 80.923,73.547 78.826,73.547 L 78.826,81.142 C 80.922,81.142 82.625,79.442 82.625,77.345 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path39"
d="M 90.252,69.685 C 92.35,69.685 94.048,67.987 94.048,65.888 L 86.453,65.888 C 86.453,67.986 88.154,69.685 90.252,69.685 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path41"
d="M 93.832,77.329 C 93.832,75.223 92.125,73.516 90.018,73.516 L 78.825,73.516 C 74.619,73.516 71.199,70.093 71.199,65.888 C 71.199,61.684 74.619,58.261 78.825,58.261 C 83.032,58.261 86.453,61.684 86.453,65.888 C 86.453,68.904 84.694,71.514 82.149,72.752 L 92.442,72.752 C 93.488,70.689 94.08,68.356 94.08,65.888 C 94.08,57.477 87.237,50.633 78.826,50.633 C 70.415,50.633 63.571,57.477 63.571,65.888 C 63.571,74.3 70.415,81.142 78.826,81.142 L 90.018,81.142 C 92.125,81.142 93.832,79.435 93.832,77.329 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path43"
d="M 142.869,77.345 C 142.869,75.247 141.168,73.547 139.07,73.547 L 139.07,81.142 C 141.167,81.142 142.869,79.442 142.869,77.345 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path45"
d="M 150.496,69.685 C 152.594,69.685 154.293,67.987 154.293,65.888 L 146.699,65.888 C 146.699,67.986 148.398,69.685 150.496,69.685 z " />
<path
style="fill:#6d6d70;fill-opacity:1"
id="path47"
d="M 154.076,77.329 C 154.076,75.223 152.367,73.516 150.262,73.516 L 139.07,73.516 C 134.865,73.516 131.443,70.093 131.443,65.888 C 131.443,61.684 134.865,58.261 139.07,58.261 C 143.275,58.261 146.699,61.684 146.699,65.888 C 146.699,68.904 144.939,71.514 142.392,72.752 L 152.687,72.752 C 153.73,70.689 154.324,68.356 154.324,65.888 C 154.324,57.477 147.48,50.633 139.07,50.633 C 130.66,50.633 123.816,57.477 123.816,65.888 C 123.816,74.3 130.66,81.142 139.07,81.142 L 150.261,81.142 C 152.367,81.142 154.076,79.435 154.076,77.329 z " />
</g>
<g
id="g2126"
transform="matrix(0.7679564,0,0,0.7679564,-66.520631,11.42903)"
inkscape:export-xdpi="312.71841"
inkscape:export-ydpi="312.71841"
style="fill:#35992a;fill-opacity:1">
<g
transform="translate(86.33975,4.23985e-2)"
style="fill:#35992a;fill-opacity:1"
id="g2114">
<g
style="fill:#35992a;fill-opacity:1"
id="g2116">
<path
id="path2118"
transform="translate(-86.33975,-4.239934e-2)"
d="M 89.96875,0.03125 C 87.962748,0.031250001 86.34375,1.6815001 86.34375,3.6875 L 86.34375,17.71875 L 86.34375,19.6875 L 86.34375,28.90625 C 86.343752,39.06825 94.61925,47.34375 104.78125,47.34375 L 113.375,47.34375 L 123.1875,47.34375 L 127.15625,47.34375 C 129.16325,47.343749 130.8125,45.72475 130.8125,43.71875 C 130.8125,41.71275 129.16325,40.09375 127.15625,40.09375 L 123.1875,40.09375 L 123.1875,19.6875 L 123.1875,14.65625 L 123.1875,3.6875 C 123.1875,1.6815 121.5675,0.03125 119.5625,0.03125 C 117.5555,0.031250001 115.9375,1.6815001 115.9375,3.6875 L 115.9375,14.28125 C 115.1185,13.65425 114.26275,13.109 113.34375,12.625 L 113.34375,3.6875 C 113.34475,1.6815 111.6925,0.03125 109.6875,0.03125 C 107.6825,0.031250001 106.0625,1.6815001 106.0625,3.6875 L 106.0625,10.5625 C 105.6305,10.5325 105.22025,10.5 104.78125,10.5 C 104.34125,10.5 103.90075,10.5325 103.46875,10.5625 L 103.46875,3.6875 C 103.46975,1.6815 101.84975,0.03125 99.84375,0.03125 C 97.837749,0.031250001 96.21875,1.6815001 96.21875,3.6875 L 96.21875,12.625 C 95.299754,13.109 94.41375,13.65425 93.59375,14.28125 L 93.59375,3.6875 C 93.59475,1.6815 91.97475,0.03125 89.96875,0.03125 z M 104.78125,14.34375 C 112.80825,14.34375 119.3125,20.87925 119.3125,28.90625 C 119.3125,36.93325 112.80825,43.46875 104.78125,43.46875 C 96.754254,43.46875 90.21875,36.93425 90.21875,28.90625 C 90.218752,20.87825 96.753253,14.34375 104.78125,14.34375 z "
style="fill:#35992a;fill-opacity:1" />
</g>
</g>
<path
style="fill:#35992a;fill-opacity:1"
id="path2122"
d="M 112.04875,28.913399 C 112.04875,24.899399 108.78275,21.634399 104.76975,21.634399 C 100.75675,21.634399 97.490753,24.900399 97.490753,28.913399 C 97.490753,32.926399 100.75675,36.192399 104.76975,36.192399 C 108.78275,36.192399 112.04875,32.927399 112.04875,28.913399 z " />
</g>
</g>
</svg>

Before

Width:  |  Height:  |  Size: 12 KiB

View File

@@ -1,64 +0,0 @@
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">
<xsl:import href="file:///usr/share/xml/docbook/stylesheet/nwalsh/fo/docbook.xsl"/>
<!-- check project-plan.sh for how this is generated, needed to tweak
the cover page
-->
<xsl:include href="/tmp/titlepage.xsl"/>
<!-- To force a page break in document, i.e per section add a
<?hard-pagebreak?> tag.
-->
<xsl:template match="processing-instruction('hard-pagebreak')">
<fo:block break-before='page' />
</xsl:template>
<!--Fix for default indent getting TOC all weird..
See http://sources.redhat.com/ml/docbook-apps/2005-q1/msg00455.html
FIXME: must be a better fix
-->
<xsl:param name="body.start.indent" select="'0'"/>
<!--<xsl:param name="title.margin.left" select="'0'"/>-->
<!-- stop long-ish header titles getting wrapped -->
<xsl:param name="header.column.widths">1 10 1</xsl:param>
<!-- customise headers and footers a little -->
<xsl:template name="head.sep.rule">
<xsl:if test="$header.rule != 0">
<xsl:attribute name="border-bottom-width">0.5pt</xsl:attribute>
<xsl:attribute name="border-bottom-style">solid</xsl:attribute>
<xsl:attribute name="border-bottom-color">#cccccc</xsl:attribute>
</xsl:if>
</xsl:template>
<xsl:template name="foot.sep.rule">
<xsl:if test="$footer.rule != 0">
<xsl:attribute name="border-top-width">0.5pt</xsl:attribute>
<xsl:attribute name="border-top-style">solid</xsl:attribute>
<xsl:attribute name="border-top-color">#cccccc</xsl:attribute>
</xsl:if>
</xsl:template>
<xsl:attribute-set name="header.content.properties">
<xsl:attribute name="color">#cccccc</xsl:attribute>
</xsl:attribute-set>
<xsl:attribute-set name="footer.content.properties">
<xsl:attribute name="color">#cccccc</xsl:attribute>
</xsl:attribute-set>
<!-- general settings -->
<xsl:param name="fop.extensions" select="1"></xsl:param>
<xsl:param name="paper.type" select="'A4'"></xsl:param>
<xsl:param name="section.autolabel" select="1"></xsl:param>
<xsl:param name="body.font.family" select="'verasans'"></xsl:param>
<xsl:param name="title.font.family" select="'verasans'"></xsl:param>
<xsl:param name="monospace.font.family" select="'veramono'"></xsl:param>
</xsl:stylesheet>

Binary file not shown.

Before

Width:  |  Height:  |  Size: 31 KiB

View File

@@ -1,163 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
version="1.0"
width="158.56076"
height="79.284424"
viewBox="-40.981 -92.592 300 300"
id="svg2"
xml:space="preserve">
<defs
id="defs4">
</defs>
<path
d="M -36.585379,54.412576 L -36.585379,54.421305 L -36.582469,54.421305 L -36.582469,54.243829 C -36.57956,54.302018 -36.585379,54.357297 -36.585379,54.412576 z "
style="fill:#6ac7bd"
id="path6" />
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g8">
<g
id="g10">
<path
d="M 24.482,23.998 L 24.482,23.995 C 10.961,23.994 0,34.955 0,48.476 L 0.001,48.479 L 0.001,48.482 C 0.003,62.001 10.962,72.96 24.482,72.96 L 24.482,72.96 L 0,72.96 L 0,97.442 L 0.003,97.442 C 13.523,97.44 24.482,86.48 24.482,72.961 L 24.485,72.961 C 38.005,72.959 48.963,62 48.963,48.479 L 48.963,48.476 C 48.962,34.957 38.001,23.998 24.482,23.998 z M 24.482,50.928 C 23.13,50.928 22.034,49.832 22.034,48.48 C 22.034,47.128 23.13,46.032 24.482,46.032 C 25.834,46.032 26.93,47.128 26.93,48.48 C 26.93,49.832 25.834,50.928 24.482,50.928 z "
style="fill:#ef412a"
id="path12" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g14">
<g
id="g16">
<path
d="M 119.96,48.842 C 120.024,47.548 121.086,46.516 122.397,46.516 C 123.707,46.516 124.768,47.548 124.833,48.843 C 137.211,47.62 146.879,37.181 146.879,24.483 L 122.397,24.483 C 122.396,10.961 111.435,0 97.915,0 L 97.915,24.485 C 97.917,37.183 107.584,47.619 119.96,48.842 z M 124.833,49.084 C 124.769,50.379 123.707,51.411 122.397,51.411 L 122.396,51.411 L 122.396,73.444 L 146.878,73.444 L 146.878,73.441 C 146.876,60.745 137.208,50.308 124.833,49.084 z M 119.949,48.963 L 97.915,48.963 L 97.915,73.442 L 97.915,73.442 C 110.613,73.442 121.052,63.774 122.275,51.399 C 120.981,51.334 119.949,50.274 119.949,48.963 z "
style="fill:#a9c542"
id="path18" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g20">
<g
id="g22">
<path
d="M 168.912,48.967 C 168.912,47.656 169.945,46.596 171.24,46.531 C 170.018,34.152 159.579,24.482 146.879,24.482 L 146.879,48.963 C 146.879,62.484 157.84,73.444 171.361,73.444 L 171.361,51.414 C 170.007,51.415 168.912,50.319 168.912,48.967 z M 195.841,48.978 C 195.841,48.973 195.842,48.969 195.842,48.964 L 195.842,24.482 L 195.838,24.482 C 183.14,24.484 172.702,34.154 171.482,46.531 C 172.776,46.595 173.808,47.656 173.808,48.967 C 173.808,50.278 172.776,51.339 171.481,51.403 C 172.679,63.59 182.814,73.146 195.244,73.445 L 171.361,73.445 L 171.361,97.927 L 171.364,97.927 C 184.879,97.925 195.834,86.973 195.842,73.46 L 195.844,73.46 L 195.844,48.979 L 195.841,48.978 z M 195.832,48.964 L 195.842,48.964 L 195.842,48.978 L 195.832,48.964 z "
style="fill:#f9c759"
id="path24" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g26">
<g
id="g28">
<path
d="M 70.994,48.479 L 48.962,48.479 L 48.962,48.481 L 70.995,48.481 C 70.995,48.481 70.994,48.48 70.994,48.479 z M 73.44,24.001 L 73.437,24.001 L 73.437,46.032 C 73.439,46.032 73.44,46.032 73.442,46.032 C 74.794,46.032 75.89,47.128 75.89,48.48 C 75.89,49.832 74.794,50.928 73.442,50.928 C 72.091,50.928 70.996,49.834 70.994,48.483 L 48.958,48.483 L 48.958,48.486 C 48.96,62.005 59.919,72.964 73.437,72.964 C 86.955,72.964 97.914,62.005 97.916,48.486 L 97.916,48.483 C 97.916,34.963 86.958,24.003 73.44,24.001 z "
style="fill:#6ac7bd"
id="path30" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g32">
<g
id="g34">
<path
d="M 24.482,23.998 L 24.482,23.995 C 10.961,23.994 0,34.955 0,48.476 L 22.034,48.476 C 22.036,47.125 23.131,46.031 24.482,46.031 C 25.834,46.031 26.93,47.127 26.93,48.479 C 26.93,49.831 25.834,50.927 24.482,50.927 L 24.482,72.937 C 24.469,59.427 13.514,48.479 0,48.479 L 0,72.96 L 24.481,72.96 L 24.481,72.96 L 0,72.96 L 0,97.442 L 0.003,97.442 C 13.523,97.44 24.482,86.48 24.482,72.961 L 24.485,72.961 C 38.005,72.959 48.963,62 48.963,48.479 L 48.963,48.476 C 48.962,34.957 38.001,23.998 24.482,23.998 z "
style="fill:#ef412a"
id="path36" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g38">
<g
id="g40">
<path
d="M 122.397,46.516 C 123.707,46.516 124.768,47.548 124.833,48.843 C 137.211,47.62 146.879,37.181 146.879,24.483 L 122.397,24.483 L 122.397,46.516 L 122.397,46.516 z M 97.915,0 L 97.915,24.482 L 122.396,24.482 C 122.396,10.961 111.435,0 97.915,0 z M 122.275,46.528 C 121.052,34.151 110.613,24.482 97.914,24.482 L 97.914,48.964 L 97.914,48.964 L 97.914,73.443 L 97.914,73.443 C 110.612,73.443 121.051,63.775 122.274,51.4 C 120.98,51.335 119.948,50.275 119.948,48.964 C 119.949,47.653 120.98,46.593 122.275,46.528 z M 124.833,49.084 C 124.769,50.379 123.707,51.411 122.397,51.411 L 122.396,51.411 L 122.396,73.444 L 146.878,73.444 L 146.878,73.441 C 146.876,60.745 137.208,50.308 124.833,49.084 z "
style="fill:#a9c542"
id="path42" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g44">
<g
id="g46">
<path
d="M 173.795,49.1 C 173.724,50.389 172.666,51.415 171.36,51.415 C 170.006,51.415 168.911,50.319 168.911,48.967 C 168.911,47.656 169.944,46.596 171.239,46.531 C 170.017,34.152 159.578,24.482 146.878,24.482 L 146.878,48.963 C 146.878,62.484 157.839,73.444 171.36,73.444 L 171.36,97.926 L 171.363,97.926 C 184.878,97.924 195.833,86.972 195.841,73.459 L 195.842,73.459 L 195.842,73.443 L 195.841,73.443 C 195.833,60.753 186.167,50.322 173.795,49.1 z M 195.838,24.482 C 183.14,24.484 172.702,34.154 171.482,46.531 C 172.775,46.595 173.806,47.655 173.808,48.964 L 195.841,48.964 L 195.841,48.979 C 195.841,48.974 195.842,48.969 195.842,48.964 L 195.842,24.482 L 195.838,24.482 z "
style="fill:#f9c759"
id="path48" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g50">
<g
id="g52">
<path
d="M 71.007,48.347 C 71.075,47.105 72.062,46.117 73.304,46.046 C 72.509,38.02 67.85,31.133 61.201,27.284 C 57.601,25.2 53.424,24 48.965,24 L 48.962,24 C 48.962,28.46 50.161,32.638 52.245,36.24 C 56.093,42.891 62.98,47.552 71.007,48.347 z M 48.962,48.418 C 48.962,48.438 48.961,48.456 48.961,48.476 L 48.961,48.479 L 48.962,48.479 L 48.962,48.418 z M 70.995,48.482 C 70.995,48.481 70.995,48.481 70.995,48.48 L 48.962,48.48 L 48.962,48.482 L 70.995,48.482 z M 73.44,24.001 L 73.437,24.001 L 73.437,46.032 C 73.439,46.032 73.44,46.032 73.442,46.032 C 74.794,46.032 75.89,47.128 75.89,48.48 C 75.89,49.832 74.794,50.928 73.442,50.928 C 72.091,50.928 70.996,49.834 70.994,48.483 L 48.958,48.483 L 48.958,48.486 C 48.96,62.005 59.919,72.964 73.437,72.964 C 86.955,72.964 97.914,62.005 97.916,48.486 L 97.916,48.483 C 97.916,34.963 86.958,24.003 73.44,24.001 z "
style="fill:#6ac7bd"
id="path54" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g56">
<g
id="g58">
<path
d="M 24.482,23.998 L 24.482,23.995 C 10.961,23.994 0,34.955 0,48.476 L 22.034,48.476 C 22.036,47.125 23.131,46.031 24.482,46.031 C 25.834,46.031 26.93,47.127 26.93,48.479 C 26.93,49.831 25.834,50.927 24.482,50.927 C 23.171,50.927 22.11,49.894 22.046,48.6 C 9.669,49.824 0.001,60.262 0.001,72.96 L 0,72.96 L 0,97.442 L 0.003,97.442 C 13.523,97.44 24.482,86.48 24.482,72.961 L 24.485,72.961 C 38.005,72.959 48.963,62 48.963,48.479 L 48.963,48.476 C 48.962,34.957 38.001,23.998 24.482,23.998 z "
style="fill:#ef412a"
id="path60" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g62">
<g
id="g64">
<path
d="M 119.949,48.963 C 119.949,47.611 121.045,46.515 122.397,46.515 C 123.707,46.515 124.768,47.547 124.833,48.842 C 137.211,47.619 146.879,37.18 146.879,24.482 L 122.397,24.482 C 122.396,10.961 111.435,0 97.915,0 L 97.915,24.482 L 122.394,24.482 C 108.874,24.484 97.916,35.444 97.916,48.963 L 97.916,48.963 L 97.916,73.442 L 97.916,73.442 C 110.614,73.442 121.053,63.774 122.276,51.399 C 120.981,51.334 119.949,50.274 119.949,48.963 z M 124.833,49.084 C 124.769,50.379 123.707,51.411 122.397,51.411 L 122.396,51.411 L 122.396,73.444 L 146.878,73.444 L 146.878,73.441 C 146.876,60.745 137.208,50.308 124.833,49.084 z "
style="fill:#a9c542"
id="path66" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g68">
<g
id="g70">
<path
d="M 195.841,48.979 L 195.835,48.964 L 195.841,48.964 L 195.841,48.979 C 195.841,48.974 195.842,48.969 195.842,48.964 L 195.842,24.482 L 195.838,24.482 C 183.14,24.484 172.702,34.154 171.482,46.531 C 172.776,46.595 173.808,47.656 173.808,48.967 C 173.808,50.319 172.712,51.415 171.361,51.415 C 170.007,51.415 168.912,50.319 168.912,48.967 C 168.912,47.656 169.945,46.596 171.24,46.531 C 170.018,34.152 159.579,24.482 146.879,24.482 L 146.879,48.963 C 146.879,62.484 157.84,73.444 171.361,73.444 L 171.361,97.926 L 171.364,97.926 C 184.883,97.924 195.843,86.963 195.843,73.444 L 171.959,73.444 C 185.203,73.126 195.841,62.299 195.841,48.979 z "
style="fill:#f9c759"
id="path72" />
</g>
</g>
<g
transform="matrix(2.9094193,0,0,2.9094193,-179.03055,-86.624435)"
style="opacity:0.65"
id="g74">
<g
id="g76">
<path
d="M 73.44,24.001 L 73.437,24.001 C 59.919,24.003 48.96,34.959 48.958,48.476 L 48.958,48.479 L 48.961,48.479 L 48.961,48.481 L 48.957,48.482 L 48.957,48.485 C 48.959,62.004 59.918,72.963 73.436,72.963 C 86.954,72.963 97.913,62.004 97.915,48.485 L 97.915,48.482 C 97.916,34.963 86.958,24.003 73.44,24.001 z M 73.442,50.928 C 72.09,50.928 70.994,49.832 70.994,48.48 C 70.994,47.128 72.09,46.032 73.442,46.032 C 74.794,46.032 75.89,47.128 75.89,48.48 C 75.89,49.832 74.794,50.928 73.442,50.928 z "
style="fill:#6ac7bd"
id="path78" />
</g>
</g>
</svg>


File diff suppressed because it is too large.


@@ -1,27 +0,0 @@
AC_PREREQ(2.53)
AC_INIT(poky-doc-tools, 0.1, http://o-hand.com)
AM_INIT_AUTOMAKE()
AC_PATH_PROG(HAVE_XSLTPROC, xsltproc, no)
if test x$HAVE_XSLTPROC = xno; then
AC_MSG_ERROR([Required xsltproc program not found])
fi
AC_PATH_PROG(HAVE_FOP, fop, no)
if test x$HAVE_FOP = xno; then
AC_MSG_ERROR([Required fop program not found])
fi
AC_CHECK_FILE([/usr/share/xml/docbook/stylesheet/nwalsh/template/titlepage.xsl],HAVE_NWALSH="yes", HAVE_NWALSH="no")
if test x$HAVE_NWALSH = xno; then
AC_MSG_ERROR([Required 'nwalsh' docbook stylesheets not found])
fi
AC_OUTPUT([
Makefile
common/Makefile
])
echo "
== poky-doc-tools $VERSION configured successfully. ==
"


@@ -1,44 +0,0 @@
#!/bin/sh
if [ -z "$1" ]; then
echo "usage: [-v] $0 <docbook file>"
echo
echo "*NOTE* you need xsltproc, fop and nwalsh docbook stylesheets"
echo " installed for this to work!"
echo
exit 0
fi
if [ "$1" = "-v" ]; then
echo "Version @version@"
exit 1
fi
BASENAME=`basename "$1" .xml` || exit 1
FO="$BASENAME.fo"
PDF="$BASENAME.pdf"
xsltproc -o /tmp/titlepage.xsl \
--xinclude \
/usr/share/xml/docbook/stylesheet/nwalsh/template/titlepage.xsl \
@datadir@/common/titlepage.templates.xml || exit 1
xsltproc --xinclude \
--stringparam hyphenate false \
--stringparam formal.title.placement "figure after" \
--stringparam ulink.show 1 \
--stringparam body.font.master 9 \
--stringparam title.font.master 11 \
--stringparam draft.watermark.image "@datadir@/common/draft.png" \
--output $FO \
@datadir@/common/poky-db-pdf.xsl \
"$1" || exit 1
fop -c @datadir@/common/fop-config.xml -fo $FO -pdf $PDF || exit 1
rm -f $FO
rm -f /tmp/titlepage.xsl
echo
echo " #### Success! $PDF ready. ####"
echo


@@ -1,112 +0,0 @@
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<book id='poky-handbook' lang='en'
xmlns:xi="http://www.w3.org/2003/XInclude"
xmlns="http://docbook.org/ns/docbook"
>
<bookinfo>
<mediaobject>
<imageobject>
<imagedata fileref='common/poky-handbook.png'
format='SVG'
align='center'/>
</imageobject>
</mediaobject>
<title>Poky Handbook</title>
<subtitle>Hitchhiker's Guide to Poky</subtitle>
<authorgroup>
<author>
<firstname>Richard</firstname> <surname>Purdie</surname>
<affiliation>
<orgname>OpenedHand Ltd</orgname>
</affiliation>
<email>richard@openedhand.com</email>
</author>
<author>
<firstname>Tomas</firstname> <surname>Frydrych</surname>
<affiliation>
<orgname>OpenedHand Ltd</orgname>
</affiliation>
<email>tf@openedhand.com</email>
</author>
<author>
<firstname>Marcin</firstname> <surname>Juszkiewicz</surname>
<affiliation>
<orgname>OpenedHand Ltd</orgname>
</affiliation>
<email>hrw@openedhand.com</email>
</author>
<author>
<firstname>Dodji</firstname> <surname>Seketeli</surname>
<affiliation>
<orgname>OpenedHand Ltd</orgname>
</affiliation>
<email>dodji@openedhand.com</email>
</author>
</authorgroup>
<revhistory>
<revision>
<revnumber>3.1</revnumber>
<date>15 February 2008</date>
<revremark>Poky 3.1 (Pinky) Documentation Release</revremark>
</revision>
</revhistory>
<copyright>
<year>2007</year>
<year>2008</year>
<holder>OpenedHand Limited</holder>
</copyright>
<legalnotice>
<para>
Permission is granted to copy, distribute and/or modify this document under
the terms of the <ulink type="http" url="http://creativecommons.org/licenses/by-nc-sa/2.0/uk/">Creative Commons Attribution-Non-Commercial-Share Alike 2.0 UK: England &amp; Wales</ulink> as published by Creative Commons.
</para>
</legalnotice>
</bookinfo>
<xi:include href="introduction.xml"/>
<xi:include href="usingpoky.xml"/>
<xi:include href="extendpoky.xml"/>
<xi:include href="development.xml"/>
<xi:include href="ref-structure.xml"/>
<xi:include href="ref-bitbake.xml"/>
<xi:include href="ref-classes.xml"/>
<xi:include href="ref-images.xml"/>
<xi:include href="ref-features.xml"/>
<xi:include href="ref-variables.xml"/>
<xi:include href="ref-varlocality.xml"/>
<xi:include href="faq.xml"/>
<xi:include href="resources.xml"/>
<xi:include href="contactus.xml"/>
<index id='index'>
<title>Index</title>
</index>
</book>
<!--
vim: expandtab tw=80 ts=4
-->


@@ -1,117 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 13.0.0, SVG Export Plug-In -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd" [
<!ENTITY ns_flows "http://ns.adobe.com/Flows/1.0/">
]>
<svg version="1.1"
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:a="http://ns.adobe.com/AdobeSVGViewerExtensions/3.0/"
x="0px" y="0px" width="300px" height="300px" viewBox="-40.981 -92.592 300 300" enable-background="new -40.981 -92.592 300 300"
xml:space="preserve">
<defs>
</defs>
<path fill="#6AC7BD" d="M48.96,48.476v0.003h0.001v-0.061C48.962,48.438,48.96,48.457,48.96,48.476z"/>
<g opacity="0.65">
<g>
<path fill="#EF412A" d="M24.482,23.998v-0.003C10.961,23.994,0,34.955,0,48.476l0.001,0.003v0.003
C0.003,62.001,10.962,72.96,24.482,72.96l0,0H0v24.482h0.003c13.52-0.002,24.479-10.962,24.479-24.481h0.003
C38.005,72.959,48.963,62,48.963,48.479v-0.003C48.962,34.957,38.001,23.998,24.482,23.998z M24.482,50.928
c-1.352,0-2.448-1.096-2.448-2.448s1.096-2.448,2.448-2.448s2.448,1.096,2.448,2.448S25.834,50.928,24.482,50.928z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#A9C542" d="M119.96,48.842c0.064-1.294,1.126-2.326,2.437-2.326c1.31,0,2.371,1.032,2.436,2.327
c12.378-1.223,22.046-11.662,22.046-24.36h-24.482C122.396,10.961,111.435,0,97.915,0v24.485
C97.917,37.183,107.584,47.619,119.96,48.842z M124.833,49.084c-0.064,1.295-1.126,2.327-2.436,2.327h-0.001v22.033h24.482v-0.003
C146.876,60.745,137.208,50.308,124.833,49.084z M119.949,48.963H97.915v24.479h0c12.698,0,23.137-9.668,24.36-22.043
C120.981,51.334,119.949,50.274,119.949,48.963z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#F9C759" d="M168.912,48.967c0-1.311,1.033-2.371,2.328-2.436c-1.222-12.379-11.661-22.049-24.361-22.049v24.481
c0,13.521,10.961,24.481,24.482,24.481v-22.03C170.007,51.415,168.912,50.319,168.912,48.967z M195.841,48.978
c0-0.005,0.001-0.009,0.001-0.014V24.482h-0.004c-12.698,0.002-23.136,9.672-24.356,22.049c1.294,0.064,2.326,1.125,2.326,2.436
s-1.032,2.372-2.327,2.436c1.198,12.187,11.333,21.743,23.763,22.042h-23.883v24.482h0.003
c13.515-0.002,24.47-10.954,24.478-24.467h0.002V48.979L195.841,48.978z M195.832,48.964h0.01v0.014L195.832,48.964z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#6AC7BD" d="M70.994,48.479H48.962v0.002h22.033C70.995,48.481,70.994,48.48,70.994,48.479z M73.44,24.001h-0.003
v22.031c0.002,0,0.003,0,0.005,0c1.352,0,2.448,1.096,2.448,2.448s-1.096,2.448-2.448,2.448c-1.351,0-2.446-1.094-2.448-2.445
H48.958v0.003c0.002,13.519,10.961,24.478,24.479,24.478s24.477-10.959,24.479-24.478v-0.003
C97.916,34.963,86.958,24.003,73.44,24.001z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#EF412A" d="M24.482,23.998v-0.003C10.961,23.994,0,34.955,0,48.476h22.034c0.002-1.351,1.097-2.445,2.448-2.445
c1.352,0,2.448,1.096,2.448,2.448s-1.096,2.448-2.448,2.448v22.01C24.469,59.427,13.514,48.479,0,48.479V72.96h24.481l0,0H0
v24.482h0.003c13.52-0.002,24.479-10.962,24.479-24.481h0.003C38.005,72.959,48.963,62,48.963,48.479v-0.003
C48.962,34.957,38.001,23.998,24.482,23.998z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#A9C542" d="M122.397,46.516c1.31,0,2.371,1.032,2.436,2.327c12.378-1.223,22.046-11.662,22.046-24.36h-24.482
L122.397,46.516L122.397,46.516z M97.915,0v24.482h24.481C122.396,10.961,111.435,0,97.915,0z M122.275,46.528
c-1.223-12.377-11.662-22.046-24.361-22.046v24.482h0v24.479h0c12.698,0,23.137-9.668,24.36-22.043
c-1.294-0.065-2.326-1.125-2.326-2.436C119.949,47.653,120.98,46.593,122.275,46.528z M124.833,49.084
c-0.064,1.295-1.126,2.327-2.436,2.327h-0.001v22.033h24.482v-0.003C146.876,60.745,137.208,50.308,124.833,49.084z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#F9C759" d="M173.795,49.1c-0.071,1.289-1.129,2.315-2.435,2.315c-1.354,0-2.449-1.096-2.449-2.448
c0-1.311,1.033-2.371,2.328-2.436c-1.222-12.379-11.661-22.049-24.361-22.049v24.481c0,13.521,10.961,24.481,24.482,24.481v24.482
h0.003c13.515-0.002,24.47-10.954,24.478-24.467h0.001v-0.016h-0.001C195.833,60.753,186.167,50.322,173.795,49.1z
M195.838,24.482c-12.698,0.002-23.136,9.672-24.356,22.049c1.293,0.064,2.324,1.124,2.326,2.433h22.033v0.015
c0-0.005,0.001-0.01,0.001-0.015V24.482H195.838z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#6AC7BD" d="M71.007,48.347c0.068-1.242,1.055-2.23,2.297-2.301c-0.795-8.026-5.454-14.913-12.103-18.762
C57.601,25.2,53.424,24,48.965,24h-0.003c0,4.46,1.199,8.638,3.283,12.24C56.093,42.891,62.98,47.552,71.007,48.347z
M48.962,48.418c0,0.02-0.001,0.038-0.001,0.058v0.003h0.001V48.418z M70.995,48.482c0-0.001,0-0.001,0-0.002H48.962v0.002H70.995
z M73.44,24.001h-0.003v22.031c0.002,0,0.003,0,0.005,0c1.352,0,2.448,1.096,2.448,2.448s-1.096,2.448-2.448,2.448
c-1.351,0-2.446-1.094-2.448-2.445H48.958v0.003c0.002,13.519,10.961,24.478,24.479,24.478s24.477-10.959,24.479-24.478v-0.003
C97.916,34.963,86.958,24.003,73.44,24.001z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#EF412A" d="M24.482,23.998v-0.003C10.961,23.994,0,34.955,0,48.476h22.034c0.002-1.351,1.097-2.445,2.448-2.445
c1.352,0,2.448,1.096,2.448,2.448s-1.096,2.448-2.448,2.448c-1.311,0-2.372-1.033-2.436-2.327
C9.669,49.824,0.001,60.262,0.001,72.96H0v24.482h0.003c13.52-0.002,24.479-10.962,24.479-24.481h0.003
C38.005,72.959,48.963,62,48.963,48.479v-0.003C48.962,34.957,38.001,23.998,24.482,23.998z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#A9C542" d="M119.949,48.963c0-1.352,1.096-2.448,2.448-2.448c1.31,0,2.371,1.032,2.436,2.327
c12.378-1.223,22.046-11.662,22.046-24.36h-24.482C122.396,10.961,111.435,0,97.915,0v24.482h24.479
c-13.52,0.002-24.478,10.962-24.478,24.481h0v24.479h0c12.698,0,23.137-9.668,24.36-22.043
C120.981,51.334,119.949,50.274,119.949,48.963z M124.833,49.084c-0.064,1.295-1.126,2.327-2.436,2.327h-0.001v22.033h24.482
v-0.003C146.876,60.745,137.208,50.308,124.833,49.084z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#F9C759" d="M195.841,48.979l-0.006-0.015h0.006V48.979c0-0.005,0.001-0.01,0.001-0.015V24.482h-0.004
c-12.698,0.002-23.136,9.672-24.356,22.049c1.294,0.064,2.326,1.125,2.326,2.436c0,1.352-1.096,2.448-2.447,2.448
c-1.354,0-2.449-1.096-2.449-2.448c0-1.311,1.033-2.371,2.328-2.436c-1.222-12.379-11.661-22.049-24.361-22.049v24.481
c0,13.521,10.961,24.481,24.482,24.481v24.482h0.003c13.519-0.002,24.479-10.963,24.479-24.482h-23.884
C185.203,73.126,195.841,62.299,195.841,48.979z"/>
</g>
</g>
<g opacity="0.65">
<g>
<path fill="#6AC7BD" d="M73.44,24.001h-0.003C59.919,24.003,48.96,34.959,48.958,48.476v0.003h0.003v0.002l-0.004,0.001v0.003
c0.002,13.519,10.961,24.478,24.479,24.478s24.477-10.959,24.479-24.478v-0.003C97.916,34.963,86.958,24.003,73.44,24.001z
M73.442,50.928c-1.352,0-2.448-1.096-2.448-2.448s1.096-2.448,2.448-2.448s2.448,1.096,2.448,2.448S74.794,50.928,73.442,50.928z
"/>
</g>
</g>
</svg>


@@ -1,340 +0,0 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<appendix id='ref-bitbake'>
<title>Reference: Bitbake</title>
<para>
BitBake is a program written in Python which interprets the metadata
that makes up Poky. At some point, people wonder what actually happens
when you type <command>bitbake poky-image-sato</command>. This section
aims to give an overview of what happens behind the scenes from a
BitBake perspective.
</para>
<para>
It is worth noting that bitbake aims to be a generic "task" executor
capable of handling complex dependency relationships. As such it has no
real knowledge of what the tasks it is executing actually do. It just
considers a list of tasks with dependencies and handles metadata
consisting of variables in a certain format which get passed to the
tasks.
</para>
<section id='ref-bitbake-parsing'>
<title>Parsing</title>
<para>
The first thing BitBake does is work out its configuration by
looking for a file called <filename>bitbake.conf</filename>.
Bitbake searches through the <varname>BBPATH</varname> environment
variable looking for a <filename class="directory">conf/</filename>
directory containing a <filename>bitbake.conf</filename> file and
uses the first <filename>bitbake.conf</filename> file found in
<varname>BBPATH</varname> (similar to the PATH environment variable).
For Poky, <filename>bitbake.conf</filename> is found in <filename
class="directory">meta/conf/</filename>.
</para>
<para>
In Poky, <filename>bitbake.conf</filename> lists other configuration
files to include from a <filename class="directory">conf/</filename>
directory below the directories listed in <varname>BBPATH</varname>.
In general the most important configuration file from a user's perspective
is <filename>local.conf</filename>, which contains a user's customized
settings for Poky. Other notable configuration files are the distribution
configuration file (set by the <glossterm><link linkend='var-DISTRO'>
DISTRO</link></glossterm> variable) and the machine configuration file
(set by the <glossterm><link linkend='var-MACHINE'>MACHINE</link>
</glossterm> variable). The <glossterm><link linkend='var-DISTRO'>
DISTRO</link></glossterm> and <glossterm><link linkend='var-MACHINE'>
MACHINE</link></glossterm> environment variables are both usually set in
the <filename>local.conf</filename> file. Valid distribution
configuration files are available in the <filename class="directory">
meta/conf/distro/</filename> directory and valid machine configuration
files in the <filename class="directory">meta/conf/machine/</filename>
directory. Within the <filename class="directory">
meta/conf/machine/include/</filename> directory are various <filename>
tune-*.inc</filename> configuration files which provide common
"tuning" settings specific to and shared between particular
architectures and machines.
</para>
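<para>
As an illustration, selecting a distribution and machine in
<filename>local.conf</filename> might look like the following sketch
(the values shown are examples, not requirements):
</para>
<programlisting>
# example values only - any valid distro/machine configuration applies
DISTRO  = "poky"
MACHINE = "qemuarm"</programlisting>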
<para>
After the parsing of the configuration files some standard classes
are included. In particular, <filename>base.bbclass</filename> is
always included, as are any other classes
specified in the configuration using the <glossterm><link
linkend='var-INHERIT'>INHERIT</link></glossterm>
variable. Class files are searched for in a classes subdirectory
under the paths in <varname>BBPATH</varname> in the same way as
configuration files.
</para>
<para>
After the parsing of the configuration files is complete, the
variable <glossterm><link linkend='var-BBFILES'>BBFILES</link></glossterm>
is set, usually in
<filename>local.conf</filename>, and defines the list of places to search for
<filename class="extension">.bb</filename> files. By
default this specifies the <filename class="directory">meta/packages/
</filename> directory within Poky, but other directories such as
<filename class="directory">meta-extras/</filename> can be included
too. If multiple directories are specified a system referred to as
<link linkend='usingpoky-changes-collections'>"collections"</link> is used to
determine which files have priority.
</para>
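<para>
A typical <glossterm><link linkend='var-BBFILES'>BBFILES</link></glossterm>
setting in <filename>local.conf</filename> might therefore look like the
following (the path prefix is illustrative):
</para>
<programlisting>
# path prefix is an example only
BBFILES = "/path/to/poky/meta/packages/*/*.bb"</programlisting>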
<para>
Bitbake parses each <filename class="extension">.bb</filename> file in
<glossterm><link linkend='var-BBFILES'>BBFILES</link></glossterm> and
stores the values of various variables. In summary, for each
<filename class="extension">.bb</filename>
file the configuration + base class of variables are set, followed
by the data in the <filename class="extension">.bb</filename> file
itself, followed by any inherit commands that
<filename class="extension">.bb</filename> file might contain.
</para>
<para>
Parsing <filename class="extension">.bb</filename> files is a time
consuming process, so a cache is kept to speed up subsequent parsing.
This cache is invalid if the timestamp of the <filename class="extension">.bb</filename>
file itself has changed, or if the timestamps of any of the include,
configuration or class files the <filename class="extension">.bb</filename>
file depends on have changed.
</para>
</section>
<section id='ref-bitbake-providers'>
<title>Preferences and Providers</title>
<para>
Once all the <filename class="extension">.bb</filename> files have been
parsed, BitBake will proceed to build "poky-image-sato" (or whatever was
specified on the command line) and look for providers of that target.
Once a provider is selected, BitBake resolves all the dependencies for
the target. In the case of "poky-image-sato", it would lead to
<filename>task-oh.bb</filename> and <filename>task-base.bb</filename>
which in turn would lead to packages like <application>Contacts</application>,
<application>Dates</application>, <application>BusyBox</application>
and these in turn depend on glibc and the toolchain.
</para>
<para>
Sometimes a target might have multiple providers and a common example
is "virtual/kernel" that is provided by each kernel package. Each machine
will often elect the best provider of its kernel with a line like the
following in the machine configuration file:
</para>
<programlisting><glossterm><link linkend='var-PREFERRED_PROVIDER'>PREFERRED_PROVIDER</link></glossterm>_virtual/kernel = "linux-rp"</programlisting>
<para>
The default <glossterm><link linkend='var-PREFERRED_PROVIDER'>
PREFERRED_PROVIDER</link></glossterm> is the provider with the same name as
the target.
</para>
<para>
Understanding how providers are chosen is complicated by the fact that
multiple versions might be present. BitBake defaults to the highest
version of a provider. Version comparisons are made using
the same method as Debian. The <glossterm><link
linkend='var-PREFERRED_VERSION'>PREFERRED_VERSION</link></glossterm>
variable can be used to specify a particular version
(usually in the distro configuration) but the order can
also be influenced by the <glossterm><link
linkend='var-DEFAULT_PREFERENCE'>DEFAULT_PREFERENCE</link></glossterm>
variable. By default files
have a preference of "0". Setting the
<glossterm><link
linkend='var-DEFAULT_PREFERENCE'>DEFAULT_PREFERENCE</link></glossterm> to "-1" will
make a package unlikely to be used unless it was explicitly referenced and
"1" makes it likely the package will be used.
<glossterm><link
linkend='var-PREFERRED_VERSION'>PREFERRED_VERSION</link></glossterm> overrides
any default preference. <glossterm><link
linkend='var-DEFAULT_PREFERENCE'>DEFAULT_PREFERENCE</link></glossterm>
is often used to mark more
experimental new versions of packages until they've undergone sufficient
testing to be considered stable.
</para>
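<para>
For example, a distribution configuration might pin a particular kernel
version with <glossterm><link linkend='var-PREFERRED_VERSION'>
PREFERRED_VERSION</link></glossterm>, while an experimental recipe might
lower its own priority via <glossterm><link
linkend='var-DEFAULT_PREFERENCE'>DEFAULT_PREFERENCE</link></glossterm>
(the version number shown is purely illustrative):
</para>
<programlisting>
# in the distro configuration (version is an example)
PREFERRED_VERSION_linux-rp = "2.6.23"
# in an experimental recipe's .bb file
DEFAULT_PREFERENCE = "-1"</programlisting>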
<para>
The end result is that internally, BitBake has now built a list of
providers for each target it needs in order of priority.
</para>
</section>
<section id='ref-bitbake-dependencies'>
<title>Dependencies</title>
<para>
Each target BitBake builds consists of multiple tasks (e.g. fetch,
unpack, patch, configure, compile etc.). For best performance on
multi-core systems, BitBake considers each task as an independent
entity with a set of dependencies. There are many variables that
are used to signify these dependencies and more information about
these can be found in the <ulink url='http://bitbake.berlios.de/manual/'>
BitBake manual</ulink>. At a basic level it is sufficient to know
that BitBake uses the <glossterm><link
linkend='var-DEPENDS'>DEPENDS</link></glossterm> and
<glossterm><link linkend='var-RDEPENDS'>RDEPENDS</link></glossterm> variables when
calculating dependencies and descriptions of these variables are
available through the links.
</para>
</section>
<section id='ref-bitbake-tasklist'>
<title>The Task List</title>
<para>
Based on the generated list of providers and the dependency information,
BitBake can now calculate exactly which tasks it needs to run and in what
order. The build now starts with BitBake forking off threads up to
the limit set in the <glossterm><link
linkend='var-BB_NUMBER_THREADS'>BB_NUMBER_THREADS</link></glossterm> variable
as long as there are tasks ready to run, i.e. tasks with all their
dependencies met.
</para>
<para>
As each task completes, a timestamp is written to the directory
specified by the <glossterm><link
linkend='var-STAMPS'>STAMPS</link></glossterm> variable (usually
<filename class="directory">build/tmp/stamps/*/</filename>). On
subsequent runs, BitBake looks at the <glossterm><link
linkend='var-STAMPS'>STAMPS</link></glossterm>
directory and will not rerun
tasks it has already completed unless a timestamp is found to be invalid.
Currently, invalid timestamps are only considered on a per <filename
class="extension">.bb</filename> file basis, so if, for example, the
configure stamp has a timestamp greater than the compile timestamp for a
given target, the compile task would rerun, but this has no effect on
other providers depending on that target. This could
change or become configurable in future versions of BitBake. Some tasks
are marked as "nostamp" tasks which means no timestamp file will be written
and the task will always rerun.
</para>
<para>Once all the tasks have been completed BitBake exits.</para>
</section>
<section id='ref-bitbake-runtask'>
<title>Running a Task</title>
<para>
It's worth noting what BitBake does to run a task. A task can either
be a shell task or a python task. For shell tasks, BitBake writes a
shell script to <filename>${WORKDIR}/temp/run.do_taskname.pid</filename>
and then executes the script. The generated
shell script contains all the exported variables, and the shell functions
with all variables expanded. Output from the shell script is
sent to the file <filename>${WORKDIR}/temp/log.do_taskname.pid</filename>.
Looking at the
expanded shell functions in the run file and the output in the log files
is a useful debugging technique.
</para>
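<para>
For example, after building a package you could inspect the generated run
and log files for its compile task as follows (a sketch only; the exact
work directory names vary with the package, version and architecture):
</para>
<screen>$ cd build/tmp/work/&lt;package-version&gt;/temp
$ less run.do_compile.&lt;pid&gt;
$ less log.do_compile.&lt;pid&gt;</screen>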
<para>
Python functions are executed internally to BitBake itself and
logging goes to the controlling terminal. Future versions of BitBake will
write the functions to files in a similar way to shell functions and
logging will also go to the log files in a similar way.
</para>
</section>
<section id='ref-bitbake-commandline'>
<title>Commandline</title>
<para>
To quote from "bitbake --help":
</para>
<screen>Usage: bitbake [options] [package ...]
Executes the specified task (default is 'build') for a given set of BitBake files.
It expects that BBFILES is defined, which is a space separated list of files to
be executed. BBFILES does support wildcards.
Default BBFILES are the .bb files in the current directory.
Options:
--version show program's version number and exit
-h, --help show this help message and exit
-b BUILDFILE, --buildfile=BUILDFILE
execute the task against this .bb file, rather than a
package from BBFILES.
-k, --continue continue as much as possible after an error. While the
target that failed, and those that depend on it,
cannot be remade, the other dependencies of these
targets can be processed all the same.
-f, --force force run of specified cmd, regardless of stamp status
-i, --interactive drop into the interactive mode also called the BitBake
shell.
-c CMD, --cmd=CMD Specify task to execute. Note that this only executes
the specified task for the providee and the packages
it depends on, i.e. 'compile' does not implicitly call
stage for the dependencies (IOW: use only if you know
what you are doing). Depending on the base.bbclass a
listtasks tasks is defined and will show available
tasks
-r FILE, --read=FILE read the specified file before bitbake.conf
-v, --verbose output more chit-chat to the terminal
-D, --debug Increase the debug level. You can specify this more
than once.
-n, --dry-run don't execute, just go through the motions
-p, --parse-only quit after parsing the BB files (developers only)
-d, --disable-psyco disable using the psyco just-in-time compiler (not
recommended)
-s, --show-versions show current and preferred versions of all packages
-e, --environment show the global or per-package environment (this is
what used to be bbread)
-g, --graphviz emit the dependency trees of the specified packages in
the dot syntax
-I IGNORED_DOT_DEPS, --ignore-deps=IGNORED_DOT_DEPS
Stop processing at the given list of dependencies when
generating dependency graphs. This can help to make
the graph more appealing
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
-P, --profile profile the command and print a report</screen>
</section>
<section id='ref-bitbake-fetchers'>
<title>Fetchers</title>
<para>
As well as containing the parsing and task/dependency handling
code, BitBake also contains a set of "fetcher" modules which allow
fetching of source code from various types of sources. Example
sources might be from disk with the metadata, from websites, from
remote shell accounts or from SCM systems like cvs/subversion/git.
</para>
<para>
The fetchers are usually triggered by entries in
<glossterm><link linkend='var-SRC_URI'>SRC_URI</link></glossterm>. Information about the
options and formats of entries for specific fetchers can be found in the
<ulink url='http://bitbake.berlios.de/manual/'>BitBake manual</ulink>.
</para>
<para>
One useful feature for certain SCM fetchers is the ability to
"auto-update" when the upstream SCM changes version. Since this
requires certain functionality from the SCM, only certain systems
support it: currently Subversion, Bazaar and, to a limited extent, Git. It
works using the <glossterm><link linkend='var-SRCREV'>SRCREV</link>
</glossterm> variable. See the <link linkend='platdev-appdev-srcrev'>
developing with an external SCM based project</link> section for more
information.
</para>
</section>
</appendix>
<!--
vim: expandtab tw=80 ts=4 spell spelllang=en_gb
-->


@@ -1,460 +0,0 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<appendix id='ref-classes'>
<title>Reference: Classes</title>
<para>
Class files are used to abstract common functionality and share it amongst multiple
<filename class="extension">.bb</filename> files. Any metadata usually found in a
<filename class="extension">.bb</filename> file can also be placed in a class
file. Class files are identified by the extension
<filename class="extension">.bbclass</filename> and are usually placed
in a <filename class="directory">classes/</filename> directory beneath the
<filename class="directory">meta/</filename> directory or the <filename
class="directory">build/</filename> directory in the same way as <filename
class="extension">.conf</filename> files in the <filename
class="directory">conf</filename> directory. Class files are searched for
in BBPATH in the same way as <filename class="extension">.conf</filename> files too.
</para>
<para>
In most cases inheriting the class is enough to enable its features, although
for some classes you may need to set variables and/or override some of the
default behaviour.
</para>
<section id='ref-classes-base'>
<title>The base class - <filename>base.bbclass</filename></title>
<para>
The base class is special in that every <filename class="extension">.bb</filename>
file inherits it automatically. It contains definitions of standard basic
tasks such as fetching, unpacking, configuring (empty by default), compiling
(runs any Makefile present), installing (empty by default) and packaging
(empty by default). These are often overridden or extended by other classes
such as <filename>autotools.bbclass</filename> or
<filename>package.bbclass</filename>. The class contains some commonly
some commonly used functions such as <function>oe_libinstall</function>
and <function>oe_runmake</function>. The end of the class file has a
list of standard mirrors for software projects for use by the fetcher code.
</para>
</section>
<section id='ref-classes-autotools'>
<title>Autotooled Packages - <filename>autotools.bbclass</filename></title>
<para>
Autotools (autoconf, automake, libtool) brings standardisation and this
class aims to define a set of tasks (configure, compile etc.) that will
work for all autotooled packages. It should usually be enough to define
a few standard variables as documented in the <link
linkend='usingpoky-extend-addpkg-autotools'>simple autotools
example</link> section and then simply "inherit autotools". This class
can also work with software that emulates autotools.
</para>
<para>
It's useful to have some idea of how the tasks this class defines work and
what they do behind the scenes.
</para>
<itemizedlist>
<listitem>
<para>
'do_configure' regenerates the configure script and
then launches it with a standard set of arguments used during
cross-compilation. Additional parameters can be passed to
<command>configure</command> through the <glossterm><link
linkend='var-EXTRA_OECONF'>EXTRA_OECONF</link></glossterm> variable.
</para>
</listitem>
<listitem>
<para>
'do_compile' runs <command>make</command> with arguments specifying
the compiler and linker. Additional arguments can be passed through
the <glossterm><link linkend='var-EXTRA_OEMAKE'>EXTRA_OEMAKE</link>
</glossterm> variable.
</para>
</listitem>
<listitem>
<para>
'do_install' runs <command>make install</command> passing a DESTDIR
option taking its value from the standard <glossterm><link
linkend='var-DESTDIR'>DESTDIR</link></glossterm> variable.
</para>
</listitem>
</itemizedlist>
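<para>
For example, a recipe inheriting this class might pass extra options to
<command>configure</command> as follows (the flags shown are illustrative
only, not a recommendation):
</para>
<programlisting>
# example flags only
EXTRA_OECONF = "--disable-gtk-doc --without-x"</programlisting>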
<para>
By default the class does not stage headers and libraries so
the recipe author needs to add their own <function>do_stage()</function>
task. For typical recipes the following example code will usually be
enough:
<programlisting>
do_stage() {
autotools_stage_all
}</programlisting>
</para>
</section>
<section id='ref-classes-update-alternatives'>
<title>Alternatives - <filename>update-alternatives.bbclass</filename></title>
<para>
Several programs can fulfill the same or similar function and
they can be installed with the same name. For example the <command>ar</command>
command is available from the "busybox", "binutils" and "elfutils" packages.
This class handles the renaming of the binaries so multiple packages
can be installed which would otherwise conflict and yet the
<command>ar</command> command still works regardless of which are installed
or subsequently removed. It renames the conflicting binary in each package
and symlinks the highest priority binary during installation or removal
of packages.
Four variables control this class:
</para>
<variablelist>
<varlistentry>
<term>ALTERNATIVE_NAME</term>
<listitem>
<para>
Name of binary which will be replaced (<command>ar</command> in this example)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>ALTERNATIVE_LINK</term>
<listitem>
<para>
Path to resulting binary ("/bin/ar" in this example)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>ALTERNATIVE_PATH</term>
<listitem>
<para>
Path to real binary ("/usr/bin/ar.binutils" in this example)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>ALTERNATIVE_PRIORITY</term>
<listitem>
<para>
Priority of binary, the version with the most features should have the highest priority
</para>
</listitem>
</varlistentry>
</variablelist>
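<para>
Using the <command>ar</command> example above, a recipe might set these
variables as follows (a sketch only; the priority value and exact paths
used by real recipes may differ):
</para>
<programlisting>
# sketch; priority value is an example
ALTERNATIVE_NAME = "ar"
ALTERNATIVE_LINK = "/bin/ar"
ALTERNATIVE_PATH = "/usr/bin/ar.binutils"
ALTERNATIVE_PRIORITY = "50"</programlisting>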
</section>
<section id='ref-classes-update-rc.d'>
<title>Initscripts - <filename>update-rc.d.bbclass</filename></title>
<para>
This class uses update-rc.d to safely install an initscript on behalf of
the package. Details such as making sure the initscript is stopped before
a package is removed and started when the package is installed are taken
care of. Three variables control this class,
<link linkend='var-INITSCRIPT_PACKAGES'>INITSCRIPT_PACKAGES</link>,
<link linkend='var-INITSCRIPT_NAME'>INITSCRIPT_NAME</link> and
<link linkend='var-INITSCRIPT_PARAMS'>INITSCRIPT_PARAMS</link>. See the
links for details.
</para>
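<para>
A minimal sketch of a recipe using this class might look like the
following (the initscript name "myservice" is hypothetical):
</para>
<programlisting>
inherit update-rc.d

# "myservice" is a hypothetical initscript name
INITSCRIPT_PACKAGES = "${PN}"
INITSCRIPT_NAME = "myservice"
INITSCRIPT_PARAMS = "defaults 20"</programlisting>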
</section>
<section id='ref-classes-binconfig'>
<title>Binary config scripts - <filename>binconfig.bbclass</filename></title>
<para>
Before pkg-config became widespread, libraries shipped shell
scripts to give information about the libraries and include paths needed
to build software (usually named 'LIBNAME-config'). This class assists
any recipe using such scripts.
</para>
<para>
During staging Bitbake installs such scripts into the <filename
class="directory">staging/</filename> directory. It also changes all
paths to point into the <filename class="directory">staging/</filename>
directory so all builds which use the script will use the correct
directories for the cross compiling layout.
</para>
</section>
<section id='ref-classes-debian'>
<title>Debian renaming - <filename>debian.bbclass</filename></title>
<para>
This class renames packages so that they follow the Debian naming
policy, i.e. 'glibc' becomes 'libc6' and 'glibc-devel' becomes
'libc6-dev'.
</para>
</section>
<section id='ref-classes-pkgconfig'>
<title>Pkg-config - <filename>pkgconfig.bbclass</filename></title>
<para>
Pkg-config brought standardisation and this class aims to make its
integration smooth for all libraries which make use of it.
</para>
<para>
During staging Bitbake installs pkg-config data into the <filename
class="directory">staging/</filename> directory. By making use of
sysroot functionality within pkgconfig this class no longer has to
manipulate the files.
</para>
</section>
<section id='ref-classes-src-distribute'>
<title>Distribution of sources - <filename>src_distribute_local.bbclass</filename></title>
<para>
Many software licenses require providing the sources for compiled
binaries. To simplify this process two classes were created:
<filename>src_distribute.bbclass</filename> and
<filename>src_distribute_local.bbclass</filename>.
</para>
<para>
Result of their work are <filename class="directory">tmp/deploy/source/</filename>
subdirs with sources sorted by <glossterm><link linkend='var-LICENSE'>LICENSE</link>
</glossterm> field. If recipe lists few licenses (or has entries like "Bitstream Vera") source archive is put in each
license dir.
</para>
<para>
The src_distribute_local class has three modes of operation:
</para>
<itemizedlist>
<listitem><para>copy - copies the files to the distribute dir</para></listitem>
<listitem><para>symlink - symlinks the files to the distribute dir</para></listitem>
<listitem><para>move+symlink - moves the files into distribute dir, and symlinks them back</para></listitem>
</itemizedlist>
</section>
<section id='ref-classes-perl'>
<title>Perl modules - <filename>cpan.bbclass</filename></title>
<para>
Recipes for Perl modules are simple: usually they only need to
point at the source archive and inherit the proper bbclass.
Building is split into two methods depending on the build system
used by the module authors.
</para>
<para>
Modules which use the old Makefile.PL based build system should
inherit <filename>cpan.bbclass</filename> in their recipes.
</para>
<para>
Modules which use the Build.PL based build system should
inherit <filename>cpan_build.bbclass</filename> in their recipes.
</para>
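<para>
A minimal Makefile.PL based recipe might therefore be little more than
the following sketch (the module name and URL are hypothetical):
</para>
<programlisting>
# hypothetical module name and URL
DESCRIPTION = "Foo::Bar Perl module"
SRC_URI = "http://www.cpan.org/modules/by-module/Foo/Foo-Bar-1.0.tar.gz"

inherit cpan</programlisting>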
</section>
<section id='ref-classes-distutils'>
<title>Python extensions - <filename>distutils.bbclass</filename></title>
<para>
Recipes for Python extensions are simple: usually they only need to
point at the source archive and inherit the proper bbclass.
Building is split into two methods depending on the build system
used by the module authors.
</para>
<para>
Extensions which use an autotools based build system should inherit
the autotools and distutils-base bbclasses in their recipes.
</para>
<para>
Extensions which use the distutils build system should inherit
<filename>distutils.bbclass</filename> in their recipes.
</para>
</section>
<section id='ref-classes-devshell'>
<title>Developer Shell - <filename>devshell.bbclass</filename></title>
<para>
This class adds the devshell task. It's usually up to distribution policy
to include this class (Poky does). See the <link
linkend='platdev-appdev-devshell'>developing with 'devshell' section</link>
for more information about using devshell.
</para>
</section>
<section id='ref-classes-package'>
<title>Packaging - <filename>package*.bbclass</filename></title>
<para>
The packaging classes add support for generating packages from the output
from builds. The core generic functionality is in
<filename>package.bbclass</filename>, code specific to particular package
types is contained in various sub classes such as
<filename>package_deb.bbclass</filename> and <filename>package_ipk.bbclass</filename>.
Most users will
want one or more of these classes and this is controlled by the <glossterm>
<link linkend='var-PACKAGE_CLASSES'>PACKAGE_CLASSES</link></glossterm>
variable. The first class listed in this variable will be used for image
generation. Since images are generated from packages a packaging class is
needed to enable image generation.
</para>
</section>
<section id='ref-classes-kernel'>
<title>Building kernels - <filename>kernel.bbclass</filename></title>
<para>
This class handles the building of Linux kernels and contains code to build both 2.4 and 2.6 kernel trees. All needed headers are
staged into <glossterm><link
linkend='var-STAGING_KERNEL_DIR'>STAGING_KERNEL_DIR</link></glossterm>
directory to allow building of out-of-tree modules using <filename>module.bbclass</filename>.
</para>
<para>
This means that each kernel module built is packaged separately and inter-module dependencies are
created by parsing the <command>modinfo</command> output. If all modules are
required, installing the "kernel-modules" package will install all
packages with modules. Various other kernel packages such as "kernel-vmlinux" are also generated.
</para>
<para>
Various other classes are used by the kernel and module classes internally including
<filename>kernel-arch.bbclass</filename>, <filename>module_strip.bbclass</filename>,
<filename>module-base.bbclass</filename> and <filename>linux-kernel-base.bbclass</filename>.
</para>
</section>
<section id='ref-classes-image'>
<title>Creating images - <filename>image.bbclass</filename> and <filename>rootfs*.bbclass</filename></title>
<para>
These classes add support for creating images in many formats. First the
root filesystem is created from packages by one of the <filename>rootfs_*.bbclass</filename>
files (depending on the package format used) and then the image is created.
The <glossterm><link
linkend='var-IMAGE_FSTYPES'>IMAGE_FSTYPES</link></glossterm>
variable controls which types of image to generate.
The list of packages to install into the image is controlled by the
<glossterm><link
linkend='var-IMAGE_INSTALL'>IMAGE_INSTALL</link></glossterm>
variable.
</para>
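<para>
For example, an image or machine configuration might request both jffs2
and ext2 images and add an extra package (a sketch; the values shown are
illustrative):
</para>
<programlisting>
# example values only
IMAGE_FSTYPES = "jffs2 ext2"
IMAGE_INSTALL += "dropbear"</programlisting>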
</section>
<section id='ref-classes-sanity'>
<title>Host System sanity checks - <filename>sanity.bbclass</filename></title>
<para>
This class checks that prerequisite software is present, to try to identify
and notify the user of problems which will affect their build. It also
performs basic checks of the user's configuration from local.conf to
prevent common mistakes and the resulting build failures. It's usually up to
distribution policy to include this class (Poky does).
</para>
</section>
<section id='ref-classes-insane'>
<title>Generated output quality assurance checks - <filename>insane.bbclass</filename></title>
<para>
This class adds a step to package generation which sanity checks the
packages generated by Poky. There is an ever increasing range of checks
it makes for common problems which break builds, packages or images;
see the bbclass file for more information. It's usually up to distribution
policy to include this class (Poky doesn't at the time of writing but plans
to do so soon).
</para>
</section>
<section id='ref-classes-siteinfo'>
<title>Autotools configuration data cache - <filename>siteinfo.bbclass</filename></title>
<para>
Autotools can require tests which have to execute on the target hardware.
Since this isn't possible in general when cross compiling, siteinfo is
used to provide cached test results so these tests can be skipped over but
the correct values used. The <link linkend='structure-meta-site'>meta/site directory</link>
contains test results sorted into different categories like architecture, endianness and
the libc used. Siteinfo provides a list of files containing data relevant to
the current build in the <glossterm><link linkend='var-CONFIG_SITE'>CONFIG_SITE
</link></glossterm> variable which autotools will automatically pick up.
</para>
<para>
The class also provides variables like <glossterm><link
linkend='var-SITEINFO_ENDIANESS'>SITEINFO_ENDIANESS</link></glossterm>
and <glossterm><link linkend='var-SITEINFO_BITS'>SITEINFO_BITS</link>
</glossterm> which can be used elsewhere in the metadata.
</para>
<para>
This class is included from <filename>base.bbclass</filename> and is hence always active.
</para>
</section>
<section id='ref-classes-others'>
<title>Other Classes</title>
<para>
Only the most useful/important classes are covered here but there are
others, see the <filename class="directory">meta/classes</filename> directory for the rest.
</para>
</section>
<!-- Undocumented classes are:
base_srpm.bbclass
bootimg.bbclass
ccache.inc
ccdv.bbclass
cml1.bbclass
cross.bbclass
flow-lossage.bbclass
gconf.bbclass
gettext.bbclass
gnome.bbclass
gtk-icon-cache.bbclass
icecc.bbclass
lib_package.bbclass
mozilla.bbclass
multimachine.bbclass
native.bbclass
oelint.bbclass
patch.bbclass
patcher.bbclass
pkg_distribute.bbclass
pkg_metainfo.bbclass
poky.bbclass
rm_work.bbclass
rpm_core.bbclass
scons.bbclass
sdk.bbclass
sdl.bbclass
sip.bbclass
sourcepkg.bbclass
srec.bbclass
syslinux.bbclass
tinderclient.bbclass
tmake.bbclass
xfce.bbclass
xlibs.bbclass
-->
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->


@@ -1,302 +0,0 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<appendix id='ref-features'>
<title>Reference: Features</title>
<para>'Features' provide a mechanism for working out which packages
should be included in the generated images. Distributions can
select which features they want to support through the
<glossterm linkend='var-DISTRO_FEATURES'><link
linkend='var-DISTRO_FEATURES'>DISTRO_FEATURES</link></glossterm>
variable which is set in the distribution configuration file
(poky.conf for Poky). Machine features are set in the
<glossterm linkend='var-MACHINE_FEATURES'><link
linkend='var-MACHINE_FEATURES'>MACHINE_FEATURES</link></glossterm>
variable which is set in the machine configuration file and
specifies which hardware features a given machine has.
</para>
<para>These two variables are combined to work out which kernel modules,
utilities and other packages to include. A given distribution can
support a selected subset of features so some machine features might not
be included if the distribution itself doesn't support them.
</para>
<section id='ref-features-distro'>
<title>Distro</title>
<para>The items below are valid options for <glossterm linkend='var-DISTRO_FEATURES'><link
linkend='var-DISTRO_FEATURES'>DISTRO_FEATURES</link></glossterm>.
</para>
<itemizedlist>
<listitem>
<para>
alsa - ALSA support will be included (OSS compatibility
kernel modules will be installed if available)
</para>
</listitem>
<listitem>
<para>
bluetooth - Include bluetooth support (integrated BT only)
</para>
</listitem>
<listitem>
<para>
ext2 - Include tools for supporting devices with an internal
HDD/Microdrive for storing files (instead of Flash only devices)
</para>
</listitem>
<listitem>
<para>
irda - Include Irda support
</para>
</listitem>
<listitem>
<para>
keyboard - Include keyboard support (e.g. keymaps will be
loaded during boot).
</para>
</listitem>
<listitem>
<para>
pci - Include PCI bus support
</para>
</listitem>
<listitem>
<para>
pcmcia - Include PCMCIA/CompactFlash support
</para>
</listitem>
<listitem>
<para>
usbgadget - USB Gadget Device support (for USB
networking/serial/storage)
</para>
</listitem>
<listitem>
<para>
usbhost - USB Host support (allows connecting external
keyboards, mice, storage, network devices etc.)
</para>
</listitem>
<listitem>
<para>
wifi - WiFi support (integrated only)
</para>
</listitem>
<listitem>
<para>
cramfs - CramFS support
</para>
</listitem>
<listitem>
<para>
ipsec - IPSec support
</para>
</listitem>
<listitem>
<para>
ipv6 - IPv6 support
</para>
</listitem>
<listitem>
<para>
nfs - NFS client support (for mounting NFS exports on
device)
</para>
</listitem>
<listitem>
<para>
ppp - PPP dialup support
</para>
</listitem>
<listitem>
<para>
smbfs - SMB networks client support (for mounting
Samba/Microsoft Windows shares on device)
</para>
</listitem>
</itemizedlist>
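<para>
A distribution configuration file enables a selection of these, for
example (an arbitrary subset, shown only to illustrate the syntax):
</para>
<programlisting>
# arbitrary example subset
DISTRO_FEATURES = "alsa bluetooth wifi usbhost ppp"</programlisting>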
</section>
<section id='ref-features-machine'>
<title>Machine</title>
<para>The items below are valid options for <glossterm linkend='var-MACHINE_FEATURES'><link
linkend='var-MACHINE_FEATURES'>MACHINE_FEATURES</link></glossterm>.
</para>
<itemizedlist>
<listitem>
<para>
acpi - Hardware has ACPI (x86/x86_64 only)
</para>
</listitem>
<listitem>
<para>
alsa - Hardware has ALSA audio drivers
</para>
</listitem>
<listitem>
<para>
apm - Hardware uses APM (or APM emulation)
</para>
</listitem>
<listitem>
<para>
bluetooth - Hardware has integrated BT
</para>
</listitem>
<listitem>
<para>
ext2 - Hardware has an HDD or Microdrive
</para>
</listitem>
<listitem>
<para>
irda - Hardware has Irda support
</para>
</listitem>
<listitem>
<para>
keyboard - Hardware has a keyboard
</para>
</listitem>
<listitem>
<para>
pci - Hardware has a PCI bus
</para>
</listitem>
<listitem>
<para>
pcmcia - Hardware has PCMCIA or CompactFlash sockets
</para>
</listitem>
<listitem>
<para>
screen - Hardware has a screen
</para>
</listitem>
<listitem>
<para>
serial - Hardware has serial support (usually RS232)
</para>
</listitem>
<listitem>
<para>
touchscreen - Hardware has a touchscreen
</para>
</listitem>
<listitem>
<para>
usbgadget - Hardware is USB gadget device capable
</para>
</listitem>
<listitem>
<para>
usbhost - Hardware is USB Host capable
</para>
</listitem>
<listitem>
<para>
wifi - Hardware has integrated WiFi
</para>
</listitem>
</itemizedlist>
</section>
<section id='ref-features-image'>
<title>Images</title>
<para>
The contents of images generated by Poky can be controlled by the <glossterm
linkend='var-IMAGE_FEATURES'><link
linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm>
variable in local.conf. Through this you can add several different
predefined packages such as development utilities or packages with debug
information needed to investigate application problems or profile applications.
</para>
<para>
The current list of <glossterm
linkend='var-IMAGE_FEATURES'><link
linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm> contains:
</para>
<itemizedlist>
<listitem>
<para>
apps-console-core - Core console applications such as ssh daemon,
avahi daemon, portmap (for mounting NFS shares)
</para>
</listitem>
<listitem>
<para>
x11-base - X11 server + minimal desktop
</para>
</listitem>
<listitem>
<para>
x11-sato - OpenedHand Sato environment
</para>
</listitem>
<listitem>
<para>
apps-x11-core - Core X11 applications such as an X Terminal, file manager, file editor
</para>
</listitem>
<listitem>
<para>
apps-x11-games - A set of X11 games
</para>
</listitem>
<listitem>
<para>
apps-x11-pimlico - OpenedHand Pimlico application suite
</para>
</listitem>
<listitem>
<para>
tools-sdk - A full SDK which runs on device
</para>
</listitem>
<listitem>
<para>
tools-debug - Debugging tools such as strace and gdb
</para>
</listitem>
<listitem>
<para>
tools-profile - Profiling tools such as oprofile, exmap and LTTng
</para>
</listitem>
<listitem>
<para>
tools-testapps - Device testing tools (e.g. touchscreen debugging)
</para>
</listitem>
<listitem>
<para>
nfs-server - NFS server (exports / over NFS to everybody)
</para>
</listitem>
<listitem>
<para>
dev-pkgs - Development packages (headers and extra library links) for all packages
installed in a given image
</para>
</listitem>
<listitem>
<para>
dbg-pkgs - Debug packages for all packages installed in a given image
</para>
</listitem>
</itemizedlist>
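<para>
For example, to add debugging tools and debug packages to generated
images you might append to this variable in <filename>local.conf</filename>
(a sketch; pick the features appropriate to your needs):
</para>
<programlisting>
# example selection only
IMAGE_FEATURES += "tools-debug dbg-pkgs"</programlisting>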
</section>
</appendix>
<!--
vim: expandtab tw=80 ts=4 spell spelllang=en_gb
-->


@@ -1,69 +0,0 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<appendix id='ref-images'>
<title>Reference: Images</title>
<para>
Poky has several standard images covering most people's needs. A full
list of image targets can be found by looking in the <filename class="directory">
meta/packages/images/</filename> directory. The standard images are listed below
along with details of what they contain:
</para>
<itemizedlist>
<listitem>
<para>
<emphasis>poky-image-minimal</emphasis> - A small image, just enough
to allow a device to boot
</para>
</listitem>
<listitem>
<para>
<emphasis>poky-image-base</emphasis> - console-only image with full
support for the target device's hardware
</para>
</listitem>
<listitem>
<para>
<emphasis>poky-image-core</emphasis> - X11 image with simple apps like
terminal, editor and file manager
</para>
</listitem>
<listitem>
<para>
<emphasis>poky-image-sato</emphasis> - X11 image with Sato theme and
Pimlico applications. Also contains terminal, editor and file manager.
</para>
</listitem>
<listitem>
<para>
<emphasis>poky-image-sdk</emphasis> - an X11 image like poky-image-sato which
also includes the native toolchain and the libraries needed to build applications
on the device itself, along with testing and profiling tools and debug
symbols.
</para>
</listitem>
<listitem>
<para>
<emphasis>meta-toolchain</emphasis> - This generates a tarball containing
a standalone toolchain which can be used outside of Poky. It is self-contained
and unpacks to the <filename class="directory">/usr/local/poky</filename>
directory. It also contains a copy of QEMU and the scripts necessary to run
Poky QEMU images.
</para>
</listitem>
<listitem>
<para>
<emphasis>meta-toolchain-sdk</emphasis> - This includes everything in
meta-toolchain but also includes development headers and libraries
forming a complete standalone SDK. See the <link linkend='platdev-appdev-external-sdk'>
Developing using the Poky SDK</link> and <link linkend='platdev-appdev-external-anjuta'>
Developing using the Anjuta Plugin</link> sections for more information.
</para>
</listitem>
</itemizedlist>
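<para>
Any of these images can be built by passing the image name to BitBake from an
initialised build environment, for example:
</para>
<literallayout class='monospaced'>
$ bitbake poky-image-sato
</literallayout>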
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->
@@ -1,365 +0,0 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<appendix id='ref-structure'>
<title>Reference: Directory Structure</title>
<para>
Poky consists of several components and understanding what these are
and where they're located is one of the keys to using it. This section walks
through the Poky directory structure giving information about the various
files and directories.
</para>
<section id='structure-core'>
<title>Top level core components</title>
<section id='structure-core-bitbake'>
<title><filename class="directory">bitbake/</filename></title>
<para>
A copy of BitBake is included within Poky for ease of use, and should
usually match the current BitBake stable release from the BitBake project.
BitBake, a metadata interpreter, reads the Poky metadata and runs the tasks
it defines. Failures usually stem from the metadata, not from
BitBake itself, so most users don't need to worry about BitBake. The
<filename class="directory">bitbake/bin/</filename> directory is placed
into the PATH environment variable by the <link
linkend="structure-core-script">poky-init-build-env</link> script.
</para>
<para>
For more information on BitBake please see the BitBake project site at
<ulink url="http://bitbake.berlios.de/"/>
and the BitBake on-line manual at <ulink url="http://bitbake.berlios.de/manual/"/>.
</para>
</section>
<section id='structure-core-build'>
<title><filename class="directory">build/</filename></title>
<para>
This directory contains user configuration files and the output
from Poky.
</para>
</section>
<section id='structure-core-meta'>
<title><filename class="directory">meta/</filename></title>
<para>
This directory contains the core metadata, a key part of Poky. Within this
directory there are definitions of the machines, the Poky distribution
and the packages that make up a given system.
</para>
</section>
<section id='structure-core-meta-extras'>
<title><filename class="directory">meta-extras/</filename></title>
<para>
This directory is similar to <filename class="directory">meta/</filename>,
and contains extra metadata not included in standard Poky. This metadata is
disabled by default and is not supported as part of Poky.
</para>
</section>
<section id='structure-core-scripts'>
<title><filename class="directory">scripts/</filename></title>
<para>
This directory contains various integration scripts which implement
extra functionality in the Poky environment, such as the QEMU
scripts. This directory is appended to the PATH environment variable by the
<link linkend="structure-core-script">poky-init-build-env</link> script.
</para>
</section>
<section id='structure-core-sources'>
<title><filename class="directory">sources/</filename></title>
<para>
While not part of a checkout, Poky will create this directory as
part of any build. Any downloads are placed in this directory (as
specified by the <glossterm><link linkend='var-DL_DIR'>DL_DIR</link>
</glossterm> variable). This directory can be shared between Poky
builds to save downloading files multiple times. SCM checkouts are
also stored here as e.g. <filename class="directory">sources/svn/
</filename>, <filename class="directory">sources/cvs/</filename> or
<filename class="directory">sources/git/</filename> and the
sources directory may contain archives of checkouts for various
revisions or dates.
</para>
<para>
It's worth noting that BitBake creates <filename class="extension">.md5
</filename> stamp files for downloads. It uses these to mark downloads as
complete as well as for checksum and access accounting purposes. If you add
a file manually to the directory, you need to touch the corresponding
<filename class="extension">.md5</filename> file too.
</para>
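<para>
For example, after manually copying a tarball into the directory
(<filename>foo-1.0.tar.gz</filename> is a hypothetical filename used only for
illustration):
</para>
<literallayout class='monospaced'>
$ cp ~/foo-1.0.tar.gz sources/
$ touch sources/foo-1.0.tar.gz.md5
</literallayout>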
<para>
This location can be overridden by setting <glossterm><link
linkend='var-DL_DIR'>DL_DIR</link></glossterm> in <filename>local.conf
</filename>. As noted above, the directory can be shared between builds and even
between machines via NFS, so downloads are only ever made once, speeding up builds.
</para>
</section>
<section id='structure-core-script'>
<title><filename>poky-init-build-env</filename></title>
<para>
This script is used to set up the Poky build environment. Sourcing this file in
a shell modifies PATH and sets other core BitBake variables based on the
current working directory. You must source this script before running Poky commands.
Internally it uses scripts within the <filename class="directory">scripts/
</filename> directory to do the bulk of the work.
</para>
</section>
</section>
<section id='structure-build'>
<title><filename class="directory">build/</filename> - The Build Directory</title>
<section id='structure-build-conf-local.conf'>
<title><filename>build/conf/local.conf</filename></title>
<para>
This file contains all the local user configuration of Poky. If there
is no <filename>local.conf</filename> present, it is created from
<filename>local.conf.sample</filename>. The <filename>local.conf</filename>
file contains documentation on the various configuration options. Any
variable set here overrides any variable set elsewhere within Poky unless
that variable is hardcoded within Poky (e.g. by using '=' instead of '?=').
Some variables are hardcoded for various reasons but these variables are
relatively rare.
</para>
<para>
Edit this file to set the <glossterm><link linkend='var-MACHINE'>MACHINE</link></glossterm>
for which you want to build, the package formats you wish to use
(PACKAGE_CLASSES) and where downloaded files should go
(<glossterm><link linkend='var-DL_DIR'>DL_DIR</link></glossterm>).
</para>
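<para>
A minimal <filename>local.conf</filename> fragment might therefore look like the
following (the values shown are purely illustrative):
</para>
<literallayout class='monospaced'>
MACHINE = "qemuarm"
PACKAGE_CLASSES = "package_ipk"
DL_DIR = "${HOME}/sources"
</literallayout>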
</section>
<section id='structure-build-tmp'>
<title><filename class="directory">build/tmp/</filename></title>
<para>
This is created by BitBake if it doesn't exist and is where all the Poky output
is placed. To clean Poky and start a build from scratch (other than downloads),
you can wipe this directory. The <filename class="directory">tmp/
</filename> directory has some important sub-components detailed below.
</para>
</section>
<section id='structure-build-tmp-cache'>
<title><filename class="directory">build/tmp/cache/</filename></title>
<para>
When BitBake parses the metadata it creates a cache file of the result which can
be used when subsequently running commands. These are stored here on
a per machine basis.
</para>
</section>
<section id='structure-build-tmp-cross'>
<title><filename class="directory">build/tmp/cross/</filename></title>
<para>
The cross compiler when generated is placed into this directory and those
beneath it.
</para>
</section>
<section id='structure-build-tmp-deploy'>
<title><filename class="directory">build/tmp/deploy/</filename></title>
<para>Any 'end result' output from Poky is placed under here.</para>
</section>
<section id='structure-build-tmp-deploy-deb'>
<title><filename class="directory">build/tmp/deploy/deb/</filename></title>
<para>
Any .deb packages emitted by Poky are placed here, sorted into feeds for
different architecture types.
</para>
</section>
<section id='structure-build-tmp-deploy-images'>
<title><filename class="directory">build/tmp/deploy/images/</filename></title>
<para>
Complete filesystem images are placed here. If you want to flash the resulting
image from a build onto a device, look here for them.
</para>
</section>
<section id='structure-build-tmp-deploy-ipk'>
<title><filename class="directory">build/tmp/deploy/ipk/</filename></title>
<para>Any resulting .ipk packages emitted by Poky are placed here.</para>
</section>
<section id='structure-build-tmp-rootfs'>
<title><filename class="directory">build/tmp/rootfs/</filename></title>
<para>
This is a temporary scratch area used when creating filesystem images. It is
populated under fakeroot and is of no further use once that fakeroot session has
ended, as ownership and permission information is lost. It is left in place since
it can still be useful when debugging image creation problems.
</para>
</section>
<section id='structure-build-tmp-staging'>
<title><filename class="directory">build/tmp/staging/</filename></title>
<para>
Any package needing to share output with other packages does so within staging.
This means it contains any shared header files and any shared libraries amongst
other data. It is subdivided by architecture so multiple builds can run within
the one build directory.
</para>
</section>
<section id='structure-build-tmp-stamps'>
<title><filename class="directory">build/tmp/stamps/</filename></title>
<para>
This is used by BitBake for accounting purposes to keep track of which tasks
have been run and when. It is also subdivided by architecture. The files are
empty and the important information is the filenames and timestamps.</para>
</section>
<section id='structure-build-tmp-work'>
<title><filename class="directory">build/tmp/work/</filename></title>
<para>
This directory contains various subdirectories for each architecture, and each package built by BitBake has its own work directory under the appropriate architecture subdirectory. All tasks are executed from this work directory. As an example, the source for a particular package will be unpacked, patched, configured and compiled all within its own work directory.
</para>
<para>
It is worth considering the structure of a typical work directory. An
example is the linux-rp kernel, version 2.6.20 r7 on the machine spitz
built within Poky. For this package a work directory of <filename
class="directory">tmp/work/spitz-poky-linux-gnueabi/linux-rp-2.6.20-r7/
</filename>, referred to as <glossterm><link linkend='var-WORKDIR'>WORKDIR
</link></glossterm>, is created. Within this directory, the source is
unpacked into linux-2.6.20 and then patched by quilt (see <link
linkend="usingpoky-modifying-packages-quilt">Section 3.5.1</link>).
Within the <filename class="directory">linux-2.6.20</filename> directory,
the standard quilt directories <filename class="directory">linux-2.6.20/patches</filename>
and <filename class="directory">linux-2.6.20/.pc</filename> are created,
and standard quilt commands can be used.
</para>
<para>
There are other directories generated within <glossterm><link
linkend='var-WORKDIR'>WORKDIR</link></glossterm>. The most important
is <glossterm><link linkend='var-WORKDIR'>WORKDIR</link></glossterm><filename class="directory">/temp/</filename> which has log files for each
task (<filename>log.do_*.pid</filename>) and the scripts BitBake runs for
each task (<filename>run.do_*.pid</filename>). The <glossterm><link
linkend='var-WORKDIR'>WORKDIR</link></glossterm><filename
class="directory">/image/</filename> directory is where <command>make
install</command> places its output which is then split into subpackages
within <glossterm><link linkend='var-WORKDIR'>WORKDIR</link></glossterm><filename class="directory">/install/</filename>.
</para>
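<para>
The layout described above can therefore be summarised as the following tree
(simplified):
</para>
<literallayout class='monospaced'>
tmp/work/spitz-poky-linux-gnueabi/linux-rp-2.6.20-r7/
    linux-2.6.20/           unpacked, patched source
    linux-2.6.20/patches/   quilt patches
    linux-2.6.20/.pc/       quilt state
    temp/                   log.do_*.pid and run.do_*.pid files
    image/                  output of 'make install'
    install/                output split into subpackages
</literallayout>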
</section>
</section>
<section id='structure-meta'>
<title><filename class="directory">meta/</filename> - The Metadata</title>
<para>
As mentioned previously, this is the core of Poky. It has several
important subdivisions:
</para>
<section id='structure-meta-classes'>
<title><filename class="directory">meta/classes/</filename></title>
<para>
Contains the <filename class="extension">*.bbclass</filename> files. Class
files are used to abstract common code allowing it to be reused by multiple
packages. The <filename>base.bbclass</filename> file is inherited by every
package. Examples of other important classes are
<filename>autotools.bbclass</filename>, which in theory allows any
autotools-enabled package to work with Poky with minimal effort, and
<filename>kernel.bbclass</filename>, which contains common code and functions
for working with the Linux kernel. Common functionality such as image generation
or packaging also has its own class files (<filename>image.bbclass
</filename>, <filename>rootfs_*.bbclass</filename> and
<filename>package*.bbclass</filename>).
</para>
</section>
<section id='structure-meta-conf'>
<title><filename class="directory">meta/conf/</filename></title>
<para>
This is the core set of configuration files, starting with
<filename>bitbake.conf</filename>, from which all other configuration
files are included (see the include statements at the end of that file;
even <filename>local.conf</filename> is loaded from there). While
<filename>bitbake.conf</filename> sets up the defaults, these can often be
overridden by user (<filename>local.conf</filename>), machine or
distribution configuration files.
</para>
</section>
<section id='structure-meta-conf-machine'>
<title><filename class="directory">meta/conf/machine/</filename></title>
<para>
Contains all the machine configuration files. If you set MACHINE="spitz",
Poky will look for a <filename>spitz.conf</filename> file in this directory. The includes
directory contains various data common to multiple machines. If you want to add
support for a new machine to Poky, this is the directory to look in.
</para>
</section>
<section id='structure-meta-conf-distro'>
<title><filename class="directory">meta/conf/distro/</filename></title>
<para>
Any distribution-specific configuration is controlled from here. OpenEmbedded
supports multiple distributions, of which Poky is one. Poky ships only the
Poky distribution, so poky.conf is the main file here; the versions and
SRCDATEs of applications are configured in it. An example of
an alternative configuration is poky-bleeding.conf, although this mainly inherits
its configuration from Poky itself.
</para>
</section>
<section id='structure-meta-packages'>
<title><filename class="directory">meta/packages/</filename></title>
<para>
Each application (package) Poky can build has an associated .bb file, all of
which are stored under this directory. Poky finds them through the BBFILES
variable, which defaults to packages/*/*.bb. Adding a new piece of software to
Poky consists of adding the appropriate .bb file. The .bb files from OpenEmbedded
upstream are usually compatible, although they are not supported.
</para>
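<para>
For example, a hypothetical new package would typically be added as
<filename>meta/packages/foo/foo_1.0.bb</filename>, which the default
BBFILES pattern picks up automatically.
</para>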
</section>
<section id='structure-meta-site'>
<title><filename class="directory">meta/site/</filename></title>
<para>
Certain autoconf test results cannot be determined when cross-compiling, since
tests cannot be run on a live target system. This directory therefore contains
cached results for various architectures, which are passed to autoconf.
</para>
</section>
</section>
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->
@@ -1,840 +0,0 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<!-- Dummy chapter -->
<appendix id='ref-variables-glos'>
<title>Reference: Variables Glossary</title>
<para>
This section lists common variables used in Poky and gives an overview
of their function and contents.
</para>
<glossary id='ref-variables-glossary'>
<para>
<link linkend='var-glossary-a'>A</link>
<link linkend='var-glossary-b'>B</link>
<link linkend='var-glossary-c'>C</link>
<link linkend='var-glossary-d'>D</link>
<link linkend='var-glossary-e'>E</link>
<link linkend='var-glossary-f'>F</link>
<!-- <link linkend='var-glossary-g'>G</link> -->
<link linkend='var-glossary-h'>H</link>
<link linkend='var-glossary-i'>I</link>
<!-- <link linkend='var-glossary-j'>J</link> -->
<link linkend='var-glossary-k'>K</link>
<link linkend='var-glossary-l'>L</link>
<link linkend='var-glossary-m'>M</link>
<!-- <link linkend='var-glossary-n'>N</link> -->
<!-- <link linkend='var-glossary-o'>O</link> -->
<link linkend='var-glossary-p'>P</link>
<!-- <link linkend='var-glossary-q'>Q</link> -->
<link linkend='var-glossary-r'>R</link>
<link linkend='var-glossary-s'>S</link>
<link linkend='var-glossary-t'>T</link>
<!-- <link linkend='var-glossary-u'>U</link> -->
<!-- <link linkend='var-glossary-v'>V</link> -->
<link linkend='var-glossary-w'>W</link>
<!-- <link linkend='var-glossary-x'>X</link> -->
<!-- <link linkend='var-glossary-y'>Y</link> -->
<!-- <link linkend='var-glossary-z'>Z</link>-->
</para>
<glossdiv id='var-glossary-a'><title>A</title>
<glossentry id='var-AUTHOR'><glossterm>AUTHOR</glossterm>
<glossdef>
<para>E-mail address used to contact the original author(s) - for
sending patches, forwarding bugs and so on.</para>
</glossdef>
</glossentry>
<glossentry id='var-AUTOREV'><glossterm>AUTOREV</glossterm>
<glossdef>
<para>Use the current (newest) source revision - used with the
<glossterm><link linkend='var-SRCREV'>SRCREV</link></glossterm>
variable.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-b'><title>B</title>
<glossentry id='var-BB_NUMBER_THREADS'><glossterm>BB_NUMBER_THREADS</glossterm>
<glossdef>
<para>The maximum number of tasks BitBake should run in parallel at any one time</para>
</glossdef>
</glossentry>
<glossentry id='var-BBFILES'><glossterm>BBFILES</glossterm>
<glossdef>
<para>List of recipes used by BitBake to build software</para>
</glossdef>
</glossentry>
<!-- BBPATH is not a usable variable in .bb files and should not be listed here -->
<glossentry id='var-BBINCLUDELOGS'><glossterm>BBINCLUDELOGS</glossterm>
<glossdef>
<para>Variable which controls how BitBake displays logs on build failure.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-c'><title>C</title>
<glossentry id='var-CFLAGS'><glossterm>CFLAGS</glossterm>
<glossdef>
<para>
Flags passed to C compiler for the target system. Evaluates to the same
as <link linkend='var-TARGET_CFLAGS'>TARGET_CFLAGS</link>.
</para>
</glossdef>
</glossentry>
<glossentry id='var-COMPATIBLE_MACHINE'><glossterm>COMPATIBLE_MACHINE</glossterm>
<glossdef>
<para>A regular expression which evaluates to match the machines the recipe
works with. It stops recipes being run on machines with which they're incompatible,
which is particularly useful with kernels. It also helps to increase parsing
speed, since if the current machine is found to be incompatible, further parsing
of the recipe is skipped.</para>
</glossdef>
</glossentry>
<glossentry id='var-CONFIG_SITE'><glossterm>CONFIG_SITE</glossterm>
<glossdef>
<para>
Contains a list of files containing autoconf test results relevant
to the current build. This variable is used by the autotools utilities
when running configure.
</para>
</glossdef>
</glossentry>
<glossentry id='var-CVS_TARBALL_STASH'><glossterm>CVS_TARBALL_STASH</glossterm>
<glossdef>
<para>Location to search for
pre-generated tarballs when fetching from remote SCM
repositories (CVS/SVN/GIT)</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-d'><title>D</title>
<glossentry id='var-D'><glossterm>D</glossterm>
<glossdef>
<para>Destination directory</para>
</glossdef>
</glossentry>
<glossentry id='var-DEBUG_BUILD'><glossterm>DEBUG_BUILD</glossterm>
<glossdef>
<para>
Build packages with debugging information. This influences the value
<link linkend='var-SELECTED_OPTIMIZATION'>SELECTED_OPTIMIZATION</link>
takes.
</para>
</glossdef>
</glossentry>
<glossentry id='var-DEBUG_OPTIMIZATION'><glossterm>DEBUG_OPTIMIZATION</glossterm>
<glossdef>
<para>
The options to pass in <link linkend='var-TARGET_CFLAGS'>TARGET_CFLAGS</link>
and <link linkend='var-CFLAGS'>CFLAGS</link> when compiling a system for debugging.
This defaults to "-O -fno-omit-frame-pointer -g".
</para>
</glossdef>
</glossentry>
<glossentry id='var-DEFAULT_PREFERENCE'><glossterm>DEFAULT_PREFERENCE</glossterm>
<glossdef>
<para>Priority of recipe</para>
</glossdef>
</glossentry>
<glossentry id='var-DEPENDS'><glossterm>DEPENDS</glossterm>
<glossdef>
<para>
A list of build-time dependencies for a given recipe. These indicate
recipes that must have been staged before this recipe can configure.
</para>
</glossdef>
</glossentry>
<glossentry id='var-DESCRIPTION'><glossterm>DESCRIPTION</glossterm>
<glossdef>
<para>Package description used by package
managers</para>
</glossdef>
</glossentry>
<glossentry id='var-DESTDIR'><glossterm>DESTDIR</glossterm>
<glossdef>
<para>Destination directory</para>
</glossdef>
</glossentry>
<glossentry id='var-DISTRO'><glossterm>DISTRO</glossterm>
<glossdef>
<para>Short name of distribution</para>
</glossdef>
</glossentry>
<glossentry id='var-DISTRO_EXTRA_RDEPENDS'><glossterm>DISTRO_EXTRA_RDEPENDS</glossterm>
<glossdef>
<para>List of packages required by distribution.</para>
</glossdef>
</glossentry>
<glossentry id='var-DISTRO_EXTRA_RRECOMMENDS'><glossterm>DISTRO_EXTRA_RRECOMMENDS</glossterm>
<glossdef>
<para>List of packages which extend the usability of an
image. These packages are installed automatically
but can be removed by the user.</para>
</glossdef>
</glossentry>
<glossentry id='var-DISTRO_FEATURES'><glossterm>DISTRO_FEATURES</glossterm>
<glossdef>
<para>Features of the distribution.</para>
</glossdef>
</glossentry>
<glossentry id='var-DISTRO_NAME'><glossterm>DISTRO_NAME</glossterm>
<glossdef>
<para>Long name of distribution</para>
</glossdef>
</glossentry>
<glossentry id='var-DISTRO_VERSION'><glossterm>DISTRO_VERSION</glossterm>
<glossdef>
<para>Version of distribution</para>
</glossdef>
</glossentry>
<glossentry id='var-DL_DIR'><glossterm>DL_DIR</glossterm>
<glossdef>
<para>Directory where all fetched sources will be stored</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-e'><title>E</title>
<glossentry id='var-ENABLE_BINARY_LOCALE_GENERATION'><glossterm>ENABLE_BINARY_LOCALE_GENERATION</glossterm>
<glossdef>
<para>Variable which controls which glibc locales are
generated during the build (useful if the target device
has 64MB of RAM or less)</para>
</glossdef>
</glossentry>
<glossentry id='var-EXTRA_OECONF'><glossterm>EXTRA_OECONF</glossterm>
<glossdef>
<para>Additional 'configure' script options</para>
</glossdef>
</glossentry>
<glossentry id='var-EXTRA_OEMAKE'><glossterm>EXTRA_OEMAKE</glossterm>
<glossdef>
<para>Additional GNU make options</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-f'><title>F</title>
<glossentry id='var-FILES'><glossterm>FILES</glossterm>
<glossdef>
<para>List of directories/files which will be placed
in packages</para>
</glossdef>
</glossentry>
<glossentry id='var-FULL_OPTIMIZATION'><glossterm>FULL_OPTIMIZATION</glossterm>
<glossdef>
<para>
The options to pass in <link linkend='var-TARGET_CFLAGS'>TARGET_CFLAGS</link>
and <link linkend='var-CFLAGS'>CFLAGS</link> when compiling an optimised system.
This defaults to "-fexpensive-optimizations -fomit-frame-pointer -frename-registers -O2".
</para>
</glossdef>
</glossentry>
</glossdiv>
<!-- <glossdiv id='var-glossary-g'><title>G</title>-->
<!-- </glossdiv>-->
<glossdiv id='var-glossary-h'><title>H</title>
<glossentry id='var-HOMEPAGE'><glossterm>HOMEPAGE</glossterm>
<glossdef>
<para>Website where more information about the package can be found</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-i'><title>I</title>
<glossentry id='var-IMAGE_FEATURES'><glossterm>IMAGE_FEATURES</glossterm>
<glossdef>
<para><link linkend="ref-features-image">List of
features</link> present in resulting images</para>
</glossdef>
</glossentry>
<glossentry id='var-IMAGE_FSTYPES'><glossterm>IMAGE_FSTYPES</glossterm>
<glossdef>
<para>Formats of the root filesystem images to be
created</para>
</glossdef>
</glossentry>
<glossentry id='var-IMAGE_INSTALL'><glossterm>IMAGE_INSTALL</glossterm>
<glossdef>
<para>List of packages to be installed into the image</para>
</glossdef>
</glossentry>
<glossentry id='var-INHIBIT_PACKAGE_STRIP'><glossterm>INHIBIT_PACKAGE_STRIP</glossterm>
<glossdef>
<para>
This variable causes the build to not strip binaries in
resulting packages.
</para>
</glossdef>
</glossentry>
<glossentry id='var-INHERIT'><glossterm>INHERIT</glossterm>
<glossdef>
<para>
This variable causes the named class to be inherited at
this point during parsing. It is only valid in configuration
files.
</para>
</glossdef>
</glossentry>
<glossentry id='var-INITSCRIPT_PACKAGES'><glossterm>INITSCRIPT_PACKAGES</glossterm>
<glossdef>
<para>
Scope: Used in recipes when using update-rc.d.bbclass. Optional, defaults to PN.
</para>
<para>
A list of the packages which contain initscripts. If multiple
packages are specified, you need to append the package name
to the other INITSCRIPT_* variables as an override.
</para>
</glossdef>
</glossentry>
<glossentry id='var-INITSCRIPT_NAME'><glossterm>INITSCRIPT_NAME</glossterm>
<glossdef>
<para>
Scope: Used in recipes when using update-rc.d.bbclass. Mandatory.
</para>
<para>
The filename of the initscript (as installed to ${etcdir}/init.d).
</para>
</glossdef>
</glossentry>
<glossentry id='var-INITSCRIPT_PARAMS'><glossterm>INITSCRIPT_PARAMS</glossterm>
<glossdef>
<para>
Scope: Used in recipes when using update-rc.d.bbclass. Mandatory.
</para>
<para>
Specifies the options to pass to update-rc.d. An example is
"start 99 5 2 . stop 20 0 1 6 ." which gives the script a
runlevel of 99, starts the script in initlevels 2 and 5 and
stops it in levels 0, 1 and 6.
</para>
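<para>
Taken together, a recipe using update-rc.d.bbclass might therefore
contain settings like the following (a hypothetical fragment shown
only for illustration):
</para>
<literallayout class='monospaced'>
INITSCRIPT_NAME = "example-daemon"
INITSCRIPT_PARAMS = "defaults 99"
</literallayout>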
</glossdef>
</glossentry>
</glossdiv>
<!-- <glossdiv id='var-glossary-j'><title>J</title>-->
<!-- </glossdiv>-->
<glossdiv id='var-glossary-k'><title>K</title>
<glossentry id='var-KERNEL_IMAGETYPE'><glossterm>KERNEL_IMAGETYPE</glossterm>
<glossdef>
<para>The type of kernel to build for a device, usually set by the
machine configuration files and defaults to "zImage". This is used
when building the kernel and is passed to "make" as the target to
build.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-l'><title>L</title>
<glossentry id='var-LICENSE'><glossterm>LICENSE</glossterm>
<glossdef>
<para>List of package source licenses.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-m'><title>M</title>
<glossentry id='var-MACHINE'><glossterm>MACHINE</glossterm>
<glossdef>
<para>Target device</para>
</glossdef>
</glossentry>
<glossentry id='var-MACHINE_ESSENTIAL_RDEPENDS'><glossterm>MACHINE_ESSENTIAL_RDEPENDS</glossterm>
<glossdef>
<para>List of packages required to boot device</para>
</glossdef>
</glossentry>
<glossentry id='var-MACHINE_ESSENTIAL_RRECOMMENDS'><glossterm>MACHINE_ESSENTIAL_RRECOMMENDS</glossterm>
<glossdef>
<para>List of packages required to boot device (usually
additional kernel modules)</para>
</glossdef>
</glossentry>
<glossentry id='var-MACHINE_EXTRA_RDEPENDS'><glossterm>MACHINE_EXTRA_RDEPENDS</glossterm>
<glossdef>
<para>List of packages required to use device</para>
</glossdef>
</glossentry>
<glossentry id='var-MACHINE_EXTRA_RRECOMMENDS'><glossterm>MACHINE_EXTRA_RRECOMMENDS</glossterm>
<glossdef>
<para>List of packages useful for the device (for example
additional kernel modules)</para>
</glossdef>
</glossentry>
<glossentry id='var-MACHINE_FEATURES'><glossterm>MACHINE_FEATURES</glossterm>
<glossdef>
<para>List of device features - defined in <link
linkend='ref-features-machine'>machine
features section</link></para>
</glossdef>
</glossentry>
<glossentry id='var-MAINTAINER'><glossterm>MAINTAINER</glossterm>
<glossdef>
<para>E-mail of distribution maintainer</para>
</glossdef>
</glossentry>
</glossdiv>
<!-- <glossdiv id='var-glossary-n'><title>N</title>-->
<!-- </glossdiv>-->
<!-- <glossdiv id='var-glossary-o'><title>O</title>-->
<!-- </glossdiv>-->
<glossdiv id='var-glossary-p'><title>P</title>
<glossentry id='var-PACKAGE_ARCH'><glossterm>PACKAGE_ARCH</glossterm>
<glossdef>
<para>Architecture of resulting package</para>
</glossdef>
</glossentry>
<glossentry id='var-PACKAGE_CLASSES'><glossterm>PACKAGE_CLASSES</glossterm>
<glossdef>
<para>List of resulting package formats</para>
</glossdef>
</glossentry>
<glossentry id='var-PACKAGE_EXTRA_ARCHS'><glossterm>PACKAGE_EXTRA_ARCHS</glossterm>
<glossdef>
<para>List of architectures compatible with the device's
CPU. Useful when building for several different
devices which use miscellaneous processors (such as
XScale and ARM926EJ-S)</para>
</glossdef>
</glossentry>
<glossentry id='var-PACKAGES'><glossterm>PACKAGES</glossterm>
<glossdef>
<para>List of packages to be created from recipe.
The default value is "${PN}-dbg ${PN} ${PN}-doc ${PN}-dev"</para>
</glossdef>
</glossentry>
<glossentry id='var-PARALLEL_MAKE'><glossterm>PARALLEL_MAKE</glossterm>
<glossdef>
<para>Extra options that are passed to the make command during the
compile tasks. This is usually of the form '-j 4' where the number
represents the maximum number of parallel threads make can run.</para>
</glossdef>
</glossentry>
<glossentry id='var-PN'><glossterm>PN</glossterm>
<glossdef>
<para>Name of package.
</para>
</glossdef>
</glossentry>
<glossentry id='var-PR'><glossterm>PR</glossterm>
<glossdef>
<para>Revision of package.
</para>
</glossdef>
</glossentry>
<glossentry id='var-PV'><glossterm>PV</glossterm>
<glossdef>
<para>Version of package.
The default value is "1.0"</para>
</glossdef>
</glossentry>
<glossentry id='var-PE'><glossterm>PE</glossterm>
<glossdef>
<para>
Epoch of the package. The default value is "1". The field is used
to make upgrades possible when the versioning scheme changes in
some backwards incompatible way.
</para>
</glossdef>
</glossentry>
<glossentry id='var-PREFERRED_PROVIDER'><glossterm>PREFERRED_PROVIDER</glossterm>
<glossdef>
<para>If multiple recipes provide an item, this variable
determines which one should be given preference. It
should be set to the "$PN" of the recipe to be preferred.</para>
</glossdef>
</glossentry>
<glossentry id='var-PREFERRED_VERSION'><glossterm>PREFERRED_VERSION</glossterm>
<glossdef>
<para>
If there are multiple versions of recipe available, this
variable determines which one should be given preference. It
should be set to the "$PV" of the recipe to be preferred.
</para>
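<para>
For example (illustrative values, using the linux-rp kernel
mentioned elsewhere in this manual):
</para>
<literallayout class='monospaced'>
PREFERRED_PROVIDER_virtual/kernel = "linux-rp"
PREFERRED_VERSION_linux-rp = "2.6.20"
</literallayout>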
</glossdef>
</glossentry>
<glossentry id='var-POKY_EXTRA_INSTALL'><glossterm>POKY_EXTRA_INSTALL</glossterm>
<glossdef>
<para>List of packages to be added to the image. This should
only be set in <filename>local.conf</filename>.</para>
</glossdef>
</glossentry>
<glossentry id='var-POKYLIBC'><glossterm>POKYLIBC</glossterm>
<glossdef>
<para>Libc implementation selector - glibc or uclibc can be selected.</para>
</glossdef>
</glossentry>
<glossentry id='var-POKYMODE'><glossterm>POKYMODE</glossterm>
<glossdef>
<para>Toolchain selector. This can be an external toolchain
built from Poky or one of a few supported combinations of
upstream GCC or CodeSourcery Labs toolchains.</para>
</glossdef>
</glossentry>
</glossdiv>
<!-- <glossdiv id='var-glossary-q'><title>Q</title>-->
<!-- </glossdiv>-->
<glossdiv id='var-glossary-r'><title>R</title>
<glossentry id='var-RCONFLICTS'><glossterm>RCONFLICTS</glossterm>
<glossdef>
<para>List of packages which conflict with this
one. The package will not be installed unless the
conflicting packages are removed first.</para>
</glossdef>
</glossentry>
<glossentry id='var-RDEPENDS'><glossterm>RDEPENDS</glossterm>
<glossdef>
<para>
A list of run-time dependencies for a package. These packages
need to be installed alongside the package they apply to for
that package to run correctly; for example, a perl script
would rdepend on perl. Since this variable applies to
output packages there is usually an override attached
to this variable, such as RDEPENDS_${PN}-dev. Names in this field
should be as they are in the <link linkend='var-PACKAGES'>PACKAGES
</link> namespace before any renaming of the output package
by classes such as debian.bbclass.
</para>
</glossdef>
</glossentry>
<glossentry id='var-ROOT_FLASH_SIZE'><glossterm>ROOT_FLASH_SIZE</glossterm>
<glossdef>
<para>Size of rootfs in megabytes</para>
</glossdef>
</glossentry>
<glossentry id='var-RRECOMMENDS'><glossterm>RRECOMMENDS</glossterm>
<glossdef>
<para>List of packages which extend the usability of a
package. These packages are installed automatically
but can be removed by the user.</para>
</glossdef>
</glossentry>
<glossentry id='var-RREPLACES'><glossterm>RREPLACES</glossterm>
<glossdef>
<para>List of packages which this one
replaces.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-s'><title>S</title>
<glossentry id='var-S'><glossterm>S</glossterm>
<glossdef>
<para>
Path to unpacked sources (by default:
"${<link linkend='var-WORKDIR'>WORKDIR</link>}/${<link linkend='var-PN'>PN</link>}-${<link linkend='var-PV'>PV</link>}")
</para>
</glossdef>
</glossentry>
<glossentry id='var-SECTION'><glossterm>SECTION</glossterm>
<glossdef>
<para>Section where package should be put - used
by package managers</para>
</glossdef>
</glossentry>
<glossentry id='var-SELECTED_OPTIMIZATION'><glossterm>SELECTED_OPTIMIZATION</glossterm>
<glossdef>
<para>
The variable takes the value of <link linkend='var-FULL_OPTIMIZATION'>FULL_OPTIMIZATION</link>
unless <link linkend='var-DEBUG_BUILD'>DEBUG_BUILD</link> = "1" in which case
<link linkend='var-DEBUG_OPTIMIZATION'>DEBUG_OPTIMIZATION</link> is used.
</para>
</glossdef>
</glossentry>
<glossentry id='var-SERIAL_CONSOLE'><glossterm>SERIAL_CONSOLE</glossterm>
<glossdef>
<para>Speed and device of the serial port used to attach
the serial console. This is passed to the kernel as the
"console" parameter, and after boot a getty is started on
that port so remote login is possible.</para>
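<para>
For example (an illustrative value), SERIAL_CONSOLE = "115200 ttyS0"
specifies a 115200 baud console on the first serial port.
</para>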
</glossdef>
</glossentry>
<glossentry id='var-SHELLCMDS'><glossterm>SHELLCMDS</glossterm>
<glossdef>
<para>
A list of commands to run within a shell, used by <glossterm><link
linkend='var-TERMCMDRUN'>TERMCMDRUN</link></glossterm>. It defaults to
<glossterm><link linkend='var-SHELLRCCMD'>SHELLRCCMD</link></glossterm>.
</para>
</glossdef>
</glossentry>
<glossentry id='var-SHELLRCCMD'><glossterm>SHELLRCCMD</glossterm>
<glossdef>
<para>
How to launch a shell, defaults to bash.
</para>
</glossdef>
</glossentry>
<glossentry id='var-SITEINFO_ENDIANESS'><glossterm>SITEINFO_ENDIANESS</glossterm>
<glossdef>
<para>
Contains "le" for little-endian or "be" for big-endian depending
on the endian byte order of the target system.
</para>
</glossdef>
</glossentry>
<glossentry id='var-SITEINFO_BITS'><glossterm>SITEINFO_BITS</glossterm>
<glossdef>
<para>
Contains "32" or "64" depending on the number of bits for the
CPU of the target system.
</para>
</glossdef>
</glossentry>
<glossentry id='var-SRC_URI'><glossterm>SRC_URI</glossterm>
<glossdef>
<para>List of source files (local or remote)</para>
</glossdef>
</glossentry>
<glossentry id='var-SRC_URI_OVERRIDES_PACKAGE_ARCH'><glossterm>SRC_URI_OVERRIDES_PACKAGE_ARCH</glossterm>
<glossdef>
<para>
By default, code automatically detects whether
<glossterm><link linkend='var-SRC_URI'>SRC_URI</link></glossterm>
contains machine-specific files and, if this is the case,
automatically changes
<glossterm><link linkend='var-PACKAGE_ARCH'>PACKAGE_ARCH</link></glossterm>.
Setting this variable to "0" disables that behaviour.
</para>
</glossdef>
</glossentry>
<glossentry id='var-SRCDATE'><glossterm>SRCDATE</glossterm>
<glossdef>
<para>
Date of source code used to build package (if it was fetched
from SCM).
</para>
</glossdef>
</glossentry>
<glossentry id='var-SRCREV'><glossterm>SRCREV</glossterm>
<glossdef>
<para>
Revision of source code used to build package (Subversion,
GIT, Bazaar only).
</para>
</glossdef>
</glossentry>
<glossentry id='var-STAGING_KERNEL_DIR'><glossterm>STAGING_KERNEL_DIR</glossterm>
<glossdef>
<para>
Directory with kernel headers required to build out-of-tree
modules.
</para>
</glossdef>
</glossentry>
<glossentry id='var-STAMPS'><glossterm>STAMPS</glossterm>
<glossdef>
<para>
Directory (usually TMPDIR/stamps) with timestamps of
executed tasks.
</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-t'><title>T</title>
<glossentry id='var-TARGET_ARCH'><glossterm>TARGET_ARCH</glossterm>
<glossdef>
<para>The architecture of the device we're building for.
A number of values are possible but Poky primarily supports
"arm" and "i586".</para>
</glossdef>
</glossentry>
<glossentry id='var-TARGET_CFLAGS'><glossterm>TARGET_CFLAGS</glossterm>
<glossdef>
<para>
Flags passed to C compiler for the target system. Evaluates to the same
as <link linkend='var-CFLAGS'>CFLAGS</link>.
</para>
</glossdef>
</glossentry>
<glossentry id='var-TARGET_FPU'><glossterm>TARGET_FPU</glossterm>
<glossdef>
<para>Method of handling FPU code. For FPU-less targets
(most ARM CPUs) it has to be set to "soft", otherwise
kernel FPU emulation is used, resulting in a
performance penalty.</para>
</glossdef>
</glossentry>
<glossentry id='var-TARGET_OS'><glossterm>TARGET_OS</glossterm>
<glossdef>
<para>Type of target operating system. Can be "linux"
for glibc-based systems or "linux-uclibc" for uClibc. For
ARM/EABI targets the values "linux-gnueabi" and
"linux-uclibc-gnueabi" are also possible.</para>
</glossdef>
</glossentry>
<glossentry id='var-TERMCMD'><glossterm>TERMCMD</glossterm>
<glossdef>
<para>
This command is used by BitBake to launch a terminal window with a
shell. The shell is unspecified so the user's default shell is used.
By default it is set to <command>gnome-terminal</command> but it can
be any X11 terminal application or a terminal multiplexer such as screen.
</para>
</glossdef>
</glossentry>
<glossentry id='var-TERMCMDRUN'><glossterm>TERMCMDRUN</glossterm>
<glossdef>
<para>
This command is similar to <glossterm><link
linkend='var-TERMCMD'>TERMCMD</link></glossterm>, however instead of the user's shell it runs the commands specified by the <glossterm><link
linkend='var-SHELLCMDS'>SHELLCMDS</link></glossterm> variable.
</para>
</glossdef>
</glossentry>
</glossdiv>
<!-- <glossdiv id='var-glossary-u'><title>U</title>-->
<!-- </glossdiv>-->
<!-- <glossdiv id='var-glossary-v'><title>V</title>-->
<!-- </glossdiv>-->
<glossdiv id='var-glossary-w'><title>W</title>
<glossentry id='var-WORKDIR'><glossterm>WORKDIR</glossterm>
<glossdef>
<para>Path to the directory in tmp/work/ where the
package is built.</para>
</glossdef>
</glossentry>
</glossdiv>
<!-- <glossdiv id='var-glossary-x'><title>X</title>-->
<!-- </glossdiv>-->
<!-- <glossdiv id='var-glossary-y'><title>Y</title>-->
<!-- </glossdiv>-->
<!-- <glossdiv id='var-glossary-z'><title>Z</title>-->
<!-- </glossdiv>-->
</glossary>
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->
@@ -1,204 +0,0 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<appendix id='ref-varlocality'>
<title>Reference: Variable Locality (Distro, Machine, Recipe etc.)</title>
<para>
Whilst most variables can be used in almost any context (.conf, .bbclass,
.inc or .bb file), variables are often associated with a particular
locality/context. This section describes some common associations.
</para>
<section id='ref-varlocality-config-distro'>
<title>Distro Configuration</title>
<itemizedlist>
<listitem>
<para><glossterm linkend='var-DISTRO'><link linkend='var-DISTRO'>DISTRO</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-DISTRO_NAME'><link linkend='var-DISTRO_NAME'>DISTRO_NAME</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-DISTRO_VERSION'><link linkend='var-DISTRO_VERSION'>DISTRO_VERSION</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-MAINTAINER'><link linkend='var-MAINTAINER'>MAINTAINER</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-PACKAGE_CLASSES'><link linkend='var-PACKAGE_CLASSES'>PACKAGE_CLASSES</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-TARGET_OS'><link linkend='var-TARGET_OS'>TARGET_OS</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-TARGET_FPU'><link linkend='var-TARGET_FPU'>TARGET_FPU</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-POKYMODE'><link linkend='var-POKYMODE'>POKYMODE</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-POKYLIBC'><link linkend='var-POKYLIBC'>POKYLIBC</link></glossterm></para>
</listitem>
</itemizedlist>
</section>
<section id='ref-varlocality-config-machine'>
<title>Machine Configuration</title>
<itemizedlist>
<listitem>
<para><glossterm linkend='var-TARGET_ARCH'><link linkend='var-TARGET_ARCH'>TARGET_ARCH</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-SERIAL_CONSOLE'><link linkend='var-SERIAL_CONSOLE'>SERIAL_CONSOLE</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-PACKAGE_EXTRA_ARCHS'><link linkend='var-PACKAGE_EXTRA_ARCHS'>PACKAGE_EXTRA_ARCHS</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-IMAGE_FSTYPES'><link linkend='var-IMAGE_FSTYPES'>IMAGE_FSTYPES</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-ROOT_FLASH_SIZE'><link linkend='var-ROOT_FLASH_SIZE'>ROOT_FLASH_SIZE</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-MACHINE_FEATURES'><link linkend='var-MACHINE_FEATURES'>MACHINE_FEATURES</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-MACHINE_EXTRA_RDEPENDS'><link linkend='var-MACHINE_EXTRA_RDEPENDS'>MACHINE_EXTRA_RDEPENDS</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-MACHINE_EXTRA_RRECOMMENDS'><link linkend='var-MACHINE_EXTRA_RRECOMMENDS'>MACHINE_EXTRA_RRECOMMENDS</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-MACHINE_ESSENTIAL_RDEPENDS'><link linkend='var-MACHINE_ESSENTIAL_RDEPENDS'>MACHINE_ESSENTIAL_RDEPENDS</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-MACHINE_ESSENTIAL_RRECOMMENDS'><link linkend='var-MACHINE_ESSENTIAL_RRECOMMENDS'>MACHINE_ESSENTIAL_RRECOMMENDS</link></glossterm></para>
</listitem>
</itemizedlist>
</section>
<section id='ref-varlocality-config-local'>
<title>Local Configuration (local.conf)</title>
<itemizedlist>
<listitem>
<para><glossterm linkend='var-DISTRO'><link linkend='var-DISTRO'>DISTRO</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-MACHINE'><link linkend='var-MACHINE'>MACHINE</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-DL_DIR'><link linkend='var-DL_DIR'>DL_DIR</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-BBFILES'><link linkend='var-BBFILES'>BBFILES</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-IMAGE_FEATURES'><link linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-PACKAGE_CLASSES'><link linkend='var-PACKAGE_CLASSES'>PACKAGE_CLASSES</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-BB_NUMBER_THREADS'><link linkend='var-BB_NUMBER_THREADS'>BB_NUMBER_THREADS</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-BBINCLUDELOGS'><link linkend='var-BBINCLUDELOGS'>BBINCLUDELOGS</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-CVS_TARBALL_STASH'><link linkend='var-CVS_TARBALL_STASH'>CVS_TARBALL_STASH</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm linkend='var-ENABLE_BINARY_LOCALE_GENERATION'><link linkend='var-ENABLE_BINARY_LOCALE_GENERATION'>ENABLE_BINARY_LOCALE_GENERATION</link></glossterm></para>
</listitem>
</itemizedlist>
</section>
<section id='ref-varlocality-recipe-required'>
<title>Recipe Variables - Required</title>
<itemizedlist>
<listitem>
<para><glossterm><link linkend='var-DESCRIPTION'>DESCRIPTION</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm><link linkend='var-LICENSE'>LICENSE</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm><link linkend='var-SECTION'>SECTION</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm><link linkend='var-HOMEPAGE'>HOMEPAGE</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm><link linkend='var-AUTHOR'>AUTHOR</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm><link linkend='var-SRC_URI'>SRC_URI</link></glossterm></para>
</listitem>
</itemizedlist>
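<para>
Taken together, the top of a new recipe typically sets all of the above.
The following is a purely hypothetical fragment, shown only for
illustration:
</para>
<literallayout class='monospaced'>
DESCRIPTION = "An example application"
HOMEPAGE = "http://www.example.com/"
AUTHOR = "A. N. Author &lt;author@example.com&gt;"
SECTION = "console/utils"
LICENSE = "GPLv2"
SRC_URI = "http://www.example.com/releases/example-${PV}.tar.gz"
</literallayout>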
</section>
<section id='ref-varlocality-recipe-dependencies'>
<title>Recipe Variables - Dependencies</title>
<itemizedlist>
<listitem>
<para><glossterm><link linkend='var-DEPENDS'>DEPENDS</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm><link linkend='var-RDEPENDS'>RDEPENDS</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm><link linkend='var-RRECOMMENDS'>RRECOMMENDS</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm><link linkend='var-RCONFLICTS'>RCONFLICTS</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm><link linkend='var-RREPLACES'>RREPLACES</link></glossterm></para>
</listitem>
</itemizedlist>
</section>
<section id='ref-varlocality-recipe-paths'>
<title>Recipe Variables - Paths</title>
<itemizedlist>
<listitem>
<para><glossterm><link linkend='var-WORKDIR'>WORKDIR</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm><link linkend='var-S'>S</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm><link linkend='var-FILES'>FILES</link></glossterm></para>
</listitem>
</itemizedlist>
</section>
<section id='ref-varlocality-recipe-build'>
<title>Recipe Variables - Extra Build Information</title>
<itemizedlist>
<listitem>
<para><glossterm><link linkend='var-EXTRA_OECONF'>EXTRA_OECONF</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm><link linkend='var-EXTRA_OEMAKE'>EXTRA_OEMAKE</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm><link linkend='var-PACKAGES'>PACKAGES</link></glossterm></para>
</listitem>
<listitem>
<para><glossterm><link linkend='var-DEFAULT_PREFERENCE'>DEFAULT_PREFERENCE</link></glossterm></para>
</listitem>
</itemizedlist>
</section>
</appendix>
<!--
vim: expandtab tw=80 ts=4 spell spelllang=en_gb
-->
@@ -1,92 +0,0 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<appendix id='resources'>
<title>Contributing to Poky</title>
<section id='resources-intro'>
<title>Introduction</title>
<para>
We're happy for people to experiment with Poky and there are a number of places to
find help if you run into difficulties or find bugs. To find out how to download
source code see the <link linkend='intro-getit'>Obtaining Poky</link> section of
the Introduction.
</para>
</section>
<section id='resources-bugtracker'>
<title>Bugtracker</title>
<para>
Problems with Poky should be reported in the
<ulink url='http://bugzilla.o-hand.com/'>bug tracker</ulink>.
</para>
</section>
<section id='resources-mailinglist'>
<title>Mailing list</title>
<para>
To subscribe to the mailing list send mail to:
</para>
<para>
<literallayout class='monospaced'>
poky+subscribe &lt;at&gt; openedhand &lt;dot&gt; com
</literallayout>
</para>
<para>
Then follow the simple instructions in the subsequent reply. Archives are
available <ulink
url="http://lists.o-hand.com/poky/">here</ulink>.
</para>
</section>
<section id='resources-irc'>
<title>IRC</title>
<para>
Join #poky on freenode.
</para>
</section>
<section id='resources-links'>
<title>Links</title>
<itemizedlist>
<listitem><para>
<ulink url='http://pokylinux.org'>The Poky website</ulink>
</para></listitem>
<listitem><para>
<ulink url='http://www.openedhand.com/'>OpenedHand</ulink> - The
company behind Poky.
</para></listitem>
<listitem><para>
<ulink url='http://www.openembedded.org/'>OpenEmbedded</ulink>
- The upstream generic embedded distribution Poky derives
from (and contributes to).
</para></listitem>
<listitem><para>
<ulink url='http://developer.berlios.de/projects/bitbake/'>BitBake</ulink>
- The tool used to process Poky metadata.
</para></listitem>
<listitem><para>
<ulink url='http://bitbake.berlios.de/manual/'>BitBake User
Manual</ulink>
</para></listitem>
<listitem><para>
<ulink url='http://pimlico-project.org/'>Pimlico</ulink> - A
suite of lightweight Personal Information Management (PIM)
applications designed primarily for handheld and mobile
devices.
</para></listitem>
<listitem><para>
<ulink url='http://fabrice.bellard.free.fr/qemu/'>QEMU</ulink>
- An open source machine emulator and virtualizer.
</para></listitem>
</itemizedlist>
</section>
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->

Binary file not shown (a 94 KiB image was removed).

Some files were not shown because too many files have changed in this diff.