Compare commits


29 Commits

Author SHA1 Message Date
Joshua Lock
afd0958d27 Green 3.3.1 Release
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-07-08 12:14:07 +01:00
Richard Purdie
2b9dbe57a4 package_*.bbclass: Only set pkg in overrides. These are the only values we're interested in expanding, and this makes sure we obtain the expected data
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-07-07 12:56:25 +01:00
Richard Purdie
d538d8e171 Revert "classes/package_ipk|_deb|_rpm.bbclass: Fix setting of OVERRIDES when packaging"
This reverts commit 3abe7a0624 which was incorrect
in some assumptions about OVERRIDE handling order.
2010-07-07 12:24:49 +01:00
Joshua Lock
d42f0b9153 classes/package_ipk|_deb|_rpm.bbclass: Fix setting of OVERRIDES when packaging
The OVERRIDES variable was being set incorrectly, with the end result that the
runtime dependencies of the package were not being encoded in its package metadata.

This broke opkg-native in meta-toolchain.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-07-03 13:35:22 +01:00
Richard Purdie
f5003084d5 encodings: Specify encodingsdir as the default was being detected incorrectly resulting in an empty package
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-06-30 15:08:42 +01:00
Enric Balletbo i Serra
763b11d0fd busybox: fix unexpected "done" in /etc/udhcpc.d/50default script.
Running udhcpc results in

 udhcpc (v1.15.3) started
 /etc/udhcpc.d/50default: line 37: syntax error: unexpected "done" (expecting "fi")
 run-parts: /etc/udhcpc.d/50default exited with code 2

Signed-off-by: Enric Balletbo i Serra <eballetbo@gmail.com>
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-06-29 13:36:16 +01:00
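A minimal sketch of the class of error (illustrative only - the actual contents of 50default differ): a shell "if" block accidentally closed with "done", which the parser rejects.

 # Broken shape - "if" terminated with "done":
 #   if [ -n "$router" ] ; then
 #       ...
 #   done        <- parser expects "fi" here
 # Fixed shape:
 if [ -n "$router" ] ; then
     echo "router is set"
 fi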
Kevin Tian
7d292468e7 qemux86/xorg.conf: no DefaultDepth for VMware SVGA driver
The VMware SVGA driver needs the guest depth to match the host depth. Put
another way, the depth seen by the guest is the value read from the host; the
guest is not allowed to change the virtual depth to any other value. With a
DefaultDepth option in xorg.conf, the vmware driver refuses to work, with the
suggestion "Please do not specify a depth on the command line or via the
config file".

Signed-off-by: Kevin Tian <kevin.tian@intel.com>
2010-06-29 13:36:07 +01:00
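A hedged sketch of the sort of xorg.conf stanza involved (the identifiers are assumptions, not the actual qemux86 file):

 Section "Screen"
     Identifier "Default Screen"
     Device     "VMware SVGA"
     # DefaultDepth 16   <- removed: the guest must inherit whatever
     #                      depth the host reports
 EndSection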
Kevin Tian
f51bb81248 qemu: fix VMware VGA depth calculation error
VMware SVGA presents to the guest with the depth of the host surface it renders
to, and refuses to work if the two sides are mismatched. One problem is that
the current VMware VGA code may calculate a wrong host depth, and then the
memcpy from the virtual framebuffer to the host surface may trigger a
segmentation fault. For example, when launching Qemu in a VNC connection,
VMware SVGA takes the depth to be '32' while the actual depth of the VNC
session is '16'. The fault also happens when the host depth is not 32 bit.

Qemu commit <4b5db3749c5fdba93e1ac0e8748c9a9a1064319f> attempts to fix a
similar issue by changing from a hard-coded 24-bit depth to querying the
surface allocator (e.g. sdl). However it doesn't really work, because the
query is invoked earlier than the point where sdl is initialized. At query
time, qemu uses a default surface allocator which, again, provides another
hard-coded depth value - 32 bit. So it happens to make VMware SVGA work on
some hosts, but it still fails on others.

To solve this issue, this commit introduces a postcall interface to display
surface, which is walked after surface allocators are actually initialized.
At that point it's then safe to query host depth and present to the guest.

Signed-off-by: Kevin Tian <kevin.tian@intel.com>
2010-06-29 13:35:59 +01:00
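A hypothetical way to provoke the mismatch described above (the commands and flags are assumptions, not taken from the commit):

 # Start a 16bpp VNC session, then run qemu's vmware adaptor inside it;
 # before this fix the guest-side depth could be wrongly computed as 32bpp.
 vncserver :1 -depth 16
 DISPLAY=:1 qemu -vga vmware poky-image.ext3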
Richard Purdie
d3b8687ea6 gcc: Add patch to allow disabling of libstdc++ linkage, fixing gcc-runtime, whose configure tests were broken by the linker failures and wrongly concluded that maths primitives were not in libm
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-06-29 13:35:50 +01:00
Richard Purdie
93f7d74492 qemu: Enable ppc system emulation and fix ppc build
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-06-29 13:35:30 +01:00
Joshua Lock
e2c567f51e pkgconfig: add patch to disable legacy scripts such as glib-config
On an F13 host with glib-config installed, pkgconfig-native can get into a
horrible state with recursive calls between pkg-config and glib-config.
The patch adds a configure-time option to disable legacy script support in
pkgconfig and makes use of the option for Poky.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-06-25 14:58:43 +01:00
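A sketch of the failure mode and the shape of the fix (the option name below is a guess, not necessarily the flag the patch adds):

 # The loop being broken: with legacy-script support enabled,
 #   pkg-config glib  -> execs glib-config
 #   glib-config      -> execs pkg-config  ...and so on, recursively.
 # The patch adds a configure-time switch, used along these lines:
 ./configure --disable-legacy-scripts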
Joshua Lock
7646f333e1 cross-canadian: ensure package dependencies are generated correctly
cross-canadian packages need to look for their SOLIBS in the nativesdk
sysroot so that dependencies are correctly picked up and meta-toolchains are
correctly built.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-06-25 14:58:34 +01:00
Joshua Lock
b318d8f87b gdb-cross-canadian: build with the host-triplet prefix
Our cross-canadian tools are built with the host-triplet prefix; gdb should do
the same.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-06-23 18:44:59 +01:00
Scott Garman
0128976cb0 kernel.bbclass: Remove additional binaries from staging
* Remove additional binaries known to cause "strip command failed"
  errors during do_package on cross platforms.

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
2010-06-23 18:44:59 +01:00
Joshua Lock
5a4342cb2e qemu: fix sloppy merge
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-06-23 18:02:45 +01:00
Jeff Dike
6e4579b32b qemu: Work around the crash seen on Ubuntu.
Due to different stack contents in sdl_display_init on Ubuntu vs other distros,
an uninitialized structure is causing a crash.  Zeroing the structure makes the
behavior uniform across distros, avoiding the Ubuntu crash, but doesn't fix the
underlying bugs, notably:
* the return value of SDL_GetWMInfo needs to be checked, as it's currently
failing silently;
* the underlying reason for the failure of SDL_GetWMInfo needs to be found -
there is a GetWMInfo method in the internal SDL structure which is NULL, and
the reason for this needs to be found.

Signed-off-by: Jeff Dike <jdike@linux.intel.com>
2010-06-23 17:32:47 +01:00
Dexuan Cui
bd11abb22c linux-omap: fix build failure with gcc-4.3.3
Pull time.h patch from upstream Linux kernel
(commit 38332cb98772f5ea757e6486bed7ed0381cb5f98)

The patch fixes the following build failure:
  LD      .tmp_vmlinux1
kernel/built-in.o: In function `timespec_add_ns':
    undefined reference to `__aeabi_uldivmod'
kernel/built-in.o: In function `do_gettimeofday':
    undefined reference to `__aeabi_uldivmod'
    undefined reference to `__aeabi_uldivmod'
kernel/built-in.o: In function `timespec_add_ns':
    undefined reference to `__aeabi_uldivmod'
    undefined reference to `__aeabi_uldivmod'
kernel/built-in.o: more undefined references to `__aeabi_uldivmod'

Signed-off-by: Dexuan Cui <dexuan.cui@intel.com>
2010-06-23 13:24:52 +01:00
Joshua Lock
e1798f5e39 linux-libc-headers: delete include/scsi/scsi.h, it's not for userspace
include/scsi/scsi.h is not userland parsable and research indicates this is
because the header should not be exposed to userspace. Therefore remove it
in the install.

Research done by Tom Rini <tom_rini@mentor.com> in OE commit
91d3d92a626da89dfe13d63e68a90dbafdbaef1d

This has been the case since kernel 2.6.31

Bump glibc and uclibc PRs so that users have a sane <scsi/scsi.h>.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-06-18 10:34:06 +01:00
Dike, Jeffrey G
b43a425119 linux-libc-headers: Remove ioctls for deleted driver
2.6.33 removed the Hayes ESP driver.  The presence of these ioctls
makes setserial believe that ESP support should be built in, breaking its
build.

Signed-off-by: Jeff Dike <jdike@linux.intel.com>
2010-06-17 15:08:53 +01:00
Joshua Lock
01a5883616 qemu: Fix linking of the native package on Fedora 13
Fedora 13 switched the default behaviour of the linker to no longer
indirectly link to required libraries (i.e. dependencies of a library
already linked to). Therefore we need to explicitly pass the depended-on
libraries to the linker for building to work on Fedora 13.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-06-17 15:08:49 +01:00
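A hypothetical illustration of the behaviour change (library and file names are made up):

 # libfoo.so has a DT_NEEDED entry for libbar.so; app.c also calls libbar
 # directly, but only libfoo is named on the link line.
 gcc app.c -lfoo          # pre-F13 ld: libbar symbols resolved transitively
 gcc app.c -lfoo -lbar    # Fedora 13 ld: every library used must be named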
Joshua Lock
ed145bbdfe handbook: Fix stylesheet
Some sizes were defined without units (px, in our case), causing the display
of the handbook's header to be broken.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-06-14 14:33:51 +01:00
Joshua Lock
dc123ba831 sanity.bbclass: Fix test for i686 SDKMACHINE
The 'is' keyword tests for object identity, returning True if the variables are
both referencing the same object. Changed the test to use the equality
operator, which compares the values of the objects.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-06-14 12:51:34 +01:00
Joshua Lock
887a7768cf handbook: Fix typo in last commit
Managed to mangle the command...

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-06-14 12:12:09 +01:00
Joshua Lock
aba90fdf2f handbook: fix extraction command
We ship bzipped tarballs now, so we need to pass j to tar, not z.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-06-14 12:08:12 +01:00
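The commands in question (a minimal sketch - the handbook's exact wording is assumed):

 tar xjf poky-handbook.tgz    # j selects bzip2: correct for what we now ship
 tar xzf poky-handbook.tgz    # z selects gzip: what the handbook wrongly said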
Joshua Lock
27cff3f045 handbook: Note that this is the documentation for Green
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-06-14 12:03:21 +01:00
Joshua Lock
506386d148 handbook: Fix references to the stable release
The handbook was still talking about the purple release; we're on green now.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-06-14 11:45:30 +01:00
Joshua Lock
f5d24f0574 handbook: Fix generation of HTML handbook
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-06-14 11:44:56 +01:00
Joshua Lock
d36f183ade Green 3.3 Release
2010-06-11 14:43:33 +01:00
Joshua Lock
6129f9fc44 packaged-staging.bbclass: fix typo in scan_cmd
It's PSTAGE_TMPDIR_STAGE, not PSTAGE_TMDPDIR_STAGE. Spotted by Chris Larson
<chris_larson@mentor.com>.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-06-11 14:43:33 +01:00
4896 changed files with 113250 additions and 125390 deletions

.gitignore

@@ -3,24 +3,33 @@
build/conf/local.conf
build/conf/bblayers.conf
build/tmp/
build/sstate-cache
build/pyshtables.py
pstage/
scripts/poky-git-proxy-socks
sources/
meta-darwin
meta-maemo
meta-extras
meta-m2
meta-prvt*
poky-autobuilder*
*.swp
*.orig
*.rej
*~
documentation/poky-ref-manual/poky-ref-manual.html
documentation/poky-ref-manual/poky-ref-manual.pdf
documentation/poky-ref-manual/poky-ref-manual.tgz
documentation/poky-ref-manual/bsp-guide.html
documentation/poky-ref-manual/bsp-guide.pdf
handbook/poky-doc-tools/Makefile
handbook/poky-doc-tools/Makefile.in
handbook/poky-doc-tools/aclocal.m4
handbook/poky-doc-tools/autom4te.cache/
handbook/poky-doc-tools/common/Makefile
handbook/poky-doc-tools/common/Makefile.in
handbook/poky-doc-tools/common/fop-config.xml
handbook/poky-doc-tools/config.log
handbook/poky-doc-tools/config.status
handbook/poky-doc-tools/configure
handbook/poky-doc-tools/install-sh
handbook/poky-doc-tools/missing
handbook/poky-doc-tools/poky-docbook-to-pdf
handbook/poky-handbook.html
handbook/poky-handbook.pdf
handbook/poky-handbook.tgz
handbook/bsp-guide.html
handbook/bsp-guide.pdf

CHANGELOG

@@ -1,220 +0,0 @@
commit fd7a07b3a2153826bedda2ef76b9a33ab2791680
Author: Scott Garman <scott.a.garman@intel.com>
Date: Fri Jan 21 14:15:05 2011 -0800
poky-extract-sdk: allow relative paths for extract-dir
pseudo needs a full path to its pid file, so convert
relative extract-dir paths to full ones.
The symptom of this bug is receiving the following error:
pseudo: Couldn't open relative/path/to/var/pseudo/pseudo.pid: No such file or directory
This fixes [BUGID #670]
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
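One way to absolutize the argument in sh (an illustrative sketch; the variable name is hypothetical, not the script's):

 case "$extract_dir" in
     /*) ;;                                  # already absolute
     *)  extract_dir="$PWD/$extract_dir" ;;  # prefix the current directory
 esac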
commit 01bc47f4d47df3276b4b6c2583bcddd834fd5050
Author: Beth Flanagan <elizabeth.flanagan@intel.com>
Date: Wed Nov 3 17:20:00 2010 -0700
quilt: Fixed configure test for patch --version.
OpenSuSE 11.3 uses GNU patch 2.6.1.81-5b68 which breaks quilt's
configure test for patch version.
Signed-off-by: Beth Flanagan <elizabeth.flanagan@intel.com>
commit 12a3d41a24db79ae6c0491defffcf4f4753001cf
Author: Richard Purdie <richard.purdie@linuxfoundation.org>
Date: Fri Jan 14 11:57:18 2011 +0000
image.bbclass: Use the dedicated BB_WORKERCONTEXT, not bitbake internals to detect context
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
commit ce4f8356796bc797d9156ed252a4ed638a2150d5
Author: Richard Purdie <rpurdie@linux.intel.com>
Date: Wed Dec 15 23:22:16 2010 +0000
scripts/poky-qemu: Improve tmp layout assumption
If someone has changed TMPDIR in local.conf to a non-standard location, the
poky-qemu script currently doesn't handle this and assumes if BUILDDIR is set,
$BUILDDIR/tmp will exist.
It's simple to check whether this exists and, if not, to ask bitbake where the
directory is, so this patch changes the code to do that.
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
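A sketch of that fallback (assumed shape - the real script may differ):

 if [ ! -d "$BUILDDIR/tmp" ]; then
     # ask bitbake for the configured TMPDIR instead of assuming the layout
     TMPDIR=$(bitbake -e | sed -n 's/^TMPDIR="\(.*\)"$/\1/p')
 fi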
commit 54f08d23cd7d0de6aec31f4764389ff4dab2990d
Author: Scott Garman <scott.a.garman@intel.com>
Date: Tue Dec 7 20:59:06 2010 -0800
Make poky-qemu and related scripts work with arbitrary SDK locations
* No longer assume SDK toolchains are installed in /opt/poky
* [BUGFIX #568] where specifying paths to both the kernel and fs
image caused an error due to POKY_NATIVE_SYSROOT never being
set, triggering failure of poky-qemu-ifup/ifdown
* Cosmetic improvements to usage() functions by using basename
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
commit 8a3d0f375ce416ada1a5443e4a8e467504001beb
Author: Scott Garman <scott.a.garman@intel.com>
Date: Fri Nov 12 16:31:13 2010 -0800
poky-qemu: Fix issues when running Yocto 0.9 release images
This fixes two bugs with poky-qemu when it is run from a
standalone meta-toolchain setup.
[BUGFIX #535] and [BUGFIX #536]
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
commit 0c2003f13434c77f901a976523478d37d8aadb48
Author: Paul Eggleton <paul.eggleton@linux.intel.com>
Date: Thu Dec 16 10:29:50 2010 +0000
openssl: restore -Wall flag
The -Wall flag was unintentionally removed from the end of the CFLAG var in
089612794d4d8d9c79bd2a4365d6df78371f7f40 by me. This patch puts it back in.
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
commit 6e71b0a012f0676c06b7b4788d932f320fca0b74
Author: Joshua Lock <josh@linux.intel.com>
Date: Wed Dec 15 14:31:21 2010 +0000
web-webkit: fix for make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 4b5c1c053000d297956f08949ffde7454ee33c5d
Author: Joshua Lock <josh@linux.intel.com>
Date: Wed Dec 15 13:42:15 2010 +0000
contacts: fix for make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 171e709ae6f4b1a7640bf393f57aa787648cdc0f
Author: Joshua Lock <josh@linux.intel.com>
Date: Wed Dec 15 12:58:09 2010 +0000
dates: fix for Make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit a8b8557e4cb34b594bb620eb276bcaf7a8e0a8e3
Author: Joshua Lock <josh@linux.intel.com>
Date: Wed Dec 15 12:27:52 2010 +0000
owl-video-widget: fix Makefile for super strict make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 399e6b8008cb0b8cc0b75efd48dd821a6cf5a8a8
Author: Joshua Lock <josh@linux.intel.com>
Date: Tue Dec 14 18:29:43 2010 +0000
libowl-av: fix for Make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 290280b332570ec73301f76765b1c5f2de20a9fd
Author: Joshua Lock <josh@linux.intel.com>
Date: Tue Dec 14 17:56:53 2010 +0000
gst-plugins: fix for make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 9e11fbf9048b17526ca8160d82b69f386595c9a7
Author: Joshua Lock <josh@linux.intel.com>
Date: Tue Dec 14 15:39:42 2010 +0000
gstreamer: fix to comply with make 3.82's stricter parser
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 0f8244faba5c36c0580081c112ea27ce683af99b
Author: Joshua Lock <josh@linux.intel.com>
Date: Tue Dec 14 12:49:13 2010 +0000
linux-libc-headers: fix for Make 3.82
Fix the kernel Makefile for use with Make 3.82 by splitting mixed implicit and
normal rules into separate rules.
Signed-off-by: Joshua Lock <josh@linux.intel.com>
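A schematic of the class of fix (not the actual kernel Makefile content):

 # make 3.82 rejects a rule whose target list mixes pattern and normal
 # targets, e.g. "config %config: scripts_basic FORCE".
 # Split it into two rules sharing the prerequisites and recipe
 # (recipe lines must be tab-indented in a real Makefile):
 config: scripts_basic FORCE
         $(MAKE) $(build)=scripts/kconfig $@
 %config: scripts_basic FORCE
         $(MAKE) $(build)=scripts/kconfig $@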
commit 0cc23a86562d0ce1e236ceb4a56a8f19d400192f
Author: Joshua Lock <josh@linux.intel.com>
Date: Tue Dec 14 12:21:33 2010 +0000
busybox: additional fixes for Make 3.82
There were still some mixed implicit and normal rules in the Busybox Makefile;
update our existing make-382.patch to split these into separate rules.
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 30c39cc97c384134661300e107d7a81f257f8034
Author: Joshua Lock <josh@linux.intel.com>
Date: Fri Nov 12 16:36:54 2010 +0000
procps: fix for build against make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 261ca885962ba9606bcad4c5415927a79fdd7b96
Author: Joshua Lock <josh@linux.intel.com>
Date: Tue Nov 9 12:18:14 2010 +0000
busybox: import upstream patch for make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 72ddd5c20246a5d5b1752b58a61ef75b4c39cc40
Author: Joshua Lock <josh@linux.intel.com>
Date: Tue Nov 9 12:14:28 2010 +0000
eglibc: fix build of eglibc-initial for make 3.82
Make 3.82, as shipped with Fedora 14, fixes some holes in the parser, which in
turn breaks the behaviour of some Makefiles, most notably eglibc's.
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 6026999e81042a7f6560f9bce04390865509b235
Author: Paul Eggleton <paul.eggleton@intel.com>
Date: Fri Nov 19 15:03:32 2010 +0000
qemu: fix failure to find zlib header files during configure
Corrects problems during configure of qemu-native due to the BUILD_CFLAGS
not being included when attempting to compile the test program for zlib
within the configure script.
Signed-off-by: Paul Eggleton <paul.eggleton@intel.com>
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
commit c5ab4d56f97a0e45b124d40c9f536541be04c201
Author: Paul Eggleton <paul.eggleton@intel.com>
Date: Wed Nov 17 11:37:47 2010 +0000
openssl-native: disable execstack flag to prevent problems with SELinux
The execstack flag gets set on libcrypto.so by default, which causes SELinux
to prevent the library from being loaded on systems using SELinux, including
Fedora. This patch disables the execstack flag. (Note: Red Hat does this in
their openssl packaging.)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
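Conceptually, the result can be checked with the execstack tool from prelink (shown for illustration; the patch itself works at build time):

 execstack -q libcrypto.so    # 'X' in the output means the exec-stack bit is set
 execstack -c libcrypto.so    # clear the bit so SELinux allows the load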

NOTES

@@ -1,70 +0,0 @@
Name: Laverne
Version: 4.0.1
Built from Revision: fd7a07b3a2153826bedda2ef76b9a33ab2791680
Build Date: Jan 26 2011
Builder: autobuilder.pokylinux.org
The Laverne 4.0.1 Release ensures you can use Poky Laverne on systems running
Fedora 14 and openSUSE 11.3, fixes issues with the poky-qemu script, and fixes
several other bugs. For the full changelog for Laverne 4.0.1 please read
CHANGELOG.
Following are descriptions of fixes and known issues.
Fixes
------------------------
* Make 3.82, as shipped with Fedora 14, included parser bug fixes that
resulted in a much stricter parser. As a result, the Makefiles for many of
the software versions shipped with Laverne could not be parsed. The Makefiles
in the following recipes were fixed:
o eglibc
o busybox
o procps
o linux-libc-headers
o gstreamer
o gst-plugins
o libowl-av
o owl-video-widget
o dates
o contacts
o web-webkit
* The ability to build openssl-native on a system that has SELinux enabled
was restored. (We disabled the execstack flag at compile time.)
* A host contamination issue caused by a failure in QEMU to find zlib headers
during configure was fixed. The issue was causing qemu-native to use the
system zlib when present; when the system zlib was not present, the build
would fail.
* Stability and usability enhancements, which included handling relative
filesystem paths, were made to poky-qemu scripts.
* The run-time remapping of package names when adding extra packages to an
image via the IMAGE_INSTALL mechanism was fixed.
* The configure test in quilt for GNU patch was fixed so that it correctly
detects the version.
Known Issues
------------------------
* The mpc8315e-rdb and routerstationpro machines were untested and not a
part of the official Laverne 4.0 release. These machines are still unusable
for this Laverne 4.0.1 release.
o mpc8315e-rdb will not boot due to a kernel/uboot issue Bug #685
o routerstationpro will not boot (by default) due to incorrect boot
parameters Bug #681
o routerstationpro debug messages related to the ethernet driver print
during boot Bug #679
* Shutdown/poweroff on qemuarm does not cleanly halt the virtual machine.
To work around this issue, use the reboot command. Using this command avoids
a "power-cycle" and instead cleanly shuts down the VM. Bug #684
* Two "Connection Manager" icons appear in the Sato UI. This duplication has
been fixed in master. Note that you can use either icon to launch the
connectivity UI. Bug #683
* The on-screen keyboard incorrectly launches on the qemumips machine. This
issue is due to a misconfigured formfactor file. Bug #682


@@ -138,7 +138,7 @@ Changes in Bitbake 1.9.x:
directory != the cache dir.
- Add md5 and sha256 checksum generation functions to utils.py
- Correctly handle '-' characters in class names (#2958)
- Make sure expandKeys has been called on the data dictionary before running tasks
- Make sure expandKeys has been called on the data dictonary before running tasks
- Correctly add a task override in the form task-TASKNAME.
- Revert the '-' character fix in class names since it breaks things
- When a regexp fails to compile for PACKAGES_DYNAMIC, print a more useful error (#4444)


@@ -22,160 +22,143 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])),
'lib'))
import optparse
import warnings
from traceback import format_exception
import sys, os, getopt, re, time, optparse, xmlrpclib
sys.path.insert(0,os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb
import bb.msg
from bb import cooker
from bb import ui
from bb import server
from bb.server import none
#from bb.server import xmlrpc
__version__ = "1.11.0"
__version__ = "1.9.0"
if sys.hexversion < 0x020500F0:
print "Sorry, python 2.5 or later is required for this version of bitbake"
sys.exit(1)
#============================================================================#
# BBOptions
#============================================================================#
class BBConfiguration(object):
class BBConfiguration( object ):
"""
Manages build options and configurations for one run
"""
def __init__(self, options):
def __init__( self, options ):
for key, val in options.__dict__.items():
setattr(self, key, val)
self.pkgs_to_build = []
setattr( self, key, val )
def print_exception(exc, value, tb):
"""Send exception information through bb.msg"""
bb.fatal("".join(format_exception(exc, value, tb, limit=8)))
"""
Print the exception to stderr, only showing the traceback if bitbake
debugging is enabled.
"""
if not bb.msg.debug_level['default']:
tb = None
sys.excepthook = print_exception
sys.__excepthook__(exc, value, tb)
_warnings_showwarning = warnings.showwarning
def _showwarning(message, category, filename, lineno, file=None, line=None):
"""Display python warning messages using bb.msg"""
if file is not None:
if _warnings_showwarning is not None:
_warnings_showwarning(message, category, filename, lineno, file, line)
else:
s = warnings.formatwarning(message, category, filename, lineno)
s = s.split("\n")[0]
bb.msg.warn(None, s)
warnings.showwarning = _showwarning
warnings.simplefilter("ignore", DeprecationWarning)
#============================================================================#
# main
#============================================================================#
def main():
return_value = 1
return_value = 0
pythonver = sys.version_info
if pythonver[0] < 2 or (pythonver[0] == 2 and pythonver[1] < 5):
print "Sorry, bitbake needs python 2.5 or later."
sys.exit(1)
parser = optparse.OptionParser(
version = "BitBake Build Tool Core version %s, %%prog version %s" % (bb.__version__, __version__),
usage = """%prog [options] [package ...]
parser = optparse.OptionParser( version = "BitBake Build Tool Core version %s, %%prog version %s" % ( bb.__version__, __version__ ),
usage = """%prog [options] [package ...]
Executes the specified task (default is 'build') for a given set of BitBake files.
It expects that BBFILES is defined, which is a space separated list of files to
be executed. BBFILES does support wildcards.
Default BBFILES are the .bb files in the current directory.""")
Default BBFILES are the .bb files in the current directory.""" )
parser.add_option("-b", "--buildfile", help = "execute the task against this .bb file, rather than a package from BBFILES.",
action = "store", dest = "buildfile", default = None)
parser.add_option( "-b", "--buildfile", help = "execute the task against this .bb file, rather than a package from BBFILES.",
action = "store", dest = "buildfile", default = None )
parser.add_option("-k", "--continue", help = "continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other dependencies of these targets can be processed all the same.",
action = "store_false", dest = "abort", default = True)
parser.add_option( "-k", "--continue", help = "continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other dependencies of these targets can be processed all the same.",
action = "store_false", dest = "abort", default = True )
parser.add_option("-a", "--tryaltconfigs", help = "continue with builds by trying to use alternative providers where possible.",
action = "store_true", dest = "tryaltconfigs", default = False)
parser.add_option( "-a", "--tryaltconfigs", help = "continue with builds by trying to use alternative providers where possible.",
action = "store_true", dest = "tryaltconfigs", default = False )
parser.add_option("-f", "--force", help = "force run of specified cmd, regardless of stamp status",
action = "store_true", dest = "force", default = False)
parser.add_option( "-f", "--force", help = "force run of specified cmd, regardless of stamp status",
action = "store_true", dest = "force", default = False )
parser.add_option("-c", "--cmd", help = "Specify task to execute. Note that this only executes the specified task for the providee and the packages it depends on, i.e. 'compile' does not implicitly call stage for the dependencies (IOW: use only if you know what you are doing). Depending on the base.bbclass a listtasks tasks is defined and will show available tasks",
action = "store", dest = "cmd")
parser.add_option( "-i", "--interactive", help = "drop into the interactive mode also called the BitBake shell.",
action = "store_true", dest = "interactive", default = False )
parser.add_option("-r", "--read", help = "read the specified file before bitbake.conf",
action = "append", dest = "file", default = [])
parser.add_option( "-c", "--cmd", help = "Specify task to execute. Note that this only executes the specified task for the providee and the packages it depends on, i.e. 'compile' does not implicitly call stage for the dependencies (IOW: use only if you know what you are doing). Depending on the base.bbclass a listtasks tasks is defined and will show available tasks",
action = "store", dest = "cmd" )
parser.add_option("-v", "--verbose", help = "output more chit-chat to the terminal",
action = "store_true", dest = "verbose", default = False)
parser.add_option( "-r", "--read", help = "read the specified file before bitbake.conf",
action = "append", dest = "file", default = [] )
parser.add_option("-D", "--debug", help = "Increase the debug level. You can specify this more than once.",
parser.add_option( "-v", "--verbose", help = "output more chit-chat to the terminal",
action = "store_true", dest = "verbose", default = False )
parser.add_option( "-D", "--debug", help = "Increase the debug level. You can specify this more than once.",
action = "count", dest="debug", default = 0)
parser.add_option("-n", "--dry-run", help = "don't execute, just go through the motions",
action = "store_true", dest = "dry_run", default = False)
parser.add_option( "-n", "--dry-run", help = "don't execute, just go through the motions",
action = "store_true", dest = "dry_run", default = False )
parser.add_option("-S", "--dump-signatures", help = "don't execute, just dump out the signature construction information",
action = "store_true", dest = "dump_signatures", default = False)
parser.add_option( "-p", "--parse-only", help = "quit after parsing the BB files (developers only)",
action = "store_true", dest = "parse_only", default = False )
parser.add_option("-p", "--parse-only", help = "quit after parsing the BB files (developers only)",
action = "store_true", dest = "parse_only", default = False)
parser.add_option( "-d", "--disable-psyco", help = "disable using the psyco just-in-time compiler (not recommended)",
action = "store_true", dest = "disable_psyco", default = False )
parser.add_option("-d", "--disable-psyco", help = "disable using the psyco just-in-time compiler (not recommended)",
action = "store_true", dest = "disable_psyco", default = False)
parser.add_option( "-s", "--show-versions", help = "show current and preferred versions of all packages",
action = "store_true", dest = "show_versions", default = False )
parser.add_option("-s", "--show-versions", help = "show current and preferred versions of all packages",
action = "store_true", dest = "show_versions", default = False)
parser.add_option( "-e", "--environment", help = "show the global or per-package environment (this is what used to be bbread)",
action = "store_true", dest = "show_environment", default = False )
parser.add_option("-e", "--environment", help = "show the global or per-package environment (this is what used to be bbread)",
action = "store_true", dest = "show_environment", default = False)
parser.add_option( "-g", "--graphviz", help = "emit the dependency trees of the specified packages in the dot syntax",
action = "store_true", dest = "dot_graph", default = False )
parser.add_option("-g", "--graphviz", help = "emit the dependency trees of the specified packages in the dot syntax",
action = "store_true", dest = "dot_graph", default = False)
parser.add_option( "-I", "--ignore-deps", help = """Assume these dependencies don't exist and are already provided (equivalent to ASSUME_PROVIDED). Useful to make dependency graphs more appealing""",
action = "append", dest = "extra_assume_provided", default = [] )
parser.add_option("-I", "--ignore-deps", help = """Assume these dependencies don't exist and are already provided (equivalent to ASSUME_PROVIDED). Useful to make dependency graphs more appealing""",
action = "append", dest = "extra_assume_provided", default = [])
parser.add_option( "-l", "--log-domains", help = """Show debug logging for the specified logging domains""",
action = "append", dest = "debug_domains", default = [] )
parser.add_option("-l", "--log-domains", help = """Show debug logging for the specified logging domains""",
action = "append", dest = "debug_domains", default = [])
parser.add_option( "-P", "--profile", help = "profile the command and print a report",
action = "store_true", dest = "profile", default = False )
parser.add_option("-P", "--profile", help = "profile the command and print a report",
action = "store_true", dest = "profile", default = False)
parser.add_option("-u", "--ui", help = "userinterface to use",
parser.add_option( "-u", "--ui", help = "userinterface to use",
action = "store", dest = "ui")
parser.add_option("", "--revisions-changed", help = "Set the exit code depending on whether upstream floating revisions have changed or not",
action = "store_true", dest = "revisions_changed", default = False)
parser.add_option( "", "--revisions-changed", help = "Set the exit code depending on whether upstream floating revisions have changed or not",
action = "store_true", dest = "revisions_changed", default = False )
options, args = parser.parse_args(sys.argv)
configuration = BBConfiguration(options)
configuration.pkgs_to_build = []
configuration.pkgs_to_build.extend(args[1:])
configuration.initial_path = os.environ['PATH']
#server = bb.server.xmlrpc
server = bb.server.none
# Save a logfile for cooker into the current working directory. When the
# server is daemonized this logfile will be truncated.
cooker_logfile = os.path.join(os.getcwd(), "cooker.log")
cooker_logfile = os.path.join (os.getcwd(), "cooker.log")
bb.utils.init_logger(bb.msg, configuration.verbose, configuration.debug,
configuration.debug_domains)
cooker = bb.cooker.BBCooker(configuration, server)
# Clear away any spurious environment variables. But don't wipe the
# environment totally. This is necessary to ensure the correct operation
# of the UIs (e.g. for DISPLAY, etc.)
bb.utils.clean_environment()
cooker = bb.cooker.BBCooker(configuration, server)
cooker.parseCommandLine()
serverinfo = server.BitbakeServerInfo(cooker.server)
@@ -183,6 +166,8 @@ Default BBFILES are the .bb files in the current directory.""")
server.BitBakeServerFork(serverinfo, cooker.serve, cooker_logfile)
del cooker
sys.excepthook = print_exception
# Setup a connection to the server (cooker)
serverConnection = server.BitBakeServerConnection(serverinfo)
@@ -193,24 +178,19 @@ Default BBFILES are the .bb files in the current directory.""")
ui = "knotty"
try:
# Dynamically load the UI based on the ui name. Although we
# suggest a fixed set this allows you to have flexibility in which
# ones are available.
uimodule = __import__("bb.ui", fromlist = [ui])
ui_init = getattr(uimodule, ui).init
except AttributeError:
print("FATAL: Invalid user interface '%s' specified. " % ui)
print("Valid interfaces are 'ncurses', 'depexp' or the default, 'knotty'.")
else:
try:
return_value = ui_init(serverConnection.connection, serverConnection.events)
except Exception as e:
print("FATAL: Unable to start to '%s' UI: %s" % (ui, e))
raise
# Dynamically load the UI based on the ui name. Although we
# suggest a fixed set this allows you to have flexibility in which
# ones are available.
exec "from bb.ui import " + ui
exec "return_value = " + ui + ".init(serverConnection.connection, serverConnection.events)"
except ImportError:
print "FATAL: Invalid user interface '%s' specified. " % ui
print "Valid interfaces are 'ncurses', 'depexp' or the default, 'knotty'."
except Exception, e:
print "FATAL: Unable to start to '%s' UI due to exception: %s." % (configuration.ui, e)
finally:
serverConnection.terminate()
return return_value
return return_value
if __name__ == "__main__":
ret = main()


@@ -1,12 +0,0 @@
#!/usr/bin/env python
import os
import sys
import warnings
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb.siggen
if len(sys.argv) > 2:
bb.siggen.compare_sigfiles(sys.argv[1], sys.argv[2])
else:
bb.siggen.dump_sigfile(sys.argv[1])


@@ -1,117 +0,0 @@
#!/usr/bin/env python
import os
import sys
import warnings
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
try:
import cPickle as pickle
except ImportError:
import pickle
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")
class BBConfiguration(object):
"""
Manages build options and configurations for one run
"""
def __init__(self, debug, debug_domains):
setattr(self, "data", {})
setattr(self, "file", [])
setattr(self, "cmd", None)
setattr(self, "dump_signatures", True)
setattr(self, "debug", debug)
setattr(self, "debug_domains", debug_domains)
_warnings_showwarning = warnings.showwarning
def _showwarning(message, category, filename, lineno, file=None, line=None):
"""Display python warning messages using bb.msg"""
if file is not None:
if _warnings_showwarning is not None:
_warnings_showwarning(message, category, filename, lineno, file, line)
else:
s = warnings.formatwarning(message, category, filename, lineno)
s = s.split("\n")[0]
bb.msg.warn(None, s)
warnings.showwarning = _showwarning
warnings.simplefilter("ignore", DeprecationWarning)
import bb.event
# Need to map our I/O correctly. stdout is a pipe to the server expecting
# events. We save this and then map stdout to stderr.
eventfd = os.dup(sys.stdout.fileno())
bb.event.worker_pipe = os.fdopen(eventfd, 'w', 0)
# map stdout to stderr
os.dup2(sys.stderr.fileno(), sys.stdout.fileno())
# Replace those fds with our own
#logout = data.expand("${TMPDIR}/log/stdout.%s" % os.getpid(), self.cfgData, True)
#mkdirhier(os.path.dirname(logout))
#newso = open("/tmp/stdout.%s" % os.getpid(), 'w')
#os.dup2(newso.fileno(), sys.stdout.fileno())
#os.dup2(newso.fileno(), sys.stderr.fileno())
# Don't read from stdin from the parent
si = file("/dev/null", 'r')
os.dup2(si.fileno( ), sys.stdin.fileno( ))
# We don't want to see signals to our parent, e.g. Ctrl+C
os.setpgrp()
# Save out the PID so that the event can include it the
# events
bb.event.worker_pid = os.getpid()
bb.event.useStdout = False
hashfile = sys.argv[1]
buildfile = sys.argv[2]
taskname = sys.argv[3]
import bb.cooker
p = pickle.Unpickler(file(hashfile, "rb"))
hashdata = p.load()
debug = hashdata["msg-debug"]
debug_domains = hashdata["msg-debug-domains"]
verbose = hashdata["verbose"]
bb.utils.init_logger(bb.msg, verbose, debug, debug_domains)
cooker = bb.cooker.BBCooker(BBConfiguration(debug, debug_domains), None)
cooker.parseConfiguration()
cooker.bb_cache = bb.cache.init(cooker)
cooker.status = bb.cache.CacheData()
(fn, cls) = cooker.bb_cache.virtualfn2realfn(buildfile)
buildfile = cooker.matchFile(fn)
fn = cooker.bb_cache.realfn2virtual(buildfile, cls)
cooker.buildSetVars()
# Load data into the cache for fn and parse the loaded cache data
the_data = cooker.bb_cache.loadDataFull(fn, cooker.get_file_appends(fn), cooker.configuration.data)
cooker.bb_cache.setData(fn, buildfile, the_data)
cooker.bb_cache.handle_data(fn, cooker.status)
if taskname.endswith("_setscene"):
the_data.setVarFlag(taskname, "quieterrors", "1")
bb.parse.siggen.set_taskdata(hashdata["hashes"], hashdata["deps"])
for h in hashdata["hashes"]:
bb.data.setVar("BBHASH_%s" % h, hashdata["hashes"][h], the_data)
for h in hashdata["deps"]:
bb.data.setVar("BBHASHDEPS_%s" % h, hashdata["deps"][h], the_data)
ret = 0
if sys.argv[4] != "True":
ret = bb.build.exec_task(fn, taskname, the_data)
sys.exit(ret)


@@ -48,7 +48,7 @@ class HTMLFormatter:
From pydoc... almost identical at least
"""
while pairs:
(a, b) = pairs[0]
(a,b) = pairs[0]
text = join(split(text, a), b)
pairs = pairs[1:]
return text
@@ -87,7 +87,7 @@ class HTMLFormatter:
return txt + ",".join(txts)
def groups(self, item):
def groups(self,item):
"""
Create HTML to link to related groups
"""
@@ -99,12 +99,12 @@ class HTMLFormatter:
txt = "<p><b>See also:</b><br>"
txts = []
for group in item.groups():
txts.append( """<a href="group%s.html">%s</a> """ % (group, group) )
txts.append( """<a href="group%s.html">%s</a> """ % (group,group) )
return txt + ",".join(txts)
def createKeySite(self, item):
def createKeySite(self,item):
"""
Create a site for a key. It contains the header/navigator, a heading,
the description, links to related keys and to the groups.
@@ -149,7 +149,8 @@ class HTMLFormatter:
"""
groups = ""
sorted_groups = sorted(doc.groups())
sorted_groups = doc.groups()
sorted_groups.sort()
for group in sorted_groups:
groups += """<a href="group%s.html">%s</a><br>""" % (group, group)
@@ -184,7 +185,8 @@ class HTMLFormatter:
Create Overview of all avilable keys
"""
keys = ""
sorted_keys = sorted(doc.doc_keys())
sorted_keys = doc.doc_keys()
sorted_keys.sort()
for key in sorted_keys:
keys += """<a href="key%s.html">%s</a><br>""" % (key, key)
@@ -212,7 +214,7 @@ class HTMLFormatter:
description += "<h2 Description of Grozp %s</h2>" % gr
description += _description
items.sort(lambda x, y:cmp(x.name(), y.name()))
items.sort(lambda x,y:cmp(x.name(),y.name()))
for group in items:
groups += """<a href="key%s.html">%s</a><br>""" % (group.name(), group.name())
@@ -341,7 +343,7 @@ class DocumentationItem:
def addGroup(self, group):
self._groups.append(group)
def addRelation(self, relation):
def addRelation(self,relation):
self._related.append(relation)
def sort(self):
@@ -394,7 +396,7 @@ class Documentation:
"""
return self.__groups.keys()
def group_content(self, group_name):
def group_content(self,group_name):
"""
Return a list of keys/names that are in a specefic
group or the empty list
@@ -410,7 +412,7 @@ def parse_cmdline(args):
Parse the CMD line and return the result as a n-tuple
"""
parser = optparse.OptionParser( version = "Bitbake Documentation Tool Core version %s, %%prog version %s" % (bb.__version__, __version__))
parser = optparse.OptionParser( version = "Bitbake Documentation Tool Core version %s, %%prog version %s" % (bb.__version__,__version__))
usage = """%prog [options]
Create a set of html pages (documentation) for a bitbake.conf....
@@ -426,7 +428,7 @@ Create a set of html pages (documentation) for a bitbake.conf....
parser.add_option( "-D", "--debug", help = "Increase the debug level",
action = "count", dest = "debug", default = 0 )
parser.add_option( "-v", "--verbose", help = "output more chit-char to the terminal",
parser.add_option( "-v","--verbose", help = "output more chit-char to the terminal",
action = "store_true", dest = "verbose", default = False )
options, args = parser.parse_args( sys.argv )
@@ -441,7 +443,7 @@ def main():
The main Method
"""
(config_file, output_dir) = parse_cmdline( sys.argv )
(config_file,output_dir) = parse_cmdline( sys.argv )
# right to let us load the file now
try:


@@ -215,11 +215,13 @@ addtask printdate before do_build</screen></para>
<para>BitBake allows to install event handlers. Events are triggered at certain points during operation, such as, the beginning of operation against a given .bb, the start of a given task, task failure, task success, et cetera. The intent was to make it easy to do things like email notifications on build failure.</para>
<para><screen>addhandler myclass_eventhandler
python myclass_eventhandler() {
from bb.event import getName
from bb.event import NotHandled, getName
from bb import data
print("The name of the Event is %s" % getName(e))
print("The file we run for is %s" % data.getVar('FILE', e.data, True))
print "The name of the Event is %s" % getName(e)
print "The file we run for is %s" % data.getVar('FILE', e.data, True)
return NotHandled
}
</screen></para><para>
This event handler gets called every time an event is triggered. A global variable <varname>e</varname> is defined. <varname>e</varname>.data contains an instance of bb.data. With the getName(<varname>e</varname>)
@@ -316,9 +318,9 @@ a per URI parameters separated by a <quote>;</quote> consisting of a key and a v
<section>
<title>CVS File Fetcher</title>
<para>The URN for the CVS Fetcher is <emphasis>cvs</emphasis>. This Fetcher honors the variables <varname>DL_DIR</varname>, <varname>SRCDATE</varname>, <varname>FETCHCOMMAND_cvs</varname>, <varname>UPDATECOMMAND_cvs</varname>. <varname>DL_DIR</varname> specifies where a temporary checkout is saved, <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build), <varname>FETCHCOMMAND</varname> and <varname>UPDATECOMMAND</varname> specify which executables should be used when doing the CVS checkout or update.
<para>The URN for the CVS Fetcher is <emphasis>cvs</emphasis>. This Fetcher honors the variables <varname>DL_DIR</varname>, <varname>SRCDATE</varname>, <varname>FETCHCOMMAND_cvs</varname>, <varname>UPDATECOMMAND_cvs</varname>. <varname>DL_DIRS</varname> specifies where a temporary checkout is saved, <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build), <varname>FETCHCOMMAND</varname> and <varname>UPDATECOMMAND</varname> specify which executables should be used when doing the CVS checkout or update.
</para>
<para>The supported Parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname>. The <varname>module</varname> specifies which module to check out, the <varname>tag</varname> describes which CVS TAG should be used for the checkout by default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration to checkout a specific date. The special value of "now" will cause the checkout to be updated on every build.<varname>method</varname> is by default <emphasis>pserver</emphasis>, if <emphasis>ext</emphasis> is used the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally <varname>localdir</varname> is used to checkout into a special directory relative to <varname>CVSDIR</varname>.
<para>The supported Parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname>. The <varname>module</varname> specifies which module to check out, the <varname>tag</varname> describes which CVS TAG should be used for the checkout by default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration to checkout a specific date. The special value of "now" will cause the checkout to be updated on every build.<varname>method</varname> is by default <emphasis>pserver</emphasis>, if <emphasis>ext</emphasis> is used the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally <varname>localdir</varname> is used to checkout into a special directory relative to <varname>CVSDIR></varname>.
<screen><varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
<varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
</screen>


@@ -3,7 +3,7 @@
#
# This is a copy on write dictionary and set which abuses classes to try and be nice and fast.
#
# Copyright (C) 2006 Tim Amsell
# Copyright (C) 2006 Tim Amsell
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -18,31 +18,29 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
#Please Note:
#Please Note:
# Be careful when using mutable types (ie Dict and Lists) - operations involving these are SLOW.
# Assign a file to __warn__ to get warnings about slow operations.
#
from __future__ import print_function
import copy
import types
ImmutableTypes = (
types.NoneType,
bool,
complex,
float,
int,
long,
tuple,
frozenset,
basestring
)
types.ImmutableTypes = tuple([ \
types.BooleanType, \
types.ComplexType, \
types.FloatType, \
types.IntType, \
types.LongType, \
types.NoneType, \
types.TupleType, \
frozenset] + \
list(types.StringTypes))
MUTABLE = "__mutable__"
class COWMeta(type):
pass
class COWDictMeta(COWMeta):
__warn__ = False
__hasmutable__ = False
@@ -61,12 +59,12 @@ class COWDictMeta(COWMeta):
__call__ = cow
def __setitem__(cls, key, value):
if not isinstance(value, ImmutableTypes):
if not isinstance(value, types.ImmutableTypes):
if not isinstance(value, COWMeta):
cls.__hasmutable__ = True
key += MUTABLE
setattr(cls, key, value)
def __getmutable__(cls, key, readonly=False):
nkey = key + MUTABLE
try:
@@ -79,10 +77,10 @@ class COWDictMeta(COWMeta):
return value
if not cls.__warn__ is False and not isinstance(value, COWMeta):
print("Warning: Doing a copy because %s is a mutable type." % key, file=cls.__warn__)
print >> cls.__warn__, "Warning: Doing a copy because %s is a mutable type." % key
try:
value = value.copy()
except AttributeError as e:
except AttributeError, e:
value = copy.copy(value)
setattr(cls, nkey, value)
return value
@@ -100,13 +98,13 @@ class COWDictMeta(COWMeta):
value = getattr(cls, key)
except AttributeError:
value = cls.__getmutable__(key, readonly)
# This is for values which have been deleted
# This is for values which have been deleted
if value is cls.__marker__:
raise AttributeError("key %s does not exist." % key)
return value
except AttributeError as e:
except AttributeError, e:
if not default is cls.__getmarker__:
return default
@@ -120,9 +118,6 @@ class COWDictMeta(COWMeta):
key += MUTABLE
delattr(cls, key)
def __contains__(cls, key):
return cls.has_key(key)
def has_key(cls, key):
value = cls.__getreadonly__(key, cls.__marker__)
if value is cls.__marker__:
@@ -132,7 +127,7 @@ class COWDictMeta(COWMeta):
def iter(cls, type, readonly=False):
for key in dir(cls):
if key.startswith("__"):
continue
continue
if key.endswith(MUTABLE):
key = key[:-len(MUTABLE)]
@@ -158,11 +153,11 @@ class COWDictMeta(COWMeta):
return cls.iter("keys")
def itervalues(cls, readonly=False):
if not cls.__warn__ is False and cls.__hasmutable__ and readonly is False:
print("Warning: If you arn't going to change any of the values call with True.", file=cls.__warn__)
print >> cls.__warn__, "Warning: If you arn't going to change any of the values call with True."
return cls.iter("values", readonly)
def iteritems(cls, readonly=False):
if not cls.__warn__ is False and cls.__hasmutable__ and readonly is False:
print("Warning: If you arn't going to change any of the values call with True.", file=cls.__warn__)
print >> cls.__warn__, "Warning: If you arn't going to change any of the values call with True."
return cls.iter("items", readonly)
class COWSetMeta(COWDictMeta):
@@ -181,13 +176,13 @@ class COWSetMeta(COWDictMeta):
def remove(cls, value):
COWDictMeta.__delitem__(cls, repr(hash(value)))
def __in__(cls, value):
return COWDictMeta.has_key(repr(hash(value)))
def iterkeys(cls):
raise TypeError("sets don't have keys")
def iteritems(cls):
raise TypeError("sets don't have 'items'")
@@ -204,120 +199,120 @@ if __name__ == "__main__":
import sys
COWDictBase.__warn__ = sys.stderr
a = COWDictBase()
print("a", a)
print "a", a
a['a'] = 'a'
a['b'] = 'b'
a['dict'] = {}
b = a.copy()
print("b", b)
print "b", b
b['c'] = 'b'
print()
print
print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems():
print(x)
print()
print x
print
b['dict']['a'] = 'b'
b['a'] = 'c'
print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems():
print(x)
print()
print x
print
try:
b['dict2']
except KeyError as e:
print("Okay!")
except KeyError, e:
print "Okay!"
a['set'] = COWSetBase()
a['set'].add("o1")
a['set'].add("o1")
a['set'].add("o2")
print("a", a)
print "a", a
for x in a['set'].itervalues():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b['set'].itervalues():
print(x)
print()
print x
print
b['set'].add('o3')
print("a", a)
print "a", a
for x in a['set'].itervalues():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b['set'].itervalues():
print(x)
print()
print x
print
a['set2'] = set()
a['set2'].add("o1")
a['set2'].add("o1")
a['set2'].add("o2")
print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems(readonly=True):
print(x)
print()
print x
print
del b['b']
try:
print(b['b'])
print b['b']
except KeyError:
print("Yay! deleted key raises error")
print "Yay! deleted key raises error"
if b.has_key('b'):
print("Boo!")
print "Boo!"
else:
print("Yay - has_key with delete works!")
print("a", a)
print "Yay - has_key with delete works!"
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems(readonly=True):
print(x)
print()
print x
print
b.__revertitem__('b')
print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems(readonly=True):
print(x)
print()
print x
print
b.__revertitem__('dict')
print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems(readonly=True):
print(x)
print()
print x
print


@@ -21,14 +21,39 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
__version__ = "1.11.0"
__version__ = "1.9.0"
import sys
if sys.version_info < (2, 6, 0):
raise RuntimeError("Sorry, python 2.6.0 or later is required for this version of bitbake")
__all__ = [
import os
import bb.msg
"debug",
"note",
"error",
"fatal",
"mkdirhier",
"movefile",
"vercmp",
# fetch
"decodeurl",
"encodeurl",
# modules
"parse",
"data",
"command",
"event",
"build",
"fetch",
"manifest",
"methodpool",
"cache",
"runqueue",
"taskdata",
"providers",
]
import sys, os, types, re, string
if "BBDEBUG" in os.environ:
level = int(os.environ["BBDEBUG"])
@@ -56,45 +81,14 @@ def fatal(*args):
bb.msg.fatal(None, ''.join(args))
def deprecated(func, name = None, advice = ""):
"""This is a decorator which can be used to mark functions
as deprecated. It will result in a warning being emmitted
when the function is used."""
import warnings
if advice:
advice = ": %s" % advice
if name is None:
name = func.__name__
def newFunc(*args, **kwargs):
warnings.warn("Call to deprecated function %s%s." % (name,
advice),
category = PendingDeprecationWarning,
stacklevel = 2)
return func(*args, **kwargs)
newFunc.__name__ = func.__name__
newFunc.__doc__ = func.__doc__
newFunc.__dict__.update(func.__dict__)
return newFunc
# For compatibility
def deprecate_import(current, modulename, fromlist, renames = None):
"""Import objects from one module into another, wrapping them with a DeprecationWarning"""
import sys
from bb.fetch import MalformedUrl, encodeurl, decodeurl
from bb.data import VarExpandError
from bb.utils import mkdirhier, movefile, copyfile, which
from bb.utils import vercmp
module = __import__(modulename, fromlist = fromlist)
for position, objname in enumerate(fromlist):
obj = getattr(module, objname)
newobj = deprecated(obj, "{0}.{1}".format(current, objname),
"Please use {0}.{1} instead".format(modulename, objname))
if renames:
newname = renames[position]
else:
newname = objname
setattr(sys.modules[current], newname, newobj)
deprecate_import(__name__, "bb.fetch", ("MalformedUrl", "encodeurl", "decodeurl"))
deprecate_import(__name__, "bb.utils", ("mkdirhier", "movefile", "copyfile", "which"))
deprecate_import(__name__, "bb.utils", ["vercmp_string"], ["vercmp"])
if __name__ == "__main__":
import doctest, bb
bb.msg.set_debug_level(0)
doctest.testmod(bb)


@@ -27,9 +27,8 @@
from bb import data, event, mkdirhier, utils
import bb, os, sys
import bb.utils
# When we execute a python function we'd like certain things
# When we execute a python function we'd like certain things
# in all namespaces, hence we add them to __builtins__
# If we do not do this and use the exec globals, they will
# not be available to subfunctions.
@@ -44,6 +43,12 @@ class FuncFailed(Exception):
Second paramter is a logfile (optional)
"""
class EventException(Exception):
"""Exception which is associated with an Event."""
def __init__(self, msg, event):
self.args = msg, event
class TaskBase(event.Event):
"""Base class for task events"""
@@ -74,7 +79,7 @@ class TaskFailed(TaskBase):
self.msg = msg
TaskBase.__init__(self, t, d)
class TaskInvalid(TaskBase):
class InvalidTask(TaskBase):
"""Invalid Task"""
# functions
@@ -84,29 +89,27 @@ def exec_func(func, d, dirs = None):
body = data.getVar(func, d)
if not body:
bb.warn("Function %s doesn't exist" % func)
return
flags = data.getVarFlags(func, d)
for item in ['deps', 'check', 'interactive', 'python', 'cleandirs', 'dirs', 'lockfiles', 'fakeroot', 'task']:
for item in ['deps', 'check', 'interactive', 'python', 'cleandirs', 'dirs', 'lockfiles', 'fakeroot']:
if not item in flags:
flags[item] = None
ispython = flags['python']
cleandirs = flags['cleandirs']
if cleandirs:
for cdir in data.expand(cleandirs, d).split():
os.system("rm -rf %s" % cdir)
if dirs is None:
dirs = flags['dirs']
if dirs:
dirs = data.expand(dirs, d).split()
cleandirs = (data.expand(flags['cleandirs'], d) or "").split()
for cdir in cleandirs:
os.system("rm -rf %s" % cdir)
if dirs:
for adir in dirs:
bb.utils.mkdirhier(adir)
dirs = data.expand(dirs, d)
else:
dirs = (data.expand(flags['dirs'], d) or "").split()
for adir in dirs:
mkdirhier(adir)
if len(dirs) > 0:
adir = dirs[-1]
else:
adir = data.getVar('B', d, 1)
@@ -117,23 +120,45 @@ def exec_func(func, d, dirs = None):
except OSError:
prevdir = data.getVar('TOPDIR', d, True)
# Setup scriptfile
# Setup logfiles
t = data.getVar('T', d, 1)
if not t:
raise SystemExit("T variable not set, unable to build")
bb.utils.mkdirhier(t)
bb.msg.fatal(bb.msg.domain.Build, "T not set")
mkdirhier(t)
logfile = "%s/log.%s.%s" % (t, func, str(os.getpid()))
runfile = "%s/run.%s.%s" % (t, func, str(os.getpid()))
logfile = d.getVar("BB_LOGFILE", True)
# Change to correct directory (if specified)
if adir and os.access(adir, os.F_OK):
os.chdir(adir)
# Handle logfiles
si = file('/dev/null', 'r')
try:
if bb.msg.debug_level['default'] > 0 or ispython:
so = os.popen("tee \"%s\"" % logfile, "w")
else:
so = file(logfile, 'w')
except OSError, e:
bb.msg.error(bb.msg.domain.Build, "opening log file: %s" % e)
pass
se = so
# Dup the existing fds so we don't lose them
osi = [os.dup(sys.stdin.fileno()), sys.stdin.fileno()]
oso = [os.dup(sys.stdout.fileno()), sys.stdout.fileno()]
ose = [os.dup(sys.stderr.fileno()), sys.stderr.fileno()]
# Replace those fds with our own
os.dup2(si.fileno(), osi[1])
os.dup2(so.fileno(), oso[1])
os.dup2(se.fileno(), ose[1])
locks = []
lockfiles = flags['lockfiles']
if lockfiles:
for lock in data.expand(lockfiles, d).split():
locks.append(bb.utils.lockfile(lock))
lockfiles = (data.expand(flags['lockfiles'], d) or "").split()
for lock in lockfiles:
locks.append(bb.utils.lockfile(lock))
try:
# Run the function
@@ -154,25 +179,47 @@ def exec_func(func, d, dirs = None):
for lock in locks:
bb.utils.unlockfile(lock)
# Restore the backup fds
os.dup2(osi[0], osi[1])
os.dup2(oso[0], oso[1])
os.dup2(ose[0], ose[1])
# Close our logs
si.close()
so.close()
se.close()
if os.path.exists(logfile) and os.path.getsize(logfile) == 0:
bb.msg.debug(2, bb.msg.domain.Build, "Zero size logfile %s, removing" % logfile)
os.remove(logfile)
# Close the backup fds
os.close(osi[0])
os.close(oso[0])
os.close(ose[0])
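The logging code above follows a save/redirect/restore pattern built on os.dup() and os.dup2(). A standalone sketch of the same idea, assuming nothing beyond the standard library (the log path is illustrative):

import os, sys

so = open("/tmp/example.log", "w")      # illustrative log path

# Save a duplicate of the original stdout fd so it can be restored later
saved = os.dup(sys.stdout.fileno())
# Point fd 1 at the log file; writes to stdout now land in the file
os.dup2(so.fileno(), sys.stdout.fileno())
print("this line goes to the log file")
sys.stdout.flush()

# Restore stdout, then release both handles
os.dup2(saved, sys.stdout.fileno())
os.close(saved)
so.close()
print("this line goes to the terminal again")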
def exec_func_python(func, d, runfile, logfile):
"""Execute a python BB 'function'"""
import re, os
bbfile = bb.data.getVar('FILE', d, 1)
tmp = "def " + func + "(d):\n%s" % data.getVar(func, d)
tmp += '\n' + func + '(d)'
tmp = "def " + func + "():\n%s" % data.getVar(func, d)
tmp += '\n' + func + '()'
f = open(runfile, "w")
f.write(tmp)
comp = utils.better_compile(tmp, func, bbfile)
g = {} # globals
g['d'] = d
try:
utils.better_exec(comp, {"d": d}, tmp, bbfile)
utils.better_exec(comp, g, tmp, bbfile)
except:
(t, value, tb) = sys.exc_info()
(t,value,tb) = sys.exc_info()
if t in [bb.parse.SkipPackage, bb.build.FuncFailed]:
raise
raise FuncFailed("Function %s failed" % func, logfile)
bb.msg.error(bb.msg.domain.Build, "Function %s failed" % func)
raise FuncFailed("function %s failed" % func, logfile)
def exec_func_shell(func, d, runfile, logfile, flags):
"""Execute a shell BB 'function' Returns true if execution was successful.
@@ -194,29 +241,32 @@ def exec_func_shell(func, d, runfile, logfile, flags):
f = open(runfile, "w")
f.write("#!/bin/sh -e\n")
if bb.msg.debug_level['default'] > 0: f.write("set -x\n")
data.emit_func(func, f, d)
data.emit_env(f, d)
f.write("cd %s\n" % os.getcwd())
if func: f.write("%s\n" % func)
f.close()
os.chmod(runfile, 0775)
if not func:
bb.msg.error(bb.msg.domain.Build, "Function not specified")
raise FuncFailed("Function not specified for exec_func_shell")
# execute function
if flags['fakeroot'] and not flags['task']:
bb.fatal("Function %s specifies fakeroot but isn't a task?!" % func)
if flags['fakeroot']:
maybe_fakeroot = "PATH=\"%s\" fakeroot " % bb.data.getVar("PATH", d, 1)
else:
maybe_fakeroot = ''
lang_environment = "LC_ALL=C "
ret = os.system('%ssh -e %s' % (lang_environment, runfile))
ret = os.system('%s%ssh -e %s' % (lang_environment, maybe_fakeroot, runfile))
if ret == 0:
return
bb.msg.error(bb.msg.domain.Build, "Function %s failed" % func)
raise FuncFailed("function %s failed" % func, logfile)
def exec_task(fn, task, d):
def exec_task(task, d):
"""Execute an BB 'task'
The primary difference between executing a task versus executing
@@ -225,13 +275,7 @@ def exec_task(fn, task, d):
# Check whether this is a valid task
if not data.getVarFlag(task, 'task', d):
event.fire(TaskInvalid(task, d), d)
bb.msg.error(bb.msg.domain.Build, "No such task: %s" % task)
return 1
quieterr = False
if d.getVarFlag(task, "quieterrors") is not None:
quieterr = True
raise EventException("No such task", InvalidTask(task, d))
try:
bb.msg.debug(1, bb.msg.domain.Build, "Executing task %s" % task)
@@ -240,126 +284,29 @@ def exec_task(fn, task, d):
data.setVar('OVERRIDES', 'task-%s:%s' % (task[3:], old_overrides), localdata)
data.update_data(localdata)
data.expandKeys(localdata)
data.setVar('BB_FILENAME', fn, d)
data.setVar('BB_CURRENTTASK', task[3:], d)
event.fire(TaskStarted(task, localdata), localdata)
# Setup logfiles
t = data.getVar('T', d, 1)
if not t:
raise SystemExit("T variable not set, unable to build")
bb.utils.mkdirhier(t)
loglink = "%s/log.%s" % (t, task)
logfile = "%s/log.%s.%s" % (t, task, str(os.getpid()))
d.setVar("BB_LOGFILE", logfile)
# Even though the log file has not yet been opened, let's create the link
if loglink:
try:
os.remove(loglink)
except OSError as e:
pass
try:
os.symlink(logfile, loglink)
except OSError as e:
pass
# Handle logfiles
si = file('/dev/null', 'r')
try:
so = file(logfile, 'w')
except OSError as e:
bb.msg.error(bb.msg.domain.Build, "opening log file: %s" % e)
pass
se = so
# Dup the existing fds so we don't lose them
osi = [os.dup(sys.stdin.fileno()), sys.stdin.fileno()]
oso = [os.dup(sys.stdout.fileno()), sys.stdout.fileno()]
ose = [os.dup(sys.stderr.fileno()), sys.stderr.fileno()]
# Replace those fds with our own
os.dup2(si.fileno(), osi[1])
os.dup2(so.fileno(), oso[1])
os.dup2(se.fileno(), ose[1])
# Since we've remapped stdout and stderr, it's safe for log messages to be printed there now
# exec_func can nest so we have to save state
origstdout = bb.event.useStdout
bb.event.useStdout = True
prefuncs = (data.getVarFlag(task, 'prefuncs', localdata) or "").split()
for func in prefuncs:
exec_func(func, localdata)
exec_func(task, localdata)
postfuncs = (data.getVarFlag(task, 'postfuncs', localdata) or "").split()
for func in postfuncs:
exec_func(func, localdata)
event.fire(TaskSucceeded(task, localdata), localdata)
# make stamp, or cause event and raise exception
if not data.getVarFlag(task, 'nostamp', d) and not data.getVarFlag(task, 'selfstamp', d):
make_stamp(task, d)
except FuncFailed as message:
except FuncFailed, message:
# Try to extract the optional logfile
try:
(msg, logfile) = message
except:
logfile = None
msg = message
if not quieterr:
bb.msg.error(bb.msg.domain.Build, "Task failed: %s" % message )
failedevent = TaskFailed(msg, logfile, task, d)
event.fire(failedevent, d)
return 1
bb.msg.note(1, bb.msg.domain.Build, "Task failed: %s" % message )
failedevent = TaskFailed(msg, logfile, task, d)
event.fire(failedevent, d)
raise EventException("Function failed in task: %s" % message, failedevent)
except Exception:
from traceback import format_exc
if not quieterr:
bb.msg.error(bb.msg.domain.Build, "Build of %s failed" % (task))
bb.msg.error(bb.msg.domain.Build, format_exc())
failedevent = TaskFailed("Task Failed", None, task, d)
event.fire(failedevent, d)
return 1
finally:
sys.stdout.flush()
sys.stderr.flush()
bb.event.useStdout = origstdout
# Restore the backup fds
os.dup2(osi[0], osi[1])
os.dup2(oso[0], oso[1])
os.dup2(ose[0], ose[1])
# Close our logs
si.close()
so.close()
se.close()
if logfile and os.path.exists(logfile) and os.path.getsize(logfile) == 0:
bb.msg.debug(2, bb.msg.domain.Build, "Zero size logfile %s, removing" % logfile)
os.remove(logfile)
try:
os.remove(loglink)
except OSError as e:
pass
# Close the backup fds
os.close(osi[0])
os.close(oso[0])
os.close(ose[0])
return 0
# make stamp, or cause event and raise exception
if not data.getVarFlag(task, 'nostamp', d) and not data.getVarFlag(task, 'selfstamp', d):
make_stamp(task, d)
def extract_stamp(d, fn):
"""
Extracts stamp format which is either a data dictionary (fn unset)
or a dataCache entry (fn set).
"""
if fn:
return d.stamp[fn]
@@ -376,7 +323,7 @@ def stamp_internal(task, d, file_name):
if not stamp:
return
stamp = "%s.%s" % (stamp, task)
bb.utils.mkdirhier(os.path.dirname(stamp))
mkdirhier(os.path.dirname(stamp))
# Remove the file and recreate to force timestamp
# change on broken NFS filesystems
if os.access(stamp, os.F_OK):
@@ -416,7 +363,7 @@ def add_tasks(tasklist, d):
if not task in task_deps['tasks']:
task_deps['tasks'].append(task)
flags = data.getVarFlags(task, d)
def getTask(name):
if not name in task_deps:
task_deps[name] = {}
@@ -428,7 +375,6 @@ def add_tasks(tasklist, d):
getTask('rdeptask')
getTask('recrdeptask')
getTask('nostamp')
getTask('fakeroot')
task_deps['parents'][task] = []
for dep in flags['deps']:
dep = data.expand(dep, d)
@@ -443,3 +389,4 @@ def remove_task(task, kill, d):
If kill is 1, also remove tasks that depend on this task."""
data.delVarFlag(task, 'task', d)

View File

@@ -28,7 +28,7 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import os, re
import bb.data
import bb.utils
@@ -38,16 +38,16 @@ except ImportError:
import pickle
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")
__cache_version__ = "132"
__cache_version__ = "131"
class Cache:
"""
BitBake Cache implementation
"""
def __init__(self, data):
def __init__(self, cooker):
self.cachedir = bb.data.getVar("CACHE", data, True)
self.cachedir = bb.data.getVar("CACHE", cooker.configuration.data, True)
self.clean = {}
self.checked = {}
self.depends_cache = {}
@@ -61,28 +61,30 @@ class Cache:
return
self.has_cache = True
self.cachefile = os.path.join(self.cachedir, "bb_cache.dat")
self.cachefile = os.path.join(self.cachedir,"bb_cache.dat")
bb.msg.debug(1, bb.msg.domain.Cache, "Using cache in '%s'" % self.cachedir)
bb.utils.mkdirhier(self.cachedir)
try:
os.stat( self.cachedir )
except OSError:
bb.mkdirhier( self.cachedir )
# If any of configuration.data's dependencies are newer than the
# cache there isn't even any point in loading it...
newest_mtime = 0
deps = bb.data.getVar("__depends", data)
old_mtimes = [old_mtime for f, old_mtime in deps]
old_mtimes.append(newest_mtime)
newest_mtime = max(old_mtimes)
deps = bb.data.getVar("__depends", cooker.configuration.data, True)
for f,old_mtime in deps:
if old_mtime > newest_mtime:
newest_mtime = old_mtime
if bb.parse.cached_mtime_noerror(self.cachefile) >= newest_mtime:
try:
p = pickle.Unpickler(file(self.cachefile, "rb"))
self.depends_cache, version_data = p.load()
if version_data['CACHE_VER'] != __cache_version__:
raise ValueError('Cache Version Mismatch')
raise ValueError, 'Cache Version Mismatch'
if version_data['BITBAKE_VER'] != bb.__version__:
raise ValueError('Bitbake Version Mismatch')
raise ValueError, 'Bitbake Version Mismatch'
except EOFError:
bb.msg.note(1, bb.msg.domain.Cache, "Truncated cache found, rebuilding...")
self.depends_cache = {}
@@ -90,23 +92,27 @@ class Cache:
bb.msg.note(1, bb.msg.domain.Cache, "Invalid cache found, rebuilding...")
self.depends_cache = {}
else:
if os.path.isfile(self.cachefile):
try:
os.stat( self.cachefile )
bb.msg.note(1, bb.msg.domain.Cache, "Out of date cache found, rebuilding...")
except OSError:
pass
def getVar(self, var, fn, exp = 0):
"""
Gets the value of a variable
(similar to getVar in the data class)
There are two scenarios:
1. We have cached data - serve from depends_cache[fn]
2. We're learning what data to cache - serve from data
backend but add a copy of the data to the cache.
"""
if fn in self.clean:
return self.depends_cache[fn][var]
self.depends_cache.setdefault(fn, {})
if not fn in self.depends_cache:
self.depends_cache[fn] = {}
if fn != self.data_fn:
# We're trying to access data in the cache which doesn't exist
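A compact sketch of the two scenarios the docstring above describes, with a plain dict standing in for the parsed datastore (all names here are illustrative):

class MiniCache:
    def __init__(self, datastore):
        self.datastore = datastore      # stands in for the parsed bb data
        self.depends_cache = {}
        self.clean = set()              # files whose cache entries are valid

    def getVar(self, var, fn):
        if fn in self.clean:
            # Scenario 1: we have cached data - serve from depends_cache
            return self.depends_cache[fn][var]
        # Scenario 2: learning mode - read the backend and keep a copy
        value = self.datastore[fn].get(var)
        self.depends_cache.setdefault(fn, {})[var] = value
        return value

c = MiniCache({"foo.bb": {"PN": "foo", "PV": "1.0"}})
print(c.getVar("PN", "foo.bb"))   # learning mode: backend read, then cached
print(c.getVar("PV", "foo.bb"))
c.clean.add("foo.bb")
print(c.getVar("PN", "foo.bb"))   # now served straight from the cache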
@@ -128,14 +134,14 @@ class Cache:
self.data = data
# Make sure __depends makes the depends_cache
# If we're a virtual class we need to make sure all our depends are appended
# to the depends of fn.
depends = self.getVar("__depends", virtualfn) or set()
self.depends_cache.setdefault(fn, {})
depends = self.getVar("__depends", virtualfn, True) or []
if "__depends" not in self.depends_cache[fn] or not self.depends_cache[fn]["__depends"]:
self.depends_cache[fn]["__depends"] = depends
else:
self.depends_cache[fn]["__depends"].update(depends)
for dep in depends:
if dep not in self.depends_cache[fn]["__depends"]:
self.depends_cache[fn]["__depends"].append(dep)
# Make sure the variants always make it into the cache too
self.getVar('__VARIANTS', virtualfn, True)
@@ -165,7 +171,7 @@ class Cache:
#bb.msg.debug(2, bb.msg.domain.Cache, "realfn2virtual %s and %s to %s" % (realfn, cls, "virtual:" + cls + ":" + realfn))
return "virtual:" + cls + ":" + realfn
def loadDataFull(self, virtualfn, appends, cfgData):
def loadDataFull(self, virtualfn, cfgData):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
@@ -175,10 +181,10 @@ class Cache:
bb.msg.debug(1, bb.msg.domain.Cache, "Parsing %s (full)" % fn)
bb_data = self.load_bbfile(fn, appends, cfgData)
bb_data = self.load_bbfile(fn, cfgData)
return bb_data[cls]
def loadData(self, fn, appends, cfgData, cacheData):
def loadData(self, fn, cfgData, cacheData):
"""
Load a subset of data for fn.
If the cached data is valid we do nothing,
@@ -206,12 +212,12 @@ class Cache:
bb.msg.debug(1, bb.msg.domain.Cache, "Parsing %s" % fn)
bb_data = self.load_bbfile(fn, appends, cfgData)
bb_data = self.load_bbfile(fn, cfgData)
for data in bb_data:
virtualfn = self.realfn2virtual(fn, data)
self.setData(virtualfn, fn, bb_data[data])
if self.getVar("__SKIPPED", virtualfn):
if self.getVar("__SKIPPED", virtualfn, True):
skipped += 1
bb.msg.debug(1, bb.msg.domain.Cache, "Skipping %s" % virtualfn)
else:
@@ -252,11 +258,11 @@ class Cache:
self.remove(fn)
return False
mtime = bb.parse.cached_mtime_noerror(fn)
# Check file still exists
if mtime == 0:
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s no longer exists" % fn)
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s not longer exists" % fn)
self.remove(fn)
return False
@@ -269,7 +275,7 @@ class Cache:
# Check dependencies are still valid
depends = self.getVar("__depends", fn, True)
if depends:
for f, old_mtime in depends:
for f,old_mtime in depends:
fmtime = bb.parse.cached_mtime_noerror(f)
# Check if file still exists
if old_mtime != 0 and fmtime == 0:
@@ -285,25 +291,11 @@ class Cache:
if not fn in self.clean:
self.clean[fn] = ""
invalid = False
# Mark extended class data as clean too
multi = self.getVar('__VARIANTS', fn, True)
for cls in (multi or "").split():
virtualfn = self.realfn2virtual(fn, cls)
self.clean[virtualfn] = ""
if not virtualfn in self.depends_cache:
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s is not cached" % virtualfn)
invalid = True
# If any one of the variants is not present, mark cache as invalid for all
if invalid:
for cls in (multi or "").split():
virtualfn = self.realfn2virtual(fn, cls)
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: Removing %s from cache" % virtualfn)
del self.clean[virtualfn]
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: Removing %s from cache" % fn)
del self.clean[fn]
return False
return True
@@ -353,14 +345,14 @@ class Cache:
def handle_data(self, file_name, cacheData):
"""
Save data we need into the cache
"""
pn = self.getVar('PN', file_name, True)
pe = self.getVar('PE', file_name, True) or "0"
pv = self.getVar('PV', file_name, True)
if 'SRCREVINACTION' in pv:
bb.msg.note(1, bb.msg.domain.Cache, "Found SRCREVINACTION in PV (%s) or %s. Please report this bug." % (pv, file_name))
bb.note("Found SRCREVINACTION in PV (%s) or %s. Please report this bug." % (pv, file_name))
pr = self.getVar('PR', file_name, True)
dp = int(self.getVar('DEFAULT_PREFERENCE', file_name, True) or "0")
depends = bb.utils.explode_deps(self.getVar("DEPENDS", file_name, True) or "")
@@ -368,7 +360,7 @@ class Cache:
packages_dynamic = (self.getVar('PACKAGES_DYNAMIC', file_name, True) or "").split()
rprovides = (self.getVar("RPROVIDES", file_name, True) or "").split()
cacheData.task_deps[file_name] = self.getVar("_task_deps", file_name)
cacheData.task_deps[file_name] = self.getVar("_task_deps", file_name, True)
# build PackageName to FileName lookup table
if pn not in cacheData.pkg_pn:
@@ -377,13 +369,9 @@ class Cache:
cacheData.stamp[file_name] = self.getVar('STAMP', file_name, True)
cacheData.tasks[file_name] = self.getVar('__BBTASKS', file_name, True)
for t in cacheData.tasks[file_name]:
cacheData.basetaskhash[file_name + "." + t] = self.getVar("BB_BASEHASH_task-%s" % t, file_name, True)
# build FileName to PackageName lookup table
cacheData.pkg_fn[file_name] = pn
cacheData.pkg_pepvpr[file_name] = (pe, pv, pr)
cacheData.pkg_pepvpr[file_name] = (pe,pv,pr)
cacheData.pkg_dp[file_name] = dp
provides = [pn]
@@ -412,13 +400,13 @@ class Cache:
if not dep in cacheData.all_depends:
cacheData.all_depends.append(dep)
# Build reverse hash for PACKAGES, so runtime dependencies
# can be resolved (RDEPENDS, RRECOMMENDS etc.)
for package in packages:
if not package in cacheData.packages:
cacheData.packages[package] = []
cacheData.packages[package].append(file_name)
rprovides += (self.getVar("RPROVIDES_%s" % package, file_name, 1) or "").split()
rprovides += (self.getVar("RPROVIDES_%s" % package, file_name, 1) or "").split()
for package in packages_dynamic:
if not package in cacheData.packages_dynamic:
@@ -453,53 +441,42 @@ class Cache:
if not self.getVar('BROKEN', file_name, True) and not self.getVar('EXCLUDE_FROM_WORLD', file_name, True):
cacheData.possible_world.append(file_name)
cacheData.hashfn[file_name] = self.getVar('BB_HASHFILENAME', file_name, True)
# Touch this to make sure it's in the cache
self.getVar('__BB_DONT_CACHE', file_name, True)
self.getVar('__VARIANTS', file_name, True)
def load_bbfile(self, bbfile, appends, config):
def load_bbfile( self, bbfile , config):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
chdir_back = False
from bb import data, parse
import bb
from bb import utils, data, parse, debug, event, fatal
# expand tmpdir to include this topdir
data.setVar('TMPDIR', data.getVar('TMPDIR', config, 1) or "", config)
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
parse.cached_mtime_noerror(bbfile_loc)
if bb.parse.cached_mtime_noerror(bbfile_loc):
os.chdir(bbfile_loc)
bb_data = data.init_db(config)
# The ConfHandler first checks whether TOPDIR is set and, if not,
# falls back to calling getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not data.getVar('TOPDIR', bb_data):
chdir_back = True
data.setVar('TOPDIR', bbfile_loc, bb_data)
try:
if appends:
data.setVar('__BBAPPEND', " ".join(appends), bb_data)
bb_data = parse.handle(bbfile, bb_data) # read .bb data
if chdir_back: os.chdir(oldpath)
os.chdir(oldpath)
return bb_data
except:
if chdir_back: os.chdir(oldpath)
os.chdir(oldpath)
raise
def init(cooker):
"""
The Objective: Cache the minimum amount of data possible yet get to the
stage of building packages (i.e. tryBuild) without reparsing any .bb files.
To do this, we intercept getVar calls and only cache the variables we see
being accessed. We rely on the cache getVar calls being made for all
variables bitbake might need to use to reach this stage. For each cached
file we need to track:
* Its mtime
@@ -509,7 +486,7 @@ def init(cooker):
Files causing parsing errors are evicted from the cache.
"""
return Cache(cooker.configuration.data)
return Cache(cooker)
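The interception idea described in the docstring can be reduced to a few lines: wrap the real lookup, record every variable name that is read, and later persist only those. A toy sketch, not the real Cache class:

class RecordingData:
    """Toy datastore that remembers which variables were read."""
    def __init__(self, values):
        self.values = values
        self.accessed = set()

    def getVar(self, name):
        self.accessed.add(name)
        return self.values.get(name)

d = RecordingData({"PN": "foo", "PV": "1.0", "UNUSED": "x"})
d.getVar("PN")
d.getVar("PV")
# Only the accessed variables would need to go into the cache:
print(sorted(d.accessed))   # ['PN', 'PV']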
@@ -545,9 +522,6 @@ class CacheData:
self.task_deps = {}
self.stamp = {}
self.preferred = {}
self.tasks = {}
self.basetaskhash = {}
self.hashfn = {}
"""
Indirect Cache variables

View File

@@ -1,329 +0,0 @@
from pysh import pyshyacc, pyshlex
from itertools import chain
from bb import msg, utils
import ast
import codegen
PARSERCACHE_VERSION = 2
try:
import cPickle as pickle
except ImportError:
import pickle
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")
def check_indent(codestr):
"""If the code is indented, add a top level piece of code to 'remove' the indentation"""
i = 0
while codestr[i] in ["\n", "\t", " "]:
i = i + 1
if i == 0:
return codestr
if codestr[i-1] is "\t" or codestr[i-1] is " ":
return "if 1:\n" + codestr
return codestr
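For illustration, the effect of check_indent() on a uniformly indented snippet (assuming the tab/space list above): the "if 1:" prefix lets the indented block compile at top level:

snippet = "\n    x = 1\n    print(x)\n"
wrapped = "if 1:\n" + snippet                # what check_indent() returns here
exec(compile(wrapped, "<string>", "exec"))   # runs without an IndentationError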
pythonparsecache = {}
shellparsecache = {}
def parser_cachefile(d):
cachedir = bb.data.getVar("PERSISTENT_DIR", d, True) or bb.data.getVar("CACHE", d, True)
if cachedir in [None, '']:
return None
bb.utils.mkdirhier(cachedir)
cachefile = os.path.join(cachedir, "bb_codeparser.dat")
bb.msg.debug(1, bb.msg.domain.Cache, "Using cache in '%s' for codeparser cache" % cachefile)
return cachefile
def parser_cache_init(d):
cachefile = parser_cachefile(d)
if not cachefile:
return
try:
p = pickle.Unpickler(file(cachefile, "rb"))
data, version = p.load()
except:
return
if version != PARSERCACHE_VERSION:
return
bb.codeparser.pythonparsecache = data[0]
bb.codeparser.shellparsecache = data[1]
def parser_cache_save(d):
cachefile = parser_cachefile(d)
if not cachefile:
return
p = pickle.Pickler(file(cachefile, "wb"), -1)
p.dump([[bb.codeparser.pythonparsecache, bb.codeparser.shellparsecache], PARSERCACHE_VERSION])
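These helpers persist the in-memory parse caches between runs via pickle. A standalone sketch of the same round-trip, mirroring the [caches, version] layout used above (paths and contents illustrative):

import os, pickle, tempfile

PARSERCACHE_VERSION = 2
cachefile = os.path.join(tempfile.mkdtemp(), "bb_codeparser.dat")

# Save: dump the caches together with a version stamp, as parser_cache_save() does
pythonparsecache = {"somehash": {"refs": set(["PN"]), "execs": set()}}
with open(cachefile, "wb") as f:
    pickle.dump([[pythonparsecache, {}], PARSERCACHE_VERSION], f, -1)

# Load: only accept the data when the version matches, as parser_cache_init() does
with open(cachefile, "rb") as f:
    data, version = pickle.load(f)
if version == PARSERCACHE_VERSION:
    pythonparsecache, shellparsecache = data
print(pythonparsecache["somehash"]["refs"])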
class PythonParser():
class ValueVisitor():
"""Visitor to traverse a python abstract syntax tree and obtain
the variables referenced via bitbake metadata APIs, and the external
functions called.
"""
getvars = ("d.getVar", "bb.data.getVar", "data.getVar")
expands = ("d.expand", "bb.data.expand", "data.expand")
execs = ("bb.build.exec_func", "bb.build.exec_task")
@classmethod
def _compare_name(cls, strparts, node):
"""Given a sequence of strings representing a python name,
where the last component is the actual Name and the prior
elements are Attribute nodes, determine if the supplied node
matches.
"""
if not strparts:
return True
current, rest = strparts[0], strparts[1:]
if isinstance(node, ast.Attribute):
if current == node.attr:
return cls._compare_name(rest, node.value)
elif isinstance(node, ast.Name):
if current == node.id:
return True
return False
@classmethod
def compare_name(cls, value, node):
"""Convenience function for the _compare_node method, which
can accept a string (which is split by '.' for you), or an
iterable of strings, in which case it checks to see if any of
them match, similar to isinstance.
"""
if isinstance(value, basestring):
return cls._compare_name(tuple(reversed(value.split("."))),
node)
else:
return any(cls.compare_name(item, node) for item in value)
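A standalone illustration of the dotted-name matching that _compare_name() performs, re-implemented here against the ast module only (function and test strings are illustrative):

import ast

def compare_name(strparts, node):
    # Match e.g. ("getVar", "d") against the AST of d.getVar, walking
    # Attribute nodes inward until a plain Name is reached.
    if not strparts:
        return True
    current, rest = strparts[0], strparts[1:]
    if isinstance(node, ast.Attribute):
        if current == node.attr:
            return compare_name(rest, node.value)
    elif isinstance(node, ast.Name):
        return current == node.id
    return False

call = ast.parse('d.getVar("FOO")', mode="eval").body
print(compare_name(("getVar", "d"), call.func))   # True
print(compare_name(("expand", "d"), call.func))   # False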
def __init__(self, value):
self.var_references = set()
self.var_execs = set()
self.direct_func_calls = set()
self.var_expands = set()
self.value = value
@classmethod
def warn(cls, func, arg):
"""Warn about calls of bitbake APIs which pass a non-literal
argument for the variable name, as we're not able to track such
a reference.
"""
try:
funcstr = codegen.to_source(func)
argstr = codegen.to_source(arg)
except TypeError:
msg.debug(2, None, "Failed to convert function and argument to source form")
else:
msg.debug(1, None, "Warning: in call to '%s', argument '%s' is not a literal" %
(funcstr, argstr))
def visit_Call(self, node):
if self.compare_name(self.getvars, node.func):
if isinstance(node.args[0], ast.Str):
self.var_references.add(node.args[0].s)
else:
self.warn(node.func, node.args[0])
elif self.compare_name(self.expands, node.func):
if isinstance(node.args[0], ast.Str):
self.warn(node.func, node.args[0])
self.var_expands.update(node.args[0].s)
elif isinstance(node.args[0], ast.Call) and \
self.compare_name(self.getvars, node.args[0].func):
pass
else:
self.warn(node.func, node.args[0])
elif self.compare_name(self.execs, node.func):
if isinstance(node.args[0], ast.Str):
self.var_execs.add(node.args[0].s)
else:
self.warn(node.func, node.args[0])
elif isinstance(node.func, ast.Name):
self.direct_func_calls.add(node.func.id)
elif isinstance(node.func, ast.Attribute):
# We must have a qualified name. Therefore we need
# to walk the chain of 'Attribute' nodes to determine
# the qualification.
attr_node = node.func.value
identifier = node.func.attr
while isinstance(attr_node, ast.Attribute):
identifier = attr_node.attr + "." + identifier
attr_node = attr_node.value
if isinstance(attr_node, ast.Name):
identifier = attr_node.id + "." + identifier
self.direct_func_calls.add(identifier)
def __init__(self):
#self.funcdefs = set()
self.execs = set()
#self.external_cmds = set()
self.references = set()
def parse_python(self, node):
h = hash(str(node))
if h in pythonparsecache:
self.references = pythonparsecache[h]["refs"]
self.execs = pythonparsecache[h]["execs"]
return
code = compile(check_indent(str(node)), "<string>", "exec",
ast.PyCF_ONLY_AST)
visitor = self.ValueVisitor(code)
for n in ast.walk(code):
if n.__class__.__name__ == "Call":
visitor.visit_Call(n)
self.references.update(visitor.var_references)
self.references.update(visitor.var_execs)
self.execs = visitor.direct_func_calls
pythonparsecache[h] = {}
pythonparsecache[h]["refs"] = self.references
pythonparsecache[h]["execs"] = self.execs
class ShellParser():
def __init__(self):
self.funcdefs = set()
self.allexecs = set()
self.execs = set()
def parse_shell(self, value):
"""Parse the supplied shell code in a string, returning the external
commands it executes.
"""
h = hash(str(value))
if h in shellparsecache:
self.execs = shellparsecache[h]["execs"]
return self.execs
try:
tokens, _ = pyshyacc.parse(value, eof=True, debug=False)
except pyshlex.NeedMore:
raise ShellSyntaxError("Unexpected EOF")
for token in tokens:
self.process_tokens(token)
self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)
shellparsecache[h] = {}
shellparsecache[h]["execs"] = self.execs
return self.execs
def process_tokens(self, tokens):
"""Process a supplied portion of the syntax tree as returned by
pyshyacc.parse.
"""
def function_definition(value):
self.funcdefs.add(value.name)
return [value.body], None
def case_clause(value):
# Element 0 of each item in the case is the list of patterns, and
# Element 1 of each item in the case is the list of commands to be
# executed when that pattern matches.
words = chain(*[item[0] for item in value.items])
cmds = chain(*[item[1] for item in value.items])
return cmds, words
def if_clause(value):
main = chain(value.cond, value.if_cmds)
rest = value.else_cmds
if isinstance(rest, tuple) and rest[0] == "elif":
return chain(main, if_clause(rest[1]))
else:
return chain(main, rest)
def simple_command(value):
return None, chain(value.words, (assign[1] for assign in value.assigns))
token_handlers = {
"and_or": lambda x: ((x.left, x.right), None),
"async": lambda x: ([x], None),
"brace_group": lambda x: (x.cmds, None),
"for_clause": lambda x: (x.cmds, x.items),
"function_definition": function_definition,
"if_clause": lambda x: (if_clause(x), None),
"pipeline": lambda x: (x.commands, None),
"redirect_list": lambda x: ([x.cmd], None),
"subshell": lambda x: (x.cmds, None),
"while_clause": lambda x: (chain(x.condition, x.cmds), None),
"until_clause": lambda x: (chain(x.condition, x.cmds), None),
"simple_command": simple_command,
"case_clause": case_clause,
}
for token in tokens:
name, value = token
try:
more_tokens, words = token_handlers[name](value)
except KeyError:
raise NotImplementedError("Unsupported token type " + name)
if more_tokens:
self.process_tokens(more_tokens)
if words:
self.process_words(words)
def process_words(self, words):
"""Process a set of 'words' in pyshyacc parlance, which includes
extraction of executed commands from $() blocks, as well as grabbing
the command name argument.
"""
words = list(words)
for word in list(words):
wtree = pyshlex.make_wordtree(word[1])
for part in wtree:
if not isinstance(part, list):
continue
if part[0] in ('`', '$('):
command = pyshlex.wordtree_as_string(part[1:-1])
self.parse_shell(command)
if word[0] in ("cmd_name", "cmd_word"):
if word in words:
words.remove(word)
usetoken = False
for word in words:
if word[0] in ("cmd_name", "cmd_word") or \
(usetoken and word[0] == "TOKEN"):
if "=" in word[1]:
usetoken = True
continue
cmd = word[1]
if cmd.startswith("$"):
msg.debug(1, None, "Warning: execution of non-literal command '%s'" % cmd)
elif cmd == "eval":
command = " ".join(word for _, word in words[1:])
self.parse_shell(command)
else:
self.allexecs.add(cmd)
break
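A hypothetical usage of the class above, assuming bitbake's bundled pysh package is importable; the shell fragment and the expected result are illustrative:

parser = ShellParser()
cmds = parser.parse_shell('foo --bar "$(baz qux)"\nmyfunc() { inner; }\nmyfunc')
# myfunc is a local function definition, so it is filtered out of execs;
# the command substitution $(baz qux) is parsed recursively.
print(sorted(cmds))   # illustrative expectation: ['baz', 'foo', 'inner']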

View File

@@ -20,7 +20,7 @@ Provide an interface to interact with the bitbake server through 'commands'
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
The bitbake server takes 'commands' from its UI/commandline.
Commands are either synchronous or asynchronous.
Async commands return data to the client in the form of events.
Sync commands must only return data through the function return value
@@ -62,7 +62,7 @@ class Command:
try:
command = commandline.pop(0)
if command in CommandsSync.__dict__:
# Can run synchronous commands straight away
return getattr(CommandsSync, command)(self.cmds_sync, self, commandline)
if self.currentAsyncCommand is not None:
return "Busy (%s in progress)" % self.currentAsyncCommand[0]
@@ -89,17 +89,7 @@ class Command:
return False
else:
return False
except KeyboardInterrupt as exc:
self.finishAsyncCommand("Interrupted")
return False
except SystemExit as exc:
arg = exc.args[0]
if isinstance(arg, basestring):
self.finishAsyncCommand(arg)
else:
self.finishAsyncCommand("Exited with %s" % arg)
return False
except Exception:
except:
import traceback
self.finishAsyncCommand(traceback.format_exc())
return False
@@ -278,3 +268,6 @@ class CookerCommandSetExitCode(bb.event.Event):
def __init__(self, exitcode):
bb.event.Event.__init__(self)
self.exitcode = int(exitcode)

View File

@@ -22,13 +22,11 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from __future__ import print_function
import sys, os, glob, os.path, re, time
import sre_constants
from cStringIO import StringIO
from contextlib import closing
import sys, os, getopt, glob, copy, os.path, re, time
import bb
from bb import utils, data, parse, event, cache, providers, taskdata, command, runqueue
from bb import utils, data, parse, event, cache, providers, taskdata, runqueue
from bb import command
import itertools, sre_constants
class MultipleMatches(Exception):
"""
@@ -70,15 +68,22 @@ class BBCooker:
self.cache = None
self.bb_cache = None
if server:
self.server = server.BitBakeServer(self)
self.configuration = configuration
self.configuration.data = bb.data.init()
if self.configuration.verbose:
bb.msg.set_verbose(True)
if not server:
bb.data.setVar("BB_WORKERCONTEXT", "1", self.configuration.data)
if self.configuration.debug:
bb.msg.set_debug_level(self.configuration.debug)
else:
bb.msg.set_debug_level(0)
if self.configuration.debug_domains:
bb.msg.set_debug_domains(self.configuration.debug_domains)
self.configuration.data = bb.data.init()
bb.data.inheritFromOS(self.configuration.data)
@@ -127,11 +132,11 @@ class BBCooker:
self.commandlineAction = None
if 'world' in self.configuration.pkgs_to_build:
bb.msg.error(bb.msg.domain.Build, "'world' is not a valid target for --environment.")
bb.error("'world' is not a valid target for --environment.")
elif len(self.configuration.pkgs_to_build) > 1:
bb.msg.error(bb.msg.domain.Build, "Only one target can be used with the --environment option.")
bb.error("Only one target can be used with the --environment option.")
elif self.configuration.buildfile and len(self.configuration.pkgs_to_build) > 0:
bb.msg.error(bb.msg.domain.Build, "No target should be used with the --environment and --buildfile options.")
bb.error("No target should be used with the --environment and --buildfile options.")
elif len(self.configuration.pkgs_to_build) > 0:
self.commandlineAction = ["showEnvironmentTarget", self.configuration.pkgs_to_build]
else:
@@ -144,18 +149,21 @@ class BBCooker:
self.commandlineAction = ["showVersions"]
elif self.configuration.parse_only:
self.commandlineAction = ["parseFiles"]
# FIXME - implement
#elif self.configuration.interactive:
# self.interactiveMode()
elif self.configuration.dot_graph:
if self.configuration.pkgs_to_build:
self.commandlineAction = ["generateDotGraph", self.configuration.pkgs_to_build, self.configuration.cmd]
else:
self.commandlineAction = None
bb.msg.error(bb.msg.domain.Build, "Please specify a package name for dependency graph generation.")
bb.error("Please specify a package name for dependency graph generation.")
else:
if self.configuration.pkgs_to_build:
self.commandlineAction = ["buildTargets", self.configuration.pkgs_to_build, self.configuration.cmd]
else:
self.commandlineAction = None
bb.msg.error(bb.msg.domain.Build, "Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
bb.error("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
def runCommands(self, server, data, abort):
"""
@@ -166,6 +174,38 @@ class BBCooker:
return self.command.runAsyncCommand()
def tryBuildPackage(self, fn, item, task, the_data):
"""
Build one task of a package, optionally build following task depends
"""
try:
if not self.configuration.dry_run:
bb.build.exec_task('do_%s' % task, the_data)
return True
except bb.build.FuncFailed:
bb.msg.error(bb.msg.domain.Build, "task stack execution failed")
raise
except bb.build.EventException, e:
event = e.args[1]
bb.msg.error(bb.msg.domain.Build, "%s event exception, aborting" % bb.event.getName(event))
raise
def tryBuild(self, fn, task):
"""
Build a provider and its dependencies.
build_depends is a list of previous build dependencies (not runtime)
If build_depends is empty, we're dealing with a runtime depends
"""
the_data = self.bb_cache.loadDataFull(fn, self.configuration.data)
item = self.status.pkg_fn[fn]
#if bb.build.stamp_is_current('do_%s' % self.configuration.cmd, the_data):
# return True
return self.tryBuildPackage(fn, item, task, the_data)
def showVersions(self):
# Need files parsed
@@ -177,7 +217,7 @@ class BBCooker:
# Sort by priority
for pn in pkg_pn:
(last_ver, last_file, pref_ver, pref_file) = bb.providers.findBestProvider(pn, self.configuration.data, self.status)
(last_ver,last_file,pref_ver,pref_file) = bb.providers.findBestProvider(pn, self.configuration.data, self.status)
preferred_versions[pn] = (pref_ver, pref_file)
latest_versions[pn] = (last_ver, last_file)
@@ -230,23 +270,28 @@ class BBCooker:
if fn:
try:
envdata = self.bb_cache.loadDataFull(fn, self.get_file_appends(fn), self.configuration.data)
except IOError as e:
envdata = self.bb_cache.loadDataFull(fn, self.configuration.data)
except IOError, e:
bb.msg.error(bb.msg.domain.Parsing, "Unable to read %s: %s" % (fn, e))
raise
except Exception as e:
except Exception, e:
bb.msg.error(bb.msg.domain.Parsing, "%s" % e)
raise
class dummywrite:
def __init__(self):
self.writebuf = ""
def write(self, output):
self.writebuf = self.writebuf + output
# emit variables and shell functions
try:
data.update_data(envdata)
with closing(StringIO()) as env:
data.emit_env(env, envdata, True)
bb.msg.plain(env.getvalue())
except Exception as e:
wb = dummywrite()
data.emit_env(wb, envdata, True)
bb.msg.plain(wb.writebuf)
except Exception, e:
bb.msg.fatal(bb.msg.domain.Parsing, "%s" % e)
# emit the metadata which isn't valid shell
data.expandKeys(envdata)
for e in envdata.keys():
@@ -279,9 +324,9 @@ class BBCooker:
taskdata.add_unresolved(localdata, self.status)
rq = bb.runqueue.RunQueue(self, self.configuration.data, self.status, taskdata, runlist)
rq.rqdata.prepare()
rq.prepare_runqueue()
seen_fnids = []
depend_tree = {}
depend_tree["depends"] = {}
depend_tree["tdepends"] = {}
@@ -291,9 +336,9 @@ class BBCooker:
depend_tree["rdepends-pkg"] = {}
depend_tree["rrecs-pkg"] = {}
for task in range(len(rq.rqdata.runq_fnid)):
taskname = rq.rqdata.runq_task[task]
fnid = rq.rqdata.runq_fnid[task]
for task in range(len(rq.runq_fnid)):
taskname = rq.runq_task[task]
fnid = rq.runq_fnid[task]
fn = taskdata.fn_index[fnid]
pn = self.status.pkg_fn[fn]
version = "%s:%s-%s" % self.status.pkg_pepvpr[fn]
@@ -301,13 +346,13 @@ class BBCooker:
depend_tree["pn"][pn] = {}
depend_tree["pn"][pn]["filename"] = fn
depend_tree["pn"][pn]["version"] = version
for dep in rq.rqdata.runq_depends[task]:
depfn = taskdata.fn_index[rq.rqdata.runq_fnid[dep]]
for dep in rq.runq_depends[task]:
depfn = taskdata.fn_index[rq.runq_fnid[dep]]
deppn = self.status.pkg_fn[depfn]
dotname = "%s.%s" % (pn, rq.rqdata.runq_task[task])
dotname = "%s.%s" % (pn, rq.runq_task[task])
if not dotname in depend_tree["tdepends"]:
depend_tree["tdepends"][dotname] = []
depend_tree["tdepends"][dotname].append("%s.%s" % (deppn, rq.rqdata.runq_task[dep]))
depend_tree["tdepends"][dotname].append("%s.%s" % (deppn, rq.runq_task[dep]))
if fnid not in seen_fnids:
seen_fnids.append(fnid)
packages = []
@@ -318,7 +363,7 @@ class BBCooker:
depend_tree["rdepends-pn"][pn] = []
for rdep in taskdata.rdepids[fnid]:
depend_tree["rdepends-pn"][pn].append(taskdata.run_names_index[rdep])
depend_tree["rdepends-pn"][pn].append(taskdata.run_names_index[rdep])
rdepends = self.status.rundeps[fn]
for package in rdepends:
@@ -363,51 +408,51 @@ class BBCooker:
# Prints a flattened form of package-depends below where subpackages of a package are merged into the main pn
depends_file = file('pn-depends.dot', 'w' )
print("digraph depends {", file=depends_file)
print >> depends_file, "digraph depends {"
for pn in depgraph["pn"]:
fn = depgraph["pn"][pn]["filename"]
version = depgraph["pn"][pn]["version"]
print('"%s" [label="%s %s\\n%s"]' % (pn, pn, version, fn), file=depends_file)
print >> depends_file, '"%s" [label="%s %s\\n%s"]' % (pn, pn, version, fn)
for pn in depgraph["depends"]:
for depend in depgraph["depends"][pn]:
print('"%s" -> "%s"' % (pn, depend), file=depends_file)
print >> depends_file, '"%s" -> "%s"' % (pn, depend)
for pn in depgraph["rdepends-pn"]:
for rdepend in depgraph["rdepends-pn"][pn]:
print('"%s" -> "%s" [style=dashed]' % (pn, rdepend), file=depends_file)
print("}", file=depends_file)
print >> depends_file, '"%s" -> "%s" [style=dashed]' % (pn, rdepend)
print >> depends_file, "}"
bb.msg.plain("PN dependencies saved to 'pn-depends.dot'")
depends_file = file('package-depends.dot', 'w' )
print("digraph depends {", file=depends_file)
print >> depends_file, "digraph depends {"
for package in depgraph["packages"]:
pn = depgraph["packages"][package]["pn"]
fn = depgraph["packages"][package]["filename"]
version = depgraph["packages"][package]["version"]
if package == pn:
print('"%s" [label="%s %s\\n%s"]' % (pn, pn, version, fn), file=depends_file)
print >> depends_file, '"%s" [label="%s %s\\n%s"]' % (pn, pn, version, fn)
else:
print('"%s" [label="%s(%s) %s\\n%s"]' % (package, package, pn, version, fn), file=depends_file)
print >> depends_file, '"%s" [label="%s(%s) %s\\n%s"]' % (package, package, pn, version, fn)
for depend in depgraph["depends"][pn]:
print('"%s" -> "%s"' % (package, depend), file=depends_file)
print >> depends_file, '"%s" -> "%s"' % (package, depend)
for package in depgraph["rdepends-pkg"]:
for rdepend in depgraph["rdepends-pkg"][package]:
print('"%s" -> "%s" [style=dashed]' % (package, rdepend), file=depends_file)
print >> depends_file, '"%s" -> "%s" [style=dashed]' % (package, rdepend)
for package in depgraph["rrecs-pkg"]:
for rdepend in depgraph["rrecs-pkg"][package]:
print('"%s" -> "%s" [style=dashed]' % (package, rdepend), file=depends_file)
print("}", file=depends_file)
print >> depends_file, '"%s" -> "%s" [style=dashed]' % (package, rdepend)
print >> depends_file, "}"
bb.msg.plain("Package dependencies saved to 'package-depends.dot'")
tdepends_file = file('task-depends.dot', 'w' )
print("digraph depends {", file=tdepends_file)
print >> tdepends_file, "digraph depends {"
for task in depgraph["tdepends"]:
(pn, taskname) = task.rsplit(".", 1)
fn = depgraph["pn"][pn]["filename"]
version = depgraph["pn"][pn]["version"]
print('"%s.%s" [label="%s %s\\n%s\\n%s"]' % (pn, taskname, pn, taskname, version, fn), file=tdepends_file)
print >> tdepends_file, '"%s.%s" [label="%s %s\\n%s\\n%s"]' % (pn, taskname, pn, taskname, version, fn)
for dep in depgraph["tdepends"][task]:
print('"%s" -> "%s"' % (task, dep), file=tdepends_file)
print("}", file=tdepends_file)
print >> tdepends_file, '"%s" -> "%s"' % (task, dep)
print >> tdepends_file, "}"
bb.msg.plain("Task dependencies saved to 'task-depends.dot'")
def buildDepgraph( self ):
@@ -418,12 +463,9 @@ class BBCooker:
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
matched = set()
def calc_bbfile_priority(filename):
for _, _, regex, pri in self.status.bbfile_config_priorities:
for (regex, pri) in self.status.bbfile_config_priorities:
if regex.match(filename):
if not regex in matched:
matched.add(regex)
return pri
return 0
@@ -442,11 +484,6 @@ class BBCooker:
for p in self.status.pkg_fn:
self.status.bbfile_priority[p] = calc_bbfile_priority(p)
for collection, pattern, regex, _ in self.status.bbfile_config_priorities:
if not regex in matched:
bb.msg.warn(bb.msg.domain.Provider, "No bb files matched BBFILE_PATTERN_%s '%s'" %
(collection, pattern))
def buildWorldTargetList(self):
"""
Build package list for "bitbake world"
@@ -479,59 +516,31 @@ class BBCooker:
"""Drop off into a shell"""
try:
from bb import shell
except ImportError as details:
except ImportError, details:
bb.msg.fatal(bb.msg.domain.Parsing, "Sorry, shell not available (%s)" % details )
else:
shell.start( self )
def _findLayerConf(self):
path = os.getcwd()
while path != "/":
bblayers = os.path.join(path, "conf", "bblayers.conf")
if os.path.exists(bblayers):
return bblayers
path, _ = os.path.split(path)
def parseConfigurationFiles(self, files):
try:
data = self.configuration.data
bb.parse.init_parser(data, self.configuration.dump_signatures)
for f in files:
data = bb.parse.handle(f, data)
layerconf = self._findLayerConf()
if layerconf:
layerconf = os.path.join(os.getcwd(), "conf", "bblayers.conf")
if os.path.exists(layerconf):
bb.msg.debug(2, bb.msg.domain.Parsing, "Found bblayers.conf (%s)" % layerconf)
data = bb.parse.handle(layerconf, data)
layers = (bb.data.getVar('BBLAYERS', data, True) or "").split()
data = bb.data.createCopy(data)
for layer in layers:
bb.msg.debug(2, bb.msg.domain.Parsing, "Adding layer %s" % layer)
bb.data.setVar('LAYERDIR', layer, data)
data = bb.parse.handle(os.path.join(layer, "conf", "layer.conf"), data)
# XXX: Hack, relies on the local keys of the datasmart
# instance being stored in the 'dict' attribute and makes
# assumptions about how variable expansion works, but
# there's no better way to force an expansion of a single
# variable across the datastore today, and this at least
# lets us reference LAYERDIR without having to immediately
# eval all our variables that use it.
for key in data.dict:
if key != "_data":
value = data.getVar(key, False)
if value and "${LAYERDIR}" in value:
data.setVar(key, value.replace("${LAYERDIR}", layer))
bb.data.delVar('LAYERDIR', data)
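The hack described in the comment above amounts to a textual pre-expansion of a single variable across the store. The core of it, in isolation, with a plain dict standing in for the datastore (values illustrative):

layer = "/srv/layers/meta-example"          # illustrative layer path
store = {
    "BBFILES": "${LAYERDIR}/recipes/*.bb",
    "FILESPATH": "${LAYERDIR}/files",
    "DISTRO": "mydistro",
}
for key, value in list(store.items()):
    if value and "${LAYERDIR}" in value:
        store[key] = value.replace("${LAYERDIR}", layer)
print(store["BBFILES"])   # /srv/layers/meta-example/recipes/*.bb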
if not data.getVar("BBPATH", True):
bb.fatal("The BBPATH variable is not set")
data = bb.parse.handle(os.path.join("conf", "bitbake.conf"), data)
self.configuration.data = data
@@ -543,20 +552,16 @@ class BBCooker:
# Normally we only register event handlers at the end of parsing .bb files
# We register any handlers we've found so far here...
for var in bb.data.getVar('__BBHANDLERS', self.configuration.data) or []:
bb.event.register(var, bb.data.getVar(var, self.configuration.data))
for var in data.getVar('__BBHANDLERS', self.configuration.data) or []:
bb.event.register(var,bb.data.getVar(var, self.configuration.data))
if bb.data.getVar("BB_WORKERCONTEXT", self.configuration.data) is None:
bb.fetch.fetcher_init(self.configuration.data)
bb.codeparser.parser_cache_init(self.configuration.data)
bb.parse.init_parser(data, self.configuration.dump_signatures)
bb.fetch.fetcher_init(self.configuration.data)
bb.event.fire(bb.event.ConfigParsed(), self.configuration.data)
except IOError as e:
except IOError, e:
bb.msg.fatal(bb.msg.domain.Parsing, "Error when parsing %s: %s" % (files, str(e)))
except bb.parse.ParseError as details:
except bb.parse.ParseError, details:
bb.msg.fatal(bb.msg.domain.Parsing, "Unable to parse %s (%s)" % (files, details) )
def handleCollections( self, collections ):
@@ -579,7 +584,7 @@ class BBCooker:
continue
try:
pri = int(priority)
self.status.bbfile_config_priorities.append((c, regex, cre, pri))
self.status.bbfile_config_priorities.append((cre, pri))
except ValueError:
bb.msg.error(bb.msg.domain.Parsing, "invalid value for BBFILE_PRIORITY_%s: \"%s\"" % (c, priority))
@@ -588,8 +593,8 @@ class BBCooker:
Setup any variables needed before starting a build
"""
if not bb.data.getVar("BUILDNAME", self.configuration.data):
bb.data.setVar("BUILDNAME", time.strftime('%Y%m%d%H%M'), self.configuration.data)
bb.data.setVar("BUILDSTART", time.strftime('%m/%d/%Y %H:%M:%S', time.gmtime()), self.configuration.data)
bb.data.setVar("BUILDNAME", os.popen('date +%Y%m%d%H%M').readline().strip(), self.configuration.data)
bb.data.setVar("BUILDSTART", time.strftime('%m/%d/%Y %H:%M:%S',time.gmtime()), self.configuration.data)
def matchFiles(self, buildfile):
"""
@@ -597,11 +602,11 @@ class BBCooker:
"""
bf = os.path.abspath(buildfile)
(filelist, masked) = self.collect_bbfiles()
try:
os.stat(bf)
return [bf]
except OSError:
(filelist, masked) = self.collect_bbfiles()
regexp = re.compile(buildfile)
matches = []
for f in filelist:
@@ -636,19 +641,13 @@ class BBCooker:
if (task == None):
task = self.configuration.cmd
self.bb_cache = bb.cache.init(self)
self.status = bb.cache.CacheData()
(fn, cls) = self.bb_cache.virtualfn2realfn(buildfile)
buildfile = self.matchFile(fn)
fn = self.bb_cache.realfn2virtual(buildfile, cls)
fn = self.matchFile(buildfile)
self.buildSetVars()
# Load data into the cache for fn and parse the loaded cache data
the_data = self.bb_cache.loadDataFull(fn, self.get_file_appends(fn), self.configuration.data)
self.bb_cache.setData(fn, buildfile, the_data)
self.bb_cache.handle_data(fn, self.status)
self.bb_cache = bb.cache.init(self)
self.status = bb.cache.CacheData()
self.bb_cache.loadData(fn, self.configuration.data, self.status)
# Tweak some variables
item = self.bb_cache.getVar('PN', fn, True)
@@ -673,9 +672,6 @@ class BBCooker:
buildname = bb.data.getVar("BUILDNAME", self.configuration.data)
bb.event.fire(bb.event.BuildStarted(buildname, [item]), self.configuration.event_data)
# Clear locks
bb.fetch.persistent_database_connection = {}
# Execute the runqueue
runlist = [[item, "do_%s" % task]]
@@ -690,8 +686,8 @@ class BBCooker:
failures = 0
try:
retval = rq.execute_runqueue()
except runqueue.TaskFailure as exc:
for fnid in exc.args:
except runqueue.TaskFailure, fnids:
for fnid in fnids:
bb.msg.error(bb.msg.domain.Build, "'%s' failed" % taskdata.fn_index[fnid])
failures = failures + 1
retval = False
@@ -699,8 +695,6 @@ class BBCooker:
bb.event.fire(bb.event.BuildCompleted(buildname, item, failures), self.configuration.event_data)
self.command.finishAsyncCommand()
return False
if retval is True:
return True
return 0.5
self.server.register_idle_function(buildFileIdle, rq)
@@ -720,6 +714,7 @@ class BBCooker:
targets = self.checkPackages(targets)
def buildTargetsIdle(server, rq, abort):
if abort or self.cookerAction == cookerStop:
rq.finish_runqueue(True)
elif self.cookerAction == cookerShutdown:
@@ -727,8 +722,8 @@ class BBCooker:
failures = 0
try:
retval = rq.execute_runqueue()
except runqueue.TaskFailure as exc:
for fnid in exc.args:
except runqueue.TaskFailure, fnids:
for fnid in fnids:
bb.msg.error(bb.msg.domain.Build, "'%s' failed" % taskdata.fn_index[fnid])
failures = failures + 1
retval = False
@@ -736,8 +731,6 @@ class BBCooker:
bb.event.fire(bb.event.BuildCompleted(buildname, targets, failures), self.configuration.event_data)
self.command.finishAsyncCommand()
return None
if retval is True:
return True
return 0.5
self.buildSetVars()
@@ -757,9 +750,6 @@ class BBCooker:
runlist.append([k, "do_%s" % task])
taskdata.add_unresolved(localdata, self.status)
# Clear locks
bb.fetch.persistent_database_connection = {}
rq = bb.runqueue.RunQueue(self, self.configuration.data, self.status, taskdata, runlist)
self.server.register_idle_function(buildTargetsIdle, rq)
@@ -790,12 +780,13 @@ class BBCooker:
ignore = bb.data.getVar("ASSUME_PROVIDED", self.configuration.data, 1) or ""
self.status.ignored_dependencies = set(ignore.split())
for dep in self.configuration.extra_assume_provided:
self.status.ignored_dependencies.add(dep)
self.handleCollections( bb.data.getVar("BBFILE_COLLECTIONS", self.configuration.data, 1) )
bb.msg.debug(1, bb.msg.domain.Collection, "collecting .bb files")
(filelist, masked) = self.collect_bbfiles()
bb.data.renameVar("__depends", "__base_depends", self.configuration.data)
@@ -830,11 +821,11 @@ class BBCooker:
for f in contents:
(root, ext) = os.path.splitext(f)
if ext == ".bb":
bbfiles.append(os.path.abspath(os.path.join(os.getcwd(), f)))
bbfiles.append(os.path.abspath(os.path.join(os.getcwd(),f)))
return bbfiles
def find_bbfiles( self, path ):
"""Find all the .bb and .bbappend files in a directory"""
"""Find all the .bb files in a directory"""
from os.path import join
found = []
@@ -842,7 +833,7 @@ class BBCooker:
for ignored in ('SCCS', 'CVS', '.svn'):
if ignored in dirs:
dirs.remove(ignored)
found += [join(dir, f) for f in files if (f.endswith('.bb') or f.endswith('.bbappend'))]
found += [join(dir,f) for f in files if f.endswith('.bb')]
return found
@@ -851,8 +842,6 @@ class BBCooker:
parsed, cached, skipped, masked = 0, 0, 0, 0
self.bb_cache = bb.cache.init(self)
bb.msg.debug(1, bb.msg.domain.Collection, "collecting .bb files")
files = (data.getVar( "BBFILES", self.configuration.data, 1 ) or "").split()
data.setVar("BBFILES", " ".join(files), self.configuration.data)
@@ -867,7 +856,9 @@ class BBCooker:
for f in files:
if os.path.isdir(f):
dirfiles = self.find_bbfiles(f)
newfiles.update(dirfiles)
if dirfiles:
newfiles.update(dirfiles)
continue
else:
globbed = glob.glob(f)
if not globbed and os.path.exists(f):
@@ -876,45 +867,23 @@ class BBCooker:
bbmask = bb.data.getVar('BBMASK', self.configuration.data, 1)
if bbmask:
try:
bbmask_compiled = re.compile(bbmask)
except sre_constants.error:
bb.msg.fatal(bb.msg.domain.Collection, "BBMASK is not a valid regular expression.")
if not bbmask:
return (list(newfiles), 0)
bbfiles = []
bbappend = []
try:
bbmask_compiled = re.compile(bbmask)
except sre_constants.error:
bb.msg.fatal(bb.msg.domain.Collection, "BBMASK is not a valid regular expression.")
finalfiles = []
for f in newfiles:
if bbmask and bbmask_compiled.search(f):
if bbmask_compiled.search(f):
bb.msg.debug(1, bb.msg.domain.Collection, "skipping masked file %s" % f)
masked += 1
continue
if f.endswith('.bb'):
bbfiles.append(f)
elif f.endswith('.bbappend'):
bbappend.append(f)
else:
bb.msg.note(1, bb.msg.domain.Collection, "File %s of unknown filetype in BBFILES? Ignoring..." % f)
finalfiles.append(f)
# Build a list of .bbappend files for each .bb file
self.appendlist = {}
for f in bbappend:
base = os.path.basename(f).replace('.bbappend', '.bb')
if not base in self.appendlist:
self.appendlist[base] = []
self.appendlist[base].append(f)
return (bbfiles, masked)
def get_file_appends(self, fn):
"""
Returns a list of .bbappend files to apply to fn
NB: collect_files() must have been called prior to this
"""
f = os.path.basename(fn)
if f in self.appendlist:
return self.appendlist[f]
return []
return (finalfiles, masked)
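The .bbappend bookkeeping above reduces to a basename-keyed mapping. A self-contained sketch, with illustrative paths:

import os

files = ["/meta/recipes/foo_1.0.bb",
         "/layer2/recipes/foo_1.0.bbappend",
         "/layer3/recipes/foo_1.0.bbappend"]

appendlist = {}
for f in files:
    if f.endswith('.bbappend'):
        base = os.path.basename(f).replace('.bbappend', '.bb')
        appendlist.setdefault(base, []).append(f)

def get_file_appends(fn):
    # Return the .bbappend files that apply to recipe fn
    return appendlist.get(os.path.basename(fn), [])

print(get_file_appends("/meta/recipes/foo_1.0.bb"))
# ['/layer2/recipes/foo_1.0.bbappend', '/layer3/recipes/foo_1.0.bbappend']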
def serve(self):
@@ -948,9 +917,9 @@ class BBCooker:
pout.close()
else:
self.server.serve_forever()
bb.event.fire(CookerExit(), self.configuration.event_data)
class CookerExit(bb.event.Event):
"""
Notify clients of the Cooker shutdown
@@ -979,12 +948,12 @@ class CookerParser:
self.pointer = 0
def parse_next(self):
cooker = self.cooker
if self.pointer < len(self.filelist):
f = self.filelist[self.pointer]
cooker = self.cooker
try:
fromCache, skipped, virtuals = cooker.bb_cache.loadData(f, cooker.get_file_appends(f), cooker.configuration.data, cooker.status)
fromCache, skipped, virtuals = cooker.bb_cache.loadData(f, cooker.configuration.data, cooker.status)
if fromCache:
self.cached += 1
else:
@@ -993,7 +962,7 @@ class CookerParser:
self.skipped += skipped
self.virtuals += virtuals
except IOError as e:
except IOError, e:
self.error += 1
cooker.bb_cache.remove(f)
bb.msg.error(bb.msg.domain.Collection, "opening %s: %s" % (f, e))
@@ -1002,7 +971,7 @@ class CookerParser:
cooker.bb_cache.remove(f)
cooker.bb_cache.sync()
raise
except Exception as e:
except Exception, e:
self.error += 1
cooker.bb_cache.remove(f)
bb.msg.error(bb.msg.domain.Collection, "%s while parsing %s" % (e, f))
@@ -1016,8 +985,8 @@ class CookerParser:
if self.pointer >= self.total:
cooker.bb_cache.sync()
bb.codeparser.parser_cache_save(cooker.configuration.data)
if self.error > 0:
raise ParsingErrorsFound
return False
return True

View File

@@ -1,190 +1,191 @@
"""
Python daemonizing helper
Configurable daemon behaviors:
1.) The current working directory set to the "/" directory.
2.) The current file creation mode mask set to 0.
3.) Close all open files (1024).
4.) Redirect standard I/O streams to "/dev/null".
A failed call to fork() now raises an exception.
References:
1) Advanced Programming in the Unix Environment: W. Richard Stevens
2) Unix Programming Frequently Asked Questions:
http://www.erlenstar.demon.co.uk/unix/faq_toc.html
Modified to allow a function to be daemonized and return for
bitbake use by Richard Purdie
"""
__author__ = "Chad J. Schroeder"
__copyright__ = "Copyright (C) 2005 Chad J. Schroeder"
__version__ = "0.2"
# Standard Python modules.
import os # Miscellaneous OS interfaces.
import sys # System-specific parameters and functions.
# Default daemon parameters.
# File mode creation mask of the daemon.
# For BitBake's children, we do want to inherit the parent umask.
UMASK = None
# Default maximum for the number of available file descriptors.
MAXFD = 1024
# The standard I/O file descriptors are redirected to /dev/null by default.
if (hasattr(os, "devnull")):
REDIRECT_TO = os.devnull
else:
REDIRECT_TO = "/dev/null"
def createDaemon(function, logfile):
"""
Detach a process from the controlling terminal and run it in the
background as a daemon, returning control to the caller.
"""
try:
# Fork a child process so the parent can exit. This returns control to
# the command-line or shell. It also guarantees that the child will not
# be a process group leader, since the child receives a new process ID
# and inherits the parent's process group ID. This step is required
# to ensure that the next call to os.setsid is successful.
pid = os.fork()
except OSError as e:
raise Exception("%s [%d]" % (e.strerror, e.errno))
if (pid == 0): # The first child.
# To become the session leader of this new session and the process group
# leader of the new process group, we call os.setsid(). The process is
# also guaranteed not to have a controlling terminal.
os.setsid()
# Is ignoring SIGHUP necessary?
#
# It's often suggested that the SIGHUP signal should be ignored before
# the second fork to avoid premature termination of the process. The
# reason is that when the first child terminates, all processes, e.g.
# the second child, in the orphaned group will be sent a SIGHUP.
#
# "However, as part of the session management system, there are exactly
# two cases where SIGHUP is sent on the death of a process:
#
# 1) When the process that dies is the session leader of a session that
# is attached to a terminal device, SIGHUP is sent to all processes
# in the foreground process group of that terminal device.
# 2) When the death of a process causes a process group to become
# orphaned, and one or more processes in the orphaned group are
# stopped, then SIGHUP and SIGCONT are sent to all members of the
# orphaned group." [2]
#
# The first case can be ignored since the child is guaranteed not to have
# a controlling terminal. The second case isn't so easy to dismiss.
# The process group is orphaned when the first child terminates and
# POSIX.1 requires that every STOPPED process in an orphaned process
# group be sent a SIGHUP signal followed by a SIGCONT signal. Since the
# second child is not STOPPED though, we can safely forego ignoring the
# SIGHUP signal. In any case, there are no ill-effects if it is ignored.
#
# import signal # Set handlers for asynchronous events.
# signal.signal(signal.SIGHUP, signal.SIG_IGN)
try:
# Fork a second child and exit immediately to prevent zombies. This
# causes the second child process to be orphaned, making the init
# process responsible for its cleanup. And, since the first child is
# a session leader without a controlling terminal, it's possible for
# it to acquire one by opening a terminal in the future (System V-
# based systems). This second fork guarantees that the child is no
# longer a session leader, preventing the daemon from ever acquiring
# a controlling terminal.
pid = os.fork() # Fork a second child.
except OSError as e:
raise Exception("%s [%d]" % (e.strerror, e.errno))
if (pid == 0): # The second child.
# We probably don't want the file mode creation mask inherited from
# the parent, so we give the child complete control over permissions.
if UMASK is not None:
os.umask(UMASK)
else:
# Parent (the first child) of the second child.
os._exit(0)
else:
# exit() or _exit()?
# _exit is like exit(), but it doesn't call any functions registered
# with atexit (and on_exit) or any registered signal handlers. It also
# closes any open file descriptors. Using exit() may cause all stdio
# streams to be flushed twice and any temporary files may be unexpectedly
# removed. It's therefore recommended that child branches of a fork()
# and the parent branch(es) of a daemon use _exit().
return
# Close all open file descriptors. This prevents the child from keeping
# open any file descriptors inherited from the parent. There is a variety
# of methods to accomplish this task. Three are listed below.
#
# Try the system configuration variable, SC_OPEN_MAX, to obtain the maximum
# number of open file descriptors to close. If it doesn't exist, use
# the default value (configurable).
#
# try:
# maxfd = os.sysconf("SC_OPEN_MAX")
# except (AttributeError, ValueError):
# maxfd = MAXFD
#
# OR
#
# if (os.sysconf_names.has_key("SC_OPEN_MAX")):
# maxfd = os.sysconf("SC_OPEN_MAX")
# else:
# maxfd = MAXFD
#
# OR
#
# Use the getrlimit method to retrieve the maximum file descriptor number
# that can be opened by this process. If there is no limit on the
# resource, use the default value.
#
import resource # Resource usage information.
maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
if (maxfd == resource.RLIM_INFINITY):
maxfd = MAXFD
# Iterate through and close all file descriptors.
# for fd in range(0, maxfd):
# try:
# os.close(fd)
# except OSError: # ERROR, fd wasn't open to begin with (ignored)
# pass
# Redirect the standard I/O file descriptors to the specified file. Since
# the daemon has no controlling terminal, most daemons redirect stdin,
# stdout, and stderr to /dev/null. This is done to prevent side-effects
# from reads and writes to the standard I/O file descriptors.
# This call to open is guaranteed to return the lowest file descriptor,
# which will be 0 (stdin), since it was closed above.
# os.open(REDIRECT_TO, os.O_RDWR) # standard input (0)
# Duplicate standard input to standard output and standard error.
# os.dup2(0, 1) # standard output (1)
# os.dup2(0, 2) # standard error (2)
si = file('/dev/null', 'r')
so = file(logfile, 'w')
se = so
# Replace those fds with our own
os.dup2(si.fileno(), sys.stdin.fileno())
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())
function()
os._exit(0)
"""
Python Daemonizing helper
Configurable daemon behaviors:
1.) The current working directory set to the "/" directory.
2.) The current file creation mode mask set to 0.
3.) Close all open files (1024).
4.) Redirect standard I/O streams to "/dev/null".
A failed call to fork() now raises an exception.
References:
1) Advanced Programming in the Unix Environment: W. Richard Stevens
2) Unix Programming Frequently Asked Questions:
http://www.erlenstar.demon.co.uk/unix/faq_toc.html
Modified to allow a function to be daemonized and return for
bitbake use by Richard Purdie
"""
__author__ = "Chad J. Schroeder"
__copyright__ = "Copyright (C) 2005 Chad J. Schroeder"
__version__ = "0.2"
# Standard Python modules.
import os # Miscellaneous OS interfaces.
import sys # System-specific parameters and functions.
# Default daemon parameters.
# File mode creation mask of the daemon.
# For BitBake's children, we do want to inherit the parent umask.
UMASK = None
# Default maximum for the number of available file descriptors.
MAXFD = 1024
# The standard I/O file descriptors are redirected to /dev/null by default.
if (hasattr(os, "devnull")):
REDIRECT_TO = os.devnull
else:
REDIRECT_TO = "/dev/null"
def createDaemon(function, logfile):
"""
Detach a process from the controlling terminal and run it in the
background as a daemon, returning control to the caller.
"""
try:
# Fork a child process so the parent can exit. This returns control to
# the command-line or shell. It also guarantees that the child will not
# be a process group leader, since the child receives a new process ID
# and inherits the parent's process group ID. This step is required
# to ensure that the next call to os.setsid is successful.
pid = os.fork()
except OSError, e:
raise Exception, "%s [%d]" % (e.strerror, e.errno)
if (pid == 0): # The first child.
# To become the session leader of this new session and the process group
# leader of the new process group, we call os.setsid(). The process is
# also guaranteed not to have a controlling terminal.
os.setsid()
# Is ignoring SIGHUP necessary?
#
# It's often suggested that the SIGHUP signal should be ignored before
# the second fork to avoid premature termination of the process. The
# reason is that when the first child terminates, all processes, e.g.
# the second child, in the orphaned group will be sent a SIGHUP.
#
# "However, as part of the session management system, there are exactly
# two cases where SIGHUP is sent on the death of a process:
#
# 1) When the process that dies is the session leader of a session that
# is attached to a terminal device, SIGHUP is sent to all processes
# in the foreground process group of that terminal device.
# 2) When the death of a process causes a process group to become
# orphaned, and one or more processes in the orphaned group are
# stopped, then SIGHUP and SIGCONT are sent to all members of the
# orphaned group." [2]
#
# The first case can be ignored since the child is guaranteed not to have
# a controlling terminal. The second case isn't so easy to dismiss.
# The process group is orphaned when the first child terminates and
# POSIX.1 requires that every STOPPED process in an orphaned process
# group be sent a SIGHUP signal followed by a SIGCONT signal. Since the
# second child is not STOPPED though, we can safely forego ignoring the
# SIGHUP signal. In any case, there are no ill-effects if it is ignored.
#
# import signal # Set handlers for asynchronous events.
# signal.signal(signal.SIGHUP, signal.SIG_IGN)
try:
# Fork a second child and exit immediately to prevent zombies. This
# causes the second child process to be orphaned, making the init
# process responsible for its cleanup. And, since the first child is
# a session leader without a controlling terminal, it's possible for
# it to acquire one by opening a terminal in the future (System V-
# based systems). This second fork guarantees that the child is no
# longer a session leader, preventing the daemon from ever acquiring
# a controlling terminal.
pid = os.fork() # Fork a second child.
except OSError, e:
raise Exception, "%s [%d]" % (e.strerror, e.errno)
if (pid == 0): # The second child.
# We probably don't want the file mode creation mask inherited from
# the parent, so we give the child complete control over permissions.
if UMASK is not None:
os.umask(UMASK)
else:
# Parent (the first child) of the second child.
os._exit(0)
else:
# exit() or _exit()?
# _exit is like exit(), but it doesn't call any functions registered
# with atexit (and on_exit) or any registered signal handlers. It also
# closes any open file descriptors. Using exit() may cause all stdio
# streams to be flushed twice and any temporary files may be unexpectedly
# removed. It's therefore recommended that child branches of a fork()
# and the parent branch(es) of a daemon use _exit().
return
# Close all open file descriptors. This prevents the child from keeping
# open any file descriptors inherited from the parent. There is a variety
# of methods to accomplish this task. Three are listed below.
#
# Try the system configuration variable, SC_OPEN_MAX, to obtain the maximum
# number of open file descriptors to close. If it doesn't exist, use
# the default value (configurable).
#
# try:
# maxfd = os.sysconf("SC_OPEN_MAX")
# except (AttributeError, ValueError):
# maxfd = MAXFD
#
# OR
#
# if (os.sysconf_names.has_key("SC_OPEN_MAX")):
# maxfd = os.sysconf("SC_OPEN_MAX")
# else:
# maxfd = MAXFD
#
# OR
#
# Use the getrlimit method to retrieve the maximum file descriptor number
# that can be opened by this process. If there is no limit on the
# resource, use the default value.
#
import resource # Resource usage information.
maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
if (maxfd == resource.RLIM_INFINITY):
maxfd = MAXFD
# Iterate through and close all file descriptors.
# for fd in range(0, maxfd):
# try:
# os.close(fd)
# except OSError: # ERROR, fd wasn't open to begin with (ignored)
# pass
# Redirect the standard I/O file descriptors to the specified file. Since
# the daemon has no controlling terminal, most daemons redirect stdin,
# stdout, and stderr to /dev/null. This is done to prevent side-effects
# from reads and writes to the standard I/O file descriptors.
# This call to open is guaranteed to return the lowest file descriptor,
# which will be 0 (stdin), since it was closed above.
# os.open(REDIRECT_TO, os.O_RDWR) # standard input (0)
# Duplicate standard input to standard output and standard error.
# os.dup2(0, 1) # standard output (1)
# os.dup2(0, 2) # standard error (2)
si = file('/dev/null', 'r')
so = file(logfile, 'w')
se = so
# Replace those fds with our own
os.dup2(si.fileno(), sys.stdin.fileno())
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())
function()
os._exit(0)

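The double fork above is easier to follow with a toy driver. A hypothetical usage sketch, assuming the createDaemon() shown here is importable as bb.daemonize.createDaemon; the payload and log path are invented for illustration:

import os
import time
# from bb import daemonize   # the module shown above

def payload():
    # Runs fully detached: stdout/stderr are redirected to the log file.
    for i in range(3):
        print("tick %d from pid %d" % (i, os.getpid()))
        time.sleep(1)

# daemonize.createDaemon(payload, "/tmp/bitbake-demo.log")
# The caller returns immediately; init reaps the daemonized child.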
View File

@@ -11,7 +11,7 @@ operations. At night the cookie monster came by and
suggested 'give me cookies on setting the variables and
things will work out'. Taking this suggestion into account
applying the skills from the not yet passed 'Entwurf und
Analyse von Algorithmen' lecture and the cookie
monster seems to be right. We will track setVar more carefully
to have faster update_data and expandKeys operations.
@@ -37,41 +37,39 @@ the speed is more critical here.
#
#Based on functions from the base bb module, Copyright 2003 Holger Schurig
import sys, os, re
import sys, os, re, types
if sys.argv[0][-5:] == "pydoc":
path = os.path.dirname(os.path.dirname(sys.argv[1]))
else:
path = os.path.dirname(os.path.dirname(sys.argv[0]))
sys.path.insert(0, path)
from itertools import groupby
sys.path.insert(0,path)
from bb import data_smart
from bb import codeparser
import bb
class VarExpandError(Exception):
pass
_dict_type = data_smart.DataSmart
def init():
"""Return a new object representing the Bitbake data"""
return _dict_type()
def init_db(parent = None):
"""Return a new object representing the Bitbake data,
optionally based on an existing object"""
if parent:
return parent.createCopy()
else:
return _dict_type()
def createCopy(source):
"""Link the source set to the destination
If one does not find the value in the destination set,
search will go on to the source set to get the value.
Values from source are copy-on-write, i.e. any attempt to
modify one of them will end up putting the modified value
in the destination set.
"""
return source.createCopy()
"""Link the source set to the destination
If one does not find the value in the destination set,
search will go on to the source set to get the value.
Value from source are copy-on-write. i.e. any try to
modify one of them will end up putting the modified value
in the destination set.
"""
return source.createCopy()
def initVar(var, d):
"""Non-destructive var init for data structure"""
@@ -79,34 +77,91 @@ def initVar(var, d):
def setVar(var, value, d):
"""Set a variable to a given value"""
d.setVar(var, value)
"""Set a variable to a given value
Example:
>>> d = init()
>>> setVar('TEST', 'testcontents', d)
>>> print getVar('TEST', d)
testcontents
"""
d.setVar(var,value)
def getVar(var, d, exp = 0):
"""Gets the value of a variable"""
return d.getVar(var, exp)
"""Gets the value of a variable
Example:
>>> d = init()
>>> setVar('TEST', 'testcontents', d)
>>> print getVar('TEST', d)
testcontents
"""
return d.getVar(var,exp)
def renameVar(key, newkey, d):
"""Renames a variable from key to newkey"""
"""Renames a variable from key to newkey
Example:
>>> d = init()
>>> setVar('TEST', 'testcontents', d)
>>> renameVar('TEST', 'TEST2', d)
>>> print getVar('TEST2', d)
testcontents
"""
d.renameVar(key, newkey)
def delVar(var, d):
"""Removes a variable from the data set"""
"""Removes a variable from the data set
Example:
>>> d = init()
>>> setVar('TEST', 'testcontents', d)
>>> print getVar('TEST', d)
testcontents
>>> delVar('TEST', d)
>>> print getVar('TEST', d)
None
"""
d.delVar(var)
def setVarFlag(var, flag, flagvalue, d):
"""Set a flag for a given variable to a given value"""
d.setVarFlag(var, flag, flagvalue)
"""Set a flag for a given variable to a given value
Example:
>>> d = init()
>>> setVarFlag('TEST', 'python', 1, d)
>>> print getVarFlag('TEST', 'python', d)
1
"""
d.setVarFlag(var,flag,flagvalue)
def getVarFlag(var, flag, d):
"""Gets given flag from given var"""
return d.getVarFlag(var, flag)
"""Gets given flag from given var
Example:
>>> d = init()
>>> setVarFlag('TEST', 'python', 1, d)
>>> print getVarFlag('TEST', 'python', d)
1
"""
return d.getVarFlag(var,flag)
def delVarFlag(var, flag, d):
"""Removes a given flag from the variable's flags"""
d.delVarFlag(var, flag)
"""Removes a given flag from the variable's flags
Example:
>>> d = init()
>>> setVarFlag('TEST', 'testflag', 1, d)
>>> print getVarFlag('TEST', 'testflag', d)
1
>>> delVarFlag('TEST', 'testflag', d)
>>> print getVarFlag('TEST', 'testflag', d)
None
"""
d.delVarFlag(var,flag)
def setVarFlags(var, flags, d):
"""Set the flags for a given variable
@@ -115,27 +170,115 @@ def setVarFlags(var, flags, d):
setVarFlags will not clear previous
flags. Think of this method as
addVarFlags
Example:
>>> d = init()
>>> myflags = {}
>>> myflags['test'] = 'blah'
>>> setVarFlags('TEST', myflags, d)
>>> print getVarFlag('TEST', 'test', d)
blah
"""
d.setVarFlags(var, flags)
d.setVarFlags(var,flags)
def getVarFlags(var, d):
"""Gets a variable's flags"""
"""Gets a variable's flags
Example:
>>> d = init()
>>> setVarFlag('TEST', 'test', 'blah', d)
>>> print getVarFlags('TEST', d)['test']
blah
"""
return d.getVarFlags(var)
def delVarFlags(var, d):
"""Removes a variable's flags"""
"""Removes a variable's flags
Example:
>>> data = init()
>>> setVarFlag('TEST', 'testflag', 1, data)
>>> print getVarFlag('TEST', 'testflag', data)
1
>>> delVarFlags('TEST', data)
>>> print getVarFlags('TEST', data)
None
"""
d.delVarFlags(var)
def keys(d):
"""Return a list of keys in d"""
"""Return a list of keys in d
Example:
>>> d = init()
>>> setVar('TEST', 1, d)
>>> setVar('MOO' , 2, d)
>>> setVarFlag('TEST', 'test', 1, d)
>>> keys(d)
['TEST', 'MOO']
"""
return d.keys()
def getData(d):
"""Returns the data object used"""
return d
def setData(newData, d):
"""Sets the data object to the supplied value"""
d = newData
##
## Cookie Monsters' query functions
##
def _get_override_vars(d, override):
"""
Internal!!!
Get the Names of Variables that have a specific
override. This function returns an iterable
Set or an empty list
"""
return []
def _get_var_flags_triple(d):
"""
Internal!!!
"""
return []
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
def expand(s, d, varname = None):
"""Variable expansion using the data store"""
"""Variable expansion using the data store.
Example:
Standard expansion:
>>> d = init()
>>> setVar('A', 'sshd', d)
>>> print expand('/usr/bin/${A}', d)
/usr/bin/sshd
Python expansion:
>>> d = init()
>>> print expand('result: ${@37 * 72}', d)
result: 2664
Shell expansion:
>>> d = init()
>>> print expand('${TARGET_MOO}', d)
${TARGET_MOO}
>>> setVar('TARGET_MOO', 'yupp', d)
>>> print expand('${TARGET_MOO}',d)
yupp
>>> setVar('SRC_URI', 'http://somebug.${TARGET_MOO}', d)
>>> delVar('TARGET_MOO', d)
>>> print expand('${SRC_URI}', d)
http://somebug.${TARGET_MOO}
"""
return d.expand(s, varname)
def expandKeys(alterdata, readdata = None):
@@ -152,13 +295,38 @@ def expandKeys(alterdata, readdata = None):
continue
todolist[key] = ekey
# These two for loops are split for performance to maximise the
# usefulness of the expand cache
for key in todolist:
ekey = todolist[key]
renameVar(key, ekey, alterdata)
def expandData(alterdata, readdata = None):
"""For each variable in alterdata, expand it, and update the var contents.
Replacements use data from readdata.
Example:
>>> a=init()
>>> b=init()
>>> setVar("dlmsg", "dl_dir is ${DL_DIR}", a)
>>> setVar("DL_DIR", "/path/to/whatever", b)
>>> expandData(a, b)
>>> print getVar("dlmsg", a)
dl_dir is /path/to/whatever
"""
if readdata == None:
readdata = alterdata
for key in keys(alterdata):
val = getVar(key, alterdata)
if type(val) is not types.StringType:
continue
expanded = expand(val, readdata)
# print "key is %s, val is %s, expanded is %s" % (key, val, expanded)
if val != expanded:
setVar(key, expanded, alterdata)
def inheritFromOS(d):
"""Inherit variables from the environment."""
for s in os.environ.keys():
@@ -183,15 +351,21 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
if all:
oval = getVar(var, d, 0)
val = getVar(var, d, 1)
except (KeyboardInterrupt, bb.build.FuncFailed):
except KeyboardInterrupt:
raise
except Exception, exc:
o.write('# expansion of %s threw %s: %s\n' % (var, exc.__class__.__name__, str(exc)))
except:
excname = str(sys.exc_info()[0])
if excname == "bb.build.FuncFailed":
raise
o.write('# expansion of %s threw %s\n' % (var, excname))
return 0
if all:
o.write('# %s=%s\n' % (var, oval))
if type(val) is not types.StringType:
return 0
if (var.find("-") != -1 or var.find(".") != -1 or var.find('{') != -1 or var.find('}') != -1 or var.find('+') != -1) and not all:
return 0
@@ -201,11 +375,10 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
o.write('unset %s\n' % varExpanded)
return 1
val.rstrip()
if not val:
return 0
val = str(val)
if func:
# NOTE: should probably check for unbalanced {} within the var
o.write("%s() {\n%s\n}\n" % (varExpanded, val))
@@ -220,111 +393,173 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
o.write('%s="%s"\n' % (varExpanded, alter))
return 1
def emit_env(o=sys.__stdout__, d = init(), all=False):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
isfunc = lambda key: bool(d.getVarFlag(key, "func"))
keys = sorted((key for key in d.keys() if not key.startswith("__")), key=isfunc)
grouped = groupby(keys, isfunc)
for isfunc, keys in grouped:
for key in keys:
emit_var(key, o, d, all and not isfunc) and o.write('\n')
env = keys(d)
def export_vars(d):
keys = (key for key in d.keys() if d.getVarFlag(key, "export"))
ret = {}
for k in keys:
try:
v = d.getVar(k, True)
if v:
ret[k] = v
except (KeyboardInterrupt, bb.build.FuncFailed):
raise
except Exception, exc:
pass
return ret
for e in env:
if getVarFlag(e, "func", d):
continue
emit_var(e, o, d, all) and o.write('\n')
def emit_func(func, o=sys.__stdout__, d = init()):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
keys = (key for key in d.keys() if not key.startswith("__") and not d.getVarFlag(key, "func"))
for key in keys:
emit_var(key, o, d, False) and o.write('\n')
emit_var(func, o, d, False) and o.write('\n')
newdeps = bb.codeparser.ShellParser().parse_shell(d.getVar(func, True))
seen = set()
while newdeps:
deps = newdeps
seen |= deps
newdeps = set()
for dep in deps:
if bb.data.getVarFlag(dep, "func", d):
emit_var(dep, o, d, False) and o.write('\n')
newdeps |= bb.codeparser.ShellParser().parse_shell(d.getVar(dep, True))
newdeps -= seen
for e in env:
if not getVarFlag(e, "func", d):
continue
emit_var(e, o, d) and o.write('\n')
def update_data(d):
"""Performs final steps upon the datastore, including application of overrides"""
d.finalize()
"""Modifies the environment vars according to local overrides and commands.
Examples:
Appending to a variable:
>>> d = init()
>>> setVar('TEST', 'this is a', d)
>>> setVar('TEST_append', ' test', d)
>>> setVar('TEST_append', ' of the emergency broadcast system.', d)
>>> update_data(d)
>>> print getVar('TEST', d)
this is a test of the emergency broadcast system.
def build_dependencies(key, keys, shelldeps, d):
deps = set()
try:
if d.getVarFlag(key, "func"):
if d.getVarFlag(key, "python"):
parsedvar = d.expandWithRefs(d.getVar(key, False), key)
parser = bb.codeparser.PythonParser()
parser.parse_python(parsedvar.value)
deps = deps | parser.references
else:
parsedvar = d.expandWithRefs(d.getVar(key, False), key)
parser = bb.codeparser.ShellParser()
parser.parse_shell(parsedvar.value)
deps = deps | shelldeps
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
else:
parser = d.expandWithRefs(d.getVar(key, False), key)
deps |= parser.references
deps = deps | (keys & parser.execs)
deps |= set((d.getVarFlag(key, "vardeps") or "").split())
except:
bb.note("Error expanding variable %s" % key)
raise
return deps
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
#d.setVarFlag(key, "vardeps", deps)
Prepending to a variable:
>>> setVar('TEST', 'virtual/libc', d)
>>> setVar('TEST_prepend', 'virtual/tmake ', d)
>>> setVar('TEST_prepend', 'virtual/patcher ', d)
>>> update_data(d)
>>> print getVar('TEST', d)
virtual/patcher virtual/tmake virtual/libc
def generate_dependencies(d):
Overrides:
>>> setVar('TEST_arm', 'target', d)
>>> setVar('TEST_ramses', 'machine', d)
>>> setVar('TEST_local', 'local', d)
>>> setVar('OVERRIDES', 'arm', d)
keys = set(key for key in d.keys() if not key.startswith("__"))
shelldeps = set(key for key in keys if d.getVarFlag(key, "export") and not d.getVarFlag(key, "unexport"))
>>> setVar('TEST', 'original', d)
>>> update_data(d)
>>> print getVar('TEST', d)
target
deps = {}
taskdeps = {}
>>> setVar('OVERRIDES', 'arm:ramses:local', d)
>>> setVar('TEST', 'original', d)
>>> update_data(d)
>>> print getVar('TEST', d)
local
tasklist = bb.data.getVar('__BBTASKS', d) or []
for task in tasklist:
deps[task] = build_dependencies(task, keys, shelldeps, d)
CopyMonster:
>>> e = d.createCopy()
>>> setVar('TEST_foo', 'foo', e)
>>> update_data(e)
>>> print getVar('TEST', e)
local
>>> setVar('OVERRIDES', 'arm:ramses:local:foo', e)
>>> update_data(e)
>>> print getVar('TEST', e)
foo
>>> f = d.createCopy()
>>> setVar('TEST_moo', 'something', f)
>>> setVar('OVERRIDES', 'moo:arm:ramses:local:foo', e)
>>> update_data(e)
>>> print getVar('TEST', e)
foo
>>> h = init()
>>> setVar('SRC_URI', 'file://append.foo;patch=1 ', h)
>>> g = h.createCopy()
>>> setVar('SRC_URI_append_arm', 'file://other.foo;patch=1', g)
>>> setVar('OVERRIDES', 'arm:moo', g)
>>> update_data(g)
>>> print getVar('SRC_URI', g)
file://append.foo;patch=1 file://other.foo;patch=1
"""
bb.msg.debug(2, bb.msg.domain.Data, "update_data()")
# now ask the cookie monster for help
#print "Cookie Monster"
#print "Append/Prepend %s" % d._special_values
#print "Overrides %s" % d._seen_overrides
overrides = (getVar('OVERRIDES', d, 1) or "").split(':') or []
#
# Well let us see what breaks here. We used to iterate
# over each variable and apply the override and then
# do the line expanding.
# If we have bad luck - which we will have - the keys
# were in some order that is so important for this
# method which we don't have anymore.
# Anyway we will fix that and write test cases this
# time.
#
# First we apply all overrides
# Then we will handle _append and _prepend
#
for o in overrides:
# calculate '_'+override
l = len(o)+1
# see if one should even try
if not d._seen_overrides.has_key(o):
continue
vars = d._seen_overrides[o]
for var in vars:
name = var[:-l]
try:
d[name] = d[var]
except:
bb.msg.note(1, bb.msg.domain.Data, "Untracked delVar")
# now on to the appends and prepends
if d._special_values.has_key('_append'):
appends = d._special_values['_append'] or []
for append in appends:
for (a, o) in getVarFlag(append, '_append', d) or []:
# maybe the OVERRIDE was not yet added so keep the append
if (o and o in overrides) or not o:
delVarFlag(append, '_append', d)
if o and not o in overrides:
continue
sval = getVar(append,d) or ""
sval+=a
setVar(append, sval, d)
if d._special_values.has_key('_prepend'):
prepends = d._special_values['_prepend'] or []
for prepend in prepends:
for (a, o) in getVarFlag(prepend, '_prepend', d) or []:
# maybe the OVERRIDE was not yet added so keep the prepend
if (o and o in overrides) or not o:
delVarFlag(prepend, '_prepend', d)
if o and not o in overrides:
continue
sval = a + (getVar(prepend,d) or "")
setVar(prepend, sval, d)
newdeps = deps[task]
seen = set()
while newdeps:
nextdeps = newdeps
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep not in deps:
deps[dep] = build_dependencies(dep, keys, shelldeps, d)
newdeps |= deps[dep]
newdeps -= seen
taskdeps[task] = seen | newdeps
#print "For %s: %s" % (task, str(taskdeps[task]))
return taskdeps, deps
def inherits_class(klass, d):
val = getVar('__inherit_cache', d) or []
if os.path.join('classes', '%s.bbclass' % klass) in val:
return True
return False
def _test():
"""Start a doctest run on this module"""
import doctest
import bb
from bb import data
bb.msg.set_debug_level(0)
doctest.testmod(data)
if __name__ == "__main__":
_test()

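The update_data() doctests above pin down a specific ordering: every matching override is applied first, in OVERRIDES order, and only then are the _append/_prepend queues flushed. A toy re-implementation over plain dicts (DataSmart's copy-on-write store and flag tracking are elided):

def resolve(store, overrides):
    d = dict(store)
    for o in overrides:                      # apply overrides first
        for var in list(d):
            if var.endswith("_" + o):
                d[var[:-(len(o) + 1)]] = d[var]
    for var in list(d):                      # then flush the appends
        if var.endswith("_append"):
            base = var[:-len("_append")]
            d[base] = d.get(base, "") + d[var]
    return d

store = {"TEST": "this is a", "TEST_append": " test", "TEST_arm": "target"}
print(resolve(store, ["arm"])["TEST"])       # -> target test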
View File

@@ -28,49 +28,22 @@ BitBake build tools.
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import copy, re, sys
import copy, os, re, sys, time, types
import bb
from bb import utils
from bb.COW import COWDictBase
from bb import utils, methodpool
from COW import COWDictBase
from new import classobj
__setvar_keyword__ = ["_append", "_prepend"]
__setvar_keyword__ = ["_append","_prepend"]
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend)(_(?P<add>.*))?')
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
class VariableParse:
def __init__(self, varname, d, val = None):
self.varname = varname
self.d = d
self.value = val
self.references = set()
self.execs = set()
def var_sub(self, match):
key = match.group()[2:-1]
if self.varname and key:
if self.varname == key:
raise Exception("variable %s references itself!" % self.varname)
var = self.d.getVar(key, 1)
if var is not None:
self.references.add(key)
return var
else:
return match.group()
def python_sub(self, match):
code = match.group()[3:-1]
codeobj = compile(code.strip(), self.varname or "<expansion>", "eval")
parser = bb.codeparser.PythonParser()
parser.parse_python(code)
self.references |= parser.references
self.execs |= parser.execs
value = utils.better_eval(codeobj, {"d": self.d})
return str(value)
_expand_globals = {
"os": os,
"bb": bb,
"time": time,
}
class DataSmart:
@@ -82,121 +55,69 @@ class DataSmart:
self._seen_overrides = seen
self.expand_cache = {}
self.expand_locals = {"d": self}
def expandWithRefs(self, s, varname):
def expand(self,s, varname):
def var_sub(match):
key = match.group()[2:-1]
if varname and key:
if varname == key:
raise Exception("variable %s references itself!" % varname)
var = self.getVar(key, 1)
if var is not None:
return var
else:
return match.group()
if not isinstance(s, basestring): # sanity check
return VariableParse(varname, self, s)
def python_sub(match):
import bb
code = match.group()[3:-1]
s = eval(code, _expand_globals, self.expand_locals)
if type(s) == types.IntType: s = str(s)
return s
if type(s) is not types.StringType: # sanity check
return s
if varname and varname in self.expand_cache:
return self.expand_cache[varname]
varparse = VariableParse(varname, self)
while s.find('${') != -1:
olds = s
try:
s = __expand_var_regexp__.sub(varparse.var_sub, s)
s = __expand_python_regexp__.sub(varparse.python_sub, s)
if s == olds:
break
s = __expand_var_regexp__.sub(var_sub, s)
s = __expand_python_regexp__.sub(python_sub, s)
if s == olds: break
if type(s) is not types.StringType: # sanity check
bb.msg.error(bb.msg.domain.Data, 'expansion of %s returned non-string %s' % (olds, s))
except KeyboardInterrupt:
raise
except:
bb.msg.note(1, bb.msg.domain.Data, "%s:%s while evaluating:\n%s" % (sys.exc_info()[0], sys.exc_info()[1], s))
raise
varparse.value = s
if varname:
self.expand_cache[varname] = varparse
self.expand_cache[varname] = s
return varparse
def expand(self, s, varname):
return self.expandWithRefs(s, varname).value
def finalize(self):
"""Performs final steps upon the datastore, including application of overrides"""
overrides = (self.getVar("OVERRIDES", True) or "").split(":") or []
#
# Well let us see what breaks here. We used to iterate
# over each variable and apply the override and then
# do the line expanding.
# If we have bad luck - which we will have - the keys
# were in some order that is so important for this
# method which we don't have anymore.
# Anyway we will fix that and write test cases this
# time.
#
# First we apply all overrides
# Then we will handle _append and _prepend
#
for o in overrides:
# calculate '_'+override
l = len(o) + 1
# see if one should even try
if o not in self._seen_overrides:
continue
vars = self._seen_overrides[o]
for var in vars:
name = var[:-l]
try:
self[name] = self[var]
except Exception:
bb.msg.note(1, bb.msg.domain.Data, "Untracked delVar")
# now on to the appends and prepends
if "_append" in self._special_values:
appends = self._special_values["_append"] or []
for append in appends:
for (a, o) in self.getVarFlag(append, "_append") or []:
# maybe the OVERRIDE was not yet added so keep the append
if (o and o in overrides) or not o:
self.delVarFlag(append, "_append")
if o and not o in overrides:
continue
sval = self.getVar(append, False) or ""
sval += a
self.setVar(append, sval)
if "_prepend" in self._special_values:
prepends = self._special_values["_prepend"] or []
for prepend in prepends:
for (a, o) in self.getVarFlag(prepend, "_prepend") or []:
# maybe the OVERRIDE was not yet added so keep the prepend
if (o and o in overrides) or not o:
self.delVarFlag(prepend, "_prepend")
if o and not o in overrides:
continue
sval = a + (self.getVar(prepend, False) or "")
self.setVar(prepend, sval)
return s
def initVar(self, var):
self.expand_cache = {}
if not var in self.dict:
self.dict[var] = {}
def _findVar(self, var):
dest = self.dict
while dest:
if var in dest:
return dest[var]
def _findVar(self,var):
_dest = self.dict
if "_data" not in dest:
while (_dest and var not in _dest):
if not "_data" in _dest:
_dest = None
break
dest = dest["_data"]
_dest = _dest["_data"]
if _dest and var in _dest:
return _dest[var]
return None
def _makeShadowCopy(self, var):
if var in self.dict:
@@ -209,7 +130,7 @@ class DataSmart:
else:
self.initVar(var)
def setVar(self, var, value):
def setVar(self,var,value):
self.expand_cache = {}
match = __setvar_regexp__.match(var)
if match and match.group("keyword") in __setvar_keyword__:
@@ -224,7 +145,7 @@ class DataSmart:
# pay the cookie monster
try:
self._special_values[keyword].add( base )
except KeyError:
except:
self._special_values[keyword] = set()
self._special_values[keyword].add( base )
@@ -236,23 +157,23 @@ class DataSmart:
# more cookies for the cookie monster
if '_' in var:
override = var[var.rfind('_')+1:]
if override not in self._seen_overrides:
if not self._seen_overrides.has_key(override):
self._seen_overrides[override] = set()
self._seen_overrides[override].add( var )
# setting var
self.dict[var]["content"] = value
def getVar(self, var, exp):
value = self.getVarFlag(var, "content")
def getVar(self,var,exp):
value = self.getVarFlag(var,"content")
if exp and value:
return self.expand(value, var)
return self.expand(value,var)
return value
def renameVar(self, key, newkey):
"""
Rename the variable key to newkey
"""
val = self.getVar(key, 0)
if val is not None:
@@ -266,30 +187,30 @@ class DataSmart:
dest = self.getVarFlag(newkey, i) or []
dest.extend(src)
self.setVarFlag(newkey, i, dest)
if i in self._special_values and key in self._special_values[i]:
if self._special_values.has_key(i) and key in self._special_values[i]:
self._special_values[i].remove(key)
self._special_values[i].add(newkey)
self.delVar(key)
def delVar(self, var):
def delVar(self,var):
self.expand_cache = {}
self.dict[var] = {}
def setVarFlag(self, var, flag, flagvalue):
def setVarFlag(self,var,flag,flagvalue):
if not var in self.dict:
self._makeShadowCopy(var)
self.dict[var][flag] = flagvalue
def getVarFlag(self, var, flag):
def getVarFlag(self,var,flag):
local_var = self._findVar(var)
if local_var:
if flag in local_var:
return copy.copy(local_var[flag])
return None
def delVarFlag(self, var, flag):
def delVarFlag(self,var,flag):
local_var = self._findVar(var)
if not local_var:
return
@@ -299,7 +220,7 @@ class DataSmart:
if var in self.dict and flag in self.dict[var]:
del self.dict[var][flag]
def setVarFlags(self, var, flags):
def setVarFlags(self,var,flags):
if not var in self.dict:
self._makeShadowCopy(var)
@@ -308,7 +229,7 @@ class DataSmart:
continue
self.dict[var][i] = flags[i]
def getVarFlags(self, var):
def getVarFlags(self,var):
local_var = self._findVar(var)
flags = {}
@@ -323,7 +244,7 @@ class DataSmart:
return flags
def delVarFlags(self, var):
def delVarFlags(self,var):
if not var in self.dict:
self._makeShadowCopy(var)
@@ -353,19 +274,21 @@ class DataSmart:
def keys(self):
def _keys(d, mykey):
if "_data" in d:
_keys(d["_data"], mykey)
_keys(d["_data"],mykey)
for key in d.keys():
if key != "_data":
mykey[key] = None
keytab = {}
_keys(self.dict, keytab)
_keys(self.dict,keytab)
return keytab.keys()
def __getitem__(self, item):
def __getitem__(self,item):
#print "Warning deprecated"
return self.getVar(item, False)
def __setitem__(self, var, data):
def __setitem__(self,var,data):
#print "Warning deprecated"
self.setVar(var, data)
self.setVar(var,data)

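Both flavours of expand() above (the VariableParse version and the older closure version) converge on the same loop: substitute ${VAR} references and ${@...} Python snippets repeatedly until the string stops changing. A stripped-down sketch, minus the caching, reference tracking and restricted eval globals of the real class:

import re

VAR_RE = re.compile(r"\${[^{}@][^{}]*}")   # plain ${VAR} references
PY_RE = re.compile(r"\${@.+?}")            # ${@...} Python snippets

def expand(s, d):
    def var_sub(m):
        return d.get(m.group()[2:-1], m.group())  # unknown vars stay literal
    def py_sub(m):
        return str(eval(m.group()[3:-1]))
    while "${" in s:
        olds = s
        s = VAR_RE.sub(var_sub, s)
        s = PY_RE.sub(py_sub, s)
        if s == olds:
            break                 # fixpoint, or an unresolvable reference
    return s

print(expand("/usr/bin/${A}", {"A": "sshd"}))  # /usr/bin/sshd
print(expand("result: ${@37 * 72}", {}))       # result: 2664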
View File

@@ -22,8 +22,7 @@ BitBake build tools.
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os, sys
import warnings
import os, re, sys
import bb.utils
import pickle
@@ -31,7 +30,6 @@ import pickle
# the runqueue forks off.
worker_pid = 0
worker_pipe = None
useStdout = True
class Event:
"""Base class for events"""
@@ -40,7 +38,7 @@ class Event:
self.pid = worker_pid
NotHandled = 0
Handled = 1
Registered = 10
AlreadyRegistered = 14
@@ -50,25 +48,13 @@ _handlers = {}
_ui_handlers = {}
_ui_handler_seq = 0
# For compatibility
bb.utils._context["NotHandled"] = NotHandled
bb.utils._context["Handled"] = Handled
def fire_class_handlers(event, d):
import bb.msg
if isinstance(event, bb.msg.MsgBase):
return
for handler in _handlers:
h = _handlers[handler]
event.data = d
if type(h).__name__ == "code":
locals = {"e": event}
bb.utils.simple_exec(h, locals)
ret = bb.utils.better_eval("tmpHandler(e)", locals)
if ret is not None:
warnings.warn("Using Handled/NotHandled in event handlers is deprecated",
DeprecationWarning, stacklevel = 2)
exec(h)
tmpHandler(event)
else:
h(event)
del event.data
@@ -90,9 +76,9 @@ def fire_ui_handlers(event, d):
def fire(event, d):
"""Fire off an Event"""
# We can fire class handlers in the worker process context and this is
# desired so they get the task based datastore.
# UI handlers need to be fired in the server context so we defer this. They
# don't have a datastore so the datastore context isn't a problem.
fire_class_handlers(event, d)
@@ -103,14 +89,16 @@ def fire(event, d):
def worker_fire(event, d):
data = "<event>" + pickle.dumps(event) + "</event>"
worker_pipe.write(data)
worker_pipe.flush()
try:
if os.write(worker_pipe, data) != len (data):
print "Error sending event to server (short write)"
except OSError:
sys.exit(1)
def fire_from_worker(event, d):
if not event.startswith("<event>") or not event.endswith("</event>"):
print("Error, not an event %s" % event)
print "Error, not an event"
return
#print "Got event %s" % event
event = pickle.loads(event[7:-8])
fire_ui_handlers(event, d)
@@ -139,7 +127,6 @@ def remove(name, handler):
def register_UIHhandler(handler):
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
bb.event.useStdout = False
return _ui_handler_seq
def unregister_UIHhandler(handlerNum):
@@ -235,11 +222,10 @@ class BuildCompleted(BuildBase):
class NoProvider(Event):
"""No Provider for an Event"""
def __init__(self, item, runtime=False, dependees=None):
def __init__(self, item, runtime=False):
Event.__init__(self)
self._item = item
self._runtime = runtime
self._dependees = dependees
def getItem(self):
return self._item
@@ -298,3 +284,4 @@ class DepTreeGenerated(Event):
def __init__(self, depgraph):
Event.__init__(self)
self._depgraph = depgraph

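The worker_fire/fire_from_worker pair above frames pickled events with literal <event>...</event> markers so the server side can pick whole events out of the pipe stream; the 7 and -8 offsets in the slice are just the marker lengths. A minimal round-trip sketch with a list standing in for the pipe:

import pickle

class Tick:
    def __init__(self, n):
        self.n = n

def worker_fire(event, pipe):
    pipe.append(b"<event>" + pickle.dumps(event) + b"</event>")

def fire_from_worker(data):
    if not (data.startswith(b"<event>") and data.endswith(b"</event>")):
        print("Error, not an event")
        return None
    return pickle.loads(data[7:-8])   # strip the 7/8-byte markers

pipe = []
worker_fire(Tick(3), pipe)
print(fire_from_worker(pipe[0]).n)    # -> 3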
View File

@@ -24,8 +24,6 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
from __future__ import absolute_import
from __future__ import print_function
import os, re
import bb
from bb import data
@@ -55,6 +53,24 @@ class InvalidSRCREV(Exception):
def decodeurl(url):
"""Decodes an URL into the tokens (scheme, network location, path,
user, password, parameters).
>>> decodeurl("http://www.google.com/index.html")
('http', 'www.google.com', '/index.html', '', '', {})
>>> decodeurl("file://gas/COPYING")
('file', '', 'gas/COPYING', '', '', {})
CVS url with username, host and cvsroot. The cvs module to check out is in the
parameters:
>>> decodeurl("cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg")
('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', '', {'module': 'familiar/dist/ipkg'})
Ditto, but this time the username has a password part. And we also request a special tag
to check out.
>>> decodeurl("cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;module=familiar/dist/ipkg;tag=V0-99-81")
('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', 'anonymous', {'tag': 'V0-99-81', 'module': 'familiar/dist/ipkg'})
"""
m = re.compile('(?P<type>[^:]*)://((?P<user>.+)@)?(?P<location>[^;]+)(;(?P<parm>.*))?').match(url)
@@ -87,7 +103,7 @@ def decodeurl(url):
p = {}
if parm:
for s in parm.split(';'):
s1, s2 = s.split('=')
s1,s2 = s.split('=')
p[s1] = s2
return (type, host, path, user, pswd, p)
@@ -95,12 +111,27 @@ def decodeurl(url):
def encodeurl(decoded):
"""Encodes a URL from tokens (scheme, network location, path,
user, password, parameters).
>>> encodeurl(['http', 'www.google.com', '/index.html', '', '', {}])
'http://www.google.com/index.html'
CVS with username, host and cvsroot. The cvs module to check out is in the
parameters:
>>> encodeurl(['cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', '', {'module': 'familiar/dist/ipkg'}])
'cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg'
Ditto, but this time the username has a password part. And we also request a special tag
to check out.
>>> encodeurl(['cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', 'anonymous', {'tag': 'V0-99-81', 'module': 'familiar/dist/ipkg'}])
'cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg'
"""
(type, host, path, user, pswd, p) = decoded
if not type or not path:
raise MissingParameterError("Type or path url components missing when encoding %s" % decoded)
bb.msg.fatal(bb.msg.domain.Fetcher, "invalid or missing parameters for url encoding")
url = '%s://' % type
if user:
url += "%s" % user
@@ -120,14 +151,15 @@ def uri_replace(uri, uri_find, uri_replace, d):
# bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: operating on %s" % uri)
if not uri or not uri_find or not uri_replace:
bb.msg.debug(1, bb.msg.domain.Fetcher, "uri_replace: passed an undefined value, not replacing")
uri_decoded = list(decodeurl(uri))
uri_find_decoded = list(decodeurl(uri_find))
uri_replace_decoded = list(decodeurl(uri_replace))
result_decoded = ['', '', '', '', '', {}]
uri_decoded = list(bb.decodeurl(uri))
uri_find_decoded = list(bb.decodeurl(uri_find))
uri_replace_decoded = list(bb.decodeurl(uri_replace))
result_decoded = ['','','','','',{}]
for i in uri_find_decoded:
loc = uri_find_decoded.index(i)
result_decoded[loc] = uri_decoded[loc]
if isinstance(i, basestring):
import types
if type(i) == types.StringType:
if (re.match(i, uri_decoded[loc])):
result_decoded[loc] = re.sub(i, uri_replace_decoded[loc], uri_decoded[loc])
if uri_find_decoded.index(i) == 2:
@@ -142,20 +174,19 @@ def uri_replace(uri, uri_find, uri_replace, d):
# else:
# for j in i:
# FIXME: apply replacements against options
return encodeurl(result_decoded)
return bb.encodeurl(result_decoded)
methods = []
urldata_cache = {}
saved_headrevs = {}
persistent_database_connection = {}
def fetcher_init(d):
"""
Called to initialize the fetchers once the configuration data is known.
Called to initialize the fetchers once the configuration data is known
Calls before this must not hit the cache.
"""
pd = persist_data.PersistData(d, persistent_database_connection)
# When to drop SCM head revisions controlled by user policy
pd = persist_data.PersistData(d)
# When to drop SCM head revisions controlled by user policy
srcrev_policy = bb.data.getVar('BB_SRCREV_POLICY', d, 1) or "clear"
if srcrev_policy == "cache":
bb.msg.debug(1, bb.msg.domain.Fetcher, "Keeping SRCREV cache due to cache policy of: %s" % srcrev_policy)
@@ -167,7 +198,7 @@ def fetcher_init(d):
pass
pd.delDomain("BB_URI_HEADREVS")
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
bb.msg.fatal(bb.msg.domain.Fetcher, "Invalid SRCREV cache policy of: %s" % srcrev_policy)
for m in methods:
if hasattr(m, "init"):
@@ -183,7 +214,7 @@ def fetcher_compare_revisons(d):
return true/false on whether they've changed.
"""
pd = persist_data.PersistData(d, persistent_database_connection)
pd = persist_data.PersistData(d)
data = pd.getKeyValues("BB_URI_HEADREVS")
data2 = bb.fetch.saved_headrevs
@@ -205,7 +236,6 @@ def fetcher_compare_revisons(d):
def init(urls, d, setup = True):
urldata = {}
fn = bb.data.getVar('FILE', d, 1)
if fn in urldata_cache:
urldata = urldata_cache[fn]
@@ -217,20 +247,11 @@ def init(urls, d, setup = True):
if setup:
for url in urldata:
if not urldata[url].setup:
urldata[url].setup_localpath(d)
urldata[url].setup_localpath(d)
urldata_cache[fn] = urldata
return urldata
def mirror_from_string(data):
return [ i.split() for i in (data or "").replace('\\n','\n').split('\n') if i ]
def removefile(f):
try:
os.remove(f)
except:
pass
def go(d, urls = None):
"""
Fetch all urls
@@ -243,47 +264,49 @@ def go(d, urls = None):
for u in urls:
ud = urldata[u]
m = ud.method
localpath = ""
if ud.localfile:
if not m.forcefetch(u, ud, d) and os.path.exists(ud.md5):
# File already present along with md5 stamp file
# Touch md5 file to show activity
try:
os.utime(ud.md5, None)
except:
# Errors aren't fatal here
pass
continue
lf = bb.utils.lockfile(ud.lockfile)
if not m.forcefetch(u, ud, d) and os.path.exists(ud.md5):
# If someone else fetched this before we got the lock,
# notice and don't try again
try:
os.utime(ud.md5, None)
except:
# Errors aren't fatal here
pass
bb.utils.unlockfile(lf)
continue
if not ud.localfile:
continue
lf = bb.utils.lockfile(ud.lockfile)
if m.try_premirror(u, ud, d):
# First try fetching uri, u, from PREMIRRORS
mirrors = mirror_from_string(bb.data.getVar('PREMIRRORS', d, True))
localpath = try_mirrors(d, u, mirrors, False, m.forcefetch(u, ud, d))
elif os.path.exists(ud.localfile):
localpath = ud.localfile
# Need to re-test forcefetch() which will return true if our copy is too old
if m.forcefetch(u, ud, d) or not localpath:
# First try fetching uri, u, from PREMIRRORS
mirrors = [ i.split() for i in (bb.data.getVar('PREMIRRORS', d, 1) or "").split('\n') if i ]
localpath = try_mirrors(d, u, mirrors)
if not localpath:
# Next try fetching from the original uri, u
try:
m.go(u, ud, d)
localpath = ud.localpath
except FetchError:
# Remove any incomplete file
removefile(ud.localpath)
# Finally, try fetching uri, u, from MIRRORS
mirrors = mirror_from_string(bb.data.getVar('MIRRORS', d, True))
localpath = try_mirrors (d, u, mirrors)
if not localpath or not os.path.exists(localpath):
raise FetchError("Unable to fetch URL %s from any source." % u)
ud.localpath = localpath
if os.path.exists(ud.md5):
# Touch the md5 file to show active use of the download
try:
os.utime(ud.md5, None)
except:
# Errors aren't fatal here
pass
else:
Fetch.write_md5sum(u, ud, d)
# Finally, try fetching uri, u, from MIRRORS
mirrors = [ i.split() for i in (bb.data.getVar('MIRRORS', d, 1) or "").split('\n') if i ]
localpath = try_mirrors (d, u, mirrors)
if localpath:
ud.localpath = localpath
if ud.localfile:
if not m.forcefetch(u, ud, d):
Fetch.write_md5sum(u, ud, d)
bb.utils.unlockfile(lf)
def checkstatus(d):
"""
@@ -295,9 +318,9 @@ def checkstatus(d):
for u in urldata:
ud = urldata[u]
m = ud.method
bb.msg.debug(1, bb.msg.domain.Fetcher, "Testing URL %s" % u)
bb.msg.note(1, bb.msg.domain.Fetcher, "Testing URL %s" % u)
# First try checking uri, u, from PREMIRRORS
mirrors = mirror_from_string(bb.data.getVar('PREMIRRORS', d, True))
mirrors = [ i.split() for i in (bb.data.getVar('PREMIRRORS', d, 1) or "").split('\n') if i ]
ret = try_mirrors(d, u, mirrors, True)
if not ret:
# Next try checking from the original uri, u
@@ -305,11 +328,11 @@ def checkstatus(d):
ret = m.checkstatus(u, ud, d)
except:
# Finally, try checking uri, u, from MIRRORS
mirrors = mirror_from_string(bb.data.getVar('MIRRORS', d, True))
mirrors = [ i.split() for i in (bb.data.getVar('MIRRORS', d, 1) or "").split('\n') if i ]
ret = try_mirrors (d, u, mirrors, True)
if not ret:
raise FetchError("URL %s doesn't work" % u)
bb.msg.error(bb.msg.domain.Fetcher, "URL %s doesn't work" % u)
def localpaths(d):
"""
@@ -319,7 +342,7 @@ def localpaths(d):
urldata = init([], d, True)
for u in urldata:
ud = urldata[u]
local.append(ud.localpath)
return local
@@ -331,15 +354,15 @@ def get_srcrev(d):
Return the version string for the current package
(usually to be used as PV)
Most packages usually only have one SCM so we just pass on the call.
In the multi SCM case, we build a value based on SRCREV_FORMAT which must
have been set.
"""
#
# Ugly code alert. localpath in the fetchers will try to evaluate SRCREV which
# could translate into a call to here. If it does, we need to catch this
# and provide some way so it knows get_srcrev is active instead of being
# some number etc. hence the srcrev_internal_call tracking and the magic
# "SRCREVINACTION" return value.
#
# Neater solutions welcome!
@@ -349,7 +372,7 @@ def get_srcrev(d):
scms = []
# Only call setup_localpath on URIs which supports_srcrev()
urldata = init(bb.data.getVar('SRC_URI', d, 1).split(), d, False)
for u in urldata:
ud = urldata[u]
@@ -362,8 +385,7 @@ def get_srcrev(d):
bb.msg.error(bb.msg.domain.Fetcher, "SRCREV was used yet no valid SCM was found in SRC_URI")
raise ParameterError
if bb.data.getVar('BB_SRCREV_POLICY', d, True) != "cache":
bb.data.setVar('__BB_DONT_CACHE', '1', d)
bb.data.setVar('__BB_DONT_CACHE','1', d)
if len(scms) == 1:
return urldata[scms[0]].method.sortable_revision(scms[0], urldata[scms[0]], d)
@@ -386,7 +408,7 @@ def get_srcrev(d):
def localpath(url, d, cache = True):
"""
Called from the parser with cache=False since the cache isn't ready
at this point. Also called from classes in OE e.g. patch.bbclass
"""
ud = init([url], d)
@@ -405,15 +427,12 @@ def runfetchcmd(cmd, d, quiet = False):
# rather than host provided
# Also include some other variables.
# FIXME: Should really include all export variables?
exportvars = ['PATH', 'GIT_PROXY_COMMAND', 'GIT_PROXY_HOST',
'GIT_PROXY_PORT', 'GIT_CONFIG', 'http_proxy', 'ftp_proxy',
'https_proxy', 'no_proxy', 'ALL_PROXY', 'all_proxy',
'SSH_AUTH_SOCK', 'SSH_AGENT_PID', 'HOME']
exportvars = ['PATH', 'GIT_PROXY_COMMAND', 'GIT_PROXY_HOST', 'GIT_PROXY_PORT', 'GIT_CONFIG', 'http_proxy', 'ftp_proxy', 'SSH_AUTH_SOCK', 'SSH_AGENT_PID', 'HOME']
for var in exportvars:
val = data.getVar(var, d, True)
if val:
cmd = 'export ' + var + '=\"%s\"; %s' % (val, cmd)
cmd = 'export ' + var + '=%s; %s' % (val, cmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % cmd)
@@ -421,15 +440,15 @@ def runfetchcmd(cmd, d, quiet = False):
stdout_handle = os.popen(cmd + " 2>&1", "r")
output = ""
while True:
while 1:
line = stdout_handle.readline()
if not line:
break
if not quiet:
print(line, end=' ')
print line,
output += line
status = stdout_handle.close() or 0
signal = status >> 8
exitstatus = status & 0xff
@@ -440,7 +459,7 @@ def runfetchcmd(cmd, d, quiet = False):
return output
def try_mirrors(d, uri, mirrors, check = False, force = False):
def try_mirrors(d, uri, mirrors, check = False):
"""
Try to use a mirrored version of the sources.
This method will be automatically called before the fetchers go.
@@ -450,7 +469,7 @@ def try_mirrors(d, uri, mirrors, check = False, force = False):
mirrors is the list of mirrors we're going to try
"""
fpath = os.path.join(data.getVar("DL_DIR", d, 1), os.path.basename(uri))
if not check and os.access(fpath, os.R_OK) and not force:
if not check and os.access(fpath, os.R_OK):
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists, skipping checkout." % fpath)
return fpath
@@ -478,7 +497,6 @@ def try_mirrors(d, uri, mirrors, check = False, force = False):
import sys
(type, value, traceback) = sys.exc_info()
bb.msg.debug(2, bb.msg.domain.Fetcher, "Mirror fetch failure: %s" % value)
removefile(ud.localpath)
continue
return None
@@ -489,7 +507,7 @@ class FetchData(object):
"""
def __init__(self, url, d):
self.localfile = ""
(self.type, self.host, self.path, self.user, self.pswd, self.parm) = decodeurl(data.expand(url, d))
(self.type, self.host, self.path, self.user, self.pswd, self.parm) = bb.decodeurl(data.expand(url, d))
self.date = Fetch.getSRCDate(self, d)
self.url = url
if not self.user and "user" in self.parm:
@@ -508,13 +526,12 @@ class FetchData(object):
if "localpath" in self.parm:
# if user sets localpath for file, use it instead.
self.localpath = self.parm["localpath"]
self.basename = os.path.basename(self.localpath)
else:
premirrors = bb.data.getVar('PREMIRRORS', d, True)
local = ""
if premirrors and self.url:
aurl = self.url.split(";")[0]
mirrors = mirror_from_string(premirrors)
mirrors = [ i.split() for i in (premirrors or "").split('\n') if i ]
for (find, replace) in mirrors:
if replace.startswith("file://"):
path = aurl.split("://")[1]
@@ -533,11 +550,10 @@ class FetchData(object):
# Horrible...
bb.data.delVar("ISHOULDNEVEREXIST", d)
if self.localpath is not None:
# Note: These files should always be in DL_DIR whereas localpath may not be.
basepath = bb.data.expand("${DL_DIR}/%s" % os.path.basename(self.localpath), d)
self.md5 = basepath + '.md5'
self.lockfile = basepath + '.lock'
# Note: These files should always be in DL_DIR whereas localpath may not be.
basepath = bb.data.expand("${DL_DIR}/%s" % os.path.basename(self.localpath), d)
self.md5 = basepath + '.md5'
self.lockfile = basepath + '.lock'
class Fetch(object):
@@ -555,7 +571,7 @@ class Fetch(object):
def localpath(self, url, urldata, d):
"""
Return the local filename of a given url assuming a successful fetch.
Can also setup variables in urldata for use in go (saving code duplication
and duplicate code execution)
"""
return url
@@ -587,17 +603,6 @@ class Fetch(object):
"""
raise NoMethodError("Missing implementation for url")
def try_premirror(self, url, urldata, d):
"""
Should premirrors be used?
"""
if urldata.method.forcefetch(url, urldata, d):
return True
elif os.path.exists(urldata.md5) and os.path.exists(urldata.localfile):
return False
else:
return True
def checkstatus(self, url, urldata, d):
"""
Check the status of a URL
@@ -627,8 +632,8 @@ class Fetch(object):
"""
Return:
a) a source revision if specified
b) True if auto srcrev is in action
c) False otherwise
"""
if 'rev' in ud.parm:
@@ -640,11 +645,7 @@ class Fetch(object):
rev = None
if 'name' in ud.parm:
pn = data.getVar("PN", d, 1)
rev = data.getVar("SRCREV_%s_pn-%s" % (ud.parm['name'], pn), d, 1)
if not rev:
rev = data.getVar("SRCREV_pn-%s_%s" % (pn, ud.parm['name']), d, 1)
if not rev:
rev = data.getVar("SRCREV_%s" % (ud.parm['name']), d, 1)
rev = data.getVar("SRCREV_pn-" + pn + "_" + ud.parm['name'], d, 1)
if not rev:
rev = data.getVar("SRCREV", d, 1)
if rev == "INVALID":
@@ -664,7 +665,7 @@ class Fetch(object):
b) None otherwise
"""
localcount = None
localcount= None
if 'name' in ud.parm:
pn = data.getVar("PN", d, 1)
localcount = data.getVar("LOCALCOUNT_" + ud.parm['name'], d, 1)
@@ -705,7 +706,7 @@ class Fetch(object):
if not hasattr(self, "_latest_revision"):
raise ParameterError
pd = persist_data.PersistData(d, persistent_database_connection)
pd = persist_data.PersistData(d)
key = self.generate_revision_key(url, ud, d)
rev = pd.getValue("BB_URI_HEADREVS", key)
if rev != None:
@@ -717,12 +718,12 @@ class Fetch(object):
def sortable_revision(self, url, ud, d):
"""
"""
if hasattr(self, "_sortable_revision"):
return self._sortable_revision(url, ud, d)
pd = persist_data.PersistData(d, persistent_database_connection)
pd = persist_data.PersistData(d)
key = self.generate_revision_key(url, ud, d)
latest_rev = self._build_revision(url, ud, d)
@@ -757,18 +758,18 @@ class Fetch(object):
key = self._revision_key(url, ud, d)
return "%s-%s" % (key, bb.data.getVar("PN", d, True) or "")
from . import cvs
from . import git
from . import local
from . import svn
from . import wget
from . import svk
from . import ssh
from . import perforce
from . import bzr
from . import hg
from . import osc
from . import repo
import cvs
import git
import local
import svn
import wget
import svk
import ssh
import perforce
import bzr
import hg
import osc
import repo
methods.append(local.Local())
methods.append(wget.Wget())

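mirror_from_string() above is the whole PREMIRRORS/MIRRORS parser: one whitespace-separated (find, replace) pair per line, with literal \n escapes accepted in place of real newlines. A quick demonstration (the mirror URL is made up):

# Same one-liner as in the hunk above.
def mirror_from_string(data):
    return [i.split() for i in (data or "").replace('\\n', '\n').split('\n') if i]

premirrors = "git://.*/.* http://mirror.example.com/sources/\\nsvn://.*/.* http://mirror.example.com/sources/"
for find, replace in mirror_from_string(premirrors):
    print(find, "->", replace)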
View File

@@ -46,15 +46,15 @@ class Bzr(Fetch):
revision = Fetch.srcrev_internal_helper(ud, d)
if revision is True:
ud.revision = self.latest_revision(url, ud, d)
elif revision:
ud.revision = revision
if not ud.revision:
ud.revision = self.latest_revision(url, ud, d)
ud.localfile = data.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision), d)
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def _buildbzrcommand(self, ud, d, command):
@@ -145,3 +145,4 @@ class Bzr(Fetch):
def _build_revision(self, url, ud, d):
return ud.revision

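The Bzr revision handling above follows a tri-state contract from srcrev_internal_helper(): an explicit revision string, True meaning "resolve the latest revision", or a false value meaning "nothing set", with localpath() falling back to latest_revision() in the unset case as well. A compact sketch of how that result is consumed:

def pick_revision(helper_result, latest):
    if helper_result is True:
        return latest()           # auto srcrev: ask the SCM
    if helper_result:
        return helper_result      # explicit revision wins
    return latest()               # nothing set: fall back to latest

print(pick_revision(True, lambda: "r42"))   # r42
print(pick_revision("r7", lambda: "r42"))   # r7
print(pick_revision(None, lambda: "r42"))   # r42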
View File

@@ -139,8 +139,8 @@ class Cvs(Fetch):
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory")
pkg = data.expand('${PN}', d)
pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)
moddir = os.path.join(pkgdir, localdir)
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
moddir = os.path.join(pkgdir,localdir)
if os.access(os.path.join(moddir,'CVS'), os.R_OK):
bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
# update sources there
os.chdir(moddir)
@@ -157,7 +157,7 @@ class Cvs(Fetch):
try:
os.rmdir(moddir)
except OSError:
pass
raise FetchError(ud.module)
# tar them up to a defined filename

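The Cvs hunk's update-or-checkout decision hinges on a single probe: whether the module directory already carries readable CVS/ metadata. A sketch of that test (paths invented):

import os

def cvs_action(pkgdir, localdir):
    moddir = os.path.join(pkgdir, localdir)
    if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
        return "update", moddir    # reuse the existing checkout
    return "checkout", moddir      # fresh cvs checkout needed

print(cvs_action("/tmp/cvsdir/mypkg", "module"))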
View File

@@ -57,12 +57,12 @@ class Git(Fetch):
tag = Fetch.srcrev_internal_helper(ud, d)
if tag is True:
ud.tag = self.latest_revision(url, ud, d)
elif tag:
ud.tag = tag
if not ud.tag or ud.tag == "master":
ud.tag = self.latest_revision(url, ud, d)
ud.tag = self.latest_revision(url, ud, d)
subdir = ud.parm.get("subpath", "")
if subdir != "":
@@ -79,33 +79,8 @@ class Git(Fetch):
ud.basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
if 'noclone' in ud.parm:
ud.localfile = None
return None
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def forcefetch(self, url, ud, d):
if 'fullclone' in ud.parm:
return True
if 'noclone' in ud.parm:
return False
if os.path.exists(ud.localpath):
return False
if not self._contains_ref(ud.tag, d):
return True
return False
def try_premirror(self, u, ud, d):
if 'noclone' in ud.parm:
return False
if os.path.exists(ud.clonedir):
return False
if os.path.exists(ud.localpath):
return False
return True
def go(self, loc, ud, d):
"""Fetch url"""
@@ -119,40 +94,27 @@ class Git(Fetch):
coname = '%s' % (ud.tag)
codir = os.path.join(ud.clonedir, coname)
# If we have no existing clone and no mirror tarball, try and obtain one
if not os.path.exists(ud.clonedir) and not os.path.exists(repofile):
if not os.path.exists(ud.clonedir):
try:
Fetch.try_mirrors(ud.mirrortarball)
bb.mkdirhier(ud.clonedir)
os.chdir(ud.clonedir)
runfetchcmd("tar -xzf %s" % (repofile), d)
except:
pass
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.clonedir) and os.path.exists(repofile):
bb.mkdirhier(ud.clonedir)
os.chdir(ud.clonedir)
runfetchcmd("tar -xzf %s" % (repofile), d)
# If the repo still doesn't exist, fallback to cloning it
if not os.path.exists(ud.clonedir):
runfetchcmd("%s clone -n %s://%s%s%s %s" % (ud.basecmd, ud.proto, username, ud.host, ud.path, ud.clonedir), d)
runfetchcmd("%s clone -n %s://%s%s%s %s" % (ud.basecmd, ud.proto, username, ud.host, ud.path, ud.clonedir), d)
os.chdir(ud.clonedir)
# Update the checkout if needed
if not self._contains_ref(ud.tag, d) or 'fullclone' in ud.parm:
# Remove all but the .git directory
# Remove all but the .git directory
if not self._contains_ref(ud.tag, d):
runfetchcmd("rm * -Rf", d)
if 'fullclone' in ud.parm:
runfetchcmd("%s fetch --all" % (ud.basecmd), d)
else:
runfetchcmd("%s fetch %s://%s%s%s %s" % (ud.basecmd, ud.proto, username, ud.host, ud.path, ud.branch), d)
runfetchcmd("%s fetch %s://%s%s%s %s" % (ud.basecmd, ud.proto, username, ud.host, ud.path, ud.branch), d)
runfetchcmd("%s fetch --tags %s://%s%s%s" % (ud.basecmd, ud.proto, username, ud.host, ud.path), d)
runfetchcmd("%s prune-packed" % ud.basecmd, d)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)
# Generate a mirror tarball if needed
os.chdir(ud.clonedir)
mirror_tarballs = data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True)
if mirror_tarballs != "0" or 'fullclone' in ud.parm:
if mirror_tarballs != "0" or 'fullclone' in ud.parm:
bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git repository")
runfetchcmd("tar -czf %s %s" % (repofile, os.path.join(".", ".git", "*") ), d)
@@ -203,7 +165,7 @@ class Git(Fetch):
"""
Return a unique key for the url
"""
return "git:" + ud.host + ud.path.replace('/', '.') + ud.branch
return "git:" + ud.host + ud.path.replace('/', '.')
def _latest_revision(self, url, ud, d):
"""
@@ -226,7 +188,7 @@ class Git(Fetch):
def _sortable_buildindex_disabled(self, url, ud, d, rev):
"""
Return a suitable buildindex for the revision specified. This is done by counting revisions
Return a suitable buildindex for the revision specified. This is done by counting revisions
using "git rev-list" which may or may not work in different circumstances.
"""
@@ -235,7 +197,7 @@ class Git(Fetch):
# Check if we have the rev already
if not os.path.exists(ud.clonedir):
print("no repo")
print "no repo"
self.go(None, ud, d)
if not os.path.exists(ud.clonedir):
bb.msg.error(bb.msg.domain.Fetcher, "GIT repository for %s doesn't exist in %s, cannot get sortable buildnumber, using old value" % (url, ud.clonedir))
@@ -251,4 +213,5 @@ class Git(Fetch):
buildindex = "%s" % output.split()[0]
bb.msg.debug(1, bb.msg.domain.Fetcher, "GIT repository for %s in %s is returning %s revisions in rev-list before %s" % (url, ud.clonedir, buildindex, rev))
return buildindex
return buildindex
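
The go() hunk above splits repository acquisition into explicit fallback steps: reuse an existing clone, otherwise unpack the mirror tarball, and only clone over the network as a last resort. A condensed sketch of that decision flow (obtain_repo and the stub helpers are placeholders, not BitBake functions):

import os

def unpack_mirror(tarball, clonedir):
    print("tar -xzf %s -> %s" % (tarball, clonedir))

def clone_remote(clonedir):
    print("git clone -n ... %s" % clonedir)

def obtain_repo(clonedir, tarball):
    # An existing clone wins outright; failing that, unpack the
    # mirror tarball; only hit the network when both are missing.
    if not os.path.exists(clonedir) and os.path.exists(tarball):
        unpack_mirror(tarball, clonedir)
    if not os.path.exists(clonedir):
        clone_remote(clonedir)

obtain_repo("/no/such/clone", "/no/such/tarball")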


@@ -134,9 +134,9 @@ class Hg(Fetch):
os.chdir(ud.pkgdir)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % fetchcmd)
runfetchcmd(fetchcmd, d)
# Even when we clone (fetch), we still need to update as hg's clone
# won't checkout the specified revision if its on a branch
# Even when we clone (fetch), we still need to update as hg's clone
# won't checkout the specified revision if its on a branch
updatecmd = self._buildhgcommand(ud, d, "update")
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % updatecmd)
runfetchcmd(updatecmd, d)
@@ -170,3 +170,4 @@ class Hg(Fetch):
Return a unique key for the url
"""
return "hg:" + ud.moddir


@@ -27,7 +27,6 @@ BitBake build tools.
import os
import bb
import bb.utils
from bb import data
from bb.fetch import Fetch
@@ -48,7 +47,7 @@ class Local(Fetch):
if path[0] != "/":
filespath = data.getVar('FILESPATH', d, 1)
if filespath:
newpath = bb.utils.which(filespath, path)
newpath = bb.which(filespath, path)
if not newpath:
filesdir = data.getVar('FILESDIR', d, 1)
if filesdir:
@@ -66,8 +65,8 @@ class Local(Fetch):
Check the status of the url
"""
if urldata.localpath.find("*") != -1:
bb.msg.note(1, bb.msg.domain.Fetcher, "URL %s looks like a glob and was therefore not checked." % url)
return True
bb.msg.note(1, bb.msg.domain.Fetcher, "URL %s looks like a glob and was therefore not checked." % url)
return True
if os.path.exists(urldata.localpath):
return True
return True
return False


@@ -16,7 +16,7 @@ from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd
class Osc(Fetch):
"""Class to fetch a module or modules from Opensuse build server
"""Class to fetch a module or modules from Opensuse build server
repositories."""
def supports(self, url, ud, d):
@@ -64,7 +64,7 @@ class Osc(Fetch):
proto = "ocs"
if "proto" in ud.parm:
proto = ud.parm["proto"]
options = []
config = "-c %s" % self.generate_config(ud, d)
@@ -108,7 +108,7 @@ class Osc(Fetch):
os.chdir(ud.pkgdir)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % oscfetchcmd)
runfetchcmd(oscfetchcmd, d)
os.chdir(os.path.join(ud.pkgdir + ud.path))
# tar them up to a defined filename
try:
@@ -131,7 +131,7 @@ class Osc(Fetch):
config_path = "%s/oscrc" % data.expand('${OSCDIR}', d)
if (os.path.exists(config_path)):
os.remove(config_path)
os.remove(config_path)
f = open(config_path, 'w')
f.write("[general]\n")
@@ -146,5 +146,5 @@ class Osc(Fetch):
f.write("user = %s\n" % ud.parm["user"])
f.write("pass = %s\n" % ud.parm["pswd"])
f.close()
return config_path


@@ -25,7 +25,6 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
from future_builtins import zip
import os
import bb
from bb import data
@@ -36,15 +35,15 @@ class Perforce(Fetch):
def supports(self, url, ud, d):
return ud.type in ['p4']
def doparse(url, d):
def doparse(url,d):
parm = {}
path = url.split("://")[1]
delim = path.find("@");
if delim != -1:
(user, pswd, host, port) = path.split('@')[0].split(":")
(user,pswd,host,port) = path.split('@')[0].split(":")
path = path.split('@')[1]
else:
(host, port) = data.getVar('P4PORT', d).split(':')
(host,port) = data.getVar('P4PORT', d).split(':')
user = ""
pswd = ""
@@ -54,19 +53,19 @@ class Perforce(Fetch):
plist = path.split(';')
for item in plist:
if item.count('='):
(key, value) = item.split('=')
(key,value) = item.split('=')
keys.append(key)
values.append(value)
parm = dict(zip(keys, values))
parm = dict(zip(keys,values))
path = "//" + path.split(';')[0]
host += ":%s" % (port)
parm["cset"] = Perforce.getcset(d, path, host, user, pswd, parm)
return host, path, user, pswd, parm
return host,path,user,pswd,parm
doparse = staticmethod(doparse)
def getcset(d, depot, host, user, pswd, parm):
def getcset(d, depot,host,user,pswd,parm):
p4opt = ""
if "cset" in parm:
return parm["cset"];
@@ -96,9 +95,9 @@ class Perforce(Fetch):
return cset.split(' ')[1]
getcset = staticmethod(getcset)
def localpath(self, url, ud, d):
def localpath(self, url, ud, d):
(host, path, user, pswd, parm) = Perforce.doparse(url, d)
(host,path,user,pswd,parm) = Perforce.doparse(url,d)
# If a label is specified, we use that as our filename
@@ -116,7 +115,7 @@ class Perforce(Fetch):
cset = Perforce.getcset(d, path, host, user, pswd, parm)
ud.localfile = data.expand('%s+%s+%s.tar.gz' % (host, base.replace('/', '.'), cset), d)
ud.localfile = data.expand('%s+%s+%s.tar.gz' % (host,base.replace('/', '.'), cset), d)
return os.path.join(data.getVar("DL_DIR", d, 1), ud.localfile)
@@ -125,7 +124,7 @@ class Perforce(Fetch):
Fetch urls
"""
(host, depot, user, pswd, parm) = Perforce.doparse(loc, d)
(host,depot,user,pswd,parm) = Perforce.doparse(loc, d)
if depot.find('/...') != -1:
path = depot[:depot.find('/...')]
@@ -161,14 +160,14 @@ class Perforce(Fetch):
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, 1) or "false")
tmpfile = tmppipe.readline().strip()
if not tmpfile:
bb.msg.error(bb.msg.domain.Fetcher, "Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.")
bb.error("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.")
raise FetchError(module)
if "label" in parm:
depot = "%s@%s" % (depot, parm["label"])
depot = "%s@%s" % (depot,parm["label"])
else:
cset = Perforce.getcset(d, depot, host, user, pswd, parm)
depot = "%s@%s" % (depot, cset)
depot = "%s@%s" % (depot,cset)
os.chdir(tmpfile)
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
@@ -176,12 +175,12 @@ class Perforce(Fetch):
p4file = os.popen("%s%s files %s" % (p4cmd, p4opt, depot))
if not p4file:
bb.msg.error(bb.msg.domain.Fetcher, "Fetch: unable to get the P4 files from %s" % (depot))
bb.error("Fetch: unable to get the P4 files from %s" % (depot))
raise FetchError(module)
count = 0
for file in p4file:
for file in p4file:
list = file.split()
if list[2] == "delete":
@@ -190,11 +189,11 @@ class Perforce(Fetch):
dest = list[0][len(path)+1:]
where = dest.find("#")
os.system("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module, dest[:where], list[0]))
os.system("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module,dest[:where],list[0]))
count = count + 1
if count == 0:
bb.msg.error(bb.msg.domain.Fetcher, "Fetch: No files gathered from the P4 fetch")
bb.error("Fetch: No files gathered from the P4 fetch")
raise FetchError(module)
myret = os.system("tar -czf %s %s" % (ud.localpath, module))
@@ -206,3 +205,5 @@ class Perforce(Fetch):
raise FetchError(module)
# cleanup
os.system('rm -rf %s' % tmpfile)


@@ -23,10 +23,11 @@ BitBake "Fetch" repo (git) implementation
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import os, re
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import runfetchcmd
class Repo(Fetch):


@@ -114,5 +114,5 @@ class SSH(Fetch):
(exitstatus, output) = commands.getstatusoutput(cmd)
if exitstatus != 0:
print(output)
print output
raise FetchError('Unable to fetch %s' % url)


@@ -78,7 +78,7 @@ class Svn(Fetch):
ud.revision = rev
ud.date = ""
else:
ud.revision = ""
ud.revision = ""
ud.localfile = data.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ud.date), d)


@@ -30,8 +30,6 @@ import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import encodeurl, decodeurl
from bb.fetch import runfetchcmd
class Wget(Fetch):
"""Class to fetch urls via 'wget'"""
@@ -39,11 +37,11 @@ class Wget(Fetch):
"""
Check to see if a given url can be fetched with wget.
"""
return ud.type in ['http', 'https', 'ftp']
return ud.type in ['http','https','ftp']
def localpath(self, url, ud, d):
url = encodeurl([ud.type, ud.host, ud.path, ud.user, ud.pswd, {}])
url = bb.encodeurl([ud.type, ud.host, ud.path, ud.user, ud.pswd, {}])
ud.basename = os.path.basename(ud.path)
ud.localfile = data.expand(os.path.basename(url), d)
@@ -62,16 +60,37 @@ class Wget(Fetch):
fetchcmd = data.getVar("FETCHCOMMAND", d, 1)
uri = uri.split(";")[0]
uri_decoded = list(decodeurl(uri))
uri_decoded = list(bb.decodeurl(uri))
uri_type = uri_decoded[0]
uri_host = uri_decoded[1]
bb.msg.note(1, bb.msg.domain.Fetcher, "fetch " + uri)
fetchcmd = fetchcmd.replace("${URI}", uri.split(";")[0])
fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
bb.msg.note(1, bb.msg.domain.Fetcher, "fetch " + uri)
httpproxy = None
ftpproxy = None
if uri_type == 'http':
httpproxy = data.getVar("HTTP_PROXY", d, True)
httpproxy_ignore = (data.getVar("HTTP_PROXY_IGNORE", d, True) or "").split()
for p in httpproxy_ignore:
if uri_host.endswith(p):
httpproxy = None
break
if uri_type == 'ftp':
ftpproxy = data.getVar("FTP_PROXY", d, True)
ftpproxy_ignore = (data.getVar("HTTP_PROXY_IGNORE", d, True) or "").split()
for p in ftpproxy_ignore:
if uri_host.endswith(p):
ftpproxy = None
break
if httpproxy:
fetchcmd = "http_proxy=" + httpproxy + " " + fetchcmd
if ftpproxy:
fetchcmd = "ftp_proxy=" + ftpproxy + " " + fetchcmd
bb.msg.debug(2, bb.msg.domain.Fetcher, "executing " + fetchcmd)
runfetchcmd(fetchcmd, d)
ret = os.system(fetchcmd)
if ret != 0:
return False
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
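
The proxy handling added above reads HTTP_PROXY or FTP_PROXY and then drops the proxy whenever the target host ends with an entry from the ignore list (note that, as written, the ftp branch also reads HTTP_PROXY_IGNORE). A standalone sketch of that suffix match (effective_proxy is a made-up helper, not part of the Wget class):

def effective_proxy(proxy, host, ignore_list):
    # Disable the proxy for any host matching an ignore suffix.
    for suffix in ignore_list:
        if host.endswith(suffix):
            return None
    return proxy

print(effective_proxy("http://proxy:8080", "git.example.com", ["example.com"]))  # None
print(effective_proxy("http://proxy:8080", "kernel.org", ["example.com"]))       # proxy kept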


@@ -27,7 +27,7 @@
a method pool to do this task.
This pool will be used to compile and execute the functions. It
will be smart enough to
will be smart enough to
"""
from bb.utils import better_compile, better_exec
@@ -43,8 +43,8 @@ def insert_method(modulename, code, fn):
Add the code of a module. The methods
will simply be added; no checking will be done
"""
comp = better_compile(code, modulename, fn )
better_exec(comp, None, code, fn)
comp = better_compile(code, "<bb>", fn )
better_exec(comp, __builtins__, code, fn)
# now some instrumentation
code = comp.co_names
@@ -59,7 +59,7 @@ def insert_method(modulename, code, fn):
def check_insert_method(modulename, code, fn):
"""
Add the code if it wasn't added before. The module
name will be used for that
name will be used for that
Variables:
@modulename a short name e.g. base.bbclass
@@ -81,4 +81,4 @@ def get_parsed_dict():
"""
shortcut
"""
return _parsed_methods
return _parsed_methods
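
insert_method above compiles a module's code once, executes it so its functions become callable, and records what was parsed so classes are not parsed twice. A stripped-down sketch of the same idea, using the standard compile/exec builtins in place of better_compile/better_exec:

_parsed_methods = {}

def insert_method(modulename, code, fn):
    # Compile once, execute into a fresh namespace, remember the names.
    comp = compile(code, fn, "exec")
    namespace = {}
    exec(comp, namespace)
    _parsed_methods[modulename] = comp.co_names
    return namespace

ns = insert_method("base.bbclass", "def do_build():\n    return 42\n", "<demo>")
print(ns["do_build"]())   # -> 42
print(_parsed_methods)    # -> {'base.bbclass': ('do_build',)}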


@@ -22,32 +22,26 @@ Message handling infrastructure for bitbake
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import sys
import collections
import bb
import bb.event
import sys, bb
from bb import event
debug_level = {}
debug_level = collections.defaultdict(lambda: 0)
verbose = False
def _NamedTuple(name, fields):
Tuple = collections.namedtuple(name, " ".join(fields))
return Tuple(*range(len(fields)))
domain = _NamedTuple("Domain", (
"Default",
"Build",
"Cache",
"Collection",
"Data",
"Depends",
"Fetcher",
"Parsing",
"PersistData",
"Provider",
"RunQueue",
"TaskData",
"Util"))
domain = bb.utils.Enum(
'Build',
'Cache',
'Collection',
'Data',
'Depends',
'Fetcher',
'Parsing',
'PersistData',
'Provider',
'RunQueue',
'TaskData',
'Util')
class MsgBase(bb.event.Event):
@@ -55,7 +49,7 @@ class MsgBase(bb.event.Event):
def __init__(self, msg):
self._message = msg
bb.event.Event.__init__(self)
event.Event.__init__(self)
class MsgDebug(MsgBase):
"""Debug Message"""
@@ -80,65 +74,52 @@ class MsgPlain(MsgBase):
#
def set_debug_level(level):
for d in domain:
debug_level[d] = level
debug_level[domain.Default] = level
def get_debug_level(msgdomain = domain.Default):
return debug_level[msgdomain]
bb.msg.debug_level = {}
for domain in bb.msg.domain:
bb.msg.debug_level[domain] = level
bb.msg.debug_level['default'] = level
def set_verbose(level):
verbose = level
bb.msg.verbose = level
def set_debug_domains(strdomains):
for domainstr in strdomains:
for d in domain:
if domain._fields[d] == domainstr:
debug_level[d] += 1
break
else:
warn(None, "Logging domain %s is not valid, ignoring" % domainstr)
def set_debug_domains(domains):
for domain in domains:
found = False
for ddomain in bb.msg.domain:
if domain == str(ddomain):
bb.msg.debug_level[ddomain] = bb.msg.debug_level[ddomain] + 1
found = True
if not found:
bb.msg.warn(None, "Logging domain %s is not valid, ignoring" % domain)
#
# Message handling functions
#
def debug(level, msgdomain, msg, fn = None):
if not msgdomain:
msgdomain = domain.Default
if debug_level[msgdomain] >= level:
def debug(level, domain, msg, fn = None):
if not domain:
domain = 'default'
if debug_level[domain] >= level:
bb.event.fire(MsgDebug(msg), None)
if bb.event.useStdout:
print('DEBUG: %s' % (msg))
def note(level, msgdomain, msg, fn = None):
if not msgdomain:
msgdomain = domain.Default
if level == 1 or verbose or debug_level[msgdomain] >= 1:
def note(level, domain, msg, fn = None):
if not domain:
domain = 'default'
if level == 1 or verbose or debug_level[domain] >= 1:
bb.event.fire(MsgNote(msg), None)
if bb.event.useStdout:
print('NOTE: %s' % (msg))
def warn(msgdomain, msg, fn = None):
def warn(domain, msg, fn = None):
bb.event.fire(MsgWarn(msg), None)
if bb.event.useStdout:
print('WARNING: %s' % (msg))
def error(msgdomain, msg, fn = None):
def error(domain, msg, fn = None):
bb.event.fire(MsgError(msg), None)
if bb.event.useStdout:
print('ERROR: %s' % (msg))
print 'ERROR: ' + msg
def fatal(msgdomain, msg, fn = None):
def fatal(domain, msg, fn = None):
bb.event.fire(MsgFatal(msg), None)
if bb.event.useStdout:
print('FATAL: %s' % (msg))
print 'FATAL: ' + msg
sys.exit(1)
def plain(msg, fn = None):
bb.event.fire(MsgPlain(msg), None)
if bb.event.useStdout:
print(msg)
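
The newer domain table above is a namedtuple whose fields are bound to their own indices, giving enum-like constants that can still key the debug_level mapping (which becomes a defaultdict instead of a plain dict). A minimal sketch of that trick:

import collections

def _NamedTuple(name, fields):
    Tuple = collections.namedtuple(name, " ".join(fields))
    return Tuple(*range(len(fields)))

domain = _NamedTuple("Domain", ("Default", "Build", "Fetcher"))
debug_level = collections.defaultdict(lambda: 0)

debug_level[domain.Fetcher] = 2
print(domain.Fetcher)                  # -> 2
print(domain._fields[domain.Fetcher])  # -> 'Fetcher'
print(debug_level[domain.Build])       # -> 0 (default)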


@@ -24,11 +24,11 @@ File parsers for the BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
__all__ = [ 'ParseError', 'SkipPackage', 'cached_mtime', 'mark_dependency',
'supports', 'handle', 'init' ]
handlers = []
import bb, os
import bb.utils
import bb.siggen
class ParseError(Exception):
"""Exception raised when parsing fails"""
@@ -38,12 +38,12 @@ class SkipPackage(Exception):
__mtime_cache = {}
def cached_mtime(f):
if f not in __mtime_cache:
if not __mtime_cache.has_key(f):
__mtime_cache[f] = os.stat(f)[8]
return __mtime_cache[f]
def cached_mtime_noerror(f):
if f not in __mtime_cache:
if not __mtime_cache.has_key(f):
try:
__mtime_cache[f] = os.stat(f)[8]
except OSError:
@@ -57,8 +57,8 @@ def update_mtime(f):
def mark_dependency(d, f):
if f.startswith('./'):
f = "%s/%s" % (os.getcwd(), f[2:])
deps = bb.data.getVar('__depends', d) or set()
deps.update([(f, cached_mtime(f))])
deps = bb.data.getVar('__depends', d) or []
deps.append( (f, cached_mtime(f)) )
bb.data.setVar('__depends', deps, d)
def supports(fn, data):
@@ -80,16 +80,11 @@ def init(fn, data):
if h['supports'](fn):
return h['init'](data)
def init_parser(d, dumpsigs):
bb.parse.siggen = bb.siggen.init(d, dumpsigs)
def resolve_file(fn, d):
if not os.path.isabs(fn):
bbpath = bb.data.getVar("BBPATH", d, True)
newfn = bb.which(bbpath, fn)
if not newfn:
raise IOError("file %s not found in %s" % (fn, bbpath))
fn = newfn
fn = bb.which(bb.data.getVar("BBPATH", d, 1), fn)
if not fn:
raise IOError("file %s not found" % fn)
bb.msg.debug(2, bb.msg.domain.Parsing, "LOAD %s" % fn)
return fn
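
resolve_file above turns a relative file name into an absolute one by searching the colon-separated BBPATH. A rough equivalent of that search loop (which_path stands in for bb.utils.which / bb.which):

import os

def which_path(pathlist, filename):
    # Return the first match of filename under the colon-separated
    # directory list, or "" when nothing is found.
    for directory in pathlist.split(":"):
        candidate = os.path.join(directory, filename)
        if os.path.exists(candidate):
            return candidate
    return ""

print(which_path("/etc:/usr/share", "hosts"))  # typically /etc/hosts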


@@ -21,11 +21,8 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from __future__ import absolute_import
from future_builtins import filter
import bb, re, string
from bb import methodpool
import itertools
from itertools import chain
__word__ = re.compile(r"\S+")
__parsed_methods__ = bb.methodpool.get_parsed_dict()
@@ -33,8 +30,7 @@ _bbversions_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")
class StatementGroup(list):
def eval(self, data):
for statement in self:
statement.eval(data)
map(lambda x: x.eval(data), self)
class AstNode(object):
pass
@@ -107,6 +103,7 @@ class DataNode(AstNode):
val = groupd["value"]
if 'flag' in groupd and groupd['flag'] != None:
bb.msg.debug(3, bb.msg.domain.Parsing, "setVarFlag(%s, %s, %s, data)" % (key, groupd['flag'], val))
bb.data.setVarFlag(key, groupd['flag'], val, data)
elif groupd["lazyques"]:
assigned = bb.data.getVar("__lazy_assigned", data) or []
@@ -137,8 +134,7 @@ class MethodNode:
bb.data.setVar(self.func_name, '\n'.join(self.body), data)
class PythonMethodNode(AstNode):
def __init__(self, funcname, root, body, fn):
self.func_name = funcname
def __init__(self, root, body, fn):
self.root = root
self.body = body
self.fn = fn
@@ -147,12 +143,9 @@ class PythonMethodNode(AstNode):
# Note we will add root to parsedmethods after having parse
# 'this' file. This means we will not parse methods from
# bb classes twice
text = '\n'.join(self.body)
if not bb.methodpool.parsed_module(self.root):
if not self.root in __parsed_methods__:
text = '\n'.join(self.body)
bb.methodpool.insert_method(self.root, text, self.fn)
bb.data.setVarFlag(self.func_name, "func", 1, data)
bb.data.setVarFlag(self.func_name, "python", 1, data)
bb.data.setVar(self.func_name, text, data)
class MethodFlagsNode(AstNode):
def __init__(self, key, m):
@@ -261,7 +254,7 @@ class InheritNode(AstNode):
def eval(self, data):
bb.parse.BBHandler.inherit(self.n, data)
def handleInclude(statements, m, fn, lineno, force):
statements.append(IncludeNode(m.group(1), fn, lineno, force))
@@ -274,8 +267,8 @@ def handleData(statements, groupd):
def handleMethod(statements, func_name, lineno, fn, body):
statements.append(MethodNode(func_name, body, lineno, fn))
def handlePythonMethod(statements, funcname, root, body, fn):
statements.append(PythonMethodNode(funcname, root, body, fn))
def handlePythonMethod(statements, root, body, fn):
statements.append(PythonMethodNode(root, body, fn))
def handleMethodFlags(statements, key, m):
statements.append(MethodFlagsNode(key, m))
@@ -300,7 +293,7 @@ def handleInherit(statements, m):
n = __word__.findall(files)
statements.append(InheritNode(m.group(1)))
def finalize(fn, d, variant = None):
def finalise(fn, d):
for lazykey in bb.data.getVar("__lazy_assigned", d) or ():
if bb.data.getVar(lazykey, d) is None:
val = bb.data.getVarFlag(lazykey, "defaultval", d)
@@ -308,23 +301,40 @@ def finalize(fn, d, variant = None):
bb.data.expandKeys(d)
bb.data.update_data(d)
code = []
for funcname in bb.data.getVar("__BBANONFUNCS", d) or []:
code.append("%s(d)" % funcname)
bb.utils.simple_exec("\n".join(code), {"d": d})
anonqueue = bb.data.getVar("__anonqueue", d, 1) or []
body = [x['content'] for x in anonqueue]
flag = { 'python' : 1, 'func' : 1 }
bb.data.setVar("__anonfunc", "\n".join(body), d)
bb.data.setVarFlags("__anonfunc", flag, d)
from bb import build
try:
t = bb.data.getVar('T', d)
bb.data.setVar('T', '${TMPDIR}/anonfunc/', d)
anonfuncs = bb.data.getVar('__BBANONFUNCS', d) or []
code = ""
for f in anonfuncs:
code = code + " %s(d)\n" % f
bb.data.setVar("__anonfunc", code, d)
build.exec_func("__anonfunc", d)
bb.data.delVar('T', d)
if t:
bb.data.setVar('T', t, d)
except Exception, e:
bb.msg.debug(1, bb.msg.domain.Parsing, "Exception when executing anonymous function: %s" % e)
raise
bb.data.delVar("__anonqueue", d)
bb.data.delVar("__anonfunc", d)
bb.data.update_data(d)
all_handlers = {}
all_handlers = {}
for var in bb.data.getVar('__BBHANDLERS', d) or []:
# try to add the handler
handler = bb.data.getVar(var, d)
handler = bb.data.getVar(var,d)
bb.event.register(var, handler)
tasklist = bb.data.getVar('__BBTASKS', d) or []
bb.build.add_tasks(tasklist, d)
bb.parse.siggen.finalise(fn, d, variant)
bb.event.fire(bb.event.RecipeParsed(fn), d)
def _create_variants(datastores, names, function):
@@ -350,7 +360,7 @@ def _expand_versions(versions):
versions = iter(versions)
while True:
try:
version = next(versions)
version = versions.next()
except StopIteration:
break
@@ -360,18 +370,14 @@ def _expand_versions(versions):
else:
newversions = expand_one(version, int(range_ver.group("from")),
int(range_ver.group("to")))
versions = itertools.chain(newversions, versions)
versions = chain(newversions, versions)
def multi_finalize(fn, d):
appends = (d.getVar("__BBAPPEND", True) or "").split()
for append in appends:
bb.msg.debug(2, bb.msg.domain.Parsing, "Appending .bbappend file " + append + " to " + fn)
bb.parse.BBHandler.handle(append, d, True)
safe_d = d
d = bb.data.createCopy(safe_d)
try:
finalize(fn, d)
finalise(fn, d)
except bb.parse.SkipPackage:
bb.data.setVar("__SKIPPED", True, d)
datastores = {"": safe_d}
@@ -414,7 +420,7 @@ def multi_finalize(fn, d):
d = bb.data.createCopy(safe_d)
verfunc(pv, d, safe_d)
try:
finalize(fn, d)
finalise(fn, d)
except bb.parse.SkipPackage:
bb.data.setVar("__SKIPPED", True, d)
@@ -430,15 +436,15 @@ def multi_finalize(fn, d):
safe_d.setVar("BBCLASSEXTEND", extended)
_create_variants(datastores, extended.split(), extendfunc)
for variant, variant_d in datastores.iteritems():
for variant, variant_d in datastores.items():
if variant:
try:
finalize(fn, variant_d, variant)
finalise(fn, variant_d)
except bb.parse.SkipPackage:
bb.data.setVar("__SKIPPED", True, variant_d)
if len(datastores) > 1:
variants = filter(None, datastores.iterkeys())
variants = filter(None, datastores.keys())
safe_d.setVar("__VARIANTS", " ".join(variants))
datastores[""] = d


@@ -11,7 +11,7 @@
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2003, 2004 Phil Blundell
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
@@ -25,17 +25,15 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from __future__ import absolute_import
import re, bb, os
import re, bb, os, sys, time, string
import bb.fetch, bb.build, bb.utils
from bb import data
from bb import data, fetch
from . import ConfHandler
from .. import resolve_file, ast
from .ConfHandler import include, init
from ConfHandler import include, init
from bb.parse import ParseError, resolve_file, ast
# For compatibility
bb.deprecate_import(__name__, "bb.parse", ["vars_from_file"])
from bb.parse import vars_from_file
__func_start_regexp__ = re.compile( r"(((?P<py>python)|(?P<fr>fakeroot))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$" )
__inherit_regexp__ = re.compile( r"inherit\s+(.+)" )
@@ -70,8 +68,8 @@ def inherit(files, d):
__inherit_cache = data.getVar('__inherit_cache', d) or []
fn = ""
lineno = 0
files = data.expand(files, d)
for file in files:
file = data.expand(file, d)
if file[0] != "/" and file[-8:] != ".bbclass":
file = os.path.join('classes', '%s.bbclass' % file)
@@ -82,17 +80,17 @@ def inherit(files, d):
include(fn, file, d, "inherit")
__inherit_cache = data.getVar('__inherit_cache', d) or []
def get_statements(filename, absolute_filename, base_name):
def get_statements(filename, absolsute_filename, base_name):
global cached_statements
try:
return cached_statements[absolute_filename]
return cached_statements[absolsute_filename]
except KeyError:
file = open(absolute_filename, 'r')
file = open(absolsute_filename, 'r')
statements = ast.StatementGroup()
lineno = 0
while True:
while 1:
lineno = lineno + 1
s = file.readline()
if not s: break
@@ -103,7 +101,7 @@ def get_statements(filename, absolute_filename, base_name):
feeder(IN_PYTHON_EOF, "", filename, base_name, statements)
if filename.endswith(".bbclass") or filename.endswith(".inc"):
cached_statements[absolute_filename] = statements
cached_statements[absolsute_filename] = statements
return statements
def handle(fn, d, include):
@@ -120,7 +118,7 @@ def handle(fn, d, include):
bb.msg.debug(2, bb.msg.domain.Parsing, "BB " + fn + ": handle(data, include)")
(root, ext) = os.path.splitext(os.path.basename(fn))
base_name = "%s%s" % (root, ext)
base_name = "%s%s" % (root,ext)
init(d)
if ext == ".bbclass":
@@ -166,7 +164,7 @@ def handle(fn, d, include):
return d
def feeder(lineno, s, fn, root, statements):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__, __infunc__, __body__, classes, bb, __residue__
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__,__infunc__, __body__, classes, bb, __residue__
if __infunc__:
if s == '}':
__body__.append('')
@@ -183,7 +181,7 @@ def feeder(lineno, s, fn, root, statements):
__body__.append(s)
return
else:
ast.handlePythonMethod(statements, __inpython__, root, __body__, fn)
ast.handlePythonMethod(statements, root, __body__, fn)
__body__ = []
__inpython__ = False
@@ -210,8 +208,7 @@ def feeder(lineno, s, fn, root, statements):
m = __def_regexp__.match(s)
if m:
__body__.append(s)
__inpython__ = m.group(1)
__inpython__ = True
return
m = __export_func_regexp__.match(s)
@@ -234,9 +231,10 @@ def feeder(lineno, s, fn, root, statements):
ast.handleInherit(statements, m)
return
from bb.parse import ConfHandler
return ConfHandler.feeder(lineno, s, fn, statements)
# Add us to the handlers list
from .. import handlers
from bb.parse import handlers
handlers.append({'supports': supports, 'handle': handle, 'init': init})
del handlers


@@ -10,7 +10,7 @@
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2003, 2004 Phil Blundell
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
@@ -24,8 +24,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import re, bb.data, os
import bb.utils
import re, bb.data, os, sys
from bb.parse import ParseError, resolve_file, ast
#__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}]+)\s*(?P<colon>:)?(?P<ques>\?)?=\s*(?P<apo>['\"]?)(?P<value>.*)(?P=apo)$")
@@ -37,7 +36,10 @@ __export_regexp__ = re.compile( r"export\s+(.+)" )
def init(data):
topdir = bb.data.getVar('TOPDIR', data)
if not topdir:
bb.data.setVar('TOPDIR', os.getcwd(), data)
topdir = os.getcwd()
bb.data.setVar('TOPDIR', topdir, data)
if not bb.data.getVar('BBPATH', data):
bb.fatal("The BBPATH environment variable must be set")
def supports(fn, d):
@@ -58,7 +60,7 @@ def include(oldfn, fn, data, error_out):
if not os.path.isabs(fn):
dname = os.path.dirname(oldfn)
bbpath = "%s:%s" % (dname, bb.data.getVar("BBPATH", data, 1))
abs_fn = bb.utils.which(bbpath, fn)
abs_fn = bb.which(bbpath, fn)
if abs_fn:
fn = abs_fn
@@ -86,7 +88,7 @@ def handle(fn, data, include):
statements = ast.StatementGroup()
lineno = 0
while True:
while 1:
lineno = lineno + 1
s = f.readline()
if not s: break


@@ -25,9 +25,9 @@ File parsers for the BitBake build tools.
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
from __future__ import absolute_import
from . import ConfHandler
from . import BBHandler
__version__ = '1.0'
__all__ = [ 'ConfHandler', 'BBHandler']
import ConfHandler
import BBHandler


@@ -16,7 +16,6 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import bb, os
import bb.utils
try:
import sqlite3
@@ -34,63 +33,58 @@ class PersistData:
"""
BitBake Persistent Data Store
Used to store data in a central location such that other threads/tasks can
Used to store data in a central location such that other threads/tasks can
access them at some future date.
The "domain" is used as a key to isolate each data pool and in this
implementation corresponds to an SQL table. The SQL table consists of a
The "domain" is used as a key to isolate each data pool and in this
implementation corresponds to an SQL table. The SQL table consists of a
simple key and value pair.
Why sqlite? It handles all the locking issues for us.
"""
def __init__(self, d, persistent_database_connection):
if "connection" in persistent_database_connection:
self.cursor = persistent_database_connection["connection"].cursor()
return
def __init__(self, d):
self.cachedir = bb.data.getVar("PERSISTENT_DIR", d, True) or bb.data.getVar("CACHE", d, True)
if self.cachedir in [None, '']:
bb.msg.fatal(bb.msg.domain.PersistData, "Please set the 'PERSISTENT_DIR' or 'CACHE' variable.")
try:
os.stat(self.cachedir)
except OSError:
bb.utils.mkdirhier(self.cachedir)
bb.mkdirhier(self.cachedir)
self.cachefile = os.path.join(self.cachedir, "bb_persist_data.sqlite3")
self.cachefile = os.path.join(self.cachedir,"bb_persist_data.sqlite3")
bb.msg.debug(1, bb.msg.domain.PersistData, "Using '%s' as the persistent data cache" % self.cachefile)
connection = sqlite3.connect(self.cachefile, timeout=5, isolation_level=None)
persistent_database_connection["connection"] = connection
self.cursor = persistent_database_connection["connection"].cursor()
self.connection = sqlite3.connect(self.cachefile, timeout=5, isolation_level=None)
def addDomain(self, domain):
"""
Should be called before any domain is used
Creates it if it doesn't exist.
"""
self._execute("CREATE TABLE IF NOT EXISTS %s(key TEXT, value TEXT);" % domain)
self.connection.execute("CREATE TABLE IF NOT EXISTS %s(key TEXT, value TEXT);" % domain)
def delDomain(self, domain):
"""
Removes a domain and all the data it contains
"""
self._execute("DROP TABLE IF EXISTS %s;" % domain)
self.connection.execute("DROP TABLE IF EXISTS %s;" % domain)
def getKeyValues(self, domain):
"""
Return a list of key + value pairs for a domain
"""
ret = {}
data = self._execute("SELECT key, value from %s;" % domain)
data = self.connection.execute("SELECT key, value from %s;" % domain)
for row in data:
ret[str(row[0])] = str(row[1])
return ret
return ret
def getValue(self, domain, key):
"""
Return the value of a key for a domain
"""
data = self._execute("SELECT * from %s where key=?;" % domain, [key])
data = self.connection.execute("SELECT * from %s where key=?;" % domain, [key])
for row in data:
return row[1]
@@ -98,7 +92,7 @@ class PersistData:
"""
Sets the value of a key for a domain
"""
data = self._execute("SELECT * from %s where key=?;" % domain, [key])
data = self.connection.execute("SELECT * from %s where key=?;" % domain, [key])
rows = 0
for row in data:
rows = rows + 1
@@ -113,21 +107,15 @@ class PersistData:
"""
self._execute("DELETE from %s where key=?;" % domain, [key])
#
# We wrap the sqlite execute calls as on contended machines or single threaded
# systems we can have multiple processes trying to access the DB at once and it seems
# sqlite sometimes doesn't wait for the timeout. We therefore loop but put in an
# emergency brake too
#
def _execute(self, *query):
count = 0
while True:
while True:
try:
ret = self.cursor.execute(*query)
#print "Had to retry %s times" % count
return ret
except sqlite3.OperationalError as e:
if 'database is locked' in str(e) and count < 500:
count = count + 1
self.connection.execute(*query)
return
except sqlite3.OperationalError, e:
if 'database is locked' in str(e):
continue
raise
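
The persist_data changes above keep a shared sqlite connection and wrap execute() in a retry loop, since on contended machines "database is locked" errors can outlive sqlite's own timeout; the newer variant also adds an emergency brake after 500 retries. A self-contained sketch of that pattern (the class, table, and key names are illustrative):

import sqlite3

class TinyPersist:
    def __init__(self, path):
        # One connection, autocommit, generous lock timeout.
        self.conn = sqlite3.connect(path, timeout=5, isolation_level=None)
        self._execute("CREATE TABLE IF NOT EXISTS kv(key TEXT, value TEXT);")

    def _execute(self, *query):
        # Retry on lock contention, but give up eventually.
        count = 0
        while True:
            try:
                return self.conn.execute(*query)
            except sqlite3.OperationalError as e:
                if "database is locked" in str(e) and count < 500:
                    count += 1
                    continue
                raise

    def setValue(self, key, value):
        self._execute("DELETE FROM kv WHERE key=?;", [key])
        self._execute("INSERT INTO kv VALUES (?, ?);", [key, value])

    def getValue(self, key):
        for row in self._execute("SELECT value FROM kv WHERE key=?;", [key]):
            return row[0]

db = TinyPersist(":memory:")
db.setValue("BB_URI_HEADREVS", "abc123")
print(db.getValue("BB_URI_HEADREVS"))  # -> abc123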


@@ -62,7 +62,7 @@ def sortPriorities(pn, dataCache, pkg_pn = None):
def preferredVersionMatch(pe, pv, pr, preferred_e, preferred_v, preferred_r):
"""
Check if the version pe,pv,pr is the preferred one.
If there is preferred version defined and ends with '%', then pv has to start with that version after removing the '%'
If there is preferred version defined and ends with '%', then pv has to start with that version after removing the '%'
"""
if (pr == preferred_r or preferred_r == None):
if (pe == preferred_e or preferred_e == None):
@@ -103,7 +103,7 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
for file_set in pkg_pn:
for f in file_set:
pe, pv, pr = dataCache.pkg_pepvpr[f]
pe,pv,pr = dataCache.pkg_pepvpr[f]
if preferredVersionMatch(pe, pv, pr, preferred_e, preferred_v, preferred_r):
preferred_file = f
preferred_ver = (pe, pv, pr)
@@ -136,7 +136,7 @@ def findLatestProvider(pn, cfgData, dataCache, file_set):
latest_p = 0
latest_f = None
for file_name in file_set:
pe, pv, pr = dataCache.pkg_pepvpr[file_name]
pe,pv,pr = dataCache.pkg_pepvpr[file_name]
dp = dataCache.pkg_dp[file_name]
if (latest is None) or ((latest_p == dp) and (utils.vercmp(latest, (pe, pv, pr)) < 0)) or (dp > latest_p):
@@ -169,14 +169,14 @@ def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
def _filterProviders(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
Take a list of providers and filter/reorder according to the
environment variables and previous build results
"""
eligible = []
preferred_versions = {}
sortpkg_pn = {}
# The order of providers depends on the order of the files on the disk
# The order of providers depends on the order of the files on the disk
# up to here. Sort pkg_pn to make dependency issues reproducible rather
# than effectively random.
providers.sort()
@@ -198,7 +198,7 @@ def _filterProviders(providers, item, cfgData, dataCache):
if preferred_versions[pn][1]:
eligible.append(preferred_versions[pn][1])
# Now add latest versions
# Now add latest verisons
for pn in sortpkg_pn:
if pn in preferred_versions and preferred_versions[pn][1]:
continue
@@ -226,7 +226,7 @@ def _filterProviders(providers, item, cfgData, dataCache):
def filterProviders(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
Take a list of providers and filter/reorder according to the
environment variables and previous build results
Takes a "normal" target item
"""
@@ -254,7 +254,7 @@ def filterProviders(providers, item, cfgData, dataCache):
def filterProvidersRunTime(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
Take a list of providers and filter/reorder according to the
environment variables and previous build results
Takes a "runtime" target item
"""
@@ -297,7 +297,7 @@ def getRuntimeProviders(dataCache, rdepend):
rproviders = []
if rdepend in dataCache.rproviders:
rproviders += dataCache.rproviders[rdepend]
rproviders += dataCache.rproviders[rdepend]
if rdepend in dataCache.packages:
rproviders += dataCache.packages[rdepend]
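
preferredVersionMatch above treats a PREFERRED_VERSION ending in '%' as a prefix wildcard, so "1.2%" matches any 1.2.x release. A compact sketch of just that comparison (version_matches is a made-up helper name):

def version_matches(pv, preferred_v):
    # None means "no preference"; a trailing '%' makes it a prefix match.
    if preferred_v is None:
        return True
    if preferred_v.endswith('%'):
        return pv.startswith(preferred_v[:-1])
    return pv == preferred_v

print(version_matches("1.2.3", "1.2%"))  # -> True
print(version_matches("1.3.0", "1.2%"))  # -> False
print(version_matches("1.3.0", None))    # -> True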

File diff suppressed because it is too large


@@ -102,7 +102,7 @@ class BBUIEventQueue:
def queue_event(self, event):
self.eventQueue.append(event)
def system_quit( self ):
def system_quit( self ):
bb.event.unregister_UIHhandler(self.EventHandle)
class BitBakeServer():
@@ -115,7 +115,7 @@ class BitBakeServer():
def register_idle_function(self, function, data):
"""Register a function to be called while the server is idle"""
assert hasattr(function, '__call__')
assert callable(function)
self._idlefuns[function] = data
def idle_commands(self, delay):
@@ -140,7 +140,6 @@ class BitBakeServer():
except:
import traceback
traceback.print_exc()
self.commands.runCommand(["stateShutdown"])
pass
if nextsleep is not None:
#print "Sleeping for %s (%s)" % (nextsleep, delay)
@@ -179,3 +178,4 @@ class BitBakeServerConnection():
self.connection.terminateServer()
except:
pass


@@ -42,7 +42,7 @@ from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import inspect, select
if sys.hexversion < 0x020600F0:
print("Sorry, python 2.6 or later is required for bitbake's XMLRPC mode")
print "Sorry, python 2.6 or later is required for bitbake's XMLRPC mode"
sys.exit(1)
class BitBakeServerCommands():
@@ -74,7 +74,7 @@ class BitBakeServerCommands():
Trigger the server to quit
"""
self.server.quit = True
print("Server (cooker) exitting")
print "Server (cooker) exitting"
return
def ping(self):
@@ -89,8 +89,8 @@ class BitBakeServer(SimpleXMLRPCServer):
def __init__(self, cooker, interface = ("localhost", 0)):
"""
Constructor
"""
Constructor
"""
SimpleXMLRPCServer.__init__(self, interface,
requestHandler=SimpleXMLRPCRequestHandler,
logRequests=False, allow_none=True)
@@ -112,7 +112,7 @@ class BitBakeServer(SimpleXMLRPCServer):
def register_idle_function(self, function, data):
"""Register a function to be called while the server is idle"""
assert hasattr(function, '__call__')
assert callable(function)
self._idlefuns[function] = data
def serve_forever(self):
@@ -146,7 +146,7 @@ class BitBakeServer(SimpleXMLRPCServer):
traceback.print_exc()
pass
if nextsleep is None and len(self._idlefuns) > 0:
nextsleep = 0
nextsleep = 0
self.timeout = nextsleep
# Tell idle functions we're exiting
for function, data in self._idlefuns.items():
@@ -175,7 +175,7 @@ class BitBakeServerConnection():
def terminate(self):
# Don't wait for server indefinitely
import socket
socket.setdefaulttimeout(2)
socket.setdefaulttimeout(2)
try:
self.events.system_quit()
except:
@@ -184,3 +184,4 @@ class BitBakeServerConnection():
self.connection.terminateServer()
except:
pass


@@ -52,14 +52,12 @@ PROBLEMS:
# Import and setup global variables
##########################################################################
from __future__ import print_function
from functools import reduce
try:
set
except NameError:
from sets import Set as set
import sys, os, readline, socket, httplib, urllib, commands, popen2, shlex, Queue, fnmatch
from bb import data, parse, build, cache, taskdata, runqueue, providers as Providers
import sys, os, readline, socket, httplib, urllib, commands, popen2, copy, shlex, Queue, fnmatch
from bb import data, parse, build, fatal, cache, taskdata, runqueue, providers as Providers
__version__ = "0.5.3.1"
__credits__ = """BitBake Shell Version %s (C) 2005 Michael 'Mickey' Lauer <mickey@Vanille.de>
@@ -100,7 +98,7 @@ class BitBakeShellCommands:
def _checkParsed( self ):
if not parsed:
print("SHELL: This command needs to parse bbfiles...")
print "SHELL: This command needs to parse bbfiles..."
self.parse( None )
def _findProvider( self, item ):
@@ -121,28 +119,28 @@ class BitBakeShellCommands:
"""Register a new name for a command"""
new, old = params
if not old in cmds:
print("ERROR: Command '%s' not known" % old)
print "ERROR: Command '%s' not known" % old
else:
cmds[new] = cmds[old]
print("OK")
print "OK"
alias.usage = "<alias> <command>"
def buffer( self, params ):
"""Dump specified output buffer"""
index = params[0]
print(self._shell.myout.buffer( int( index ) ))
print self._shell.myout.buffer( int( index ) )
buffer.usage = "<index>"
def buffers( self, params ):
"""Show the available output buffers"""
commands = self._shell.myout.bufferedCommands()
if not commands:
print("SHELL: No buffered commands available yet. Start doing something.")
print "SHELL: No buffered commands available yet. Start doing something."
else:
print("="*35, "Available Output Buffers", "="*27)
print "="*35, "Available Output Buffers", "="*27
for index, cmd in enumerate( commands ):
print("| %s %s" % ( str( index ).ljust( 3 ), cmd ))
print("="*88)
print "| %s %s" % ( str( index ).ljust( 3 ), cmd )
print "="*88
def build( self, params, cmd = "build" ):
"""Build a providee"""
@@ -151,7 +149,7 @@ class BitBakeShellCommands:
self._checkParsed()
names = globfilter( cooker.status.pkg_pn, globexpr )
if len( names ) == 0: names = [ globexpr ]
print("SHELL: Building %s" % ' '.join( names ))
print "SHELL: Building %s" % ' '.join( names )
td = taskdata.TaskData(cooker.configuration.abort)
localdata = data.createCopy(cooker.configuration.data)
@@ -170,22 +168,22 @@ class BitBakeShellCommands:
tasks.append([name, "do_%s" % cmd])
td.add_unresolved(localdata, cooker.status)
rq = runqueue.RunQueue(cooker, localdata, cooker.status, td, tasks)
rq.prepare_runqueue()
rq.execute_runqueue()
except Providers.NoProvider:
print("ERROR: No Provider")
print "ERROR: No Provider"
last_exception = Providers.NoProvider
except runqueue.TaskFailure as fnids:
except runqueue.TaskFailure, fnids:
for fnid in fnids:
print("ERROR: '%s' failed" % td.fn_index[fnid])
print "ERROR: '%s' failed" % td.fn_index[fnid]
last_exception = runqueue.TaskFailure
except build.EventException as e:
print("ERROR: Couldn't build '%s'" % names)
except build.EventException, e:
print "ERROR: Couldn't build '%s'" % names
last_exception = e
@@ -218,7 +216,7 @@ class BitBakeShellCommands:
if bbfile is not None:
os.system( "%s %s" % ( os.environ.get( "EDITOR", "vi" ), bbfile ) )
else:
print("ERROR: Nothing provides '%s'" % name)
print "ERROR: Nothing provides '%s'" % name
edit.usage = "<providee>"
def environment( self, params ):
@@ -241,14 +239,14 @@ class BitBakeShellCommands:
global last_exception
name = params[0]
bf = completeFilePath( name )
print("SHELL: Calling '%s' on '%s'" % ( cmd, bf ))
print "SHELL: Calling '%s' on '%s'" % ( cmd, bf )
try:
cooker.buildFile(bf, cmd)
except parse.ParseError:
print("ERROR: Unable to open or parse '%s'" % bf)
except build.EventException as e:
print("ERROR: Couldn't build '%s'" % name)
print "ERROR: Unable to open or parse '%s'" % bf
except build.EventException, e:
print "ERROR: Couldn't build '%s'" % name
last_exception = e
fileBuild.usage = "<bbfile>"
@@ -272,62 +270,62 @@ class BitBakeShellCommands:
def fileReparse( self, params ):
"""(re)Parse a bb file"""
bbfile = params[0]
print("SHELL: Parsing '%s'" % bbfile)
print "SHELL: Parsing '%s'" % bbfile
parse.update_mtime( bbfile )
cooker.bb_cache.cacheValidUpdate(bbfile)
fromCache = cooker.bb_cache.loadData(bbfile, cooker.configuration.data, cooker.status)
cooker.bb_cache.sync()
if False: #fromCache:
print("SHELL: File has not been updated, not reparsing")
print "SHELL: File has not been updated, not reparsing"
else:
print("SHELL: Parsed")
print "SHELL: Parsed"
fileReparse.usage = "<bbfile>"
def abort( self, params ):
"""Toggle abort task execution flag (see bitbake -k)"""
cooker.configuration.abort = not cooker.configuration.abort
print("SHELL: Abort Flag is now '%s'" % repr( cooker.configuration.abort ))
print "SHELL: Abort Flag is now '%s'" % repr( cooker.configuration.abort )
def force( self, params ):
"""Toggle force task execution flag (see bitbake -f)"""
cooker.configuration.force = not cooker.configuration.force
print("SHELL: Force Flag is now '%s'" % repr( cooker.configuration.force ))
print "SHELL: Force Flag is now '%s'" % repr( cooker.configuration.force )
def help( self, params ):
"""Show a comprehensive list of commands and their purpose"""
print("="*30, "Available Commands", "="*30)
print "="*30, "Available Commands", "="*30
for cmd in sorted(cmds):
function, numparams, usage, helptext = cmds[cmd]
print("| %s | %s" % (usage.ljust(30), helptext))
print("="*78)
function,numparams,usage,helptext = cmds[cmd]
print "| %s | %s" % (usage.ljust(30), helptext)
print "="*78
def lastError( self, params ):
"""Show the reason or log that was produced by the last BitBake event exception"""
if last_exception is None:
print("SHELL: No Errors yet (Phew)...")
print "SHELL: No Errors yet (Phew)..."
else:
reason, event = last_exception.args
print("SHELL: Reason for the last error: '%s'" % reason)
print "SHELL: Reason for the last error: '%s'" % reason
if ':' in reason:
msg, filename = reason.split( ':' )
filename = filename.strip()
print("SHELL: Dumping log file for last error:")
print "SHELL: Dumping log file for last error:"
try:
print(open( filename ).read())
print open( filename ).read()
except IOError:
print("ERROR: Couldn't open '%s'" % filename)
print "ERROR: Couldn't open '%s'" % filename
def match( self, params ):
"""Dump all files or providers matching a glob expression"""
what, globexpr = params
if what == "files":
self._checkParsed()
for key in globfilter( cooker.status.pkg_fn, globexpr ): print(key)
for key in globfilter( cooker.status.pkg_fn, globexpr ): print key
elif what == "providers":
self._checkParsed()
for key in globfilter( cooker.status.pkg_pn, globexpr ): print(key)
for key in globfilter( cooker.status.pkg_pn, globexpr ): print key
else:
print("Usage: match %s" % self.print_.usage)
print "Usage: match %s" % self.print_.usage
match.usage = "<files|providers> <glob>"
def new( self, params ):
@@ -337,15 +335,15 @@ class BitBakeShellCommands:
fulldirname = "%s/%s" % ( packages, dirname )
if not os.path.exists( fulldirname ):
print("SHELL: Creating '%s'" % fulldirname)
print "SHELL: Creating '%s'" % fulldirname
os.mkdir( fulldirname )
if os.path.exists( fulldirname ) and os.path.isdir( fulldirname ):
if os.path.exists( "%s/%s" % ( fulldirname, filename ) ):
print("SHELL: ERROR: %s/%s already exists" % ( fulldirname, filename ))
print "SHELL: ERROR: %s/%s already exists" % ( fulldirname, filename )
return False
print("SHELL: Creating '%s/%s'" % ( fulldirname, filename ))
print "SHELL: Creating '%s/%s'" % ( fulldirname, filename )
newpackage = open( "%s/%s" % ( fulldirname, filename ), "w" )
print("""DESCRIPTION = ""
print >>newpackage,"""DESCRIPTION = ""
SECTION = ""
AUTHOR = ""
HOMEPAGE = ""
@@ -372,7 +370,7 @@ SRC_URI = ""
#do_install() {
#
#}
""", file=newpackage)
"""
newpackage.close()
os.system( "%s %s/%s" % ( os.environ.get( "EDITOR" ), fulldirname, filename ) )
new.usage = "<directory> <filename>"
@@ -392,14 +390,14 @@ SRC_URI = ""
def pasteLog( self, params ):
"""Send the last event exception error log (if there is one) to http://rafb.net/paste"""
if last_exception is None:
print("SHELL: No Errors yet (Phew)...")
print "SHELL: No Errors yet (Phew)..."
else:
reason, event = last_exception.args
print("SHELL: Reason for the last error: '%s'" % reason)
print "SHELL: Reason for the last error: '%s'" % reason
if ':' in reason:
msg, filename = reason.split( ':' )
filename = filename.strip()
print("SHELL: Pasting log file to pastebin...")
print "SHELL: Pasting log file to pastebin..."
file = open( filename ).read()
sendToPastebin( "contents of " + filename, file )
@@ -421,23 +419,23 @@ SRC_URI = ""
cooker.buildDepgraph()
global parsed
parsed = True
print()
print
def reparse( self, params ):
"""(re)Parse a providee's bb file"""
bbfile = self._findProvider( params[0] )
if bbfile is not None:
print("SHELL: Found bbfile '%s' for '%s'" % ( bbfile, params[0] ))
print "SHELL: Found bbfile '%s' for '%s'" % ( bbfile, params[0] )
self.fileReparse( [ bbfile ] )
else:
print("ERROR: Nothing provides '%s'" % params[0])
print "ERROR: Nothing provides '%s'" % params[0]
reparse.usage = "<providee>"
def getvar( self, params ):
"""Dump the contents of an outer BitBake environment variable"""
var = params[0]
value = data.getVar( var, cooker.configuration.data, 1 )
print(value)
print value
getvar.usage = "<variable>"
def peek( self, params ):
@@ -447,9 +445,9 @@ SRC_URI = ""
if bbfile is not None:
the_data = cooker.bb_cache.loadDataFull(bbfile, cooker.configuration.data)
value = the_data.getVar( var, 1 )
print(value)
print value
else:
print("ERROR: Nothing provides '%s'" % name)
print "ERROR: Nothing provides '%s'" % name
peek.usage = "<providee> <variable>"
def poke( self, params ):
@@ -457,7 +455,7 @@ SRC_URI = ""
name, var, value = params
bbfile = self._findProvider( name )
if bbfile is not None:
print("ERROR: Sorry, this functionality is currently broken")
print "ERROR: Sorry, this functionality is currently broken"
#d = cooker.pkgdata[bbfile]
#data.setVar( var, value, d )
@@ -465,7 +463,7 @@ SRC_URI = ""
#cooker.pkgdata.setDirty(bbfile, d)
#print "OK"
else:
print("ERROR: Nothing provides '%s'" % name)
print "ERROR: Nothing provides '%s'" % name
poke.usage = "<providee> <variable> <value>"
def print_( self, params ):
@@ -473,12 +471,12 @@ SRC_URI = ""
what = params[0]
if what == "files":
self._checkParsed()
for key in cooker.status.pkg_fn: print(key)
for key in cooker.status.pkg_fn: print key
elif what == "providers":
self._checkParsed()
for key in cooker.status.providers: print(key)
for key in cooker.status.providers: print key
else:
print("Usage: print %s" % self.print_.usage)
print "Usage: print %s" % self.print_.usage
print_.usage = "<files|providers>"
def python( self, params ):
@@ -498,7 +496,7 @@ SRC_URI = ""
"""Set an outer BitBake environment variable"""
var, value = params
data.setVar( var, value, cooker.configuration.data )
print("OK")
print "OK"
setVar.usage = "<variable> <value>"
def rebuild( self, params ):
@@ -510,7 +508,7 @@ SRC_URI = ""
def shell( self, params ):
"""Execute a shell command and dump the output"""
if params != "":
print(commands.getoutput( " ".join( params ) ))
print commands.getoutput( " ".join( params ) )
shell.usage = "<...>"
def stage( self, params ):
@@ -520,17 +518,17 @@ SRC_URI = ""
def status( self, params ):
"""<just for testing>"""
print("-" * 78)
print("building list = '%s'" % cooker.building_list)
print("build path = '%s'" % cooker.build_path)
print("consider_msgs_cache = '%s'" % cooker.consider_msgs_cache)
print("build stats = '%s'" % cooker.stats)
if last_exception is not None: print("last_exception = '%s'" % repr( last_exception.args ))
print("memory output contents = '%s'" % self._shell.myout._buffer)
print "-" * 78
print "building list = '%s'" % cooker.building_list
print "build path = '%s'" % cooker.build_path
print "consider_msgs_cache = '%s'" % cooker.consider_msgs_cache
print "build stats = '%s'" % cooker.stats
if last_exception is not None: print "last_exception = '%s'" % repr( last_exception.args )
print "memory output contents = '%s'" % self._shell.myout._buffer
def test( self, params ):
"""<just for testing>"""
print("testCommand called with '%s'" % params)
print "testCommand called with '%s'" % params
def unpack( self, params ):
"""Execute 'unpack' on a providee"""
@@ -555,12 +553,12 @@ SRC_URI = ""
try:
providers = cooker.status.providers[item]
except KeyError:
print("SHELL: ERROR: Nothing provides", preferred)
print "SHELL: ERROR: Nothing provides", preferred
else:
for provider in providers:
if provider == pf: provider = " (***) %s" % provider
else: provider = " %s" % provider
print(provider)
print provider
which.usage = "<providee>"
##########################################################################
@@ -585,7 +583,7 @@ def sendToPastebin( desc, content ):
mydata["nick"] = "%s@%s" % ( os.environ.get( "USER", "unknown" ), socket.gethostname() or "unknown" )
mydata["text"] = content
params = urllib.urlencode( mydata )
headers = {"Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain"}
headers = {"Content-type": "application/x-www-form-urlencoded","Accept": "text/plain"}
host = "rafb.net"
conn = httplib.HTTPConnection( "%s:80" % host )
@@ -596,9 +594,9 @@ def sendToPastebin( desc, content ):
if response.status == 302:
location = response.getheader( "location" ) or "unknown"
print("SHELL: Pasted to http://%s%s" % ( host, location ))
print "SHELL: Pasted to http://%s%s" % ( host, location )
else:
print("ERROR: %s %s" % ( response.status, response.reason ))
print "ERROR: %s %s" % ( response.status, response.reason )
def completer( text, state ):
"""Return a possible readline completion"""
@@ -645,7 +643,7 @@ def columnize( alist, width = 80 ):
return reduce(lambda line, word, width=width: '%s%s%s' %
(line,
' \n'[(len(line[line.rfind('\n')+1:])
+ len(word.split('\n', 1)[0]
+ len(word.split('\n',1)[0]
) >= width)],
word),
alist
@@ -720,7 +718,7 @@ class BitBakeShell:
except IOError:
pass # It doesn't exist yet.
print(__credits__)
print __credits__
def cleanup( self ):
"""Write readline history and clean up resources"""
@@ -728,7 +726,7 @@ class BitBakeShell:
try:
readline.write_history_file( self.historyfilename )
except:
print("SHELL: Unable to save command history")
print "SHELL: Unable to save command history"
def registerCommand( self, command, function, numparams = 0, usage = "", helptext = "" ):
"""Register a command"""
@@ -742,11 +740,11 @@ class BitBakeShell:
try:
function, numparams, usage, helptext = cmds[command]
except KeyError:
print("SHELL: ERROR: '%s' command is not a valid command." % command)
print "SHELL: ERROR: '%s' command is not a valid command." % command
self.myout.removeLast()
else:
if (numparams != -1) and (not len( params ) == numparams):
print("Usage: '%s'" % usage)
print "Usage: '%s'" % usage
return
result = function( self.commands, params )
@@ -761,7 +759,7 @@ class BitBakeShell:
if not cmdline:
continue
if "|" in cmdline:
print("ERROR: '|' in startup file is not allowed. Ignoring line")
print "ERROR: '|' in startup file is not allowed. Ignoring line"
continue
self.commandQ.put( cmdline.strip() )
@@ -803,10 +801,10 @@ class BitBakeShell:
sys.stdout.write( pipe.fromchild.read() )
#
except EOFError:
print()
print
return
except KeyboardInterrupt:
print()
print
##########################################################################
# Start function - called from the BitBake command line utility
@@ -821,4 +819,4 @@ def start( aCooker ):
bbshell.cleanup()
if __name__ == "__main__":
print("SHELL: Sorry, this program should only be called by BitBake.")
print "SHELL: Sorry, this program should only be called by BitBake."


@@ -1,260 +0,0 @@
import hashlib
import re
try:
import cPickle as pickle
except ImportError:
import pickle
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")
def init(d, dumpsigs):
siggens = [obj for obj in globals().itervalues()
if type(obj) is type and issubclass(obj, SignatureGenerator)]
desired = bb.data.getVar("BB_SIGNATURE_HANDLER", d, True) or "noop"
for sg in siggens:
if desired == sg.name:
return sg(d, dumpsigs)
break
else:
bb.error("Invalid signature generator '%s', using default 'noop' generator" % desired)
bb.error("Available generators: %s" % ", ".join(obj.name for obj in siggens))
return SignatureGenerator(d, dumpsigs)
class SignatureGenerator(object):
"""
"""
name = "noop"
def __init__(self, data, dumpsigs):
return
def finalise(self, fn, d):
return
class SignatureGeneratorBasic(SignatureGenerator):
"""
"""
name = "basic"
def __init__(self, data, dumpsigs):
self.basehash = {}
self.taskhash = {}
self.taskdeps = {}
self.runtaskdeps = {}
self.gendeps = {}
self.lookupcache = {}
self.basewhitelist = (data.getVar("BB_HASHBASE_WHITELIST", True) or "").split()
self.taskwhitelist = data.getVar("BB_HASHTASK_WHITELIST", True) or None
if self.taskwhitelist:
self.twl = re.compile(self.taskwhitelist)
else:
self.twl = None
self.dumpsigs = dumpsigs
def _build_data(self, fn, d):
taskdeps, gendeps = bb.data.generate_dependencies(d)
basehash = {}
lookupcache = {}
for task in taskdeps:
data = d.getVar(task, False)
lookupcache[task] = data
for dep in sorted(taskdeps[task]):
if dep in self.basewhitelist:
continue
if dep in lookupcache:
var = lookupcache[dep]
else:
var = d.getVar(dep, False)
lookupcache[dep] = var
if var:
data = data + var
self.basehash[fn + "." + task] = hashlib.md5(data).hexdigest()
#bb.note("Hash for %s is %s" % (task, tashhash[task]))
if self.dumpsigs:
self.taskdeps[fn] = taskdeps
self.gendeps[fn] = gendeps
self.lookupcache[fn] = lookupcache
return taskdeps
def finalise(self, fn, d, variant):
if variant:
fn = "virtual:" + variant + ":" + fn
taskdeps = self._build_data(fn, d)
#Slow but can be useful for debugging mismatched basehashes
#for task in self.taskdeps[fn]:
# self.dump_sigtask(fn, task, d.getVar("STAMP", True), False)
for task in taskdeps:
d.setVar("BB_BASEHASH_task-%s" % task, self.basehash[fn + "." + task])
def get_taskhash(self, fn, task, deps, dataCache):
k = fn + "." + task
data = dataCache.basetaskhash[k]
self.runtaskdeps[k] = deps
for dep in sorted(deps):
if self.twl and self.twl.search(dataCache.pkg_fn[fn]):
#bb.note("Skipping %s" % dep)
continue
if dep not in self.taskhash:
bb.fatal("%s is not in taskhash, caller isn't calling in dependency order?", dep)
data = data + self.taskhash[dep]
h = hashlib.md5(data).hexdigest()
self.taskhash[k] = h
#d.setVar("BB_TASKHASH_task-%s" % task, taskhash[task])
return h
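get_taskhash above builds a task's hash by concatenating the base hash with each (sorted) dependency's hash and digesting the result. A self-contained sketch of that chaining, with hypothetical inputs:

import hashlib

def chained_hash(basehash, dep_hashes):
    # Fixed iteration order matters: a different order would yield a
    # different digest for the same set of dependencies.
    data = basehash
    for dep in sorted(dep_hashes):
        data = data + dep_hashes[dep]
    return hashlib.md5(data.encode("utf-8")).hexdigest()

# chained_hash("aab0...", {"foo.bb.do_compile": "c1d2..."})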
def set_taskdata(self, hashes, deps):
self.runtaskdeps = deps
self.taskhash = hashes
def dump_sigtask(self, fn, task, stampbase, runtime):
k = fn + "." + task
if runtime == "customfile":
sigfile = stampbase
elif runtime:
sigfile = stampbase + "." + task + ".sigdata" + "." + self.taskhash[k]
else:
sigfile = stampbase + "." + task + ".sigbasedata" + "." + self.basehash[k]
data = {}
data['basewhitelist'] = self.basewhitelist
data['taskwhitelist'] = self.taskwhitelist
data['taskdeps'] = self.taskdeps[fn][task]
data['basehash'] = self.basehash[k]
data['gendeps'] = {}
data['varvals'] = {}
data['varvals'][task] = self.lookupcache[fn][task]
for dep in self.taskdeps[fn][task]:
if dep in self.basewhitelist:
continue
data['gendeps'][dep] = self.gendeps[fn][dep]
data['varvals'][dep] = self.lookupcache[fn][dep]
if runtime:
data['runtaskdeps'] = self.runtaskdeps[k]
data['runtaskhashes'] = {}
for dep in data['runtaskdeps']:
data['runtaskhashes'][dep] = self.taskhash[dep]
p = pickle.Pickler(file(sigfile, "wb"), -1)
p.dump(data)
def dump_sigs(self, dataCache):
for fn in self.taskdeps:
for task in self.taskdeps[fn]:
k = fn + "." + task
if k not in self.taskhash:
continue
if dataCache.basetaskhash[k] != self.basehash[k]:
bb.error("Bitbake's cached basehash does not match the one we just generated!")
bb.error("The mismatched hashes were %s and %s" % (dataCache.basetaskhash[k], self.basehash[k]))
self.dump_sigtask(fn, task, dataCache.stamp[fn], True)
def dump_this_task(outfile, d):
fn = d.getVar("BB_FILENAME", True)
task = "do_" + d.getVar("BB_CURRENTTASK", True)
bb.parse.siggen.dump_sigtask(fn, task, outfile, "customfile")
def compare_sigfiles(a, b):
p1 = pickle.Unpickler(file(a, "rb"))
a_data = p1.load()
p2 = pickle.Unpickler(file(b, "rb"))
b_data = p2.load()
#print "Checking"
#print str(a_data)
#print str(b_data)
def dict_diff(a, b):
sa = set(a.keys())
sb = set(b.keys())
common = sa & sb
changed = set()
for i in common:
if a[i] != b[i]:
changed.add(i)
added = sa - sb
removed = sb - sa
return changed, added, removed
if 'basewhitelist' in a_data and a_data['basewhitelist'] != b_data['basewhitelist']:
print "basewhitelist changed from %s to %s" % (a_data['basewhitelist'], b_data['basewhitelist'])
if 'taskwhitelist' in a_data and a_data['taskwhitelist'] != b_data['taskwhitelist']:
print "taskwhitelist changed from %s to %s" % (a_data['taskwhitelist'], b_data['taskwhitelist'])
if a_data['taskdeps'] != b_data['taskdeps']:
print "Task dependencies changed from %s to %s" % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps']))
if a_data['basehash'] != b_data['basehash']:
print "basehash changed from %s to %s" % (a_data['basehash'], b_data['basehash'])
changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'])
if changed:
for dep in changed:
print "List of dependencies for variable %s changed from %s to %s" % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep])
if added:
for dep in added:
print "Dependency on variable %s was added" % (dep)
if removed:
for dep in removed:
print "Dependency on Variable %s was removed" % (dep)
changed, added, removed = dict_diff(a_data['varvals'], b_data['varvals'])
if changed:
for dep in changed:
print "Variable %s value changed from %s to %s" % (dep, a_data['varvals'][dep], b_data['varvals'][dep])
#if added:
# print "Dependency on variable %s was added (value %s)" % (dep, b_data['gendeps'][dep])
#if removed:
# print "Dependency on Variable %s was removed (value %s)" % (dep, a_data['gendeps'][dep])
if 'runtaskdeps' in a_data and 'runtaskdeps' in b_data and a_data['runtaskdeps'] != b_data['runtaskdeps']:
print "Tasks this task depends on changed from %s to %s" % (a_data['taskdeps'], b_data['taskdeps'])
if 'runtaskhashes' in a_data:
for dep in a_data['runtaskhashes']:
if a_data['runtaskhashes'][dep] != b_data['runtaskhashes'][dep]:
print "Hash for dependent task %s changed from %s to %s" % (dep, a_data['runtaskhashes'][dep], b_data['runtaskhashes'][dep])
def dump_sigfile(a):
p1 = pickle.Unpickler(file(a, "rb"))
a_data = p1.load()
print "basewhitelist: %s" % (a_data['basewhitelist'])
print "taskwhitelist: %s" % (a_data['taskwhitelist'])
print "Task dependencies: %s" % (sorted(a_data['taskdeps']))
print "basehash: %s" % (a_data['basehash'])
for dep in a_data['gendeps']:
print "List of dependencies for variable %s is %s" % (dep, a_data['gendeps'][dep])
for dep in a_data['varvals']:
print "Variable %s value is %s" % (dep, a_data['varvals'][dep])
if 'runtaskdeps' in a_data:
print "Tasks this task depends on: %s" % (a_data['runtaskdeps'])
if 'runtaskhashes' in a_data:
for dep in a_data['runtaskhashes']:
print "Hash for dependent task %s is %s" % (dep, a_data['runtaskhashes'][dep])


@@ -34,7 +34,7 @@ def re_match_strings(target, strings):
for name in strings:
if (name==target or
re.search(name, target)!=None):
re.search(name,target)!=None):
return True
return False
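re_match_strings treats each entry in strings as either a literal name or a regular expression to search for in target. An equivalent one-expression sketch:

import re

def re_match_strings(target, strings):
    # True if target equals any entry or matches it as a regex.
    return any(name == target or re.search(name, target) is not None
               for name in strings)

# re_match_strings("glibc-locale", ["busybox", "glibc.*"])  -> True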
@@ -84,7 +84,7 @@ class TaskData:
def getrun_id(self, name):
"""
Return an ID number for the run target name.
Return an ID number for the run target name.
If it doesn't exist, create one.
"""
if not name in self.run_names_index:
@@ -95,7 +95,7 @@ class TaskData:
def getfn_id(self, name):
"""
Return an ID number for the filename.
Return an ID number for the filename.
If it doesn't exist, create one.
"""
if not name in self.fn_index:
@@ -271,7 +271,7 @@ class TaskData:
def get_unresolved_build_targets(self, dataCache):
"""
Return a list of build targets who's providers
Return a list of build targets who's providers
are unknown.
"""
unresolved = []
@@ -286,7 +286,7 @@ class TaskData:
def get_unresolved_run_targets(self, dataCache):
"""
Return a list of runtime targets who's providers
Return a list of runtime targets who's providers
are unknown.
"""
unresolved = []
@@ -304,7 +304,7 @@ class TaskData:
Return a list of providers of item
"""
targetid = self.getbuild_id(item)
return self.build_targets[targetid]
def get_dependees(self, itemid):
@@ -354,15 +354,20 @@ class TaskData:
self.add_provider_internal(cfgData, dataCache, item)
except bb.providers.NoProvider:
if self.abort:
if self.get_rdependees_str(item):
bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (item, self.get_dependees_str(item)))
else:
bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s'" % (item))
raise
self.remove_buildtarget(self.getbuild_id(item))
targetid = self.getbuild_id(item)
self.remove_buildtarget(targetid)
self.mark_external_target(item)
def add_provider_internal(self, cfgData, dataCache, item):
"""
Add the providers of item to the task data
Mark entries were specifically added externally as against dependencies
Mark entries were specifically added externally as against dependencies
added internally during dependency resolution
"""
@@ -370,7 +375,11 @@ class TaskData:
return
if not item in dataCache.providers:
bb.event.fire(bb.event.NoProvider(item, dependees=self.get_rdependees_str(item)), cfgData)
if self.get_rdependees_str(item):
bb.msg.note(2, bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (item, self.get_dependees_str(item)))
else:
bb.msg.note(2, bb.msg.domain.Provider, "Nothing PROVIDES '%s'" % (item))
bb.event.fire(bb.event.NoProvider(item), cfgData)
raise bb.providers.NoProvider(item)
if self.have_build_target(item):
@@ -382,7 +391,8 @@ class TaskData:
eligible = [p for p in eligible if not self.getfn_id(p) in self.failed_fnids]
if not eligible:
bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees_str(item)), cfgData)
bb.msg.note(2, bb.msg.domain.Provider, "No buildable provider PROVIDES '%s' but '%s' DEPENDS on or otherwise requires it. Enable debugging and see earlier logs to find unbuildable providers." % (item, self.get_dependees_str(item)))
bb.event.fire(bb.event.NoProvider(item), cfgData)
raise bb.providers.NoProvider(item)
if len(eligible) > 1 and foundUnique == False:
@@ -390,6 +400,8 @@ class TaskData:
providers_list = []
for fn in eligible:
providers_list.append(dataCache.pkg_fn[fn])
bb.msg.note(1, bb.msg.domain.Provider, "multiple providers are available for %s (%s);" % (item, ", ".join(providers_list)))
bb.msg.note(1, bb.msg.domain.Provider, "consider defining PREFERRED_PROVIDER_%s" % item)
bb.event.fire(bb.event.MultipleProviders(item, providers_list), cfgData)
self.consider_msgs_cache.append(item)
@@ -419,14 +431,16 @@ class TaskData:
all_p = bb.providers.getRuntimeProviders(dataCache, item)
if not all_p:
bb.event.fire(bb.event.NoProvider(item, runtime=True, dependees=self.get_rdependees_str(item)), cfgData)
bb.msg.error(bb.msg.domain.Provider, "'%s' RDEPENDS/RRECOMMENDS or otherwise requires the runtime entity '%s' but it wasn't found in any PACKAGE or RPROVIDES variables" % (self.get_rdependees_str(item), item))
bb.event.fire(bb.event.NoProvider(item, runtime=True), cfgData)
raise bb.providers.NoRProvider(item)
eligible, numberPreferred = bb.providers.filterProvidersRunTime(all_p, item, cfgData, dataCache)
eligible = [p for p in eligible if not self.getfn_id(p) in self.failed_fnids]
if not eligible:
bb.event.fire(bb.event.NoProvider(item, runtime=True, dependees=self.get_rdependees_str(item)), cfgData)
bb.msg.error(bb.msg.domain.Provider, "'%s' RDEPENDS/RRECOMMENDS or otherwise requires the runtime entity '%s' but it wasn't found in any PACKAGE or RPROVIDES variables of any buildable targets.\nEnable debugging and see earlier logs to find unbuildable targets." % (self.get_rdependees_str(item), item))
bb.event.fire(bb.event.NoProvider(item, runtime=True), cfgData)
raise bb.providers.NoRProvider(item)
if len(eligible) > 1 and numberPreferred == 0:
@@ -434,7 +448,9 @@ class TaskData:
providers_list = []
for fn in eligible:
providers_list.append(dataCache.pkg_fn[fn])
bb.event.fire(bb.event.MultipleProviders(item, providers_list, runtime=True), cfgData)
bb.msg.note(2, bb.msg.domain.Provider, "multiple providers are available for runtime %s (%s);" % (item, ", ".join(providers_list)))
bb.msg.note(2, bb.msg.domain.Provider, "consider defining a PREFERRED_PROVIDER entry to match runtime %s" % item)
bb.event.fire(bb.event.MultipleProviders(item,providers_list, runtime=True), cfgData)
self.consider_msgs_cache.append(item)
if numberPreferred > 1:
@@ -442,7 +458,9 @@ class TaskData:
providers_list = []
for fn in eligible:
providers_list.append(dataCache.pkg_fn[fn])
bb.event.fire(bb.event.MultipleProviders(item, providers_list, runtime=True), cfgData)
bb.msg.note(2, bb.msg.domain.Provider, "multiple providers are available for runtime %s (top %s entries preferred) (%s);" % (item, numberPreferred, ", ".join(providers_list)))
bb.msg.note(2, bb.msg.domain.Provider, "consider defining only one PREFERRED_PROVIDER entry to match runtime %s" % item)
bb.event.fire(bb.event.MultipleProviders(item,providers_list, runtime=True), cfgData)
self.consider_msgs_cache.append(item)
# run through the list until we find one that we can build
@@ -497,9 +515,8 @@ class TaskData:
self.fail_fnid(self.tasks_fnid[taskid], missing_list)
if self.abort and targetid in self.external_targets:
target = self.build_names_index[targetid]
bb.msg.error(bb.msg.domain.Provider, "Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s" % (target, missing_list))
raise bb.providers.NoProvider(target)
bb.msg.error(bb.msg.domain.Provider, "Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s" % (self.build_names_index[targetid], missing_list))
raise bb.providers.NoProvider
def remove_runtarget(self, targetid, missing_list = []):
"""
@@ -522,7 +539,7 @@ class TaskData:
Resolve all unresolved build and runtime targets
"""
bb.msg.note(1, bb.msg.domain.TaskData, "Resolving any missing task queue dependencies")
while True:
while 1:
added = 0
for target in self.get_unresolved_build_targets(dataCache):
try:
@@ -531,6 +548,10 @@ class TaskData:
except bb.providers.NoProvider:
targetid = self.getbuild_id(target)
if self.abort and targetid in self.external_targets:
if self.get_rdependees_str(target):
bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (target, self.get_dependees_str(target)))
else:
bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s'" % (target))
raise
self.remove_buildtarget(targetid)
for target in self.get_unresolved_run_targets(dataCache):
@@ -573,9 +594,9 @@ class TaskData:
bb.msg.debug(3, bb.msg.domain.TaskData, "tasks:")
for task in range(len(self.tasks_name)):
bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s - %s: %s" % (
task,
self.fn_index[self.tasks_fnid[task]],
self.tasks_name[task],
task,
self.fn_index[self.tasks_fnid[task]],
self.tasks_name[task],
self.tasks_tdepends[task]))
bb.msg.debug(3, bb.msg.domain.TaskData, "dependency ids (per fn):")
@@ -585,3 +606,5 @@ class TaskData:
bb.msg.debug(3, bb.msg.domain.TaskData, "runtime dependency ids (per fn):")
for fnid in self.rdepids:
bb.msg.debug(3, bb.msg.domain.TaskData, " %s %s: %s" % (fnid, self.fn_index[fnid], self.rdepids[fnid]))
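add_unresolved (shown in the hunks above) loops over the unresolved build and runtime targets, retrying until a whole pass adds nothing new and discarding targets with no providers. A schematic of that fixpoint loop, with a hypothetical try_add_provider callback; this is a simplified shape, not the TaskData implementation:

def resolve_all(targets, try_add_provider):
    # Repeat passes until nothing changes; whatever is left could not
    # be provided.
    unresolved = set(targets)
    while True:
        added = 0
        for target in sorted(unresolved):
            if try_add_provider(target):
                unresolved.discard(target)
                added += 1
        if added == 0:
            return unresolved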


@@ -15,3 +15,4 @@
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.


@@ -15,3 +15,4 @@
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.


@@ -28,7 +28,7 @@ import time
class BuildConfiguration:
""" Represents a potential *or* historic *or* concrete build. It
encompasses all the things that we need to tell bitbake to do to make it
build what we want it to build.
build what we want it to build.
It also stores the metadata URL and the set of possible machines (and the
distros / images / uris for these). Apart from the metadata URL these are
@@ -73,33 +73,34 @@ class BuildConfiguration:
return self.urls
# It might be a lot lot better if we stored these in like, bitbake conf
# file format.
@staticmethod
# file format.
@staticmethod
def load_from_file (filename):
f = open (filename, "r")
conf = BuildConfiguration()
with open(filename, "r") as f:
for line in f:
data = line.split (";")[1]
if (line.startswith ("metadata-url;")):
conf.metadata_url = data.strip()
continue
if (line.startswith ("url;")):
conf.urls += [data.strip()]
continue
if (line.startswith ("extra-url;")):
conf.extra_urls += [data.strip()]
continue
if (line.startswith ("machine;")):
conf.machine = data.strip()
continue
if (line.startswith ("distribution;")):
conf.distro = data.strip()
continue
if (line.startswith ("image;")):
conf.image = data.strip()
continue
for line in f.readlines():
data = line.split (";")[1]
if (line.startswith ("metadata-url;")):
conf.metadata_url = data.strip()
continue
if (line.startswith ("url;")):
conf.urls += [data.strip()]
continue
if (line.startswith ("extra-url;")):
conf.extra_urls += [data.strip()]
continue
if (line.startswith ("machine;")):
conf.machine = data.strip()
continue
if (line.startswith ("distribution;")):
conf.distro = data.strip()
continue
if (line.startswith ("image;")):
conf.image = data.strip()
continue
f.close ()
return conf
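load_from_file parses one record per line in a simple "key;value" format, with url and extra-url accumulating into lists. A minimal table-driven sketch of the same parser (dict-based rather than a BuildConfiguration, purely for illustration):

def parse_build_conf(lines):
    conf = {"urls": [], "extra_urls": []}
    scalar = {"metadata-url": "metadata_url", "machine": "machine",
              "distribution": "distro", "image": "image"}
    for line in lines:
        if ";" not in line:
            continue
        key, data = line.split(";", 1)
        data = data.strip()
        if key == "url":
            conf["urls"].append(data)
        elif key == "extra-url":
            conf["extra_urls"].append(data)
        elif key in scalar:
            conf[scalar[key]] = data
    return conf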
# Serialise to a file. This is part of the build process and we use this
@@ -139,13 +140,13 @@ class BuildResult(gobject.GObject):
".conf" in the directory for the build.
This is GObject so that it can be included in the TreeStore."""
(STATE_COMPLETE, STATE_FAILED, STATE_ONGOING) = \
(0, 1, 2)
def __init__ (self, parent, identifier):
gobject.GObject.__init__ (self)
self.date = None
self.date = None
self.files = []
self.status = None
@@ -156,8 +157,8 @@ class BuildResult(gobject.GObject):
# format build-<year><month><day>-<ordinal> we can easily
# pull it out.
# TODO: Better to stat a file?
(_, date, revision) = identifier.split ("-")
print(date)
(_ , date, revision) = identifier.split ("-")
print date
year = int (date[0:4])
month = int (date[4:6])
@@ -180,7 +181,7 @@ class BuildResult(gobject.GObject):
self.add_file (file)
def add_file (self, file):
# Just add the file for now. Don't care about the type.
# Just add the file for now. Don't care about the type.
self.files += [(file, None)]
class BuildManagerModel (gtk.TreeStore):
@@ -193,7 +194,7 @@ class BuildManagerModel (gtk.TreeStore):
def __init__ (self):
gtk.TreeStore.__init__ (self,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
@@ -206,7 +207,7 @@ class BuildManager (gobject.GObject):
"results" directory but is also used for starting a new build."""
__gsignals__ = {
'population-finished' : (gobject.SIGNAL_RUN_LAST,
'population-finished' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'populate-error' : (gobject.SIGNAL_RUN_LAST,
@@ -219,13 +220,13 @@ class BuildManager (gobject.GObject):
date = long (time.mktime (result.date.timetuple()))
# Add a top level entry for the build
self.model.set (iter,
self.model.set (iter,
BuildManagerModel.COL_IDENT, result.identifier,
BuildManagerModel.COL_DESC, result.conf.image,
BuildManagerModel.COL_MACHINE, result.conf.machine,
BuildManagerModel.COL_DISTRO, result.conf.distro,
BuildManagerModel.COL_BUILD_RESULT, result,
BuildManagerModel.COL_MACHINE, result.conf.machine,
BuildManagerModel.COL_DISTRO, result.conf.distro,
BuildManagerModel.COL_BUILD_RESULT, result,
BuildManagerModel.COL_DATE, date,
BuildManagerModel.COL_STATE, result.state)
@@ -256,7 +257,7 @@ class BuildManager (gobject.GObject):
while (iter):
(ident, state) = self.model.get(iter,
BuildManagerModel.COL_IDENT,
BuildManagerModel.COL_IDENT,
BuildManagerModel.COL_STATE)
if state == BuildResult.STATE_ONGOING:
@@ -384,8 +385,8 @@ class BuildManager (gobject.GObject):
build_directory])
server.runCommand(["buildTargets", [conf.image], "rootfs"])
except Exception as e:
print(e)
except Exception, e:
print e
class BuildManagerTreeView (gtk.TreeView):
""" The tree view for the build manager. This shows the historic builds
@@ -421,29 +422,29 @@ class BuildManagerTreeView (gtk.TreeView):
# Misc descriptiony thing
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn (None, renderer,
col = gtk.TreeViewColumn (None, renderer,
text=BuildManagerModel.COL_DESC)
self.append_column (col)
# Machine
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Machine", renderer,
col = gtk.TreeViewColumn ("Machine", renderer,
text=BuildManagerModel.COL_MACHINE)
self.append_column (col)
# distro
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Distribution", renderer,
col = gtk.TreeViewColumn ("Distribution", renderer,
text=BuildManagerModel.COL_DISTRO)
self.append_column (col)
# date (using a custom function for formatting the cell contents it
# takes epoch -> human readable string)
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Date", renderer,
col = gtk.TreeViewColumn ("Date", renderer,
text=BuildManagerModel.COL_DATE)
self.append_column (col)
col.set_cell_data_func (renderer,
col.set_cell_data_func (renderer,
self.date_format_custom_cell_data_func)
# For status.
@@ -453,3 +454,4 @@ class BuildManagerTreeView (gtk.TreeView):
self.append_column (col)
col.set_cell_data_func (renderer,
self.state_format_custom_cell_data_fun)


@@ -24,7 +24,7 @@ import gobject
class RunningBuildModel (gtk.TreeStore):
(COL_TYPE, COL_PACKAGE, COL_TASK, COL_MESSAGE, COL_ICON, COL_ACTIVE) = (0, 1, 2, 3, 4, 5)
def __init__ (self):
gtk.TreeStore.__init__ (self,
gtk.TreeStore.__init__ (self,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
@@ -34,7 +34,7 @@ class RunningBuildModel (gtk.TreeStore):
class RunningBuild (gobject.GObject):
__gsignals__ = {
'build-succeeded' : (gobject.SIGNAL_RUN_LAST,
'build-succeeded' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'build-failed' : (gobject.SIGNAL_RUN_LAST,
@@ -63,7 +63,7 @@ class RunningBuild (gobject.GObject):
# for the message.
if hasattr(event, 'pid'):
pid = event.pid
if pid in self.pids_to_task:
if self.pids_to_task.has_key(pid):
(package, task) = self.pids_to_task[pid]
parent = self.tasks_to_iter[(package, task)]
@@ -82,29 +82,29 @@ class RunningBuild (gobject.GObject):
# Add the message to the tree either at the top level if parent is
# None otherwise as a descendent of a task.
self.model.append (parent,
self.model.append (parent,
(event.__name__.split()[-1], # e.g. MsgWarn, MsgError
package,
package,
task,
event._message,
icon,
icon,
False))
elif isinstance(event, bb.build.TaskStarted):
(package, task) = (event._package, event._task)
# Save out this PID.
self.pids_to_task[pid] = (package, task)
self.pids_to_task[pid] = (package,task)
# Check if we already have this package in our model. If so then
# that can be the parent for the task. Otherwise we create a new
# top level for the package.
if ((package, None) in self.tasks_to_iter):
if (self.tasks_to_iter.has_key ((package, None))):
parent = self.tasks_to_iter[(package, None)]
else:
parent = self.model.append (None, (None,
package,
parent = self.model.append (None, (None,
package,
None,
"Package: %s" % (package),
"Package: %s" % (package),
None,
False))
self.tasks_to_iter[(package, None)] = parent
@@ -114,10 +114,10 @@ class RunningBuild (gobject.GObject):
self.model.set(parent, self.model.COL_ICON, "gtk-execute")
# Add an entry in the model for this task
i = self.model.append (parent, (None,
package,
i = self.model.append (parent, (None,
package,
task,
"Task: %s" % (task),
"Task: %s" % (task),
None,
False))
@@ -176,3 +176,5 @@ class RunningBuildTreeView (gtk.TreeView):
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Message", renderer, text=3)
self.append_column (col)


@@ -201,14 +201,14 @@ def init(server, eventHandler):
try:
cmdline = server.runCommand(["getCmdLineAction"])
if not cmdline or cmdline[0] != "generateDotGraph":
print("This UI is only compatible with the -g option")
print "This UI is only compatible with the -g option"
return
ret = server.runCommand(["generateDepTreeEvent", cmdline[1], cmdline[2]])
if ret != True:
print("Couldn't run command! %s" % ret)
print "Couldn't run command! %s" % ret
return
except xmlrpclib.Fault as x:
print("XMLRPC Fault getting commandline:\n %s" % x)
except xmlrpclib.Fault, x:
print "XMLRPC Fault getting commandline:\n %s" % x
return
shutdown = 0
@@ -233,8 +233,8 @@ def init(server, eventHandler):
x = event.sofar
y = event.total
if x == y:
print(("\nParsing finished. %d cached, %d parsed, %d skipped, %d masked, %d errors."
% ( event.cached, event.parsed, event.skipped, event.masked, event.errors)))
print("\nParsing finished. %d cached, %d parsed, %d skipped, %d masked, %d errors."
% ( event.cached, event.parsed, event.skipped, event.masked, event.errors))
pbar.hide()
gtk.gdk.threads_enter()
pbar.progress.set_fraction(float(x)/float(y))
@@ -250,7 +250,7 @@ def init(server, eventHandler):
if isinstance(event, bb.command.CookerCommandCompleted):
continue
if isinstance(event, bb.command.CookerCommandFailed):
print("Command execution failed: %s" % event.error)
print "Command execution failed: %s" % event.error
break
if isinstance(event, bb.cooker.CookerExit):
break
@@ -259,13 +259,14 @@ def init(server, eventHandler):
except KeyboardInterrupt:
if shutdown == 2:
print("\nThird Keyboard Interrupt, exit.\n")
print "\nThird Keyboard Interrupt, exit.\n"
break
if shutdown == 1:
print("\nSecond Keyboard Interrupt, stopping...\n")
print "\nSecond Keyboard Interrupt, stopping...\n"
server.runCommand(["stateStop"])
if shutdown == 0:
print("\nKeyboard Interrupt, closing down...\n")
print "\nKeyboard Interrupt, closing down...\n"
server.runCommand(["stateShutdown"])
shutdown = shutdown + 1
pass


@@ -25,13 +25,13 @@ from bb.ui.crumbs.runningbuild import RunningBuildTreeView, RunningBuild
def event_handle_idle_func (eventHandler, build):
# Consume as many messages as we can in the time available to us
event = eventHandler.getEvent()
while event:
build.handle_event (event)
event = eventHandler.getEvent()
# Consume as many messages as we can in the time available to us
event = eventHandler.getEvent()
while event:
build.handle_event (event)
event = eventHandler.getEvent()
return True
return True
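The idle handler above drains the UI event queue and returns True so the gobject main loop keeps rescheduling it (returning False would remove the source). The library-free shape of that pattern:

def drain_events(get_event, handle):
    # Consume everything queued right now, then ask to stay scheduled.
    event = get_event()
    while event:
        handle(event)
        event = get_event()
    return True

# Registered in the code above roughly as:
# gobject.timeout_add(100, event_handle_idle_func, eventHandler, running_build)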
class MainWindow (gtk.Window):
def __init__ (self):
@@ -55,15 +55,15 @@ def init (server, eventHandler):
window.cur_build_tv.set_model (running_build.model)
try:
cmdline = server.runCommand(["getCmdLineAction"])
print(cmdline)
print cmdline
if not cmdline:
return 1
ret = server.runCommand(cmdline)
if ret != True:
print("Couldn't get default commandline! %s" % ret)
print "Couldn't get default commandline! %s" % ret
return 1
except xmlrpclib.Fault as x:
print("XMLRPC Fault getting commandline:\n %s" % x)
except xmlrpclib.Fault, x:
print "XMLRPC Fault getting commandline:\n %s" % x
return 1
# Use a timeout function for probing the event queue to find out if we
@@ -74,3 +74,4 @@ def init (server, eventHandler):
running_build)
gtk.main()


@@ -18,9 +18,8 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from __future__ import division
import os
import sys
import itertools
import xmlrpclib
@@ -45,10 +44,10 @@ def init(server, eventHandler):
return 1
ret = server.runCommand(cmdline)
if ret != True:
print("Couldn't get default commandline! %s" % ret)
print "Couldn't get default commandline! %s" % ret
return 1
except xmlrpclib.Fault as x:
print("XMLRPC Fault getting commandline:\n %s" % x)
except xmlrpclib.Fault, x:
print "XMLRPC Fault getting commandline:\n %s" % x
return 1
shutdown = 0
@@ -66,39 +65,39 @@ def init(server, eventHandler):
if shutdown and helper.needUpdate:
activetasks, failedtasks = helper.getTasks()
if activetasks:
print("Waiting for %s active tasks to finish:" % len(activetasks))
print "Waiting for %s active tasks to finish:" % len(activetasks)
tasknum = 1
for task in activetasks:
print("%s: %s (pid %s)" % (tasknum, activetasks[task]["title"], task))
print "%s: %s (pid %s)" % (tasknum, activetasks[task]["title"], task)
tasknum = tasknum + 1
if isinstance(event, bb.msg.MsgPlain):
print(event._message)
print event._message
continue
if isinstance(event, bb.msg.MsgDebug):
print('DEBUG: ' + event._message)
print 'DEBUG: ' + event._message
continue
if isinstance(event, bb.msg.MsgNote):
print('NOTE: ' + event._message)
print 'NOTE: ' + event._message
continue
if isinstance(event, bb.msg.MsgWarn):
print('WARNING: ' + event._message)
print 'WARNING: ' + event._message
continue
if isinstance(event, bb.msg.MsgError):
return_value = 1
print('ERROR: ' + event._message)
print 'ERROR: ' + event._message
continue
if isinstance(event, bb.msg.MsgFatal):
return_value = 1
print('FATAL: ' + event._message)
continue
print 'FATAL: ' + event._message
break
if isinstance(event, bb.build.TaskFailed):
return_value = 1
logfile = event.logfile
if logfile and os.path.exists(logfile):
print("ERROR: Logfile of failure stored in: %s" % logfile)
if logfile:
print "ERROR: Logfile of failure stored in: %s" % logfile
if 1 or includelogs:
print("Log data follows:")
print "Log data follows:"
f = open(logfile, "r")
lines = []
while True:
@@ -111,19 +110,19 @@ def init(server, eventHandler):
if len(lines) > int(loglines):
lines.pop(0)
else:
print('| %s' % l)
print '| %s' % l
f.close()
if lines:
for line in lines:
print(line)
print line
if isinstance(event, bb.build.TaskBase):
print("NOTE: %s" % event._message)
print "NOTE: %s" % event._message
continue
if isinstance(event, bb.event.ParseProgress):
x = event.sofar
y = event.total
if os.isatty(sys.stdout.fileno()):
sys.stdout.write("\rNOTE: Handling BitBake files: %s (%04d/%04d) [%2d %%]" % ( next(parsespin), x, y, x*100//y ) )
sys.stdout.write("\rNOTE: Handling BitBake files: %s (%04d/%04d) [%2d %%]" % ( parsespin.next(), x, y, x*100/y ) )
sys.stdout.flush()
else:
if x == 1:
@@ -133,8 +132,8 @@ def init(server, eventHandler):
sys.stdout.write("done.")
sys.stdout.flush()
if x == y:
print(("\nParsing of %d .bb files complete (%d cached, %d parsed). %d targets, %d skipped, %d masked, %d errors."
% ( event.total, event.cached, event.parsed, event.virtuals, event.skipped, event.masked, event.errors)))
print("\nParsing of %d .bb files complete (%d cached, %d parsed). %d targets, %d skipped, %d masked, %d errors."
% ( event.total, event.cached, event.parsed, event.virtuals, event.skipped, event.masked, event.errors))
continue
if isinstance(event, bb.command.CookerCommandCompleted):
@@ -144,48 +143,39 @@ def init(server, eventHandler):
continue
if isinstance(event, bb.command.CookerCommandFailed):
return_value = 1
print("Command execution failed: %s" % event.error)
print "Command execution failed: %s" % event.error
break
if isinstance(event, bb.cooker.CookerExit):
break
if isinstance(event, bb.event.MultipleProviders):
print("NOTE: multiple providers are available for %s%s (%s)" % (event._is_runtime and "runtime " or "",
event._item,
", ".join(event._candidates)))
print("NOTE: consider defining a PREFERRED_PROVIDER entry to match %s" % event._item)
continue
if isinstance(event, bb.event.NoProvider):
if event._runtime:
r = "R"
else:
r = ""
if event._dependees:
print("ERROR: Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)" % (r, event._item, ", ".join(event._dependees), r))
else:
print("ERROR: Nothing %sPROVIDES '%s'" % (r, event._item))
continue
# ignore
if isinstance(event, (bb.event.BuildBase,
bb.event.StampUpdate,
bb.event.ConfigParsed,
bb.event.RecipeParsed,
bb.runqueue.runQueueEvent,
bb.runqueue.runQueueExitWait)):
if isinstance(event, bb.event.BuildStarted):
continue
print("Unknown Event: %s" % event)
if isinstance(event, bb.event.BuildCompleted):
continue
if isinstance(event, bb.event.MultipleProviders):
continue
if isinstance(event, bb.runqueue.runQueueEvent):
continue
if isinstance(event, bb.runqueue.runQueueExitWait):
continue
if isinstance(event, bb.event.StampUpdate):
continue
if isinstance(event, bb.event.ConfigParsed):
continue
if isinstance(event, bb.event.RecipeParsed):
continue
print "Unknown Event: %s" % event
except KeyboardInterrupt:
if shutdown == 2:
print("\nThird Keyboard Interrupt, exit.\n")
print "\nThird Keyboard Interrupt, exit.\n"
break
if shutdown == 1:
print("\nSecond Keyboard Interrupt, stopping...\n")
print "\nSecond Keyboard Interrupt, stopping...\n"
server.runCommand(["stateStop"])
if shutdown == 0:
print("\nKeyboard Interrupt, closing down...\n")
print "\nKeyboard Interrupt, closing down...\n"
server.runCommand(["stateShutdown"])
shutdown = shutdown + 1
pass


@@ -44,8 +44,6 @@
"""
from __future__ import division
import os, sys, curses, itertools, time
import bb
import xmlrpclib
@@ -138,7 +136,7 @@ class NCursesUI:
"""Thread Activity Window"""
def __init__( self, x, y, width, height ):
NCursesUI.DecoratedWindow.__init__( self, "Thread Activity", x, y, width, height )
def setStatus( self, thread, text ):
line = "%02d: %s" % ( thread, text )
width = self.dimensions[WIDTH]
@@ -201,8 +199,8 @@ class NCursesUI:
main_left = 0
main_top = 0
main_height = ( height // 3 * 2 )
main_width = ( width // 3 ) * 2
main_height = ( height / 3 * 2 )
main_width = ( width / 3 ) * 2
clo_left = main_left
clo_top = main_top + main_height
clo_height = height - main_height - main_top - 1
@@ -227,17 +225,17 @@ class NCursesUI:
helper = uihelper.BBUIHelper()
shutdown = 0
try:
cmdline = server.runCommand(["getCmdLineAction"])
if not cmdline:
return
ret = server.runCommand(cmdline)
if ret != True:
print("Couldn't get default commandlind! %s" % ret)
print "Couldn't get default commandlind! %s" % ret
return
except xmlrpclib.Fault as x:
print("XMLRPC Fault getting commandline:\n %s" % x)
except xmlrpclib.Fault, x:
print "XMLRPC Fault getting commandline:\n %s" % x
return
exitflag = False
@@ -248,7 +246,7 @@ class NCursesUI:
continue
helper.eventHandler(event)
#mw.appendText("%s\n" % event[0])
if isinstance(event, bb.build.TaskBase):
if isinstance(event, bb.build.Task):
mw.appendText("NOTE: %s\n" % event._message)
if isinstance(event, bb.msg.MsgDebug):
mw.appendText('DEBUG: ' + event._message + '\n')
@@ -265,10 +263,10 @@ class NCursesUI:
y = event.total
if x == y:
mw.setStatus("Idle")
mw.appendText("Parsing finished. %d cached, %d parsed, %d skipped, %d masked."
mw.appendText("Parsing finished. %d cached, %d parsed, %d skipped, %d masked."
% ( event.cached, event.parsed, event.skipped, event.masked ))
else:
mw.setStatus("Parsing: %s (%04d/%04d) [%2d %%]" % ( next(parsespin), x, y, x*100//y ) )
mw.setStatus("Parsing: %s (%04d/%04d) [%2d %%]" % ( parsespin.next(), x, y, x*100/y ) )
# if isinstance(event, bb.build.TaskFailed):
# if event.logfile:
# if data.getVar("BBINCLUDELOGS", d):
@@ -303,12 +301,12 @@ class NCursesUI:
taw.setText(0, 0, "")
if activetasks:
taw.appendText("Active Tasks:\n")
for task in activetasks.itervalues():
taw.appendText(task["title"])
for task in activetasks:
taw.appendText(task)
if failedtasks:
taw.appendText("Failed Tasks:\n")
for task in failedtasks:
taw.appendText(task["title"])
taw.appendText(task)
curses.doupdate()
except KeyboardInterrupt:
@@ -326,7 +324,7 @@ class NCursesUI:
def init(server, eventHandler):
if not os.isatty(sys.stdout.fileno()):
print("FATAL: Unable to run 'ncurses' UI without a TTY.")
print "FATAL: Unable to run 'ncurses' UI without a TTY."
return
ui = NCursesUI()
try:
@@ -334,3 +332,4 @@ def init(server, eventHandler):
except:
import traceback
traceback.print_exc()


@@ -24,7 +24,6 @@ import gtk.glade
import threading
import urllib2
import os
import contextlib
from bb.ui.crumbs.buildmanager import BuildManager, BuildConfiguration
from bb.ui.crumbs.buildmanager import BuildManagerTreeView
@@ -39,7 +38,7 @@ class MetaDataLoader(gobject.GObject):
on what machines are available. The distribution and images available for
the machine and the the uris to use for building the given machine."""
__gsignals__ = {
'success' : (gobject.SIGNAL_RUN_LAST,
'success' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'error' : (gobject.SIGNAL_RUN_LAST,
@@ -78,19 +77,20 @@ class MetaDataLoader(gobject.GObject):
def run (self):
result = {}
try:
with contextlib.closing (urllib2.urlopen (self.url)) as f:
# Parse the metadata format. The format is....
# <machine>;<default distro>|<distro>...;<default image>|<image>...;<type##url>|...
for line in f:
components = line.split(";")
if (len (components) < 4):
raise MetaDataLoader.LoaderThread.LoaderImportException
machine = components[0]
distros = components[1].split("|")
images = components[2].split("|")
urls = components[3].split("|")
f = urllib2.urlopen (self.url)
result[machine] = (distros, images, urls)
# Parse the metadata format. The format is....
# <machine>;<default distro>|<distro>...;<default image>|<image>...;<type##url>|...
for line in f.readlines():
components = line.split(";")
if (len (components) < 4):
raise MetaDataLoader.LoaderThread.LoaderImportException
machine = components[0]
distros = components[1].split("|")
images = components[2].split("|")
urls = components[3].split("|")
result[machine] = (distros, images, urls)
# Create an object representing this *potential*
# configuration. It can become concrete if the machine, distro
@@ -104,13 +104,13 @@ class MetaDataLoader(gobject.GObject):
gobject.idle_add (MetaDataLoader.emit_success_signal,
self.loader)
except MetaDataLoader.LoaderThread.LoaderImportException as e:
except MetaDataLoader.LoaderThread.LoaderImportException, e:
gobject.idle_add (MetaDataLoader.emit_error_signal, self.loader,
"Repository metadata corrupt")
except Exception as e:
except Exception, e:
gobject.idle_add (MetaDataLoader.emit_error_signal, self.loader,
"Unable to download repository metadata")
print(e)
print e
def try_fetch_from_url (self, url):
# Try and download the metadata. Firing a signal if successful
@@ -211,7 +211,7 @@ class BuildSetupDialog (gtk.Dialog):
# Build
button = gtk.Button ("_Build", None, True)
image = gtk.Image ()
image.set_from_stock (gtk.STOCK_EXECUTE, gtk.ICON_SIZE_BUTTON)
image.set_from_stock (gtk.STOCK_EXECUTE,gtk.ICON_SIZE_BUTTON)
button.set_image (image)
self.add_action_widget (button, BuildSetupDialog.RESPONSE_BUILD)
button.show_all ()
@@ -293,7 +293,7 @@ class BuildSetupDialog (gtk.Dialog):
if (active_iter):
self.configuration.machine = model.get(active_iter, 0)[0]
# Extract the chosen distro from the combo
# Extract the chosen distro from the combo
model = self.distribution_combo.get_model()
active_iter = self.distribution_combo.get_active_iter()
if (active_iter):
@@ -311,62 +311,62 @@ class BuildSetupDialog (gtk.Dialog):
#
# TODO: Should be a method on the RunningBuild class
def event_handle_timeout (eventHandler, build):
# Consume as many messages as we can ...
event = eventHandler.getEvent()
while event:
build.handle_event (event)
event = eventHandler.getEvent()
return True
# Consume as many messages as we can ...
event = eventHandler.getEvent()
while event:
build.handle_event (event)
event = eventHandler.getEvent()
return True
class MainWindow (gtk.Window):
# Callback that gets fired when the user hits a button in the
# BuildSetupDialog.
def build_dialog_box_response_cb (self, dialog, response_id):
conf = None
if (response_id == BuildSetupDialog.RESPONSE_BUILD):
dialog.update_configuration()
print(dialog.configuration.machine, dialog.configuration.distro, \
dialog.configuration.image)
conf = dialog.configuration
# Callback that gets fired when the user hits a button in the
# BuildSetupDialog.
def build_dialog_box_response_cb (self, dialog, response_id):
conf = None
if (response_id == BuildSetupDialog.RESPONSE_BUILD):
dialog.update_configuration()
print dialog.configuration.machine, dialog.configuration.distro, \
dialog.configuration.image
conf = dialog.configuration
dialog.destroy()
dialog.destroy()
if conf:
self.manager.do_build (conf)
if conf:
self.manager.do_build (conf)
def build_button_clicked_cb (self, button):
dialog = BuildSetupDialog ()
def build_button_clicked_cb (self, button):
dialog = BuildSetupDialog ()
# For some unknown reason Dialog.run causes nice little deadlocks ... :-(
dialog.connect ("response", self.build_dialog_box_response_cb)
dialog.show()
# For some unknown reason Dialog.run causes nice little deadlocks ... :-(
dialog.connect ("response", self.build_dialog_box_response_cb)
dialog.show()
def __init__ (self):
gtk.Window.__init__ (self)
def __init__ (self):
gtk.Window.__init__ (self)
# Pull in *just* the main vbox from the Glade XML data and then pack
# that inside the window
gxml = gtk.glade.XML (os.path.dirname(__file__) + "/crumbs/puccho.glade",
root = "main_window_vbox")
vbox = gxml.get_widget ("main_window_vbox")
self.add (vbox)
# Pull in *just* the main vbox from the Glade XML data and then pack
# that inside the window
gxml = gtk.glade.XML (os.path.dirname(__file__) + "/crumbs/puccho.glade",
root = "main_window_vbox")
vbox = gxml.get_widget ("main_window_vbox")
self.add (vbox)
# Create the tree views for the build manager view and the progress view
self.build_manager_view = BuildManagerTreeView()
self.running_build_view = RunningBuildTreeView()
# Create the tree views for the build manager view and the progress view
self.build_manager_view = BuildManagerTreeView()
self.running_build_view = RunningBuildTreeView()
# Grab the scrolled windows that we put the tree views into
self.results_scrolledwindow = gxml.get_widget ("results_scrolledwindow")
self.progress_scrolledwindow = gxml.get_widget ("progress_scrolledwindow")
# Grab the scrolled windows that we put the tree views into
self.results_scrolledwindow = gxml.get_widget ("results_scrolledwindow")
self.progress_scrolledwindow = gxml.get_widget ("progress_scrolledwindow")
# Put the tree views inside ...
self.results_scrolledwindow.add (self.build_manager_view)
self.progress_scrolledwindow.add (self.running_build_view)
# Put the tree views inside ...
self.results_scrolledwindow.add (self.build_manager_view)
self.progress_scrolledwindow.add (self.running_build_view)
# Hook up the build button...
self.build_button = gxml.get_widget ("main_toolbutton_build")
self.build_button.connect ("clicked", self.build_button_clicked_cb)
# Hook up the build button...
self.build_button = gxml.get_widget ("main_toolbutton_build")
self.build_button.connect ("clicked", self.build_button_clicked_cb)
# I'm not very happy about the current ownership of the RunningBuild. I have
# my suspicions that this object should be held by the BuildManager since we
@@ -383,11 +383,11 @@ def running_build_succeeded_cb (running_build, manager):
# BuildManager. It can then hook onto the signals directly and drive
# interesting things it cares about.
manager.notify_build_succeeded ()
print("build succeeded")
print "build succeeded"
def running_build_failed_cb (running_build, manager):
# As above
print("build failed")
print "build failed"
manager.notify_build_failed ()
def init (server, eventHandler):


@@ -19,7 +19,7 @@
"""
Use this class to fork off a thread to receive event callbacks from the bitbake
Use this class to fork off a thread to receive event callbacks from the bitbake
server and queue them for the UI to process. This process must be used to avoid
client/server deadlocks.
"""
@@ -110,15 +110,16 @@ class UIXMLRPCServer (SimpleXMLRPCServer):
return (sock, addr)
except socket.timeout:
pass
return (None, None)
return (None,None)
def close_request(self, request):
if request is None:
return
SimpleXMLRPCServer.close_request(self, request)
def process_request(self, request, client_address):
if request is None:
return
SimpleXMLRPCServer.process_request(self, request, client_address)


@@ -19,22 +19,10 @@ BitBake Utility Functions
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import re, fcntl, os, string, stat, shutil, time
import sys
import bb
import errno
import bb.msg
from commands import getstatusoutput
# Version comparison
separators = ".-"
# Context used in better_exec, eval
_context = {
"os": os,
"bb": bb,
"time": time,
}
import re, fcntl, os, types, bb, string, stat, shutil
from commands import getstatusoutput
def explode_version(s):
r = []
@@ -72,9 +60,9 @@ def vercmp_part(a, b):
if ca == None and cb == None:
return 0
if isinstance(ca, basestring):
if type(ca) is types.StringType:
sa = ca in separators
if isinstance(cb, basestring):
if type(cb) is types.StringType:
sb = cb in separators
if sa and not sb:
return -1
@@ -97,131 +85,6 @@ def vercmp(ta, tb):
r = vercmp_part(ra, rb)
return r
_package_weights_ = {"pre":-2, "p":0, "alpha":-4, "beta":-3, "rc":-1} # dicts are unordered
_package_ends_ = ["pre", "p", "alpha", "beta", "rc", "cvs", "bk", "HEAD" ] # so we need ordered list
def relparse(myver):
"""Parses the last elements of a version number into a triplet, that can
later be compared.
"""
number = 0
p1 = 0
p2 = 0
mynewver = myver.split('_')
if len(mynewver) == 2:
# an _package_weights_
number = float(mynewver[0])
match = 0
for x in _package_ends_:
elen = len(x)
if mynewver[1][:elen] == x:
match = 1
p1 = _package_weights_[x]
try:
p2 = float(mynewver[1][elen:])
except:
p2 = 0
break
if not match:
# normal number or number with letter at end
divider = len(myver)-1
if myver[divider:] not in "1234567890":
# letter at end
p1 = ord(myver[divider:])
number = float(myver[0:divider])
else:
number = float(myver)
else:
# normal number or number with letter at end
divider = len(myver)-1
if myver[divider:] not in "1234567890":
#letter at end
p1 = ord(myver[divider:])
number = float(myver[0:divider])
else:
number = float(myver)
return [number, p1, p2]
__vercmp_cache__ = {}
def vercmp_string(val1, val2):
"""This takes two version strings and returns an integer to tell you whether
the versions are the same, val1>val2 or val2>val1.
"""
# quick short-circuit
if val1 == val2:
return 0
valkey = val1 + " " + val2
# cache lookup
try:
return __vercmp_cache__[valkey]
try:
return - __vercmp_cache__[val2 + " " + val1]
except KeyError:
pass
except KeyError:
pass
# consider 1_p2 vc 1.1
# after expansion will become (1_p2,0) vc (1,1)
# then 1_p2 is compared with 1 before 0 is compared with 1
# to solve the bug we need to convert it to (1,0_p2)
# by splitting _prepart part and adding it back _after_expansion
val1_prepart = val2_prepart = ''
if val1.count('_'):
val1, val1_prepart = val1.split('_', 1)
if val2.count('_'):
val2, val2_prepart = val2.split('_', 1)
# replace '-' by '.'
# FIXME: Is it needed? can val1/2 contain '-'?
val1 = val1.split("-")
if len(val1) == 2:
val1[0] = val1[0] + "." + val1[1]
val2 = val2.split("-")
if len(val2) == 2:
val2[0] = val2[0] + "." + val2[1]
val1 = val1[0].split('.')
val2 = val2[0].split('.')
# add back decimal point so that .03 does not become "3" !
for x in range(1, len(val1)):
if val1[x][0] == '0' :
val1[x] = '.' + val1[x]
for x in range(1, len(val2)):
if val2[x][0] == '0' :
val2[x] = '.' + val2[x]
# extend varion numbers
if len(val2) < len(val1):
val2.extend(["0"]*(len(val1)-len(val2)))
elif len(val1) < len(val2):
val1.extend(["0"]*(len(val2)-len(val1)))
# add back _prepart tails
if val1_prepart:
val1[-1] += '_' + val1_prepart
if val2_prepart:
val2[-1] += '_' + val2_prepart
# The above code will extend version numbers out so they
# have the same number of digits.
for x in range(0, len(val1)):
cmp1 = relparse(val1[x])
cmp2 = relparse(val2[x])
for y in range(0, 3):
myret = cmp1[y] - cmp2[y]
if myret != 0:
__vercmp_cache__[valkey] = myret
return myret
__vercmp_cache__[valkey] = 0
return 0
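The removed vercmp_string splits each version into dotted components, pads the shorter one with zeroes, and compares piece by piece via relparse. A much-reduced sketch for plain dotted versions (it deliberately ignores the _pre/_alpha suffix weighting handled above):

def simple_vercmp(a, b):
    # "1.10" sorts after "1.9" numerically, which naive string
    # comparison would get wrong.
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    return (pa > pb) - (pa < pb)   # -1, 0 or 1

# simple_vercmp("1.10", "1.9") -> 1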
def explode_deps(s):
"""
Take an RDEPENDS style string of format:
@@ -250,10 +113,10 @@ def explode_dep_versions(s):
"""
Take an RDEPENDS style string of format:
"DEPEND1 (optional version) DEPEND2 (optional version) ..."
and return a dictionary of dependencies and versions.
and return a dictonary of dependencies and versions.
"""
r = {}
l = s.replace(",", "").split()
l = s.split()
lastdep = None
lastver = ""
inversion = False
@@ -275,61 +138,40 @@ def explode_dep_versions(s):
return r
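explode_dep_versions scans tokens with a small in-version state flag: an opening "(" starts collecting a version constraint for the previous dependency, and the closing ")" ends it. An equivalent compact sketch (the comma stripping matches the newer side of this hunk):

def explode_dep_versions(s):
    # "glibc (>= 2.5) dropbear" -> {"glibc": ">= 2.5", "dropbear": ""}
    r, lastdep, inversion = {}, None, False
    for token in s.replace(",", "").split():
        if inversion or token.startswith("("):
            part = token.lstrip("(").rstrip(")")
            if not inversion:
                inversion = True
                r[lastdep] = part
            else:
                r[lastdep] = r[lastdep] + " " + part
            if token.endswith(")"):
                inversion = False
        else:
            r[token] = ""
            lastdep = token
    return r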
def join_deps(deps):
"""
Take the result from explode_dep_versions and generate a dependency string
"""
result = []
for dep in deps:
if deps[dep]:
result.append(dep + " (" + deps[dep] + ")")
else:
result.append(dep)
return ", ".join(result)
def extend_deps(dest, src):
"""
Extend the results from explode_dep_versions by appending all of the items
in the second list, avoiding duplicates.
"""
for dep in src:
if dep not in dest:
dest[dep] = src[dep]
elif dest[dep] != src[dep]:
dest[dep] = src[dep]
def _print_trace(body, line):
"""
Print the Environment of a Text Body
"""
import bb
# print the environment of the method
min_line = max(1, line-4)
max_line = min(line + 4, len(body)-1)
for i in range(min_line, max_line + 1):
bb.msg.error(bb.msg.domain.Util, "Printing the environment of the function")
min_line = max(1,line-4)
max_line = min(line+4,len(body)-1)
for i in range(min_line,max_line+1):
bb.msg.error(bb.msg.domain.Util, "\t%.4d:%s" % (i, body[i-1]) )
def better_compile(text, file, realfile, mode = "exec"):
def better_compile(text, file, realfile):
"""
A better compile method. This method
will print the offending lines.
"""
try:
return compile(text, file, mode)
except Exception as e:
return compile(text, file, "exec")
except Exception, e:
import bb,sys
# split the text into lines again
body = text.split('\n')
bb.msg.error(bb.msg.domain.Util, "Error in compiling python function in: %s" % (realfile))
bb.msg.error(bb.msg.domain.Util, str(e))
if e.lineno:
bb.msg.error(bb.msg.domain.Util, "The lines leading to this error were:")
bb.msg.error(bb.msg.domain.Util, "\t%d:%s:'%s'" % (e.lineno, e.__class__.__name__, body[e.lineno-1]))
_print_trace(body, e.lineno)
else:
bb.msg.error(bb.msg.domain.Util, "The function causing this error was:")
for line in body:
bb.msg.error(bb.msg.domain.Util, line)
raise
bb.msg.error(bb.msg.domain.Util, "Error in compiling python function in: ", realfile)
bb.msg.error(bb.msg.domain.Util, "The lines leading to this error were:")
bb.msg.error(bb.msg.domain.Util, "\t%d:%s:'%s'" % (e.lineno, e.__class__.__name__, body[e.lineno-1]))
_print_trace(body, e.lineno)
# exit now
sys.exit(1)
def better_exec(code, context, text, realfile):
"""
@@ -337,40 +179,69 @@ def better_exec(code, context, text, realfile):
print the lines that are responsible for the
error.
"""
import bb.parse
import bb,sys
try:
exec(code, _context, context)
exec code in context
except:
(t, value, tb) = sys.exc_info()
(t,value,tb) = sys.exc_info()
if t in [bb.parse.SkipPackage, bb.build.FuncFailed]:
raise
# print the Header of the Error Message
bb.msg.error(bb.msg.domain.Util, "Error in executing python function in: %s" % realfile)
bb.msg.error(bb.msg.domain.Util, "Exception:%s Message:%s" % (t, value))
bb.msg.error(bb.msg.domain.Util, "Exception:%s Message:%s" % (t,value) )
# Strip 'us' from the stack (better_exec call)
tb = tb.tb_next
# let us find the line number now
while tb.tb_next:
tb = tb.tb_next
import traceback
tbextract = traceback.extract_tb(tb)
tbextract = "\n".join(traceback.format_list(tbextract))
bb.msg.error(bb.msg.domain.Util, "Traceback:")
for line in tbextract.split('\n'):
bb.msg.error(bb.msg.domain.Util, line)
line = traceback.tb_lineno(tb)
bb.msg.error(bb.msg.domain.Util, "The lines leading to this error were:")
_print_trace( text.split('\n'), line )
_print_trace( text.split('\n'), line )
raise
def simple_exec(code, context):
exec(code, _context, context)
def Enum(*names):
"""
A simple class to give Enum support
"""
def better_eval(source, locals):
return eval(source, _context, locals)
assert names, "Empty enums are not supported"
class EnumClass(object):
__slots__ = names
def __iter__(self): return iter(constants)
def __len__(self): return len(constants)
def __getitem__(self, i): return constants[i]
def __repr__(self): return 'Enum' + str(names)
def __str__(self): return 'enum ' + str(constants)
class EnumValue(object):
__slots__ = ('__value')
def __init__(self, value): self.__value = value
Value = property(lambda self: self.__value)
EnumType = property(lambda self: EnumType)
def __hash__(self): return hash(self.__value)
def __cmp__(self, other):
# C fans might want to remove the following assertion
# to make all enums comparable by ordinal value {;))
assert self.EnumType is other.EnumType, "Only values from the same enum are comparable"
return cmp(self.__value, other.__value)
def __invert__(self): return constants[maximum - self.__value]
def __nonzero__(self): return bool(self.__value)
def __repr__(self): return str(names[self.__value])
maximum = len(names) - 1
constants = [None] * len(names)
for i, each in enumerate(names):
val = EnumValue(i)
setattr(EnumClass, each, val)
constants[i] = val
constants = tuple(constants)
EnumType = EnumClass()
return EnumType
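The Enum helper above manufactures a class with one ordered EnumValue per name. A brief usage sketch (Python 2 semantics, since the values rely on __cmp__ and __nonzero__):

Status = Enum("PENDING", "RUNNING", "DONE")

assert len(Status) == 3
assert Status.PENDING != Status.DONE   # compared by ordinal via __cmp__
for state in Status:                   # __iter__ yields the values in order
    print(state)                       # PENDING, RUNNING, DONE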
def lockfile(name):
"""
@@ -379,36 +250,37 @@ def lockfile(name):
"""
path = os.path.dirname(name)
if not os.path.isdir(path):
import bb, sys
bb.msg.error(bb.msg.domain.Util, "Error, lockfile path does not exist!: %s" % path)
sys.exit(1)
while True:
# If we leave the lockfiles lying around there is no problem
# but we should clean up after ourselves. This gives potential
# for races though. To work around this, when we acquire the lock
# we check the file we locked was still the lock file on disk.
# by comparing inode numbers. If they don't match or the lockfile
# for races though. To work around this, when we acquire the lock
# we check the file we locked was still the lock file on disk.
# by comparing inode numbers. If they don't match or the lockfile
# no longer exists, we start again.
# This implementation is unfair since the last person to request the
# This implementation is unfair since the last person to request the
# lock is the most likely to win it.
try:
lf = open(name, "a + ")
lf = open(name, "a+")
fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
statinfo = os.fstat(lf.fileno())
if os.path.exists(lf.name):
statinfo2 = os.stat(lf.name)
if statinfo.st_ino == statinfo2.st_ino:
return lf
statinfo2 = os.stat(lf.name)
if statinfo.st_ino == statinfo2.st_ino:
return lf
# File no longer exists or changed, retry
lf.close
except Exception as e:
except Exception, e:
continue
def unlockfile(lf):
"""
Unlock a file locked using lockfile()
Unlock a file locked using lockfile()
"""
os.unlink(lf.name)
fcntl.flock(lf.fileno(), fcntl.LOCK_UN)
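lockfile/unlockfile implement acquire-then-verify: after taking the flock, the inode of the locked handle is compared with a fresh stat of the path, retrying if another process unlinked the file in between. Typical usage, as a sketch:

lf = lockfile("/tmp/example.lock")   # blocks until the lock is held
try:
    pass  # critical section: lf is guaranteed to still be the file on disk
finally:
    unlockfile(lf)                   # unlinks the file, then drops the flock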
@@ -424,7 +296,7 @@ def md5_file(filename):
except ImportError:
import md5
m = md5.new()
for line in open(filename):
m.update(line)
return m.hexdigest()
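md5_file above feeds the file to the digest line by line (with an hashlib/md5 import fallback for old interpreters). An equivalent chunked sketch, assuming hashlib is available:

import hashlib

def md5_file_chunked(filename):
    # Fixed-size binary chunks give the same digest as line-by-line
    # updates, without depending on line lengths.
    m = hashlib.md5()
    with open(filename, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            m.update(chunk)
    return m.hexdigest()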
@@ -452,7 +324,6 @@ def preserved_envvars_list():
'BB_PRESERVE_ENV',
'BB_ENV_WHITELIST',
'BB_ENV_EXTRAWHITE',
'BB_TASKHASH',
'COLORTERM',
'DBUS_SESSION_BUS_ADDRESS',
'DESKTOP_SESSION',
@@ -485,17 +356,19 @@ def filter_environment(good_vars):
are not known and may influence the build in a negative way.
"""
import bb
removed_vars = []
for key in os.environ.keys():
if key in good_vars:
continue
removed_vars.append(key)
os.unsetenv(key)
del os.environ[key]
if len(removed_vars):
bb.msg.debug(1, bb.msg.domain.Util, "Removed the following variables from the environment: %s" % (", ".join(removed_vars)))
bb.debug(1, "Removed the following variables from the environment:", ",".join(removed_vars))
return removed_vars
@@ -525,7 +398,7 @@ def build_environment(d):
"""
Build an environment from all exported variables.
"""
import bb.data
import bb
for var in bb.data.keys(d):
export = bb.data.getVarFlag(var, "export", d)
if export:
@@ -534,7 +407,7 @@ def build_environment(d):
def prunedir(topdir):
# Delete everything reachable from the directory named in 'topdir'.
# CAUTION: This is dangerous!
for root, dirs, files in os.walk(topdir, topdown = False):
for root, dirs, files in os.walk(topdir, topdown=False):
for name in files:
os.remove(os.path.join(root, name))
for name in dirs:
@@ -549,7 +422,7 @@ def prunedir(topdir):
# but thats possibly insane and suffixes is probably going to be small
#
def prune_suffix(var, suffixes, d):
# See if var ends with any of the suffixes listed and
# See if var ends with any of the suffixes listed and
# remove it if found
for suffix in suffixes:
if var.endswith(suffix):
@@ -561,172 +434,169 @@ def mkdirhier(dir):
directory already exists like os.makedirs
"""
bb.msg.debug(3, bb.msg.domain.Util, "mkdirhier(%s)" % dir)
try:
os.makedirs(dir)
bb.msg.debug(2, bb.msg.domain.Util, "created " + dir)
except OSError as e:
if e.errno != errno.EEXIST:
raise e
import stat
def movefile(src, dest, newmtime=None, sstat=None):
"""Moves a file from src to dest, preserving all permissions and
attributes; mtime will be preserved even when moving across
filesystems. Returns true on success and false on failure. Move is
atomic.
"""
#print "movefile(" + src + "," + dest + "," + str(newmtime) + "," + str(sstat) + ")"
try:
if not sstat:
sstat = os.lstat(src)
except Exception as e:
print("movefile: Stating source file failed...", e)
return None
destexists = 1
try:
dstat = os.lstat(dest)
except:
dstat = os.lstat(os.path.dirname(dest))
destexists = 0
if destexists:
if stat.S_ISLNK(dstat[stat.ST_MODE]):
try:
os.unlink(dest)
destexists = 0
except Exception as e:
pass
if stat.S_ISLNK(sstat[stat.ST_MODE]):
try:
target = os.readlink(src)
if destexists and not stat.S_ISDIR(dstat[stat.ST_MODE]):
os.unlink(dest)
os.symlink(target, dest)
#os.lchown(dest,sstat[stat.ST_UID],sstat[stat.ST_GID])
os.unlink(src)
return os.lstat(dest)
except Exception as e:
print("movefile: failed to properly create symlink:", dest, "->", target, e)
return None
renamefailed = 1
if sstat[stat.ST_DEV] == dstat[stat.ST_DEV]:
try:
os.rename(src, dest)
renamefailed = 0
except Exception as e:
if e[0] != errno.EXDEV:
# Some random error.
print("movefile: Failed to move", src, "to", dest, e)
return None
# Invalid cross-device link: 'bind' mounted or actually cross-device
if renamefailed:
didcopy = 0
if stat.S_ISREG(sstat[stat.ST_MODE]):
try: # For safety copy then move it over.
shutil.copyfile(src, dest + "#new")
os.rename(dest + "#new", dest)
didcopy = 1
except Exception as e:
print('movefile: copy', src, '->', dest, 'failed.', e)
return None
else:
#we don't yet handle special, so we need to fall back to /bin/mv
a = getstatusoutput("/bin/mv -f " + "'" + src + "' '" + dest + "'")
if a[0] != 0:
print("movefile: Failed to move special file: '" + src + "' to '" + dest + "'", a)
return None # failure
try:
if didcopy:
os.lchown(dest, sstat[stat.ST_UID], sstat[stat.ST_GID])
os.chmod(dest, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
os.unlink(src)
except Exception as e:
print("movefile: Failed to chown/chmod/unlink", dest, e)
return None
if newmtime:
os.utime(dest, (newmtime, newmtime))
else:
os.utime(dest, (sstat[stat.ST_ATIME], sstat[stat.ST_MTIME]))
newmtime = sstat[stat.ST_MTIME]
return newmtime
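# Editor's sketch, not part of the diff: the core of movefile's strategy is
# "rename when src and dest share a device, otherwise copy to a temporary
# name and rename into place". Condensed, with a hypothetical helper name:
import errno, os, shutil

def atomic_move(src, dest):
    try:
        os.rename(src, dest)        # atomic on the same filesystem
        return
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
    tmp = dest + "#new"             # cross-device: copy, then swap in place
    shutil.copy2(src, tmp)
    os.rename(tmp, dest)
    os.unlink(src)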
def copyfile(src, dest, newmtime=None, sstat=None):
"""
Copies a file from src to dest, preserving all permissions and
attributes; mtime will be preserved even when moving across
filesystems. Returns true on success and false on failure.
"""
#print "copyfile(" + src + "," + dest + "," + str(newmtime) + "," + str(sstat) + ")"
try:
if not sstat:
sstat = os.lstat(src)
except Exception as e:
print("copyfile: Stating source file failed...", e)
return False
destexists = 1
try:
dstat = os.lstat(dest)
except:
dstat = os.lstat(os.path.dirname(dest))
destexists = 0
if destexists:
if stat.S_ISLNK(dstat[stat.ST_MODE]):
try:
os.unlink(dest)
destexists = 0
except Exception as e:
pass
if stat.S_ISLNK(sstat[stat.ST_MODE]):
try:
target = os.readlink(src)
if destexists and not stat.S_ISDIR(dstat[stat.ST_MODE]):
os.unlink(dest)
os.symlink(target, dest)
#os.lchown(dest,sstat[stat.ST_UID],sstat[stat.ST_GID])
return os.lstat(dest)
except Exception as e:
print("copyfile: failed to properly create symlink:", dest, "->", target, e)
return False
if stat.S_ISREG(sstat[stat.ST_MODE]):
os.chmod(src, stat.S_IRUSR) # Make sure we can read it
try: # For safety copy then move it over.
shutil.copyfile(src, dest + "#new")
os.rename(dest + "#new", dest)
except Exception as e:
print('copyfile: copy', src, '->', dest, 'failed.', e)
return False
finally:
os.chmod(src, sstat[stat.ST_MODE])
os.utime(src, (sstat[stat.ST_ATIME], sstat[stat.ST_MTIME]))
else:
#we don't yet handle special, so we need to fall back to /bin/cp
a = getstatusoutput("/bin/cp -f " + "'" + src + "' '" + dest + "'")
if a[0] != 0:
print("copyfile: Failed to copy special file: '" + src + "' to '" + dest + "'", a)
return False # failure
try:
os.lchown(dest, sstat[stat.ST_UID], sstat[stat.ST_GID])
os.chmod(dest, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
except Exception as e:
print("copyfile: Failed to chown/chmod/unlink", dest, e)
return False
if newmtime:
os.utime(dest, (newmtime, newmtime))
else:
os.utime(dest, (sstat[stat.ST_ATIME], sstat[stat.ST_MTIME]))
newmtime = sstat[stat.ST_MTIME]
return newmtime
def which(path, item, direction = 0):
@@ -744,19 +614,3 @@ def which(path, item, direction = 0):
return next
return ""
def init_logger(logger, verbose, debug, debug_domains):
"""
Set verbosity and debug levels in the logger
"""
if verbose:
logger.set_verbose(True)
if debug:
logger.set_debug_level(debug)
else:
logger.set_debug_level(0)
if debug_domains:
logger.set_debug_domains(debug_domains)


@@ -1,570 +0,0 @@
# -*- coding: utf-8 -*-
"""
codegen
~~~~~~~
Extension to ast that allows ast -> python code generation.
:copyright: Copyright 2008 by Armin Ronacher.
:license: BSD.
"""
from ast import *
BOOLOP_SYMBOLS = {
And: 'and',
Or: 'or'
}
BINOP_SYMBOLS = {
Add: '+',
Sub: '-',
Mult: '*',
Div: '/',
FloorDiv: '//',
Mod: '%',
LShift: '<<',
RShift: '>>',
BitOr: '|',
BitAnd: '&',
BitXor: '^'
}
CMPOP_SYMBOLS = {
Eq: '==',
Gt: '>',
GtE: '>=',
In: 'in',
Is: 'is',
IsNot: 'is not',
Lt: '<',
LtE: '<=',
NotEq: '!=',
NotIn: 'not in'
}
UNARYOP_SYMBOLS = {
Invert: '~',
Not: 'not',
UAdd: '+',
USub: '-'
}
ALL_SYMBOLS = {}
ALL_SYMBOLS.update(BOOLOP_SYMBOLS)
ALL_SYMBOLS.update(BINOP_SYMBOLS)
ALL_SYMBOLS.update(CMPOP_SYMBOLS)
ALL_SYMBOLS.update(UNARYOP_SYMBOLS)
def to_source(node, indent_with=' ' * 4, add_line_information=False):
"""This function can convert a node tree back into python sourcecode.
This is useful for debugging purposes, especially if you're dealing with
custom asts not generated by python itself.
It could be that the sourcecode is evaluable when the AST itself is not
compilable / evaluable. The reason for this is that the AST contains some
more data than regular sourcecode does, which is dropped during
conversion.
Each level of indentation is replaced with `indent_with`. By default this
parameter is equal to four spaces, as suggested by PEP 8, but it can be
adjusted to match the application's style guide.
If `add_line_information` is set to `True` comments for the line numbers
of the nodes are added to the output. This can be used to spot wrong line
number information of statement nodes.
"""
generator = SourceGenerator(indent_with, add_line_information)
generator.visit(node)
return ''.join(generator.result)
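# Editor's sketch, not part of the diff: a round trip through to_source. Note
# that this module targets the Python 2.6-era ast; on modern interpreters the
# node layout differs and the sketch would need adjusting.
if __name__ == '__main__':
    import ast
    tree = ast.parse("if x:\n    y = 1\nelse:\n    y = 2\n")
    print(to_source(tree))          # regenerated source, modulo formatting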
class SourceGenerator(NodeVisitor):
"""This visitor is able to transform a well formed syntax tree into python
sourcecode. For more details have a look at the docstring of the
`node_to_source` function.
"""
def __init__(self, indent_with, add_line_information=False):
self.result = []
self.indent_with = indent_with
self.add_line_information = add_line_information
self.indentation = 0
self.new_lines = 0
def write(self, x):
if self.new_lines:
if self.result:
self.result.append('\n' * self.new_lines)
self.result.append(self.indent_with * self.indentation)
self.new_lines = 0
self.result.append(x)
def newline(self, node=None, extra=0):
self.new_lines = max(self.new_lines, 1 + extra)
if node is not None and self.add_line_information:
self.write('# line: %s' % node.lineno)
self.new_lines = 1
def body(self, statements):
self.new_line = True
self.indentation += 1
for stmt in statements:
self.visit(stmt)
self.indentation -= 1
def body_or_else(self, node):
self.body(node.body)
if node.orelse:
self.newline()
self.write('else:')
self.body(node.orelse)
def signature(self, node):
want_comma = []
def write_comma():
if want_comma:
self.write(', ')
else:
want_comma.append(True)
padding = [None] * (len(node.args) - len(node.defaults))
for arg, default in zip(node.args, padding + node.defaults):
write_comma()
self.visit(arg)
if default is not None:
self.write('=')
self.visit(default)
if node.vararg is not None:
write_comma()
self.write('*' + node.vararg)
if node.kwarg is not None:
write_comma()
self.write('**' + node.kwarg)
def decorators(self, node):
for decorator in node.decorator_list:
self.newline(decorator)
self.write('@')
self.visit(decorator)
# Statements
def visit_Assign(self, node):
self.newline(node)
for idx, target in enumerate(node.targets):
if idx:
self.write(', ')
self.visit(target)
self.write(' = ')
self.visit(node.value)
def visit_AugAssign(self, node):
self.newline(node)
self.visit(node.target)
self.write(BINOP_SYMBOLS[type(node.op)] + '=')
self.visit(node.value)
def visit_ImportFrom(self, node):
self.newline(node)
self.write('from %s%s import ' % ('.' * node.level, node.module))
for idx, item in enumerate(node.names):
if idx:
self.write(', ')
self.write(item)
def visit_Import(self, node):
self.newline(node)
for item in node.names:
self.write('import ')
self.visit(item)
def visit_Expr(self, node):
self.newline(node)
self.generic_visit(node)
def visit_FunctionDef(self, node):
self.newline(extra=1)
self.decorators(node)
self.newline(node)
self.write('def %s(' % node.name)
self.signature(node.args)
self.write('):')
self.body(node.body)
def visit_ClassDef(self, node):
have_args = []
def paren_or_comma():
if have_args:
self.write(', ')
else:
have_args.append(True)
self.write('(')
self.newline(extra=2)
self.decorators(node)
self.newline(node)
self.write('class %s' % node.name)
for base in node.bases:
paren_or_comma()
self.visit(base)
# XXX: the if here is used to keep this module compatible
# with python 2.6.
if hasattr(node, 'keywords'):
for keyword in node.keywords:
paren_or_comma()
self.write(keyword.arg + '=')
self.visit(keyword.value)
if node.starargs is not None:
paren_or_comma()
self.write('*')
self.visit(node.starargs)
if node.kwargs is not None:
paren_or_comma()
self.write('**')
self.visit(node.kwargs)
self.write(have_args and '):' or ':')
self.body(node.body)
def visit_If(self, node):
self.newline(node)
self.write('if ')
self.visit(node.test)
self.write(':')
self.body(node.body)
while True:
else_ = node.orelse
if len(else_) == 1 and isinstance(else_[0], If):
node = else_[0]
self.newline()
self.write('elif ')
self.visit(node.test)
self.write(':')
self.body(node.body)
else:
self.newline()
self.write('else:')
self.body(else_)
break
def visit_For(self, node):
self.newline(node)
self.write('for ')
self.visit(node.target)
self.write(' in ')
self.visit(node.iter)
self.write(':')
self.body_or_else(node)
def visit_While(self, node):
self.newline(node)
self.write('while ')
self.visit(node.test)
self.write(':')
self.body_or_else(node)
def visit_With(self, node):
self.newline(node)
self.write('with ')
self.visit(node.context_expr)
if node.optional_vars is not None:
self.write(' as ')
self.visit(node.optional_vars)
self.write(':')
self.body(node.body)
def visit_Pass(self, node):
self.newline(node)
self.write('pass')
def visit_Print(self, node):
# XXX: python 2.6 only
self.newline(node)
self.write('print ')
want_comma = False
if node.dest is not None:
self.write(' >> ')
self.visit(node.dest)
want_comma = True
for value in node.values:
if want_comma:
self.write(', ')
self.visit(value)
want_comma = True
if not node.nl:
self.write(',')
def visit_Delete(self, node):
self.newline(node)
self.write('del ')
for idx, target in enumerate(node.targets):
if idx:
self.write(', ')
self.visit(target)
def visit_TryExcept(self, node):
self.newline(node)
self.write('try:')
self.body(node.body)
for handler in node.handlers:
self.visit(handler)
def visit_TryFinally(self, node):
self.newline(node)
self.write('try:')
self.body(node.body)
self.newline(node)
self.write('finally:')
self.body(node.finalbody)
def visit_Global(self, node):
self.newline(node)
self.write('global ' + ', '.join(node.names))
def visit_Nonlocal(self, node):
self.newline(node)
self.write('nonlocal ' + ', '.join(node.names))
def visit_Return(self, node):
self.newline(node)
self.write('return ')
self.visit(node.value)
def visit_Break(self, node):
self.newline(node)
self.write('break')
def visit_Continue(self, node):
self.newline(node)
self.write('continue')
def visit_Raise(self, node):
# XXX: Python 2.6 / 3.0 compatibility
self.newline(node)
self.write('raise')
if hasattr(node, 'exc') and node.exc is not None:
self.write(' ')
self.visit(node.exc)
if node.cause is not None:
self.write(' from ')
self.visit(node.cause)
elif hasattr(node, 'type') and node.type is not None:
self.visit(node.type)
if node.inst is not None:
self.write(', ')
self.visit(node.inst)
if node.tback is not None:
self.write(', ')
self.visit(node.tback)
# Expressions
def visit_Attribute(self, node):
self.visit(node.value)
self.write('.' + node.attr)
def visit_Call(self, node):
want_comma = []
def write_comma():
if want_comma:
self.write(', ')
else:
want_comma.append(True)
self.visit(node.func)
self.write('(')
for arg in node.args:
write_comma()
self.visit(arg)
for keyword in node.keywords:
write_comma()
self.write(keyword.arg + '=')
self.visit(keyword.value)
if node.starargs is not None:
write_comma()
self.write('*')
self.visit(node.starargs)
if node.kwargs is not None:
write_comma()
self.write('**')
self.visit(node.kwargs)
self.write(')')
def visit_Name(self, node):
self.write(node.id)
def visit_Str(self, node):
self.write(repr(node.s))
def visit_Bytes(self, node):
self.write(repr(node.s))
def visit_Num(self, node):
self.write(repr(node.n))
def visit_Tuple(self, node):
self.write('(')
idx = -1
for idx, item in enumerate(node.elts):
if idx:
self.write(', ')
self.visit(item)
self.write(idx and ')' or ',)')
def sequence_visit(left, right):
def visit(self, node):
self.write(left)
for idx, item in enumerate(node.elts):
if idx:
self.write(', ')
self.visit(item)
self.write(right)
return visit
visit_List = sequence_visit('[', ']')
visit_Set = sequence_visit('{', '}')
del sequence_visit
def visit_Dict(self, node):
self.write('{')
for idx, (key, value) in enumerate(zip(node.keys, node.values)):
if idx:
self.write(', ')
self.visit(key)
self.write(': ')
self.visit(value)
self.write('}')
def visit_BinOp(self, node):
self.visit(node.left)
self.write(' %s ' % BINOP_SYMBOLS[type(node.op)])
self.visit(node.right)
def visit_BoolOp(self, node):
self.write('(')
for idx, value in enumerate(node.values):
if idx:
self.write(' %s ' % BOOLOP_SYMBOLS[type(node.op)])
self.visit(value)
self.write(')')
def visit_Compare(self, node):
self.write('(')
self.visit(node.left)
for op, right in zip(node.ops, node.comparators):
self.write(' %s ' % CMPOP_SYMBOLS[type(op)])
self.visit(right)
self.write(')')
def visit_UnaryOp(self, node):
self.write('(')
op = UNARYOP_SYMBOLS[type(node.op)]
self.write(op)
if op == 'not':
self.write(' ')
self.visit(node.operand)
self.write(')')
def visit_Subscript(self, node):
self.visit(node.value)
self.write('[')
self.visit(node.slice)
self.write(']')
def visit_Slice(self, node):
if node.lower is not None:
self.visit(node.lower)
self.write(':')
if node.upper is not None:
self.visit(node.upper)
if node.step is not None:
self.write(':')
if not (isinstance(node.step, Name) and node.step.id == 'None'):
self.visit(node.step)
def visit_ExtSlice(self, node):
for idx, item in enumerate(node.dims):
if idx:
self.write(', ')
self.visit(item)
def visit_Yield(self, node):
self.write('yield ')
self.visit(node.value)
def visit_Lambda(self, node):
self.write('lambda ')
self.signature(node.args)
self.write(': ')
self.visit(node.body)
def visit_Ellipsis(self, node):
self.write('Ellipsis')
def generator_visit(left, right):
def visit(self, node):
self.write(left)
self.visit(node.elt)
for comprehension in node.generators:
self.visit(comprehension)
self.write(right)
return visit
visit_ListComp = generator_visit('[', ']')
visit_GeneratorExp = generator_visit('(', ')')
visit_SetComp = generator_visit('{', '}')
del generator_visit
def visit_DictComp(self, node):
self.write('{')
self.visit(node.key)
self.write(': ')
self.visit(node.value)
for comprehension in node.generators:
self.visit(comprehension)
self.write('}')
def visit_IfExp(self, node):
self.visit(node.body)
self.write(' if ')
self.visit(node.test)
self.write(' else ')
self.visit(node.orelse)
def visit_Starred(self, node):
self.write('*')
self.visit(node.value)
def visit_Repr(self, node):
# XXX: python 2.6 only
self.write('`')
self.visit(node.value)
self.write('`')
# Helper Nodes
def visit_alias(self, node):
self.write(node.name)
if node.asname is not None:
self.write(' as ' + node.asname)
def visit_comprehension(self, node):
self.write(' for ')
self.visit(node.target)
self.write(' in ')
self.visit(node.iter)
if node.ifs:
for if_ in node.ifs:
self.write(' if ')
self.visit(if_)
def visit_excepthandler(self, node):
self.newline(node)
self.write('except')
if node.type is not None:
self.write(' ')
self.visit(node.type)
if node.name is not None:
self.write(' as ')
self.visit(node.name)
self.write(':')
self.body(node.body)


@@ -1,4 +0,0 @@
# PLY package
# Author: David Beazley (dave@dabeaz.com)
__all__ = ['lex','yacc']

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,710 +0,0 @@
# builtin.py - builtins and utilities definitions for pysh.
#
# Copyright 2007 Patrick Mezard
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.
"""Builtin and internal utilities implementations.
- Beware not to use the python interpreter environment as if it were the shell
environment. For instance, a command's working directory must be explicitly
handled through env['PWD'] instead of relying on the python working directory.
"""
import errno
import optparse
import os
import re
import subprocess
import sys
import time
def has_subprocess_bug():
return getattr(subprocess, 'list2cmdline') and \
( subprocess.list2cmdline(['']) == '' or \
subprocess.list2cmdline(['foo|bar']) == 'foo|bar')
# Detect python bug 1634343: "subprocess swallows empty arguments under win32"
# <http://sourceforge.net/tracker/index.php?func=detail&aid=1634343&group_id=5470&atid=105470>
# Also detect: "[ 1710802 ] subprocess must escape redirection characters under win32"
# <http://sourceforge.net/tracker/index.php?func=detail&aid=1710802&group_id=5470&atid=105470>
if has_subprocess_bug():
import subprocess_fix
subprocess.list2cmdline = subprocess_fix.list2cmdline
from sherrors import *
class NonExitingParser(optparse.OptionParser):
"""OptionParser default behaviour upon error is to print the error message and
exit. Raise a utility error instead.
"""
def error(self, msg):
raise UtilityError(msg)
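# Editor's sketch, not part of the diff: optparse's default error() prints the
# message and calls sys.exit(), which would kill the whole shell. Raising
# UtilityError instead lets the interpreter report the failure like any other
# utility error (hypothetical usage):
def _nonexiting_demo():
    parser = NonExitingParser(usage="demo")
    parser.add_option('-n', action='store_true', dest='dry_run', default=False)
    try:
        return parser.parse_args(['--no-such-flag'])
    except UtilityError as e:
        return None, str(e)         # the shell keeps running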
#-------------------------------------------------------------------------------
# set special builtin
#-------------------------------------------------------------------------------
OPT_SET = NonExitingParser(usage="set - set or unset options and positional parameters")
OPT_SET.add_option( '-f', action='store_true', dest='has_f', default=False,
help='The shell shall disable pathname expansion.')
OPT_SET.add_option('-e', action='store_true', dest='has_e', default=False,
help="""When this option is on, if a simple command fails for any of the \
reasons listed in Consequences of Shell Errors or returns an exit status \
value >0, and is not part of the compound list following a while, until, \
or if keyword, and is not a part of an AND or OR list, and is not a \
pipeline preceded by the ! reserved word, then the shell shall immediately \
exit.""")
OPT_SET.add_option('-x', action='store_true', dest='has_x', default=False,
help="""The shell shall write to standard error a trace for each command \
after it expands the command and before it executes it. It is unspecified \
whether the command that turns tracing off is traced.""")
def builtin_set(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
option, args = OPT_SET.parse_args(args)
env = interp.get_env()
if option.has_f:
env.set_opt('-f')
if option.has_e:
env.set_opt('-e')
if option.has_x:
env.set_opt('-x')
return 0
#-------------------------------------------------------------------------------
# shift special builtin
#-------------------------------------------------------------------------------
def builtin_shift(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
params = interp.get_env().get_positional_args()
if args:
try:
n = int(args[0])
if n > len(params):
raise ValueError()
except ValueError:
return 1
else:
n = 1
params[:n] = []
interp.get_env().set_positional_args(params)
return 0
#-------------------------------------------------------------------------------
# export special builtin
#-------------------------------------------------------------------------------
OPT_EXPORT = NonExitingParser(usage="export - set the export attribute for variables")
OPT_EXPORT.add_option('-p', action='store_true', dest='has_p', default=False)
def builtin_export(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
option, args = OPT_EXPORT.parse_args(args)
if option.has_p:
raise NotImplementedError()
for arg in args:
try:
name, value = arg.split('=', 1)
except ValueError:
name, value = arg, None
env = interp.get_env().export(name, value)
return 0
#-------------------------------------------------------------------------------
# return special builtin
#-------------------------------------------------------------------------------
def builtin_return(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
res = 0
if args:
try:
res = int(args[0])
except ValueError:
res = 0
if not 0<=res<=255:
res = 0
# BUG: should be last executed command exit code
raise ReturnSignal(res)
#-------------------------------------------------------------------------------
# trap special builtin
#-------------------------------------------------------------------------------
def builtin_trap(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
if len(args) < 2:
stderr.write('trap: usage: trap [[arg] signal_spec ...]\n')
return 2
action = args[0]
for sig in args[1:]:
try:
env.traps[sig] = action
except Exception, e:
stderr.write('trap: %s\n' % str(e))
return 0
#-------------------------------------------------------------------------------
# unset special builtin
#-------------------------------------------------------------------------------
OPT_UNSET = NonExitingParser("unset - unset values and attributes of variables and functions")
OPT_UNSET.add_option( '-f', action='store_true', dest='has_f', default=False)
OPT_UNSET.add_option( '-v', action='store_true', dest='has_v', default=False)
def builtin_unset(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
option, args = OPT_UNSET.parse_args(args)
status = 0
env = interp.get_env()
for arg in args:
try:
if option.has_f:
env.remove_function(arg)
else:
del env[arg]
except KeyError:
pass
except VarAssignmentError:
status = 1
return status
#-------------------------------------------------------------------------------
# wait special builtin
#-------------------------------------------------------------------------------
def builtin_wait(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
return interp.wait([int(arg) for arg in args])
#-------------------------------------------------------------------------------
# cat utility
#-------------------------------------------------------------------------------
def utility_cat(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
if not args:
args = ['-']
status = 0
for arg in args:
if arg == '-':
data = stdin.read()
else:
path = os.path.join(env['PWD'], arg)
try:
f = file(path, 'rb')
try:
data = f.read()
finally:
f.close()
except IOError, e:
if e.errno != errno.ENOENT:
raise
status = 1
continue
stdout.write(data)
stdout.flush()
return status
#-------------------------------------------------------------------------------
# cd utility
#-------------------------------------------------------------------------------
OPT_CD = NonExitingParser("cd - change the working directory")
def utility_cd(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
option, args = OPT_CD.parse_args(args)
env = interp.get_env()
directory = None
printdir = False
if not args:
home = env.get('HOME')
if home:
directory = home
else:
# Unspecified, do nothing
return 0
elif len(args)==1:
directory = args[0]
if directory=='-':
if 'OLDPWD' not in env:
raise UtilityError("OLDPWD not set")
printdir = True
directory = env['OLDPWD']
else:
raise UtilityError("too many arguments")
curpath = None
# Absolute directories will be handled correctly by the os.path.join call.
if not directory.startswith('.') and not directory.startswith('..'):
cdpaths = env.get('CDPATH', '.').split(';')
for cdpath in cdpaths:
p = os.path.join(cdpath, directory)
if os.path.isdir(p):
curpath = p
break
if curpath is None:
curpath = directory
curpath = os.path.join(env['PWD'], curpath)
env['OLDPWD'] = env['PWD']
env['PWD'] = curpath
if printdir:
stdout.write('%s\n' % curpath)
return 0
#-------------------------------------------------------------------------------
# colon utility
#-------------------------------------------------------------------------------
def utility_colon(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
return 0
#-------------------------------------------------------------------------------
# echo utility
#-------------------------------------------------------------------------------
def utility_echo(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
# Echo only takes arguments, no options. Use printf if you need fancy stuff.
output = ' '.join(args) + '\n'
stdout.write(output)
stdout.flush()
return 0
#-------------------------------------------------------------------------------
# egrep utility
#-------------------------------------------------------------------------------
# egrep is usually a shell script.
# Unfortunately, pysh does not support shell scripts *with arguments* right now,
# so the redirection is implemented here, assuming grep is available.
def utility_egrep(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
return run_command('grep', ['-E'] + args, interp, env, stdin, stdout,
stderr, debugflags)
#-------------------------------------------------------------------------------
# env utility
#-------------------------------------------------------------------------------
def utility_env(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
if args and args[0]=='-i':
raise NotImplementedError('env: -i option is not implemented')
i = 0
for arg in args:
if '=' not in arg:
break
# Update the current environment
name, value = arg.split('=', 1)
env[name] = value
i += 1
if args[i:]:
# Find then execute the specified interpreter
utility = env.find_in_path(args[i])
if not utility:
return 127
args[i:i+1] = utility
name = args[i]
args = args[i+1:]
try:
return run_command(name, args, interp, env, stdin, stdout, stderr,
debugflags)
except UtilityError:
stderr.write('env: failed to execute %s' % ' '.join([name]+args))
return 126
else:
for pair in env.get_variables().iteritems():
stdout.write('%s=%s\n' % pair)
return 0
#-------------------------------------------------------------------------------
# exit utility
#-------------------------------------------------------------------------------
def utility_exit(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
res = None
if args:
try:
res = int(args[0])
except ValueError:
res = None
if not 0<=res<=255:
res = None
if res is None:
# BUG: should be last executed command exit code
res = 0
raise ExitSignal(res)
#-------------------------------------------------------------------------------
# fgrep utility
#-------------------------------------------------------------------------------
# see egrep
def utility_fgrep(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
return run_command('grep', ['-F'] + args, interp, env, stdin, stdout,
stderr, debugflags)
#-------------------------------------------------------------------------------
# gunzip utility
#-------------------------------------------------------------------------------
# see egrep
def utility_gunzip(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
return run_command('gzip', ['-d'] + args, interp, env, stdin, stdout,
stderr, debugflags)
#-------------------------------------------------------------------------------
# kill utility
#-------------------------------------------------------------------------------
def utility_kill(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
for arg in args:
pid = int(arg)
status = subprocess.call(['pskill', '/T', str(pid)],
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
# pskill is asynchronous, hence the stupid polling loop
while 1:
p = subprocess.Popen(['pslist', str(pid)],
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
output = p.communicate()[0]
if ('process %d was not' % pid) in output:
break
time.sleep(1)
return status
#-------------------------------------------------------------------------------
# mkdir utility
#-------------------------------------------------------------------------------
OPT_MKDIR = NonExitingParser("mkdir - make directories.")
OPT_MKDIR.add_option('-p', action='store_true', dest='has_p', default=False)
def utility_mkdir(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
# TODO: implement umask
# TODO: implement proper utility error report
option, args = OPT_MKDIR.parse_args(args)
for arg in args:
path = os.path.join(env['PWD'], arg)
if option.has_p:
try:
os.makedirs(path)
except IOError, e:
if e.errno != errno.EEXIST:
raise
else:
os.mkdir(path)
return 0
#-------------------------------------------------------------------------------
# netstat utility
#-------------------------------------------------------------------------------
def utility_netstat(name, args, interp, env, stdin, stdout, stderr, debugflags):
# Do you really expect me to implement netstat?
# This empty form is enough for Mercurial tests since it's
# supposed to generate nothing upon success. Faking this test
# is not a big deal either.
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
return 0
#-------------------------------------------------------------------------------
# pwd utility
#-------------------------------------------------------------------------------
OPT_PWD = NonExitingParser("pwd - return working directory name")
OPT_PWD.add_option('-L', action='store_true', dest='has_L', default=True,
help="""If the PWD environment variable contains an absolute pathname of \
the current directory that does not contain the filenames dot or dot-dot, \
pwd shall write this pathname to standard output. Otherwise, the -L option \
shall behave as the -P option.""")
OPT_PWD.add_option('-P', action='store_true', dest='has_L', default=False,
help="""The absolute pathname written shall not contain filenames that, in \
the context of the pathname, refer to files of type symbolic link.""")
def utility_pwd(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
option, args = OPT_PWD.parse_args(args)
stdout.write('%s\n' % env['PWD'])
return 0
#-------------------------------------------------------------------------------
# printf utility
#-------------------------------------------------------------------------------
RE_UNESCAPE = re.compile(r'(\\x[a-zA-Z0-9]{2}|\\[0-7]{1,3}|\\.)')
def utility_printf(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
def replace(m):
assert m.group()
g = m.group()[1:]
if g.startswith('x'):
return chr(int(g[1:], 16))
if len(g) <= 3 and len([c for c in g if c in '01234567']) == len(g):
# Yay, an octal number
return chr(int(g, 8))
return {
'a': '\a',
'b': '\b',
'f': '\f',
'n': '\n',
'r': '\r',
't': '\t',
'v': '\v',
'\\': '\\',
}.get(g)
# Convert escape sequences
format = re.sub(RE_UNESCAPE, replace, args[0])
stdout.write(format % tuple(args[1:]))
return 0
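# Editor's sketch, not part of the diff: what the escape conversion above does,
# as a standalone snippet (same regex, condensed replace()):
def _printf_demo():
    import re
    rx = re.compile(r'(\\x[a-zA-Z0-9]{2}|\\[0-7]{1,3}|\\.)')
    def unescape(m):
        g = m.group()[1:]
        if g.startswith('x'):
            return chr(int(g[1:], 16))                  # \x41 -> 'A'
        if len(g) <= 3 and all(c in '01234567' for c in g):
            return chr(int(g, 8))                       # \101 -> 'A' (octal)
        return {'n': '\n', 't': '\t', '\\': '\\'}.get(g, g)
    # The result contains a real 'A', tab and newline:
    return rx.sub(unescape, r'A=\x41 oct=\101\tend\n')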
#-------------------------------------------------------------------------------
# true utility
#-------------------------------------------------------------------------------
def utility_true(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
return 0
#-------------------------------------------------------------------------------
# sed utility
#-------------------------------------------------------------------------------
RE_SED = re.compile(r'^s(.).*\1[a-zA-Z]*$')
# cygwin sed fails with some expressions when they do not end with a single space.
# See unit tests for details. Interestingly, the same expressions work perfectly
# in a cygwin shell.
def utility_sed(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
# Scan pattern arguments and append a space if necessary
for i in xrange(len(args)):
if not RE_SED.search(args[i]):
continue
args[i] = args[i] + ' '
return run_command(name, args, interp, env, stdin, stdout,
stderr, debugflags)
#-------------------------------------------------------------------------------
# sleep utility
#-------------------------------------------------------------------------------
def utility_sleep(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
time.sleep(int(args[0]))
return 0
#-------------------------------------------------------------------------------
# sort utility
#-------------------------------------------------------------------------------
OPT_SORT = NonExitingParser("sort - sort, merge, or sequence check text files")
def utility_sort(name, args, interp, env, stdin, stdout, stderr, debugflags):
def sort(path):
if path == '-':
lines = stdin.readlines()
else:
try:
f = file(path)
try:
lines = f.readlines()
finally:
f.close()
except IOError, e:
stderr.write(str(e) + '\n')
return 1
if lines and lines[-1][-1]!='\n':
lines[-1] = lines[-1] + '\n'
return lines
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
option, args = OPT_SORT.parse_args(args)
alllines = []
if len(args)<=0:
args += ['-']
# Load all files lines
curdir = os.getcwd()
try:
os.chdir(env['PWD'])
for path in args:
alllines += sort(path)
finally:
os.chdir(curdir)
alllines.sort()
for line in alllines:
stdout.write(line)
return 0
#-------------------------------------------------------------------------------
# hg utility
#-------------------------------------------------------------------------------
hgcommands = [
'add',
'addremove',
'commit', 'ci',
'debugrename',
'debugwalk',
'falabala', # Dummy command used in a mercurial test
'incoming',
'locate',
'pull',
'push',
'qinit',
'remove', 'rm',
'rename', 'mv',
'revert',
'showconfig',
'status', 'st',
'strip',
]
def rewriteslashes(name, args):
# Several hg commands output file paths, rewrite the separators
if len(args) > 1 and name.lower().endswith('python') \
and args[0].endswith('hg'):
for cmd in hgcommands:
if cmd in args[1:]:
return True
# svn output contains many paths with OS specific separators.
# Normalize these to unix paths.
base = os.path.basename(name)
if base.startswith('svn'):
return True
return False
def rewritehg(output):
if not output:
return output
# Rewrite os specific messages
output = output.replace(': The system cannot find the file specified',
': No such file or directory')
output = re.sub(': Access is denied.*$', ': Permission denied', output)
output = output.replace(': No connection could be made because the target machine actively refused it',
': Connection refused')
return output
def run_command(name, args, interp, env, stdin, stdout,
stderr, debugflags):
# Execute the command
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
hgbin = interp.options().hgbinary
ishg = hgbin and ('hg' in name or args and 'hg' in args[0])
unixoutput = 'cygwin' in name or ishg
exec_env = env.get_variables()
try:
# BUG: comparing file descriptors is clearly not a reliable way to tell
# whether they point at the same underlying object. But within pysh's
# limited scope this is usually right; we do not expect complicated
# redirections besides the usual 2>&1.
# There is still one case we cannot deal with: when stdout and stderr are
# redirected *by the pysh caller*. This is the reason for the --redirected
# pysh() option.
# Now, we want to know whether they are the same because we sometimes need
# to transform the command output, mostly removing CR-LF to ensure that it
# is unix-like. Cygwin utilities are a special case because they explicitly
# set their output streams to binary mode, so we have nothing to do. For
# all other commands, we have to guess whether they are sending text data,
# in which case the transformation must be done.
# Again, the NUL character test is unreliable but should be enough for hg
# tests.
redirected = stdout.fileno()==stderr.fileno()
if not redirected:
p = subprocess.Popen([name] + args, cwd=env['PWD'], env=exec_env,
stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
else:
p = subprocess.Popen([name] + args, cwd=env['PWD'], env=exec_env,
stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = p.communicate()
except WindowsError, e:
raise UtilityError(str(e))
if not unixoutput:
def encode(s):
if '\0' in s:
return s
return s.replace('\r\n', '\n')
else:
encode = lambda s: s
if rewriteslashes(name, args):
encode1_ = encode
def encode(s):
s = encode1_(s)
s = s.replace('\\\\', '\\')
s = s.replace('\\', '/')
return s
if ishg:
encode2_ = encode
def encode(s):
return rewritehg(encode2_(s))
stdout.write(encode(out))
if not redirected:
stderr.write(encode(err))
return p.returncode
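# Editor's sketch, not part of the diff: the encode-chaining above (each
# wrapper closing over the previous encode) composes per-command output
# filters. The same pattern, stripped down:
def _make_output_filter(unixoutput, rewriteslashes):
    encode = lambda s: s
    if not unixoutput:
        crlf = encode
        encode = lambda s, p=crlf: s if '\0' in s else p(s).replace('\r\n', '\n')
    if rewriteslashes:
        prev = encode
        encode = lambda s, p=prev: p(s).replace('\\\\', '\\').replace('\\', '/')
    return encode
# _make_output_filter(False, True)('dir\\sub\r\n') returns 'dir/sub\n'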

File diff suppressed because it is too large


@@ -1,116 +0,0 @@
#! /usr/bin/env python
import sys
from _lsprof import Profiler, profiler_entry
__all__ = ['profile', 'Stats']
def profile(f, *args, **kwds):
"""XXX docstring"""
p = Profiler()
p.enable(subcalls=True, builtins=True)
try:
f(*args, **kwds)
finally:
p.disable()
return Stats(p.getstats())
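# Editor's sketch, not part of the diff: typical use of the wrapper above
# (requires the C _lsprof module to be importable):
def _profile_demo():
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)
    stats = profile(fib, 20)
    stats.sort('totaltime')
    stats.pprint(top=5)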
class Stats(object):
"""XXX docstring"""
def __init__(self, data):
self.data = data
def sort(self, crit="inlinetime"):
"""XXX docstring"""
if crit not in profiler_entry.__dict__:
raise ValueError("Can't sort by %s" % crit)
self.data.sort(lambda b, a: cmp(getattr(a, crit),
getattr(b, crit)))
for e in self.data:
if e.calls:
e.calls.sort(lambda b, a: cmp(getattr(a, crit),
getattr(b, crit)))
def pprint(self, top=None, file=None, limit=None, climit=None):
"""XXX docstring"""
if file is None:
file = sys.stdout
d = self.data
if top is not None:
d = d[:top]
cols = "% 12s %12s %11.4f %11.4f %s\n"
hcols = "% 12s %12s %12s %12s %s\n"
cols2 = "+%12s %12s %11.4f %11.4f + %s\n"
file.write(hcols % ("CallCount", "Recursive", "Total(ms)",
"Inline(ms)", "module:lineno(function)"))
count = 0
for e in d:
file.write(cols % (e.callcount, e.reccallcount, e.totaltime,
e.inlinetime, label(e.code)))
count += 1
if limit is not None and count == limit:
return
ccount = 0
if e.calls:
for se in e.calls:
file.write(cols % ("+%s" % se.callcount, se.reccallcount,
se.totaltime, se.inlinetime,
"+%s" % label(se.code)))
count += 1
ccount += 1
if limit is not None and count == limit:
return
if climit is not None and ccount == climit:
break
def freeze(self):
"""Replace all references to code objects with string
descriptions; this makes it possible to pickle the instance."""
# this code is probably rather ickier than it needs to be!
for i in range(len(self.data)):
e = self.data[i]
if not isinstance(e.code, str):
self.data[i] = type(e)((label(e.code),) + e[1:])
if e.calls:
for j in range(len(e.calls)):
se = e.calls[j]
if not isinstance(se.code, str):
e.calls[j] = type(se)((label(se.code),) + se[1:])
_fn2mod = {}
def label(code):
if isinstance(code, str):
return code
try:
mname = _fn2mod[code.co_filename]
except KeyError:
for k, v in sys.modules.items():
if v is None:
continue
if not hasattr(v, '__file__'):
continue
if not isinstance(v.__file__, str):
continue
if v.__file__.startswith(code.co_filename):
mname = _fn2mod[code.co_filename] = k
break
else:
mname = _fn2mod[code.co_filename] = '<%s>'%code.co_filename
return '%s:%d(%s)' % (mname, code.co_firstlineno, code.co_name)
if __name__ == '__main__':
import os
sys.argv = sys.argv[1:]
if not sys.argv:
print >> sys.stderr, "usage: lsprof.py <script> <arguments...>"
sys.exit(2)
sys.path.insert(0, os.path.abspath(os.path.dirname(sys.argv[0])))
stats = profile(execfile, sys.argv[0], globals(), locals())
stats.sort()
stats.pprint()


@@ -1,167 +0,0 @@
# pysh.py - command processing for pysh.
#
# Copyright 2007 Patrick Mezard
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.
import optparse
import os
import sys
import interp
SH_OPT = optparse.OptionParser(prog='pysh', usage="%prog [OPTIONS]", version='0.1')
SH_OPT.add_option('-c', action='store_true', dest='command_string', default=None,
help='A string that shall be interpreted by the shell as one or more commands')
SH_OPT.add_option('--redirect-to', dest='redirect_to', default=None,
help='Redirect script commands stdout and stderr to the specified file')
# See utility_command in builtin.py about the reason for this flag.
SH_OPT.add_option('--redirected', dest='redirected', action='store_true', default=False,
help='Tell the interpreter that stdout and stderr are actually the same objects, which is really stdout')
SH_OPT.add_option('--debug-parsing', action='store_true', dest='debug_parsing', default=False,
help='Trace PLY execution')
SH_OPT.add_option('--debug-tree', action='store_true', dest='debug_tree', default=False,
help='Display the generated syntax tree.')
SH_OPT.add_option('--debug-cmd', action='store_true', dest='debug_cmd', default=False,
help='Trace command execution before parameters expansion and exit status.')
SH_OPT.add_option('--debug-utility', action='store_true', dest='debug_utility', default=False,
help='Trace utility calls, after parameters expansions')
SH_OPT.add_option('--ast', action='store_true', dest='ast', default=False,
help='Encoded commands to execute in a subprocess')
SH_OPT.add_option('--profile', action='store_true', default=False,
help='Profile pysh run')
def split_args(args):
# Separate shell arguments from command ones.
# Just stop at the first argument not starting with a dash. This is
# admittedly broken: it ignores files starting with a dash and may swallow
# option values meant for the command file, but that is not supposed to
# happen for now.
command_index = len(args)
for i,arg in enumerate(args):
if not arg.startswith('-'):
command_index = i
break
return args[:command_index], args[command_index:]
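# Editor's sketch, not part of the diff: the heuristic above in action:
def _split_args_demo():
    assert split_args(['-c', 'echo hello']) == (['-c'], ['echo hello'])
    assert split_args(['--ast', 'script', '-x']) == (['--ast'], ['script', '-x'])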
def fixenv(env):
path = env.get('PATH')
if path is not None:
parts = path.split(os.pathsep)
# Remove Windows utilities from PATH, they are useless at best and
# some of them (find) may be confused with other utilities.
parts = [p for p in parts if 'system32' not in p.lower()]
env['PATH'] = os.pathsep.join(parts)
if env.get('HOME') is None:
# Several utilities, including cvsps, cannot work without
# a defined HOME directory.
env['HOME'] = os.path.expanduser('~')
return env
def _sh(cwd, shargs, cmdargs, options, debugflags=None, env=None):
if os.environ.get('PYSH_TEXT') != '1':
import msvcrt
for fp in (sys.stdin, sys.stdout, sys.stderr):
msvcrt.setmode(fp.fileno(), os.O_BINARY)
hgbin = os.environ.get('PYSH_HGTEXT') != '1'
if debugflags is None:
debugflags = []
if options.debug_parsing: debugflags.append('debug-parsing')
if options.debug_utility: debugflags.append('debug-utility')
if options.debug_cmd: debugflags.append('debug-cmd')
if options.debug_tree: debugflags.append('debug-tree')
if env is None:
env = fixenv(dict(os.environ))
if cwd is None:
cwd = os.getcwd()
if not cmdargs:
# Nothing to do
return 0
ast = None
command_file = None
if options.command_string:
input = cmdargs[0]
if not options.ast:
input += '\n'
else:
args, input = interp.decodeargs(input), None
env, ast = args
cwd = env.get('PWD', cwd)
else:
command_file = cmdargs[0]
arguments = cmdargs[1:]
prefix = interp.resolve_shebang(command_file, ignoreshell=True)
if prefix:
input = ' '.join(prefix + [command_file] + arguments)
else:
# Read commands from file
f = file(command_file)
try:
# Trailing newline to help the parser
input = f.read() + '\n'
finally:
f.close()
redirect = None
try:
if options.redirected:
stdout = sys.stdout
stderr = stdout
elif options.redirect_to:
redirect = open(options.redirect_to, 'wb')
stdout = redirect
stderr = redirect
else:
stdout = sys.stdout
stderr = sys.stderr
# TODO: set arguments to environment variables
opts = interp.Options()
opts.hgbinary = hgbin
ip = interp.Interpreter(cwd, debugflags, stdout=stdout, stderr=stderr,
opts=opts)
try:
# Export given environment in shell object
for k,v in env.iteritems():
ip.get_env().export(k,v)
return ip.execute_script(input, ast, scriptpath=command_file)
finally:
ip.close()
finally:
if redirect is not None:
redirect.close()
def sh(cwd=None, args=None, debugflags=None, env=None):
if args is None:
args = sys.argv[1:]
shargs, cmdargs = split_args(args)
options, shargs = SH_OPT.parse_args(shargs)
if options.profile:
import lsprof
p = lsprof.Profiler()
p.enable(subcalls=True)
try:
return _sh(cwd, shargs, cmdargs, options, debugflags, env)
finally:
p.disable()
stats = lsprof.Stats(p.getstats())
stats.sort()
stats.pprint(top=10, file=sys.stderr, climit=5)
else:
return _sh(cwd, shargs, cmdargs, options, debugflags, env)
def main():
sys.exit(sh())
if __name__=='__main__':
main()


@@ -1,888 +0,0 @@
# pyshlex.py - PLY compatible lexer for pysh.
#
# Copyright 2007 Patrick Mezard
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.
# TODO:
# - review all "char in 'abc'" snippets: the empty string can be matched
# - test line continuations within quoted/expansion strings
# - eof is buggy wrt sublexers
# - the lexer cannot really work in pull mode as it would be required to run
# PLY in pull mode. It was designed to work incrementally and it would not be
# that hard to enable pull mode.
import re
try:
s = set()
del s
except NameError:
from sets import Set as set
from ply import lex
from sherrors import *
class NeedMore(Exception):
pass
def is_blank(c):
return c in (' ', '\t')
_RE_DIGITS = re.compile(r'^\d+$')
def are_digits(s):
return _RE_DIGITS.search(s) is not None
_OPERATORS = dict([
('&&', 'AND_IF'),
('||', 'OR_IF'),
(';;', 'DSEMI'),
('<<', 'DLESS'),
('>>', 'DGREAT'),
('<&', 'LESSAND'),
('>&', 'GREATAND'),
('<>', 'LESSGREAT'),
('<<-', 'DLESSDASH'),
('>|', 'CLOBBER'),
('&', 'AMP'),
(';', 'COMMA'),
('<', 'LESS'),
('>', 'GREATER'),
('(', 'LPARENS'),
(')', 'RPARENS'),
])
#Make a function to silence pychecker "Local variable shadows global"
def make_partial_ops():
partials = {}
for k in _OPERATORS:
for i in range(1, len(k)+1):
partials[k[:i]] = None
return partials
_PARTIAL_OPERATORS = make_partial_ops()
def is_partial_op(s):
"""Return True if s matches a non-empty subpart of an operator starting
at its first character.
"""
return s in _PARTIAL_OPERATORS
def is_op(s):
"""If s matches an operator, returns the operator identifier. Return None
otherwise.
"""
return _OPERATORS.get(s)
_RESERVEDS = dict([
('if', 'If'),
('then', 'Then'),
('else', 'Else'),
('elif', 'Elif'),
('fi', 'Fi'),
('do', 'Do'),
('done', 'Done'),
('case', 'Case'),
('esac', 'Esac'),
('while', 'While'),
('until', 'Until'),
('for', 'For'),
('{', 'Lbrace'),
('}', 'Rbrace'),
('!', 'Bang'),
('in', 'In'),
('|', 'PIPE'),
])
def get_reserved(s):
return _RESERVEDS.get(s)
_RE_NAME = re.compile(r'^[0-9a-zA-Z_]+$')
def is_name(s):
return _RE_NAME.search(s) is not None
def find_chars(seq, chars):
for i,v in enumerate(seq):
if v in chars:
return i,v
return -1, None
class WordLexer:
"""WordLexer parse quoted or expansion expressions and return an expression
tree. The input string can be any well formed sequence beginning with quoting
or expansion character. Embedded expressions are handled recursively. The
resulting tree is made of lists and strings. Lists represent quoted or
expansion expressions. Each list first element is the opening separator,
the last one the closing separator. In-between can be any number of strings
or lists for sub-expressions. Non quoted/expansion expression can written as
strings or as lists with empty strings as starting and ending delimiters.
"""
NAME_CHARSET = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_'
NAME_CHARSET = dict(zip(NAME_CHARSET, NAME_CHARSET))
SPECIAL_CHARSET = '@*#?-$!0'
#Characters which can be escaped depends on the current delimiters
ESCAPABLE = {
'`': set(['$', '\\', '`']),
'"': set(['$', '\\', '`', '"']),
"'": set(),
}
def __init__(self, heredoc = False):
# _buffer is the unprocessed input characters buffer
self._buffer = []
# _stack is empty or contains a quoted list being processed
# (this is the DFS path to the quoted expression being evaluated).
self._stack = []
self._escapable = None
# True when parsing unquoted here documents
self._heredoc = heredoc
def add(self, data, eof=False):
"""Feed the lexer with more data. If the quoted expression can be
delimited, return a tuple (expr, remaining) containing the expression
tree and the unconsumed data.
Otherwise, raise NeedMore.
"""
self._buffer += list(data)
self._parse(eof)
result = self._stack[0]
remaining = ''.join(self._buffer)
self._stack = []
self._buffer = []
return result, remaining
def _is_escapable(self, c, delim=None):
if delim is None:
if self._heredoc:
# Backslashes work as if they were double quoted in unquoted
# here-documents
delim = '"'
else:
if len(self._stack)<=1:
return True
delim = self._stack[-2][0]
escapables = self.ESCAPABLE.get(delim, None)
return escapables is None or c in escapables
def _parse_squote(self, buf, result, eof):
if not buf:
raise NeedMore()
try:
pos = buf.index("'")
except ValueError:
raise NeedMore()
result[-1] += ''.join(buf[:pos])
result += ["'"]
return pos+1, True
def _parse_bquote(self, buf, result, eof):
if not buf:
raise NeedMore()
if buf[0]=='\n':
#Remove line continuations
result[:] = ['', '', '']
elif self._is_escapable(buf[0]):
result[-1] += buf[0]
result += ['']
else:
#Keep as such
result[:] = ['', '\\'+buf[0], '']
return 1, True
def _parse_dquote(self, buf, result, eof):
if not buf:
raise NeedMore()
pos, sep = find_chars(buf, '$\\`"')
if pos==-1:
raise NeedMore()
result[-1] += ''.join(buf[:pos])
if sep=='"':
result += ['"']
return pos+1, True
else:
#Keep everything until the separator and defer processing
return pos, False
def _parse_command(self, buf, result, eof):
if not buf:
raise NeedMore()
chars = '$\\`"\''
if result[0] == '$(':
chars += ')'
pos, sep = find_chars(buf, chars)
if pos == -1:
raise NeedMore()
result[-1] += ''.join(buf[:pos])
if (result[0]=='$(' and sep==')') or (result[0]=='`' and sep=='`'):
result += [sep]
return pos+1, True
else:
return pos, False
def _parse_parameter(self, buf, result, eof):
if not buf:
raise NeedMore()
pos, sep = find_chars(buf, '$\\`"\'}')
if pos==-1:
raise NeedMore()
result[-1] += ''.join(buf[:pos])
if sep=='}':
result += [sep]
return pos+1, True
else:
return pos, False
def _parse_dollar(self, buf, result, eof):
sep = result[0]
if sep=='$':
if not buf:
#TODO: handle empty $
raise NeedMore()
if buf[0]=='(':
if len(buf)==1:
raise NeedMore()
if buf[1]=='(':
result[0] = '$(('
buf[:2] = []
else:
result[0] = '$('
buf[:1] = []
elif buf[0]=='{':
result[0] = '${'
buf[:1] = []
else:
if buf[0] in self.SPECIAL_CHARSET:
result[-1] = buf[0]
read = 1
else:
for read,c in enumerate(buf):
if c not in self.NAME_CHARSET:
break
else:
if not eof:
raise NeedMore()
read += 1
result[-1] += ''.join(buf[0:read])
if not result[-1]:
result[:] = ['', result[0], '']
else:
result += ['']
return read,True
sep = result[0]
if sep=='$(':
parsefunc = self._parse_command
elif sep=='${':
parsefunc = self._parse_parameter
else:
raise NotImplementedError()
pos, closed = parsefunc(buf, result, eof)
return pos, closed
def _parse(self, eof):
buf = self._buffer
stack = self._stack
recurse = False
while 1:
if not stack or recurse:
if not buf:
raise NeedMore()
if buf[0] not in ('"\\`$\''):
raise ShellSyntaxError('Invalid quoted string sequence')
stack.append([buf[0], ''])
buf[:1] = []
recurse = False
result = stack[-1]
if result[0]=="'":
parsefunc = self._parse_squote
elif result[0]=='\\':
parsefunc = self._parse_bquote
elif result[0]=='"':
parsefunc = self._parse_dquote
elif result[0]=='`':
parsefunc = self._parse_command
elif result[0][0]=='$':
parsefunc = self._parse_dollar
else:
raise NotImplementedError()
read, closed = parsefunc(buf, result, eof)
buf[:read] = []
if closed:
if len(stack)>1:
#Merge in parent expression
parsed = stack.pop()
stack[-1] += [parsed]
stack[-1] += ['']
else:
break
else:
recurse = True
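# Illustrative usage sketch, not part of the original module: WordLexer.add()
# delimits one quoted expression and hands back the unconsumed tail, e.g.
#
#   tree, rest = WordLexer().add('"a $b" tail', eof=True)
#   # tree == ['"', 'a ', ['$', 'b', ''], '', '"'] and rest == ' tail'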
def normalize_wordtree(wtree):
"""Fold back every literal sequence (delimited with empty strings) into
the parent sequence.
"""
def normalize(wtree):
result = []
for part in wtree[1:-1]:
if isinstance(part, list):
part = normalize(part)
if part[0]=='':
#Move the part content back at current level
result += part[1:-1]
continue
elif not part:
#Remove empty strings
continue
result.append(part)
if not result:
result = ['']
return [wtree[0]] + result + [wtree[-1]]
return normalize(wtree)
def make_wordtree(token, here_document=False):
"""Parse a delimited token and return a tree similar to the ones returned by
WordLexer. token may contain any combination of expansion/quoted fields and
plain, unquoted ones.
"""
tree = ['']
remaining = token
delimiters = '\\$`'
if not here_document:
delimiters += '\'"'
while 1:
pos, sep = find_chars(remaining, delimiters)
if pos==-1:
tree += [remaining, '']
return normalize_wordtree(tree)
tree.append(remaining[:pos])
remaining = remaining[pos:]
try:
result, remaining = WordLexer(heredoc = here_document).add(remaining, True)
except NeedMore:
raise ShellSyntaxError('Invalid token "%s"' % token)
tree.append(result)
def wordtree_as_string(wtree):
"""Rewrite an expression tree generated by make_wordtree as string."""
def visit(node, output):
for child in node:
if isinstance(child, list):
visit(child, output)
else:
output.append(child)
output = []
visit(wtree, output)
return ''.join(output)
def unquote_wordtree(wtree):
"""Fold the word tree while removing quotes everywhere. Other expansion
sequences are joined as such.
"""
def unquote(wtree):
unquoted = []
if wtree[0] in ('', "'", '"', '\\'):
wtree = wtree[1:-1]
for part in wtree:
if isinstance(part, list):
part = unquote(part)
unquoted.append(part)
return ''.join(unquoted)
return unquote(wtree)
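# Illustrative sketch, not part of the original module: how the word-tree
# helpers above fit together on a double-quoted token:
#
#   tree = make_wordtree('"hello $USER"')
#   assert wordtree_as_string(tree) == '"hello $USER"'  # lossless round-trip
#   assert unquote_wordtree(tree) == 'hello $USER'      # quotes folded away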
class HereDocLexer:
"""HereDocLexer delimits whatever comes from the here-document starting newline
not included to the closing delimiter line included.
"""
def __init__(self, op, delim):
assert op in ('<<', '<<-')
if not delim:
raise ShellSyntaxError('invalid here document delimiter %s' % str(delim))
self._op = op
self._delim = delim
self._buffer = []
self._token = []
def add(self, data, eof):
"""If the here-document was delimited, return a tuple (content, remaining).
Raise NeedMore() otherwise.
"""
self._buffer += list(data)
self._parse(eof)
token = ''.join(self._token)
remaining = ''.join(self._buffer)
self._token, self._buffer = [], []
return token, remaining
def _parse(self, eof):
while 1:
#Look for first unescaped newline. Quotes may be ignored
escaped = False
for i,c in enumerate(self._buffer):
if escaped:
escaped = False
elif c=='\\':
escaped = True
elif c=='\n':
break
else:
i = -1
if i==-1 or self._buffer[i]!='\n':
if not eof:
raise NeedMore()
#No more data; maybe the last line is the closing delimiter
line = ''.join(self._buffer)
eol = ''
self._buffer[:] = []
else:
line = ''.join(self._buffer[:i])
eol = self._buffer[i]
self._buffer[:i+1] = []
if self._op=='<<-':
line = line.lstrip('\t')
if line==self._delim:
break
self._token += [line, eol]
if i==-1:
break
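# Illustrative sketch, not part of the original module: delimiting a small
# here-document body up to (and including) the 'EOF' delimiter line:
#
#   content, rest = HereDocLexer('<<', 'EOF').add('one\ntwo\nEOF\n', eof=False)
#   # content == 'one\ntwo\n' and rest == ''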
class Token:
#TODO: check this is still in use
OPERATOR = 'OPERATOR'
WORD = 'WORD'
def __init__(self):
self.value = ''
self.type = None
def __getitem__(self, key):
#Behave like a two elements tuple
if key==0:
return self.type
if key==1:
return self.value
raise IndexError(key)
class HereDoc:
def __init__(self, op, name=None):
self.op = op
self.name = name
self.pendings = []
TK_COMMA = 'COMMA'
TK_AMPERSAND = 'AMP'
TK_OP = 'OP'
TK_TOKEN = 'TOKEN'
TK_COMMENT = 'COMMENT'
TK_NEWLINE = 'NEWLINE'
TK_IONUMBER = 'IO_NUMBER'
TK_ASSIGNMENT = 'ASSIGNMENT_WORD'
TK_HERENAME = 'HERENAME'
class Lexer:
"""Main lexer.
Call add() until the script AST is returned.
"""
# Here-document handling makes the whole thing more complex because here-documents
# basically force tokens to be reordered: the here-document content must come right
# after the operator and the here-document name, while other tokens may follow the
# here-document expression on the same line.
#
# So, here-doc states are basically:
# *self._state==ST_NORMAL
# - self._heredoc.op is None: no here-document
# - self._heredoc.op is not None but name is: here-document operator matched,
# waiting for the document name/delimiter
# - self._heredoc.op and name are not None: here-document is ready, following
# tokens are being stored and will be pushed again when the document is
# completely parsed.
# *self._state==ST_HEREDOC
# - The here-document is being delimited by self._herelexer. Once it is done
# the content is pushed in front of the pending token list then all these
# tokens are pushed once again.
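#
# For example (illustrative walkthrough, not in the original source), with
# "cat <<EOF && echo done", 'cat' and '<<' are emitted immediately, 'EOF' is
# recorded as the here-name and buffered, along with '&&', 'echo', 'done' and
# the trailing newline, in self._heredoc.pendings. Once the body is delimited,
# the content is inserted at the front of pendings and everything is pushed
# again through _push_token.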
ST_NORMAL = 'ST_NORMAL'
ST_OP = 'ST_OP'
ST_BACKSLASH = 'ST_BACKSLASH'
ST_QUOTED = 'ST_QUOTED'
ST_COMMENT = 'ST_COMMENT'
ST_HEREDOC = 'ST_HEREDOC'
#Match end of backquote strings
RE_BACKQUOTE_END = re.compile(r'(?<!\\)(`)')
def __init__(self, parent_state = None):
self._input = []
self._pos = 0
self._token = ''
self._type = TK_TOKEN
self._state = self.ST_NORMAL
self._parent_state = parent_state
self._wordlexer = None
self._heredoc = HereDoc(None)
self._herelexer = None
### The following attributes are not used for delimiting tokens and can safely
### be changed after here-document detection (see _push_token)
# Count the number of tokens following a 'For' reserved word. Needed to
# return an 'In' reserved word if it comes in third place.
self._for_count = None
def add(self, data, eof=False):
"""Feed the lexer with data.
When eof is set to True, return the unconsumed data or raise an error if the
lexer is in the middle of a delimiting operation.
Raise NeedMore otherwise.
"""
self._input += list(data)
self._parse(eof)
self._input[:self._pos] = []
return ''.join(self._input)
def _parse(self, eof):
while self._state:
if self._pos>=len(self._input):
if not eof:
raise NeedMore()
elif self._state not in (self.ST_OP, self.ST_QUOTED, self.ST_HEREDOC):
#Delimit the current token and leave cleanly
self._push_token('')
break
else:
#Let the sublexers handle the eof themselves
pass
if self._state==self.ST_NORMAL:
self._parse_normal()
elif self._state==self.ST_COMMENT:
self._parse_comment()
elif self._state==self.ST_OP:
self._parse_op(eof)
elif self._state==self.ST_QUOTED:
self._parse_quoted(eof)
elif self._state==self.ST_HEREDOC:
self._parse_heredoc(eof)
else:
assert False, "Unknown state " + str(self._state)
if self._heredoc.op is not None:
raise ShellSyntaxError('missing here-document delimiter')
def _parse_normal(self):
c = self._input[self._pos]
if c=='\n':
self._push_token(c)
self._token = c
self._type = TK_NEWLINE
self._push_token('')
self._pos += 1
elif c in ('\\', '\'', '"', '`', '$'):
self._state = self.ST_QUOTED
elif is_partial_op(c):
self._push_token(c)
self._type = TK_OP
self._token += c
self._pos += 1
self._state = self.ST_OP
elif is_blank(c):
self._push_token(c)
#Discard blanks
self._pos += 1
elif self._token:
self._token += c
self._pos += 1
elif c=='#':
self._state = self.ST_COMMENT
self._type = TK_COMMENT
self._pos += 1
else:
self._pos += 1
self._token += c
def _parse_op(self, eof):
assert self._token
while 1:
if self._pos>=len(self._input):
if not eof:
raise NeedMore()
c = ''
else:
c = self._input[self._pos]
op = self._token + c
if c and is_partial_op(op):
#Still parsing an operator
self._token = op
self._pos += 1
else:
#End of operator
self._push_token(c)
self._state = self.ST_NORMAL
break
def _parse_comment(self):
while 1:
if self._pos>=len(self._input):
raise NeedMore()
c = self._input[self._pos]
if c=='\n':
#End of comment, do not consume the end of line
self._state = self.ST_NORMAL
break
else:
self._token += c
self._pos += 1
def _parse_quoted(self, eof):
"""Precondition: the starting backquote/dollar is still in the input queue."""
if not self._wordlexer:
self._wordlexer = WordLexer()
if self._pos<len(self._input):
#Transfer input queue characters into the subparser
input = self._input[self._pos:]
self._pos += len(input)
wtree, remaining = self._wordlexer.add(input, eof)
self._wordlexer = None
self._token += wordtree_as_string(wtree)
#Put unparsed character back in the input queue
if remaining:
self._input[self._pos:self._pos] = list(remaining)
self._state = self.ST_NORMAL
def _parse_heredoc(self, eof):
assert not self._token
if self._herelexer is None:
self._herelexer = HereDocLexer(self._heredoc.op, self._heredoc.name)
if self._pos<len(self._input):
#Transfer input queue characters into the subparser
input = self._input[self._pos:]
self._pos += len(input)
self._token, remaining = self._herelexer.add(input, eof)
#Reset here-document state
self._herelexer = None
heredoc, self._heredoc = self._heredoc, HereDoc(None)
if remaining:
self._input[self._pos:self._pos] = list(remaining)
self._state = self.ST_NORMAL
#Push pending tokens
heredoc.pendings[:0] = [(self._token, self._type, heredoc.name)]
for token, type, delim in heredoc.pendings:
self._token = token
self._type = type
self._push_token(delim)
def _push_token(self, delim):
if not self._token:
return 0
if self._heredoc.op is not None:
if self._heredoc.name is None:
#Here-document name
if self._type!=TK_TOKEN:
raise ShellSyntaxError("expecting here-document name, got '%s'" % self._token)
self._heredoc.name = unquote_wordtree(make_wordtree(self._token))
self._type = TK_HERENAME
else:
#Capture all tokens until the newline starting the here-document
if self._type==TK_NEWLINE:
assert self._state==self.ST_NORMAL
self._state = self.ST_HEREDOC
self._heredoc.pendings.append((self._token, self._type, delim))
self._token = ''
self._type = TK_TOKEN
return 1
# BEWARE: do not change parser state from here to the end of the function:
# when parsing between a here-document operator and the end of the line,
# tokens are stored in self._heredoc.pendings. Therefore, they will not
# reach the section below.
#Check operators
if self._type==TK_OP:
#False positive because of partial op matching
op = is_op(self._token)
if not op:
self._type = TK_TOKEN
else:
#Map to the specific operator
self._type = op
if self._token in ('<<', '<<-'):
#Done here rather than in _parse_op because there is no need
#to change the parser state since we are still waiting for
#the here-document name
if self._heredoc.op is not None:
raise ShellSyntaxError("syntax error near token '%s'" % self._token)
assert self._heredoc.op is None
self._heredoc.op = self._token
if self._type==TK_TOKEN:
if '=' in self._token and not delim:
if self._token.startswith('='):
#Token is a WORD... a TOKEN that is.
pass
else:
prev = self._token[:self._token.find('=')]
if is_name(prev):
self._type = TK_ASSIGNMENT
else:
#Just a token (unspecified)
pass
else:
reserved = get_reserved(self._token)
if reserved is not None:
if reserved=='In' and self._for_count!=2:
#Sorry, not a reserved word after all
pass
else:
self._type = reserved
if reserved in ('For', 'Case'):
self._for_count = 0
elif are_digits(self._token) and delim in ('<', '>'):
#Detect IO_NUMBER
self._type = TK_IONUMBER
elif self._token==';':
self._type = TK_COMMA
elif self._token=='&':
self._type = TK_AMPERSAND
elif self._type==TK_COMMENT:
#Comments are not part of sh grammar, ignore them
self._token = ''
self._type = TK_TOKEN
return 0
if self._for_count is not None:
#Track token count in 'For' expression to detect 'In' reserved words.
#Can only be in third position, no need to go beyond
self._for_count += 1
if self._for_count==3:
self._for_count = None
self.on_token((self._token, self._type))
self._token = ''
self._type = TK_TOKEN
return 1
def on_token(self, token):
raise NotImplementedError
tokens = [
TK_TOKEN,
# To silence yacc unused token warnings
# TK_COMMENT,
TK_NEWLINE,
TK_IONUMBER,
TK_ASSIGNMENT,
TK_HERENAME,
]
#Add specific operators
tokens += _OPERATORS.values()
#Add reserved words
tokens += _RESERVEDS.values()
class PLYLexer(Lexer):
"""Bridge Lexer and PLY lexer interface."""
def __init__(self):
Lexer.__init__(self)
self._tokens = []
self._current = 0
self.lineno = 0
def on_token(self, token):
value, type = token
self.lineno = 0
t = lex.LexToken()
t.value = value
t.type = type
t.lexer = self
t.lexpos = 0
t.lineno = 0
self._tokens.append(t)
def is_empty(self):
return not bool(self._tokens)
#PLY compliant interface
def token(self):
if self._current>=len(self._tokens):
return None
t = self._tokens[self._current]
self._current += 1
return t
def get_tokens(s):
"""Parse the input string and return a tuple (tokens, unprocessed) where
tokens is a list of parsed tokens and unprocessed is the part of the input
string left untouched by the lexer.
"""
lexer = PLYLexer()
untouched = lexer.add(s, True)
tokens = []
while 1:
token = lexer.token()
if token is None:
break
tokens.append(token)
tokens = [(t.value, t.type) for t in tokens]
return tokens, untouched
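An illustrative sketch of the entry point above (assuming this module is
importable as pyshlex); operator and reserved-word type names come from the
_OPERATORS and _RESERVEDS tables defined earlier in the module:

    import pyshlex
    tokens, remaining = pyshlex.get_tokens('echo hello\n')
    # tokens    -> [('echo', 'TOKEN'), ('hello', 'TOKEN'), ('\n', 'NEWLINE')]
    # remaining -> ''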


@@ -1,772 +0,0 @@
# pyshyacc.py - PLY grammar definition for pysh
#
# Copyright 2007 Patrick Mezard
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.
"""PLY grammar file.
"""
import sys
import pyshlex
tokens = pyshlex.tokens
from ply import yacc
import sherrors
class IORedirect:
def __init__(self, op, filename, io_number=None):
self.op = op
self.filename = filename
self.io_number = io_number
class HereDocument:
def __init__(self, op, name, content, io_number=None):
self.op = op
self.name = name
self.content = content
self.io_number = io_number
def make_io_redirect(p):
"""Make an IORedirect instance from the input 'io_redirect' production."""
name, io_number, io_target = p
assert name=='io_redirect'
if io_target[0]=='io_file':
io_type, io_op, io_file = io_target
return IORedirect(io_op, io_file, io_number)
elif io_target[0]=='io_here':
io_type, io_op, io_name, io_content = io_target
return HereDocument(io_op, io_name, io_content, io_number)
else:
assert False, "Invalid IO redirection token %s" % repr(io_type)
class SimpleCommand:
"""
assigns contains (name, value) pairs.
"""
def __init__(self, words, redirs, assigns):
self.words = list(words)
self.redirs = list(redirs)
self.assigns = list(assigns)
class Pipeline:
def __init__(self, commands, reverse_status=False):
self.commands = list(commands)
assert self.commands #Grammar forbids this
self.reverse_status = reverse_status
class AndOr:
def __init__(self, op, left, right):
self.op = str(op)
self.left = left
self.right = right
class ForLoop:
def __init__(self, name, items, cmds):
self.name = str(name)
self.items = list(items)
self.cmds = list(cmds)
class WhileLoop:
def __init__(self, condition, cmds):
self.condition = list(condition)
self.cmds = list(cmds)
class UntilLoop:
def __init__(self, condition, cmds):
self.condition = list(condition)
self.cmds = list(cmds)
class FunDef:
def __init__(self, name, body):
self.name = str(name)
self.body = body
class BraceGroup:
def __init__(self, cmds):
self.cmds = list(cmds)
class IfCond:
def __init__(self, cond, if_cmds, else_cmds):
self.cond = list(cond)
self.if_cmds = if_cmds
self.else_cmds = else_cmds
class Case:
def __init__(self, name, items):
self.name = name
self.items = items
class SubShell:
def __init__(self, cmds):
self.cmds = cmds
class RedirectList:
def __init__(self, cmd, redirs):
self.cmd = cmd
self.redirs = list(redirs)
def get_production(productions, ptype):
"""productions must be a list of production tuples like (name, obj) where
name is the production string identifier.
Return the first production named 'ptype'. Raise KeyError if None can be
found.
"""
for production in productions:
if production is not None and production[0]==ptype:
return production
raise KeyError(ptype)
#-------------------------------------------------------------------------------
# PLY grammar definition
#-------------------------------------------------------------------------------
def p_multiple_commands(p):
"""multiple_commands : newline_sequence
| complete_command
| multiple_commands complete_command"""
if len(p)==2:
if p[1] is not None:
p[0] = [p[1]]
else:
p[0] = []
else:
p[0] = p[1] + [p[2]]
def p_complete_command(p):
"""complete_command : list separator
| list"""
if len(p)==3 and p[2] and p[2][1] == '&':
p[0] = ('async', p[1])
else:
p[0] = p[1]
def p_list(p):
"""list : list separator_op and_or
| and_or"""
if len(p)==2:
p[0] = [p[1]]
else:
#if p[2]!=';':
# raise NotImplementedError('AND-OR list asynchronous execution is not implemented')
p[0] = p[1] + [p[3]]
def p_and_or(p):
"""and_or : pipeline
| and_or AND_IF linebreak pipeline
| and_or OR_IF linebreak pipeline"""
if len(p)==2:
p[0] = p[1]
else:
p[0] = ('and_or', AndOr(p[2], p[1], p[4]))
def p_maybe_bang_word(p):
"""maybe_bang_word : Bang"""
p[0] = ('maybe_bang_word', p[1])
def p_pipeline(p):
"""pipeline : pipe_sequence
| bang_word pipe_sequence"""
if len(p)==3:
p[0] = ('pipeline', Pipeline(p[2][1:], True))
else:
p[0] = ('pipeline', Pipeline(p[1][1:]))
def p_pipe_sequence(p):
"""pipe_sequence : command
| pipe_sequence PIPE linebreak command"""
if len(p)==2:
p[0] = ['pipe_sequence', p[1]]
else:
p[0] = p[1] + [p[4]]
def p_command(p):
"""command : simple_command
| compound_command
| compound_command redirect_list
| function_definition"""
if p[1][0] in ( 'simple_command',
'for_clause',
'while_clause',
'until_clause',
'case_clause',
'if_clause',
'function_definition',
'subshell',
'brace_group',):
if len(p) == 2:
p[0] = p[1]
else:
p[0] = ('redirect_list', RedirectList(p[1], p[2][1:]))
else:
raise NotImplementedError('%s command is not implemented' % repr(p[1][0]))
def p_compound_command(p):
"""compound_command : brace_group
| subshell
| for_clause
| case_clause
| if_clause
| while_clause
| until_clause"""
p[0] = p[1]
def p_subshell(p):
"""subshell : LPARENS compound_list RPARENS"""
p[0] = ('subshell', SubShell(p[2][1:]))
def p_compound_list(p):
"""compound_list : term
| newline_list term
| term separator
| newline_list term separator"""
productions = p[1:]
try:
sep = get_production(productions, 'separator')
if sep[1]!=';':
raise NotImplementedError()
except KeyError:
pass
term = get_production(productions, 'term')
p[0] = ['compound_list'] + term[1:]
def p_term(p):
"""term : term separator and_or
| and_or"""
if len(p)==2:
p[0] = ['term', p[1]]
else:
if p[2] is not None and p[2][1] == '&':
p[0] = ['term', ('async', p[1][1:])] + [p[3]]
else:
p[0] = p[1] + [p[3]]
def p_maybe_for_word(p):
# Rearrange 'For' priority wrt TOKEN. See p_for_word
"""maybe_for_word : For"""
p[0] = ('maybe_for_word', p[1])
def p_for_clause(p):
"""for_clause : for_word name linebreak do_group
| for_word name linebreak in sequential_sep do_group
| for_word name linebreak in wordlist sequential_sep do_group"""
productions = p[1:]
do_group = get_production(productions, 'do_group')
try:
items = get_production(productions, 'in')[1:]
except KeyError:
raise NotImplementedError('"in" omission is not implemented')
try:
items = get_production(productions, 'wordlist')[1:]
except KeyError:
items = []
name = p[2]
p[0] = ('for_clause', ForLoop(name, items, do_group[1:]))
def p_name(p):
"""name : token""" #Was NAME instead of token
p[0] = p[1]
def p_in(p):
"""in : In"""
p[0] = ('in', p[1])
def p_wordlist(p):
"""wordlist : wordlist token
| token"""
if len(p)==2:
p[0] = ['wordlist', ('TOKEN', p[1])]
else:
p[0] = p[1] + [('TOKEN', p[2])]
def p_case_clause(p):
"""case_clause : Case token linebreak in linebreak case_list Esac
| Case token linebreak in linebreak case_list_ns Esac
| Case token linebreak in linebreak Esac"""
if len(p) < 8:
items = []
else:
items = p[6][1:]
name = p[2]
p[0] = ('case_clause', Case(name, [c[1] for c in items]))
def p_case_list_ns(p):
"""case_list_ns : case_list case_item_ns
| case_item_ns"""
p_case_list(p)
def p_case_list(p):
"""case_list : case_list case_item
| case_item"""
if len(p)==2:
p[0] = ['case_list', p[1]]
else:
p[0] = p[1] + [p[2]]
def p_case_item_ns(p):
"""case_item_ns : pattern RPARENS linebreak
| pattern RPARENS compound_list linebreak
| LPARENS pattern RPARENS linebreak
| LPARENS pattern RPARENS compound_list linebreak"""
p_case_item(p)
def p_case_item(p):
"""case_item : pattern RPARENS linebreak DSEMI linebreak
| pattern RPARENS compound_list DSEMI linebreak
| LPARENS pattern RPARENS linebreak DSEMI linebreak
| LPARENS pattern RPARENS compound_list DSEMI linebreak"""
if len(p) < 7:
name = p[1][1:]
else:
name = p[2][1:]
try:
cmds = get_production(p[1:], "compound_list")[1:]
except KeyError:
cmds = []
p[0] = ('case_item', (name, cmds))
def p_pattern(p):
"""pattern : token
| pattern PIPE token"""
if len(p)==2:
p[0] = ['pattern', ('TOKEN', p[1])]
else:
p[0] = p[1] + [('TOKEN', p[2])]
def p_maybe_if_word(p):
# Rearrange 'If' priority wrt TOKEN. See p_if_word
"""maybe_if_word : If"""
p[0] = ('maybe_if_word', p[1])
def p_maybe_then_word(p):
# Rearrange 'Then' priority wrt TOKEN. See p_then_word
"""maybe_then_word : Then"""
p[0] = ('maybe_then_word', p[1])
def p_if_clause(p):
"""if_clause : if_word compound_list then_word compound_list else_part Fi
| if_word compound_list then_word compound_list Fi"""
else_part = []
if len(p)==7:
else_part = p[5]
p[0] = ('if_clause', IfCond(p[2][1:], p[4][1:], else_part))
def p_else_part(p):
"""else_part : Elif compound_list then_word compound_list else_part
| Elif compound_list then_word compound_list
| Else compound_list"""
if len(p)==3:
p[0] = p[2][1:]
else:
else_part = []
if len(p)==6:
else_part = p[5]
p[0] = ('elif', IfCond(p[2][1:], p[4][1:], else_part))
def p_while_clause(p):
"""while_clause : While compound_list do_group"""
p[0] = ('while_clause', WhileLoop(p[2][1:], p[3][1:]))
def p_maybe_until_word(p):
# Rearrange 'Until' priority wrt TOKEN. See p_until_word
"""maybe_until_word : Until"""
p[0] = ('maybe_until_word', p[1])
def p_until_clause(p):
"""until_clause : until_word compound_list do_group"""
p[0] = ('until_clause', UntilLoop(p[2][1:], p[3][1:]))
def p_function_definition(p):
"""function_definition : fname LPARENS RPARENS linebreak function_body"""
p[0] = ('function_definition', FunDef(p[1], p[5]))
def p_function_body(p):
"""function_body : compound_command
| compound_command redirect_list"""
if len(p)!=2:
raise NotImplementedError('function redirection lists are not implemented')
p[0] = p[1]
def p_fname(p):
"""fname : TOKEN""" #Was NAME instead of token
p[0] = p[1]
def p_brace_group(p):
"""brace_group : Lbrace compound_list Rbrace"""
p[0] = ('brace_group', BraceGroup(p[2][1:]))
def p_maybe_done_word(p):
#See p_assignment_word for details.
"""maybe_done_word : Done"""
p[0] = ('maybe_done_word', p[1])
def p_maybe_do_word(p):
"""maybe_do_word : Do"""
p[0] = ('maybe_do_word', p[1])
def p_do_group(p):
"""do_group : do_word compound_list done_word"""
#Do group contains a list of AndOr
p[0] = ['do_group'] + p[2][1:]
def p_simple_command(p):
"""simple_command : cmd_prefix cmd_word cmd_suffix
| cmd_prefix cmd_word
| cmd_prefix
| cmd_name cmd_suffix
| cmd_name"""
words, redirs, assigns = [], [], []
for e in p[1:]:
name = e[0]
if name in ('cmd_prefix', 'cmd_suffix'):
for sube in e[1:]:
subname = sube[0]
if subname=='io_redirect':
redirs.append(make_io_redirect(sube))
elif subname=='ASSIGNMENT_WORD':
assigns.append(sube)
else:
words.append(sube)
elif name in ('cmd_word', 'cmd_name'):
words.append(e)
cmd = SimpleCommand(words, redirs, assigns)
p[0] = ('simple_command', cmd)
def p_cmd_name(p):
"""cmd_name : TOKEN"""
p[0] = ('cmd_name', p[1])
def p_cmd_word(p):
"""cmd_word : token"""
p[0] = ('cmd_word', p[1])
def p_maybe_assignment_word(p):
#See p_assignment_word for details.
"""maybe_assignment_word : ASSIGNMENT_WORD"""
p[0] = ('maybe_assignment_word', p[1])
def p_cmd_prefix(p):
"""cmd_prefix : io_redirect
| cmd_prefix io_redirect
| assignment_word
| cmd_prefix assignment_word"""
try:
prefix = get_production(p[1:], 'cmd_prefix')
except KeyError:
prefix = ['cmd_prefix']
try:
value = get_production(p[1:], 'assignment_word')[1]
value = ('ASSIGNMENT_WORD', value.split('=', 1))
except KeyError:
value = get_production(p[1:], 'io_redirect')
p[0] = prefix + [value]
def p_cmd_suffix(p):
"""cmd_suffix : io_redirect
| cmd_suffix io_redirect
| token
| cmd_suffix token
| maybe_for_word
| cmd_suffix maybe_for_word
| maybe_done_word
| cmd_suffix maybe_done_word
| maybe_do_word
| cmd_suffix maybe_do_word
| maybe_until_word
| cmd_suffix maybe_until_word
| maybe_assignment_word
| cmd_suffix maybe_assignment_word
| maybe_if_word
| cmd_suffix maybe_if_word
| maybe_then_word
| cmd_suffix maybe_then_word
| maybe_bang_word
| cmd_suffix maybe_bang_word"""
try:
suffix = get_production(p[1:], 'cmd_suffix')
token = p[2]
except KeyError:
suffix = ['cmd_suffix']
token = p[1]
if isinstance(token, tuple):
if token[0]=='io_redirect':
p[0] = suffix + [token]
else:
#Convert maybe_* to TOKEN if necessary
p[0] = suffix + [('TOKEN', token[1])]
else:
p[0] = suffix + [('TOKEN', token)]
def p_redirect_list(p):
"""redirect_list : io_redirect
| redirect_list io_redirect"""
if len(p) == 2:
p[0] = ['redirect_list', make_io_redirect(p[1])]
else:
p[0] = p[1] + [make_io_redirect(p[2])]
def p_io_redirect(p):
"""io_redirect : io_file
| IO_NUMBER io_file
| io_here
| IO_NUMBER io_here"""
if len(p)==3:
p[0] = ('io_redirect', p[1], p[2])
else:
p[0] = ('io_redirect', None, p[1])
def p_io_file(p):
#Return the tuple (operator, filename)
"""io_file : LESS filename
| LESSAND filename
| GREATER filename
| GREATAND filename
| DGREAT filename
| LESSGREAT filename
| CLOBBER filename"""
#Extract the filename string from the filename production
p[0] = ('io_file', p[1], p[2][1])
def p_filename(p):
#Return the filename
"""filename : TOKEN"""
p[0] = ('filename', p[1])
def p_io_here(p):
"""io_here : DLESS here_end
| DLESSDASH here_end"""
p[0] = ('io_here', p[1], p[2][1], p[2][2])
def p_here_end(p):
"""here_end : HERENAME TOKEN"""
p[0] = ('here_document', p[1], p[2])
def p_newline_sequence(p):
# Nothing in the grammar can handle leading NEWLINE productions, so add
# this one with the lowest possible priority relative to newline_list.
"""newline_sequence : newline_list"""
p[0] = None
def p_newline_list(p):
"""newline_list : NEWLINE
| newline_list NEWLINE"""
p[0] = None
def p_linebreak(p):
"""linebreak : newline_list
| empty"""
p[0] = None
def p_separator_op(p):
"""separator_op : COMMA
| AMP"""
p[0] = p[1]
def p_separator(p):
"""separator : separator_op linebreak
| newline_list"""
if len(p)==2:
#Ignore newlines
p[0] = None
else:
#Keep the separator operator
p[0] = ('separator', p[1])
def p_sequential_sep(p):
"""sequential_sep : COMMA linebreak
| newline_list"""
p[0] = None
# Low priority TOKEN => for_word conversion.
# Let maybe_for_word be used as a token when necessary in higher priority
# rules.
def p_for_word(p):
"""for_word : maybe_for_word"""
p[0] = p[1]
def p_if_word(p):
"""if_word : maybe_if_word"""
p[0] = p[1]
def p_then_word(p):
"""then_word : maybe_then_word"""
p[0] = p[1]
def p_done_word(p):
"""done_word : maybe_done_word"""
p[0] = p[1]
def p_do_word(p):
"""do_word : maybe_do_word"""
p[0] = p[1]
def p_until_word(p):
"""until_word : maybe_until_word"""
p[0] = p[1]
def p_assignment_word(p):
"""assignment_word : maybe_assignment_word"""
p[0] = ('assignment_word', p[1][1])
def p_bang_word(p):
"""bang_word : maybe_bang_word"""
p[0] = ('bang_word', p[1][1])
def p_token(p):
"""token : TOKEN
| Fi"""
p[0] = p[1]
def p_empty(p):
'empty :'
p[0] = None
# Error rule for syntax errors
def p_error(p):
msg = []
w = msg.append
w('%r\n' % p)
w('followed by:\n')
for i in range(5):
n = yacc.token()
if not n:
break
w(' %r\n' % n)
raise sherrors.ShellSyntaxError(''.join(msg))
# Build the parser
try:
import pyshtables
except ImportError:
yacc.yacc(tabmodule = 'pyshtables')
else:
yacc.yacc(tabmodule = 'pysh.pyshtables', write_tables = 0, debug = 0)
def parse(input, eof=False, debug=False):
"""Parse a whole script at once and return the generated AST and unconsumed
data in a tuple.
NOTE: eof is probably meaningless for now, the parser being unable to work
in pull mode. It should be set to True.
"""
lexer = pyshlex.PLYLexer()
remaining = lexer.add(input, eof)
if lexer.is_empty():
return [], remaining
if debug:
debug = 2
return yacc.parse(lexer=lexer, debug=debug), remaining
#-------------------------------------------------------------------------------
# AST rendering helpers
#-------------------------------------------------------------------------------
def format_commands(v):
"""Return a tree made of strings and lists. Make command trees easier to
display.
"""
if isinstance(v, list):
return [format_commands(c) for c in v]
if isinstance(v, tuple):
if len(v)==2 and isinstance(v[0], str) and not isinstance(v[1], str):
if v[0] == 'async':
return ['AsyncList', map(format_commands, v[1])]
else:
#Avoid decomposing tuples like ('pipeline', Pipeline(...))
return format_commands(v[1])
return format_commands(list(v))
elif isinstance(v, IfCond):
name = ['IfCond']
name += ['if', map(format_commands, v.cond)]
name += ['then', map(format_commands, v.if_cmds)]
name += ['else', map(format_commands, v.else_cmds)]
return name
elif isinstance(v, ForLoop):
name = ['ForLoop']
name += [repr(v.name)+' in ', map(str, v.items)]
name += ['commands', map(format_commands, v.cmds)]
return name
elif isinstance(v, AndOr):
return [v.op, format_commands(v.left), format_commands(v.right)]
elif isinstance(v, Pipeline):
name = 'Pipeline'
if v.reverse_status:
name = '!' + name
return [name, format_commands(v.commands)]
elif isinstance(v, SimpleCommand):
name = ['SimpleCommand']
if v.words:
name += ['words', map(str, v.words)]
if v.assigns:
assigns = [tuple(a[1]) for a in v.assigns]
name += ['assigns', map(str, assigns)]
if v.redirs:
name += ['redirs', map(format_commands, v.redirs)]
return name
elif isinstance(v, RedirectList):
name = ['RedirectList']
if v.redirs:
name += ['redirs', map(format_commands, v.redirs)]
name += ['command', format_commands(v.cmd)]
return name
elif isinstance(v, IORedirect):
return ' '.join(map(str, (v.io_number, v.op, v.filename)))
elif isinstance(v, HereDocument):
return ' '.join(map(str, (v.io_number, v.op, repr(v.name), repr(v.content))))
elif isinstance(v, SubShell):
return ['SubShell', map(format_commands, v.cmds)]
else:
return repr(v)
def print_commands(cmds, output=sys.stdout):
"""Pretty print a command tree."""
def print_tree(cmd, spaces, output):
if isinstance(cmd, list):
for c in cmd:
print_tree(c, spaces + 3, output)
else:
print >>output, ' '*spaces + str(cmd)
formatted = format_commands(cmds)
print_tree(formatted, 0, output)
def stringify_commands(cmds):
"""Serialize a command tree as a string.
Returned string is not pretty and is currently used for unit tests only.
"""
def stringify(value):
output = []
if isinstance(value, list):
formatted = []
for v in value:
formatted.append(stringify(v))
formatted = ' '.join(formatted)
output.append(''.join(['<', formatted, '>']))
else:
output.append(value)
return ' '.join(output)
return stringify(format_commands(cmds))
def visit_commands(cmds, callable):
"""Visit the command tree and execute callable on every Pipeline and
SimpleCommand instances.
"""
if isinstance(cmds, (tuple, list)):
map(lambda c: visit_commands(c,callable), cmds)
elif isinstance(cmds, (Pipeline, SimpleCommand)):
callable(cmds)
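A hypothetical end-to-end use of this grammar (assuming the pysh modules and
their generated parse tables are importable):

    import pyshyacc
    cmds, remaining = pyshyacc.parse('ls | wc -l\n', eof=True)
    pyshyacc.print_commands(cmds)   # renders the parsed Pipeline tree, indented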


@@ -1,41 +0,0 @@
# sherrors.py - shell errors and signals
#
# Copyright 2007 Patrick Mezard
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.
"""Define shell exceptions and error codes.
"""
class ShellError(Exception):
pass
class ShellSyntaxError(ShellError):
pass
class UtilityError(ShellError):
"""Raised upon utility syntax error (option or operand error)."""
pass
class ExpansionError(ShellError):
pass
class CommandNotFound(ShellError):
"""Specified command was not found."""
pass
class RedirectionError(ShellError):
pass
class VarAssignmentError(ShellError):
"""Variable assignment error."""
pass
class ExitSignal(ShellError):
"""Exit signal."""
pass
class ReturnSignal(ShellError):
"""Exit signal."""
pass


@@ -1,77 +0,0 @@
# subprocess - Subprocesses with accessible I/O streams
#
# For more information about this module, see PEP 324.
#
# This module should remain compatible with Python 2.2, see PEP 291.
#
# Copyright (c) 2003-2005 by Peter Astrand <astrand@lysator.liu.se>
#
# Licensed to PSF under a Contributor Agreement.
# See http://www.python.org/2.4/license for licensing details.
def list2cmdline(seq):
"""
Translate a sequence of arguments into a command line
string, using the same rules as the MS C runtime:
1) Arguments are delimited by white space, which is either a
space or a tab.
2) A string surrounded by double quotation marks is
interpreted as a single argument, regardless of white space
contained within. A quoted string can be embedded in an
argument.
3) A double quotation mark preceded by a backslash is
interpreted as a literal double quotation mark.
4) Backslashes are interpreted literally, unless they
immediately precede a double quotation mark.
5) If backslashes immediately precede a double quotation mark,
every pair of backslashes is interpreted as a literal
backslash. If the number of backslashes is odd, the last
backslash escapes the next double quotation mark as
described in rule 3.
"""
# See
# http://msdn.microsoft.com/library/en-us/vccelng/htm/progs_12.asp
result = []
needquote = False
for arg in seq:
bs_buf = []
# Add a space to separate this argument from the others
if result:
result.append(' ')
needquote = (" " in arg) or ("\t" in arg) or ("|" in arg) or arg == ""
if needquote:
result.append('"')
for c in arg:
if c == '\\':
# Don't know if we need to double yet.
bs_buf.append(c)
elif c == '"':
# Double any preceding backslashes and escape the quotation mark.
result.append('\\' * len(bs_buf)*2)
bs_buf = []
result.append('\\"')
else:
# Normal char
if bs_buf:
result.extend(bs_buf)
bs_buf = []
result.append(c)
# Add remaining backslashes, if any.
if bs_buf:
result.extend(bs_buf)
if needquote:
result.extend(bs_buf)
result.append('"')
return ''.join(result)
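The quoting rules above can be sanity-checked with a small illustrative call
(Python 2 print statement, matching this module):

    print list2cmdline(['copy', 'my file.txt', 'dest\\'])
    # -> copy "my file.txt" dest\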


@@ -0,0 +1,9 @@
# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
LCONF_VERSION = "1"
BBFILES ?= ""
BBLAYERS = " \
${OEROOT}/meta \
${OEROOT}/meta-moblin \
"


@@ -1,12 +1,10 @@
# CONF_VERSION is increased each time build/conf/ changes incompatibly
CONF_VERSION = "1"
# Uncomment and change to cache the files Poky downloads in an alternative
# location, default it ${TOPDIR}/downloads
#DL_DIR ?= "${TOPDIR}/downloads"
# Uncomment and change to cache Poky's built staging output in an alternative
# location, default ${TOPDIR}/sstate-cache
#SSTATE_DIR ?= "${TOPDIR}/sstate-cache"
# Where to cache the files Poky downloads
DL_DIR ?= "${OEROOT}/sources"
# Where to cache Poky's built staging output
PSTAGE_DIR ?= "${OEROOT}/pstage"
# Uncomment and set to allow bitbake to execute multiple tasks at once.
# For a quadcore, BB_NUMBER_THREADS = "4", PARALLEL_MAKE = "-j 4" would
@@ -20,8 +18,7 @@ MACHINE ?= "qemux86"
# Other supported machines
#MACHINE ?= "qemuarm"
#MACHINE ?= "qemux86-64"
#MACHINE ?= "atom-pc"
#MACHINE ?= "netbook"
#MACHINE ?= "c7x0"
#MACHINE ?= "akita"
#MACHINE ?= "spitz"
@@ -36,13 +33,29 @@ MACHINE ?= "qemux86"
#MACHINE ?= "mx31litekit"
#MACHINE ?= "mx31phy"
#MACHINE ?= "zylonite"
#MACHINE ?= "igep0020"
#MACHINE ?= "igep0030"
DISTRO ?= "poky"
# For bleeding edge / experimental / unstable package versions
# DISTRO ?= "poky-bleeding"
# Poky has various extra metadata collections (openmoko, extras).
# To enable these, uncomment all (or some of) the following lines:
# BBFILES = "\
# ${OEROOT}/meta/packages/*/*.bb \
# ${OEROOT}/meta-extras/packages/*/*.bb \
# ${OEROOT}/meta-openmoko/packages/*/*.bb \
# ${OEROOT}/meta-moblin/packages/*/*.bb \
# "
# BBFILE_COLLECTIONS = "normal extras openmoko moblin"
# BBFILE_PATTERN_normal = "^${OEROOT}/meta/"
# BBFILE_PATTERN_extras = "^${OEROOT}/meta-extras/"
# BBFILE_PATTERN_openmoko = "^${OEROOT}/meta-openmoko/"
# BBFILE_PATTERN_moblin = "^${OEROOT}/meta-moblin/"
# BBFILE_PRIORITY_normal = "5"
# BBFILE_PRIORITY_extras = "5"
# BBFILE_PRIORITY_openmoko = "5"
# BBFILE_PRIORITY_moblin = "5"
BBMASK = ""
# EXTRA_IMAGE_FEATURES allows extra packages to be added to the generated images
@@ -73,13 +86,8 @@ EXTRA_IMAGE_FEATURES_mx31ads = "tools-testapps debug-tweaks"
# The first package type listed will be used for rootfs generation
# include 'package_deb' for debs
# include 'package_ipk' for ipks
# include 'package_rpm' for rpms
#PACKAGE_CLASSES ?= "package_rpm package_deb package_ipk"
PACKAGE_CLASSES ?= "package_rpm package_ipk"
# A list of additional classes to use when building the system
# include 'image-prelink' in order to prelink the filesystem image
USER_CLASSES ?= "image-prelink"
#PACKAGE_CLASSES ?= "package_deb package_ipk"
PACKAGE_CLASSES ?= "package_ipk"
# POKYMODE controls the characteristics of the generated packages/images by
# telling poky which type of toolchain to use.
@@ -97,7 +105,7 @@ USER_CLASSES ?= "image-prelink"
# Note that a full build of everything in OpenEmbedded will take GigaBytes of hard
# disk space, so make sure to free enough space. The default TMPDIR is
# <build directory>/tmp
#TMPDIR = "${POKYBASE}/build/tmp"
TMPDIR = "${OEROOT}/build/tmp"
# Uncomment this if you are using the Openedhand provided qemu deb - see README
@@ -138,29 +146,8 @@ ENABLE_BINARY_LOCALE_GENERATION = "1"
# packages for architectures other than the host i.e. building i586 packages
# on an x86_64 host.
# Supported values are i586 and x86_64
#SDKMACHINE ?= "i586"
#SDKMACHINE="i586"
# Poky can try and fetch packaged-staging packages from a http, https or ftp
# mirror. Set this variable to the root of a pstage directory on a server.
#SSTATE_MIRRORS ?= "\
#file://.* http://someserver.tld/share/sstate/ \n \
#file://.* file:///some/local/dir/sstate/"
# Set IMAGETEST to qemu if you want to build testcases and start
# testing in qemu after do_rootfs.
#IMAGETEST = "qemu"
# By default, test cases in the sanity suite will be run. If you want to run
# another test suite or a specific test case (e.g. bat or a boot test case
# under the sanity suite), list them like the following.
#TEST_SCEN = "sanity bat sanity:boot"
# Set GLIBC_GENERATE_LOCALES to the locales you wish to generate should you not
# wish to perform the time-consuming step of generating all LIBC locales.
# WARNING: this may break localisation!
#GLIBC_GENERATE_LOCALES = "en_GB.UTF-8 en_US.UTF-8"
# Default to not build 32 bit libs on 64 bit systems, comment this
# out if that is desired
NO32LIBS = "1"
#PSTAGE_MIRROR ?= "http://someserver.tld/share/pstage"


@@ -20,7 +20,7 @@ SCONF_VERSION = "1"
# although this only works for http
#GIT_PROXY_HOST = "proxy.example.com"
#GIT_PROXY_PORT = "81"
#export GIT_PROXY_COMMAND = "${POKYBASE}/scripts/poky-git-proxy-command"
#export GIT_PROXY_COMMAND = "${OEROOT}/scripts/poky-git-proxy-command"
# GIT_PROXY_IGNORE_* lines define hosts which do not require a proxy to access
#GIT_CORE_CONFIG = "Yes"
@@ -32,7 +32,7 @@ SCONF_VERSION = "1"
# and then share that binary somewhere in PATH, then use the following settings
#GIT_PROXY_HOST = "proxy.example.com"
#GIT_PROXY_PORT = "81"
#export GIT_PROXY_COMMAND = "${POKYBASE}/scripts/poky-git-proxy-socks-command"
#export GIT_PROXY_COMMAND = "${OEROOT}/scripts/poky-git-proxy-socks-command"
# Uncomment this to use a shared download directory


@@ -1,38 +0,0 @@
all: html pdf tarball
pdf:
../tools/poky-docbook-to-pdf poky-ref-manual.xml ../template
../tools/poky-docbook-to-pdf bsp-guide.xml ../template
XSLTOPTS = --stringparam html.stylesheet style.css \
--stringparam chapter.autolabel 1 \
--stringparam appendix.autolabel A \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
##
# These URIs should be rewritten by your distribution's xml catalog to
# match your locally installed XSL stylesheets.
XSL_BASE_URI = http://docbook.sourceforge.net/release/xsl/current
XSL_XHTML_URI = $(XSL_BASE_URI)/xhtml/docbook.xsl
html:
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
xsltproc $(XSLTOPTS) -o poky-ref-manual.html $(XSL_XHTML_URI) poky-ref-manual.xml
xsltproc $(XSLTOPTS) -o bsp-guide.html $(XSL_XHTML_URI) bsp-guide.xml
tarball: html
tar -cvzf poky-ref-manual.tgz poky-ref-manual.html style.css figures/yocto-project-transp.png figures/poky-ref-manual.png
validate:
xmllint --postvalid --xinclude --noout poky-ref-manual.xml
OUTPUTS = poky-ref-manual.tgz poky-ref-manual.html poky-ref-manual.pdf bsp-guide.pdf bsp-guide.html
SOURCES = *.png *.xml *.css *.svg
publish:
scp -r $(OUTPUTS) $(SOURCES) o-hand.com:/srv/www/pokylinux.org/doc/
clean:
rm -f $(OUTPUTS)


@@ -1,451 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id='bsp'>
<title>Board Support Packages (BSP) - Developers Guide</title>
<para>
A Board Support Package (BSP) is a collection of information which together
defines how to support a particular hardware device, set of devices, or
hardware platform. It will include information about the hardware features
present on the device and kernel configuration information along with any
additional hardware drivers required. It will also list any additional software
components required in addition to a generic Linux software stack for both
essential and optional platform features.
</para>
<para>
The intent of this document is to define a structure for these components
so that BSPs follow a commonly understood layout, allowing them to be
provided in a common form that everyone understands. It also allows end-users
to become familiar with one common format and encourages standardisation
of software support of hardware.
</para>
<para>
The proposed format does have elements that are specific to the Poky and
OpenEmbedded build systems. It is intended that this information can be
used by other systems besides Poky/OpenEmbedded and that it will be simple
to extract information and convert to other formats if required. The format
described can be directly accepted as a layer by Poky using its standard
layers mechanism, but it is important to recognise that the BSP captures all
the hardware specific details in one place in a standard format, which is
useful for any person wishing to use the hardware platform regardless of
the build system in use.
</para>
<para>
The BSP specification does not include a build system or other tools -
it is concerned with the hardware specific components only. At the end
distribution point the BSP may be shipped combined with a build system
and other tools, but it is important to maintain the distinction that these
are separate components which may just be combined in certain end products.
</para>
<section id='bsp-filelayout'>
<title>Example Filesystem Layout</title>
<para>
The BSP consists of a file structure inside a base directory, meta-bsp in this example, where "bsp" is a placeholder for the machine or platform name. Examples of some files that it could contain are:
</para>
<para>
<programlisting>
meta-bsp/
meta-bsp/binary/zImage
meta-bsp/binary/poky-image-minimal.directdisk
meta-bsp/conf/layer.conf
meta-bsp/conf/machine/*.conf
meta-bsp/conf/machine/include/tune-*.inc
meta-bsp/packages/bootloader/bootloader_0.1.bb
meta-bsp/packages/linux/linux-bsp-2.6.50/*.patch
meta-bsp/packages/linux/linux-bsp-2.6.50/defconfig-bsp
meta-bsp/packages/linux/linux-bsp_2.6.50.bb
meta-bsp/packages/modem/modem-driver_0.1.bb
meta-bsp/packages/modem/modem-daemon_0.1.bb
meta-bsp/packages/image-creator/image-creator-native_0.1.bb
meta-bsp/prebuilds/
</programlisting>
</para>
<para>
The following sections detail what these files and directories could contain.
</para>
</section>
<section id='bsp-filelayout-binary'>
<title>Prebuilt User Binaries (meta-bsp/binary/*)</title>
<para>
This optional area contains useful prebuilt kernels and userspace filesystem
images appropriate to the target system. Users could use these to get a system
running and quickly get started on development tasks. The exact types of binaries
present will be highly hardware-dependent but a README file should be present
explaining how to use them with the target hardware. If prebuilt binaries are
present, source code to meet licensing requirements must also be provided in
some form.
</para>
</section>
<section id='bsp-filelayout-layer'>
<title>Layer Configuration (meta-bsp/conf/layer.conf)</title>
<para>
This file identifies the structure as a Poky layer, identifies the
contents of the layer, and contains information about how Poky should use
it. In general it will most likely be a standard boilerplate file consisting of:
</para>
<para>
<programlisting>
# We have a conf directory, add to BBPATH
BBPATH := "${BBPATH}${LAYERDIR}"
# We have a recipes directory containing .bb and .bbappend files, add to BBFILES
BBFILES := "${BBFILES} ${LAYERDIR}/recipes/*/*.bb ${LAYERDIR}/recipes/*/*.bbappend"
BBFILE_COLLECTIONS += "bsp"
BBFILE_PATTERN_bsp := "^${LAYERDIR}/"
BBFILE_PRIORITY_bsp = "5"
</programlisting>
</para>
<para>
which simply makes bitbake aware of the recipes and conf directories.
</para>
<para>
This file is required for recognition of the BSP by Poky.
</para>
</section>
<section id='bsp-filelayout-machine'>
<title>Hardware Configuration Options (meta-bsp/conf/machine/*.conf)</title>
<para>
The machine files bind together all the information contained elsewhere
in the BSP into a format that Poky/OpenEmbedded can understand. If
the BSP supports multiple machines, multiple machine configuration files
can be present. These filenames correspond to the values users set the
MACHINE variable to.
</para>
<para>
These files would define things like which kernel package to use
(PREFERRED_PROVIDER of virtual/kernel), which hardware drivers to
include in different types of images, any special software components
that are needed, any bootloader information, and also any special image
format requirements.
</para>
<para>
At least one machine file is required for a Poky BSP layer but more than one may be present.
</para>
</section>
<section id='bsp-filelayout-tune'>
<title>Hardware Optimisation Options (meta-bsp/conf/machine/include/tune-*.inc)</title>
<para>
These are shared hardware "tuning" definitions and are commonly used to
pass specific optimisation flags to the compiler. An example is
tune-atom.inc:
</para>
<para>
<programlisting>
BASE_PACKAGE_ARCH = "core2"
TARGET_CC_ARCH = "-m32 -march=core2 -msse3 -mtune=generic -mfpmath=sse"
</programlisting>
</para>
<para>
which defines a new package architecture called "core2" and uses the
optimization flags specified, which are carefully chosen to give best
performance on Atom CPUs.
</para>
<para>
The tune file would be included by the machine definition and can be
contained in the BSP or reference one from the standard core set of
files included with Poky itself.
</para>
<para>
These files are optional for a Poky BSP layer.
</para>
</section>
<section id='bsp-filelayout-kernel'>
<title>Linux Kernel Configuration (meta-bsp/packages/linux/*)</title>
<para>
These files make up the definition of a kernel to use with this
hardware. In this case it is a complete self-contained kernel with its own
configuration and patches but kernels can be shared between many
machines as well. Taking some specific example files:
</para>
<para>
<programlisting>
meta-bsp/packages/linux/linux-bsp_2.6.50.bb
</programlisting>
</para>
<para>
which is the core kernel recipe, which first details where to get the kernel
source from. All standard source code locations are supported so this could
be a release tarball, some git repository, or source included in
the directory within the BSP itself. It then contains information about which
patches to apply and how to configure and build it. It can reuse the main
Poky kernel build class, so the definitions here can remain very simple.
</para>
<para>
<programlisting>
linux-bsp-2.6.50/*.patch
</programlisting>
</para>
<para>
which are patches which may be applied against the base kernel, wherever
they may have been obtained from.
</para>
<para>
<programlisting>
meta-bsp/packages/linux/linux-bsp-2.6.50/defconfig-bsp
</programlisting>
</para>
<para>
which is the configuration information to use to configure the kernel.
</para>
<para>
Examples of kernel recipes are available in Poky itself. These files are
optional since a kernel from Poky itself could be selected, although it
would be unusual not to have a kernel configuration.
</para>
</section>
<section id='bsp-filelayout-packages'>
<title>Other Software (meta-bsp/packages/*)</title>
<para>
This area includes other pieces of software which the hardware may need for best
operation. These are just examples of the kind of things that may be
encountered. These are standard .bb file recipes in the usual Poky format,
so for examples, see standard Poky recipes. The source can be included directly,
referred to in source control systems or release tarballs of external software projects.
</para>
<para>
<programlisting>
meta-bsp/packages/bootloader/bootloader_0.1.bb
</programlisting>
</para>
<para>
Some kind of bootloader recipe which may be used to generate a new
bootloader binary. Sometimes these are included in the final image
format and needed to reflash hardware.
</para>
<para>
<programlisting>
meta-bsp/packages/modem/modem-driver_0.1.bb
meta-bsp/packages/modem/modem-daemon_0.1.bb
</programlisting>
</para>
<para>
These are examples of a hardware driver and also a hardware daemon which
may need to be included in images to make the hardware useful. "modem"
is one example but there may be other components needed like firmware.
</para>
<para>
<programlisting>
meta-bsp/packages/image-creator/image-creator-native_0.1.bb
</programlisting>
</para>
<para>
Sometimes the device will need an image in a very specific format for
its update mechanism to accept and reflash with it. Recipes to build the
tools needed to do this can be included with the BSP.
</para>
<para>
These files only need be provided if the platform requires them.
</para>
</section>
<section id='bs-filelayout-bbappend'>
<title>Append BSP specific information to existing recipes</title>
<para>
Say you have a recipe like pointercal which has machine-specific information in it,
and then you have your new BSP code in a layer. Before the .bbappend extension was
introduced, you'd have to copy the whole pointercal recipe and files into your layer,
and then add the single file for your machine, which is ugly.
.bbappend makes this much easier by allowing BSP-specific information to be
merged with the original recipe. When bitbake finds any X.bbappend files, they will be
included after bitbake loads X.bb but before finalise or anonymous methods run.
This allows the BSP layer to poke around and do whatever it might want to customise
the original recipe.
If your recipe needs to reference extra files it can use the FILESEXTRAPATHS variable
to specify their location. The example below shows extra files contained in a folder
called ${PN} (the package name).
</para>
<programlisting>
FILESEXTRAPATHS := "${THISDIR}/${PN}"
</programlisting>
<para>
Then the BSP could add machine-specific config files to the layer directory, which will be
picked up by bitbake. You can look at meta-emenlow/packages/formfactor as an example.
</para>
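<para>
As a purely illustrative sketch (hypothetical file names), the layer would then
carry just the append file and the machine-specific data it references:
</para>
<programlisting>
meta-bsp/packages/formfactor/formfactor_0.0.bbappend
meta-bsp/packages/formfactor/formfactor/machconfig
</programlisting>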
</section>
<section id='bsp-filelayout-prebuilds'>
<title>Prebuilt Data (meta-bsp/prebuilds/*)</title>
<para>
This location can contain a precompiled representation of the source code
contained elsewhere in the BSP layer. It can be processed and used by
Poky to provide much faster build times, assuming a compatible configuration is used.
</para>
<para>
These files are optional.
</para>
</section>
<section id='bsp-click-through-licensing'>
<title>BSP 'Click-through' Licensing Procedure</title>
<note><para> This section is here as a description of how
click-through licensing is expected to work, and is
not yet implemented.
</para></note>
<para>
In some cases, a BSP may contain separately licensed IP
(Intellectual Property) for a component, which imposes
upon the user a requirement to accept the terms of a
'click-through' license. Once the license is accepted
(in whatever form that may be, see details below) the
Poky build system can then build and include the
corresponding component in the final BSP image. Some
affected components may be essential to the normal
functioning of the system and have no 'free' replacement
i.e. the resulting system would be non-functional
without them. Other components may be simply
'good-to-have' or purely elective, or if essential
nonetheless have a 'free' (possibly less-capable)
version which may be substituted in the BSP recipe.
</para>
<para>
For the latter cases, where it is possible to do so from
a functionality perspective, the Poky website will make
available a 'de-featured' BSP completely free of
encumbered IP, which can be used directly and without
any further licensing requirements. If present, this
fully 'de-featured' BSP will be named meta-bsp (i.e. the
normal default naming convention). This is the simplest
and therefore preferred option if available, assuming
the resulting functionality meets requirements.
</para>
<para>
If however, a non-encumbered version is unavailable or
the 'free' version would provide unsuitable
functionality or quality, an encumbered version can be
used. Encumbered versions of a BSP are given names of
the form meta-bsp-nonfree. There are several ways
within the Poky build system to satisfy the licensing
requirements for an encumbered BSP, in roughly the
following order of preference:
</para>
<itemizedlist>
<listitem>
<para>
Get a license key (or keys) for the encumbered BSP
by
visiting <ulink url='https://pokylinux.org/bsp-keys.html'>https://pokylinux.org/bsp-keys.html</ulink>
and give the web form there the name of the BSP
and your e-mail address.
</para>
<programlisting>
[screenshot of dialog box]
</programlisting>
<para>
After agreeing to any applicable license terms, the
BSP key(s) will be immediately sent to the address
given and can be used by specifying BSPKEY_&lt;keydomain&gt;
environment variables when building the image:
</para>
<programlisting>
$ BSPKEY_&lt;keydomain&gt;=&lt;key&gt; bitbake poky-image-sato
</programlisting>
<para>
This will allow the encumbered image to be built
with no change at all to the normal build process.
</para>
<para>
Equivalently and probably more conveniently, a line
for each key can instead be put into the user's
local.conf file.
</para>
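<para>
For example, using placeholder values for the key domain and key:
</para>
<programlisting>
BSPKEY_&lt;keydomain&gt; = "&lt;key&gt;"
</programlisting>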
<para>
The &lt;keydomain&gt; component of the
BSPKEY_&lt;keydomain&gt; is required because there
may be multiple licenses in effect for a given BSP; a
given &lt;keydomain&gt; in such cases corresponds to
a particular license. In order for an encumbered
BSP encompassing multiple key domains to be built
successfully, a &lt;keydomain&gt; entry for each
applicable license must be present in local.conf or
supplied on the command-line.
</para>
</listitem>
<listitem>
<para>
Do nothing - build as you normally would, and follow
any license prompts that originate from the
encumbered BSP (the build will cleanly stop at this
point). These usually take the form of instructions
needed to manually fetch the encumbered package(s)
and md5 sums into e.g. the poky/build/downloads
directory. Once the manual package fetch has been
completed, restarting the build will continue where
it left off, this time without the prompt since the
license requirements will have been satisfied.
</para>
</listitem>
<listitem>
<para>
Get a full-featured BSP recipe rather than a key, by
visiting
<ulink url='https://pokylinux.org/bsps.html'>https://pokylinux.org/bsps.html</ulink>.
Accepting the license agreement(s) presented will
subsequently allow you to download a tarball
containing a full-featured BSP legally cleared for
your use by the just-given license agreement(s).
This method will also allow the encumbered image to
be built with no change at all to the normal build
process.
</para>
</listitem>
</itemizedlist>
<para>
Note that method 3 is also the only option available
when downloading pre-compiled images generated from
non-free BSPs. Those images are likewise available at
<ulink url='https://pokylinux.org/bsps.html'>https://pokylinux.org/bsps.html</ulink>.
</para>
</section>
</chapter>

File diff suppressed because it is too large.

Binary file not shown (before: 5.3 KiB).

Binary file not shown (before: 8.4 KiB).


@@ -1,155 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id='intro'>
<title>Introduction</title>
<section id='intro-welcome'>
<title>Welcome to Poky!</title>
<para>
Poky is the build tool at the heart of the Yocto Project.
You use Poky within the Yocto Project to build the images (kernel and software) for targeted hardware.
</para>
<para>
Before jumping into Poky you should have an understanding of the Yocto Project.
Be sure you are familiar with the information in the Yocto Project Quick Start.
You can find this documentation on the public <ulink url='http://yoctoproject.org/'>Yocto Project Website</ulink>.
</para>
</section>
<section>
<title>What is Poky?</title>
<para>
Poky provides a full platform build tool within the Yocto Project, based on open source Linux, X11, Matchbox, GTK+, Pimlico, Clutter, and other <ulink url='http://gnome.org/mobile'>GNOME Mobile</ulink> technologies.
It creates a focused, stable subset of OpenEmbedded that can be easily and reliably built and developed upon.
Poky fully supports a wide range of x86, ARM, MIPS and PowerPC hardware and device virtualisation.
</para>
<para>
Poky is primarily a platform builder which generates filesystem images
based on open source software such as the Kdrive X server, the Matchbox
window manager, the GTK+ toolkit and the D-Bus message bus system. Images
for many kinds of devices can be generated; however, the standard example
machines target QEMU full system emulation (x86, ARM, MIPS and PowerPC) and
real reference boards for each of these architectures.
Poky's ability to boot inside a QEMU
emulator makes it particularly suitable as a test platform for development
of embedded software.
</para>
<para>
An important component integrated within Poky is Sato, a GNOME Mobile
based user interface environment.
It is designed to work well with screens at very high DPI and restricted
size, such as those often found on smartphones and PDAs. It is coded with a
focus on efficiency and speed so that it works smoothly on hand-held and
other embedded hardware. It will sit neatly on top of any device
using the GNOME Mobile stack, providing a well defined user experience.
</para>
<screenshot>
<mediaobject>
<imageobject>
<imagedata fileref="screenshots/ss-sato.png" format="PNG" align='center' scalefit='1' width="100%" contentdepth="100%"/>
</imageobject>
<caption>
<para>The Sato Desktop - A screenshot from a machine running a Poky built image</para>
</caption>
</mediaobject>
</screenshot>
<para>
Poky has a growing open source community and is also backed up by commercial organisations including <ulink url="http://www.intel.com/">Intel Corporation</ulink>.
</para>
</section>
<section id='intro-manualoverview'>
<title>Documentation Overview</title>
<para>
The Poky User Guide is split into sections covering different aspects of Poky.
The <link linkend='usingpoky'>'Using Poky' section</link> gives an overview of the components that make up Poky followed by information about using Poky and debugging images created in Yocto Project.
The <link linkend='extendpoky'>'Extending Poky' section</link> gives information about how to extend and customise Poky along with advice on how to manage these changes.
The <link linkend='platdev'>'Platform Development with Poky' section</link> gives information about interaction between Poky and target hardware for common platform development tasks such as software development, debugging and profiling.
The rest of the manual consists of several reference sections, each giving details on a specific area of Poky functionality.
</para>
<para>
This manual applies to Poky Release 3.3 (Green).
</para>
</section>
<section id='intro-requirements'>
<title>System Requirements</title>
<para>
We recommend Debian-based distributions, in particular a recent Ubuntu
release (10.04 or newer), as the host system for Poky. Nothing in Poky is
distribution specific and other distributions will most likely work as long
as the appropriate prerequisites are installed - we know of Poky being used
successfully on Red Hat, SUSE, Gentoo and Slackware host systems.
For information on what you need to develop images using Yocto Project and Poky
you should see the Yocto Project Quick Start on the public
<ulink url='http://yoctoproject.org/'>Yocto Project Website</ulink>.
</para>
</section>
<section id='intro-getit'>
<title>Obtaining Poky</title>
<section id='intro-getit-releases'>
<title>Releases</title>
<para>Periodically, we make releases of Poky and these are available
at <ulink url='http://pokylinux.org/releases/'/>.
These are more stable and tested than the nightly development images.</para>
</section>
<section id='intro-getit-nightly'>
<title>Nightly Builds</title>
<para>
We make nightly builds of Poky for testing purposes and to make the
latest developments available. The output from these builds is available
at <ulink url='http://autobuilder.pokylinux.org/'/>
where build numbers increase for each subsequent build and can be used to reference a particular build.
</para>
<para>
Automated builds are available for "standard" Poky and for Poky SDKs and toolchains as well
as any testing versions we might have such as poky-bleeding. The toolchains can
be used either as external standalone toolchains or can be combined with Poky as a
prebuilt toolchain to reduce build time. Using the external toolchains is simply a
case of untarring the tarball into the root of your system (it only creates files in
<filename class="directory">/opt/poky</filename>) and then enabling the option
in <filename>local.conf</filename>.
</para>
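<para>
As an illustrative sketch (the tarball name below is a
placeholder; actual names vary by release and architecture),
installing an external toolchain might look like:
</para>
<programlisting>
$ sudo tar -xjf poky-toolchain-&lt;version&gt;-&lt;arch&gt;.tar.bz2 -C /
</programlisting>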
</section>
<section id='intro-getit-dev'>
<title>Development Checkouts</title>
<para>
Poky is available from our Git repository located at
git://git.pokylinux.org/poky.git; a web interface to the repository
can be accessed at <ulink url='http://git.pokylinux.org/'/>.
</para>
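<para>
For example, a development checkout can be obtained with:
</para>
<programlisting>
$ git clone git://git.pokylinux.org/poky.git
</programlisting>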
<para>
The 'master' branch is where the development work takes place and you should use this if you
want to work with the latest cutting-edge developments. It is possible master
can suffer temporary periods of instability while new features are developed and
if this is undesirable we recommend using one of the release branches.
</para>
</section>
</section>
</chapter>
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -1,101 +0,0 @@
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<book id='poky-ref-manual' lang='en'
xmlns:xi="http://www.w3.org/2003/XInclude"
xmlns="http://docbook.org/ns/docbook"
>
<bookinfo>
<mediaobject>
<imageobject>
<imagedata fileref='figures/poky-ref-manual.png'
format='SVG'
align='center' scalefit='1' width='100%'/>
</imageobject>
</mediaobject>
<title>Poky Reference Manual</title>
<subtitle>A Guide and Reference to Poky</subtitle>
<authorgroup>
<author>
<firstname>Richard</firstname> <surname>Purdie</surname>
<affiliation>
<orgname>Intel Corporation</orgname>
</affiliation>
<email>richard@linux.intel.com</email>
</author>
<author>
<firstname>Tomas</firstname> <surname>Frydrych</surname>
<affiliation>
<orgname>Intel Corporation</orgname>
</affiliation>
</author>
<author>
<firstname>Marcin</firstname> <surname>Juszkiewicz</surname>
</author>
<author>
<firstname>Dodji</firstname> <surname>Seketeli</surname>
</author>
</authorgroup>
<revhistory>
<revision>
<revnumber>4.0+git</revnumber>
<date>27 Oct 2010</date>
<revremark>Poky Master Documentation</revremark>
</revision>
</revhistory>
<copyright>
<year>2007-2010</year>
<holder>Linux Foundation</holder>
</copyright>
<legalnotice>
<para>
Permission is granted to copy, distribute and/or modify this document under
the terms of the <ulink type="http" url="http://creativecommons.org/licenses/by-nc-sa/2.0/uk/">Creative Commons Attribution-Non-Commercial-Share Alike 2.0 UK: England &amp; Wales</ulink> as published by Creative Commons.
</para>
</legalnotice>
</bookinfo>
<xi:include href="introduction.xml"/>
<xi:include href="usingpoky.xml"/>
<xi:include href="extendpoky.xml"/>
<xi:include href="bsp.xml"/>
<xi:include href="development.xml"/>
<xi:include href="ref-structure.xml"/>
<xi:include href="ref-bitbake.xml"/>
<xi:include href="ref-classes.xml"/>
<xi:include href="ref-images.xml"/>
<xi:include href="ref-features.xml"/>
<xi:include href="ref-variables.xml"/>
<xi:include href="ref-varlocality.xml"/>
<xi:include href="faq.xml"/>
<xi:include href="resources.xml"/>
<index id='index'>
<title>Index</title>
</index>
</book>
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -1,505 +0,0 @@
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<appendix id='ref-structure'>
<title>Reference: Directory Structure</title>
<para>
Poky consists of several components and understanding what these are
and where they're located is one of the keys to using it. This section walks
through the Poky directory structure giving information about the various
files and directories.
</para>
<section id='structure-core'>
<title>Top level core components</title>
<section id='structure-core-bitbake'>
<title><filename class="directory">bitbake/</filename></title>
<para>
A copy of BitBake is included within Poky for ease of use, and should
usually match the current BitBake stable release from the BitBake project.
BitBake, a metadata interpreter, reads the Poky metadata and runs the tasks
defined in the Poky metadata. Failures are usually from the metadata, not
BitBake itself, so most users don't need to worry about BitBake. The
<filename class="directory">bitbake/bin/</filename> directory is placed
into the PATH environment variable by the <link
linkend="structure-core-script">poky-init-build-env</link> script.
</para>
<para>
For more information on BitBake please see the BitBake project site at
<ulink url="http://bitbake.berlios.de/"/>
and the BitBake on-line manual at <ulink url="http://bitbake.berlios.de/manual/"/>.
</para>
</section>
<section id='structure-core-build'>
<title><filename class="directory">build/</filename></title>
<para>
This directory contains user configuration files and the output
generated by Poky in its standard configuration where the source tree is
combined with the output. It is also possible to place output and configuration
files in a directory separate from the Poky source; see the section <link
linkend='structure-core-script'>separate output directory</link>.
</para>
</section>
<section id='structure-core-meta'>
<title><filename class="directory">meta/</filename></title>
<para>
This directory contains the core metadata, a key part of Poky. Within this
directory there are definitions of the machines, the Poky distribution
and the packages that make up a given system.
</para>
</section>
<section id='structure-core-meta-extras'>
<title><filename class="directory">meta-extras/</filename></title>
<para>
This directory is similar to <filename class="directory">meta/</filename>,
and contains some extra metadata not included in standard Poky. These are
disabled by default, and are not supported as part of Poky.
</para>
</section>
<section id='structure-core-meta-***'>
<title><filename class="directory">meta-***/</filename></title>
<para>
These directories are optional layers to be added to core metadata, which
are enabled by adding them to conf/bblayers.conf.
</para>
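<para>
As a minimal sketch, enabling a hypothetical layer named
meta-mylayer might involve a bblayers.conf entry along these
lines (paths and layer name are illustrative):
</para>
<programlisting>
BBLAYERS = " \
  /path/to/poky/meta \
  /path/to/poky/meta-mylayer \
  "
</programlisting>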
</section>
<section id='structure-core-scripts'>
<title><filename class="directory">scripts/</filename></title>
<para>
This directory contains various integration scripts which implement
extra functionality in the Poky environment, such as the QEMU
scripts. This directory is appended to the PATH environment variable by the
<link linkend="structure-core-script">poky-init-build-env</link> script.
</para>
</section>
<section id='structure-core-sources'>
<title><filename class="directory">sources/</filename></title>
<para>
While not part of a checkout, Poky will create this directory as
part of any build. Any downloads are placed in this directory (as
specified by the <glossterm><link linkend='var-DL_DIR'>DL_DIR</link>
</glossterm> variable). This directory can be shared between Poky
builds to save downloading files multiple times. SCM checkouts are
also stored here as e.g. <filename class="directory">sources/svn/
</filename>, <filename class="directory">sources/cvs/</filename> or
<filename class="directory">sources/git/</filename> and the
sources directory may contain archives of checkouts for various
revisions or dates.
</para>
<para>
It's worth noting that BitBake creates <filename class="extension">.md5
</filename> stamp files for downloads. It uses these to mark downloads as
complete as well as for checksum and access accounting purposes. If you add
a file manually to the directory, you need to touch the corresponding
<filename class="extension">.md5</filename> file too.
</para>
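<para>
For example (the tarball name here is hypothetical):
</para>
<programlisting>
$ touch sources/mypackage-1.0.tar.gz.md5
</programlisting>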
<para>
This location can be overridden by setting <glossterm><link
linkend='var-DL_DIR'>DL_DIR</link></glossterm> in <filename>local.conf
</filename>. This directory can be shared between builds and even between
machines via NFS, so downloads are only made once, speeding up builds.
</para>
</section>
<section id='handbook'>
<title><filename class="directory">documentation</filename></title>
<para>
This is the location for documentation about Poky, including this handbook.
</para>
</section>
<section id='structure-core-script'>
<title><filename>poky-init-build-env</filename></title>
<para>
This script is used to set up the Poky build environment. Sourcing this file in
a shell makes changes to PATH and sets other core BitBake variables based on the
current working directory. You need to source this file before running Poky commands.
Internally it uses scripts within the <filename class="directory">scripts/
</filename> directory to do the bulk of the work. This script supports
specifying any directory as the build output:
</para>
<programlisting>
source POKY_SRC/poky-init-build-env [BUILDDIR]
</programlisting>
<para>
The above command can be typed from any directory, as long as POKY_SRC points to
the desired Poky source tree. The optional BUILDDIR could be any directory you'd
like Poky to generate the build output into.
</para>
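<para>
For example (the paths here are illustrative):
</para>
<programlisting>
$ source ~/poky/poky-init-build-env ~/mybuilds/build1
</programlisting>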
</section>
</section>
<section id='structure-build'>
<title><filename class="directory">build/</filename> - The Build Directory</title>
<section id='structure-build-conf-local.conf'>
<title><filename>build/conf/local.conf</filename></title>
<para>
This file contains all the local user configuration of Poky. If there
is no <filename>local.conf</filename> present, it is created from
<filename>local.conf.sample</filename>. The <filename>local.conf</filename>
file contains documentation on the various configuration options. Any
variable set here overrides any variable set elsewhere within Poky unless
that variable is hardcoded within Poky (e.g. by using '=' instead of '?=').
Some variables are hardcoded for various reasons but these variables are
relatively rare.
</para>
<para>
Edit this file to set the <glossterm><link linkend='var-MACHINE'>MACHINE</link></glossterm> for which you want to build, which package types you
wish to use (PACKAGE_CLASSES) or where downloaded files should go
(<glossterm><link linkend='var-DL_DIR'>DL_DIR</link></glossterm>).
</para>
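<para>
A minimal sketch of such settings (the values shown are
examples only) might be:
</para>
<programlisting>
MACHINE ?= "qemux86"
PACKAGE_CLASSES ?= "package_ipk"
DL_DIR ?= "/path/to/sources"
</programlisting>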
</section>
<section id='structure-build-conf-bblayers.conf'>
<title><filename>build/conf/bblayers.conf</filename></title>
<para>
This file defines the layers walked by BitBake. If there's no <filename>
bblayers.conf</filename> present, it is created from <filename>bblayers.conf.sample
</filename> when the environment setup script is sourced.
</para>
</section>
<section id='structure-build-tmp'>
<title><filename class="directory">build/tmp/</filename></title>
<para>
This is created by BitBake if it doesn't exist and is where all the Poky output
is placed. To clean Poky and start a build from scratch (other than downloads),
you can wipe this directory. The <filename class="directory">tmp/
</filename> directory has some important sub-components detailed below.
</para>
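<para>
For example, from within the build directory:
</para>
<programlisting>
$ rm -rf tmp
</programlisting>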
</section>
<section id='structure-build-tmp-cache'>
<title><filename class="directory">build/tmp/cache/</filename></title>
<para>
When BitBake parses the metadata it creates a cache file of the result which can
be used when subsequently running commands. These are stored here on
a per-machine basis.
</para>
</section>
<section id='structure-build-tmp-deploy'>
<title><filename class="directory">build/tmp/deploy/</filename></title>
<para>Any 'end result' output from Poky is placed under here.</para>
</section>
<section id='structure-build-tmp-deploy-deb'>
<title><filename class="directory">build/tmp/deploy/deb/</filename></title>
<para>
Any .deb packages emitted by Poky are placed here, sorted into feeds for
different architecture types.
</para>
</section>
<section id='structure-build-tmp-deploy-rpm'>
<title><filename class="directory">build/tmp/deploy/rpm/</filename></title>
<para>
Any .rpm packages emitted by Poky are placed here, sorted into feeds for
different architecture types.
</para>
</section>
<section id='structure-build-tmp-deploy-images'>
<title><filename class="directory">build/tmp/deploy/images/</filename></title>
<para>
Complete filesystem images are placed here. If you want to flash the resulting
image from a build onto a device, look here for them.
</para>
</section>
<section id='structure-build-tmp-deploy-ipk'>
<title><filename class="directory">build/tmp/deploy/ipk/</filename></title>
<para>Any resulting .ipk packages emitted by Poky are placed here.</para>
</section>
<section id='structure-build-tmp-sysroots'>
<title><filename class="directory">build/tmp/sysroots/</filename></title>
<para>
Any package needing to share output with other packages does so within sysroots.
This means it contains shared header files and shared libraries, amongst
other data. It is subdivided by architecture so multiple builds can run within
the one build directory.
</para>
</section>
<section id='structure-build-tmp-stamps'>
<title><filename class="directory">build/tmp/stamps/</filename></title>
<para>
This is used by BitBake for accounting purposes to keep track of which tasks
have been run and when. It is also subdivided by architecture. The files are
empty and the important information is the filenames and timestamps.
</para>
</section>
<section id='structure-build-tmp-log'>
<title><filename class="directory">build/tmp/log/</filename></title>
<para>
This contains general log files that are not placed in a package's
<glossterm><link linkend='var-WORKDIR'>WORKDIR</link></glossterm>, such as
the log output from the check_pkg or distro_check tasks.
</para>
</section>
<section id='structure-build-tmp-pkgdata'>
<title><filename class="directory">build/tmp/pkgdata/</filename></title>
<para>
This is an intermediate place for saving packaging data, which is used
later in the packaging process. For details please refer to <link linkend='ref-classes-package'>
package.bbclass</link>.
</para>
</section>
<section id='structure-build-tmp-pstagelogs'>
<title><filename class="directory">build/tmp/pstagelogs/</filename></title>
<para>
This directory contains manifests for task-based prebuilt staging. Each manifest is basically
a list of the files installed by a given task, which is useful for later
packaging or cleanup processes.
</para>
</section>
<section id='structure-build-tmp-work'>
<title><filename class="directory">build/tmp/work/</filename></title>
<para>
This directory contains various subdirectories for each architecture, and each package built by BitBake has its own work directory under the appropriate architecture subdirectory. All tasks are executed from this work directory. As an example, the source for a particular package will be unpacked, patched, configured and compiled all within its own work directory.
</para>
<para>
It is worth considering the structure of a typical work directory. An
example is the linux-rp kernel, version 2.6.20 r7 on the machine spitz
built within Poky. For this package a work directory of <filename
class="directory">tmp/work/spitz-poky-linux-gnueabi/linux-rp-2.6.20-r7/
</filename>, referred to as <glossterm><link linkend='var-WORKDIR'>WORKDIR
</link></glossterm>, is created. Within this directory, the source is
unpacked to linux-2.6.20 and then patched by quilt (see <link
linkend="usingpoky-modifying-packages-quilt">Section 3.5.1</link>).
Within the <filename class="directory">linux-2.6.20</filename> directory,
standard Quilt directories <filename class="directory">linux-2.6.20/patches</filename>
and <filename class="directory">linux-2.6.20/.pc</filename> are created,
and standard quilt commands can be used.
</para>
<para>
There are other directories generated within <glossterm><link
linkend='var-WORKDIR'>WORKDIR</link></glossterm>. The most important
is <glossterm><link linkend='var-WORKDIR'>WORKDIR</link></glossterm><filename class="directory">/temp/</filename> which has log files for each
task (<filename>log.do_*.pid</filename>) and the scripts BitBake runs for
each task (<filename>run.do_*.pid</filename>). The <glossterm><link
linkend='var-WORKDIR'>WORKDIR</link></glossterm><filename
class="directory">/image/</filename> directory is where <command>make
install</command> places its output which is then split into subpackages
within <glossterm><link linkend='var-WORKDIR'>WORKDIR</link></glossterm>
<filename class="directory">/packages-split/</filename>.
</para>
</section>
</section>
<section id='structure-meta'>
<title><filename class="directory">meta/</filename> - The Metadata</title>
<para>
As mentioned previously, this is the core of Poky. It has several
important subdivisions:
</para>
<section id='structure-meta-classes'>
<title><filename class="directory">meta/classes/</filename></title>
<para>
Contains the <filename class="extension">*.bbclass</filename> files. Class
files are used to abstract common code allowing it to be reused by multiple
packages. The <filename>base.bbclass</filename> file is inherited by every
package. Examples of other important classes are
<filename>autotools.bbclass</filename> that in theory allows any
Autotool-enabled package to work with Poky with minimal effort, or
<filename>kernel.bbclass</filename> that contains common code and functions
for working with the Linux kernel. Functions like image generation or
packaging also have their specific class files (<filename>image.bbclass
</filename>, <filename>rootfs_*.bbclass</filename> and
<filename>package*.bbclass</filename>).
</para>
</section>
<section id='structure-meta-conf'>
<title><filename class="directory">meta/conf/</filename></title>
<para>
This is the core set of configuration files which start from
<filename>bitbake.conf</filename> and from which all other configuration
files are included (see the includes at the end of the file, even
<filename>local.conf</filename> is loaded from there!). While
<filename>bitbake.conf</filename> sets up the defaults, these can often be
overridden by user (<filename>local.conf</filename>), machine or
distribution configuration files.
</para>
</section>
<section id='structure-meta-conf-machine'>
<title><filename class="directory">meta/conf/machine/</filename></title>
<para>
Contains all the machine configuration files. If you set MACHINE="spitz", the
end result is Poky looking for a <filename>spitz.conf</filename> file in this directory. The includes
directory contains various data common to multiple machines. If you want to add
support for a new machine to Poky, this is the directory to look in.
</para>
</section>
<section id='structure-meta-conf-distro'>
<title><filename class="directory">meta/conf/distro/</filename></title>
<para>
Any distribution specific configuration is controlled from here. OpenEmbedded
supports multiple distributions of which Poky is one. Poky only contains the
Poky distribution, so poky.conf is the main file here. The versions and
SRCDATES for applications are also configured here. An example of
an alternative configuration is poky-bleeding.conf, although this mainly inherits
its configuration from Poky itself.
</para>
</section>
<section id='structure-meta-recipes-bsp'>
<title><filename class="directory">meta/recipes-bsp/</filename></title>
<para>
Anything linking to specific hardware or hardware configuration information
is placed here, such as u-boot, grub, etc.
</para>
</section>
<section id='structure-meta-recipes-connectivity'>
<title><filename class="directory">meta/recipes-connectivity/</filename></title>
<para>
Libraries and applications related to communication with other devices
</para>
</section>
<section id='structure-meta-recipes-core'>
<title><filename class="directory">meta/recipes-core/</filename></title>
<para>
What's needed to build a basic working Linux image including commonly used dependencies
</para>
</section>
<section id='structure-meta-recipes-devtools'>
<title><filename class="directory">meta/recipes-devtools/</filename></title>
<para>
Tools primarily used by the build system (but can also be used on targets)
</para>
</section>
<section id='structure-meta-recipes-extended'>
<title><filename class="directory">meta/recipes-extended/</filename></title>
<para>
Applications which, whilst not essential, add features compared to the alternatives in
core. They may be needed for full tool functionality or LSB compliance.
</para>
</section>
<section id='structure-meta-recipes-gnome'>
<title><filename class="directory">meta/recipes-gnome/</filename></title>
<para>
All things related to the GTK+ application framework
</para>
</section>
<section id='structure-meta-recipes-graphics'>
<title><filename class="directory">meta/recipes-graphics/</filename></title>
<para>
X and other graphically related system libraries
</para>
</section>
<section id='structure-meta-recipes-kernel'>
<title><filename class="directory">meta/recipes-kernel/</filename></title>
<para>
The kernel and generic applications/libraries with strong kernel dependencies
</para>
</section>
<section id='structure-meta-recipes-multimedia'>
<title><filename class="directory">meta/recipes-multimedia/</filename></title>
<para>
Codecs and support utilities for audio, images and video
</para>
</section>
<section id='structure-meta-recipes-qt'>
<title><filename class="directory">meta/recipes-qt/</filename></title>
<para>
All things related to the Qt application framework
</para>
</section>
<section id='structure-meta-recipes-sato'>
<title><filename class="directory">meta/recipes-sato/</filename></title>
<para>
The Sato demo/reference UI/UX, its associated apps and configuration
</para>
</section>
<section id='structure-meta-site'>
<title><filename class="directory">meta/site/</filename></title>
<para>
Certain autoconf test results cannot be determined when cross-compiling, since
tests cannot be run on a live system. This directory therefore contains a list of
cached results for various architectures, which is passed to autoconf.
</para>
</section>
</section>
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -1,952 +0,0 @@
/*
Generic XHTML / DocBook XHTML CSS Stylesheet.
Browser wrangling and typographic design by
Oyvind Kolas / pippin@gimp.org
Customised for Poky by
Matthew Allum / mallum@o-hand.com
Thanks to:
Liam R. E. Quin
William Skaggs
Jakub Steiner
Structure
---------
The stylesheet is divided into the following sections:
Positioning
Margins, paddings, width, font-size, clearing.
Decorations
Borders, style
Colors
Colors
Graphics
Graphical backgrounds
Nasty IE tweaks
Workarounds needed to make it work in internet explorer,
currently makes the stylesheet non validating, but up until
this point it is validating.
Mozilla extensions
Transparency for footer
Rounded corners on boxes
*/
/*************** /
/ Positioning /
/ ***************/
body {
font-family: Verdana, Sans, sans-serif;
min-width: 640px;
width: 80%;
margin: 0em auto;
padding: 2em 5em 5em 5em;
color: #333;
}
h1,h2,h3,h4,h5,h6,h7 {
font-family: Arial, Sans;
color:#999999;
clear: both;
}
h1 {
font-size: 2em;
text-align: left;
padding: 0em 0em 0em 0em;
margin: 2em 0em 0em 0em;
}
h2.subtitle {
margin: 0.10em 0em 3.0em 0em;
padding: 0em 0em 0em 0em;
font-size: 1.8em;
padding-left: 20%;
font-weight: normal;
font-style: italic;
}
h2 {
margin: 2em 0em 0.66em 0em;
padding: 0.5em 0em 0em 0em;
font-size: 1.5em;
font-weight: normal;
}
h3.subtitle {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
font-size: 142.14%;
text-align: right;
}
h3 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 140%;
font-weight: normal;
}
h4 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 120%;
font-weight: normal;
}
h5 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 110.000%;
border-bottom: 1px solid black;
}
h6 {
margin: 1em 0em 0em 0em;
padding: 1em 0em 0em 0em;
font-size: 80%;
font-weight: normal;
}
.authorgroup {
background-color: transparent;
background-repeat: no-repeat;
padding-top: 256px;
background-image: url("figures/poky-ref-manual.png");
background-position: left top;
margin-top: -256px;
padding-right: 50px;
margin-left: 50px;
text-align: right;
width: 600px;
}
h3.author {
margin: 0em 0em 0em 0em;
padding: 0em 0em 0em 0em;
font-weight: normal;
font-size: 100%;
clear: both;
}
.author tt.email {
font-size: 66%;
}
.titlepage hr {
width: 0em;
clear: both;
}
.revhistory {
padding-top: 2em;
clear: both;
}
.toc,
.list-of-tables,
.list-of-examples,
.list-of-figures {
padding: 1.33em 0em 2.5em 0em;
}
.toc p,
.list-of-tables p,
.list-of-figures p,
.list-of-examples p {
padding: 0em 0em 0em 0em;
padding: 0em 0em 0.3em;
margin: 1.5em 0em 0em 0em;
}
.toc p b,
.list-of-tables p b,
.list-of-figures p b,
.list-of-examples p b{
font-size: 100.0%;
font-weight: bold;
}
.toc dl,
.list-of-tables dl,
.list-of-figures dl,
.list-of-examples dl {
margin: 0em 0em 0.5em 0em;
padding: 0em 0em 0em 0em;
}
.toc dt {
margin: 0em 0em 0em 0em;
padding: 0em 0em 0em 0em;
}
.toc dd {
margin: 0em 0em 0em 2.6em;
padding: 0em 0em 0em 0em;
}
div.glossary dl,
div.variablelist dl {
}
.glossary dl dt,
.variablelist dl dt,
.variablelist dl dt span.term {
font-weight: normal;
width: 20em;
text-align: right;
}
.variablelist dl dt {
margin-top: 0.5em;
}
.glossary dl dd,
.variablelist dl dd {
margin-top: -1em;
margin-left: 25.5em;
}
.glossary dd p,
.variablelist dd p {
margin-top: 0em;
margin-bottom: 1em;
}
div.calloutlist table td {
padding: 0em 0em 0em 0em;
margin: 0em 0em 0em 0em;
}
div.calloutlist table td p {
margin-top: 0em;
margin-bottom: 1em;
}
div p.copyright {
text-align: left;
}
div.legalnotice p.legalnotice-title {
margin-bottom: 0em;
}
p {
line-height: 1.5em;
margin-top: 0em;
}
dl {
padding-top: 0em;
}
hr {
border: solid 1px;
}
.mediaobject,
.mediaobjectco {
text-align: center;
}
img {
border: none;
}
ul {
padding: 0em 0em 0em 1.5em;
}
ul li {
padding: 0em 0em 0em 0em;
}
ul li p {
text-align: left;
}
table {
width: 100%;
}
th {
padding: 0.25em;
text-align: left;
font-weight: normal;
vertical-align: top;
}
td {
padding: 0.25em;
vertical-align: top;
}
p a[id] {
margin: 0px;
padding: 0px;
display: inline;
background-image: none;
}
a {
text-decoration: underline;
color: #444;
}
pre {
overflow: auto;
}
a:hover {
text-decoration: underline;
/*font-weight: bold;*/
}
div.informalfigure,
div.informalexample,
div.informaltable,
div.figure,
div.table,
div.example {
margin: 1em 0em;
padding: 1em;
page-break-inside: avoid;
}
div.informalfigure p.title b,
div.informalexample p.title b,
div.informaltable p.title b,
div.figure p.title b,
div.example p.title b,
div.table p.title b{
padding-top: 0em;
margin-top: 0em;
font-size: 100%;
font-weight: normal;
}
.mediaobject .caption,
.mediaobject .caption p {
text-align: center;
font-size: 80%;
padding-top: 0.5em;
padding-bottom: 0.5em;
}
.epigraph {
padding-left: 55%;
margin-bottom: 1em;
}
.epigraph p {
text-align: left;
}
.epigraph .quote {
font-style: italic;
}
.epigraph .attribution {
font-style: normal;
text-align: right;
}
span.application {
font-style: italic;
}
.programlisting {
font-family: monospace;
font-size: 80%;
white-space: pre;
margin: 1.33em 0em;
padding: 1.33em;
}
.tip,
.warning,
.caution,
.note {
margin-top: 1em;
margin-bottom: 1em;
}
/* force full width of table within div */
.tip table,
.warning table,
.caution table,
.note table {
border: none;
width: 100%;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
padding: 0.8em 0.0em 0.0em 0.0em;
margin : 0em 0em 0em 0em;
}
.tip p,
.warning p,
.caution p,
.note p {
margin-top: 0.5em;
margin-bottom: 0.5em;
padding-right: 1em;
text-align: left;
}
.acronym {
text-transform: uppercase;
}
b.keycap,
.keycap {
padding: 0.09em 0.3em;
margin: 0em;
}
.itemizedlist li {
clear: none;
}
.filename {
font-size: medium;
font-family: Courier, monospace;
}
div.navheader, div.heading{
position: absolute;
left: 0em;
top: 0em;
width: 100%;
background-color: #cdf;
width: 100%;
}
div.navfooter, div.footing{
position: fixed;
left: 0em;
bottom: 0em;
background-color: #eee;
width: 100%;
}
div.navheader td,
div.navfooter td {
font-size: 66%;
}
div.navheader table th {
/*font-family: Georgia, Times, serif;*/
/*font-size: x-large;*/
font-size: 80%;
}
div.navheader table {
border-left: 0em;
border-right: 0em;
border-top: 0em;
width: 100%;
}
div.navfooter table {
border-left: 0em;
border-right: 0em;
border-bottom: 0em;
width: 100%;
}
div.navheader table td a,
div.navfooter table td a {
color: #777;
text-decoration: none;
}
/* normal text in the footer */
div.navfooter table td {
color: black;
}
div.navheader table td a:visited,
div.navfooter table td a:visited {
color: #444;
}
/* links in header and footer */
div.navheader table td a:hover,
div.navfooter table td a:hover {
text-decoration: underline;
background-color: transparent;
color: #33a;
}
div.navheader hr,
div.navfooter hr {
display: none;
}
.qandaset tr.question td p {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
}
.qandaset tr.answer td p {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
}
.answer td {
padding-bottom: 1.5em;
}
.emphasis {
font-weight: bold;
}
/************* /
/ decorations /
/ *************/
.titlepage {
}
.part .title {
}
.subtitle {
border: none;
}
/*
h1 {
border: none;
}
h2 {
border-top: solid 0.2em;
border-bottom: solid 0.06em;
}
h3 {
border-top: 0em;
border-bottom: solid 0.06em;
}
h4 {
border: 0em;
border-bottom: solid 0.06em;
}
h5 {
border: 0em;
}
*/
.programlisting {
border: solid 1px;
}
div.figure,
div.table,
div.informalfigure,
div.informaltable,
div.informalexample,
div.example {
border: 1px solid;
}
.tip,
.warning,
.caution,
.note {
border: 1px solid;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
border-bottom: 1px solid;
}
.question td {
border-top: 1px solid black;
}
.answer {
}
b.keycap,
.keycap {
border: 1px solid;
}
div.navheader, div.heading{
border-bottom: 1px solid;
}
div.navfooter, div.footing{
border-top: 1px solid;
}
/********* /
/ colors /
/ *********/
body {
color: #333;
background: white;
}
a {
background: transparent;
}
a:hover {
background-color: #dedede;
}
h1,
h2,
h3,
h4,
h5,
h6,
h7,
h8 {
background-color: transparent;
}
hr {
border-color: #aaa;
}
.tip, .warning, .caution, .note {
border-color: #aaa;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
border-bottom-color: #aaa;
}
.warning {
background-color: #fea;
}
.caution {
background-color: #fea;
}
.tip {
background-color: #eff;
}
.note {
background-color: #dfc;
}
.glossary dl dt,
.variablelist dl dt,
.variablelist dl dt span.term {
color: #044;
}
div.figure,
div.table,
div.example,
div.informalfigure,
div.informaltable,
div.informalexample {
border-color: #aaa;
}
pre.programlisting {
color: black;
background-color: #fff;
border-color: #aaa;
border-width: 2px;
}
.guimenu,
.guilabel,
.guimenuitem {
background-color: #eee;
}
b.keycap,
.keycap {
background-color: #eee;
border-color: #999;
}
div.navheader {
border-color: black;
}
div.navfooter {
border-color: black;
}
/*********** /
/ graphics /
/ ***********/
/*
body {
background-image: url("images/body_bg.jpg");
background-attachment: fixed;
}
.navheader,
.note,
.tip {
background-image: url("images/note_bg.jpg");
background-attachment: fixed;
}
.warning,
.caution {
background-image: url("images/warning_bg.jpg");
background-attachment: fixed;
}
.figure,
.informalfigure,
.example,
.informalexample,
.table,
.informaltable {
background-image: url("images/figure_bg.jpg");
background-attachment: fixed;
}
*/
h1,
h2,
h3,
h4,
h5,
h6,
h7{
}
div.preface .titlepage .title,
div.colophon .title,
div.chapter .titlepage .title {
background-image: url("images/title-bg.png");
background-position: bottom;
background-repeat: repeat-x;
}
div.section div.section .titlepage .title,
div.sect2 .titlepage .title {
background: none;
}
h1.title {
background-color: transparent;
background-image: url("poky-ref-manual.png");
background-repeat: no-repeat;
height: 256px;
text-indent: -9000px;
overflow:hidden;
}
h2.subtitle {
background-color: transparent;
text-indent: -9000px;
overflow:hidden;
width: 0px;
display: none;
}
/*************************************** /
/ pippin.gimp.org specific alterations /
/ ***************************************/
/*
div.heading, div.navheader {
color: #777;
font-size: 80%;
padding: 0;
margin: 0;
text-align: left;
position: absolute;
top: 0px;
left: 0px;
width: 100%;
height: 50px;
background: url('/gfx/heading_bg.png') transparent;
background-repeat: repeat-x;
background-attachment: fixed;
border: none;
}
div.heading a {
color: #444;
}
div.footing, div.navfooter {
border: none;
color: #ddd;
font-size: 80%;
text-align:right;
width: 100%;
padding-top: 10px;
position: absolute;
bottom: 0px;
left: 0px;
background: url('/gfx/footing_bg.png') transparent;
}
*/
/****************** /
/ nasty ie tweaks /
/ ******************/
/*
div.heading, div.navheader {
width:expression(document.body.clientWidth + "px");
}
div.footing, div.navfooter {
width:expression(document.body.clientWidth + "px");
margin-left:expression("-5em");
}
body {
padding:expression("4em 5em 0em 5em");
}
*/
/**************************************** /
/ mozilla vendor specific css extensions /
/ ****************************************/
/*
div.navfooter, div.footing{
-moz-opacity: 0.8em;
}
div.figure,
div.table,
div.informalfigure,
div.informaltable,
div.informalexample,
div.example,
.tip,
.warning,
.caution,
.note {
-moz-border-radius: 0.5em;
}
b.keycap,
.keycap {
-moz-border-radius: 0.3em;
}
*/
table tr td table tr td {
display: none;
}
hr {
display: none;
}
table {
border: 0em;
}
.photo {
float: right;
margin-left: 1.5em;
margin-bottom: 1.5em;
margin-top: 0em;
max-width: 17em;
border: 1px solid gray;
padding: 3px;
background: white;
}
.seperator {
padding-top: 2em;
clear: both;
}
#validators {
margin-top: 5em;
text-align: right;
color: #777;
}
@media print {
body {
font-size: 8pt;
}
.noprint {
display: none;
}
}
.tip,
.note {
background: #91ae35;
color: #fff;
padding: 20px;
margin: 20px;
}
.tip h3,
.note h3 {
padding: 0em;
margin: 0em;
font-size: 2em;
font-weight: bold;
color: #fff;
}
.tip a,
.note a {
color: #fff;
text-decoration: underline;
}

Binary file not shown.


View File

@@ -1,58 +0,0 @@
<fop version="1.0">
<!-- Strict user configuration -->
<strict-configuration>true</strict-configuration>
<!-- Strict FO validation -->
<strict-validation>true</strict-validation>
<!--
Set the baseDir so common/openedhand.svg references in plans still
work ok. Note, relative file references to current dir should still work.
-->
<base>../template</base>
<font-base>../template</font-base>
<!-- Source resolution in dpi (dots/pixels per inch) for determining the
size of pixels in SVG and bitmap images, default: 72dpi -->
<!-- <source-resolution>72</source-resolution> -->
<!-- Target resolution in dpi (dots/pixels per inch) for specifying the
target resolution for generated bitmaps, default: 72dpi -->
<!-- <target-resolution>72</target-resolution> -->
<!-- default page-height and page-width, in case
value is specified as auto -->
<default-page-settings height="11in" width="8.26in"/>
<!-- <use-cache>false</use-cache> -->
<renderers>
<renderer mime="application/pdf">
<fonts>
<font metrics-file="VeraMono.xml"
kerning="yes"
embed-url="VeraMono.ttf">
<font-triplet name="veramono" style="normal" weight="normal"/>
</font>
<font metrics-file="VeraMoBd.xml"
kerning="yes"
embed-url="VeraMoBd.ttf">
<font-triplet name="veramono" style="normal" weight="bold"/>
</font>
<font metrics-file="Vera.xml"
kerning="yes"
embed-url="Vera.ttf">
<font-triplet name="verasans" style="normal" weight="normal"/>
<font-triplet name="verasans" style="normal" weight="bold"/>
<font-triplet name="verasans" style="italic" weight="normal"/>
<font-triplet name="verasans" style="italic" weight="bold"/>
</font>
<auto-detect/>
</fonts>
</renderer>
</renderers>
</fop>

Binary file not shown.


View File

@@ -1,51 +0,0 @@
#!/bin/sh
if [ -z "$1" -o -z "$2" ]; then
echo "usage: [-v] $0 <docbook file> <templatedir>"
echo
echo "*NOTE* you need xsltproc, fop and nwalsh docbook stylesheets"
echo " installed for this to work!"
echo
exit 1
fi
FO=`echo $1 | sed s/.xml/.fo/` || exit 1
PDF=`echo $1 | sed s/.xml/.pdf/` || exit 1
TEMPLATEDIR=$2
##
# These URIs should be rewritten by your distribution's xml catalog to
# match your locally installed XSL stylesheets.
XSL_BASE_URI="http://docbook.sourceforge.net/release/xsl/current"
# Creates a temporary XSL stylesheet based on titlepage.xsl
xsltproc -o /tmp/titlepage.xsl \
--xinclude \
$XSL_BASE_URI/template/titlepage.xsl \
$TEMPLATEDIR/titlepage.templates.xml || exit 1
# Creates the file needed for FOP
xsltproc --xinclude \
--stringparam hyphenate false \
--stringparam formal.title.placement "figure after" \
--stringparam ulink.show 1 \
--stringparam body.font.master 9 \
--stringparam title.font.master 11 \
--stringparam draft.watermark.image "$TEMPLATEDIR/draft.png" \
--stringparam chapter.autolabel 1 \
--stringparam appendix.autolabel A \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--output $FO \
$TEMPLATEDIR/poky-db-pdf.xsl \
$1 || exit 1
# Invokes the Java version of FOP. Uses the additional configuration file common/fop-config.xml
fop -c $TEMPLATEDIR/fop-config.xml -fo $FO -pdf $PDF || exit 1
rm -f $FO
rm -f /tmp/titlepage.xsl
echo
echo " #### Success! $PDF ready. ####"
echo

View File

@@ -1,32 +0,0 @@
XSLTOPTS = --stringparam html.stylesheet style.css \
--xinclude
XSL_BASE_URI = http://docbook.sourceforge.net/release/xsl/current
XSL_XHTML_URI = $(XSL_BASE_URI)/xhtml/docbook.xsl
all: html tarball
##
# These URIs should be rewritten by your distribution's xml catalog to
# match your locally installed XSL stylesheets.
html:
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
# xsltproc $(XSLTOPTS) -o yocto-project-qs.html $(XSL_XHTML_URI) yocto-project-qs.xml
xsltproc $(XSLTOPTS) -o yocto-project-qs.html yocto-project-qs-customization.xsl yocto-project-qs.xml
tarball: html
tar -cvzf yocto-project-qs.tgz yocto-project-qs.html style.css figures/yocto-environment.png figures/building-an-image.png figures/using-a-pre-built-image.png figures/yocto-project-transp.png
validate:
xmllint --postvalid --xinclude --noout yocto-project-qs.xml
OUTPUTS = yocto-project-qs.tgz yocto-project-qs.html
SOURCES = *.png *.xml *.css
publish:
scp -r $(OUTPUTS) $(SOURCES) o-hand.com:/srv/www/pokylinux.org/doc/
clean:
rm -f $(OUTPUTS)

Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


View File

@@ -1,963 +0,0 @@
/*
Generic XHTML / DocBook XHTML CSS Stylesheet.
Browser wrangling and typographic design by
Oyvind Kolas / pippin@gimp.org
Customised for Poky by
Matthew Allum / mallum@o-hand.com
Thanks to:
Liam R. E. Quin
William Skaggs
Jakub Steiner
Structure
---------
The stylesheet is divided into the following sections:
Positioning
Margins, paddings, width, font-size, clearing.
Decorations
Borders, style
Colors
Colors
Graphics
Graphical backgrounds
Nasty IE tweaks
Workarounds needed to make it work in internet explorer,
currently makes the stylesheet non validating, but up until
this point it is validating.
Mozilla extensions
Transparency for footer
Rounded corners on boxes
*/
/*************** /
/ Positioning /
/ ***************/
body {
font-family: Verdana, Sans, sans-serif;
min-width: 640px;
width: 80%;
margin: 0em auto;
padding: 2em 5em 5em 5em;
color: #333;
}
h1,h2,h3,h4,h5,h6,h7 {
font-family: Arial, Sans;
color:#999999;
clear: both;
}
h1 {
font-size: 2em;
text-align: left;
padding: 0em 0em 0em 0em;
margin: 2em 0em 0em 0em;
}
h2.subtitle {
margin: 0.10em 0em 3.0em 0em;
padding: 0em 0em 0em 0em;
font-size: 1.8em;
padding-left: 20%;
font-weight: normal;
font-style: italic;
}
h2 {
margin: 2em 0em 0.66em 0em;
padding: 0.5em 0em 0em 0em;
font-size: 1.5em;
font-weight: normal;
}
h3.subtitle {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
font-size: 142.14%;
text-align: right;
}
h3 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 140%;
font-weight: normal;
}
h4 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 120%;
font-weight: normal;
}
h5 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 110.000%;
border-bottom: 1px solid black;
}
h6 {
margin: 1em 0em 0em 0em;
padding: 1em 0em 0em 0em;
font-size: 80%;
font-weight: normal;
}
.authorgroup {
background-color: transparent;
background-repeat: no-repeat;
padding-top: 256px;
background-image: url("../figures/yocto-project-bw.png");
background-position: top;
margin-top: -256px;
padding-right: 50px;
margin-left: 50px;
text-align: center;
width: 600px;
}
h3.author {
margin: 0em 0em 0em 0em;
padding: 0em 0em 0em 0em;
font-weight: normal;
font-size: 100%;
clear: both;
}
.author tt.email {
font-size: 66%;
}
.titlepage hr {
width: 0em;
clear: both;
}
.revhistory {
padding-top: 2em;
clear: both;
}
.toc,
.list-of-tables,
.list-of-examples,
.list-of-figures {
padding: 1.33em 0em 2.5em 0em;
}
.toc p,
.list-of-tables p,
.list-of-figures p,
.list-of-examples p {
padding: 0em 0em 0em 0em;
padding: 0em 0em 0.3em;
margin: 1.5em 0em 0em 0em;
}
.toc p b,
.list-of-tables p b,
.list-of-figures p b,
.list-of-examples p b{
font-size: 100.0%;
font-weight: bold;
}
.toc dl,
.list-of-tables dl,
.list-of-figures dl,
.list-of-examples dl {
margin: 0em 0em 0.5em 0em;
padding: 0em 0em 0em 0em;
}
.toc dt {
margin: 0em 0em 0em 0em;
padding: 0em 0em 0em 0em;
}
.toc dd {
margin: 0em 0em 0em 2.6em;
padding: 0em 0em 0em 0em;
}
div.glossary dl,
div.variablelist dl {
}
.glossary dl dt,
.variablelist dl dt,
.variablelist dl dt span.term {
font-weight: normal;
width: 20em;
text-align: right;
}
.variablelist dl dt {
margin-top: 0.5em;
}
.glossary dl dd,
.variablelist dl dd {
margin-top: -1em;
margin-left: 25.5em;
}
.glossary dd p,
.variablelist dd p {
margin-top: 0em;
margin-bottom: 1em;
}
div.calloutlist table td {
padding: 0em 0em 0em 0em;
margin: 0em 0em 0em 0em;
}
div.calloutlist table td p {
margin-top: 0em;
margin-bottom: 1em;
}
div p.copyright {
text-align: left;
}
div.legalnotice p.legalnotice-title {
margin-bottom: 0em;
}
p {
line-height: 1.5em;
margin-top: 0em;
color: black; font-size: 100%;
}
dl {
padding-top: 0em;
}
hr {
border: solid 1px;
}
.mediaobject,
.mediaobjectco {
text-align: center;
}
img {
border: none;
}
ul {
padding: 0em 0em 0em 1.5em;
}
ul li {
padding: 0em 0em 0em 0em;
}
ul li p {
text-align: left;
}
table {
width: 100%;
}
th {
padding: 0.25em;
text-align: left;
font-weight: normal;
vertical-align: top;
}
td {
padding: 0.25em;
vertical-align: top;
}
p a[id] {
margin: 0px;
padding: 0px;
display: inline;
background-image: none;
}
a {
text-decoration: underline;
color: #444;
}
pre {
overflow: auto;
}
a:hover {
text-decoration: underline;
/*font-weight: bold;*/
}
div.informalfigure,
div.informalexample,
div.informaltable,
div.figure,
div.table,
div.example {
margin: 1em 0em;
padding: 1em;
page-break-inside: avoid;
}
div.informalfigure p.title b,
div.informalexample p.title b,
div.informaltable p.title b,
div.figure p.title b,
div.example p.title b,
div.table p.title b{
padding-top: 0em;
margin-top: 0em;
font-size: 100%;
font-weight: normal;
}
.mediaobject .caption,
.mediaobject .caption p {
text-align: center;
font-size: 80%;
padding-top: 0.5em;
padding-bottom: 0.5em;
}
.epigraph {
padding-left: 55%;
margin-bottom: 1em;
}
.epigraph p {
text-align: left;
}
.epigraph .quote {
font-style: italic;
}
.epigraph .attribution {
font-style: normal;
text-align: right;
}
span.application {
font-style: italic;
}
.programlisting {
font-family: monospace;
font-size: 80%;
white-space: pre;
margin: 1.33em 0em;
padding: 1.33em;
}
.tip,
.warning,
.caution,
.note {
margin-top: 1em;
margin-bottom: 1em;
}
/* force full width of table within div */
.tip table,
.warning table,
.caution table,
.note table {
border: none;
width: 100%;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
padding: 0.8em 0.0em 0.0em 0.0em;
margin : 0em 0em 0em 0em;
}
.tip p,
.warning p,
.caution p,
.note p {
margin-top: 0.5em;
margin-bottom: 0.5em;
padding-right: 1em;
text-align: left;
}
.acronym {
text-transform: uppercase;
}
b.keycap,
.keycap {
padding: 0.09em 0.3em;
margin: 0em;
}
.itemizedlist li {
clear: none;
}
.filename {
font-size: medium;
font-family: Courier, monospace;
}
div.navheader, div.heading{
position: absolute;
left: 0em;
top: 0em;
width: 100%;
background-color: #cdf;
width: 100%;
}
div.navfooter, div.footing{
position: fixed;
left: 0em;
bottom: 0em;
background-color: #eee;
width: 100%;
}
div.navheader td,
div.navfooter td {
font-size: 66%;
}
div.navheader table th {
/*font-family: Georgia, Times, serif;*/
/*font-size: x-large;*/
font-size: 80%;
}
div.navheader table {
border-left: 0em;
border-right: 0em;
border-top: 0em;
width: 100%;
}
div.navfooter table {
border-left: 0em;
border-right: 0em;
border-bottom: 0em;
width: 100%;
}
div.navheader table td a,
div.navfooter table td a {
color: #777;
text-decoration: none;
}
/* normal text in the footer */
div.navfooter table td {
color: black;
}
div.navheader table td a:visited,
div.navfooter table td a:visited {
color: #444;
}
/* links in header and footer */
div.navheader table td a:hover,
div.navfooter table td a:hover {
text-decoration: underline;
background-color: transparent;
color: #33a;
}
div.navheader hr,
div.navfooter hr {
display: none;
}
.qandaset tr.question td p {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
}
.qandaset tr.answer td p {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
}
.answer td {
padding-bottom: 1.5em;
}
.emphasis {
font-weight: bold;
}
/************* /
/ decorations /
/ *************/
.titlepage {
}
.part .title {
}
.subtitle {
border: none;
}
/*
h1 {
border: none;
}
h2 {
border-top: solid 0.2em;
border-bottom: solid 0.06em;
}
h3 {
border-top: 0em;
border-bottom: solid 0.06em;
}
h4 {
border: 0em;
border-bottom: solid 0.06em;
}
h5 {
border: 0em;
}
*/
.programlisting {
border: solid 1px;
}
div.figure,
div.table,
div.informalfigure,
div.informaltable,
div.informalexample,
div.example {
border: 1px solid;
}
.tip,
.warning,
.caution,
.note {
border: 1px solid;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
border-bottom: 1px solid;
}
.question td {
border-top: 1px solid black;
}
.answer {
}
b.keycap,
.keycap {
border: 1px solid;
}
div.navheader, div.heading{
border-bottom: 1px solid;
}
div.navfooter, div.footing{
border-top: 1px solid;
}
/********* /
/ colors /
/ *********/
body {
color: #333;
background: white;
}
a {
background: transparent;
}
a:hover {
background-color: #dedede;
}
h1,
h2,
h3,
h4,
h5,
h6,
h7,
h8 {
background-color: transparent;
}
hr {
border-color: #aaa;
}
.tip, .warning, .caution, .note {
border-color: #aaa;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
border-bottom-color: #aaa;
}
.warning {
background-color: #fea;
}
.caution {
background-color: #fea;
}
.tip {
background-color: #eff;
}
.note {
background-color: #dfc;
}
.glossary dl dt,
.variablelist dl dt,
.variablelist dl dt span.term {
color: #044;
}
div.figure,
div.table,
div.example,
div.informalfigure,
div.informaltable,
div.informalexample {
border-color: #aaa;
}
pre.programlisting {
color: black;
background-color: #fff;
border-color: #aaa;
border-width: 2px;
}
.guimenu,
.guilabel,
.guimenuitem {
background-color: #eee;
}
b.keycap,
.keycap {
background-color: #eee;
border-color: #999;
}
div.navheader {
border-color: black;
}
div.navfooter {
border-color: black;
}
/*********** /
/ graphics /
/ ***********/
/*
body {
background-image: url("images/body_bg.jpg");
background-attachment: fixed;
}
.navheader,
.note,
.tip {
background-image: url("images/note_bg.jpg");
background-attachment: fixed;
}
.warning,
.caution {
background-image: url("images/warning_bg.jpg");
background-attachment: fixed;
}
.figure,
.informalfigure,
.example,
.informalexample,
.table,
.informaltable {
background-image: url("images/figure_bg.jpg");
background-attachment: fixed;
}
*/
h1,
h2,
h3,
h4,
h5,
h6,
h7{
}
/*
Example of how to stick an image as part of the title.
div.article .titlepage .title
{
background-image: url("figures/white-on-black.png");
background-position: center;
background-repeat: repeat-x;
}
*/
div.preface .titlepage .title,
div.colophon .title,
div.chapter .titlepage .title,
div.article .titlepage .title
{
}
div.section div.section .titlepage .title,
div.sect2 .titlepage .title {
background: none;
}
h1.title {
background-color: transparent;
background-image: url("figures/yocto-project-bw.png");
background-repeat: no-repeat;
height: 256px;
text-indent: -9000px;
overflow:hidden;
}
h2.subtitle {
background-color: transparent;
text-indent: -9000px;
overflow:hidden;
width: 0px;
display: none;
}
/*************************************** /
/ pippin.gimp.org specific alterations /
/ ***************************************/
/*
div.heading, div.navheader {
color: #777;
font-size: 80%;
padding: 0;
margin: 0;
text-align: left;
position: absolute;
top: 0px;
left: 0px;
width: 100%;
height: 50px;
background: url('/gfx/heading_bg.png') transparent;
background-repeat: repeat-x;
background-attachment: fixed;
border: none;
}
div.heading a {
color: #444;
}
div.footing, div.navfooter {
border: none;
color: #ddd;
font-size: 80%;
text-align:right;
width: 100%;
padding-top: 10px;
position: absolute;
bottom: 0px;
left: 0px;
background: url('/gfx/footing_bg.png') transparent;
}
*/
/****************** /
/ nasty ie tweaks /
/ ******************/
/*
div.heading, div.navheader {
width:expression(document.body.clientWidth + "px");
}
div.footing, div.navfooter {
width:expression(document.body.clientWidth + "px");
margin-left:expression("-5em");
}
body {
padding:expression("4em 5em 0em 5em");
}
*/
/**************************************** /
/ mozilla vendor specific css extensions /
/ ****************************************/
/*
div.navfooter, div.footing{
-moz-opacity: 0.8em;
}
div.figure,
div.table,
div.informalfigure,
div.informaltable,
div.informalexample,
div.example,
.tip,
.warning,
.caution,
.note {
-moz-border-radius: 0.5em;
}
b.keycap,
.keycap {
-moz-border-radius: 0.3em;
}
*/
table tr td table tr td {
display: none;
}
hr {
display: none;
}
table {
border: 0em;
}
.photo {
float: right;
margin-left: 1.5em;
margin-bottom: 1.5em;
margin-top: 0em;
max-width: 17em;
border: 1px solid gray;
padding: 3px;
background: white;
}
.seperator {
padding-top: 2em;
clear: both;
}
#validators {
margin-top: 5em;
text-align: right;
color: #777;
}
@media print {
body {
font-size: 8pt;
}
.noprint {
display: none;
}
}
.tip,
.note {
background: #91ae35;
color: #fff;
padding: 20px;
margin: 20px;
}
.tip h3,
.note h3 {
padding: 0em;
margin: 0em;
font-size: 2em;
font-weight: bold;
color: #fff;
}
.tip a,
.note a {
color: #fff;
text-decoration: underline;
}

View File

@@ -1,8 +0,0 @@
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl" />
<xsl:param name="generate.toc" select="'article nop'"></xsl:param>
</xsl:stylesheet>

View File

@@ -1,365 +0,0 @@
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<article id='intro'>
<imagedata fileref="figures/yocto-project-transp.png" width="6in" depth="1in" align="right" scale="25" />
<section id='fake-title'>
<title>Yocto Project Quick Start</title>
</section>
<section id='welcome'>
<title>Welcome!</title>
<para>
Welcome to the Yocto Project!
The Yocto Project (YP) is an open-source collaboration project focused on embedded Linux
developers.
Amongst other things, YP uses the Poky build tool to construct complete Linux images.
</para>
<para>
This short document will give you some basic information about the environment as well
as let you experience it in its simplest form.
After reading this document you will have a basic understanding of what the Yocto Project is
and how to use some of its core components.
This document steps you through a simple example showing you how to build a small image
and run it using the QEMU emulator.
</para>
<para>
For complete information on the Yocto Project you should check out the
<ulink url='http://www.yoctoproject.org'>Yocto Project Website</ulink>.
You can find the latest builds, breaking news, full development documentation, and a
rich Yocto Project Development Community into which you can tap.
</para>
</section>
<section id='yp-intro'>
<title>Introducing the Yocto Project Development Environment</title>
<para>
The Yocto Project through the Poky build tool provides an open source development
environment targeting the ARM, MIPS, PowerPC and x86 architectures for a variety of
platforms including x86-64 and emulated ones.
You can use components from the Yocto Project to design, develop, build, debug, simulate,
and test the complete software stack using Linux, the X Window System, GNOME Mobile-based
application frameworks, and Qt frameworks.
</para>
<para></para>
<para></para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/yocto-environment.png"
format="PNG" align='center' scalefit='1' width="100%"/>
</imageobject>
<caption>
<para>The Yocto Project Development Environment</para>
</caption>
</mediaobject>
<para>
Yocto Project:
</para>
<itemizedlist>
<listitem>
<para>Provides a recent Linux kernel along with a set of system commands and libraries suitable for the embedded environment.</para>
</listitem>
<listitem>
<para>Makes available system components such as X11, Matchbox, GTK+, Pimlico, Clutter,
GUPnP, and Qt (among others) so you can create a richer user interface experience on
devices that use displays or have a GUI.
For devices that don't have a GUI or display you simply would not employ these
components.</para>
</listitem>
<listitem>
<para>Creates a focused and stable core compatible with the OpenEmbedded
project with which you can easily and reliably build and develop.</para>
</listitem>
<listitem>
<para>Fully supports a wide range of hardware and device emulation through the QEMU
Emulator.</para>
</listitem>
</itemizedlist>
<para>
Yocto Project can generate images for many kinds of devices.
However, the standard example machines target QEMU full system emulation for x86, ARM, MIPS,
and PPC based architectures as well as specific hardware such as the Intel Desktop Board
DH55TC.
Because an image developed with Yocto Project can boot inside a QEMU emulator, the
development environment works nicely as a test platform for developing embedded software.
</para>
<para>
Another important Yocto Project feature is the Sato reference User Interface.
This optional GNOME Mobile-based UI, which is intended for devices with
restricted screen sizes, sits neatly on top of a device using the
GNOME Mobile Stack and provides a well-defined user experience.
Implemented in its own layer, it makes it clear to developers how they can implement
their own UIs on top of Yocto Linux.
</para>
</section>
<section id='resources'>
<title>What You Need and How You Get It</title>
<para>
You need these things to develop in the Yocto Project environment:
</para>
<itemizedlist>
<listitem>
<para>A host system running a supported Linux distribution (e.g. recent releases of
Fedora, OpenSUSE, Debian, and Ubuntu).</para>
</listitem>
<listitem>
<para>The right packages.</para>
</listitem>
<listitem>
<para>A release of Yocto Project.</para>
</listitem>
</itemizedlist>
<section id='the-linux-distro'>
<title>The Linux Distribution</title>
<para>
This document assumes you are running a reasonably current Linux-based host system.
The examples work for both Debian-based and RPM-based distributions.
</para>
</section>
<section id='packages'>
<title>The Packages</title>
<para>
The packages you need for a Debian-based host are shown in the following command:
</para>
<literallayout class='monospaced'>
$ sudo apt-get install sed wget cvs subversion git-core coreutils \
unzip texi2html texinfo libsdl1.2-dev docbook-utils gawk \
python-pysqlite2 diffstat help2man make gcc build-essential \
g++ desktop-file-utils chrpath libgl1-mesa-dev libglu1-mesa-dev \
mercurial
</literallayout>
<para>
The packages you need for an RPM-based host like Fedora are shown in these commands:
</para>
<literallayout class='monospaced'>
$ sudo yum groupinstall "development tools"
$ sudo yum install python m4 make wget curl ftp hg tar bzip2 gzip \
unzip python-psyco perl texinfo texi2html diffstat openjade \
docbook-style-dsssl sed docbook-style-xsl docbook-dtds \
docbook-utils sed bc glibc-devel ccache pcre pcre-devel quilt \
groff linuxdoc-tools patch linuxdoc-tools cmake help2man \
perl-ExtUtils-MakeMaker tcl-devel gettext chrpath ncurses apr \
SDL-devel mesa-libGL-devel mesa-libGLU-devel
</literallayout>
<para>
<emphasis>NOTE:</emphasis> Packages vary in number and name for other Linux distributions.
The commands here should work; we are interested, though, to learn what works for you.
You can find more information on package requirements for common Linux distributions
at <ulink url="http://wiki.openembedded.net/index.php/OEandYourDistro"></ulink>.
However, be careful when using this information: some of it applies to older
Linux distributions that are known not to work with a current Poky install.
</para>
</section>
<section id='releases'>
<title>Yocto Project Release</title>
<para>
The latest release images for the Yocto Project are kept at
<ulink url="http://yoctoproject.org/downloads/yoctolinux-0.9/"></ulink>.
Nightly and developmental builds are also maintained. However, for this
document a released version of Yocto Project is used.
</para>
</section>
</section>
<section id='test-run'>
<title>A Quick Test Run</title>
<para>
Now that you have your system requirements in order you can give Yocto Project a try.
This section presents some steps that let you do the following:
</para>
<itemizedlist>
<listitem>
<para>Build an image and run it in the emulator</para>
</listitem>
<listitem>
<para>Or, use a pre-built image and run it in the emulator</para>
</listitem>
</itemizedlist>
<section id='building-image'>
<title>Building an Image</title>
<para>
In the development environment you will need to build an image whenever you change hardware support, add or change system libraries, or add or change services that have dependencies.
</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/building-an-image.png" format="PNG" align='center' scalefit='1'/>
</imageobject>
<caption>
<para>Building an Image</para>
</caption>
</mediaobject>
<para>
Use the following commands from a shell on your host to build your image.
The build creates an entire Linux system, including the toolchain, from source.
</para>
<para><emphasis>NOTE:</emphasis> The build process using Sato currently consumes
about 50GB of disk space.
To allow for variations in the build process and for future package expansion, we
recommend having 100GB of free disk space.
</para>
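<para>
If you want to confirm how much space is available before you start, a standard
<command>df</command> invocation works; the path below is a placeholder for
wherever you plan to build:
<literallayout class='monospaced'>
# replace the path with your intended build location
$ df -h /path/to/build/area
</literallayout>
</para>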
<para>
<literallayout class='monospaced'>
$ wget http://www.yoctoproject.org/downloads/poky/poky-laverne-4.0.tar.bz2
$ tar xjf poky-laverne-4.0.tar.bz2
$ source poky-4.0/poky-init-build-env poky-4.0-build
</literallayout>
</para>
<itemizedlist>
<listitem><para>The first two commands download and unpack the Yocto Project files
from the release area into a subdirectory of your current directory
(<command>poky-4.0</command> in this example).</para></listitem>
<listitem><para>The <command>source</command> command creates the build directory and places
you there.
The build directory contains all the object files used during the build.
The default build directory is <command>poky-4.0-build</command>.
Note that you can change the target architecture by editing the
<command>&lt;build_directory&gt;/conf/local.conf</command> file, as shown in the
sketch following this list.
By default the target architecture is qemux86.</para></listitem>
</itemizedlist>
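<para>
For instance, to target emulated ARM instead of the default qemux86, you could set
the <command>MACHINE</command> variable in <command>conf/local.conf</command>.
This is a minimal sketch; check the comments in your generated
<command>local.conf</command> for the exact supported values:
<literallayout class='monospaced'>
# Select the target machine; qemux86 is the default, qemuarm targets emulated ARM
MACHINE ?= "qemuarm"
</literallayout>
</para>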
<para>
Now might be a good time to edit the <command>conf/local.conf</command>
file.
The defaults should all be fine. However, you might want to set the
BB_NUMBER_THREADS and PARALLEL_MAKE variables, which control build
parallelism and can speed up the build considerably; see the sketch below.
By default, these variables are commented out.
</para>
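<para>
For example, on a host with four cores you might uncomment and set them as
follows (a suggestion only; tune the values to your machine):
<literallayout class='monospaced'>
# Run up to four BitBake tasks in parallel
BB_NUMBER_THREADS = "4"
# Pass -j 4 to make during compiles
PARALLEL_MAKE = "-j 4"
</literallayout>
</para>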
<para>
Continue with the following command to build the OS image for the target, which is
poky-image-sato in this example.
<literallayout class='monospaced'>
$ bitbake poky-image-sato
</literallayout>
<emphasis>NOTE:</emphasis> If you are running Fedora 14 or another distribution
with GNU make 3.82 you might have to run the following two
<command>bitbake</command> commands instead:
<literallayout class='monospaced'>
$ bitbake make-native
$ bitbake poky-image-sato
</literallayout>
Once the build completes, the following command boots the image:
<literallayout class='monospaced'>
$ poky-qemu qemux86
</literallayout>
The build process could take several hours the first time you run it,
depending on the number of processors and cores, the amount of RAM, the speed of
your Internet connection, and other factors.
After the initial build, subsequent builds run much faster.
</para>
</section>
<section id='using-pre-built'>
<title>Using Pre-Built Binaries and QEMU</title>
<para>
If hardware, libraries and services are stable, you can use pre-built binaries of the image, kernel and toolchain and simply run them on the target using the QEMU emulator.
This scenario is perfect for developing application software.
</para>
<para></para>
<para></para>
<para></para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/using-a-pre-built-image.png" format="PNG" align='center' scalefit='1'/>
</imageobject>
<caption>
<para>Using a Pre-Built Image</para>
</caption>
</mediaobject>
<para>
For this scenario you need to do three things:
</para>
<itemizedlist>
<listitem>
<para>
Install the standalone Yocto toolchain tarball
</para>
</listitem>
<listitem>
<para>
Download the pre-built kernel that will run on QEMU.
You need to be sure to get the QEMU image that matches your target machine's architecture (e.g. x86 or ARM).
</para>
</listitem>
<listitem>
<para>
Download and decompress the filesystem image.
</para>
</listitem>
</itemizedlist>
<para>
You can download the pre-built toolchain, which includes the poky-qemu script and support files, from <ulink url='http://yoctoproject.org/downloads/yoctolinux-0.9/toolchain/'></ulink>.
Toolchains are available for i586 (32-bit) and x86_64 (64-bit) host machines, targeting each of the five supported target architectures.
The tarballs are self-contained and install into /opt/poky.
Use these commands to install the toolchain tarball (using a 64-bit host and a 32-bit i586 target as an example):
</para>
<para>
<literallayout class='monospaced'>
$ cd /
$ sudo tar -xvjf yoctolinux-eglibc-x86_64-i586-toolchain-sdk-0.9.tar.bz2
</literallayout>
</para>
<para>
You can download the pre-built Linux kernel and the filesystem image from <ulink url='http://yoctoproject.org/downloads/yoctolinux-0.9/'></ulink>.
The kernel and filesystem image have the following forms, respectively:
</para>
<literallayout class='monospaced'>
*zImage*qemu*.bin
poky-image-*-qemu*.ext2.bz2
</literallayout>
<para>
You must decompress the filesystem image using the following command:
</para>
<literallayout class='monospaced'>
$ bzip2 -d poky-image-*-qemu*.ext2.bz2
</literallayout>
<para>
You can now start the emulator using these commands (assuming a 32-bit i586 target):
</para>
<literallayout class='monospaced'>
$ source /opt/poky/environment-setup-i586-poky-linux
$ poky-qemu &lt;<emphasis>kernel</emphasis>&gt; &lt;<emphasis>image</emphasis>&gt;
</literallayout>
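<para>
As a concrete illustration, an invocation might look like the following.
The file names here are hypothetical; substitute the actual kernel and
filesystem image you downloaded (they follow the patterns shown earlier):
<literallayout class='monospaced'>
# hypothetical file names; use the files you actually downloaded
$ poky-qemu bzImage-qemux86.bin poky-image-sato-qemux86.ext2
</literallayout>
</para>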
</section>
</section>
</article>
<!--
vim: expandtab tw=80 ts=4
-->

handbook/ChangeLog (new file)

@@ -0,0 +1,38 @@
2008-02-29  Matthew Allum  <mallum@openedhand.com>

	* development.xml:
	Disable images too big / lack context for now.
	* introduction.xml:
	Remove some OH specific stuff.
	* style.css:
	Remove limit on image size

2008-02-15  Matthew Allum  <mallum@openedhand.com>

	* introduction.xml:
	Minor tweaks to 'What is Poky'

2008-02-15  Matthew Allum  <mallum@openedhand.com>

	* poky-handbook.xml:
	* poky-handbook.png
	* poky-beaver.png
	* poky-logo.svg:
	* style.css:
	Add some title images.

2008-02-14  Matthew Allum  <mallum@openedhand.com>

	* development.xml:
	remove uri's
	* style.css:
	Fix glossary

2008-02-06  Matthew Allum  <mallum@openedhand.com>

	* Makefile:
	Add various xslto options for html.
	* introduction.xml:
	Remove link in title.
	* style.css:
	Add initial version

handbook/Makefile (new file)

@@ -0,0 +1,38 @@
all: html pdf tarball

pdf:
	./poky-doc-tools/poky-docbook-to-pdf poky-handbook.xml
	./poky-doc-tools/poky-docbook-to-pdf bsp-guide.xml
# -- old way --
#	dblatex poky-handbook.xml

XSLTOPTS = --stringparam html.stylesheet style.css \
           --stringparam chapter.autolabel 1 \
           --stringparam appendix.autolabel 1 \
           --stringparam section.autolabel 1 \
           --xinclude

##
# These URIs should be rewritten by your distribution's xml catalog to
# match your locally installed XSL stylesheets.
XSL_BASE_URI  = http://docbook.sourceforge.net/release/xsl/current
XSL_XHTML_URI = $(XSL_BASE_URI)/xhtml/docbook.xsl

html:
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
	xsltproc $(XSLTOPTS) -o poky-handbook.html $(XSL_XHTML_URI) poky-handbook.xml
	xsltproc $(XSLTOPTS) -o bsp-guide.html $(XSL_XHTML_URI) bsp-guide.xml
# -- old way --
#	xmlto xhtml-nochunks poky-handbook.xml

tarball: html
	tar -cvzf poky-handbook.tgz poky-handbook.html style.css screenshots/ss-sato.png poky-beaver.png poky-handbook.png

validate:
	xmllint --postvalid --xinclude --noout poky-handbook.xml

OUTPUTS = poky-handbook.tgz poky-handbook.html poky-handbook.pdf bsp-guide.pdf
SOURCES = *.png *.xml *.css *.svg

publish:
	scp -r $(OUTPUTS) $(SOURCES) o-hand.com:/srv/www/pokylinux.org/doc/


@@ -9,7 +9,7 @@
<mediaobject>
<imageobject>
<imagedata fileref='poky-ref-manual.png'
<imagedata fileref='common/poky-handbook.png'
format='SVG'
align='center' scalefit='1' width='100%'/>
</imageobject>
@@ -37,7 +37,7 @@
<copyright>
<year>2010</year>
<holder>Linux Foundation</holder>
<holder>Intel Corporation</holder>
</copyright>
<legalnotice>

Some files were not shown because too many files have changed in this diff.