Compare commits

72 Commits

Author SHA1 Message Date
Beth Flanagan
45526f5ecf Set release to correct version: poky.conf
Correcting release version in poky.conf

Signed-off-by: Beth Flanagan <elizabeth.flanagan@intel.com>
2011-02-04 11:44:57 -08:00
Beth Flanagan
d1fd60f69d Setting Yocto rev number to 4.1: poky.conf
Found just prior to mirror push. Modified poky.conf to use the correct
version.

Signed-off-by: Beth Flanagan <elizabeth.flanagan@intel.com>
2011-02-03 17:43:11 -08:00
Beth Flanagan
7fa2b1c154 Laverne 4.1 release: NOTES and CHANGELOG
Name: Laverne
Version: 4.0.1
Built from Revision: fd7a07b3a2
Build Date: Jan 26 2011
Builder: autobuilder.pokylinux.org

Commit of final release notes and changelog for Laverne 4.1

Signed-off-by: Beth Flanagan <elizabeth.flanagan@intel.com>
2011-02-03 13:24:07 -08:00
Scott Garman
fd7a07b3a2 poky-extract-sdk: allow relative paths for extract-dir
pseudo needs a full path to its pid file, so convert
relative extract-dir paths to full ones.

The symptom of this bug is receiving the following error:

pseudo: Couldn't open relative/path/to/var/pseudo/pseudo.pid: No such file or directory

This fixes [BUGID #670]

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
2011-01-25 09:31:49 +00:00
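
The fix boils down to normalizing the extract directory to an absolute path
before pseudo ever sees it. A minimal shell sketch of the idea, assuming the
script keeps the argument in a variable named EXTRACT_DIR (the variable name
is illustrative, not necessarily the script's own):

    # Illustrative only: make a possibly-relative extract dir absolute so
    # pseudo can open its pid file beneath it.
    case "$EXTRACT_DIR" in
        /*) ;;                                   # already absolute
        *)  EXTRACT_DIR="$(pwd)/$EXTRACT_DIR" ;; # prefix the current directory
    esac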
Beth Flanagan
01bc47f4d4 quilt: Fixed configure test for patch --version.
OpenSuSE 11.3 uses GNU patch 2.6.1.81-5b68 which breaks quilt's
configure test for patch version.

Signed-off-by: Beth Flanagan <elizabeth.flanagan@intel.com>
2011-01-14 12:08:54 +00:00
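
For context, the breakage comes from the unusually long, suffixed version
string that OpenSuSE ships. Inspecting what the configure test has to parse
is a one-liner (output reconstructed from the version quoted above; the exact
format may vary):

    $ patch --version | head -n1
    GNU patch 2.6.1.81-5b68

A test written against a plain x.y.z pattern will choke on the extra
.81-5b68 components, which is presumably what the corrected check tolerates.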
Richard Purdie
12a3d41a24 image.bbclass: Use the dedicated BB_WORKERCONTEXT, not bitbake internals to detect context
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-01-14 12:08:45 +00:00
Richard Purdie
ce4f835679 scripts/poky-qemu: Improve tmp layout assumption
If someone has changed TMPDIR in local.conf to a non-standard location, the
poky-qemu script currently doesn't handle this and assumes if BUILDDIR is set,
$BUILDDIR/tmp will exist.

It's simple to check whether this directory exists and, if not, to ask bitbake
where it is, so this patch changes the code to do that.

Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2011-01-14 12:07:46 +00:00
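
The described fallback is straightforward in shell. A sketch of the approach,
recovering the real TMPDIR from bitbake's environment dump (the sed pattern
is illustrative; bitbake -e prints variables in NAME="value" form):

    # Trust $BUILDDIR/tmp when it exists, otherwise ask bitbake.
    if [ -d "$BUILDDIR/tmp" ]; then
        TMPDIR="$BUILDDIR/tmp"
    else
        TMPDIR=$(bitbake -e | sed -n 's/^TMPDIR="\(.*\)"$/\1/p')
    fi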
Scott Garman
54f08d23cd Make poky-qemu and related scripts work with arbitrary SDK locations
* No longer assume SDK toolchains are installed in /opt/poky
* [BUGFIX #568] where specifying paths to both the kernel and fs
  image caused an error due to POKY_NATIVE_SYSROOT never being
  set, triggering failure of poky-qemu-ifup/ifdown
* Cosmetic improvements to usage() functions by using basename

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
2011-01-14 12:07:29 +00:00
Scott Garman
8a3d0f375c poky-qemu: Fix issues when running Yocto 0.9 release images
This fixes two bugs with poky-qemu when it is run from a
standalone meta-toolchain setup.

[BUGFIX #535] and [BUGFIX #536]

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
2011-01-14 12:07:11 +00:00
Paul Eggleton
0c2003f134 openssl: restore -Wall flag
The -Wall flag was unintentionally removed from the end of the CFLAG var in
089612794d by me. This patch puts it back in.

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-12-16 15:44:15 +00:00
Joshua Lock
6e71b0a012 web-webkit: fix for make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-12-15 14:31:21 +00:00
Joshua Lock
4b5c1c0530 contacts: fix for make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-12-15 14:10:46 +00:00
Joshua Lock
171e709ae6 dates: fix for Make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-12-15 14:10:46 +00:00
Joshua Lock
a8b8557e4c owl-video-widget: fix Makefile for super strict make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-12-15 12:35:41 +00:00
Joshua Lock
399e6b8008 libowl-av: fix for Make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-12-14 18:58:21 +00:00
Joshua Lock
290280b332 gst-plugins: fix for make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-12-14 17:56:53 +00:00
Joshua Lock
9e11fbf904 gstreamer: fix to comply with make 3.82's stricter parser
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-12-14 15:39:42 +00:00
Joshua Lock
0f8244faba linux-libc-headers: fix for Make 3.82
Fix the kernel Makefile for use with Make 3.82 by splitting mixed implicit and
normal rules into separate rules.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-12-14 12:49:13 +00:00
Joshua Lock
0cc23a8656 busybox: additional fixes for Make 3.82
There were still some mixed implicit and normal rules in the Busybox Makefile.
Update our existing make-382.patch to split these into separate rules.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-12-14 12:23:24 +00:00
Joshua Lock
30c39cc97c procps: fix for build against make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-12-10 17:32:33 +00:00
Joshua Lock
261ca88596 busybox: import upstream patch for make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-12-10 16:52:32 +00:00
Joshua Lock
72ddd5c202 eglibc: fix build of eglibc-initial for make 3.82
Make 3.82, as shipped with Fedora 14, fixes some holes in the parser, which in
turn breaks the behaviour of some Makefiles, most notably eglibc's.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-12-10 16:50:39 +00:00
Paul Eggleton
6026999e81 qemu: fix failure to find zlib header files during configure
Corrects problems during configure of qemu-native due to the BUILD_CFLAGS
not being included when attempting to compile the test program for zlib
within the configure script.

Signed-off-by: Paul Eggleton <paul.eggleton@intel.com>
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-12-10 14:56:36 +00:00
Paul Eggleton
c5ab4d56f9 openssl-native: disable execstack flag to prevent problems with SELinux
The execstack flag gets set on libcrypto.so by default which causes SELinux
to prevent it from being loaded on systems using SELinux, which includes
Fedora. This patch disables the execstack flag. (Note: Red Hat do this in
their openssl packaging.)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
2010-12-10 11:37:14 +00:00
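
To check whether a given build is affected, the execstack utility (shipped
with prelink on most distributions) can query and clear the flag by hand;
this is a diagnostic illustration, not the mechanism the recipe change uses:

    # An 'X' in the first column means the executable-stack flag is set.
    $ execstack -q ./libcrypto.so
    X ./libcrypto.so
    # Clear it manually; the recipe fix achieves the same at build time.
    $ execstack -c ./libcrypto.so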
Joshua Lock
b9d6950732 documentation/bsp: update to reference FILESEXTRAPATHS
It's no longer necessary to define THISDIR and FILESPATH in each bbappend
recipe. Should you need to reference extra files, use FILESEXTRAPATHS.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2010-10-24 01:02:40 -07:00
Richard Purdie
95b64df744 documentation: Update copyright to the Linux Foundation
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-24 01:02:39 -07:00
Richard Purdie
941855f0c0 documentation/yocto-qs: Fix references to a poky-qemu package and replace with the yocto toolchain tarball
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-24 01:02:39 -07:00
Scott Rifenbark
17641a7ead Removed text from section 5.1.2.1.1.
Removed several blocks of text from section 5.1.2.1.1,
"Installing and Setting up the Eclipse IDE". This text, according
to Jessica, was no longer needed.
2010-10-24 01:02:39 -07:00
Scott Rifenbark
c6e842d9f5 Corrected the package command for Debian-based hosts.
Corrected a typo listing the package libsdl1.2-dev as libsdll.2-dev.
Also added the package mercurial.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-24 01:02:39 -07:00
Scott Rifenbark
372186ff62 Added package installation requirements.
Added commands to support package installation of RPM-based host systems
to the example.  Input based on feedback from Dirk.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-24 01:02:39 -07:00
Scott Rifenbark
c718ef5b60 Re-inserted Poky Image as part of the front matter.
I have inserted the Poky image in the front matter again because the
book is a Poky Guide.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-24 01:02:39 -07:00
Richard Purdie
e160e68136 documentation/poky-ref-manual: Fix image makefile to reference the image
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-24 01:02:39 -07:00
Scott Rifenbark
c42340603f Moved the Poky image file to the "Figures" folder.
The image file was in the same directory as the main reference manual
files.  So I moved the file into subdirectory "Figures" with other
figures.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-24 01:02:39 -07:00
Scott Rifenbark
1a0cf646cf Re-installed the Poky Handbook image at the top of the manual
I could not get the Yocto Project logo to appear correctly in the book
after the title. I also decided that, since Poky is by no means
going away, this book should have that image associated with it,
as it is the Poky Reference Manual.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-24 01:02:39 -07:00
Scott Rifenbark
3ed9ba33a3 Updated the yocto-environment picture and added example command edits.
When scaled to fit the page, the picture had a black vertical line
artifact to the right. I snipped the image a little tighter to
eliminate this line.

I also incorporated Dirk's comments tightening up the sequence of
example commands to do the build. I incorporated the Fedora 14 note
and the addition of the BB_NUMBER_THREADS and PARALLEL_MAKE variables.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-24 01:02:38 -07:00
Richard Purdie
88ff0a4470 rm_work.bbclass: Handle case where pseudo directory doesn't exist
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-22 17:05:29 -07:00
Richard Purdie
7b6db199fa bitbake/fetch: When fetchers return errors, ensure any partial download is cleared
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-22 17:05:14 -07:00
Richard Purdie
a28bc0d0b6 package_deb: The packaging command itself is run under fakeroot so these lines are totally unneeded
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-22 15:26:54 -07:00
Richard Purdie
47cfaec4ea classes: Only enable fakeroot on setscene tasks with packaging
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-22 11:14:13 -07:00
Richard Purdie
59df74164d bitbake/fetch: Make URL checking slightly less verbose (distracting with the sstate code)
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-22 11:13:59 -07:00
Richard Purdie
b5419d931e sstate: Fix mirror handling for file:// urls
The fetcher has special handling for file:// mirror urls, being efficient and
just providing an updated path. Unfortunately the sstate fetching code wasn't
able to handle this. This patch detects this and injects a symlink to ensure
everything works. It also corrects some datastore references and
creates the sstate download directory if it doesn't already exist.

Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-22 11:13:51 -07:00
Richard Purdie
1ddda942bd pseudo/fakeroot: Move the pseudo directory creation into bitbake
If sstate was used to accelerate a build, the pseudo directory might not have
been created leading to subsequent task failures.

Also, sstate packages were not being installed under pseudo context meaning
file permissions could have been lost.

Fix these problems by creating a FAKEROOTDIRS variable which bitbake ensures
exists before running tasks and running the appropriate setscene tasks under
fakeroot context.

Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-22 11:13:40 -07:00
Richard Purdie
c115f07678 package_deb: Fix a typo meaning the debian packaging was not running in the fakeroot environment
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-22 11:13:30 -07:00
Richard Purdie
73be41e475 package_rpm: Don't check for the existence of dvar as it's never used
If an sstate package exists for the package task but not for the rpm packaging
task, the output from the package task will be used. The directory pointed
to by dvar will not exist under this scenario.

Since the directory is never used by the packaging process, remove the
check, substituting the pkgd variable, which is always present and used.

Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-22 11:13:22 -07:00
Richard Purdie
b96412845d base.bbclass: Ensure an empty do_build task exists to silence a warning
The message "WARNING: Function do_build doesn't exist" doesn't look professional,
so fix the underlying problem even if this warning is harmless.

Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-22 11:13:12 -07:00
Richard Purdie
f0c88f220e sstate: Fix broken plaindirs support
When installing a sstate package, directories tracked by plaindirs were being installed
to the incorrect location. With the current implementation this was limited to
the do_package task.

This patch ensures plaindirs-tracked files are created in the correct location, fixing
the bug where these files would go missing.

Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-22 11:12:55 -07:00
Dexuan Cui
1a3140eaf6 libtheora: add DEPENDS on libogg
This fixes the following build failure:

 checking for oggpackB_read... no
| configure: error: newer libogg version (1.1 or later) required

Signed-off-by: Dexuan Cui <dexuan.cui@intel.com>
2010-10-22 11:12:40 -07:00
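
The fix itself is a one-line build-time dependency, guaranteeing that libogg
is staged into the sysroot before libtheora's configure check for
oggpackB_read runs. An illustrative recipe fragment (file path hypothetical):

    # libtheora_*.bb: make libogg available at configure time
    DEPENDS = "libogg"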
Richard Purdie
e34401720d base/sstate: Add cleanall task to remove downloads and sstate cached files
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-22 11:12:28 -07:00
Richard Purdie
72aadcf274 local.conf.sample: Default to not building 32 bit libs on 64 bit systems as most people don't need it and it confuses them
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-22 11:12:19 -07:00
Richard Purdie
411910bb98 metadata_scm: Ensure that if an SCM isn't present, we don't print a revision of 'fatal:' as it looks bad
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-22 11:12:09 -07:00
Richard Purdie
23bae7e299 documentation/yocto-project-qs: Fix Yocto Environment graphic
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-21 22:26:55 +01:00
Scott Rifenbark
13642c439c Removed first link to openembedded and replaced with more general text.
The link to openembedded was used to reference Linux distributions supporting
Yocto Project.  The link has been removed and replaced with more generic
text so as to not have to link to openembedded.  Text used is
"A Host system running a supported Linux distribution (i.e. recent releases
of Fedora, OpenSUSE, Debian, and Ubuntu)."

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-21 14:10:47 -07:00
Scott Rifenbark
784f9b3369 Updated the first figure in the quick start.
This conceptual figure has been replaced by a more detailed work
flow representing YP.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-21 14:01:34 -07:00
Scott Rifenbark
0826752c04 Corrected link to the yocto website.
This link was incorrect and has been changed to yoctoproject.com.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-21 13:40:11 -07:00
Scott Rifenbark
a168d77ea9 Updated supporting text to reflect new poky-4.0-build directory in example
The example commands that build an image were updated to reflect the
real 4.0 release.  I updated the paragraph after the example commands
to refer to the new release used in the command examples.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-21 13:11:46 -07:00
Richard Purdie
297c60afd8 documentation-project-qs: Remove stray ]
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-21 20:56:02 +01:00
Richard Purdie
0f49a2c359 documentation/qs-guide: Fix urls for release
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-21 20:56:02 +01:00
Saul Wold
37eec34886 Yocto Project 0.9 - Poky Laverne 4.0 Release
Signed-off-by: Saul Wold <Saul.Wold@intel.com>
2010-10-21 20:54:57 +01:00
Scott Rifenbark
2e793fe2bf Edits as described below:
1) Wording change based on Darren's input of making Linux kernel sound like the only open source part of YL

2) Removal of the "v" option for the tar command example.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-21 12:16:42 -07:00
Bruce Ashfield
2bf160150f mpc8315e-rdb: align PACKAGE_EXTRA_ARCHS with tuning
Fixes [BUGID #500]

While the tuning for the mpc8315e is 603e, the PACKAGE_EXTRA_ARCHS
was ppce300. This created a mismatch and resulted in rootfs assembly
issues due to missing locales.

We align both at 603e and can revisit a better tuning in the future.

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
2010-10-21 15:27:39 +01:00
Scott Rifenbark
c3a5bee36f Review changes applied.
1. Added Richard Purdie's general editing feedback to the "Welcome" and
"Introducing the Yocto Project Development Environment" sections.

2. Added Kevin Tian's feedback: 1) changed "Sudo" to "sudo", 2) reversed
the order of the sample "cd" and "source" commands, since "source" builds
the directory structure first and changing into the directory beforehand
made no sense, 3) removed the "bitbake qemu-native" command.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-21 15:25:55 +01:00
Scott Rifenbark
4a5a659674 Updated figure.
Feedback from Kevin Tian suggested that the outer box be labeled "QEMU" rather
than "Target."  Also that the two inner boxes be "Set of Emulated Devices" and
"Target CPU."  Final change was the use of "Yocto Project Scripts" rather than
"Yocto Linux Scripts."

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-21 15:25:55 +01:00
Scott Rifenbark
ce18dde858 Updated this figure to correctly capitalize opkg, zypper, and apt-get
Feedback from Kevin Tian suggested "OPKG" should be lower-case.
Also, use of "zypper" instead of "YUM."  I also lower-cased
"apt-get."

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-21 15:25:55 +01:00
Scott Rifenbark
e51e4870f0 Removed reference to pre-built UI in section 5.3.2, "Using OprofileUI".
The "Using OprofileUI" section had a description of how to use a
pre-built UI and how to download and build one.  Feedback from Jessica
Zhang suggested removing the instruction for using a pre-built UI.
All that remains in the first paragraph now is instruction on how
to download and build the UI.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-21 15:25:55 +01:00
Scott Rifenbark
440a4cfffb Added the Anjuta Plug-in information
Added section 5.1.2.2 "The Anjuta Plug-in" into the Poky Reference Manual.
This section consists of sub-sections 5.1.2.2.1 "Setting Up the Anjuta
Plug-in", 5.1.2.2.2 "Configuring the Anjuta Plug-in", and 5.1.2.2.3 "Using
the Anjuta Plug-in". This information was in the original Poky Handbook
but had been removed by me since I thought it was not going to be supported
for the 0.9 Yocto Release. It has now been restored with a note indicating
that Anjuta will not be supported post 0.9 release.

I did some general text editing in each section for readability.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2010-10-21 15:25:55 +01:00
Richard Purdie
779288f438 documentation/poky-ref-manual: Update with Yocto branding
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-21 15:25:55 +01:00
Richard Purdie
c3fa1a6677 documentation: Add Yocto quickstart guide
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-21 15:25:55 +01:00
Richard Purdie
e8ea66f5ff documentation/poky-ref-manual: Update packages references to recipes and make sure bbappend files are included in example BBFILES lines
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-20 10:12:25 -07:00
Richard Purdie
476242adc4 bitbake/fetch/git: Ensure fullclone repositories are fully fetched
The git fetcher was failing to pull in new branches into a git
repository mirror tarball as the git fetch command being used didn't
add new remote branches.

This patch uses "git fetch --all" for fullclones to ensure any
new remote branches are cloned correctly.

Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-20 10:10:51 -07:00
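
The distinction is plain git behaviour: a bare "git fetch" consults only the
default remote's configured refspec, while "--all" iterates over every
remote. Sketched against a mirror repository (path illustrative):

    $ cd downloads/git2/some-mirror.git   # repository behind the mirror tarball
    $ git fetch                           # default remote and refspec only
    $ git fetch --all                     # every remote, so new branches arrive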
Richard Purdie
e9ef9424a3 bitbake/fetcher: Deal with a ton of different bugs
The more we try and patch up the fetcher code, the more things break. The
code blocks in question are practically unreadable and are full of corner
cases where fetching could fail. In summary the issues noticed included:

a) Always fetching strange broken urls from the premirror for "noclone"
   git repositories
b) Creating or rewriting .md5 stamp files inconsistently
c) Always fetching git source mirror tarballs from the premirror even
   if they already exist but the checkout directory does not
d) Passing "None" values to os.access() and os.path.exists() checks under
   certain circumstances
e) Not using fetched git mirror tarballs if they preexist, and always
   trying to fetch them.

This patch rewrites the sections of code in question to be simpler and
more readable, fixing the above problems and most likely other odd
corner cases.

Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-20 10:09:12 -07:00
Richard Purdie
fc9c11de28 bitbake/fetch/git.py: Fix git fetcher to correctly use mirror tarballs
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
2010-10-20 10:09:01 -07:00
Saul Wold
4ae9c0785e poky.conf: Update DISTRO_NAME and DISTRO_VERSION for the Yocto Project Release
Signed-off-by: Saul Wold <Saul.Wold@intel.com>
2010-10-18 11:54:36 -07:00
2894 changed files with 3242329 additions and 112852 deletions

.gitignore

@@ -2,7 +2,6 @@
*.pyo
build/conf/local.conf
build/conf/bblayers.conf
build/downloads
build/tmp/
build/sstate-cache
build/pyshtables.py
@@ -24,14 +23,4 @@ documentation/poky-ref-manual/poky-ref-manual.pdf
documentation/poky-ref-manual/poky-ref-manual.tgz
documentation/poky-ref-manual/bsp-guide.html
documentation/poky-ref-manual/bsp-guide.pdf
documentation/bsp-guide/bsp-guide.html
documentation/bsp-guide/bsp-guide.pdf
documentation/bsp-guide/bsp-guide.tgz
documentation/yocto-project-qs/yocto-project-qs.html
documentation/yocto-project-qs/yocto-project-qs.tgz
documentation/kernel-manual/kernel-manual.html
documentation/kernel-manual/kernel-manual.tgz
documentation/kernel-manual/kernel-manual.pdf

CHANGELOG

@@ -0,0 +1,220 @@
commit fd7a07b3a2153826bedda2ef76b9a33ab2791680
Author: Scott Garman <scott.a.garman@intel.com>
Date: Fri Jan 21 14:15:05 2011 -0800
poky-extract-sdk: allow relative paths for extract-dir
pseudo needs a full path to its pid file, so convert
relative extract-dir paths to full ones.
The symptom of this bug is receiving the following error:
pseudo: Couldn't open relative/path/to/var/pseudo/pseudo.pid: No such file or directory
This fixes [BUGID #670]
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
commit 01bc47f4d47df3276b4b6c2583bcddd834fd5050
Author: Beth Flanagan <elizabeth.flanagan@intel.com>
Date: Wed Nov 3 17:20:00 2010 -0700
quilt: Fixed configure test for patch --version.
OpenSuSE 11.3 uses GNU patch 2.6.1.81-5b68 which breaks quilt's
configure test for patch version.
Signed-off-by: Beth Flanagan <elizabeth.flanagan@intel.com>
commit 12a3d41a24db79ae6c0491defffcf4f4753001cf
Author: Richard Purdie <richard.purdie@linuxfoundation.org>
Date: Fri Jan 14 11:57:18 2011 +0000
image.bbclass: Use the dedicated BB_WORKERCONTEXT, not bitbake internals to detect context
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
commit ce4f8356796bc797d9156ed252a4ed638a2150d5
Author: Richard Purdie <rpurdie@linux.intel.com>
Date: Wed Dec 15 23:22:16 2010 +0000
scripts/poky-qemu: Improve tmp layout assumption
If someone has changed TMPDIR in local.conf to a non-standard location, the
poky-qemu script currently doesn't handle this and assumes if BUILDDIR is set,
$BUILDDIR/tmp will exist.
It's simple to check whether this directory exists and, if not, to ask bitbake
where it is, so this patch changes the code to do that.
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
commit 54f08d23cd7d0de6aec31f4764389ff4dab2990d
Author: Scott Garman <scott.a.garman@intel.com>
Date: Tue Dec 7 20:59:06 2010 -0800
Make poky-qemu and related scripts work with arbitrary SDK locations
* No longer assume SDK toolchains are installed in /opt/poky
* [BUGFIX #568] where specifying paths to both the kernel and fs
image caused an error due to POKY_NATIVE_SYSROOT never being
set, triggering failure of poky-qemu-ifup/ifdown
* Cosmetic improvements to usage() functions by using basename
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
commit 8a3d0f375ce416ada1a5443e4a8e467504001beb
Author: Scott Garman <scott.a.garman@intel.com>
Date: Fri Nov 12 16:31:13 2010 -0800
poky-qemu: Fix issues when running Yocto 0.9 release images
This fixes two bugs with poky-qemu when it is run from a
standalone meta-toolchain setup.
[BUGFIX #535] and [BUGFIX #536]
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
commit 0c2003f13434c77f901a976523478d37d8aadb48
Author: Paul Eggleton <paul.eggleton@linux.intel.com>
Date: Thu Dec 16 10:29:50 2010 +0000
openssl: restore -Wall flag
The -Wall flag was unintentionally removed from the end of the CFLAG var in
089612794d4d8d9c79bd2a4365d6df78371f7f40 by me. This patch puts it back in.
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
commit 6e71b0a012f0676c06b7b4788d932f320fca0b74
Author: Joshua Lock <josh@linux.intel.com>
Date: Wed Dec 15 14:31:21 2010 +0000
web-webkit: fix for make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 4b5c1c053000d297956f08949ffde7454ee33c5d
Author: Joshua Lock <josh@linux.intel.com>
Date: Wed Dec 15 13:42:15 2010 +0000
contacts: fix for make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 171e709ae6f4b1a7640bf393f57aa787648cdc0f
Author: Joshua Lock <josh@linux.intel.com>
Date: Wed Dec 15 12:58:09 2010 +0000
dates: fix for Make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit a8b8557e4cb34b594bb620eb276bcaf7a8e0a8e3
Author: Joshua Lock <josh@linux.intel.com>
Date: Wed Dec 15 12:27:52 2010 +0000
owl-video-widget: fix Makefile for super strict make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 399e6b8008cb0b8cc0b75efd48dd821a6cf5a8a8
Author: Joshua Lock <josh@linux.intel.com>
Date: Tue Dec 14 18:29:43 2010 +0000
libowl-av: fix for Make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 290280b332570ec73301f76765b1c5f2de20a9fd
Author: Joshua Lock <josh@linux.intel.com>
Date: Tue Dec 14 17:56:53 2010 +0000
gst-plugins: fix for make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 9e11fbf9048b17526ca8160d82b69f386595c9a7
Author: Joshua Lock <josh@linux.intel.com>
Date: Tue Dec 14 15:39:42 2010 +0000
gstreamer: fix to comply with make 3.82's stricter parser
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 0f8244faba5c36c0580081c112ea27ce683af99b
Author: Joshua Lock <josh@linux.intel.com>
Date: Tue Dec 14 12:49:13 2010 +0000
linux-libc-headers: fix for Make 3.82
Fix the kernel Makefile for use with Make 3.82 by splitting mixed implicit and
normal rules into separate rules.
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 0cc23a86562d0ce1e236ceb4a56a8f19d400192f
Author: Joshua Lock <josh@linux.intel.com>
Date: Tue Dec 14 12:21:33 2010 +0000
busybox: additional fixes for Make 3.82
There were still some mixed implicit and normal rules in the Busybox Makefile.
Update our existing make-382.patch to split these into separate rules.
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 30c39cc97c384134661300e107d7a81f257f8034
Author: Joshua Lock <josh@linux.intel.com>
Date: Fri Nov 12 16:36:54 2010 +0000
procps: fix for build against make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 261ca885962ba9606bcad4c5415927a79fdd7b96
Author: Joshua Lock <josh@linux.intel.com>
Date: Tue Nov 9 12:18:14 2010 +0000
busybox: import upstream patch for make 3.82
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 72ddd5c20246a5d5b1752b58a61ef75b4c39cc40
Author: Joshua Lock <josh@linux.intel.com>
Date: Tue Nov 9 12:14:28 2010 +0000
eglibc: fix build of eglibc-initial for make 3.82
Make 3.82, as shipped with Fedora 14, fixes some holes in the parser, which in
turn breaks the behaviour of some Makefiles, most notably eglibc's.
Signed-off-by: Joshua Lock <josh@linux.intel.com>
commit 6026999e81042a7f6560f9bce04390865509b235
Author: Paul Eggleton <paul.eggleton@intel.com>
Date: Fri Nov 19 15:03:32 2010 +0000
qemu: fix failure to find zlib header files during configure
Corrects problems during configure of qemu-native due to the BUILD_CFLAGS
not being included when attempting to compile the test program for zlib
within the configure script.
Signed-off-by: Paul Eggleton <paul.eggleton@intel.com>
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
commit c5ab4d56f97a0e45b124d40c9f536541be04c201
Author: Paul Eggleton <paul.eggleton@intel.com>
Date: Wed Nov 17 11:37:47 2010 +0000
openssl-native: disable execstack flag to prevent problems with SELinux
The execstack flag gets set on libcrypto.so by default which causes SELinux
to prevent it from being loaded on systems using SELinux, which includes
Fedora. This patch disables the execstack flag. (Note: Red Hat do this in
their openssl packaging.)
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>

NOTES

@@ -0,0 +1,70 @@
Name: Laverne
Version: 4.0.1
Built from Revision: fd7a07b3a2153826bedda2ef76b9a33ab2791680
Build Date: Jan 26 2011
Builder: autobuilder.pokylinux.org
The Laverne 4.0.1 Release ensures you can use Poky Laverne on systems running
Fedora 14 and openSUSE 11.3, fixes issues with the poky-qemu script, and fixes
several other bugs. For the full changelog for Laverne 4.0.1 please read
CHANGELOG.
Following are descriptions of fixes and known issues.
Fixes
------------------------
* Make 3.82, as shipped with Fedora 14, included parser bug fixes that
resulted in a much stricter parser. As a result, the Makefiles of many of the
software versions shipped with Laverne could not be parsed. The Makefiles
in the following recipes were fixed:
o eglibc
o busybox
o procps
o linux-libc-headers
o gstreamer
o gst-plugins
o libowl-av
o owl-video-widget
o dates
o contacts
o web-webkit
* The ability to build openssl-native on a system that has SELinux enabled
was restored. (We disabled the execstack flag at compile time.)
* A host contamination issue caused by a failure in QEMU to find zlib headers
during configure was fixed. The issue was causing qemu-native to use the
system zlib if it was present. If the system zlib was not present the build
would fail.
* Stability and usability enhancements, which included handling relative
filesystem paths, were made to poky-qemu scripts.
* The run-time remapping of package names when adding extra packages to an
image via the IMAGE_INSTALL mechanism was fixed.
* The configure test in quilt for GNU patch was fixed so that it correctly
detects the version.
Known Issues
------------------------
* The mpc8315e-rdb and routerstationpro machines were untested and not a
part of the official Laverne 4.0 release. These machines are still unusable
for this Laverne 4.0.1 release.
o mpc8315e-rdb will not boot due to a kernel/uboot issue Bug #685
o routerstationpro will not boot (by default) due to incorrect boot
parameters Bug #681
o routerstationpro debug messages related to the ethernet driver print
during boot Bug #679
* Shutdown/poweroff on qemuarm does not cleanly halt the virtual machine.
To work around this issue, use the reboot command. Using this command avoids
a "power-cycle" and instead cleanly shuts down the VM Bug #684
* Two "Connection Manager" icons appear in the Sato UI. This duplication has
been fixed in master. Note that you can use either icon to launch the
connectivity UI. Bug #683
* The on-screen keyboard incorrectly launches in the qemumips machine. This
issue is due to a mis-configured formfactor file Bug #682

@@ -1,428 +1,436 @@
Poky Hardware README
====================
Poky Hardware Reference Guide
=============================
This file gives details about using Poky with different hardware reference
boards and consumer devices. A full list of target machines can be found by
looking in the meta/conf/machine/ directory. If in doubt about using Poky with
your hardware, consult the documentation for your board/device.
boards and consumer devices. A full list of target machines can be found by
looking in the meta/conf/machine/ directory. If in doubt about using Poky with
your hardware, consult the documentation for your board/device. To discuss
support for further hardware reference boards/devices please contact OpenedHand.
Support for additional devices is normally added by creating BSP layers - for
more information please see the Yocto Board Support Package (BSP) Developer's
Guide - documentation source is in documentation/bspguide or download the PDF
from:
http://yoctoproject.org/community/documentation
Support for machines other than QEMU may be moved out to separate BSP layers in
future versions.
QEMU Emulation Targets
======================
QEMU Emulation Images (qemuarm and qemux86)
===========================================
To simplify development Poky supports building images to work with the QEMU
emulator in system emulation mode. Several architectures are currently
supported:
* ARM (qemuarm)
* x86 (qemux86)
* x86-64 (qemux86-64)
* PowerPC (qemuppc)
* MIPS (qemumips)
Use of the QEMU images is covered in the Poky Reference Manual. The Poky
MACHINE setting corresponding to the target is given in brackets.
emulator in system emulation mode. Two architectures are currently supported,
ARM (via qemuarm) and x86 (via qemux86). Use of the QEMU images is covered
in the Poky Handbook.
Hardware Reference Boards
=========================
The following boards are supported by Poky's core layer:
The following boards are supported by Poky:
* Texas Instruments Beagleboard (beagleboard)
* Freescale MPC8315E-RDB (mpc8315e-rdb)
* Ubiquiti Networks RouterStation Pro (routerstationpro)
* Compulab CM-X270 (cm-x270)
* Compulab EM-X270 (em-x270)
* FreeScale iMX31ADS (mx31ads)
* Marvell PXA3xx Zylonite (zylonite)
* Logic iMX31 Lite Kit (mx31litekit)
* Phytec phyCORE-iMX31 (mx31phy)
For more information see the board's section below. The Poky MACHINE setting
For more information see board's section below. The Poky MACHINE setting
corresponding to the board is given in brackets.
Consumer Devices
================
The following consumer devices are supported by Poky's core layer:
The following consumer devices are supported by Poky:
* Intel Atom based PCs and devices (atom-pc)
* FIC Neo1973 GTA01 smartphone (fic-gta01)
* HTC Universal (htcuniversal)
* Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)
* Sharp Zaurus SL-C7x0 series (c7x0)
* Sharp Zaurus SL-C1000 (akita)
* Sharp Zaurus SL-C3x00 series (spitz)
For more information see the device's section below. The Poky MACHINE setting
corresponding to the device is given in brackets.
For more information see board's section below. The Poky MACHINE setting
corresponding to the board is given in brackets.
Poky Boot CD (bootcdx86)
========================
The Poky boot CD iso images are designed as a demonstration of the Poky
environment and to show the versatile image formats Poky can generate. It will
run on Pentium2 or greater PC style computers. The iso image can be
burnt to CD and then booted from.
Specific Hardware Documentation
===============================
Hardware Reference Boards
=========================
Intel Atom based PCs and devices (atom-pc)
==========================================
Compulab CM-X270 (cm-x270)
==========================
The atom-pc MACHINE is tested on the following platforms:
The bootloader on this board doesn't support writing jffs2 images directly to
NAND and normally uses a proprietary kernel flash driver. To allow the use of
jffs2 images, a two stage updating procedure is needed. Firstly, an initramfs
is booted which contains mtd utilities and this is then used to write the main
filesystem.
o Asus eee901
o Acer Aspire One
o Toshiba NB305
o Intel Embedded Development Board 1-N450 (Black Sand)
It is assumed the board is connected to a network where a TFTP server is
available and that a serial terminal is available to communicate with the
bootloader (38400, 8N1). If a DHCP server is available the device will use it
to obtain an IP address. If not, run:
and is likely to work on many unlisted atom based devices. The MACHINE type
supports ethernet, wifi, sound, and i915 graphics by default in addition to
common PC input devices, busses, and so on.
ARMmon > setip dhcp off
ARMmon > setip ip 192.168.1.203
ARMmon > setip mask 255.255.255.0
Depending on the device, it can boot from a traditional hard-disk, a USB device,
or over the network. Writing poky generated images to physical media is
straightforward with a caveat for USB devices. The following examples assume the
target boot device is /dev/sdb, be sure to verify this and use the correct
device as the following commands are run as root and are not reversible.
To reflash the kernel:
Hard Disk:
1. Build a directdisk image format. This will generate proper partition tables
that will in turn be written to the physical media. For example:
ARMmon > download kernel tftp zimage 192.168.1.202
ARMmon > flash kernel
$ bitbake poky-image-minimal-directdisk
2. Use the "dd" utility to write the image to the raw block device. For example:
where zimage is the name of the kernel on the TFTP server and its IP address is
192.168.1.202. The names of the files must be all lowercase.
# dd if=poky-image-minimal-directdisk-atom-pc.hdddirect of=/dev/sdb
To reflash the initrd/initramfs:
USB Device:
1. Build an hddimg image format. This is a simple filesystem without partition
tables and is suitable for USB keys. For example:
ARMmon > download ramdisk tftp diskimage 192.168.1.202
ARMmon > flash ramdisk
$ bitbake poky-image-minimal-live
where diskimage is the name of the initramfs image (a cpio.gz file).
2. Use the "dd" utility to write the image to the raw block device. For
example:
To boot the initramfs:
# dd if=poky-image-minimal-live-atom-pc.hddimg of=/dev/sdb
ARMmon > ramdisk on
ARMmon > bootos "console=ttyS0,38400 rdinit=/sbin/init"
If the device fails to boot with "Boot error" displayed, it is likely the BIOS
cannot understand the physical layout of the disk (or rather it expects a
particular layout and cannot handle anything else). There are two possible
solutions to this problem:
To reflash the main image login to the system as user "root", then run:
1. Change the BIOS USB Device setting to HDD mode. The label will vary by
device, but the idea is to force BIOS to read the Cylinder/Head/Sector
geometry from the device.
# ifconfig eth0 192.168.1.203
# tftp -g -r mainimage 192.168.1.202
# flash_eraseall /dev/mtd1
# nandwrite /dev/mtd1 mainimage
2. Without such an option, the BIOS generally boots the device in USB-ZIP
mode.
which configures the network interface with the IP address 192.168.1.203,
downloads the "mainimage" file from the TFTP server at 192.168.1.202, erases
the flash and then writes the new image to the flash.
a. Configure the USB device for USB-ZIP mode:
# mkdiskimage -4 /dev/sdb 0 63 62
The main image can then be booted with:
Where 63 and 62 are the head and sector count as reported by fdisk.
Remove and reinsert the device to allow the kernel to detect the new
partition layout.
ARMmon > bootos "console=ttyS0,38400 root=/dev/mtdblock1 rootfstype=jffs2"
b. Copy the contents of the poky image to the USB-ZIP mode device:
Note that the initramfs image is built by poky in a slightly different mode to
normal since it uses uclibc. To generate this use a command like:
# mount -o loop poky-image-minimal-live-atom-pc.hddimg /tmp/image
# mount /dev/sdb4 /tmp/usbkey
# cp -rf /tmp/image/* /tmp/usbkey
IMAGE_FSTYPES=cpio.gz MACHINE=cm-x270 POKYLIBC=uclibc bitbake poky-image-minimal-mtdutils
c. Install the syslinux boot loader:
# syslinux /dev/sdb4
Compulab EM-X270 (em-x270)
==========================
Install the boot device in the target board and configure the BIOS to boot
from it.
Fetch the "Linux - kernel and run-time image (Angstrom)" ZIP file from the
Compulab website. Inside the images directory of this ZIP file is another ZIP
file called 'LiveDisk.zip'. Extract this over a cleanly formatted vfat USB flash
drive. Replace the 'em_x270.img' file with the 'updater-em-x270.ext2' file.
For more details on the USB-ZIP scenario, see the syslinux documentation:
http://git.kernel.org/?p=boot/syslinux/syslinux.git;a=blob_plain;f=doc/usbkey.txt;hb=HEAD
Insert this USB disk into the supplied adapter and connect this to the
board. Whilst holding down the suspend button, press the reset button. The
board will now boot off the USB key and into a version of Angstrom. On the
desktop is an icon labelled "Updater". Run this program to launch the updater
that will flash the Poky kernel and rootfs to the board.
Texas Instruments Beagleboard (beagleboard)
===========================================
FreeScale iMX31ADS (mx31ads)
===========================
The Beagleboard is an ARM Cortex-A8 development board with USB, DVI-D, S-Video,
2D/3D accelerated graphics, audio, serial, JTAG, and SD/MMC. The xM adds a
faster CPU, more RAM, an ethernet port, more USB ports, microSD, and removes
the NAND flash. The beagleboard MACHINE is tested on the following platforms:
The correct serial port is the top-most female connector to the right of the
ethernet socket.
o Beagleboard C4
o Beagleboard xM Rev A
For uploading data to RedBoot we are going to use TFTP. In this example we
assume that the TFTP server is on 192.168.9.1 and the board is on 192.168.9.2.
The Beagleboard C4 has NAND, while the xM does not. For the sake of simplicity,
these instructions assume you have erased the NAND on the C4 so its boot
behavior matches that of the xM. To do this, issue the following commands from
the u-boot prompt (note that the unlock may be unnecessary depending on the
version of u-boot installed on your board and only one of the erase commands
will succeed):
To set the IP address, run:
# nand unlock
# nand erase
# nand erase.chip
ip_address -l 192.168.9.2/24 -h 192.168.9.1
To further tailor these instructions for your board, please refer to the
documentation at http://www.beagleboard.org.
To download a kernel called "zimage" from the TFTP server, run:
From a Linux system with access to the image files perform the following steps
as root, replacing mmcblk0* with the SD card device on your machine (such as sdc
if used via a usb card reader):
load -r -b 0x100000 zimage
1. Partition and format an SD card:
# fdisk -lu /dev/mmcblk0
Disk /dev/mmcblk0: 3951 MB, 3951034368 bytes
255 heads, 63 sectors/track, 480 cylinders, total 7716864 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 * 63 144584 72261 c Win95 FAT32 (LBA)
/dev/mmcblk0p2 144585 465884 160650 83 Linux
To write the kernel to flash run:
# mkfs.vfat -F 16 -n "boot" /dev/mmcblk0p1
# mke2fs -j -L "root" /dev/mmcblk0p2
fis create kernel
The following assumes the SD card partition 1 and 2 are mounted at
/media/boot and /media/root respectively. Removing the card and reinserting
it will do just that on most modern Linux desktop environments.
The files referenced below are made available after the build in
build/tmp/deploy/images.
To download a rootfs jffs2 image "rootfs" from the TFTP server, run:
2. Install the boot loaders
# cp MLO-beagleboard /media/boot/MLO
# cp u-boot-beagleboard.bin /media/boot/u-boot.bin
load -r -b 0x100000 rootfs
3. Install the root filesystem
# tar x -C /media/root -f poky-image-$IMAGE_TYPE-beagleboard.tar.bz2
# tar x -C /media/root -f modules-$KERNEL_VERSION-beagleboard.tgz
To write the root filesystem to flash run:
4. Install the kernel uImage
# cp uImage-beagleboard.bin /media/boot/uImage
fis create root
5. Prepare a u-boot script to simplify the boot process
The Beagleboard can be made to boot at this point from the u-boot command
shell. To automate this process, generate a user.scr script as follows.
To load and boot a kernel and rootfs from flash:
Install uboot-mkimage (from uboot-mkimage on Ubuntu or uboot-tools on Fedora).
fis load kernel
exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/mtdblock2 rootfstype=jffs2 init=linuxrc ip=none"
Prepare a script config:
To load and boot a kernel from a TFTP server with the rootfs over NFS:
# (cat << EOF
setenv bootcmd 'mmc init; fatload mmc 0:1 0x80300000 uImage; bootm 0x80300000'
setenv bootargs 'console=tty0 console=ttyO2,115200n8 root=/dev/mmcblk0p2 rootwait rootfstype=ext3 ro'
boot
EOF
) > serial-boot.cmd
# mkimage -A arm -O linux -T script -C none -a 0 -e 0 -n "Core Minimal" -d ./serial-boot.cmd ./boot.scr
# cp boot.scr /media/boot
load -r -b 0x100000 zimage
exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/nfs nfsroot=192.168.9.1:/mnt/nfsmx31 rw ip=192.168.9.2::192.168.9.1:255.255.255.0"
6. Unmount the SD partitions, insert the SD card into the Beagleboard, and
boot the Beagleboard
The instructions above are for using the (default) NOR flash on the board;
there is also 128M of NAND flash. It is possible to install Poky to the NAND
flash which gives more space for the rootfs and instructions for using this are
given below. To switch to the NAND flash:
Note: As of the 2.6.37 linux-yocto kernel recipe, the Beagleboard uses the
OMAP_SERIAL device (ttyO2). If you are using an older kernel, such as the
2.6.34 linux-yocto-stable, be sure to replace ttyO2 with ttyS2 above. You
should also override the machine SERIAL_CONSOLE in your local.conf in
order to set up the getty on the serial line:
factive NAND
SERIAL_CONSOLE_beagleboard = "115200 ttyS2"
This will then restart RedBoot using the NAND rather than the NOR. If you
have not used the NAND before then it is unlikely that there will be a
partition table yet. You can get the list of partitions with 'fis list'.
If this shows no partitions then you can create them with:
Freescale MPC8315E-RDB (mpc8315e-rdb)
=====================================
fis init
The MPC8315 PowerPC reference platform (MPC8315E-RDB) is aimed at hardware and
software development of network attached storage (NAS) and digital media server
applications. The MPC8315E-RDB features the PowerQUICC II Pro processor, which
includes a built-in security accelerator.
The output of 'fis list' should now show:
Setup instructions
------------------
Name FLASH addr Mem addr Length Entry point
RedBoot 0xE0000000 0xE0000000 0x00040000 0x00000000
FIS directory 0xE7FF4000 0xE7FF4000 0x00003000 0x00000000
RedBoot config 0xE7FF7000 0xE7FF7000 0x00001000 0x00000000
You will need the following:
* nfs root setup on your workstation
* tftp server installed on your workstation
Partitions for the kernel and rootfs need to be created:
Load the kernel and boot it as follows:
fis create -l 0x1A0000 -e 0x00100000 kernel
fis create -l 0x5000000 -e 0x00100000 root
1. Get the kernel (uImage.mpc8315erdb) and dtb (mpc8315erdb.dtb) files from
the Poky build tmp/deploy directory, and make them available on your tftp
server.
You may now use the instructions above for flashing. However it is important
to note that the erase block size for the NAND is different to the NOR so the
JFFS erase size will need to be changed to 0x4000. Standard images are built
for NOR and you will need to build custom images for NAND.
2. Set up the environment in U-Boot:
You will also need to update the kernel command line to use the correct root
filesystem. This should be '/dev/mtdblock7' if you adhere to the partitioning
scheme shown above. If this fails then you can double-check against the output
from the kernel when it evaluates the available mtd partitions.
=>setenv ipaddr <board ip>
=>setenv serverip <tftp server ip>
=>setenv bootargs root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:255.255.255.0:mpc8315e:eth0:off console=ttyS0,115200
3. Download kernel and dtb to boot kernel.
Marvell PXA3xx Zylonite (zylonite)
==================================
=>tftp 800000 uImage.mpc8315erdb
=>tftp 780000 mpc8315erdb.dtb
=>bootm 800000 - 780000
These instructions assume the Zylonite is connected to a machine running a TFTP
server at address 192.168.123.5 and that a serial link (38400 8N1) is available
to access the blob bootloader. The kernel is on the TFTP server as
"zylonite-kernel" and the root filesystem jffs2 file is "zylonite-rootfs" and
the images are to be saved in NAND flash.
The following commands setup blob:
Ubiquiti Networks RouterStation Pro (routerstationpro)
======================================================
blob> setip client 192.168.123.4
blob> setip server 192.168.123.5
The RouterStation Pro is an Atheros AR7161 MIPS-based board. Geared towards
networking applications, it has all of the usual features as well as three
type IIIA mini-PCI slots and an on-board 3-port 10/100/1000 Ethernet switch,
in addition to the 10/100/1000 Ethernet WAN port which supports
Power-over-Ethernet.
To flash the kernel:
Setup instructions
------------------
blob> tftp zylonite-kernel
blob> nandwrite -j 0x80800000 0x60000 0x200000
You will need the following:
* A serial cable - female to female (or female to male + gender changer)
NOTE: cable must be straight through, *not* a null modem cable.
* USB flash drive or hard disk that is able to be powered from the
board's USB port.
* tftp server installed on your workstation
To flash the rootfs:
NOTE: in the following instructions it is assumed that /dev/sdb corresponds
to the USB disk when it is plugged into your workstation. If this is not the
case in your setup then please be careful to substitute the correct device
name in all commands where appropriate.
blob> tftp zylonite-rootfs
blob> nanderase -j 0x260000 0x5000000
blob> nandwrite -j 0x80800000 0x260000 <length>
--- Preparation ---
(where <length> is the rootfs size which will be printed by the tftp step)
1) Build an image (e.g. poky-image-minimal) using "routerstationpro" as the
MACHINE
To boot the board:
2) Partition the USB drive so that primary partition 1 is type Linux (83).
Minimum size depends on your root image size - poky-image-minimal probably
only needs 8-16MB, other images will need more.
blob> nkernel
blob> boot
# fdisk /dev/sdb
Command (m for help): p
Disk /dev/sdb: 4011 MB, 4011491328 bytes
124 heads, 62 sectors/track, 1019 cylinders, total 7834944 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009e87d
Logic iMX31 Lite Kit (mx31litekit)
==================================
Device Boot Start End Blocks Id System
/dev/sdb1 62 1952751 976345 83 Linux
The easiest method to boot this board is to take an MMC/SD card and format
the first partition as ext2, then extract the poky image onto this as root.
Assuming the board is network connected, a TFTP server is available at
192.168.1.33 and a serial terminal is available (115200 8N1), the following
commands will boot a kernel called "mx31kern" from the TFTP server:
3) Format partition 1 on the USB as ext3
losh> ifconfig sm0 192.168.1.203 255.255.255.0 192.168.1.33
losh> load raw 0x80100000 0x200000 /tftp/192.168.1.33:mx31kern
losh> exec 0x80100000 -
# mke2fs -j /dev/sdb1
4) Mount partition 1 and then extract the contents of
tmp/deploy/images/poky-image-XXXX.tar.bz2 into it (preserving permissions).
Phytec phyCORE-iMX31 (mx31phy)
==============================
# mount /dev/sdb1 /media/sdb1
# cd /media/sdb1
# tar -xvjpf tmp/deploy/images/poky-image-XXXX.tar.bz2
Support for this board is currently being developed. Experimental jffs2
images and a suitable kernel are available and are known to work with the
board.
5) Unmount the USB drive and then plug it into the board's USB port
6) Connect the board's serial port to your workstation and then start up
your favourite serial terminal so that you will be able to interact with
the serial console. If you don't have a favourite, picocom is suggested:
Consumer Devices
================
$ picocom /dev/ttyUSB0 -b 115200
FIC Neo1973 GTA01 smartphone (fic-gta01)
========================================
7) Connect the network into eth0 (the one that is NOT the 3 port switch). If
you are using power-over-ethernet then the board will power up at this point.
To install Poky on a GTA01 smartphone you will need the "dfu-util" tool,
which you can build with the "bitbake dfu-util-native" command.
8) Start up the board, watch the serial console. Hit Ctrl+C to abort the
autostart if the board is configured that way (it is by default). The
bootloader's fconfig command can be used to disable autostart and configure
the IP settings if you need to change them (default IP is 192.168.1.20).
Flashing requires these steps:
9) Make the kernel (tmp/deploy/images/vmlinux-routerstationpro.bin) available
on the tftp server.
1. Power down the device.
2. Connect the device to the host machine via USB.
3. Hold AUX key and press Power key. There should be a bootmenu
on screen.
4. Run "dfu-util -l" to check if the phone is visible on the USB bus.
The output should look like this:
10) If you are going to write the kernel to flash (optional - see "Booting a
kernel directly" below for the alternative), remove the current kernel and
rootfs flash partitions. You can list the partitions using the following
bootloader command:
dfu-util - (C) 2007 by OpenMoko Inc.
This program is Free Software and has ABSOLUTELY NO WARRANTY
RedBoot> fis list
Found Runtime: [0x1457:0x5119] devnum=19, cfg=0, intf=2, alt=0, name="USB Device Firmware Upgrade"
You can delete the existing kernel and rootfs with these commands:
5. Flash the kernel with "dfu-util -a kernel -D uImage-2.6.21.6-moko11-r2-fic-gta01.bin"
6. Flash rootfs with "dfu-util -a rootfs -D <image>", where <image> is the
jffs2 image file to use as the root filesystem
(e.g. ./tmp/deploy/images/poky-image-sato-fic-gta01.jffs2)
RedBoot> fis delete kernel
RedBoot> fis delete rootfs
--- Booting a kernel directly ---
HTC Universal (htcuniversal)
============================
1) Load the kernel using the following bootloader command:
Note: HTC Universal support is highly experimental.
RedBoot> load -m tftp -h <ip of tftp server> vmlinux-routerstationpro.bin
On the HTC Universal, entirely replacing the Windows installation is not
supported; instead Poky is booted from an MMC/SD card from Windows. Once Poky
has booted, Windows is no longer in memory or active, but when power is removed,
the user will be returned to Windows and will need to return to Linux from
there.
You should see a message on it being successfully loaded.
Once an MMC/SD card is available it is suggested that it be split into two partitions,
one for a program called HaRET which lets you boot Linux from within Windows
and the second for the rootfs. The HaRET partition should be the first partition
on the card and be vfat formatted. It doesn't need to be large, just enough for
HaRET and a kernel (say 5MB max). The rootfs should be ext2 and is usually the
second partition. The first partition should be vfat so Windows recognises it;
if it doesn't, it has been known to reformat cards.
2) Execute the kernel:
On the first partition you need three files:
RedBoot> exec -c "console=ttyS0,115200 root=/dev/sda1 rw rootdelay=2 board=UBNT-RSPRO"
* a HaRET binary (version 0.5.1 works well and a working version
should be part of the last Poky release)
* a kernel renamed to "zImage"
* a default.txt which contains:
Note that specifying the command line with -c is important as linux-yocto does
not provide a default command line.
set kernel "zImage"
set mtype "855"
set cmdline "root=/dev/mmcblk0p2 rw console=ttyS0,115200n8 console=tty0 rootdelay=5 fbcon=rotate:1"
boot2
--- Writing a kernel to flash ---
On the second partition the root file system is extracted as root. A different
partition layout or other kernel options can be changed in the default.txt file.
1) Go to your tftp server and gzip the kernel you want in flash. It should
halve the size.
When inserted into the device, Windows should see the card and let you browse
its contents using File Explorer. Running the HaRET binary will present a dialog
box (maybe after messages warning about running unsigned binaries) where you
select OK and you should then see Poky boot. Kernel messages can be seen by
adding psplash=false to the kernel commandline.
2) Load the kernel using the following bootloader command:
RedBoot> load -r -b 0x80600000 -m tftp -h <ip of tftp server> vmlinux-routerstationpro.bin.gz

Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)
============================================================

Note: Nokia tablet support is highly experimental.

The Nokia internet tablet devices are OMAP based tablet-formfactor devices
with large screens (800x480), wifi and touchscreen.

To flash images to these devices you need the "flasher" utility, which can
be downloaded from http://tablets-dev.nokia.com/d3.php?f=flasher-3.0. This
utility needs to be run as root, and the usb filesystem needs to be mounted,
although most distributions will have done this for you. Once you have it,
follow these steps (steps 6 and 7 are also shown scripted after the list):

 1. Power down the device.
 2. Connect the device to the host machine via USB
    (connecting power to the device doesn't hurt either).
 3. Run "flasher -i".
 4. Power on the device.
 5. The program should give an indication that it has found a tablet
    device. If not, recheck the cables, and make sure you are root and
    usbfs/usbdevfs is mounted.
 6. Run "flasher -r <image> -k <kernel> -f", where <image> is the
    jffs2 image file to use as the root filesystem
    (e.g. ./tmp/deploy/images/poky-image-sato-nokia800.jffs2)
    and <kernel> is the kernel to use
    (e.g. ./tmp/deploy/images/zImage-nokia800.bin).
 7. Run "flasher -R" to reboot the device.
 8. The device should boot into Poky.

The nokia800 images and kernel will run on both the N800 and N810.
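
A minimal Python sketch of steps 6 and 7, useful for repeated reflashing;
flash_tablet is a hypothetical helper and assumes "flasher" is on PATH, you
are root, and the device has already been detected with "flasher -i".

    import subprocess

    def flash_tablet(image, kernel):
        # Write the rootfs and kernel, then reboot the device.
        subprocess.check_call(["flasher", "-r", image, "-k", kernel, "-f"])
        subprocess.check_call(["flasher", "-R"])

    flash_tablet("./tmp/deploy/images/poky-image-sato-nokia800.jffs2",
                 "./tmp/deploy/images/zImage-nokia800.bin")
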

Sharp Zaurus SL-C7x0 series (c7x0)
==================================

The Sharp Zaurus c7x0 series (SL-C700, SL-C750, SL-C760, SL-C860, SL-7500)
are PXA25x based handheld PDAs with VGA screens. To install Poky images on
these devices follow these steps (the copy steps are also shown scripted
after the list):

 1. Obtain an SD/MMC or CF card with a vfat or ext2 filesystem.
 2. Copy a jffs2 image file (e.g. poky-image-sato-c7x0.jffs2) onto the
    card as "initrd.bin":

    $ cp ./tmp/deploy/images/poky-image-sato-c7x0.jffs2 /path/to/my-cf-card/initrd.bin

 3. Copy a Linux kernel file (zImage-c7x0.bin) onto the card as
    "zImage.bin":

    $ cp ./tmp/deploy/images/zImage-c7x0.bin /path/to/my-cf-card/zImage.bin

 4. Copy an updater script (updater.sh.c7x0) onto the card
    as "updater.sh":

    $ cp ./tmp/deploy/images/updater.sh.c7x0 /path/to/my-cf-card/updater.sh

 5. Power down the Zaurus.
 6. Hold the "OK" key and power on the device. An update menu should
    appear (in Japanese).
 7. Choose "Update" (item 4).
 8. The next screen will ask for the source; choose the appropriate
    card (CF or SD).
 9. Make sure AC power is connected.
10. The next screen asks for confirmation; choose "Yes" (the left button).
11. The update process will start, flash the files on the card onto the
    device, and the device will then reboot into Poky.
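
The three copies in steps 2-4 as a Python sketch; the deploy and card paths
are placeholders for your build output and card mount point.

    import shutil

    DEPLOY = "./tmp/deploy/images"
    CARD = "/path/to/my-cf-card"   # mount point of the SD/MMC or CF card

    # (source, destination) pairs from steps 2-4 above.
    FILES = [
        ("poky-image-sato-c7x0.jffs2", "initrd.bin"),
        ("zImage-c7x0.bin", "zImage.bin"),
        ("updater.sh.c7x0", "updater.sh"),
    ]

    for src, dest in FILES:
        shutil.copy("%s/%s" % (DEPLOY, src), "%s/%s" % (CARD, dest))
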
Sharp Zaurus SL-C1000 (akita)
=============================
The Sharp Zaurus SL-C1000 is a PXA270 based device otherwise similar to the
c7x0. To install Poky images on this device follow the instructions for
the c7x0 but replace "c7x0" with "akita" where appropriate.
Sharp Zaurus SL-C3x00 series (spitz)
====================================
The Sharp Zaurus SL-C3x00 devices are PXA270 based devices similar
to akita but with an internal microdrive. The installation procedure
assumes a standard microdrive based device where the root (first)
partition has been enlarged to fit the image (at least 100MB,
400MB for the SDK).
The procedure is the same as for the c7x0 and akita models with the
following differences:
1. Instead of a jffs2 image you need to copy a compressed tarball of the
   root filesystem (e.g. poky-image-sato-spitz.tar.gz) onto the
   card as "hdimage1.tgz":
$ cp ./tmp/deploy/images/poky-image-sato-spitz.tar.gz /path/to/my-cf-card/hdimage1.tgz
2. You additionally need to copy a special tar utility (gnu-tar) onto
the card as "gnu-tar":
$ cp ./tmp/deploy/images/gnu-tar /path/to/my-cf-card/gnu-tar
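
The same two copies as a Python sketch, with placeholder paths as before:

    import shutil

    DEPLOY = "./tmp/deploy/images"
    CARD = "/path/to/my-cf-card"

    # spitz takes a rootfs tarball plus the special gnu-tar binary
    # instead of a jffs2 image.
    shutil.copy("%s/poky-image-sato-spitz.tar.gz" % DEPLOY,
                "%s/hdimage1.tgz" % CARD)
    shutil.copy("%s/gnu-tar" % DEPLOY, "%s/gnu-tar" % CARD)
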

Ubiquiti Networks RouterStation Pro (routerstationpro)
======================================================

10) If you are going to write the kernel to flash (optional - see "Booting a
kernel directly" below for the alternative), remove the current kernel and
rootfs flash partitions. You can list the partitions using the following
bootloader command:

    RedBoot> fis list

You can delete the existing kernel and rootfs with these commands:

    RedBoot> fis delete kernel
    RedBoot> fis delete rootfs

--- Booting a kernel directly ---

1) Load the kernel using the following bootloader command:

    RedBoot> load -m tftp -h <ip of tftp server> vmlinux-routerstationpro.bin

You should see a message confirming it was successfully loaded.

2) Execute the kernel:

    RedBoot> exec -c "console=ttyS0,115200 root=/dev/sda1 rw rootdelay=2 board=UBNT-RSPRO"

Note that specifying the command line with -c is important, as linux-yocto
does not provide a default command line.

--- Writing a kernel to flash ---

1) Go to your tftp server and gzip the kernel you want in flash. It should
roughly halve the size.

2) Load the kernel using the following bootloader command:

    RedBoot> load -r -b 0x80600000 -m tftp -h <ip of tftp server> vmlinux-routerstationpro.bin.gz

This should output something similar to the following:

    Raw file loaded 0x80600000-0x8087c537, assumed entry at 0x80600000

Calculate the length by subtracting the first number from the second number
and then rounding the result up to the nearest 0x1000 (a worked example
appears at the end of this section).

3) Using the length calculated above, create a flash partition for the
kernel:

    RedBoot> fis create -b 0x80600000 -l 0x240000 kernel

(Change 0x240000 to your rounded length, and change "kernel" to whatever you
want to name your kernel.)

--- Booting a kernel from flash ---

To boot the flashed kernel, perform the following steps.

1) At the bootloader prompt, load the kernel:

    RedBoot> fis load -d -e kernel

(Change the name "kernel" above if you chose something different earlier;
-e means 'elf', -d means 'decompress'.)

2) Execute the kernel using the exec command as above.

--- Automating the boot process ---

After writing the kernel to flash and testing the load and exec commands
manually, you can automate the boot process with a boot script.

1) Run the following bootloader command:

    RedBoot> fconfig

(Answer the questions not specified here as they pertain to your
environment.)

2) Enable the boot script:

    Run script at boot: true
    Boot script:
    .. fis load -d -e kernel
    .. exec
    Enter script, terminate with empty line
    >> fis load -d -e kernel
    >> exec -c "console=ttyS0,115200 root=/dev/sda1 rw rootdelay=2 board=UBNT-RSPRO"
    >>

3) Answer the remaining questions and write the changes to flash:

    Update RedBoot non-volatile configuration - continue (y/n)? y
    ... Erase from 0xbfff0000-0xc0000000: .
    ... Program from 0x87ff0000-0x88000000 at 0xbfff0000: .

4) Power cycle the board.
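
The length calculation in step 2 of "Writing a kernel to flash" is easy to
get wrong by hand; here is a minimal Python sketch of it (flash_length is a
hypothetical helper, not part of Poky), using the example output above.

    def flash_length(start, end, align=0x1000):
        # Image length from the two load addresses, rounded up to the
        # nearest 0x1000 for "fis create -l".
        length = end - start
        return (length + align - 1) // align * align

    print(hex(flash_length(0x80600000, 0x8087c537)))   # prints 0x27d000
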

View File

@@ -1,7 +1,7 @@
Tim Ansell <mithro@mithis.net>
Phil Blundell <pb@handhelds.org>
Seb Frankengul <seb@frankengul.org>
Holger Freyther <holger@moiji-mobile.com>
Holger Freyther <zecke@handhelds.org>
Marcin Juszkiewicz <marcin@juszkiewicz.com.pl>
Chris Larson <kergoth@handhelds.org>
Ulrich Luckas <luckas@musoft.de>

View File

@@ -23,18 +23,14 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys, logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),
import sys
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])),
'lib'))
import optparse
import warnings
from traceback import format_exception
try:
import bb
except RuntimeError, exc:
sys.exit(str(exc))
from bb import event
import bb
import bb.msg
from bb import cooker
from bb import ui
@@ -43,9 +39,12 @@ from bb.server import none
#from bb.server import xmlrpc
__version__ = "1.11.0"
logger = logging.getLogger("BitBake")
#============================================================================#
# BBOptions
#============================================================================#
class BBConfiguration(object):
"""
Manages build options and configurations for one run
@@ -57,44 +56,34 @@ class BBConfiguration(object):
self.pkgs_to_build = []
def get_ui(config):
if config.ui:
interface = config.ui
else:
interface = 'knotty'
def print_exception(exc, value, tb):
"""Send exception information through bb.msg"""
bb.fatal("".join(format_exception(exc, value, tb, limit=8)))
try:
# Dynamically load the UI based on the ui name. Although we
# suggest a fixed set this allows you to have flexibility in which
# ones are available.
module = __import__("bb.ui", fromlist = [interface])
return getattr(module, interface).main
except AttributeError:
sys.exit("FATAL: Invalid user interface '%s' specified.\n"
"Valid interfaces: depexp, goggle, ncurses, knotty [default]." % interface)
sys.excepthook = print_exception
# Display bitbake/OE warnings via the BitBake.Warnings logger, ignoring others"""
warnlog = logging.getLogger("BitBake.Warnings")
_warnings_showwarning = warnings.showwarning
def _showwarning(message, category, filename, lineno, file=None, line=None):
"""Display python warning messages using bb.msg"""
if file is not None:
if _warnings_showwarning is not None:
_warnings_showwarning(message, category, filename, lineno, file, line)
else:
s = warnings.formatwarning(message, category, filename, lineno)
warnlog.warn(s)
s = s.split("\n")[0]
bb.msg.warn(None, s)
warnings.showwarning = _showwarning
warnings.filterwarnings("ignore")
warnings.filterwarnings("default", module="(<string>$|(oe|bb)\.)")
warnings.filterwarnings("ignore", category=PendingDeprecationWarning)
warnings.filterwarnings("ignore", category=ImportWarning)
warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")
warnings.simplefilter("ignore", DeprecationWarning)
#============================================================================#
# main
#============================================================================#
def main():
return_value = 1
parser = optparse.OptionParser(
version = "BitBake Build Tool Core version %s, %%prog version %s" % (bb.__version__, __version__),
usage = """%prog [options] [package ...]
@@ -170,11 +159,6 @@ Default BBFILES are the .bb files in the current directory.""")
configuration.pkgs_to_build.extend(args[1:])
configuration.initial_path = os.environ['PATH']
ui_main = get_ui(configuration)
loghandler = event.LogHandler()
logger.addHandler(loghandler)
#server = bb.server.xmlrpc
server = bb.server.none
@@ -191,17 +175,16 @@ Default BBFILES are the .bb files in the current directory.""")
bb.utils.clean_environment()
cooker = bb.cooker.BBCooker(configuration, server)
cooker.parseCommandLine()
serverinfo = server.BitbakeServerInfo(cooker.server)
server.BitBakeServerFork(cooker, cooker.server, serverinfo, cooker_logfile)
server.BitBakeServerFork(serverinfo, cooker.serve, cooker_logfile)
del cooker
logger.removeHandler(loghandler)
# Setup a connection to the server (cooker)
server_connection = server.BitBakeServerConnection(serverinfo)
serverConnection = server.BitBakeServerConnection(serverinfo)
# Launch the UI
if configuration.ui:
@@ -210,15 +193,25 @@ Default BBFILES are the .bb files in the current directory.""")
ui = "knotty"
try:
return server.BitbakeUILauch().launch(serverinfo, ui_main, server_connection.connection, server_connection.events)
# Dynamically load the UI based on the ui name. Although we
# suggest a fixed set this allows you to have flexibility in which
# ones are available.
uimodule = __import__("bb.ui", fromlist = [ui])
ui_init = getattr(uimodule, ui).init
except AttributeError:
print("FATAL: Invalid user interface '%s' specified. " % ui)
print("Valid interfaces are 'ncurses', 'depexp' or the default, 'knotty'.")
else:
try:
return_value = ui_init(serverConnection.connection, serverConnection.events)
except Exception as e:
print("FATAL: Unable to start to '%s' UI: %s" % (ui, e))
raise
finally:
server_connection.terminate()
serverConnection.terminate()
return return_value
if __name__ == "__main__":
try:
ret = main()
except Exception:
ret = 1
import traceback
traceback.print_exc(5)
ret = main()
sys.exit(ret)

View File

@@ -1,158 +0,0 @@
#!/usr/bin/env python
# This script has subcommands which operate against your bitbake layers, either
# displaying useful information, or acting against them.
# Currently, it only provides a show_appends command, which shows you what
# bbappends are in effect, and warns you if you have appends which are not being
# utilized.
import cmd
import logging
import os.path
import sys
bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
sys.path[0:0] = [os.path.join(topdir, 'lib')]
import bb.cache
import bb.cooker
import bb.providers
from bb.cooker import state
from bb.server import none
logger = logging.getLogger('BitBake')
default_cmd = 'show_appends'
def main(args):
logging.basicConfig(format='%(levelname)s: %(message)s')
bb.utils.clean_environment()
cmds = Commands()
if args:
cmds.onecmd(' '.join(args))
else:
cmds.onecmd(default_cmd)
return cmds.returncode
class Commands(cmd.Cmd):
def __init__(self):
cmd.Cmd.__init__(self)
self.returncode = 0
self.config = Config(parse_only=True)
self.cooker = bb.cooker.BBCooker(self.config,
bb.server.none)
self.config_data = self.cooker.configuration.data
bb.providers.logger.setLevel(logging.ERROR)
self.prepare_cooker()
def prepare_cooker(self):
sys.stderr.write("Parsing recipes..")
logger.setLevel(logging.ERROR)
try:
while self.cooker.state in (state.initial, state.parsing):
self.cooker.updateCache()
except KeyboardInterrupt:
self.cooker.shutdown()
self.cooker.updateCache()
sys.exit(2)
logger.setLevel(logging.INFO)
sys.stderr.write("done.\n")
self.cooker_data = self.cooker.status
self.cooker_data.appends = self.cooker.appendlist
def do_show_layers(self, args):
logger.info(str(self.config_data.getVar('BBLAYERS', True)))
def do_show_appends(self, args):
if not self.cooker_data.appends:
logger.info('No append files found')
return
logger.info('State of append files:')
for pn in self.cooker_data.pkg_pn:
self.show_appends_for_pn(pn)
self.show_appends_with_no_recipes()
def show_appends_for_pn(self, pn):
filenames = self.cooker_data.pkg_pn[pn]
best = bb.providers.findBestProvider(pn,
self.cooker.configuration.data,
self.cooker_data,
self.cooker_data.pkg_pn)
best_filename = os.path.basename(best[3])
appended, missing = self.get_appends_for_files(filenames)
if appended:
for basename, appends in appended:
logger.info('%s:', basename)
for append in appends:
logger.info(' %s', append)
if best_filename in missing:
logger.warn('%s: missing append for preferred version',
best_filename)
self.returncode |= 1
def get_appends_for_files(self, filenames):
appended, notappended = set(), set()
for filename in filenames:
_, cls = bb.cache.Cache.virtualfn2realfn(filename)
if cls:
continue
basename = os.path.basename(filename)
appends = self.cooker_data.appends.get(basename)
if appends:
appended.add((basename, frozenset(appends)))
else:
notappended.add(basename)
return appended, notappended
def show_appends_with_no_recipes(self):
recipes = set(os.path.basename(f)
for f in self.cooker_data.pkg_fn.iterkeys())
appended_recipes = self.cooker_data.appends.iterkeys()
appends_without_recipes = [self.cooker_data.appends[recipe]
for recipe in appended_recipes
if recipe not in recipes]
if appends_without_recipes:
appendlines = (' %s' % append
for appends in appends_without_recipes
for append in appends)
logger.warn('No recipes available for:\n%s',
'\n'.join(appendlines))
self.returncode |= 4
def do_EOF(self, line):
return True
class Config(object):
def __init__(self, **options):
self.pkgs_to_build = []
self.debug_domains = []
self.extra_assume_provided = []
self.file = []
self.debug = 0
self.__dict__.update(options)
def __getattr__(self, attribute):
try:
return super(Config, self).__getattribute__(attribute)
except AttributeError:
return None
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]) or 0)

View File

@@ -100,9 +100,6 @@ the_data = cooker.bb_cache.loadDataFull(fn, cooker.get_file_appends(fn), cooker.
cooker.bb_cache.setData(fn, buildfile, the_data)
cooker.bb_cache.handle_data(fn, cooker.status)
#exportlist = bb.utils.preserved_envvars_export_list()
#bb.utils.filter_environment(exportlist)
if taskname.endswith("_setscene"):
the_data.setVarFlag(taskname, "quieterrors", "1")

View File

@@ -20,7 +20,7 @@
import optparse, os, sys
# bitbake
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(__file__), 'lib'))
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb
import bb.parse
from string import split, join
@@ -497,7 +497,7 @@ def main():
doc.insert_doc_item(doc_ins)
# let us create the HTML now
bb.utils.mkdirhier(output_dir)
bb.mkdirhier(output_dir)
os.chdir(output_dir)
# Let us create the sites now. We do it in the following order

View File

@@ -1,24 +1,4 @@
" Vim filetype detection file
" Language: BitBake
" Author: Ricardo Salveti <rsalveti@rsalveti.net>
" Copyright: Copyright (C) 2008 Ricardo Salveti <rsalveti@rsalveti.net>
" Licence: You may redistribute this under the same terms as Vim itself
"
" This sets up the syntax highlighting for BitBake files, like .bb, .bbclass and .inc
if &compatible || version < 600
finish
endif
" .bb and .bbclass
au BufNewFile,BufRead *.b{b,bclass} set filetype=bitbake
" .inc
au BufNewFile,BufRead *.inc set filetype=bitbake
" .conf
au BufNewFile,BufRead *.conf
\ if (match(expand("%:p:h"), "conf") > 0) |
\ set filetype=bitbake |
\ endif
au BufNewFile,BufRead *.bb setfiletype bitbake
au BufNewFile,BufRead *.bbclass setfiletype bitbake
au BufNewFile,BufRead *.inc setfiletype bitbake
" au BufNewFile,BufRead *.conf setfiletype bitbake

View File

@@ -1 +0,0 @@
set sts=4 sw=4 et

View File

@@ -1,85 +0,0 @@
" Vim plugin file
" Purpose: Create a template for new bb files
" Author: Ricardo Salveti <rsalveti@gmail.com>
" Copyright: Copyright (C) 2008 Ricardo Salveti <rsalveti@gmail.com>
"
" This file is licensed under the MIT license, see COPYING.MIT in
" this source distribution for the terms.
"
" Based on the gentoo-syntax package
"
" Will try to use git to find the user name and email
if &compatible || v:version < 600
finish
endif
fun! <SID>GetUserName()
let l:user_name = system("git-config --get user.name")
if v:shell_error
return "Unknow User"
else
return substitute(l:user_name, "\n", "", "")
endfun
fun! <SID>GetUserEmail()
let l:user_email = system("git-config --get user.email")
if v:shell_error
return "unknow@user.org"
else
return substitute(l:user_email, "\n", "", "")
endfun
fun! BBHeader()
let l:current_year = strftime("%Y")
let l:user_name = <SID>GetUserName()
let l:user_email = <SID>GetUserEmail()
0 put ='# Copyright (C) ' . l:current_year .
\ ' ' . l:user_name . ' <' . l:user_email . '>'
put ='# Released under the MIT license (see COPYING.MIT for the terms)'
$
endfun
fun! NewBBTemplate()
let l:paste = &paste
set nopaste
" Get the header
call BBHeader()
" New the bb template
put ='DESCRIPTION = \"\"'
put ='HOMEPAGE = \"\"'
put ='LICENSE = \"\"'
put ='SECTION = \"\"'
put ='DEPENDS = \"\"'
put ='PR = \"r0\"'
put =''
put ='SRC_URI = \"\"'
" Go to the first place to edit
0
/^DESCRIPTION =/
exec "normal 2f\""
if paste == 1
set paste
endif
endfun
if !exists("g:bb_create_on_empty")
let g:bb_create_on_empty = 1
endif
" disable in case of vimdiff
if v:progname =~ "vimdiff"
let g:bb_create_on_empty = 0
endif
augroup NewBB
au BufNewFile *.bb
\ if g:bb_create_on_empty |
\ call NewBBTemplate() |
\ endif
augroup END

View File

@@ -1,123 +1,127 @@
" Vim syntax file
" Language: BitBake bb/bbclasses/inc
" Author: Chris Larson <kergoth@handhelds.org>
" Ricardo Salveti <rsalveti@rsalveti.net>
" Copyright: Copyright (C) 2004 Chris Larson <kergoth@handhelds.org>
" Copyright (C) 2008 Ricardo Salveti <rsalveti@rsalveti.net>
"
" Copyright (C) 2004 Chris Larson <kergoth@handhelds.org>
" This file is licensed under the MIT license, see COPYING.MIT in
" this source distribution for the terms.
"
" Syntax highlighting for bb, bbclasses and inc files.
"
" It's an entirely new type, just has specific syntax in shell and python code
" Language: BitBake
" Maintainer: Chris Larson <kergoth@handhelds.org>
" Filenames: *.bb, *.bbclass
if &compatible || v:version < 600
finish
endif
if exists("b:current_syntax")
finish
if version < 600
syntax clear
elseif exists("b:current_syntax")
finish
endif
syn case match
" Catch incorrect syntax (only matches if nothing else does)
"
syn match bbUnmatched "."
syn include @python syntax/python.vim
if exists("b:current_syntax")
unlet b:current_syntax
endif
" BitBake syntax
" Matching case
syn case match
" Other
" Indicates the error when nothing is matched
syn match bbUnmatched "."
syn match bbComment "^#.*$" display contains=bbTodo
syn keyword bbTodo TODO FIXME XXX contained
syn match bbDelimiter "[(){}=]" contained
syn match bbQuote /['"]/ contained
syn match bbArrayBrackets "[\[\]]" contained
" Comments
syn cluster bbCommentGroup contains=bbTodo,@Spell
syn keyword bbTodo COMBAK FIXME TODO XXX contained
syn match bbComment "#.*$" contains=@bbCommentGroup
" String helpers
syn match bbQuote +['"]+ contained
syn match bbDelimiter "[(){}=]" contained
syn match bbArrayBrackets "[\[\]]" contained
" BitBake strings
syn match bbContinue "\\$"
syn region bbString matchgroup=bbQuote start=+"+ skip=+\\$+ excludenl end=+"+ contained keepend contains=bbTodo,bbContinue,bbVarDeref,bbVarPyValue,@Spell
syn region bbString matchgroup=bbQuote start=+'+ skip=+\\$+ excludenl end=+'+ contained keepend contains=bbTodo,bbContinue,bbVarDeref,bbVarPyValue,@Spell
" Vars definition
syn match bbExport "^export" nextgroup=bbIdentifier skipwhite
syn keyword bbExportFlag export contained nextgroup=bbIdentifier skipwhite
syn match bbIdentifier "[a-zA-Z0-9\-_\.\/\+]\+" display contained
syn match bbVarDeref "${[a-zA-Z0-9\-_\.\/\+]\+}" contained
syn match bbVarEq "\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)" contained nextgroup=bbVarValue
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+]\+\(_[${}a-zA-Z0-9\-_\.\/\+]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq
syn match bbVarValue ".*$" contained contains=bbString,bbVarDeref,bbVarPyValue
syn region bbVarPyValue start=+${@+ skip=+\\$+ excludenl end=+}+ contained contains=@python
syn match bbContinue "\\$"
syn region bbString matchgroup=bbQuote start=/"/ skip=/\\$/ excludenl end=/"/ contained keepend contains=bbTodo,bbContinue,bbVarInlinePy,bbVarDeref
syn region bbString matchgroup=bbQuote start=/'/ skip=/\\$/ excludenl end=/'/ contained keepend contains=bbTodo,bbContinue,bbVarInlinePy,bbVarDeref
" Vars metadata flags
syn match bbVarFlagDef "^\([a-zA-Z0-9\-_\.]\+\)\(\[[a-zA-Z0-9\-_\.]\+\]\)\@=" contains=bbIdentifier nextgroup=bbVarFlagFlag
syn region bbVarFlagFlag matchgroup=bbArrayBrackets start="\[" end="\]\s*\(=\)\@=" keepend excludenl contained contains=bbIdentifier nextgroup=bbVarEq
" BitBake variable metadata
" Includes and requires
syn keyword bbInclude inherit include require contained
syn match bbIncludeRest ".*$" contained contains=bbString,bbVarDeref
syn match bbIncludeLine "^\(inherit\|include\|require\)\s\+" contains=bbInclude nextgroup=bbIncludeRest
syn match bbVarBraces "[\${}]"
syn region bbVarDeref matchgroup=bbVarBraces start="${" end="}" contained
" syn region bbVarDeref start="${" end="}" contained
" syn region bbVarInlinePy start="${@" end="}" contained contains=@python
syn region bbVarInlinePy matchgroup=bbVarBraces start="${@" end="}" contained contains=@python
" Add taks and similar
syn keyword bbStatement addtask addhandler after before EXPORT_FUNCTIONS contained
syn match bbStatementRest ".*$" skipwhite contained contains=bbStatement
syn match bbStatementLine "^\(addtask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
syn keyword bbExportFlag export contained nextgroup=bbIdentifier skipwhite
" syn match bbVarDeref "${[a-zA-Z0-9\-_\.]\+}" contained
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.]\+\(_[${}a-zA/-Z0-9\-_\.]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq
" OE Important Functions
syn keyword bbOEFunctions do_fetch do_unpack do_patch do_configure do_compile do_stage do_install do_package contained
syn match bbIdentifier "[a-zA-Z0-9\-_\./]\+" display contained
"syn keyword bbVarEq = display contained nextgroup=bbVarValue
syn match bbVarEq "\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)" contained nextgroup=bbVarValue
syn match bbVarValue ".*$" contained contains=bbString
" BitBake variable metadata flags
syn match bbVarFlagDef "^\([a-zA-Z0-9\-_\.]\+\)\(\[[a-zA-Z0-9\-_\.]\+\]\)\@=" contains=bbIdentifier nextgroup=bbVarFlagFlag
syn region bbVarFlagFlag matchgroup=bbArrayBrackets start="\[" end="\]\s*\(=\)\@=" keepend excludenl contained contains=bbIdentifier nextgroup=bbVarEq
"syn match bbVarFlagFlag "\[\([a-zA-Z0-9\-_\.]\+\)\]\s*\(=\)\@=" contains=bbIdentifier nextgroup=bbVarEq
" Functions!
syn match bbFunction "\h\w*" display contained
" BitBake python metadata
syn keyword bbPythonFlag python contained nextgroup=bbFunction
syn match bbPythonFuncDef "^\(python\s\+\)\(\w\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbPythonFlag,bbFunction,bbDelimiter nextgroup=bbPythonFuncRegion skipwhite
syn region bbPythonFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@python
"hi def link bbPythonFuncRegion Comment
" Generic Functions
syn match bbFunction "\h[0-9A-Za-z_-]*" display contained contains=bbOEFunctions
" BitBake shell metadata
syn include @shell syntax/sh.vim
if exists("b:current_syntax")
unlet b:current_syntax
endif
syn keyword bbShFakeRootFlag fakeroot contained
syn match bbShFuncDef "^\(fakeroot\s*\)\?\([0-9A-Za-z_-]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbDelimiter nextgroup=bbShFuncRegion skipwhite
syn region bbShFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@shell
" BitBake python metadata
syn keyword bbPyFlag python contained
syn match bbPyFuncDef "^\(python\s\+\)\([0-9A-Za-z_-]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbPyFlag,bbFunction,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
syn region bbPyFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@python
syn keyword bbFakerootFlag fakeroot contained nextgroup=bbFunction
syn match bbShellFuncDef "^\(fakeroot\s*\)\?\(\w\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbFakerootFlag,bbFunction,bbDelimiter nextgroup=bbShellFuncRegion skipwhite
syn region bbShellFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@shell
"hi def link bbShellFuncRegion Comment
" BitBake 'def'd python functions
syn keyword bbPyDef def contained
syn region bbPyDefRegion start='^\(def\s\+\)\([0-9A-Za-z_-]\+\)\(\s*(.*)\s*\):\s*$' end='^\(\s\|$\)\@!' contains=@python
syn keyword bbDef def contained
syn region bbDefRegion start='^def\s\+\w\+\s*([^)]*)\s*:\s*$' end='^\(\s\|$\)\@!' contains=@python
" Highlighting Definitions
hi def link bbUnmatched Error
hi def link bbInclude Include
hi def link bbTodo Todo
hi def link bbComment Comment
hi def link bbQuote String
hi def link bbString String
hi def link bbDelimiter Keyword
hi def link bbArrayBrackets Statement
hi def link bbContinue Special
hi def link bbExport Type
hi def link bbExportFlag Type
hi def link bbIdentifier Identifier
hi def link bbVarDeref PreProc
hi def link bbVarDef Identifier
hi def link bbVarValue String
hi def link bbShFakeRootFlag Type
hi def link bbFunction Function
hi def link bbPyFlag Type
hi def link bbPyDef Statement
hi def link bbStatement Statement
hi def link bbStatementRest Identifier
hi def link bbOEFunctions Special
hi def link bbVarPyValue PreProc
" BitBake statements
syn keyword bbStatement include inherit require addtask addhandler EXPORT_FUNCTIONS display contained
syn match bbStatementLine "^\(include\|inherit\|require\|addtask\|addhandler\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
syn match bbStatementRest ".*$" contained contains=bbString,bbVarDeref
" Highlight
"
hi def link bbArrayBrackets Statement
hi def link bbUnmatched Error
hi def link bbContinue Special
hi def link bbDef Statement
hi def link bbPythonFlag Type
hi def link bbExportFlag Type
hi def link bbFakerootFlag Type
hi def link bbStatement Statement
hi def link bbString String
hi def link bbTodo Todo
hi def link bbComment Comment
hi def link bbOperator Operator
hi def link bbError Error
hi def link bbFunction Function
hi def link bbDelimiter Delimiter
hi def link bbIdentifier Identifier
hi def link bbVarEq Operator
hi def link bbQuote String
hi def link bbVarValue String
" hi def link bbVarInlinePy PreProc
hi def link bbVarDeref PreProc
hi def link bbVarBraces PreProc
let b:current_syntax = "bb"

View File

@@ -45,7 +45,7 @@ endif
$(call command,xsltproc --stringparam base.dir $@/ $(if $(htmlcssfile),--stringparam html.stylesheet $(htmlcssfile)) $(htmlxsl) $(manual),XSLTPROC $@ $(manual))
$(xmltotypes): $(manual)
$(call command,xmlto --with-dblatex --extensions -o $(topdir)/$@ $@ $(manual),XMLTO $@ $(manual))
$(call command,xmlto --extensions -o $(topdir)/$@ $@ $(manual),XMLTO $@ $(manual))
clean:
rm -rf $(cleanfiles)

View File

@@ -97,7 +97,7 @@ share common metadata between many packages.</para></listitem>
<title>Setting a default value (??=)</title>
<para><screen><varname>A</varname> ??= "somevalue"</screen></para>
<para><screen><varname>A</varname> ??= "someothervalue"</screen></para>
<para>If <varname>A</varname> is set before the above, it will retain that value. If <varname>A</varname> is unset prior to the above, <varname>A</varname> will be set to <literal>someothervalue</literal>. This is a lazy version of ?=, in that the assignment does not occur until the end of the parsing process, so that the last, rather than the first, ??= assignment to a given variable will be used.</para>
<para>If <varname>A</varname> is set before the above, it will retain that value. If <varname>A</varname> is unset prior to the above, <varname>A</varname> will be set to <literal>someothervalue</literal>. This is a lazy version of ??=, in that the assignment does not occur until the end of the parsing process, so that the last, rather than the first, ??= assignment to a given variable will be used.</para>
</section>
<section>
<title>Immediate variable expansion (:=)</title>
@@ -318,7 +318,7 @@ a per URI parameters separated by a <quote>;</quote> consisting of a key and a v
<title>CVS File Fetcher</title>
<para>The URN for the CVS Fetcher is <emphasis>cvs</emphasis>. This Fetcher honors the variables <varname>DL_DIR</varname>, <varname>SRCDATE</varname>, <varname>FETCHCOMMAND_cvs</varname>, <varname>UPDATECOMMAND_cvs</varname>. <varname>DL_DIR</varname> specifies where a temporary checkout is saved, <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build), <varname>FETCHCOMMAND</varname> and <varname>UPDATECOMMAND</varname> specify which executables should be used when doing the CVS checkout or update.
</para>
<para>The supported Parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname> and <varname>scmdata</varname>. The <varname>module</varname> specifies which module to check out, the <varname>tag</varname> describes which CVS TAG should be used for the checkout. By default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration to checkout a specific date. The special value of "now" will cause the checkout to be updated on every build.<varname>method</varname> is by default <emphasis>pserver</emphasis>, if <emphasis>ext</emphasis> is used the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally <varname>localdir</varname> is used to checkout into a special directory relative to <varname>CVSDIR</varname>. If <varname>scmdata</varname> is set to <quote>keep</quote>
<para>The supported Parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname>. The <varname>module</varname> specifies which module to check out, the <varname>tag</varname> describes which CVS TAG should be used for the checkout by default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration to checkout a specific date. The special value of "now" will cause the checkout to be updated on every build.<varname>method</varname> is by default <emphasis>pserver</emphasis>, if <emphasis>ext</emphasis> is used the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally <varname>localdir</varname> is used to checkout into a special directory relative to <varname>CVSDIR</varname>.
<screen><varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
<varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
</screen>
@@ -351,7 +351,7 @@ will be tried first when fetching a file if that fails the actual file will be t
</para>
<para>This Fetcher honors the variables <varname>FETCHCOMMAND_svn</varname>, <varname>DL_DIR</varname>, <varname>SRCDATE</varname>. <varname>FETCHCOMMAND</varname> contains the subversion command, <varname>DL_DIR</varname> is the directory where tarballs will be saved, <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build).
</para>
<para>The supported Parameters are <varname>proto</varname>, <varname>rev</varname> and <varname>scmdata</varname>. <varname>proto</varname> is the subversion protocol, <varname>rev</varname> is the subversion revision. If <varname>scmdata</varname> is set to <quote>keep</quote>, the <quote>.svn</quote> directories will be available during compile-time.
<para>The supported Parameters are <varname>proto</varname>, <varname>rev</varname>. <varname>proto</varname> is the subversion prototype, <varname>rev</varname> is the subversions revision.
</para>
<para><screen><varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn;module=vip;proto=http;rev=667"
<varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn/;module=opie;proto=svn+ssh;date=20060126"
@@ -364,7 +364,7 @@ will be tried first when fetching a file if that fails the actual file will be t
</para>
<para>The Variables <varname>DL_DIR</varname>, <varname>GITDIR</varname> are used. <varname>DL_DIR</varname> will be used to store the checkedout version. <varname>GITDIR</varname> will be used as the base directory where the git tree is cloned to.
</para>
<para>The Parameters are <emphasis>tag</emphasis>, <emphasis>protocol</emphasis> and <emphasis>scmdata</emphasis>. <emphasis>tag</emphasis> is a git tag, the default is <quote>master</quote>. <emphasis>protocol</emphasis> is the git protocol to use and defaults to <quote>rsync</quote>. If <emphasis>scmdata</emphasis> is set to <quote>keep</quote>, the <quote>.git</quote> directory will be available during compile-time.
<para>The Parameters are <emphasis>tag</emphasis>, <emphasis>protocol</emphasis>. <emphasis>tag</emphasis> is a git tag, the default is <quote>master</quote>. <emphasis>protocol</emphasis> is the git protocol to use and defaults to <quote>rsync</quote>.
</para>
<para><screen><varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
<varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;protocol=http"

View File

@@ -28,41 +28,6 @@ if sys.version_info < (2, 6, 0):
raise RuntimeError("Sorry, python 2.6.0 or later is required for this version of bitbake")
import os
import logging
import traceback
class NullHandler(logging.Handler):
def emit(self, record):
pass
Logger = logging.getLoggerClass()
class BBLogger(Logger):
def __init__(self, name):
if name.split(".")[0] == "BitBake":
self.debug = self.bbdebug
Logger.__init__(self, name)
def bbdebug(self, level, msg, *args, **kwargs):
return self.log(logging.DEBUG - level + 1, msg, *args, **kwargs)
def plain(self, msg, *args, **kwargs):
return self.log(logging.INFO + 1, msg, *args, **kwargs)
def verbose(self, msg, *args, **kwargs):
return self.log(logging.INFO - 1, msg, *args, **kwargs)
def exception(self, msg, *args, **kwargs):
return self.critical("%s\n%s" % (msg, traceback.format_exc()), *args, **kwargs)
logging.raiseExceptions = False
logging.setLoggerClass(BBLogger)
logger = logging.getLogger("BitBake")
logger.addHandler(NullHandler())
logger.setLevel(logging.INFO)
# This has to be imported after the setLoggerClass, as the import of bb.msg
# can result in construction of the various loggers.
import bb.msg
if "BBDEBUG" in os.environ:
@@ -70,29 +35,25 @@ if "BBDEBUG" in os.environ:
if level:
bb.msg.set_debug_level(level)
if True or os.environ.get("BBFETCH2"):
from bb import fetch2 as fetch
sys.modules['bb.fetch'] = sys.modules['bb.fetch2']
# Messaging convenience functions
def plain(*args):
logger.plain(''.join(args))
bb.msg.plain(''.join(args))
def debug(lvl, *args):
logger.debug(lvl, ''.join(args))
bb.msg.debug(lvl, None, ''.join(args))
def note(*args):
logger.info(''.join(args))
bb.msg.note(1, None, ''.join(args))
def warn(*args):
logger.warn(''.join(args))
bb.msg.warn(None, ''.join(args))
def error(*args):
logger.error(''.join(args))
bb.msg.error(None, ''.join(args))
def fatal(*args):
logger.critical(''.join(args))
sys.exit(1)
bb.msg.fatal(None, ''.join(args))
def deprecated(func, name = None, advice = ""):

View File

@@ -25,20 +25,9 @@
#
#Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import sys
import logging
import bb
import bb.msg
import bb.process
from contextlib import nested
from bb import data, event, mkdirhier, utils
bblogger = logging.getLogger('BitBake')
logger = logging.getLogger('BitBake.Build')
NULL = open(os.devnull, 'r+')
import bb, os, sys
import bb.utils
# When we execute a python function we'd like certain things
# in all namespaces, hence we add them to __builtins__
@@ -47,22 +36,13 @@ NULL = open(os.devnull, 'r+')
__builtins__['bb'] = bb
__builtins__['os'] = os
# events
class FuncFailed(Exception):
def __init__(self, name = None, logfile = None):
self.logfile = logfile
self.name = name
if name:
self.msg = "Function '%s' failed" % name
else:
self.msg = "Function failed"
def __str__(self):
if self.logfile and os.path.exists(self.logfile):
msg = ("%s (see %s for further information)" %
(self.msg, self.logfile))
else:
msg = self.msg
return msg
"""
Executed function failed
First parameter a message
Second paramter is a logfile (optional)
"""
class TaskBase(event.Event):
"""Base class for task events"""
@@ -89,56 +69,38 @@ class TaskSucceeded(TaskBase):
class TaskFailed(TaskBase):
"""Task execution failed"""
def __init__(self, task, logfile, metadata):
def __init__(self, msg, logfile, t, d ):
self.logfile = logfile
super(TaskFailed, self).__init__(task, metadata)
self.msg = msg
TaskBase.__init__(self, t, d)
class TaskInvalid(TaskBase):
"""Invalid Task"""
def __init__(self, task, metadata):
super(TaskInvalid, self).__init__(task, metadata)
self._message = "No such task '%s'" % task
class LogTee(object):
def __init__(self, logger, outfile):
self.outfile = outfile
self.logger = logger
self.name = self.outfile.name
def write(self, string):
self.logger.plain(string)
self.outfile.write(string)
def __enter__(self):
self.outfile.__enter__()
return self
def __exit__(self, *excinfo):
self.outfile.__exit__(*excinfo)
def __repr__(self):
return '<LogTee {0}>'.format(self.name)
# functions
def exec_func(func, d, dirs = None):
"""Execute an BB 'function'"""
body = data.getVar(func, d)
if not body:
if body is None:
logger.warn("Function %s doesn't exist", func)
bb.warn("Function %s doesn't exist" % func)
return
flags = data.getVarFlags(func, d)
cleandirs = flags.get('cleandirs')
for item in ['deps', 'check', 'interactive', 'python', 'cleandirs', 'dirs', 'lockfiles', 'fakeroot', 'task']:
if not item in flags:
flags[item] = None
ispython = flags['python']
cleandirs = flags['cleandirs']
if cleandirs:
for cdir in data.expand(cleandirs, d).split():
bb.utils.remove(cdir, True)
os.system("rm -rf %s" % cdir)
if dirs is None:
dirs = flags.get('dirs')
dirs = flags['dirs']
if dirs:
dirs = data.expand(dirs, d).split()
@@ -148,254 +110,277 @@ def exec_func(func, d, dirs = None):
adir = dirs[-1]
else:
adir = data.getVar('B', d, 1)
if not os.path.exists(adir):
adir = None
ispython = flags.get('python')
if flags.get('fakeroot') and not flags.get('task'):
bb.fatal("Function %s specifies fakeroot but isn't a task?!" % func)
# Save current directory
try:
prevdir = os.getcwd()
except OSError:
prevdir = data.getVar('TOPDIR', d, True)
lockflag = flags.get('lockfiles')
if lockflag:
lockfiles = [data.expand(f, d) for f in lockflag.split()]
else:
lockfiles = None
# Setup scriptfile
t = data.getVar('T', d, 1)
if not t:
raise SystemExit("T variable not set, unable to build")
bb.utils.mkdirhier(t)
runfile = "%s/run.%s.%s" % (t, func, str(os.getpid()))
logfile = d.getVar("BB_LOGFILE", True)
tempdir = data.getVar('T', d, 1)
runfile = os.path.join(tempdir, 'run.{0}.{1}'.format(func, os.getpid()))
# Change to correct directory (if specified)
if adir and os.access(adir, os.F_OK):
os.chdir(adir)
with bb.utils.fileslocked(lockfiles):
locks = []
lockfiles = flags['lockfiles']
if lockfiles:
for lock in data.expand(lockfiles, d).split():
locks.append(bb.utils.lockfile(lock))
try:
# Run the function
if ispython:
exec_func_python(func, d, runfile, cwd=adir)
exec_func_python(func, d, runfile, logfile)
else:
exec_func_shell(func, d, runfile, cwd=adir)
exec_func_shell(func, d, runfile, logfile, flags)
_functionfmt = """
def {function}(d):
{body}
# Restore original directory
try:
os.chdir(prevdir)
except:
pass
{function}(d)
"""
logformatter = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
def exec_func_python(func, d, runfile, cwd=None):
finally:
# Unlock any lockfiles
for lock in locks:
bb.utils.unlockfile(lock)
def exec_func_python(func, d, runfile, logfile):
"""Execute a python BB 'function'"""
bbfile = d.getVar('FILE', True)
try:
olddir = os.getcwd()
except OSError:
olddir = None
code = _functionfmt.format(function=func, body=d.getVar(func, True))
bb.utils.mkdirhier(os.path.dirname(runfile))
with open(runfile, 'w') as script:
script.write(code)
if cwd:
os.chdir(cwd)
bbfile = bb.data.getVar('FILE', d, 1)
tmp = "def " + func + "(d):\n%s" % data.getVar(func, d)
tmp += '\n' + func + '(d)'
f = open(runfile, "w")
f.write(tmp)
comp = utils.better_compile(tmp, func, bbfile)
try:
comp = utils.better_compile(code, func, bbfile)
utils.better_exec(comp, {"d": d}, code, bbfile)
utils.better_exec(comp, {"d": d}, tmp, bbfile)
except:
if sys.exc_info()[0] in (bb.parse.SkipPackage, bb.build.FuncFailed):
(t, value, tb) = sys.exc_info()
if t in [bb.parse.SkipPackage, bb.build.FuncFailed]:
raise
raise FuncFailed("Function %s failed" % func, logfile)
raise FuncFailed(func, None)
finally:
if olddir:
os.chdir(olddir)
def exec_func_shell(function, d, runfile, cwd=None):
"""Execute a shell function from the metadata
def exec_func_shell(func, d, runfile, logfile, flags):
"""Execute a shell BB 'function' Returns true if execution was successful.
For this, it creates a bash shell script in the tmp dectory, writes the local
data into it and finally executes. The output of the shell will end in a log file and stdout.
Note on directory behavior. The 'dirs' varflag should contain a list
of the directories you need created prior to execution. The last
item in the list is where we will chdir/cd to.
"""
# Don't let the emitted shell script override PWD
d.delVarFlag('PWD', 'export')
deps = flags['deps']
check = flags['check']
if check in globals():
if globals()[check](func, deps):
return
with open(runfile, 'w') as script:
script.write('#!/bin/sh -e\n')
if logger.isEnabledFor(logging.DEBUG):
script.write("set -x\n")
data.emit_func(function, script, d)
if cwd:
script.write("cd %s\n" % cwd)
script.write("%s\n" % function)
os.fchmod(script.fileno(), 0775)
f = open(runfile, "w")
f.write("#!/bin/sh -e\n")
if bb.msg.debug_level['default'] > 0: f.write("set -x\n")
data.emit_func(func, f, d)
env = {
'PATH': d.getVar('PATH', True),
'LC_ALL': 'C',
}
f.write("cd %s\n" % os.getcwd())
if func: f.write("%s\n" % func)
f.close()
os.chmod(runfile, 0775)
if not func:
raise FuncFailed("Function not specified for exec_func_shell")
cmd = runfile
# execute function
if flags['fakeroot'] and not flags['task']:
bb.fatal("Function %s specifies fakeroot but isn't a task?!" % func)
if logger.isEnabledFor(logging.DEBUG):
logfile = LogTee(logger, sys.stdout)
else:
logfile = sys.stdout
lang_environment = "LC_ALL=C "
ret = os.system('%ssh -e %s' % (lang_environment, runfile))
try:
bb.process.run(cmd, env=env, shell=False, stdin=NULL, log=logfile)
except bb.process.CmdError:
logfn = d.getVar('BB_LOGFILE', True)
raise FuncFailed(function, logfn)
if ret == 0:
return
def _task_data(fn, task, d):
localdata = data.createCopy(d)
localdata.setVar('BB_FILENAME', fn)
localdata.setVar('BB_CURRENTTASK', task[3:])
localdata.setVar('OVERRIDES', 'task-%s:%s' %
(task[3:], d.getVar('OVERRIDES', False)))
localdata.finalize()
data.expandKeys(localdata)
return localdata
raise FuncFailed("function %s failed" % func, logfile)
def _exec_task(fn, task, d, quieterr):
"""Execute a BB 'task'
Execution of a task involves a bit more setup than executing a function,
running it with its own local metadata, and with some useful variables set.
"""
def exec_task(fn, task, d):
"""Execute an BB 'task'
The primary difference between executing a task versus executing
a function is that a task exists in the task digraph, and therefore
has dependencies amongst other tasks."""
# Check whther this is a valid task
if not data.getVarFlag(task, 'task', d):
event.fire(TaskInvalid(task, d), d)
logger.error("No such task: %s" % task)
bb.msg.error(bb.msg.domain.Build, "No such task: %s" % task)
return 1
logger.debug(1, "Executing task %s", task)
quieterr = False
if d.getVarFlag(task, "quieterrors") is not None:
quieterr = True
localdata = _task_data(fn, task, d)
tempdir = localdata.getVar('T', True)
if not tempdir:
bb.fatal("T variable not set, unable to build")
try:
bb.msg.debug(1, bb.msg.domain.Build, "Executing task %s" % task)
old_overrides = data.getVar('OVERRIDES', d, 0)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', 'task-%s:%s' % (task[3:], old_overrides), localdata)
data.update_data(localdata)
data.expandKeys(localdata)
data.setVar('BB_FILENAME', fn, d)
data.setVar('BB_CURRENTTASK', task[3:], d)
event.fire(TaskStarted(task, localdata), localdata)
bb.utils.mkdirhier(tempdir)
loglink = os.path.join(tempdir, 'log.{0}'.format(task))
logfn = os.path.join(tempdir, 'log.{0}.{1}'.format(task, os.getpid()))
if loglink:
bb.utils.remove(loglink)
# Setup logfiles
t = data.getVar('T', d, 1)
if not t:
raise SystemExit("T variable not set, unable to build")
bb.utils.mkdirhier(t)
loglink = "%s/log.%s" % (t, task)
logfile = "%s/log.%s.%s" % (t, task, str(os.getpid()))
d.setVar("BB_LOGFILE", logfile)
# Even though the log file has not yet been opened, lets create the link
if loglink:
try:
os.remove(loglink)
except OSError as e:
pass
try:
os.symlink(logfile, loglink)
except OSError as e:
pass
# Handle logfiles
si = file('/dev/null', 'r')
try:
os.symlink(logfn, loglink)
except OSError:
pass
so = file(logfile, 'w')
except OSError as e:
bb.msg.error(bb.msg.domain.Build, "opening log file: %s" % e)
pass
se = so
prefuncs = localdata.getVarFlag(task, 'prefuncs', expand=True)
postfuncs = localdata.getVarFlag(task, 'postfuncs', expand=True)
# Dup the existing fds so we dont lose them
osi = [os.dup(sys.stdin.fileno()), sys.stdin.fileno()]
oso = [os.dup(sys.stdout.fileno()), sys.stdout.fileno()]
ose = [os.dup(sys.stderr.fileno()), sys.stderr.fileno()]
# Handle logfiles
si = file('/dev/null', 'r')
try:
logfile = file(logfn, 'w')
except OSError:
logger.exception("Opening log file '%s'", logfn)
pass
# Replace those fds with our own
os.dup2(si.fileno(), osi[1])
os.dup2(so.fileno(), oso[1])
os.dup2(se.fileno(), ose[1])
# Dup the existing fds so we dont lose them
osi = [os.dup(sys.stdin.fileno()), sys.stdin.fileno()]
oso = [os.dup(sys.stdout.fileno()), sys.stdout.fileno()]
ose = [os.dup(sys.stderr.fileno()), sys.stderr.fileno()]
# Since we've remapped stdout and stderr, its safe for log messages to be printed there now
# exec_func can nest so we have to save state
origstdout = bb.event.useStdout
bb.event.useStdout = True
# Replace those fds with our own
os.dup2(si.fileno(), osi[1])
os.dup2(logfile.fileno(), oso[1])
os.dup2(logfile.fileno(), ose[1])
# Ensure python logging goes to the logfile
handler = logging.StreamHandler(logfile)
handler.setFormatter(logformatter)
bblogger.addHandler(handler)
localdata.setVar('BB_LOGFILE', logfn)
event.fire(TaskStarted(task, localdata), localdata)
try:
for func in (prefuncs or '').split():
prefuncs = (data.getVarFlag(task, 'prefuncs', localdata) or "").split()
for func in prefuncs:
exec_func(func, localdata)
exec_func(task, localdata)
for func in (postfuncs or '').split():
postfuncs = (data.getVarFlag(task, 'postfuncs', localdata) or "").split()
for func in postfuncs:
exec_func(func, localdata)
except FuncFailed as exc:
event.fire(TaskSucceeded(task, localdata), localdata)
# make stamp, or cause event and raise exception
if not data.getVarFlag(task, 'nostamp', d) and not data.getVarFlag(task, 'selfstamp', d):
make_stamp(task, d)
except FuncFailed as message:
# Try to extract the optional logfile
try:
(msg, logfile) = message
except:
logfile = None
msg = message
if not quieterr:
logger.error(str(exc))
event.fire(TaskFailed(task, logfn, localdata), localdata)
bb.msg.error(bb.msg.domain.Build, "Task failed: %s" % message )
failedevent = TaskFailed(msg, logfile, task, d)
event.fire(failedevent, d)
return 1
except Exception:
from traceback import format_exc
if not quieterr:
bb.msg.error(bb.msg.domain.Build, "Build of %s failed" % (task))
bb.msg.error(bb.msg.domain.Build, format_exc())
failedevent = TaskFailed("Task Failed", None, task, d)
event.fire(failedevent, d)
return 1
finally:
sys.stdout.flush()
sys.stderr.flush()
bblogger.removeHandler(handler)
bb.event.useStdout = origstdout
# Restore the backup fds
os.dup2(osi[0], osi[1])
os.dup2(oso[0], oso[1])
os.dup2(ose[0], ose[1])
# Close our logs
si.close()
so.close()
se.close()
if logfile and os.path.exists(logfile) and os.path.getsize(logfile) == 0:
bb.msg.debug(2, bb.msg.domain.Build, "Zero size logfile %s, removing" % logfile)
os.remove(logfile)
try:
os.remove(loglink)
except OSError as e:
pass
# Close the backup fds
os.close(osi[0])
os.close(oso[0])
os.close(ose[0])
si.close()
logfile.close()
if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
logger.debug(2, "Zero size logfn %s, removing", logfn)
bb.utils.remove(logfn)
bb.utils.remove(loglink)
event.fire(TaskSucceeded(task, localdata), localdata)
if not localdata.getVarFlag(task, 'nostamp') and not localdata.getVarFlag(task, 'selfstamp'):
make_stamp(task, localdata)
return 0
def exec_task(fn, task, d):
try:
quieterr = False
if d.getVarFlag(task, "quieterrors") is not None:
quieterr = True
def extract_stamp(d, fn):
"""
Extracts stamp format which is either a data dictionary (fn unset)
or a dataCache entry (fn set).
"""
if fn:
return d.stamp[fn]
return data.getVar('STAMP', d, 1)
return _exec_task(fn, task, d, quieterr)
except Exception:
from traceback import format_exc
if not quieterr:
logger.error("Build of %s failed" % (task))
logger.error(format_exc())
failedevent = TaskFailed(task, None, d)
event.fire(failedevent, d)
return 1
def stamp_internal(taskname, d, file_name):
def stamp_internal(task, d, file_name):
"""
Internal stamp helper function
Removes any stamp for the given task
Makes sure the stamp directory exists
Returns the stamp path+filename
In the bitbake core, d can be a CacheData and file_name will be set.
When called in task context, d will be a data store, file_name will not be set
"""
taskflagname = taskname
if taskname.endswith("_setscene") and taskname != "do_setscene":
taskflagname = taskname.replace("_setscene", "")
if file_name:
stamp = d.stamp[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVar('STAMP', True)
file_name = d.getVar('BB_FILENAME', True)
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""
stamp = extract_stamp(d, file_name)
if not stamp:
return
stamp = bb.parse.siggen.stampfile(stamp, file_name, taskname, extrainfo)
stamp = "%s.%s" % (stamp, task)
bb.utils.mkdirhier(os.path.dirname(stamp))
# Remove the file and recreate to force timestamp
# change on broken NFS filesystems
if os.access(stamp, os.F_OK):
os.remove(stamp)
return stamp
def make_stamp(task, d, file_name = None):
@@ -404,10 +389,7 @@ def make_stamp(task, d, file_name = None):
(d can be a data dict or dataCache)
"""
stamp = stamp_internal(task, d, file_name)
# Remove the file and recreate to force timestamp
# change on broken NFS filesystems
if stamp:
bb.utils.remove(stamp)
f = open(stamp, "w")
f.close()
@@ -416,15 +398,7 @@ def del_stamp(task, d, file_name = None):
Removes a stamp for a given task
(d can be a data dict or dataCache)
"""
stamp = stamp_internal(task, d, file_name)
bb.utils.remove(stamp)
def stampfile(taskname, d, file_name = None):
"""
Return the stamp for a given task
(d can be a data dict or dataCache)
"""
return stamp_internal(taskname, d, file_name)
stamp_internal(task, d, file_name)
def add_tasks(tasklist, d):
task_deps = data.getVar('_task_deps', d)
@@ -455,7 +429,6 @@ def add_tasks(tasklist, d):
getTask('recrdeptask')
getTask('nostamp')
getTask('fakeroot')
getTask('noexec')
task_deps['parents'][task] = []
for dep in flags['deps']:
dep = data.expand(dep, d)

View File

@@ -29,165 +29,27 @@
import os
import logging
from collections import defaultdict, namedtuple
import bb.data
import bb.utils
logger = logging.getLogger("BitBake.Cache")
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")
__cache_version__ = "138"
__cache_version__ = "132"
recipe_fields = (
'pn',
'pv',
'pr',
'pe',
'defaultpref',
'depends',
'provides',
'task_deps',
'stamp',
'stamp_extrainfo',
'broken',
'not_world',
'skipped',
'timestamp',
'packages',
'packages_dynamic',
'rdepends',
'rdepends_pkg',
'rprovides',
'rprovides_pkg',
'rrecommends',
'rrecommends_pkg',
'nocache',
'variants',
'file_depends',
'tasks',
'basetaskhashes',
'hashfilename',
'inherits',
'summary',
'license',
'section',
'fakerootenv',
'fakerootdirs'
)
class RecipeInfo(namedtuple('RecipeInfo', recipe_fields)):
__slots__ = ()
@classmethod
def listvar(cls, var, metadata):
return cls.getvar(var, metadata).split()
@classmethod
def intvar(cls, var, metadata):
return int(cls.getvar(var, metadata) or 0)
@classmethod
def depvar(cls, var, metadata):
return bb.utils.explode_deps(cls.getvar(var, metadata))
@classmethod
def pkgvar(cls, var, packages, metadata):
return dict((pkg, cls.depvar("%s_%s" % (var, pkg), metadata))
for pkg in packages)
@classmethod
def taskvar(cls, var, tasks, metadata):
return dict((task, cls.getvar("%s_task-%s" % (var, task), metadata))
for task in tasks)
@classmethod
def flaglist(cls, flag, varlist, metadata):
return dict((var, metadata.getVarFlag(var, flag, True))
for var in varlist)
@classmethod
def getvar(cls, var, metadata):
return metadata.getVar(var, True) or ''
@classmethod
def make_optional(cls, default=None, **kwargs):
"""Construct the namedtuple from the specified keyword arguments,
with every value considered optional, using the default value if
it was not specified."""
for field in cls._fields:
kwargs[field] = kwargs.get(field, default)
return cls(**kwargs)
@classmethod
def from_metadata(cls, filename, metadata):
if cls.getvar('__SKIPPED', metadata):
return cls.make_optional(skipped=True)
tasks = metadata.getVar('__BBTASKS', False)
pn = cls.getvar('PN', metadata)
packages = cls.listvar('PACKAGES', metadata)
if not pn in packages:
packages.append(pn)
return RecipeInfo(
tasks = tasks,
basetaskhashes = cls.taskvar('BB_BASEHASH', tasks, metadata),
hashfilename = cls.getvar('BB_HASHFILENAME', metadata),
file_depends = metadata.getVar('__depends', False),
task_deps = metadata.getVar('_task_deps', False) or
{'tasks': [], 'parents': {}},
variants = cls.listvar('__VARIANTS', metadata) + [''],
skipped = False,
timestamp = bb.parse.cached_mtime(filename),
packages = cls.listvar('PACKAGES', metadata),
pn = pn,
pe = cls.getvar('PE', metadata),
pv = cls.getvar('PV', metadata),
pr = cls.getvar('PR', metadata),
nocache = cls.getvar('__BB_DONT_CACHE', metadata),
defaultpref = cls.intvar('DEFAULT_PREFERENCE', metadata),
broken = cls.getvar('BROKEN', metadata),
not_world = cls.getvar('EXCLUDE_FROM_WORLD', metadata),
stamp = cls.getvar('STAMP', metadata),
stamp_extrainfo = cls.flaglist('stamp-extra-info', tasks, metadata),
packages_dynamic = cls.listvar('PACKAGES_DYNAMIC', metadata),
depends = cls.depvar('DEPENDS', metadata),
provides = cls.depvar('PROVIDES', metadata),
rdepends = cls.depvar('RDEPENDS', metadata),
rprovides = cls.depvar('RPROVIDES', metadata),
rrecommends = cls.depvar('RRECOMMENDS', metadata),
rprovides_pkg = cls.pkgvar('RPROVIDES', packages, metadata),
rdepends_pkg = cls.pkgvar('RDEPENDS', packages, metadata),
rrecommends_pkg = cls.pkgvar('RRECOMMENDS', packages, metadata),
inherits = cls.getvar('__inherit_cache', metadata),
summary = cls.getvar('SUMMARY', metadata),
license = cls.getvar('LICENSE', metadata),
section = cls.getvar('SECTION', metadata),
fakerootenv = cls.getvar('FAKEROOTENV', metadata),
fakerootdirs = cls.getvar('FAKEROOTDIRS', metadata),
)
class Cache(object):
class Cache:
"""
BitBake Cache implementation
"""
def __init__(self, data):
self.cachedir = bb.data.getVar("CACHE", data, True)
self.clean = set()
self.checked = set()
self.clean = {}
self.checked = {}
self.depends_cache = {}
self.data = None
self.data_fn = None
@@ -195,74 +57,92 @@ class Cache(object):
if self.cachedir in [None, '']:
self.has_cache = False
logger.info("Not using a cache. "
"Set CACHE = <directory> to enable.")
bb.msg.note(1, bb.msg.domain.Cache, "Not using a cache. Set CACHE = <directory> to enable.")
return
self.has_cache = True
self.cachefile = os.path.join(self.cachedir, "bb_cache.dat")
logger.debug(1, "Using cache in '%s'", self.cachedir)
bb.msg.debug(1, bb.msg.domain.Cache, "Using cache in '%s'" % self.cachedir)
bb.utils.mkdirhier(self.cachedir)
# If any of configuration.data's dependencies are newer than the
# cache there isn't even any point in loading it...
newest_mtime = 0
deps = bb.data.getVar("__base_depends", data)
deps = bb.data.getVar("__depends", data)
old_mtimes = [old_mtime for _, old_mtime in deps]
old_mtimes = [old_mtime for f, old_mtime in deps]
old_mtimes.append(newest_mtime)
newest_mtime = max(old_mtimes)
if bb.parse.cached_mtime_noerror(self.cachefile) >= newest_mtime:
self.load_cachefile()
elif os.path.isfile(self.cachefile):
logger.info("Out of date cache found, rebuilding...")
def load_cachefile(self):
with open(self.cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
try:
cache_ver = pickled.load()
bitbake_ver = pickled.load()
except Exception:
logger.info('Invalid cache, rebuilding...')
return
p = pickle.Unpickler(file(self.cachefile, "rb"))
self.depends_cache, version_data = p.load()
if version_data['CACHE_VER'] != __cache_version__:
raise ValueError('Cache Version Mismatch')
if version_data['BITBAKE_VER'] != bb.__version__:
raise ValueError('Bitbake Version Mismatch')
except EOFError:
bb.msg.note(1, bb.msg.domain.Cache, "Truncated cache found, rebuilding...")
self.depends_cache = {}
except:
bb.msg.note(1, bb.msg.domain.Cache, "Invalid cache found, rebuilding...")
self.depends_cache = {}
else:
if os.path.isfile(self.cachefile):
bb.msg.note(1, bb.msg.domain.Cache, "Out of date cache found, rebuilding...")
if cache_ver != __cache_version__:
logger.info('Cache version mismatch, rebuilding...')
return
elif bitbake_ver != bb.__version__:
logger.info('Bitbake version mismatch, rebuilding...')
return
def getVar(self, var, fn, exp = 0):
"""
Gets the value of a variable
(similar to getVar in the data class)
cachesize = os.fstat(cachefile.fileno()).st_size
bb.event.fire(bb.event.CacheLoadStarted(cachesize), self.data)
There are two scenarios:
1. We have cached data - serve from depends_cache[fn]
2. We're learning what data to cache - serve from data
backend but add a copy of the data to the cache.
"""
if fn in self.clean:
return self.depends_cache[fn][var]
previous_percent = 0
while cachefile:
try:
key = pickled.load()
value = pickled.load()
except Exception:
break
self.depends_cache.setdefault(fn, {})
self.depends_cache[key] = value
if fn != self.data_fn:
# We're trying to access data in the cache which doesn't exist yet;
# setData hasn't been called to set up the right access. Very bad.
bb.msg.error(bb.msg.domain.Cache, "Parsing error data_fn %s and fn %s don't match" % (self.data_fn, fn))
# only fire events on even percentage boundaries
current_progress = cachefile.tell()
current_percent = 100 * current_progress / cachesize
if current_percent > previous_percent:
previous_percent = current_percent
bb.event.fire(bb.event.CacheLoadProgress(current_progress),
self.data)
self.cacheclean = False
result = bb.data.getVar(var, self.data, exp)
self.depends_cache[fn][var] = result
return result
bb.event.fire(bb.event.CacheLoadCompleted(cachesize,
len(self.depends_cache)),
self.data)
def setData(self, virtualfn, fn, data):
"""
Called to prime bb_cache ready to learn which variables to cache.
Will be followed by calls to self.getVar which aren't cached
but can be fulfilled from self.data.
"""
self.data_fn = virtualfn
self.data = data
@staticmethod
def virtualfn2realfn(virtualfn):
# Make sure __depends makes the depends_cache
# If we're a virtual class we need to make sure all our depends are appended
# to the depends of fn.
depends = self.getVar("__depends", virtualfn) or set()
self.depends_cache.setdefault(fn, {})
if "__depends" not in self.depends_cache[fn] or not self.depends_cache[fn]["__depends"]:
self.depends_cache[fn]["__depends"] = depends
else:
self.depends_cache[fn]["__depends"].update(depends)
# Make sure the variants always make it into the cache too
self.getVar('__VARIANTS', virtualfn, True)
self.depends_cache[virtualfn]["CACHETIMESTAMP"] = bb.parse.cached_mtime(fn)
def virtualfn2realfn(self, virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
"""
@@ -272,94 +152,79 @@ class Cache(object):
if virtualfn.startswith('virtual:'):
cls = virtualfn.split(':', 2)[1]
fn = virtualfn.replace('virtual:' + cls + ':', '')
#bb.msg.debug(2, bb.msg.domain.Cache, "virtualfn2realfn %s to %s %s" % (virtualfn, fn, cls))
return (fn, cls)
@staticmethod
def realfn2virtual(realfn, cls):
def realfn2virtual(self, realfn, cls):
"""
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if cls == "":
#bb.msg.debug(2, bb.msg.domain.Cache, "realfn2virtual %s and '%s' to %s" % (realfn, cls, realfn))
return realfn
#bb.msg.debug(2, bb.msg.domain.Cache, "realfn2virtual %s and %s to %s" % (realfn, cls, "virtual:" + cls + ":" + realfn))
return "virtual:" + cls + ":" + realfn
@classmethod
def loadDataFull(cls, virtualfn, appends, cfgData):
def loadDataFull(self, virtualfn, appends, cfgData):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
(fn, virtual) = cls.virtualfn2realfn(virtualfn)
(fn, cls) = self.virtualfn2realfn(virtualfn)
logger.debug(1, "Parsing %s (full)", fn)
bb.msg.debug(1, bb.msg.domain.Cache, "Parsing %s (full)" % fn)
bb_data = cls.load_bbfile(fn, appends, cfgData)
return bb_data[virtual]
@classmethod
def parse(cls, filename, appends, configdata):
"""Parse the specified filename, returning the recipe information"""
infos = []
datastores = cls.load_bbfile(filename, appends, configdata)
depends = set()
for variant, data in sorted(datastores.iteritems(),
key=lambda i: i[0],
reverse=True):
virtualfn = cls.realfn2virtual(filename, variant)
depends |= (data.getVar("__depends", False) or set())
if depends and not variant:
data.setVar("__depends", depends)
info = RecipeInfo.from_metadata(filename, data)
infos.append((virtualfn, info))
return infos
def load(self, filename, appends, configdata):
"""Obtain the recipe information for the specified filename,
using cached values if available, otherwise parsing.
Note that if it does parse to obtain the info, it will not
automatically add the information to the cache or to your
CacheData. Use the add or add_info method to do so after
running this, or use loadData instead."""
cached = self.cacheValid(filename)
if cached:
infos = []
info = self.depends_cache[filename]
for variant in info.variants:
virtualfn = self.realfn2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
else:
logger.debug(1, "Parsing %s", filename)
return self.parse(filename, appends, configdata)
return cached, infos
bb_data = self.load_bbfile(fn, appends, cfgData)
return bb_data[cls]
def loadData(self, fn, appends, cfgData, cacheData):
"""Load the recipe info for the specified filename,
parsing and adding to the cache if necessary, and adding
the recipe information to the supplied CacheData instance."""
skipped, virtuals = 0, 0
"""
Load a subset of data for fn.
If the cached data is valid we do nothing; otherwise we parse the file,
with the system set up to record the variables accessed.
Return the cache status and whether the file was skipped when parsed
"""
skipped = 0
virtuals = 0
cached, infos = self.load(fn, appends, cfgData)
for virtualfn, info in infos:
if info.skipped:
logger.debug(1, "Skipping %s", virtualfn)
skipped += 1
else:
self.add_info(virtualfn, info, cacheData, not cached)
if fn not in self.checked:
self.cacheValidUpdate(fn)
if self.cacheValid(fn):
multi = self.getVar('__VARIANTS', fn, True)
for cls in (multi or "").split() + [""]:
virtualfn = self.realfn2virtual(fn, cls)
if self.depends_cache[virtualfn]["__SKIPPED"]:
skipped += 1
bb.msg.debug(1, bb.msg.domain.Cache, "Skipping %s" % virtualfn)
continue
self.handle_data(virtualfn, cacheData)
virtuals += 1
return True, skipped, virtuals
bb.msg.debug(1, bb.msg.domain.Cache, "Parsing %s" % fn)
bb_data = self.load_bbfile(fn, appends, cfgData)
for data in bb_data:
virtualfn = self.realfn2virtual(fn, data)
self.setData(virtualfn, fn, bb_data[data])
if self.getVar("__SKIPPED", virtualfn):
skipped += 1
bb.msg.debug(1, bb.msg.domain.Cache, "Skipping %s" % virtualfn)
else:
self.handle_data(virtualfn, cacheData)
virtuals += 1
return False, skipped, virtuals
return cached, skipped, virtuals
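A hedged usage sketch of loadData() as documented above; cache, fn, cfgdata and cachedata are assumed to exist in the caller (see init() further down):

cached, skipped, virtuals = cache.loadData(fn, [], cfgdata, cachedata)
logger.debug(1, "%s: cached=%s, %d skipped, %d virtuals",
             fn, cached, skipped, virtuals)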
def cacheValid(self, fn):
"""
Is the cache valid for fn?
Fast version, no timestamps checked.
"""
if fn not in self.checked:
self.cacheValidUpdate(fn)
# Is cache enabled?
if not self.has_cache:
return False
@@ -376,67 +241,70 @@ class Cache(object):
if not self.has_cache:
return False
self.checked.add(fn)
self.checked[fn] = ""
# Pretend we're clean so getVar works
self.clean[fn] = ""
# File isn't in depends_cache
if not fn in self.depends_cache:
logger.debug(2, "Cache: %s is not cached", fn)
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s is not cached" % fn)
self.remove(fn)
return False
mtime = bb.parse.cached_mtime_noerror(fn)
# Check file still exists
if mtime == 0:
logger.debug(2, "Cache: %s no longer exists", fn)
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s no longer exists" % fn)
self.remove(fn)
return False
info = self.depends_cache[fn]
# Check the file's timestamp
if mtime != info.timestamp:
logger.debug(2, "Cache: %s changed", fn)
if mtime != self.getVar("CACHETIMESTAMP", fn, True):
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s changed" % fn)
self.remove(fn)
return False
# Check dependencies are still valid
depends = info.file_depends
depends = self.getVar("__depends", fn, True)
if depends:
for f, old_mtime in depends:
fmtime = bb.parse.cached_mtime_noerror(f)
# Check if file still exists
if old_mtime != 0 and fmtime == 0:
logger.debug(2, "Cache: %s's dependency %s was removed",
fn, f)
self.remove(fn)
return False
if (fmtime != old_mtime):
logger.debug(2, "Cache: %s's dependency %s changed",
fn, f)
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s's dependency %s changed" % (fn, f))
self.remove(fn)
return False
#bb.msg.debug(2, bb.msg.domain.Cache, "Depends Cache: %s is clean" % fn)
if not fn in self.clean:
self.clean[fn] = ""
invalid = False
for cls in info.variants:
# Mark extended class data as clean too
multi = self.getVar('__VARIANTS', fn, True)
for cls in (multi or "").split():
virtualfn = self.realfn2virtual(fn, cls)
self.clean.add(virtualfn)
if virtualfn not in self.depends_cache:
logger.debug(2, "Cache: %s is not cached", virtualfn)
self.clean[virtualfn] = ""
if not virtualfn in self.depends_cache:
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s is not cached" % virtualfn)
invalid = True
# If any one of the variants is not present, mark as invalid for all
# If any one of the varients is not present, mark cache as invalid for all
if invalid:
for cls in info.variants:
for cls in (multi or "").split():
virtualfn = self.realfn2virtual(fn, cls)
if virtualfn in self.clean:
logger.debug(2, "Cache: Removing %s from cache", virtualfn)
self.clean.remove(virtualfn)
if fn in self.clean:
logger.debug(2, "Cache: Marking %s as not clean", fn)
self.clean.remove(fn)
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: Removing %s from cache" % virtualfn)
del self.clean[virtualfn]
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: Removing %s from cache" % fn)
del self.clean[fn]
return False
self.clean.add(fn)
return True
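The dependency walk above reduces to a simple predicate (sketch): a cache entry stays clean only while every recorded (file, mtime) pair still matches what is on disk.

def deps_unchanged(depends):
    return all(bb.parse.cached_mtime_noerror(f) == old_mtime
               for f, old_mtime in (depends or []))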
def remove(self, fn):
@@ -444,61 +312,154 @@ class Cache(object):
Remove a fn from the cache
Called from the parser in error cases
"""
bb.msg.debug(1, bb.msg.domain.Cache, "Removing %s from cache" % fn)
if fn in self.depends_cache:
logger.debug(1, "Removing %s from cache", fn)
del self.depends_cache[fn]
if fn in self.clean:
logger.debug(1, "Marking %s as unclean", fn)
self.clean.remove(fn)
del self.clean[fn]
def sync(self):
"""
Save the cache
Called from the parser when complete (or exiting)
"""
import copy
if not self.has_cache:
return
if self.cacheclean:
logger.debug(2, "Cache is clean, not saving.")
bb.msg.note(1, bb.msg.domain.Cache, "Cache is clean, not saving.")
return
with open(self.cachefile, "wb") as cachefile:
pickler = pickle.Pickler(cachefile, pickle.HIGHEST_PROTOCOL)
pickler.dump(__cache_version__)
pickler.dump(bb.__version__)
for key, value in self.depends_cache.iteritems():
pickler.dump(key)
pickler.dump(value)
version_data = {}
version_data['CACHE_VER'] = __cache_version__
version_data['BITBAKE_VER'] = bb.__version__
del self.depends_cache
cache_data = copy.copy(self.depends_cache)
for fn in self.depends_cache:
if '__BB_DONT_CACHE' in self.depends_cache[fn] and self.depends_cache[fn]['__BB_DONT_CACHE']:
bb.msg.debug(2, bb.msg.domain.Cache, "Not caching %s, marked as not cacheable" % fn)
del cache_data[fn]
elif 'PV' in self.depends_cache[fn] and 'SRCREVINACTION' in self.depends_cache[fn]['PV']:
bb.msg.error(bb.msg.domain.Cache, "Not caching %s as it had SRCREVINACTION in PV. Please report this bug" % fn)
del cache_data[fn]
@staticmethod
def mtime(cachefile):
p = pickle.Pickler(file(self.cachefile, "wb" ), -1 )
p.dump([cache_data, version_data])
def mtime(self, cachefile):
return bb.parse.cached_mtime_noerror(cachefile)
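A sketch of the on-disk layout implied by sync() and load_cachefile() on the new side of this file: two version pickles followed by alternating key/value pickles (read_cachefile and its path argument are illustrative):

def read_cachefile(path):
    entries = {}
    with open(path, "rb") as f:
        unpickler = pickle.Unpickler(f)
        if unpickler.load() != __cache_version__ or unpickler.load() != bb.__version__:
            return entries  # version mismatch: caller rebuilds
        while True:
            try:
                key = unpickler.load()
                entries[key] = unpickler.load()
            except EOFError:
                break
    return entries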
def add_info(self, filename, info, cacheData, parsed=None):
cacheData.add_from_recipeinfo(filename, info)
if not self.has_cache:
return
if 'SRCREVINACTION' not in info.pv and not info.nocache:
if parsed:
self.cacheclean = False
self.depends_cache[filename] = info
def add(self, file_name, data, cacheData, parsed=None):
def handle_data(self, file_name, cacheData):
"""
Save data we need into the cache
"""
realfn = self.virtualfn2realfn(file_name)[0]
info = RecipeInfo.from_metadata(realfn, data)
self.add_info(file_name, info, cacheData, parsed)
pn = self.getVar('PN', file_name, True)
pe = self.getVar('PE', file_name, True) or "0"
pv = self.getVar('PV', file_name, True)
if 'SRCREVINACTION' in pv:
bb.msg.note(1, bb.msg.domain.Cache, "Found SRCREVINACTION in PV (%s) or %s. Please report this bug." % (pv, file_name))
pr = self.getVar('PR', file_name, True)
dp = int(self.getVar('DEFAULT_PREFERENCE', file_name, True) or "0")
depends = bb.utils.explode_deps(self.getVar("DEPENDS", file_name, True) or "")
packages = (self.getVar('PACKAGES', file_name, True) or "").split()
packages_dynamic = (self.getVar('PACKAGES_DYNAMIC', file_name, True) or "").split()
rprovides = (self.getVar("RPROVIDES", file_name, True) or "").split()
@staticmethod
def load_bbfile(bbfile, appends, config):
cacheData.task_deps[file_name] = self.getVar("_task_deps", file_name)
# build PackageName to FileName lookup table
if pn not in cacheData.pkg_pn:
cacheData.pkg_pn[pn] = []
cacheData.pkg_pn[pn].append(file_name)
cacheData.stamp[file_name] = self.getVar('STAMP', file_name, True)
cacheData.tasks[file_name] = self.getVar('__BBTASKS', file_name, True)
for t in cacheData.tasks[file_name]:
cacheData.basetaskhash[file_name + "." + t] = self.getVar("BB_BASEHASH_task-%s" % t, file_name, True)
# build FileName to PackageName lookup table
cacheData.pkg_fn[file_name] = pn
cacheData.pkg_pepvpr[file_name] = (pe, pv, pr)
cacheData.pkg_dp[file_name] = dp
provides = [pn]
for provide in (self.getVar("PROVIDES", file_name, True) or "").split():
if provide not in provides:
provides.append(provide)
# Build forward and reverse provider hashes
# Forward: virtual -> [filenames]
# Reverse: PN -> [virtuals]
if pn not in cacheData.pn_provides:
cacheData.pn_provides[pn] = []
cacheData.fn_provides[file_name] = provides
for provide in provides:
if provide not in cacheData.providers:
cacheData.providers[provide] = []
cacheData.providers[provide].append(file_name)
if not provide in cacheData.pn_provides[pn]:
cacheData.pn_provides[pn].append(provide)
cacheData.deps[file_name] = []
for dep in depends:
if not dep in cacheData.deps[file_name]:
cacheData.deps[file_name].append(dep)
if not dep in cacheData.all_depends:
cacheData.all_depends.append(dep)
# Build reverse hash for PACKAGES, so runtime dependencies
# can be resolved (RDEPENDS, RRECOMMENDS etc.)
for package in packages:
if not package in cacheData.packages:
cacheData.packages[package] = []
cacheData.packages[package].append(file_name)
rprovides += (self.getVar("RPROVIDES_%s" % package, file_name, 1) or "").split()
for package in packages_dynamic:
if not package in cacheData.packages_dynamic:
cacheData.packages_dynamic[package] = []
cacheData.packages_dynamic[package].append(file_name)
for rprovide in rprovides:
if not rprovide in cacheData.rproviders:
cacheData.rproviders[rprovide] = []
cacheData.rproviders[rprovide].append(file_name)
# Build hash of runtime depends and recommends
if not file_name in cacheData.rundeps:
cacheData.rundeps[file_name] = {}
if not file_name in cacheData.runrecs:
cacheData.runrecs[file_name] = {}
rdepends = self.getVar('RDEPENDS', file_name, True) or ""
rrecommends = self.getVar('RRECOMMENDS', file_name, True) or ""
for package in packages + [pn]:
if not package in cacheData.rundeps[file_name]:
cacheData.rundeps[file_name][package] = []
if not package in cacheData.runrecs[file_name]:
cacheData.runrecs[file_name][package] = []
cacheData.rundeps[file_name][package] = rdepends + " " + (self.getVar("RDEPENDS_%s" % package, file_name, True) or "")
cacheData.runrecs[file_name][package] = rrecommends + " " + (self.getVar("RRECOMMENDS_%s" % package, file_name, True) or "")
# Collect files we may need for possible world-dep
# calculations
if not self.getVar('BROKEN', file_name, True) and not self.getVar('EXCLUDE_FROM_WORLD', file_name, True):
cacheData.possible_world.append(file_name)
cacheData.hashfn[file_name] = self.getVar('BB_HASHFILENAME', file_name, True)
# Touch this to make sure it's in the cache
self.getVar('__BB_DONT_CACHE', file_name, True)
self.getVar('__VARIANTS', file_name, True)
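A sketch of querying the lookup tables handle_data() fills in above; the 'virtual/kernel' provide is illustrative and cacheData is assumed to be populated:

for fn in cacheData.providers.get('virtual/kernel', []):
    pe, pv, pr = cacheData.pkg_pepvpr[fn]
    print("%s provides virtual/kernel at %s:%s-%s" % (cacheData.pkg_fn[fn], pe, pv, pr))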
def load_bbfile(self, bbfile, appends, config):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
@@ -524,16 +485,13 @@ class Cache(object):
try:
if appends:
data.setVar('__BBAPPEND', " ".join(appends), bb_data)
bb_data = parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)
bb_data = parse.handle(bbfile, bb_data) # read .bb data
if chdir_back: os.chdir(oldpath)
return bb_data
except:
if chdir_back:
os.chdir(oldpath)
if chdir_back: os.chdir(oldpath)
raise
def init(cooker):
"""
The Objective: Cache the minimum amount of data possible yet get to the
@@ -554,104 +512,48 @@ def init(cooker):
return Cache(cooker.configuration.data)
class CacheData(object):
#============================================================================#
# CacheData
#============================================================================#
class CacheData:
"""
The data structures we compile from the cached data
"""
def __init__(self):
# Direct cache variables
self.providers = defaultdict(list)
self.rproviders = defaultdict(list)
self.packages = defaultdict(list)
self.packages_dynamic = defaultdict(list)
"""
Direct cache variables
(from Cache.handle_data)
"""
self.providers = {}
self.rproviders = {}
self.packages = {}
self.packages_dynamic = {}
self.possible_world = []
self.pkg_pn = defaultdict(list)
self.pkg_pn = {}
self.pkg_fn = {}
self.pkg_pepvpr = {}
self.pkg_dp = {}
self.pn_provides = defaultdict(list)
self.pn_provides = {}
self.fn_provides = {}
self.all_depends = []
self.deps = defaultdict(list)
self.rundeps = defaultdict(lambda: defaultdict(list))
self.runrecs = defaultdict(lambda: defaultdict(list))
self.deps = {}
self.rundeps = {}
self.runrecs = {}
self.task_queues = {}
self.task_deps = {}
self.stamp = {}
self.stamp_extrainfo = {}
self.preferred = {}
self.tasks = {}
self.basetaskhash = {}
self.hashfn = {}
self.inherits = {}
self.summary = {}
self.license = {}
self.section = {}
self.fakerootenv = {}
self.fakerootdirs = {}
# Indirect Cache variables (set elsewhere)
"""
Indirect Cache variables
(set elsewhere)
"""
self.ignored_dependencies = []
self.world_target = set()
self.bbfile_priority = {}
self.bbfile_config_priorities = []
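The defaultdict change in __init__ above removes the explicit "if key not in dict" initialization that handle_data() needed; a sketch of the two-level case used for rundeps/runrecs:

from collections import defaultdict
rundeps = defaultdict(lambda: defaultdict(list))
rundeps['/meta/recipes/foo_1.0.bb']['foo'].append('bar')  # both levels auto-create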
def add_from_recipeinfo(self, fn, info):
self.task_deps[fn] = info.task_deps
self.pkg_fn[fn] = info.pn
self.pkg_pn[info.pn].append(fn)
self.pkg_pepvpr[fn] = (info.pe, info.pv, info.pr)
self.pkg_dp[fn] = info.defaultpref
self.stamp[fn] = info.stamp
self.stamp_extrainfo[fn] = info.stamp_extrainfo
provides = [info.pn]
for provide in info.provides:
if provide not in provides:
provides.append(provide)
self.fn_provides[fn] = provides
for provide in provides:
self.providers[provide].append(fn)
if provide not in self.pn_provides[info.pn]:
self.pn_provides[info.pn].append(provide)
for dep in info.depends:
if dep not in self.deps[fn]:
self.deps[fn].append(dep)
if dep not in self.all_depends:
self.all_depends.append(dep)
rprovides = info.rprovides
for package in info.packages:
self.packages[package].append(fn)
rprovides += info.rprovides_pkg[package]
for rprovide in rprovides:
self.rproviders[rprovide].append(fn)
for package in info.packages_dynamic:
self.packages_dynamic[package].append(fn)
# Build hash of runtime depends and recommends
for package in info.packages + [info.pn]:
self.rundeps[fn][package] = list(info.rdepends) + info.rdepends_pkg[package]
self.runrecs[fn][package] = list(info.rrecommends) + info.rrecommends_pkg[package]
# Collect files we may need for possible world-dep
# calculations
if not info.broken and not info.not_world:
self.possible_world.append(fn)
self.hashfn[fn] = info.hashfilename
for task, taskhash in info.basetaskhashes.iteritems():
identifier = '%s.%s' % (fn, task)
self.basetaskhash[identifier] = taskhash
self.inherits[fn] = info.inherits
self.summary[fn] = info.summary
self.license[fn] = info.license
self.section[fn] = info.section
self.fakerootenv[fn] = info.fakerootenv
self.fakerootdirs[fn] = info.fakerootdirs


@@ -1,21 +1,16 @@
from pysh import pyshyacc, pyshlex
from itertools import chain
from bb import msg, utils
import ast
import codegen
import logging
import os.path
import bb.utils, bb.data
from itertools import chain
from pysh import pyshyacc, pyshlex, sherrors
logger = logging.getLogger('BitBake.CodeParser')
PARSERCACHE_VERSION = 2
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info('Importing cPickle failed. Falling back to a very slow implementation.')
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")
def check_indent(codestr):
"""If the code is indented, add a top level piece of code to 'remove' the indentation"""
@@ -28,7 +23,7 @@ def check_indent(codestr):
return codestr
if codestr[i-1] is " " or codestr[i-1] is "\t":
return "if 1:\n" + codestr
return "if 1:\n" + codestr
return codestr
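check_indent() in action (sketch): indented code is wrapped so that compile() will accept it, while flush-left code passes through untouched.

print(check_indent("    x = 1\n"))  # -> "if 1:\n    x = 1\n"
print(check_indent("x = 1\n"))      # -> "x = 1\n" (unchanged)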
@@ -36,18 +31,15 @@ pythonparsecache = {}
shellparsecache = {}
def parser_cachefile(d):
cachedir = (bb.data.getVar("PERSISTENT_DIR", d, True) or
bb.data.getVar("CACHE", d, True))
cachedir = bb.data.getVar("PERSISTENT_DIR", d, True) or bb.data.getVar("CACHE", d, True)
if cachedir in [None, '']:
return None
bb.utils.mkdirhier(cachedir)
cachefile = os.path.join(cachedir, "bb_codeparser.dat")
logger.debug(1, "Using cache in '%s' for codeparser cache", cachefile)
bb.msg.debug(1, bb.msg.domain.Cache, "Using cache in '%s' for codeparser cache" % cachefile)
return cachefile
def parser_cache_init(d):
global pythonparsecache
global shellparsecache
cachefile = parser_cachefile(d)
if not cachefile:
@@ -62,16 +54,17 @@ def parser_cache_init(d):
if version != PARSERCACHE_VERSION:
return
pythonparsecache = data[0]
shellparsecache = data[1]
bb.codeparser.pythonparsecache = data[0]
bb.codeparser.shellparsecache = data[1]
def parser_cache_save(d):
cachefile = parser_cachefile(d)
if not cachefile:
return
p = pickle.Pickler(file(cachefile, "wb"), -1)
p.dump([[pythonparsecache, shellparsecache], PARSERCACHE_VERSION])
p.dump([[bb.codeparser.pythonparsecache, bb.codeparser.shellparsecache], PARSERCACHE_VERSION])
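A hedged sketch of reading back the single pickle parser_cache_save() writes above, mirroring the version check in parser_cache_init(); cachefile is assumed to come from parser_cachefile(d):

with open(cachefile, "rb") as f:
    data, version = pickle.load(f)
if version == PARSERCACHE_VERSION:
    pythonparsecache, shellparsecache = data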
class PythonParser():
class ValueVisitor():
@@ -136,10 +129,10 @@ class PythonParser():
funcstr = codegen.to_source(func)
argstr = codegen.to_source(arg)
except TypeError:
logger.debug(2, 'Failed to convert function and argument to source form')
msg.debug(2, None, "Failed to convert function and argument to source form")
else:
logger.debug(1, "Warning: in call to '%s', argument '%s' is "
"not a literal", funcstr, argstr)
msg.debug(1, None, "Warning: in call to '%s', argument '%s' is not a literal" %
(funcstr, argstr))
def visit_Call(self, node):
if self.compare_name(self.getvars, node.func):
@@ -191,7 +184,7 @@ class PythonParser():
self.execs = pythonparsecache[h]["execs"]
return
code = compile(check_indent(str(node)), "<string>", "exec",
code = compile(check_indent(str(node)), "<string>", "exec",
ast.PyCF_ONLY_AST)
visitor = self.ValueVisitor(code)
@@ -227,7 +220,7 @@ class ShellParser():
try:
tokens, _ = pyshyacc.parse(value, eof=True, debug=False)
except pyshlex.NeedMore:
raise sherrors.ShellSyntaxError("Unexpected EOF")
raise ShellSyntaxError("Unexpected EOF")
for token in tokens:
self.process_tokens(token)
@@ -326,11 +319,11 @@ class ShellParser():
cmd = word[1]
if cmd.startswith("$"):
logger.debug(1, "Warning: execution of non-literal "
"command '%s'", cmd)
msg.debug(1, None, "Warning: execution of non-literal command '%s'" % cmd)
elif cmd == "eval":
command = " ".join(word for _, word in words[1:])
self.parse_shell(command)
else:
self.allexecs.add(cmd)
break


@@ -35,25 +35,12 @@ import bb.data
async_cmds = {}
sync_cmds = {}
class CommandCompleted(bb.event.Event):
pass
class CommandExit(bb.event.Event):
def __init__(self, exitcode):
bb.event.Event.__init__(self)
self.exitcode = int(exitcode)
class CommandFailed(CommandExit):
def __init__(self, message):
self.error = message
CommandExit.__init__(self, 1)
class Command:
"""
A queue of asynchronous commands for bitbake
"""
def __init__(self, cooker):
self.cooker = cooker
self.cmds_sync = CommandsSync()
self.cmds_async = CommandsAsync()
@@ -94,8 +81,7 @@ class Command:
(command, options) = self.currentAsyncCommand
commandmethod = getattr(CommandsAsync, command)
needcache = getattr( commandmethod, "needcache" )
if (needcache and self.cooker.state in
(bb.cooker.state.initial, bb.cooker.state.parsing)):
if needcache and self.cooker.cookerState != bb.cooker.cookerParsed:
self.cooker.updateCache()
return True
else:
@@ -118,13 +104,11 @@ class Command:
self.finishAsyncCommand(traceback.format_exc())
return False
def finishAsyncCommand(self, msg=None, code=None):
if msg:
bb.event.fire(CommandFailed(msg), self.cooker.configuration.event_data)
elif code:
bb.event.fire(CommandExit(code), self.cooker.configuration.event_data)
def finishAsyncCommand(self, error = None):
if error:
bb.event.fire(CookerCommandFailed(error), self.cooker.configuration.event_data)
else:
bb.event.fire(CommandCompleted(), self.cooker.configuration.event_data)
bb.event.fire(CookerCommandCompleted(), self.cooker.configuration.event_data)
self.currentAsyncCommand = None
@@ -139,13 +123,13 @@ class CommandsSync:
"""
Trigger cooker 'shutdown' mode
"""
command.cooker.shutdown()
command.cooker.cookerAction = bb.cooker.cookerShutdown
def stateStop(self, command, params):
"""
Stop the cooker
"""
command.cooker.stop()
command.cooker.cookerAction = bb.cooker.cookerStop
def getCmdLineAction(self, command, params):
"""
@@ -222,27 +206,6 @@ class CommandsAsync:
command.finishAsyncCommand()
generateDotGraph.needcache = True
def generateTargetsTree(self, command, params):
"""
Generate a tree of all buildable targets.
"""
klass = params[0]
command.cooker.generateTargetsTree(klass)
command.finishAsyncCommand()
generateTargetsTree.needcache = True
def findConfigFiles(self, command, params):
"""
Find config files which provide appropriate values
for the passed configuration variable. i.e. MACHINE
"""
varname = params[0]
command.cooker.findConfigFiles(varname)
command.finishAsyncCommand()
findConfigFiles.needcache = True
def showVersions(self, command, params):
"""
Show the currently selected versions
@@ -285,8 +248,33 @@ class CommandsAsync:
"""
Parse the .bb files
"""
if bb.fetch.fetcher_compare_revisions(command.cooker.configuration.data):
command.finishAsyncCommand(code=1)
else:
command.finishAsyncCommand()
command.cooker.compareRevisions()
command.finishAsyncCommand()
compareRevisions.needcache = True
#
# Events
#
class CookerCommandCompleted(bb.event.Event):
"""
Cooker command completed
"""
def __init__(self):
bb.event.Event.__init__(self)
class CookerCommandFailed(bb.event.Event):
"""
Cooker command failed
"""
def __init__(self, error):
bb.event.Event.__init__(self)
self.error = error
class CookerCommandSetExitCode(bb.event.Event):
"""
Set the exit code for a cooker command
"""
def __init__(self, exitcode):
bb.event.Event.__init__(self)
self.exitcode = int(exitcode)
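A sketch of how a UI front end might consume the command events defined in this file; CommandFailed must be tested before its CommandExit base class:

def handle_command_event(event):
    if isinstance(event, CommandFailed):
        print("command failed: %s" % event.error)
        return event.exitcode  # always 1, per the constructor above
    if isinstance(event, CommandExit):
        return event.exitcode
    if isinstance(event, CommandCompleted):
        return 0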

File diff suppressed because it is too large


@@ -161,12 +161,10 @@ def expandKeys(alterdata, readdata = None):
def inheritFromOS(d):
"""Inherit variables from the environment."""
exportlist = bb.utils.preserved_envvars_exported()
for s in os.environ.keys():
try:
setVar(s, os.environ[s], d)
if s in exportlist:
setVarFlag(s, "export", True, d)
setVarFlag(s, "export", True, d)
except TypeError:
pass
@@ -192,8 +190,7 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
return 0
if all:
commentVal = re.sub('\n', '\n#', str(oval))
o.write('# %s=%s\n' % (var, commentVal))
o.write('# %s=%s\n' % (var, oval))
if (var.find("-") != -1 or var.find(".") != -1 or var.find('{') != -1 or var.find('}') != -1 or var.find('+') != -1) and not all:
return 0
@@ -202,7 +199,7 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
if unexport:
o.write('unset %s\n' % varExpanded)
return 0
return 1
if not val:
return 0
@@ -220,9 +217,8 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
# if we're going to output this within doublequotes,
# to a shell, we need to escape the quotes in the var
alter = re.sub('"', '\\"', val.strip())
alter = re.sub('\n', ' \\\n', alter)
o.write('%s="%s"\n' % (varExpanded, alter))
return 0
return 1
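A sketch of the quoting emit_var() performs above before writing a shell export; the FOO value is illustrative:

import re
val = 'first "quoted" line\nsecond line'
alter = re.sub('"', '\\"', val.strip())
alter = re.sub('\n', ' \\\n', alter)
print('%s="%s"' % ('FOO', alter))
# -> FOO="first \"quoted\" line \
#    second line"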
def emit_env(o=sys.__stdout__, d = init(), all=False):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
@@ -248,12 +244,6 @@ def export_vars(d):
pass
return ret
def export_envvars(v, d):
for s in os.environ.keys():
if s not in v:
v[s] = os.environ[s]
return v
def emit_func(func, o=sys.__stdout__, d = init()):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
@@ -261,7 +251,7 @@ def emit_func(func, o=sys.__stdout__, d = init()):
for key in keys:
emit_var(key, o, d, False) and o.write('\n')
emit_var(func, o, d, False) and o.write('\n')
emit_var(func, o, d, False) and o.write('\n')
newdeps = bb.codeparser.ShellParser().parse_shell(d.getVar(func, True))
seen = set()
while newdeps:
@@ -298,10 +288,9 @@ def build_dependencies(key, keys, shelldeps, d):
parser = d.expandWithRefs(d.getVar(key, False), key)
deps |= parser.references
deps = deps | (keys & parser.execs)
deps |= set((d.getVarFlag(key, "vardeps", True) or "").split())
deps -= set((d.getVarFlag(key, "vardepsexclude", True) or "").split())
deps |= set((d.getVarFlag(key, "vardeps") or "").split())
except:
bb.note("Error expanding variable %s" % key)
bb.note("Error expanding variable %s" % key)
raise
return deps
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
@@ -313,10 +302,12 @@ def generate_dependencies(d):
shelldeps = set(key for key in keys if d.getVarFlag(key, "export") and not d.getVarFlag(key, "unexport"))
deps = {}
taskdeps = {}
tasklist = bb.data.getVar('__BBTASKS', d) or []
for task in tasklist:
deps[task] = build_dependencies(task, keys, shelldeps, d)
newdeps = deps[task]
seen = set()
while newdeps:
@@ -328,8 +319,9 @@ def generate_dependencies(d):
deps[dep] = build_dependencies(dep, keys, shelldeps, d)
newdeps |= deps[dep]
newdeps -= seen
taskdeps[task] = seen | newdeps
#print "For %s: %s" % (task, str(taskdeps[task]))
return tasklist, deps
return taskdeps, deps
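A toy run of the closure loop in generate_dependencies() above; deps maps each name to the names it references directly:

deps = {'do_compile': set(['CC', 'CFLAGS']),
        'CC': set(['HOST_PREFIX']),
        'CFLAGS': set(),
        'HOST_PREFIX': set()}
seen = set()
newdeps = deps['do_compile']
while newdeps:
    nextdeps = newdeps
    seen |= nextdeps
    newdeps = set()
    for dep in nextdeps:
        newdeps |= deps[dep]
    newdeps -= seen
print(sorted(seen | newdeps))  # ['CC', 'CFLAGS', 'HOST_PREFIX']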
def inherits_class(klass, d):
val = getVar('__inherit_cache', d) or []


@@ -28,21 +28,17 @@ BitBake build tools.
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import copy, re
from collections import MutableMapping
import logging
import bb, bb.codeparser
import copy, re, sys
import bb
from bb import utils
from bb.COW import COWDictBase
logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = ["_append", "_prepend"]
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend)(_(?P<add>.*))?')
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
class VariableParse:
def __init__(self, varname, d, val = None):
self.varname = varname
@@ -73,35 +69,11 @@ class VariableParse:
self.references |= parser.references
self.execs |= parser.execs
value = utils.better_eval(codeobj, DataContext(self.d))
value = utils.better_eval(codeobj, {"d": self.d})
return str(value)
class DataContext(dict):
def __init__(self, metadata, **kwargs):
self.metadata = metadata
dict.__init__(self, **kwargs)
self['d'] = metadata
def __missing__(self, key):
value = self.metadata.getVar(key, True)
if value is None or self.metadata.getVarFlag(key, 'func'):
raise KeyError(key)
else:
return value
class ExpansionError(Exception):
def __init__(self, varname, expression, exception):
self.expression = expression
self.variablename = varname
self.exception = exception
self.msg = "Failure expanding variable %s, expression was %s which triggered exception %s: %s" % (varname, expression, type(exception).__name__, exception)
Exception.__init__(self, self.msg)
self.args = (varname, expression, exception)
def __str__(self):
return self.msg
class DataSmart(MutableMapping):
class DataSmart:
def __init__(self, special = COWDictBase.copy(), seen = COWDictBase.copy() ):
self.dict = {}
@@ -128,10 +100,11 @@ class DataSmart(MutableMapping):
s = __expand_python_regexp__.sub(varparse.python_sub, s)
if s == olds:
break
except ExpansionError:
except KeyboardInterrupt:
raise
except:
bb.msg.note(1, bb.msg.domain.Data, "%s:%s while evaluating:\n%s" % (sys.exc_info()[0], sys.exc_info()[1], s))
raise
except Exception as exc:
raise ExpansionError(varname, s, exc)
varparse.value = s
@@ -142,7 +115,7 @@ class DataSmart(MutableMapping):
def expand(self, s, varname):
return self.expandWithRefs(s, varname).value
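An expansion sketch for the two regexps defined near the top of this file, assuming the surrounding bitbake modules are importable; ${...} expands variable references and ${@...} evaluates inline python:

d = DataSmart()
d.setVar('PN', 'foo')
d.setVar('PV', '1.0')
print(d.expand("${PN}-${PV}", None))  # -> foo-1.0
print(d.expand("${@2 + 3}", None))    # -> 5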
def finalize(self):
"""Performs final steps upon the datastore, including application of overrides"""
@@ -176,34 +149,39 @@ class DataSmart(MutableMapping):
for var in vars:
name = var[:-l]
try:
self.setVar(name, self.getVar(var, False))
self[name] = self[var]
except Exception:
logger.info("Untracked delVar")
bb.msg.note(1, bb.msg.domain.Data, "Untracked delVar")
# now on to the appends and prepends
for op in __setvar_keyword__:
if op in self._special_values:
appends = self._special_values[op] or []
for append in appends:
keep = []
for (a, o) in self.getVarFlag(append, op) or []:
if o and not o in overrides:
keep.append((a ,o))
continue
if "_append" in self._special_values:
appends = self._special_values["_append"] or []
for append in appends:
for (a, o) in self.getVarFlag(append, "_append") or []:
# maybe the OVERRIDE was not yet added so keep the append
if (o and o in overrides) or not o:
self.delVarFlag(append, "_append")
if o and not o in overrides:
continue
if op is "_append":
sval = self.getVar(append, False) or ""
sval += a
self.setVar(append, sval)
elif op is "_prepend":
sval = a + (self.getVar(append, False) or "")
self.setVar(append, sval)
sval = self.getVar(append, False) or ""
sval += a
self.setVar(append, sval)
# We save overrides that may be applied at some later stage
if keep:
self.setVarFlag(append, op, keep)
else:
self.delVarFlag(append, op)
if "_prepend" in self._special_values:
prepends = self._special_values["_prepend"] or []
for prepend in prepends:
for (a, o) in self.getVarFlag(prepend, "_prepend") or []:
# maybe the OVERRIDE was not yet added so keep the prepend
if (o and o in overrides) or not o:
self.delVarFlag(prepend, "_prepend")
if o and not o in overrides:
continue
sval = a + (self.getVar(prepend, False) or "")
self.setVar(prepend, sval)
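A hedged sketch of the append semantics finalize() applies above; whether a conditional append fires depends on OVERRIDES membership, and appends apply in setVar order:

d = DataSmart()
d.setVar('OVERRIDES', 'arm:local')
d.setVar('FOO', 'base')
d.setVar('FOO_append', ' always')      # unconditional append
d.setVar('FOO_append_arm', ' on-arm')  # applies only while 'arm' is in OVERRIDES
d.finalize()
print(d.getVar('FOO', True))           # -> base always on-arm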
def initVar(self, var):
self.expand_cache = {}
@@ -304,17 +282,12 @@ class DataSmart(MutableMapping):
self._makeShadowCopy(var)
self.dict[var][flag] = flagvalue
def getVarFlag(self, var, flag, expand=False):
def getVarFlag(self, var, flag):
local_var = self._findVar(var)
value = None
if local_var:
if flag in local_var:
value = copy.copy(local_var[flag])
elif flag == "content" and "defaultval" in local_var:
value = copy.copy(local_var["defaultval"])
if expand and value:
value = self.expand(value, None)
return value
return copy.copy(local_var[flag])
return None
def delVarFlag(self, var, flag):
local_var = self._findVar(var)
@@ -376,53 +349,23 @@ class DataSmart(MutableMapping):
return data
def expandVarref(self, variable, parents=False):
"""Find all references to variable in the data and expand it
in place, optionally descending to parent datastores."""
if parents:
keys = iter(self)
else:
keys = self.localkeys()
ref = '${%s}' % variable
value = self.getVar(variable, False)
for key in keys:
referrervalue = self.getVar(key, False)
if referrervalue and ref in referrervalue:
self.setVar(key, referrervalue.replace(ref, value))
def localkeys(self):
for key in self.dict:
if key != '_data':
yield key
def __iter__(self):
seen = set()
def _keys(d):
# Dictionary Methods
def keys(self):
def _keys(d, mykey):
if "_data" in d:
for key in _keys(d["_data"]):
yield key
_keys(d["_data"], mykey)
for key in d:
for key in d.keys():
if key != "_data":
if not key in seen:
seen.add(key)
yield key
return _keys(self.dict)
def __len__(self):
return len(frozenset(self))
mykey[key] = None
keytab = {}
_keys(self.dict, keytab)
return keytab.keys()
def __getitem__(self, item):
value = self.getVar(item, False)
if value is None:
raise KeyError(item)
else:
return value
#print "Warning deprecated"
return self.getVar(item, False)
def __setitem__(self, var, value):
self.setVar(var, value)
def __delitem__(self, var):
self.delVar(var)
def __setitem__(self, var, data):
#print "Warning deprecated"
self.setVar(var, data)


@@ -24,20 +24,16 @@ BitBake build tools.
import os, sys
import warnings
try:
import cPickle as pickle
except ImportError:
import pickle
import logging
import atexit
import bb.utils
import pickle
# This is the pid for which we should generate the event. This is set when
# the runqueue forks off.
worker_pid = 0
worker_pipe = None
useStdout = True
class Event(object):
class Event:
"""Base class for events"""
def __init__(self):
@@ -59,7 +55,8 @@ bb.utils._context["NotHandled"] = NotHandled
bb.utils._context["Handled"] = Handled
def fire_class_handlers(event, d):
if isinstance(event, logging.LogRecord):
import bb.msg
if isinstance(event, bb.msg.MsgBase):
return
for handler in _handlers:
@@ -76,28 +73,7 @@ def fire_class_handlers(event, d):
h(event)
del event.data
ui_queue = []
@atexit.register
def print_ui_queue():
"""If we're exiting before a UI has been spawned, display any queued
LogRecords to the console."""
logger = logging.getLogger("BitBake")
if not _ui_handlers:
from bb.msg import BBLogFormatter
console = logging.StreamHandler(sys.stdout)
console.setFormatter(BBLogFormatter("%(levelname)s: %(message)s"))
logger.handlers = [console]
while ui_queue:
event = ui_queue.pop()
if isinstance(event, logging.LogRecord):
logger.handle(event)
def fire_ui_handlers(event, d):
if not _ui_handlers:
# No UI handlers registered yet, queue up the messages
ui_queue.append(event)
return
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
@@ -128,11 +104,13 @@ def fire(event, d):
def worker_fire(event, d):
data = "<event>" + pickle.dumps(event) + "</event>"
worker_pipe.write(data)
worker_pipe.flush()
def fire_from_worker(event, d):
if not event.startswith("<event>") or not event.endswith("</event>"):
print("Error, not an event %s" % event)
return
#print "Got event %s" % event
event = pickle.loads(event[7:-8])
fire_ui_handlers(event, d)
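A round-trip sketch of the "<event>...</event>" framing used by worker_fire() and fire_from_worker() above (py2-era str pickles; the slice bounds are the tag lengths):

payload = "<event>" + pickle.dumps(Event()) + "</event>"
assert payload.startswith("<event>") and payload.endswith("</event>")
event = pickle.loads(payload[7:-8])  # 7 == len("<event>"), 8 == len("</event>")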
@@ -145,7 +123,7 @@ def register(name, handler):
if handler is not None:
# handle string containing python code
if isinstance(handler, basestring):
if type(handler).__name__ == "str":
tmp = "def tmpHandler(e):\n%s" % handler
comp = bb.utils.better_compile(tmp, "tmpHandler(e)", "bb.event._registerCode")
_handlers[name] = comp
@@ -161,6 +139,7 @@ def remove(name, handler):
def register_UIHhandler(handler):
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
bb.event.useStdout = False
return _ui_handler_seq
def unregister_UIHhandler(handlerNum):
@@ -295,14 +274,10 @@ class MultipleProviders(Event):
"""
return self._candidates
class ParseStarted(Event):
"""Recipe parsing for the runqueue has begun"""
def __init__(self, total):
Event.__init__(self)
self.total = total
class ParseCompleted(Event):
"""Recipe parsing for the runqueue has completed"""
class ParseProgress(Event):
"""
Parsing Progress Event
"""
def __init__(self, cached, parsed, skipped, masked, virtuals, errors, total):
Event.__init__(self)
@@ -315,32 +290,6 @@ class ParseCompleted(Event):
self.sofar = cached + parsed
self.total = total
class ParseProgress(Event):
"""Recipe parsing progress"""
def __init__(self, current):
self.current = current
class CacheLoadStarted(Event):
"""Loading of the dependency cache has begun"""
def __init__(self, total):
Event.__init__(self)
self.total = total
class CacheLoadProgress(Event):
"""Cache loading progress"""
def __init__(self, current):
Event.__init__(self)
self.current = current
class CacheLoadCompleted(Event):
"""Cache loading is complete"""
def __init__(self, total, num_entries):
Event.__init__(self)
self.total = total
self.num_entries = num_entries
class DepTreeGenerated(Event):
"""
Event when a dependency tree has been generated
@@ -349,55 +298,3 @@ class DepTreeGenerated(Event):
def __init__(self, depgraph):
Event.__init__(self)
self._depgraph = depgraph
class TargetsTreeGenerated(Event):
"""
Event when a set of buildable targets has been generated
"""
def __init__(self, model):
Event.__init__(self)
self._model = model
class ConfigFilesFound(Event):
"""
Event when a list of appropriate config files has been generated
"""
def __init__(self, variable, values):
Event.__init__(self)
self._variable = variable
self._values = values
class MsgBase(Event):
"""Base class for messages"""
def __init__(self, msg):
self._message = msg
Event.__init__(self)
class MsgDebug(MsgBase):
"""Debug Message"""
class MsgNote(MsgBase):
"""Note Message"""
class MsgWarn(MsgBase):
"""Warning Message"""
class MsgError(MsgBase):
"""Error Message"""
class MsgFatal(MsgBase):
"""Fatal Message"""
class MsgPlain(MsgBase):
"""General output"""
class LogHandler(logging.Handler):
"""Dispatch logging messages as bitbake events"""
def emit(self, record):
fire(record, None)
def filter(self, record):
record.taskpid = worker_pid
return True


@@ -27,15 +27,9 @@ BitBake build tools.
from __future__ import absolute_import
from __future__ import print_function
import os, re
import logging
import bb
from bb import data
from bb import persist_data
from bb import utils
__version__ = "1"
logger = logging.getLogger("BitBake.Fetch")
class MalformedUrl(Exception):
"""Exception raised when encountering an invalid url"""
@@ -123,8 +117,9 @@ def encodeurl(decoded):
return url
def uri_replace(uri, uri_find, uri_replace, d):
# bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: operating on %s" % uri)
if not uri or not uri_find or not uri_replace:
logger.debug(1, "uri_replace: passed an undefined value, not replacing")
bb.msg.debug(1, bb.msg.domain.Fetcher, "uri_replace: passed an undefined value, not replacing")
uri_decoded = list(decodeurl(uri))
uri_find_decoded = list(decodeurl(uri_find))
uri_replace_decoded = list(decodeurl(uri_replace))
@@ -139,32 +134,38 @@ def uri_replace(uri, uri_find, uri_replace, d):
if d:
localfn = bb.fetch.localpath(uri, d)
if localfn:
result_decoded[loc] = os.path.join(os.path.dirname(result_decoded[loc]), os.path.basename(bb.fetch.localpath(uri, d)))
result_decoded[loc] = os.path.dirname(result_decoded[loc]) + "/" + os.path.basename(bb.fetch.localpath(uri, d))
# bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: matching %s against %s and replacing with %s" % (i, uri_decoded[loc], uri_replace_decoded[loc]))
else:
# bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: no match")
return uri
# else:
# for j in i:
# FIXME: apply replacements against options
return encodeurl(result_decoded)
methods = []
urldata_cache = {}
saved_headrevs = {}
persistent_database_connection = {}
def fetcher_init(d):
"""
Called to initialize the fetchers once the configuration data is known.
Calls before this must not hit the cache.
"""
pd = persist_data.persist(d)
pd = persist_data.PersistData(d, persistent_database_connection)
# When to drop SCM head revisions controlled by user policy
srcrev_policy = bb.data.getVar('BB_SRCREV_POLICY', d, 1) or "clear"
if srcrev_policy == "cache":
logger.debug(1, "Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Keeping SRCREV cache due to cache policy of: %s" % srcrev_policy)
elif srcrev_policy == "clear":
logger.debug(1, "Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Clearing SRCREV cache due to cache policy of: %s" % srcrev_policy)
try:
bb.fetch.saved_headrevs = pd['BB_URI_HEADREVS'].items()
bb.fetch.saved_headrevs = pd.getKeyValues("BB_URI_HEADREVS")
except:
pass
del pd['BB_URI_HEADREVS']
pd.delDomain("BB_URI_HEADREVS")
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
@@ -172,24 +173,28 @@ def fetcher_init(d):
if hasattr(m, "init"):
m.init(d)
def fetcher_compare_revisions(d):
# Make sure our domains exist
pd.addDomain("BB_URI_HEADREVS")
pd.addDomain("BB_URI_LOCALCOUNT")
def fetcher_compare_revisons(d):
"""
Compare the revisions in the persistent cache with current values and
return true/false on whether they've changed.
"""
pd = persist_data.persist(d)
data = pd['BB_URI_HEADREVS'].items()
pd = persist_data.PersistData(d, persistent_database_connection)
data = pd.getKeyValues("BB_URI_HEADREVS")
data2 = bb.fetch.saved_headrevs
changed = False
for key in data:
if key not in data2 or data2[key] != data[key]:
logger.debug(1, "%s changed", key)
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s changed" % key)
changed = True
return True
else:
logger.debug(2, "%s did not change", key)
bb.msg.debug(2, bb.msg.domain.Fetcher, "%s did not change" % key)
return False
# Function call order is usually:
@@ -220,41 +225,11 @@ def init(urls, d, setup = True):
def mirror_from_string(data):
return [ i.split() for i in (data or "").replace('\\n','\n').split('\n') if i ]
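mirror_from_string() in action (sketch): MIRRORS-style text becomes a list of (regex, substitution) pairs, with literal "\n" sequences treated as line breaks:

print(mirror_from_string("http://.*/.* file:///mnt/sources/ \\n"))
# -> [['http://.*/.*', 'file:///mnt/sources/']]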
def verify_checksum(u, ud, d):
"""
verify the MD5 and SHA256 checksum for downloaded src
return value:
- True: checksum matched
- False: checksum unmatched
if the checksum is missing from the recipe, "BB_STRICT_CHECKSUM" decides the outcome:
with BB_STRICT_CHECKSUM = "1" the download is treated as unmatched, otherwise it is
treated as matched
"""
if not ud.type in ["http", "https", "ftp", "ftps"]:
return
md5data = bb.utils.md5_file(ud.localpath)
sha256data = bb.utils.sha256_file(ud.localpath)
if (ud.md5_expected == None or ud.sha256_expected == None):
logger.warn('Missing SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"\nSRC_URI[%s] = "%s"',
ud.localpath, ud.md5_name, md5data,
ud.sha256_name, sha256data)
if bb.data.getVar("BB_STRICT_CHECKSUM", d, True) == "1":
raise FetchError("No checksum specified for %s." % u)
return
if (ud.md5_expected != md5data or ud.sha256_expected != sha256data):
logger.error('The checksums for "%s" did not match.\n'
' MD5: expected "%s", got "%s"\n'
' SHA256: expected "%s", got "%s"\n',
ud.localpath, ud.md5_expected, md5data,
ud.sha256_expected, sha256data)
raise FetchError("%s checksum mismatch." % u)
def removefile(f):
try:
os.remove(f)
except:
pass
def go(d, urls = None):
"""
@@ -290,7 +265,7 @@ def go(d, urls = None):
localpath = ud.localpath
except FetchError:
# Remove any incomplete file
bb.utils.remove(ud.localpath)
removefile(ud.localpath)
# Finally, try fetching uri, u, from MIRRORS
mirrors = mirror_from_string(bb.data.getVar('MIRRORS', d, True))
localpath = try_mirrors (d, u, mirrors)
@@ -298,7 +273,6 @@ def go(d, urls = None):
raise FetchError("Unable to fetch URL %s from any source." % u)
ud.localpath = localpath
if os.path.exists(ud.md5):
# Touch the md5 file to show active use of the download
try:
@@ -307,26 +281,21 @@ def go(d, urls = None):
# Errors aren't fatal here
pass
else:
# Only check the checksums if we've not seen this item before
verify_checksum(u, ud, d)
Fetch.write_md5sum(u, ud, d)
bb.utils.unlockfile(lf)
def checkstatus(d, urls = None):
def checkstatus(d):
"""
Check all urls exist upstream
init must have previously been called
"""
urldata = init([], d, True)
if not urls:
urls = urldata
for u in urls:
for u in urldata:
ud = urldata[u]
m = ud.method
logger.debug(1, "Testing URL %s", u)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Testing URL %s" % u)
# First try checking uri, u, from PREMIRRORS
mirrors = mirror_from_string(bb.data.getVar('PREMIRRORS', d, True))
ret = try_mirrors(d, u, mirrors, True)
@@ -357,9 +326,6 @@ def localpaths(d):
srcrev_internal_call = False
def get_autorev(d):
return get_srcrev(d)
def get_srcrev(d):
"""
Return the version string for the current package
@@ -383,17 +349,17 @@ def get_srcrev(d):
scms = []
# Only call setup_localpath on URIs which supports_srcrev()
# Only call setup_localpath on URIs which suppports_srcrev()
urldata = init(bb.data.getVar('SRC_URI', d, 1).split(), d, False)
for u in urldata:
ud = urldata[u]
if ud.method.supports_srcrev():
if ud.method.suppports_srcrev():
if not ud.setup:
ud.setup_localpath(d)
scms.append(u)
if len(scms) == 0:
logger.error("SRCREV was used yet no valid SCM was found in SRC_URI")
bb.msg.error(bb.msg.domain.Fetcher, "SRCREV was used yet no valid SCM was found in SRC_URI")
raise ParameterError
if bb.data.getVar('BB_SRCREV_POLICY', d, True) != "cache":
@@ -407,7 +373,7 @@ def get_srcrev(d):
#
format = bb.data.getVar('SRCREV_FORMAT', d, 1)
if not format:
logger.error("The SRCREV_FORMAT variable must be set when multiple SCMs are used.")
bb.msg.error(bb.msg.domain.Fetcher, "The SRCREV_FORMAT variable must be set when multiple SCMs are used.")
raise ParameterError
for scm in scms:
@@ -442,14 +408,14 @@ def runfetchcmd(cmd, d, quiet = False):
exportvars = ['PATH', 'GIT_PROXY_COMMAND', 'GIT_PROXY_HOST',
'GIT_PROXY_PORT', 'GIT_CONFIG', 'http_proxy', 'ftp_proxy',
'https_proxy', 'no_proxy', 'ALL_PROXY', 'all_proxy',
'KRB5CCNAME', 'SSH_AUTH_SOCK', 'SSH_AGENT_PID', 'HOME']
'SSH_AUTH_SOCK', 'SSH_AGENT_PID', 'HOME']
for var in exportvars:
val = data.getVar(var, d, True)
if val:
cmd = 'export ' + var + '=\"%s\"; %s' % (val, cmd)
logger.debug(1, "Running %s", cmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % cmd)
# redirect stderr to stdout
stdout_handle = os.popen(cmd + " 2>&1", "r")
@@ -485,7 +451,7 @@ def try_mirrors(d, uri, mirrors, check = False, force = False):
"""
fpath = os.path.join(data.getVar("DL_DIR", d, 1), os.path.basename(uri))
if not check and os.access(fpath, os.R_OK) and not force:
logger.debug(1, "%s already exists, skipping checkout.", fpath)
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists, skipping checkout." % fpath)
return fpath
ld = d.createCopy()
@@ -495,26 +461,24 @@ def try_mirrors(d, uri, mirrors, check = False, force = False):
try:
ud = FetchData(newuri, ld)
except bb.fetch.NoMethodError:
logger.debug(1, "No method for %s", uri)
bb.msg.debug(1, bb.msg.domain.Fetcher, "No method for %s" % uri)
continue
ud.setup_localpath(ld)
try:
if check:
found = ud.method.checkstatus(newuri, ud, ld)
if found:
return found
ud.method.checkstatus(newuri, ud, ld)
else:
ud.method.go(newuri, ud, ld)
return ud.localpath
return ud.localpath
except (bb.fetch.MissingParameterError,
bb.fetch.FetchError,
bb.fetch.MD5SumError):
import sys
(type, value, traceback) = sys.exc_info()
logger.debug(2, "Mirror fetch failure: %s", value)
bb.utils.remove(ud.localpath)
bb.msg.debug(2, bb.msg.domain.Fetcher, "Mirror fetch failure: %s" % value)
removefile(ud.localpath)
continue
return None
@@ -533,16 +497,6 @@ class FetchData(object):
if not self.pswd and "pswd" in self.parm:
self.pswd = self.parm["pswd"]
self.setup = False
if "name" in self.parm:
self.md5_name = "%s.md5sum" % self.parm["name"]
self.sha256_name = "%s.sha256sum" % self.parm["name"]
else:
self.md5_name = "md5sum"
self.sha256_name = "sha256sum"
self.md5_expected = bb.data.getVarFlag("SRC_URI", self.md5_name, d)
self.sha256_expected = bb.data.getVarFlag("SRC_URI", self.sha256_name, d)
for m in methods:
if m.supports(url, self, d):
self.method = m
@@ -605,13 +559,6 @@ class Fetch(object):
and duplicate code execution)
"""
return url
def _strip_leading_slashes(self, relpath):
"""
Remove leading slash as os.path.join can't cope
"""
while os.path.isabs(relpath):
relpath = relpath[1:]
return relpath
def setUrls(self, urls):
self.__urls = urls
@@ -627,7 +574,7 @@ class Fetch(object):
"""
return False
def supports_srcrev(self):
def suppports_srcrev(self):
"""
The fetcher supports auto source revisions (SRCREV)
"""
@@ -656,7 +603,7 @@ class Fetch(object):
Check the status of a URL
Assumes localpath was called first
"""
logger.info("URL %s could not be checked for status since no method exists.", url)
bb.msg.note(1, bb.msg.domain.Fetcher, "URL %s could not be checked for status since no method exists." % url)
return True
def getSRCDate(urldata, d):
@@ -697,14 +644,14 @@ class Fetch(object):
if not rev:
rev = data.getVar("SRCREV_pn-%s_%s" % (pn, ud.parm['name']), d, 1)
if not rev:
rev = data.getVar("SRCREV_%s" % (ud.parm['name']), d, 1)
rev = data.getVar("SRCREV_%s" % (ud.parm['name']), d, 1)
if not rev:
rev = data.getVar("SRCREV", d, 1)
if rev == "INVALID":
raise InvalidSRCREV("Please set SRCREV to a valid value")
if not rev:
return False
if rev == "SRCREVINACTION":
if rev is "SRCREVINACTION":
return True
return rev
@@ -731,7 +678,9 @@ class Fetch(object):
"""
Verify the md5sum we wanted with the one we got
"""
wanted_sum = ud.parm.get('md5sum')
wanted_sum = None
if 'md5sum' in ud.parm:
wanted_sum = ud.parm['md5sum']
if not wanted_sum:
return True
@@ -756,14 +705,14 @@ class Fetch(object):
if not hasattr(self, "_latest_revision"):
raise ParameterError
pd = persist_data.persist(d)
revs = pd['BB_URI_HEADREVS']
pd = persist_data.PersistData(d, persistent_database_connection)
key = self.generate_revision_key(url, ud, d)
rev = revs[key]
rev = pd.getValue("BB_URI_HEADREVS", key)
if rev != None:
return str(rev)
revs[key] = rev = self._latest_revision(url, ud, d)
rev = self._latest_revision(url, ud, d)
pd.setValue("BB_URI_HEADREVS", key, rev)
return rev
def sortable_revision(self, url, ud, d):
@@ -773,18 +722,17 @@ class Fetch(object):
if hasattr(self, "_sortable_revision"):
return self._sortable_revision(url, ud, d)
pd = persist_data.persist(d)
localcounts = pd['BB_URI_LOCALCOUNT']
pd = persist_data.PersistData(d, persistent_database_connection)
key = self.generate_revision_key(url, ud, d)
latest_rev = self._build_revision(url, ud, d)
last_rev = localcounts[key + '_rev']
last_rev = pd.getValue("BB_URI_LOCALCOUNT", key + "_rev")
uselocalcount = bb.data.getVar("BB_LOCALCOUNT_OVERRIDE", d, True) or False
count = None
if uselocalcount:
count = Fetch.localcount_internal_helper(ud, d)
if count is None:
count = localcounts[key + '_count']
count = pd.getValue("BB_URI_LOCALCOUNT", key + "_count")
if last_rev == latest_rev:
return str(count + "+" + latest_rev)
@@ -800,8 +748,8 @@ class Fetch(object):
else:
count = str(int(count) + 1)
localcounts[key + '_rev'] = latest_rev
localcounts[key + '_count'] = count
pd.setValue("BB_URI_LOCALCOUNT", key + "_rev", latest_rev)
pd.setValue("BB_URI_LOCALCOUNT", key + "_count", count)
return str(count + "+" + latest_rev)
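A sketch of the persistent-store access pattern on the new side of the hunks above, where persist_data.persist(d) exposes dict-like domains (key and latest_rev as in the surrounding code):

pd = persist_data.persist(d)
revs = pd['BB_URI_HEADREVS']
revs[key] = latest_rev  # survives across bitbake invocations
assert revs[key] == latest_rev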


@@ -25,10 +25,11 @@ BitBake 'Fetch' implementation for bzr.
import os
import sys
import logging
import bb
from bb import data
from bb.fetch import Fetch, FetchError, runfetchcmd, logger
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import runfetchcmd
class Bzr(Fetch):
def supports(self, url, ud, d):
@@ -37,7 +38,10 @@ class Bzr(Fetch):
def localpath (self, url, ud, d):
# Create paths to bzr checkouts
relpath = self._strip_leading_slashes(ud.path)
relpath = ud.path
if relpath.startswith('/'):
# Remove leading slash as os.path.join can't cope
relpath = relpath[1:]
ud.pkgdir = os.path.join(data.expand('${BZRDIR}', d), ud.host, relpath)
revision = Fetch.srcrev_internal_helper(ud, d)
@@ -61,7 +65,9 @@ class Bzr(Fetch):
basecmd = data.expand('${FETCHCMD_bzr}', d)
proto = ud.parm.get('proto', 'http')
proto = "http"
if "proto" in ud.parm:
proto = ud.parm["proto"]
bzrroot = ud.host + ud.path
@@ -87,29 +93,22 @@ class Bzr(Fetch):
if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
bzrcmd = self._buildbzrcommand(ud, d, "update")
logger.debug(1, "BZR Update %s", loc)
bb.msg.debug(1, bb.msg.domain.Fetcher, "BZR Update %s" % loc)
os.chdir(os.path.join (ud.pkgdir, os.path.basename(ud.path)))
runfetchcmd(bzrcmd, d)
else:
bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
os.system("rm -rf %s" % os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)))
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
logger.debug(1, "BZR Checkout %s", loc)
bb.utils.mkdirhier(ud.pkgdir)
bb.msg.debug(1, bb.msg.domain.Fetcher, "BZR Checkout %s" % loc)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", bzrcmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % bzrcmd)
runfetchcmd(bzrcmd, d)
os.chdir(ud.pkgdir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.bzr' --exclude '.bzrtags'"
# tar them up to a defined filename
try:
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(ud.pkgdir)), d)
runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.basename(ud.pkgdir)), d)
except:
t, v, tb = sys.exc_info()
try:
@@ -118,7 +117,7 @@ class Bzr(Fetch):
pass
raise t, v, tb
def supports_srcrev(self):
return True
def _revision_key(self, url, ud, d):
@@ -131,7 +130,7 @@ class Bzr(Fetch):
"""
Return the latest upstream revision number
"""
logger.debug(2, "BZR fetcher hitting network for %s", url)
bb.msg.debug(2, bb.msg.domain.Fetcher, "BZR fetcher hitting network for %s" % url)
output = runfetchcmd(self._buildbzrcommand(ud, d, "revno"), d, True)
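
A pattern that repeats throughout this comparison: one side reads URL parameters with a default via ud.parm.get('proto', 'http'), the other spells the same lookup out with a membership test. The two are equivalent; a minimal demonstration with a plain dict standing in for ud.parm:

parm = {"module": "trunk"}

proto = parm.get("proto", "http")   # compact form

proto_long = "http"                 # expanded form, as in the hunks above
if "proto" in parm:
    proto_long = parm["proto"]

assert proto == proto_long == "http"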


@@ -27,10 +27,11 @@ BitBake build tools.
#
import os
import logging
import bb
from bb import data
from bb.fetch import Fetch, FetchError, MissingParameterError, logger
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
class Cvs(Fetch):
"""
@@ -47,7 +48,9 @@ class Cvs(Fetch):
raise MissingParameterError("cvs method needs a 'module' parameter")
ud.module = ud.parm["module"]
ud.tag = ud.parm.get('tag', "")
ud.tag = ""
if 'tag' in ud.parm:
ud.tag = ud.parm['tag']
# Override the default date in certain cases
if 'date' in ud.parm:
@@ -74,9 +77,17 @@ class Cvs(Fetch):
def go(self, loc, ud, d):
method = ud.parm.get('method', 'pserver')
localdir = ud.parm.get('localdir', ud.module)
cvs_port = ud.parm.get('port', '')
method = "pserver"
if "method" in ud.parm:
method = ud.parm["method"]
localdir = ud.module
if "localdir" in ud.parm:
localdir = ud.parm["localdir"]
cvs_port = ""
if "port" in ud.parm:
cvs_port = ud.parm["port"]
cvs_rsh = None
if method == "ext":
@@ -125,21 +136,21 @@ class Cvs(Fetch):
cvsupdatecmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvsupdatecmd)
# create module directory
logger.debug(2, "Fetch: checking for module directory")
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory")
pkg = data.expand('${PN}', d)
pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)
moddir = os.path.join(pkgdir, localdir)
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
logger.info("Update " + loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
# update sources there
os.chdir(moddir)
myret = os.system(cvsupdatecmd)
else:
logger.info("Fetch " + loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(pkgdir)
bb.mkdirhier(pkgdir)
os.chdir(pkgdir)
logger.debug(1, "Running %s", cvscmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % cvscmd)
myret = os.system(cvscmd)
if myret != 0 or not os.access(moddir, os.R_OK):
@@ -149,20 +160,14 @@ class Cvs(Fetch):
pass
raise FetchError(ud.module)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude 'CVS'"
# tar them up to a defined filename
if 'fullpath' in ud.parm:
os.chdir(pkgdir)
myret = os.system("tar %s -czf %s %s" % (tar_flags, ud.localpath, localdir))
myret = os.system("tar -czf %s %s" % (ud.localpath, localdir))
else:
os.chdir(moddir)
os.chdir('..')
myret = os.system("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(moddir)))
myret = os.system("tar -czf %s %s" % (ud.localpath, os.path.basename(moddir)))
if myret != 0:
try:
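
The 'fullpath' branch above changes what the cvs tarball is rooted at: with fullpath the archive is created from the package directory and contains the whole localdir path, otherwise only the module directory itself. Condensed (cvs_tar_spec and the paths are illustrative):

import os

def cvs_tar_spec(parm, pkgdir, moddir, localdir):
    # Returns (directory to tar from, entry to put in the tarball).
    if "fullpath" in parm:
        return pkgdir, localdir
    return os.path.dirname(moddir), os.path.basename(moddir)

assert cvs_tar_spec({}, "/cvs/pkg", "/cvs/pkg/src/mod", "src/mod") == ("/cvs/pkg/src", "mod")
assert cvs_tar_spec({"fullpath": "1"}, "/cvs/pkg", "/cvs/pkg/src/mod", "src/mod") == ("/cvs/pkg", "src/mod")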


@@ -22,11 +22,9 @@ BitBake 'Fetch' git implementation
import os
import bb
import bb.persist_data
from bb import data
from bb.fetch import Fetch
from bb.fetch import runfetchcmd
from bb.fetch import logger
class Git(Fetch):
"""Class to fetch a module or modules from git repositories"""
@@ -118,7 +116,6 @@ class Git(Fetch):
repofile = os.path.join(data.getVar("DL_DIR", d, 1), ud.mirrortarball)
coname = '%s' % (ud.tag)
codir = os.path.join(ud.clonedir, coname)
@@ -131,7 +128,7 @@ class Git(Fetch):
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.clonedir) and os.path.exists(repofile):
bb.utils.mkdirhier(ud.clonedir)
bb.mkdirhier(ud.clonedir)
os.chdir(ud.clonedir)
runfetchcmd("tar -xzf %s" % (repofile), d)
@@ -156,7 +153,7 @@ class Git(Fetch):
os.chdir(ud.clonedir)
mirror_tarballs = data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True)
if mirror_tarballs != "0" or 'fullclone' in ud.parm:
logger.info("Creating tarball of git repository")
bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git repository")
runfetchcmd("tar -czf %s %s" % (repofile, os.path.join(".", ".git", "*") ), d)
if 'fullclone' in ud.parm:
@@ -182,25 +179,19 @@ class Git(Fetch):
readpathspec = ""
coprefix = os.path.join(codir, "git", "")
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
runfetchcmd("%s clone -n %s %s" % (ud.basecmd, ud.clonedir, coprefix), d)
os.chdir(coprefix)
runfetchcmd("%s checkout -q -f %s%s" % (ud.basecmd, ud.tag, readpathspec), d)
else:
bb.utils.mkdirhier(codir)
os.chdir(ud.clonedir)
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.tag, readpathspec), d)
runfetchcmd("%s checkout-index -q -f --prefix=%s -a" % (ud.basecmd, coprefix), d)
bb.mkdirhier(codir)
os.chdir(ud.clonedir)
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.tag, readpathspec), d)
runfetchcmd("%s checkout-index -q -f --prefix=%s -a" % (ud.basecmd, coprefix), d)
os.chdir(codir)
logger.info("Creating tarball of git checkout")
bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git checkout")
runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.join(".", "*") ), d)
os.chdir(ud.clonedir)
bb.utils.prunedir(codir)
def supports_srcrev(self):
return True
def _contains_ref(self, tag, d):
@@ -208,19 +199,11 @@ class Git(Fetch):
output = runfetchcmd("%s log --pretty=oneline -n 1 %s -- 2> /dev/null | wc -l" % (basecmd, tag), d, quiet=True)
return output.split()[0] != "0"
def _revision_key(self, url, ud, d, branch=False):
def _revision_key(self, url, ud, d):
"""
Return a unique key for the url
"""
key = 'git:' + ud.host + ud.path.replace('/', '.')
if branch:
return key + ud.branch
else:
return key
def generate_revision_key(self, url, ud, d, branch=False):
key = self._revision_key(url, ud, d, branch)
return "%s-%s" % (key, bb.data.getVar("PN", d, True) or "")
return "git:" + ud.host + ud.path.replace('/', '.') + ud.branch
def _latest_revision(self, url, ud, d):
"""
@@ -238,74 +221,6 @@ class Git(Fetch):
raise bb.fetch.FetchError("Fetch command %s gave empty output\n" % (cmd))
return output.split()[0]
def latest_revision(self, url, ud, d):
"""
Look in the cache for the latest revision, if not present ask the SCM.
"""
persisted = bb.persist_data.persist(d)
revs = persisted['BB_URI_HEADREVS']
key = self.generate_revision_key(url, ud, d, branch=True)
rev = revs[key]
if rev is None:
# Compatibility with old key format, no branch included
oldkey = self.generate_revision_key(url, ud, d, branch=False)
rev = revs[oldkey]
if rev is not None:
del revs[oldkey]
else:
rev = self._latest_revision(url, ud, d)
revs[key] = rev
return str(rev)
def sortable_revision(self, url, ud, d):
"""
"""
pd = bb.persist_data.persist(d)
localcounts = pd['BB_URI_LOCALCOUNT']
key = self.generate_revision_key(url, ud, d, branch=True)
oldkey = self.generate_revision_key(url, ud, d, branch=False)
latest_rev = self._build_revision(url, ud, d)
last_rev = localcounts[key + '_rev']
if last_rev is None:
last_rev = localcounts[oldkey + '_rev']
if last_rev is not None:
del localcounts[oldkey + '_rev']
localcounts[key + '_rev'] = last_rev
uselocalcount = bb.data.getVar("BB_LOCALCOUNT_OVERRIDE", d, True) or False
count = None
if uselocalcount:
count = Fetch.localcount_internal_helper(ud, d)
if count is None:
count = localcounts[key + '_count']
if count is None:
count = localcounts[oldkey + '_count']
if count is not None:
del localcounts[oldkey + '_count']
localcounts[key + '_count'] = count
if last_rev == latest_rev:
return str(count + "+" + latest_rev)
buildindex_provided = hasattr(self, "_sortable_buildindex")
if buildindex_provided:
count = self._sortable_buildindex(url, ud, d, latest_rev)
if count is None:
count = "0"
elif uselocalcount or buildindex_provided:
count = str(count)
else:
count = str(int(count) + 1)
localcounts[key + '_rev'] = latest_rev
localcounts[key + '_count'] = count
return str(count + "+" + latest_rev)
def _build_revision(self, url, ud, d):
return ud.tag
@@ -323,7 +238,7 @@ class Git(Fetch):
print("no repo")
self.go(None, ud, d)
if not os.path.exists(ud.clonedir):
logger.error("GIT repository for %s doesn't exist in %s, cannot get sortable buildnumber, using old value", url, ud.clonedir)
bb.msg.error(bb.msg.domain.Fetcher, "GIT repository for %s doesn't exist in %s, cannot get sortable buildnumber, using old value" % (url, ud.clonedir))
return None
@@ -335,5 +250,5 @@ class Git(Fetch):
os.chdir(cwd)
buildindex = "%s" % output.split()[0]
logger.debug(1, "GIT repository for %s in %s is returning %s revisions in rev-list before %s", url, ud.clonedir, buildindex, rev)
bb.msg.debug(1, bb.msg.domain.Fetcher, "GIT repository for %s in %s is returning %s revisions in rev-list before %s" % (url, ud.clonedir, buildindex, rev))
return buildindex
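
The latest_revision/sortable_revision overrides shown above qualify the cache key with the branch and silently migrate entries stored under the older, branch-less key. The pattern, condensed with a plain dict standing in for the persistent store (cached_revision is illustrative):

def cached_revision(cache, newkey, oldkey, ask_scm):
    rev = cache.get(newkey)
    if rev is None:
        # Fall back to the legacy key and migrate its value; only ask
        # the SCM when neither key has an entry.
        rev = cache.pop(oldkey, None)
        if rev is None:
            rev = ask_scm()
        cache[newkey] = rev
    return str(rev)

store = {"git:example.net.repo": "abc123"}        # legacy, branch-less key
rev = cached_revision(store, "git:example.net.repomaster",
                      "git:example.net.repo", lambda: "ffffff")
assert rev == "abc123" and "git:example.net.repomaster" in store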


@@ -26,27 +26,21 @@ BitBake 'Fetch' implementation for mercurial DRCS (hg).
import os
import sys
import logging
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd
from bb.fetch import logger
class Hg(Fetch):
"""Class to fetch from mercurial repositories"""
"""Class to fetch a from mercurial repositories"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with mercurial.
"""
return ud.type in ['hg']
def forcefetch(self, url, ud, d):
revTag = ud.parm.get('rev', 'tip')
return revTag == "tip"
def localpath(self, url, ud, d):
if not "module" in ud.parm:
raise MissingParameterError("hg method needs a 'module' parameter")
@@ -54,7 +48,10 @@ class Hg(Fetch):
ud.module = ud.parm["module"]
# Create paths to mercurial checkouts
relpath = self._strip_leading_slashes(ud.path)
relpath = ud.path
if relpath.startswith('/'):
# Remove leading slash as os.path.join can't cope
relpath = relpath[1:]
ud.pkgdir = os.path.join(data.expand('${HGDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
@@ -81,7 +78,9 @@ class Hg(Fetch):
basecmd = data.expand('${FETCHCMD_hg}', d)
proto = ud.parm.get('proto', 'http')
proto = "http"
if "proto" in ud.parm:
proto = ud.parm["proto"]
host = ud.host
if proto == "file":
@@ -117,41 +116,34 @@ class Hg(Fetch):
def go(self, loc, ud, d):
"""Fetch url"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
updatecmd = self._buildhgcommand(ud, d, "pull")
logger.info("Update " + loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % updatecmd)
runfetchcmd(updatecmd, d)
else:
fetchcmd = self._buildhgcommand(ud, d, "fetch")
logger.info("Fetch " + loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", fetchcmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % fetchcmd)
runfetchcmd(fetchcmd, d)
# Even when we clone (fetch), we still need to update as hg's clone
# won't check out the specified revision if it's on a branch
updatecmd = self._buildhgcommand(ud, d, "update")
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % updatecmd)
runfetchcmd(updatecmd, d)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.hg' --exclude '.hgrags'"
os.chdir(ud.pkgdir)
try:
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d)
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
except:
t, v, tb = sys.exc_info()
try:
@@ -160,7 +152,7 @@ class Hg(Fetch):
pass
raise t, v, tb
def supports_srcrev(self):
return True
def _latest_revision(self, url, ud, d):
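
Every mercurial invocation above goes through _buildhgcommand (its full body appears in the removed fetch2 copy further down). A condensed dispatch, including the deliberate quirk that "pull" drops the revision option (build_hg_command is illustrative, not the BitBake method):

def build_hg_command(basecmd, proto, hgroot, module, revision, command):
    options = ["-r %s" % revision] if revision else []
    if command == "info":
        return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, module)
    if command == "fetch":
        return "%s clone %s %s://%s/%s %s" % (
            basecmd, " ".join(options), proto, hgroot, module, module)
    if command == "pull":
        # Pulling only one rev would leave the local repo without it and
        # crash the update that always follows (see the comment above).
        return "%s pull" % basecmd
    if command == "update":
        return "%s update -C %s" % (basecmd, " ".join(options))
    raise ValueError("Invalid hg command %s" % command)

print(build_hg_command("hg", "http", "hg.example.org/repo", "mod", "1234", "fetch"))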


@@ -66,7 +66,7 @@ class Local(Fetch):
Check the status of the url
"""
if urldata.localpath.find("*") != -1:
logger.info("URL %s looks like a glob and was therefore not checked.", url)
bb.msg.note(1, bb.msg.domain.Fetcher, "URL %s looks like a glob and was therefore not checked." % url)
return True
if os.path.exists(urldata.localpath):
return True
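
checkstatus for file:// URLs, as shown above, reports globs as OK without checking them. The same logic, standalone:

import os

def check_local_status(localpath):
    if "*" in localpath:
        return True          # looks like a glob; reported but not checked
    return os.path.exists(localpath)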


@@ -8,10 +8,8 @@ Based on the svn "Fetch" implementation.
import os
import sys
import logging
import bb
from bb import data
from bb import utils
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
@@ -34,7 +32,10 @@ class Osc(Fetch):
ud.module = ud.parm["module"]
# Create paths to osc checkouts
relpath = self._strip_leading_slashes(ud.path)
relpath = ud.path
if relpath.startswith('/'):
# Remove leading slash as os.path.join can't cope
relpath = relpath[1:]
ud.pkgdir = os.path.join(data.expand('${OSCDIR}', d), ud.host)
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
@@ -60,7 +61,9 @@ class Osc(Fetch):
basecmd = data.expand('${FETCHCMD_osc}', d)
proto = ud.parm.get('proto', 'ocs')
proto = "ocs"
if "proto" in ud.parm:
proto = ud.parm["proto"]
options = []
@@ -69,7 +72,10 @@ class Osc(Fetch):
if ud.revision:
options.append("-r %s" % ud.revision)
coroot = self._strip_leading_slashes(ud.path)
coroot = ud.path
if coroot.startswith('/'):
# Remove leading slash as os.path.join can't cope
coroot = coroot[1:]
if command is "fetch":
osccmd = "%s %s co %s/%s %s" % (basecmd, config, coroot, ud.module, " ".join(options))
@@ -85,22 +91,22 @@ class Osc(Fetch):
Fetch url
"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(data.expand('${OSCDIR}', d), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "Update "+ loc)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", oscupdatecmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % oscupdatecmd)
runfetchcmd(oscupdatecmd, d)
else:
oscfetchcmd = self._buildosccommand(ud, d, "fetch")
logger.info("Fetch " + loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", oscfetchcmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % oscfetchcmd)
runfetchcmd(oscfetchcmd, d)
os.chdir(os.path.join(ud.pkgdir + ud.path))
@@ -123,8 +129,9 @@ class Osc(Fetch):
Generate a .oscrc to be used for this run.
"""
config_path = os.path.join(data.expand('${OSCDIR}', d), "oscrc")
bb.utils.remove(config_path)
config_path = "%s/oscrc" % data.expand('${OSCDIR}', d)
if (os.path.exists(config_path)):
os.remove(config_path)
f = open(config_path, 'w')
f.write("[general]\n")


@@ -27,12 +27,10 @@ BitBake build tools.
from future_builtins import zip
import os
import logging
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import logger
class Perforce(Fetch):
def supports(self, url, ud, d):
@@ -88,10 +86,10 @@ class Perforce(Fetch):
depot += "@%s" % (p4date)
p4cmd = data.getVar('FETCHCOMMAND_p4', d, 1)
logger.debug(1, "Running %s%s changes -m 1 %s", p4cmd, p4opt, depot)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
p4file = os.popen("%s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
cset = p4file.readline().strip()
logger.debug(1, "READ %s", cset)
bb.msg.debug(1, bb.msg.domain.Fetcher, "READ %s" % (cset))
if not cset:
return -1
@@ -113,7 +111,8 @@ class Perforce(Fetch):
if which != -1:
base = path[:which]
base = self._strip_leading_slashes(base)
if base[0] == "/":
base = base[1:]
cset = Perforce.getcset(d, path, host, user, pswd, parm)
@@ -133,7 +132,10 @@ class Perforce(Fetch):
else:
path = depot
module = parm.get('module', os.path.basename(path))
if "module" in parm:
module = parm["module"]
else:
module = os.path.basename(path)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "p4:%s" % data.getVar('OVERRIDES', localdata), localdata)
@@ -153,13 +155,13 @@ class Perforce(Fetch):
p4cmd = data.getVar('FETCHCOMMAND', localdata, 1)
# create temp directory
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: creating temporary directory")
bb.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oep4.XXXXXX', localdata), localdata)
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, 1) or "false")
tmpfile = tmppipe.readline().strip()
if not tmpfile:
logger.error("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.")
bb.msg.error(bb.msg.domain.Fetcher, "Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.")
raise FetchError(module)
if "label" in parm:
@@ -169,12 +171,12 @@ class Perforce(Fetch):
depot = "%s@%s" % (depot, cset)
os.chdir(tmpfile)
logger.info("Fetch " + loc)
logger.info("%s%s files %s", p4cmd, p4opt, depot)
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "%s%s files %s" % (p4cmd, p4opt, depot))
p4file = os.popen("%s%s files %s" % (p4cmd, p4opt, depot))
if not p4file:
logger.error("Fetch: unable to get the P4 files from %s", depot)
bb.msg.error(bb.msg.domain.Fetcher, "Fetch: unable to get the P4 files from %s" % (depot))
raise FetchError(module)
count = 0
@@ -192,7 +194,7 @@ class Perforce(Fetch):
count = count + 1
if count == 0:
logger.error("Fetch: No files gathered from the P4 fetch")
bb.msg.error(bb.msg.domain.Fetcher, "Fetch: No files gathered from the P4 fetch")
raise FetchError(module)
myret = os.system("tar -czf %s %s" % (ud.localpath, module))
@@ -203,4 +205,4 @@ class Perforce(Fetch):
pass
raise FetchError(module)
# cleanup
bb.utils.prunedir(tmpfile)
os.system('rm -rf %s' % tmpfile)
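
Perforce.doparse (shown in full in the removed fetch2 copy below) splits key=value fetch parameters off the depot path after the first ';' and restores the '//' depot prefix. Condensed (parse_p4_path is illustrative):

def parse_p4_path(path):
    parm = {}
    if ";" in path:
        for item in path.split(";")[1:]:
            if "=" in item:
                key, value = item.split("=", 1)
                parm[key] = value
        path = "//" + path.split(";")[0]
    return path, parm

assert parse_p4_path("depot/proj/...;revision=42") == ("//depot/proj/...", {"revision": "42"})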


@@ -45,11 +45,24 @@ class Repo(Fetch):
"master".
"""
ud.proto = ud.parm.get('protocol', 'git')
ud.branch = ud.parm.get('branch', 'master')
ud.manifest = ud.parm.get('manifest', 'default.xml')
if not ud.manifest.endswith('.xml'):
ud.manifest += '.xml'
if "protocol" in ud.parm:
ud.proto = ud.parm["protocol"]
else:
ud.proto = "git"
if "branch" in ud.parm:
ud.branch = ud.parm["branch"]
else:
ud.branch = "master"
if "manifest" in ud.parm:
manifest = ud.parm["manifest"]
if manifest.endswith(".xml"):
ud.manifest = manifest
else:
ud.manifest = manifest + ".xml"
else:
ud.manifest = "default.xml"
ud.localfile = data.expand("repo_%s%s_%s_%s.tar.gz" % (ud.host, ud.path.replace("/", "."), ud.manifest, ud.branch), d)
@@ -59,7 +72,7 @@ class Repo(Fetch):
"""Fetch url"""
if os.access(os.path.join(data.getVar("DL_DIR", d, True), ud.localfile), os.R_OK):
logger.debug(1, "%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists (or was stashed). Skipping repo init / sync." % ud.localpath)
return
gitsrcname = "%s%s" % (ud.host, ud.path.replace("/", "."))
@@ -71,7 +84,7 @@ class Repo(Fetch):
else:
username = ""
bb.utils.mkdirhier(os.path.join(codir, "repo"))
bb.mkdirhier(os.path.join(codir, "repo"))
os.chdir(os.path.join(codir, "repo"))
if not os.path.exists(os.path.join(codir, "repo", ".repo")):
runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d)
@@ -79,16 +92,10 @@ class Repo(Fetch):
runfetchcmd("repo sync", d)
os.chdir(codir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.repo' --exclude '.git'"
# Create a cache
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.join(".", "*") ), d)
runfetchcmd("tar --exclude=.repo --exclude=.git -czf %s %s" % (ud.localpath, os.path.join(".", "*") ), d)
def supports_srcrev(self):
return False
def _build_revision(self, url, ud, d):
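
Both sides of the repo hunk normalize the manifest name identically; one just does it in two lines via ud.parm.get. The shared behaviour:

def normalize_manifest(parm):
    manifest = parm.get("manifest", "default.xml")
    if not manifest.endswith(".xml"):
        manifest += ".xml"              # bare names get the suffix appended
    return manifest

assert normalize_manifest({"manifest": "minimal"}) == "minimal.xml"
assert normalize_manifest({}) == "default.xml"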


@@ -26,13 +26,11 @@ This implementation is for svk. It is based on the svn implementation
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import logging
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import logger
class Svk(Fetch):
"""Class to fetch a module or modules from svk repositories"""
@@ -48,14 +46,18 @@ class Svk(Fetch):
else:
ud.module = ud.parm["module"]
ud.revision = ud.parm.get('rev', "")
ud.revision = ""
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
ud.localfile = data.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ud.date), d)
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def forcefetch(self, url, ud, d):
return ud.date == "now"
if (ud.date == "now"):
return True
return False
def go(self, loc, ud, d):
"""Fetch urls"""
@@ -70,19 +72,19 @@ class Svk(Fetch):
# create temp directory
localdata = data.createCopy(d)
data.update_data(localdata)
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: creating temporary directory")
bb.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oesvk.XXXXXX', localdata), localdata)
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, 1) or "false")
tmpfile = tmppipe.readline().strip()
if not tmpfile:
logger.error("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.")
bb.msg.error(bb.msg.domain.Fetcher, "Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.")
raise FetchError(ud.module)
# check out sources there
os.chdir(tmpfile)
logger.info("Fetch " + loc)
logger.debug(1, "Running %s", svkcmd)
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svkcmd)
myret = os.system(svkcmd)
if myret != 0:
try:
@@ -101,4 +103,4 @@ class Svk(Fetch):
pass
raise FetchError(ud.module)
# cleanup
bb.utils.prunedir(tmpfile)
os.system('rm -rf %s' % tmpfile)


@@ -25,14 +25,12 @@ BitBake 'Fetch' implementation for svn.
import os
import sys
import logging
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd
from bb.fetch import logger
class Svn(Fetch):
"""Class to fetch a module or modules from svn repositories"""
@@ -49,7 +47,10 @@ class Svn(Fetch):
ud.module = ud.parm["module"]
# Create paths to svn checkouts
relpath = self._strip_leading_slashes(ud.path)
relpath = ud.path
if relpath.startswith('/'):
# Remove leading slash as os.path.join can't cope
relpath = relpath[1:]
ud.pkgdir = os.path.join(data.expand('${SVNDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
@@ -91,7 +92,9 @@ class Svn(Fetch):
basecmd = data.expand('${FETCHCMD_svn}', d)
proto = ud.parm.get('proto', 'svn')
proto = "svn"
if "proto" in ud.parm:
proto = ud.parm["proto"]
svn_rsh = None
if proto == "svn+ssh" and "rsh" in ud.parm:
@@ -133,34 +136,28 @@ class Svn(Fetch):
def go(self, loc, ud, d):
"""Fetch url"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(ud.moddir, '.svn'), os.R_OK):
svnupdatecmd = self._buildsvncommand(ud, d, "update")
logger.info("Update " + loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", svnupdatecmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svnupdatecmd)
runfetchcmd(svnupdatecmd, d)
else:
svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
logger.info("Fetch " + loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", svnfetchcmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svnfetchcmd)
runfetchcmd(svnfetchcmd, d)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.svn'"
os.chdir(ud.pkgdir)
# tar them up to a defined filename
try:
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d)
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
except:
t, v, tb = sys.exc_info()
try:
@@ -169,7 +166,7 @@ class Svn(Fetch):
pass
raise t, v, tb
def supports_srcrev(self):
return True
def _revision_key(self, url, ud, d):
@@ -182,7 +179,7 @@ class Svn(Fetch):
"""
Return the latest upstream revision number
"""
logger.debug(2, "SVN fetcher hitting network for %s", url)
bb.msg.debug(2, bb.msg.domain.Fetcher, "SVN fetcher hitting network for %s" % url)
output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "info"), d, True)


@@ -26,11 +26,12 @@ BitBake build tools.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import logging
import bb
import urllib
from bb import data
from bb.fetch import Fetch, FetchError, encodeurl, decodeurl, logger, runfetchcmd
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import encodeurl, decodeurl
from bb.fetch import runfetchcmd
class Wget(Fetch):
"""Class to fetch urls via 'wget'"""
@@ -44,7 +45,7 @@ class Wget(Fetch):
url = encodeurl([ud.type, ud.host, ud.path, ud.user, ud.pswd, {}])
ud.basename = os.path.basename(ud.path)
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
ud.localfile = data.expand(os.path.basename(url), d)
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
@@ -67,14 +68,15 @@ class Wget(Fetch):
fetchcmd = fetchcmd.replace("${URI}", uri.split(";")[0])
fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
logger.info("fetch " + uri)
logger.debug(2, "executing " + fetchcmd)
bb.msg.note(1, bb.msg.domain.Fetcher, "fetch " + uri)
bb.msg.debug(2, bb.msg.domain.Fetcher, "executing " + fetchcmd)
runfetchcmd(fetchcmd, d)
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath) and not checkonly:
logger.debug(2, "The fetch command for %s returned success but %s doesn't exist?...", uri, ud.localpath)
bb.msg.debug(2, bb.msg.domain.Fetcher, "The fetch command for %s returned success but %s doesn't exist?..." % (uri, ud.localpath))
return False
return True
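
The functional change in the wget hunk is the local file name: one side percent-decodes the URL basename before use, so an escaped name such as foo%2Bbar.tar.gz is stored as foo+bar.tar.gz. Illustrated with Python 2's urllib, matching this tree:

import os
import urllib

path = "/releases/foo%2Bbar.tar.gz"
basename = os.path.basename(path)
print(urllib.unquote(basename))     # -> foo+bar.tar.gz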

File diff suppressed because it is too large.


@@ -1,143 +0,0 @@
"""
BitBake 'Fetch' implementation for bzr.
"""
# Copyright (C) 2007 Ross Burton
# Copyright (C) 2007 Richard Purdie
#
# Classes for obtaining upstream sources for the
# BitBake build tools.
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class Bzr(FetchMethod):
def supports(self, url, ud, d):
return ud.type in ['bzr']
def urldata_init(self, ud, d):
"""
init bzr specific variable within url data
"""
# Create paths to bzr checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(data.expand('${BZRDIR}', d), ud.host, relpath)
ud.setup_revisons(d)
if not ud.revision:
ud.revision = self.latest_revision(ud.url, ud, d)
ud.localfile = data.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision), d)
def _buildbzrcommand(self, ud, d, command):
"""
Build up an bzr commandline based on ud
command is "fetch", "update", "revno"
"""
basecmd = data.expand('${FETCHCMD_bzr}', d)
proto = ud.parm.get('proto', 'http')
bzrroot = ud.host + ud.path
options = []
if command is "revno":
bzrcmd = "%s revno %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
else:
if ud.revision:
options.append("-r %s" % ud.revision)
if command is "fetch":
bzrcmd = "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
elif command is "update":
bzrcmd = "%s pull %s --overwrite" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid bzr command %s" % command, ud.url)
return bzrcmd
def download(self, loc, ud, d):
"""Fetch url"""
if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
bzrcmd = self._buildbzrcommand(ud, d, "update")
logger.debug(1, "BZR Update %s", loc)
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
os.chdir(os.path.join (ud.pkgdir, os.path.basename(ud.path)))
runfetchcmd(bzrcmd, d)
else:
bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
logger.debug(1, "BZR Checkout %s", loc)
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", bzrcmd)
runfetchcmd(bzrcmd, d)
os.chdir(ud.pkgdir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.bzr' --exclude '.bzrtags'"
# tar them up to a defined filename
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(ud.pkgdir)), d, cleanup = [ud.localpath])
def supports_srcrev(self):
return True
def _revision_key(self, url, ud, d, name):
"""
Return a unique key for the url
"""
return "bzr:" + ud.pkgdir
def _latest_revision(self, url, ud, d, name):
"""
Return the latest upstream revision number
"""
logger.debug(2, "BZR fetcher hitting network for %s", url)
bb.fetch2.check_network_access(d, self._buildbzrcommand(ud, d, "revno"), ud.url)
output = runfetchcmd(self._buildbzrcommand(ud, d, "revno"), d, True)
return output.strip()
def _sortable_revision(self, url, ud, d):
"""
Return a sortable revision number which in our case is the revision number
"""
return self._build_revision(url, ud, d)
def _build_revision(self, url, ud, d):
return ud.revision
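
The removed fetch2 files pass cleanup = [ud.localpath] to runfetchcmd so that a failed command removes its half-written output before the error propagates. A sketch of the behaviour that argument requests (the wrapper below is illustrative, not fetch2's implementation):

import os

def run_with_cleanup(func, cleanup=()):
    try:
        return func()
    except Exception:
        for path in cleanup:            # drop partial outputs on failure
            if os.path.exists(path):
                os.remove(path)
        raise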


@@ -1,181 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations
Classes for obtaining upstream sources for the
BitBake build tools.
"""
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
#Based on functions from the base bb module, Copyright 2003 Holger Schurig
#
import os
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod, FetchError, MissingParameterError, logger
from bb.fetch2 import runfetchcmd
class Cvs(FetchMethod):
"""
Class to fetch a module or modules from cvs repositories
"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with cvs.
"""
return ud.type in ['cvs']
def urldata_init(self, ud, d):
if not "module" in ud.parm:
raise MissingParameterError("module", ud.url)
ud.module = ud.parm["module"]
ud.tag = ud.parm.get('tag', "")
# Override the default date in certain cases
if 'date' in ud.parm:
ud.date = ud.parm['date']
elif ud.tag:
ud.date = ""
norecurse = ''
if 'norecurse' in ud.parm:
norecurse = '_norecurse'
fullpath = ''
if 'fullpath' in ud.parm:
fullpath = '_fullpath'
ud.localfile = data.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath), d)
def need_update(self, url, ud, d):
if (ud.date == "now"):
return True
if not os.path.exists(ud.localpath):
return True
return False
def download(self, loc, ud, d):
method = ud.parm.get('method', 'pserver')
localdir = ud.parm.get('localdir', ud.module)
cvs_port = ud.parm.get('port', '')
cvs_rsh = None
if method == "ext":
if "rsh" in ud.parm:
cvs_rsh = ud.parm["rsh"]
if method == "dir":
cvsroot = ud.path
else:
cvsroot = ":" + method
cvsproxyhost = data.getVar('CVS_PROXY_HOST', d, True)
if cvsproxyhost:
cvsroot += ";proxy=" + cvsproxyhost
cvsproxyport = data.getVar('CVS_PROXY_PORT', d, True)
if cvsproxyport:
cvsroot += ";proxyport=" + cvsproxyport
cvsroot += ":" + ud.user
if ud.pswd:
cvsroot += ":" + ud.pswd
cvsroot += "@" + ud.host + ":" + cvs_port + ud.path
options = []
if 'norecurse' in ud.parm:
options.append("-l")
if ud.date:
# treat YYYYMMDDHHMM specially for CVS
if len(ud.date) == 12:
options.append("-D \"%s %s:%s UTC\"" % (ud.date[0:8], ud.date[8:10], ud.date[10:12]))
else:
options.append("-D \"%s UTC\"" % ud.date)
if ud.tag:
options.append("-r %s" % ud.tag)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "cvs:%s" % data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
data.setVar('CVSROOT', cvsroot, localdata)
data.setVar('CVSCOOPTS', " ".join(options), localdata)
data.setVar('CVSMODULE', ud.module, localdata)
cvscmd = data.getVar('FETCHCOMMAND', localdata, True)
cvsupdatecmd = data.getVar('UPDATECOMMAND', localdata, True)
if cvs_rsh:
cvscmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvscmd)
cvsupdatecmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvsupdatecmd)
# create module directory
logger.debug(2, "Fetch: checking for module directory")
pkg = data.expand('${PN}', d)
pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)
moddir = os.path.join(pkgdir, localdir)
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
logger.info("Update " + loc)
bb.fetch2.check_network_access(d, cvsupdatecmd, ud.url)
# update sources there
os.chdir(moddir)
cmd = cvsupdatecmd
else:
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(pkgdir)
os.chdir(pkgdir)
logger.debug(1, "Running %s", cvscmd)
bb.fetch2.check_network_access(d, cvscmd, ud.url)
cmd = cvscmd
runfetchcmd(cmd, d, cleanup = [moddir])
if not os.access(moddir, os.R_OK):
raise FetchError("Directory %s was not readable despite sucessful fetch?!" % moddir, ud.url)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude 'CVS'"
# tar them up to a defined filename
if 'fullpath' in ud.parm:
os.chdir(pkgdir)
cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, localdir)
else:
os.chdir(moddir)
os.chdir('..')
cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(moddir))
runfetchcmd(cmd, d, cleanup = [ud.localpath])
def clean(self, ud, d):
""" Clean CVS Files and tarballs """
pkg = data.expand('${PN}', d)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "cvs:%s" % data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)
bb.utils.remove(pkgdir, True)
bb.utils.remove(ud.localpath)
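
The fetch2 cvs fetcher above special-cases a 12-digit YYYYMMDDHHMM date, expanding it into an explicit UTC -D option. In isolation (cvs_date_option is illustrative):

def cvs_date_option(date):
    if len(date) == 12:                 # YYYYMMDDHHMM
        return '-D "%s %s:%s UTC"' % (date[0:8], date[8:10], date[10:12])
    return '-D "%s UTC"' % date

assert cvs_date_option("202401151030") == '-D "20240115 10:30 UTC"'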


@@ -1,245 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' git implementation
"""
#Copyright (C) 2005 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class Git(FetchMethod):
"""Class to fetch a module or modules from git repositories"""
def init(self, d):
#
# Only enable _sortable revision if the key is set
#
if bb.data.getVar("BB_GIT_CLONE_FOR_SRCREV", d, True):
self._sortable_buildindex = self._sortable_buildindex_disabled
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with git.
"""
return ud.type in ['git']
def urldata_init(self, ud, d):
"""
init git specific variable within url data
so that the git method like latest_revision() can work
"""
if 'protocol' in ud.parm:
ud.proto = ud.parm['protocol']
elif not ud.host:
ud.proto = 'file'
else:
ud.proto = "rsync"
ud.nocheckout = False
if 'nocheckout' in ud.parm:
ud.nocheckout = True
branches = ud.parm.get("branch", "master").split(',')
if len(branches) != len(ud.names):
raise bb.fetch2.ParameterError("The number of name and branch parameters is not balanced", ud.url)
ud.branches = {}
for name in ud.names:
branch = branches[ud.names.index(name)]
ud.branches[name] = branch
gitsrcname = '%s%s' % (ud.host, ud.path.replace('/', '.'))
ud.mirrortarball = 'git2_%s.tar.gz' % (gitsrcname)
ud.fullmirror = os.path.join(data.getVar("DL_DIR", d, True), ud.mirrortarball)
ud.clonedir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)
ud.basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
ud.write_tarballs = (data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True) or "0") != "0"
ud.localfile = ud.clonedir
ud.setup_revisons(d)
for name in ud.names:
# Ensure anything that doesn't look like a sha-1 revision (40 hex chars) is translated into one
if not ud.revisions[name] or len(ud.revisions[name]) != 40 or (False in [c in "abcdef0123456789" for c in ud.revisions[name]]):
ud.branches[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud.url, ud, d, name)
def localpath(self, url, ud, d):
return ud.clonedir
def need_update(self, u, ud, d):
if not os.path.exists(ud.clonedir):
return True
os.chdir(ud.clonedir)
for name in ud.names:
if not self._contains_ref(ud.revisions[name], d):
return True
if ud.write_tarballs and not os.path.exists(ud.fullmirror):
return True
return False
def try_premirror(self, u, ud, d):
# If we don't do this, updating an existing checkout with only premirrors
# is not possible
if bb.data.getVar("BB_FETCH_PREMIRRORONLY", d, True) is not None:
return True
if os.path.exists(ud.clonedir):
return False
return True
def download(self, loc, ud, d):
"""Fetch url"""
if ud.user:
username = ud.user + '@'
else:
username = ""
ud.repochanged = not os.path.exists(ud.fullmirror)
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.clonedir) and os.path.exists(ud.fullmirror):
bb.utils.mkdirhier(ud.clonedir)
os.chdir(ud.clonedir)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d)
# If the repo still doesn't exist, fallback to cloning it
if not os.path.exists(ud.clonedir):
bb.fetch2.check_network_access(d, "git clone --bare %s%s" % (ud.host, ud.path))
runfetchcmd("%s clone --bare %s://%s%s%s %s" % (ud.basecmd, ud.proto, username, ud.host, ud.path, ud.clonedir), d)
os.chdir(ud.clonedir)
# Update the checkout if needed
needupdate = False
for name in ud.names:
if not self._contains_ref(ud.revisions[name], d):
needupdate = True
if needupdate:
bb.fetch2.check_network_access(d, "git fetch %s%s" % (ud.host, ud.path), ud.url)
try:
runfetchcmd("%s remote prune origin" % ud.basecmd, d)
runfetchcmd("%s remote rm origin" % ud.basecmd, d)
except bb.fetch2.FetchError:
logger.debug(1, "No Origin")
runfetchcmd("%s remote add origin %s://%s%s%s" % (ud.basecmd, ud.proto, username, ud.host, ud.path), d)
runfetchcmd("%s fetch --all -t" % ud.basecmd, d)
runfetchcmd("%s prune-packed" % ud.basecmd, d)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)
ud.repochanged = True
def build_mirror_data(self, url, ud, d):
# Generate a mirror tarball if needed
if ud.write_tarballs and (ud.repochanged or not os.path.exists(ud.fullmirror)):
os.chdir(ud.clonedir)
logger.info("Creating tarball of git repository")
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, os.path.join(".") ), d)
def unpack(self, ud, destdir, d):
""" unpack the downloaded src to destdir"""
subdir = ud.parm.get("subpath", "")
if subdir != "":
readpathspec = ":%s" % (subdir)
else:
readpathspec = ""
destdir = os.path.join(destdir, "git/")
if os.path.exists(destdir):
bb.utils.prunedir(destdir)
runfetchcmd("git clone -s -n %s %s" % (ud.clonedir, destdir), d)
if not ud.nocheckout:
os.chdir(destdir)
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d)
return True
def clean(self, ud, d):
""" clean the git directory """
bb.utils.remove(ud.localpath, True)
bb.utils.remove(ud.fullmirror)
def supports_srcrev(self):
return True
def _contains_ref(self, tag, d):
basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
output = runfetchcmd("%s log --pretty=oneline -n 1 %s -- 2> /dev/null | wc -l" % (basecmd, tag), d, quiet=True)
return output.split()[0] != "0"
def _revision_key(self, url, ud, d, name):
"""
Return a unique key for the url
"""
return "git:" + ud.host + ud.path.replace('/', '.') + ud.branches[name]
def _latest_revision(self, url, ud, d, name):
"""
Compute the HEAD revision for the url
"""
if ud.user:
username = ud.user + '@'
else:
username = ""
bb.fetch2.check_network_access(d, "git ls-remote %s%s %s" % (ud.host, ud.path, ud.branches[name]))
basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
cmd = "%s ls-remote %s://%s%s%s %s" % (basecmd, ud.proto, username, ud.host, ud.path, ud.branches[name])
output = runfetchcmd(cmd, d, True)
if not output:
raise bb.fetch2.FetchError("The command %s gave empty output unexpectedly" % cmd, url)
return output.split()[0]
def _build_revision(self, url, ud, d, name):
return ud.revisions[name]
def _sortable_buildindex_disabled(self, url, ud, d, rev):
"""
Return a suitable buildindex for the revision specified. This is done by counting revisions
using "git rev-list" which may or may not work in different circumstances.
"""
cwd = os.getcwd()
# Check if we have the rev already
if not os.path.exists(ud.clonedir):
print("no repo")
self.download(None, ud, d)
if not os.path.exists(ud.clonedir):
logger.error("GIT repository for %s doesn't exist in %s, cannot get sortable buildnumber, using old value", url, ud.clonedir)
return None
os.chdir(ud.clonedir)
if not self._contains_ref(rev, d):
self.download(None, ud, d)
output = runfetchcmd("%s rev-list %s -- 2> /dev/null | wc -l" % (ud.basecmd, rev), d, quiet=True)
os.chdir(cwd)
buildindex = "%s" % output.split()[0]
logger.debug(1, "GIT repository for %s in %s is returning %s revisions in rev-list before %s", url, ud.clonedir, buildindex, rev)
return buildindex
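
urldata_init above decides whether a SRCREV already names a commit by testing for 40 lowercase hex characters; anything else is treated as a branch and resolved through ls-remote. The test, extracted (helper name is illustrative):

def looks_like_git_sha1(rev):
    return bool(rev) and len(rev) == 40 and \
        all(c in "abcdef0123456789" for c in rev)

assert looks_like_git_sha1("a" * 40)
assert not looks_like_git_sha1("master")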


@@ -1,176 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for mercurial DRCS (hg).
"""
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2004 Marcin Juszkiewicz
# Copyright (C) 2007 Robert Schuster
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class Hg(FetchMethod):
"""Class to fetch from mercurial repositories"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with mercurial.
"""
return ud.type in ['hg']
def urldata_init(self, ud, d):
"""
init hg specific variable within url data
"""
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
ud.module = ud.parm["module"]
# Create paths to mercurial checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(data.expand('${HGDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.setup_revisons(d)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
elif not ud.revision:
ud.revision = self.latest_revision(ud.url, ud, d)
ud.localfile = data.expand('%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)
def need_update(self, url, ud, d):
revTag = ud.parm.get('rev', 'tip')
if revTag == "tip":
return True
if not os.path.exists(ud.localpath):
return True
return False
def _buildhgcommand(self, ud, d, command):
"""
Build up an hg commandline based on ud
command is "fetch", "update", "info"
"""
basecmd = data.expand('${FETCHCMD_hg}', d)
proto = ud.parm.get('proto', 'http')
host = ud.host
if proto == "file":
host = "/"
ud.host = "localhost"
if not ud.user:
hgroot = host + ud.path
else:
hgroot = ud.user + "@" + host + ud.path
if command is "info":
return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, ud.module)
options = []
if ud.revision:
options.append("-r %s" % ud.revision)
if command is "fetch":
cmd = "%s clone %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
elif command is "pull":
# do not pass the options list; limiting pull to a rev causes the local
# repo not to contain it, and the immediately following "update" command
# will crash
cmd = "%s pull" % (basecmd)
elif command is "update":
cmd = "%s update -C %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid hg command %s" % command, ud.url)
return cmd
def download(self, loc, ud, d):
"""Fetch url"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
updatecmd = self._buildhgcommand(ud, d, "pull")
logger.info("Update " + loc)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
bb.fetch2.check_network_access(d, updatecmd, ud.url)
runfetchcmd(updatecmd, d)
else:
fetchcmd = self._buildhgcommand(ud, d, "fetch")
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd, ud.url)
runfetchcmd(fetchcmd, d)
# Even when we clone (fetch), we still need to update as hg's clone
# won't check out the specified revision if it's on a branch
updatecmd = self._buildhgcommand(ud, d, "update")
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
runfetchcmd(updatecmd, d)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.hg' --exclude '.hgrags'"
os.chdir(ud.pkgdir)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d, cleanup = [ud.localpath])
def supports_srcrev(self):
return True
def _latest_revision(self, url, ud, d, name):
"""
Compute tip revision for the url
"""
bb.fetch2.check_network_access(d, self._buildhgcommand(ud, d, "info"))
output = runfetchcmd(self._buildhgcommand(ud, d, "info"), d)
return output.strip()
def _build_revision(self, url, ud, d):
return ud.revision
def _revision_key(self, url, ud, d, name):
"""
Return a unique key for the url
"""
return "hg:" + ud.moddir


@@ -1,93 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations
Classes for obtaining upstream sources for the
BitBake build tools.
"""
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import bb
import bb.utils
from bb import data
from bb.fetch2 import FetchMethod
class Local(FetchMethod):
def supports(self, url, urldata, d):
"""
Check to see if a given url represents a local fetch.
"""
return urldata.type in ['file']
def urldata_init(self, ud, d):
# We don't set localfile as for this fetcher the file is already local!
ud.basename = os.path.basename(ud.url.split("://")[1].split(";")[0])
return
def localpath(self, url, urldata, d):
"""
Return the local filename of a given url assuming a successful fetch.
"""
path = url.split("://")[1]
path = path.split(";")[0]
newpath = path
dldirfile = os.path.join(data.getVar("DL_DIR", d, True), os.path.basename(path))
if os.path.exists(dldirfile):
return dldirfile
if path[0] != "/":
filespath = data.getVar('FILESPATH', d, True)
if filespath:
newpath = bb.utils.which(filespath, path)
if not newpath:
filesdir = data.getVar('FILESDIR', d, True)
if filesdir:
newpath = os.path.join(filesdir, path)
if not os.path.exists(newpath) and path.find("*") == -1:
return dldirfile
return newpath
def need_update(self, url, ud, d):
if url.find("*") != -1:
return False
if os.path.exists(ud.localpath):
return False
return True
def download(self, url, urldata, d):
"""Fetch urls (no-op for Local method)"""
# no need to fetch local files, we'll deal with them in place.
return 1
def checkstatus(self, url, urldata, d):
"""
Check the status of the url
"""
if urldata.localpath.find("*") != -1:
logger.info("URL %s looks like a glob and was therefore not checked.", url)
return True
if os.path.exists(urldata.localpath):
return True
return False
def clean(self, urldata, d):
return
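
Local.localpath above resolves a file:// URL in a fixed order: DL_DIR first, then each FILESPATH directory, then FILESDIR, falling back to the DL_DIR name for non-glob misses. Condensed, with a minimal stand-in for bb.utils.which (resolve_local is illustrative):

import os

def which(search_path, filename):
    # Stand-in for bb.utils.which(): first directory containing the file.
    for directory in search_path.split(":"):
        candidate = os.path.join(directory, filename)
        if os.path.exists(candidate):
            return candidate
    return ""

def resolve_local(path, dl_dir, filespath=None, filesdir=None):
    dldirfile = os.path.join(dl_dir, os.path.basename(path))
    if os.path.exists(dldirfile):
        return dldirfile
    newpath = path
    if not path.startswith("/"):
        if filespath:
            newpath = which(filespath, path)
        if not newpath and filesdir:
            newpath = os.path.join(filesdir, path)
    if not os.path.exists(newpath) and "*" not in path:
        return dldirfile
    return newpath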


@@ -1,135 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
Bitbake "Fetch" implementation for osc (Opensuse build service client).
Based on the svn "Fetch" implementation.
"""
import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import runfetchcmd
class Osc(FetchMethod):
"""Class to fetch a module or modules from Opensuse build server
repositories."""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with osc.
"""
return ud.type in ['osc']
def urldata_init(self, ud, d):
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
ud.module = ud.parm["module"]
# Create paths to osc checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(data.expand('${OSCDIR}', d), ud.host)
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
else:
pv = data.getVar("PV", d, 0)
rev = bb.fetch2.srcrev_internal_helper(ud, d)
if rev and rev != True:
ud.revision = rev
else:
ud.revision = ""
ud.localfile = data.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision), d)
def _buildosccommand(self, ud, d, command):
"""
Build up an ocs commandline based on ud
command is "fetch", "update", "info"
"""
basecmd = data.expand('${FETCHCMD_osc}', d)
proto = ud.parm.get('proto', 'ocs')
options = []
config = "-c %s" % self.generate_config(ud, d)
if ud.revision:
options.append("-r %s" % ud.revision)
coroot = self._strip_leading_slashes(ud.path)
if command is "fetch":
osccmd = "%s %s co %s/%s %s" % (basecmd, config, coroot, ud.module, " ".join(options))
elif command is "update":
osccmd = "%s %s up %s" % (basecmd, config, " ".join(options))
else:
raise FetchError("Invalid osc command %s" % command, ud.url)
return osccmd
def download(self, loc, ud, d):
"""
Fetch url
"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(data.expand('${OSCDIR}', d), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ loc)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", oscupdatecmd)
bb.fetch2.check_network_access(d, oscupdatecmd, ud.url)
runfetchcmd(oscupdatecmd, d)
else:
oscfetchcmd = self._buildosccommand(ud, d, "fetch")
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", oscfetchcmd)
bb.fetch2.check_network_access(d, oscfetchcmd, ud.url)
runfetchcmd(oscfetchcmd, d)
os.chdir(os.path.join(ud.pkgdir + ud.path))
# tar them up to a defined filename
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d, cleanup = [ud.localpath])
def supports_srcrev(self):
return False
def generate_config(self, ud, d):
"""
Generate a .oscrc to be used for this run.
"""
config_path = os.path.join(data.expand('${OSCDIR}', d), "oscrc")
if (os.path.exists(config_path)):
os.remove(config_path)
f = open(config_path, 'w')
f.write("[general]\n")
f.write("apisrv = %s\n" % ud.host)
f.write("scheme = http\n")
f.write("su-wrapper = su -c\n")
f.write("build-root = %s\n" % data.expand('${WORKDIR}', d))
f.write("urllist = http://moblin-obs.jf.intel.com:8888/build/%(project)s/%(repository)s/%(buildarch)s/:full/%(name)s.rpm\n")
f.write("extra-pkgs = gzip\n")
f.write("\n")
f.write("[%s]\n" % ud.host)
f.write("user = %s\n" % ud.parm["user"])
f.write("pass = %s\n" % ud.parm["pswd"])
f.close()
return config_path


@@ -1,196 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations
Classes for obtaining upstream sources for the
BitBake build tools.
"""
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
from future_builtins import zip
import os
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
class Perforce(FetchMethod):
def supports(self, url, ud, d):
return ud.type in ['p4']
def doparse(url, d):
parm = {}
path = url.split("://")[1]
        delim = path.find("@")
if delim != -1:
(user, pswd, host, port) = path.split('@')[0].split(":")
path = path.split('@')[1]
else:
(host, port) = data.getVar('P4PORT', d).split(':')
user = ""
pswd = ""
if path.find(";") != -1:
keys=[]
values=[]
plist = path.split(';')
for item in plist:
if item.count('='):
(key, value) = item.split('=')
keys.append(key)
values.append(value)
parm = dict(zip(keys, values))
path = "//" + path.split(';')[0]
host += ":%s" % (port)
parm["cset"] = Perforce.getcset(d, path, host, user, pswd, parm)
return host, path, user, pswd, parm
doparse = staticmethod(doparse)
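    # Illustrative example of the URL form doparse expects (hypothetical values):
    #   p4://user:password:perforce.example.com:1666@depot/project/...;revision=5
    # yields host "perforce.example.com:1666", path "//depot/project/..." and
    # parm {"revision": "5"}, plus a "cset" entry resolved via getcset below.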
def getcset(d, depot, host, user, pswd, parm):
p4opt = ""
if "cset" in parm:
            return parm["cset"]
if user:
p4opt += " -u %s" % (user)
if pswd:
p4opt += " -P %s" % (pswd)
if host:
p4opt += " -p %s" % (host)
p4date = data.getVar("P4DATE", d, True)
if "revision" in parm:
depot += "#%s" % (parm["revision"])
elif "label" in parm:
depot += "@%s" % (parm["label"])
elif p4date:
depot += "@%s" % (p4date)
p4cmd = data.getVar('FETCHCOMMAND_p4', d, True)
logger.debug(1, "Running %s%s changes -m 1 %s", p4cmd, p4opt, depot)
p4file = os.popen("%s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
cset = p4file.readline().strip()
logger.debug(1, "READ %s", cset)
if not cset:
return -1
return cset.split(' ')[1]
getcset = staticmethod(getcset)
def urldata_init(self, ud, d):
(host, path, user, pswd, parm) = Perforce.doparse(ud.url, d)
# If a label is specified, we use that as our filename
if "label" in parm:
ud.localfile = "%s.tar.gz" % (parm["label"])
return
base = path
which = path.find('/...')
if which != -1:
base = path[:which]
base = self._strip_leading_slashes(base)
cset = Perforce.getcset(d, path, host, user, pswd, parm)
ud.localfile = data.expand('%s+%s+%s.tar.gz' % (host, base.replace('/', '.'), cset), d)
def download(self, loc, ud, d):
"""
Fetch urls
"""
(host, depot, user, pswd, parm) = Perforce.doparse(loc, d)
if depot.find('/...') != -1:
path = depot[:depot.find('/...')]
else:
path = depot
module = parm.get('module', os.path.basename(path))
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "p4:%s" % data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
# Get the p4 command
p4opt = ""
if user:
p4opt += " -u %s" % (user)
if pswd:
p4opt += " -P %s" % (pswd)
if host:
p4opt += " -p %s" % (host)
p4cmd = data.getVar('FETCHCOMMAND', localdata, True)
# create temp directory
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oep4.XXXXXX', localdata), localdata)
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmppipe.readline().strip()
if not tmpfile:
raise FetchError("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.", loc)
if "label" in parm:
depot = "%s@%s" % (depot, parm["label"])
else:
cset = Perforce.getcset(d, depot, host, user, pswd, parm)
depot = "%s@%s" % (depot, cset)
os.chdir(tmpfile)
logger.info("Fetch " + loc)
logger.info("%s%s files %s", p4cmd, p4opt, depot)
p4file = os.popen("%s%s files %s" % (p4cmd, p4opt, depot))
if not p4file:
raise FetchError("Fetch: unable to get the P4 files from %s" % depot, loc)
count = 0
        for line in p4file:
            parts = line.split()
            if parts[2] == "delete":
                continue
            dest = parts[0][len(path)+1:]
            where = dest.find("#")
            os.system("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module, dest[:where], parts[0]))
count = count + 1
if count == 0:
raise FetchError("Fetch: No files gathered from the P4 fetch", loc)
runfetchcmd("tar -czf %s %s" % (ud.localpath, module), d, cleanup = [ud.localpath])
# cleanup
bb.utils.prunedir(tmpfile)


@@ -1,98 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake "Fetch" repo (git) implementation
"""
# Copyright (C) 2009 Tom Rini <trini@embeddedalley.com>
#
# Based on git.py which is:
#Copyright (C) 2005 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class Repo(FetchMethod):
"""Class to fetch a module or modules from repo (git) repositories"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with repo.
"""
return ud.type in ["repo"]
def urldata_init(self, ud, d):
"""
        We don't care about the git rev of the manifests repository, but
we do care about the manifest to use. The default is "default".
We also care about the branch or tag to be used. The default is
"master".
"""
ud.proto = ud.parm.get('protocol', 'git')
ud.branch = ud.parm.get('branch', 'master')
ud.manifest = ud.parm.get('manifest', 'default.xml')
if not ud.manifest.endswith('.xml'):
ud.manifest += '.xml'
ud.localfile = data.expand("repo_%s%s_%s_%s.tar.gz" % (ud.host, ud.path.replace("/", "."), ud.manifest, ud.branch), d)
def download(self, loc, ud, d):
"""Fetch url"""
if os.access(os.path.join(data.getVar("DL_DIR", d, True), ud.localfile), os.R_OK):
logger.debug(1, "%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
return
gitsrcname = "%s%s" % (ud.host, ud.path.replace("/", "."))
repodir = data.getVar("REPODIR", d, True) or os.path.join(data.getVar("DL_DIR", d, True), "repo")
codir = os.path.join(repodir, gitsrcname, ud.manifest)
if ud.user:
username = ud.user + "@"
else:
username = ""
bb.utils.mkdirhier(os.path.join(codir, "repo"))
os.chdir(os.path.join(codir, "repo"))
if not os.path.exists(os.path.join(codir, "repo", ".repo")):
bb.fetch2.check_network_access(d, "repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), ud.url)
runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d)
bb.fetch2.check_network_access(d, "repo sync %s" % ud.url, ud.url)
runfetchcmd("repo sync", d)
os.chdir(codir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.repo' --exclude '.git'"
# Create a cache
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.join(".", "*") ), d)
def supports_srcrev(self):
return False
def _build_revision(self, url, ud, d):
return ud.manifest
def _want_sortable_revision(self, url, ud, d):
return False


@@ -1,120 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
'''
BitBake 'Fetch' implementations
This implementation is for Secure Shell (SSH), and attempts to comply with the
IETF secsh internet draft:
http://tools.ietf.org/wg/secsh/draft-ietf-secsh-scp-sftp-ssh-uri/
Currently does not support the sftp parameters, as this uses scp
Also does not support the 'fingerprint' connection parameter.
'''
# Copyright (C) 2006 OpenedHand Ltd.
#
#
# Based in part on svk.py:
# Copyright (C) 2006 Holger Hans Peter Freyther
# Based on svn.py:
# Copyright (C) 2003, 2004 Chris Larson
# Based on functions from the base bb module:
# Copyright 2003 Holger Schurig
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import re, os
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
__pattern__ = re.compile(r'''
\s* # Skip leading whitespace
ssh:// # scheme
( # Optional username/password block
(?P<user>\S+) # username
(:(?P<pass>\S+))? # colon followed by the password (optional)
)?
(?P<cparam>(;[^;]+)*)? # connection parameters block (optional)
@
(?P<host>\S+?) # non-greedy match of the host
(:(?P<port>[0-9]+))? # colon followed by the port (optional)
/
(?P<path>[^;]+) # path on the remote system, may be absolute or relative,
# and may include the use of '~' to reference the remote home
# directory
(?P<sparam>(;[^;]+)*)? # parameters block (optional)
$
''', re.VERBOSE)
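# Illustrative URLs the pattern above is written to accept (hypothetical hosts):
#   ssh://user@host.example.com/absolute/path/to/file
#   ssh://user:secret@host.example.com:2222/~/relative/path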
class SSH(FetchMethod):
'''Class to fetch a module or modules via Secure Shell'''
def supports(self, url, urldata, d):
        return __pattern__.match(url) is not None
def localpath(self, url, urldata, d):
m = __pattern__.match(urldata.url)
path = m.group('path')
host = m.group('host')
lpath = os.path.join(data.getVar('DL_DIR', d, True), host, os.path.basename(path))
return lpath
def download(self, url, urldata, d):
dldir = data.getVar('DL_DIR', d, True)
m = __pattern__.match(url)
path = m.group('path')
host = m.group('host')
port = m.group('port')
user = m.group('user')
password = m.group('pass')
ldir = os.path.join(dldir, host)
lpath = os.path.join(ldir, os.path.basename(path))
if not os.path.exists(ldir):
os.makedirs(ldir)
if port:
port = '-P %s' % port
else:
port = ''
if user:
fr = user
if password:
fr += ':%s' % password
fr += '@%s' % host
else:
fr = host
fr += ':%s' % path
import commands
cmd = 'scp -B -r %s %s %s/' % (
port,
commands.mkarg(fr),
commands.mkarg(ldir)
)
bb.fetch2.check_network_access(d, cmd, urldata.url)
runfetchcmd(cmd, d)


@@ -1,97 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations
This implementation is for svk. It is based on the svn implementation
"""
# Copyright (C) 2006 Holger Hans Peter Freyther
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
class Svk(FetchMethod):
"""Class to fetch a module or modules from svk repositories"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with svk.
"""
return ud.type in ['svk']
def urldata_init(self, ud, d):
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
else:
ud.module = ud.parm["module"]
ud.revision = ud.parm.get('rev', "")
ud.localfile = data.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ud.date), d)
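    # Illustrative example (hypothetical depot): a URL such as
    #   SRC_URI = "svk://svk.example.com/local/proj;module=trunk;rev=42"
    # checks out trunk at revision 42; without "rev" the checkout is pinned
    # to ud.date instead (see the svkcmd construction in download below).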
def need_update(self, url, ud, d):
if ud.date == "now":
return True
if not os.path.exists(ud.localpath):
return True
return False
def download(self, loc, ud, d):
"""Fetch urls"""
svkroot = ud.host + ud.path
svkcmd = "svk co -r {%s} %s/%s" % (ud.date, svkroot, ud.module)
if ud.revision:
svkcmd = "svk co -r %s %s/%s" % (ud.revision, svkroot, ud.module)
# create temp directory
localdata = data.createCopy(d)
data.update_data(localdata)
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oesvk.XXXXXX', localdata), localdata)
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmppipe.readline().strip()
if not tmpfile:
            raise FetchError("Fetch: unable to create temporary directory; make sure 'mktemp' is in the PATH.", loc)
# check out sources there
os.chdir(tmpfile)
logger.info("Fetch " + loc)
logger.debug(1, "Running %s", svkcmd)
runfetchcmd(svkcmd, d, cleanup = [tmpfile])
os.chdir(os.path.join(tmpfile, os.path.dirname(ud.module)))
# tar them up to a defined filename
runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.basename(ud.module)), d, cleanup = [ud.localpath])
# cleanup
bb.utils.prunedir(tmpfile)


@@ -1,182 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for svn.
"""
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2004 Marcin Juszkiewicz
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class Svn(FetchMethod):
"""Class to fetch a module or modules from svn repositories"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with svn.
"""
return ud.type in ['svn']
def urldata_init(self, ud, d):
"""
init svn specific variable within url data
"""
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
ud.module = ud.parm["module"]
# Create paths to svn checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(data.expand('${SVNDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.setup_revisons(d)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
ud.localfile = data.expand('%s_%s_%s_%s_.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)
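    # Illustrative example (hypothetical host): a URL such as
    #   SRC_URI = "svn://svn.example.org/repos;module=trunk;rev=1234;proto=http"
    # makes _buildsvncommand produce a fetch command along the lines of
    #   <FETCHCMD_svn> co -r 1234 http://svn.example.org/repos/trunk@1234 trunk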
def _buildsvncommand(self, ud, d, command):
"""
Build up an svn commandline based on ud
command is "fetch", "update", "info"
"""
basecmd = data.expand('${FETCHCMD_svn}', d)
proto = ud.parm.get('proto', 'svn')
svn_rsh = None
if proto == "svn+ssh" and "rsh" in ud.parm:
svn_rsh = ud.parm["rsh"]
svnroot = ud.host + ud.path
options = []
if ud.user:
options.append("--username %s" % ud.user)
if ud.pswd:
options.append("--password %s" % ud.pswd)
        if command == "info":
svncmd = "%s info %s %s://%s/%s/" % (basecmd, " ".join(options), proto, svnroot, ud.module)
else:
suffix = ""
if ud.revision:
options.append("-r %s" % ud.revision)
suffix = "@%s" % (ud.revision)
            if command == "fetch":
svncmd = "%s co %s %s://%s/%s%s %s" % (basecmd, " ".join(options), proto, svnroot, ud.module, suffix, ud.module)
            elif command == "update":
svncmd = "%s update %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid svn command %s" % command, ud.url)
if svn_rsh:
svncmd = "svn_RSH=\"%s\" %s" % (svn_rsh, svncmd)
return svncmd
def download(self, loc, ud, d):
"""Fetch url"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(ud.moddir, '.svn'), os.R_OK):
svnupdatecmd = self._buildsvncommand(ud, d, "update")
logger.info("Update " + loc)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", svnupdatecmd)
bb.fetch2.check_network_access(d, svnupdatecmd, ud.url)
runfetchcmd(svnupdatecmd, d)
else:
svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", svnfetchcmd)
bb.fetch2.check_network_access(d, svnfetchcmd, ud.url)
runfetchcmd(svnfetchcmd, d)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.svn'"
os.chdir(ud.pkgdir)
# tar them up to a defined filename
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d, cleanup = [ud.localpath])
def clean(self, ud, d):
""" Clean SVN specific files and dirs """
bb.utils.remove(ud.localpath)
bb.utils.remove(ud.moddir, True)
def supports_srcrev(self):
return True
def _revision_key(self, url, ud, d, name):
"""
Return a unique key for the url
"""
return "svn:" + ud.moddir
def _latest_revision(self, url, ud, d, name):
"""
Return the latest upstream revision number
"""
bb.fetch2.check_network_access(d, self._buildsvncommand(ud, d, "info"))
output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "info"), d, True)
revision = None
for line in output.splitlines():
if "Last Changed Rev" in line:
revision = line.split(":")[1].strip()
return revision
def _sortable_revision(self, url, ud, d):
"""
Return a sortable revision number which in our case is the revision number
"""
return self._build_revision(url, ud, d)
def _build_revision(self, url, ud, d):
return ud.revision


@@ -1,91 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations
Classes for obtaining upstream sources for the
BitBake build tools.
"""
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import logging
import bb
import urllib
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import encodeurl
from bb.fetch2 import decodeurl
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
class Wget(FetchMethod):
"""Class to fetch urls via 'wget'"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with wget.
"""
return ud.type in ['http', 'https', 'ftp']
def urldata_init(self, ud, d):
ud.basename = os.path.basename(ud.path)
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
def download(self, uri, ud, d, checkonly = False):
"""Fetch urls"""
def fetch_uri(uri, ud, d):
if checkonly:
fetchcmd = data.getVar("CHECKCOMMAND", d, True)
elif os.path.exists(ud.localpath):
                # file exists, but we didn't complete it, so try again
fetchcmd = data.getVar("RESUMECOMMAND", d, True)
else:
fetchcmd = data.getVar("FETCHCOMMAND", d, True)
uri = uri.split(";")[0]
uri_decoded = list(decodeurl(uri))
uri_type = uri_decoded[0]
uri_host = uri_decoded[1]
fetchcmd = fetchcmd.replace("${URI}", uri.split(";")[0])
fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
logger.info("fetch " + uri)
logger.debug(2, "executing " + fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd)
runfetchcmd(fetchcmd, d)
            # Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath) and not checkonly:
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, ud.localpath), uri)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "wget:" + data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
fetch_uri(uri, ud, localdata)
return True
def checkstatus(self, uri, ud, d):
return self.download(uri, ud, d, True)
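    # Illustrative only: FETCHCOMMAND/RESUMECOMMAND/CHECKCOMMAND are expected
    # to carry ${URI} and ${FILE} placeholders which fetch_uri substitutes,
    # e.g. a hypothetical FETCHCOMMAND = "wget -t 5 -P ${DL_DIR} ${URI}";
    # the real values come from the metadata.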


@@ -23,66 +23,12 @@ Message handling infrastructure for bitbake
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import sys
import logging
import collections
from itertools import groupby
import warnings
import bb
import bb.event
class BBLogFormatter(logging.Formatter):
"""Formatter which ensures that our 'plain' messages (logging.INFO + 1) are used as is"""
DEBUG3 = logging.DEBUG - 2
DEBUG2 = logging.DEBUG - 1
DEBUG = logging.DEBUG
VERBOSE = logging.INFO - 1
NOTE = logging.INFO
PLAIN = logging.INFO + 1
ERROR = logging.ERROR
WARNING = logging.WARNING
CRITICAL = logging.CRITICAL
levelnames = {
DEBUG3 : 'DEBUG',
DEBUG2 : 'DEBUG',
DEBUG : 'DEBUG',
VERBOSE: 'NOTE',
NOTE : 'NOTE',
PLAIN : '',
WARNING : 'WARNING',
ERROR : 'ERROR',
CRITICAL: 'ERROR',
}
def getLevelName(self, levelno):
try:
return self.levelnames[levelno]
except KeyError:
self.levelnames[levelno] = value = 'Level %d' % levelno
return value
def format(self, record):
record.levelname = self.getLevelName(record.levelno)
if record.levelno == self.PLAIN:
return record.getMessage()
else:
return logging.Formatter.format(self, record)
class Loggers(dict):
def __getitem__(self, key):
if key in self:
return dict.__getitem__(self, key)
else:
log = logging.getLogger("BitBake.%s" % domain._fields[key])
dict.__setitem__(self, key, log)
return log
class DebugLevel(dict):
def __getitem__(self, key):
if key == "default":
key = domain.Default
return get_debug_level(key)
debug_level = collections.defaultdict(lambda: 0)
verbose = False
def _NamedTuple(name, fields):
Tuple = collections.namedtuple(name, " ".join(fields))
@@ -102,99 +48,97 @@ domain = _NamedTuple("Domain", (
"RunQueue",
"TaskData",
"Util"))
logger = logging.getLogger("BitBake")
loggers = Loggers()
debug_level = DebugLevel()
class MsgBase(bb.event.Event):
"""Base class for messages"""
def __init__(self, msg):
self._message = msg
bb.event.Event.__init__(self)
class MsgDebug(MsgBase):
"""Debug Message"""
class MsgNote(MsgBase):
"""Note Message"""
class MsgWarn(MsgBase):
"""Warning Message"""
class MsgError(MsgBase):
"""Error Message"""
class MsgFatal(MsgBase):
"""Fatal Message"""
class MsgPlain(MsgBase):
"""General output"""
#
# Message control functions
#
def set_debug_level(level):
for log in loggers.itervalues():
log.setLevel(logging.NOTSET)
if level:
logger.setLevel(logging.DEBUG - level + 1)
else:
logger.setLevel(logging.INFO)
for d in domain:
debug_level[d] = level
debug_level[domain.Default] = level
def get_debug_level(msgdomain = domain.Default):
if not msgdomain:
level = logger.getEffectiveLevel()
else:
level = loggers[msgdomain].getEffectiveLevel()
return max(0, logging.DEBUG - level + 1)
return debug_level[msgdomain]
def set_verbose(level):
if level:
logger.setLevel(BBLogFormatter.VERBOSE)
else:
logger.setLevel(BBLogFormatter.INFO)
verbose = level
def set_debug_domains(domainargs):
for (domainarg, iterator) in groupby(domainargs):
for index, msgdomain in enumerate(domain._fields):
if msgdomain == domainarg:
level = len(tuple(iterator))
if level:
loggers[index].setLevel(logging.DEBUG - level + 1)
def set_debug_domains(strdomains):
for domainstr in strdomains:
for d in domain:
if domain._fields[d] == domainstr:
debug_level[d] += 1
break
else:
warn(None, "Logging domain %s is not valid, ignoring" % domainarg)
warn(None, "Logging domain %s is not valid, ignoring" % domainstr)
#
# Message handling functions
#
def debug(level, msgdomain, msg):
warnings.warn("bb.msg.debug will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
level = logging.DEBUG - (level - 1)
def debug(level, msgdomain, msg, fn = None):
if not msgdomain:
logger.debug(level, msg)
else:
loggers[msgdomain].debug(level, msg)
msgdomain = domain.Default
def plain(msg):
warnings.warn("bb.msg.plain will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
logger.plain(msg)
if debug_level[msgdomain] >= level:
bb.event.fire(MsgDebug(msg), None)
if bb.event.useStdout:
print('DEBUG: %s' % (msg))
def note(level, msgdomain, msg):
warnings.warn("bb.msg.note will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
if level > 1:
if msgdomain:
logger.verbose(msg)
else:
loggers[msgdomain].verbose(msg)
else:
if msgdomain:
logger.info(msg)
else:
loggers[msgdomain].info(msg)
def warn(msgdomain, msg):
warnings.warn("bb.msg.warn will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
def note(level, msgdomain, msg, fn = None):
if not msgdomain:
logger.warn(msg)
else:
loggers[msgdomain].warn(msg)
msgdomain = domain.Default
def error(msgdomain, msg):
warnings.warn("bb.msg.error will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
if not msgdomain:
logger.error(msg)
else:
loggers[msgdomain].error(msg)
if level == 1 or verbose or debug_level[msgdomain] >= 1:
bb.event.fire(MsgNote(msg), None)
if bb.event.useStdout:
print('NOTE: %s' % (msg))
def fatal(msgdomain, msg):
warnings.warn("bb.msg.fatal will soon be deprecated in favor of raising appropriate exceptions",
PendingDeprecationWarning, stacklevel=2)
if not msgdomain:
logger.critical(msg)
else:
loggers[msgdomain].critical(msg)
def warn(msgdomain, msg, fn = None):
bb.event.fire(MsgWarn(msg), None)
if bb.event.useStdout:
print('WARNING: %s' % (msg))
def error(msgdomain, msg, fn = None):
bb.event.fire(MsgError(msg), None)
if bb.event.useStdout:
print('ERROR: %s' % (msg))
def fatal(msgdomain, msg, fn = None):
bb.event.fire(MsgFatal(msg), None)
if bb.event.useStdout:
print('FATAL: %s' % (msg))
sys.exit(1)
def plain(msg, fn = None):
bb.event.fire(MsgPlain(msg), None)
if bb.event.useStdout:
print(msg)


@@ -26,15 +26,10 @@ File parsers for the BitBake build tools.
handlers = []
import os
import stat
import logging
import bb
import bb, os
import bb.utils
import bb.siggen
logger = logging.getLogger("BitBake.Parsing")
class ParseError(Exception):
"""Exception raised when parsing fails"""
@@ -44,19 +39,19 @@ class SkipPackage(Exception):
__mtime_cache = {}
def cached_mtime(f):
if f not in __mtime_cache:
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
__mtime_cache[f] = os.stat(f)[8]
return __mtime_cache[f]
def cached_mtime_noerror(f):
if f not in __mtime_cache:
try:
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
__mtime_cache[f] = os.stat(f)[8]
except OSError:
return 0
return __mtime_cache[f]
def update_mtime(f):
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
__mtime_cache[f] = os.stat(f)[8]
return __mtime_cache[f]
def mark_dependency(d, f):
@@ -85,18 +80,18 @@ def init(fn, data):
if h['supports'](fn):
return h['init'](data)
def init_parser(d):
bb.parse.siggen = bb.siggen.init(d)
def init_parser(d, dumpsigs):
bb.parse.siggen = bb.siggen.init(d, dumpsigs)
def resolve_file(fn, d):
if not os.path.isabs(fn):
bbpath = bb.data.getVar("BBPATH", d, True)
newfn = bb.utils.which(bbpath, fn)
newfn = bb.which(bbpath, fn)
if not newfn:
raise IOError("file %s not found in %s" % (fn, bbpath))
fn = newfn
logger.debug(2, "LOAD %s", fn)
bb.msg.debug(2, bb.msg.domain.Parsing, "LOAD %s" % fn)
return fn
# Used by OpenEmbedded metadata


@@ -23,14 +23,11 @@
from __future__ import absolute_import
from future_builtins import filter
import re
import string
import logging
import bb
import itertools
import bb, re, string
from bb import methodpool
from bb.parse import logger
import itertools
__word__ = re.compile(r"\S+")
__parsed_methods__ = bb.methodpool.get_parsed_dict()
_bbversions_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")
@@ -40,14 +37,13 @@ class StatementGroup(list):
statement.eval(data)
class AstNode(object):
def __init__(self, filename, lineno):
self.filename = filename
self.lineno = lineno
pass
class IncludeNode(AstNode):
def __init__(self, filename, lineno, what_file, force):
AstNode.__init__(self, filename, lineno)
def __init__(self, what_file, fn, lineno, force):
self.what_file = what_file
self.from_fn = fn
self.from_lineno = lineno
self.force = force
def eval(self, data):
@@ -55,17 +51,16 @@ class IncludeNode(AstNode):
Include the file and evaluate the statements
"""
s = bb.data.expand(self.what_file, data)
logger.debug(2, "CONF %s:%s: including %s", self.filename, self.lineno, s)
bb.msg.debug(3, bb.msg.domain.Parsing, "CONF %s:%d: including %s" % (self.from_fn, self.from_lineno, s))
# TODO: Cache those includes... maybe not here though
if self.force:
bb.parse.ConfHandler.include(self.filename, s, data, "include required")
bb.parse.ConfHandler.include(self.from_fn, s, data, "include required")
else:
bb.parse.ConfHandler.include(self.filename, s, data, False)
bb.parse.ConfHandler.include(self.from_fn, s, data, False)
class ExportNode(AstNode):
def __init__(self, filename, lineno, var):
AstNode.__init__(self, filename, lineno)
def __init__(self, var):
self.var = var
def eval(self, data):
@@ -78,8 +73,7 @@ class DataNode(AstNode):
this need to be re-evaluated... we might be able to do
that faster with multiple classes.
"""
def __init__(self, filename, lineno, groupd):
AstNode.__init__(self, filename, lineno)
def __init__(self, groupd):
self.groupd = groupd
def getFunc(self, key, data):
@@ -115,22 +109,26 @@ class DataNode(AstNode):
if 'flag' in groupd and groupd['flag'] != None:
bb.data.setVarFlag(key, groupd['flag'], val, data)
elif groupd["lazyques"]:
assigned = bb.data.getVar("__lazy_assigned", data) or []
assigned.append(key)
bb.data.setVar("__lazy_assigned", assigned, data)
bb.data.setVarFlag(key, "defaultval", val, data)
else:
bb.data.setVar(key, val, data)
class MethodNode(AstNode):
def __init__(self, filename, lineno, func_name, body):
AstNode.__init__(self, filename, lineno)
class MethodNode:
def __init__(self, func_name, body, lineno, fn):
self.func_name = func_name
self.body = body
self.fn = fn
self.lineno = lineno
def eval(self, data):
if self.func_name == "__anonymous":
funcname = ("__anon_%s_%s" % (self.lineno, self.filename.translate(string.maketrans('/.+-', '____'))))
funcname = ("__anon_%s_%s" % (self.lineno, self.fn.translate(string.maketrans('/.+-', '____'))))
if not funcname in bb.methodpool._parsed_fns:
text = "def %s(d):\n" % (funcname) + '\n'.join(self.body)
bb.methodpool.insert_method(funcname, text, self.filename)
bb.methodpool.insert_method(funcname, text, self.fn)
anonfuncs = bb.data.getVar('__BBANONFUNCS', data) or []
anonfuncs.append(funcname)
bb.data.setVar('__BBANONFUNCS', anonfuncs, data)
@@ -139,26 +137,25 @@ class MethodNode(AstNode):
bb.data.setVar(self.func_name, '\n'.join(self.body), data)
class PythonMethodNode(AstNode):
def __init__(self, filename, lineno, function, define, body):
AstNode.__init__(self, filename, lineno)
self.function = function
self.define = define
def __init__(self, funcname, root, body, fn):
self.func_name = funcname
self.root = root
self.body = body
self.fn = fn
def eval(self, data):
# Note we will add root to parsedmethods after having parse
# 'this' file. This means we will not parse methods from
# bb classes twice
text = '\n'.join(self.body)
if not bb.methodpool.parsed_module(self.define):
bb.methodpool.insert_method(self.define, text, self.filename)
bb.data.setVarFlag(self.function, "func", 1, data)
bb.data.setVarFlag(self.function, "python", 1, data)
bb.data.setVar(self.function, text, data)
if not bb.methodpool.parsed_module(self.root):
bb.methodpool.insert_method(self.root, text, self.fn)
bb.data.setVarFlag(self.func_name, "func", 1, data)
bb.data.setVarFlag(self.func_name, "python", 1, data)
bb.data.setVar(self.func_name, text, data)
class MethodFlagsNode(AstNode):
def __init__(self, filename, lineno, key, m):
AstNode.__init__(self, filename, lineno)
def __init__(self, key, m):
self.key = key
self.m = m
@@ -178,9 +175,8 @@ class MethodFlagsNode(AstNode):
bb.data.delVarFlag(self.key, "fakeroot", data)
class ExportFuncsNode(AstNode):
def __init__(self, filename, lineno, fns, classes):
AstNode.__init__(self, filename, lineno)
self.n = fns.split()
def __init__(self, fns, classes):
self.n = __word__.findall(fns)
self.classes = classes
def eval(self, data):
@@ -218,8 +214,7 @@ class ExportFuncsNode(AstNode):
bb.data.setVarFlag(var, 'export_func', '1', data)
class AddTaskNode(AstNode):
def __init__(self, filename, lineno, func, before, after):
AstNode.__init__(self, filename, lineno)
def __init__(self, func, before, after):
self.func = func
self.before = before
self.after = after
@@ -250,9 +245,8 @@ class AddTaskNode(AstNode):
bb.data.setVarFlag(entry, "deps", [var] + existing, data)
class BBHandlerNode(AstNode):
def __init__(self, filename, lineno, fns):
AstNode.__init__(self, filename, lineno)
self.hs = fns.split()
def __init__(self, fns):
self.hs = __word__.findall(fns)
def eval(self, data):
bbhands = bb.data.getVar('__BBHANDLERS', data) or []
@@ -262,51 +256,56 @@ class BBHandlerNode(AstNode):
bb.data.setVar('__BBHANDLERS', bbhands, data)
class InheritNode(AstNode):
def __init__(self, filename, lineno, classes):
AstNode.__init__(self, filename, lineno)
self.classes = classes
def __init__(self, files):
self.n = __word__.findall(files)
def eval(self, data):
bb.parse.BBHandler.inherit(self.classes, data)
bb.parse.BBHandler.inherit(self.n, data)
def handleInclude(statements, filename, lineno, m, force):
statements.append(IncludeNode(filename, lineno, m.group(1), force))
def handleInclude(statements, m, fn, lineno, force):
statements.append(IncludeNode(m.group(1), fn, lineno, force))
def handleExport(statements, filename, lineno, m):
statements.append(ExportNode(filename, lineno, m.group(1)))
def handleExport(statements, m):
statements.append(ExportNode(m.group(1)))
def handleData(statements, filename, lineno, groupd):
statements.append(DataNode(filename, lineno, groupd))
def handleData(statements, groupd):
statements.append(DataNode(groupd))
def handleMethod(statements, filename, lineno, func_name, body):
statements.append(MethodNode(filename, lineno, func_name, body))
def handleMethod(statements, func_name, lineno, fn, body):
statements.append(MethodNode(func_name, body, lineno, fn))
def handlePythonMethod(statements, filename, lineno, funcname, root, body):
statements.append(PythonMethodNode(filename, lineno, funcname, root, body))
def handlePythonMethod(statements, funcname, root, body, fn):
statements.append(PythonMethodNode(funcname, root, body, fn))
def handleMethodFlags(statements, filename, lineno, key, m):
statements.append(MethodFlagsNode(filename, lineno, key, m))
def handleMethodFlags(statements, key, m):
statements.append(MethodFlagsNode(key, m))
def handleExportFuncs(statements, filename, lineno, m, classes):
statements.append(ExportFuncsNode(filename, lineno, m.group(1), classes))
def handleExportFuncs(statements, m, classes):
statements.append(ExportFuncsNode(m.group(1), classes))
def handleAddTask(statements, filename, lineno, m):
def handleAddTask(statements, m):
func = m.group("func")
before = m.group("before")
after = m.group("after")
if func is None:
return
statements.append(AddTaskNode(filename, lineno, func, before, after))
statements.append(AddTaskNode(func, before, after))
def handleBBHandlers(statements, filename, lineno, m):
statements.append(BBHandlerNode(filename, lineno, m.group(1)))
def handleBBHandlers(statements, m):
statements.append(BBHandlerNode(m.group(1)))
def handleInherit(statements, filename, lineno, m):
classes = m.group(1)
statements.append(InheritNode(filename, lineno, classes.split()))
def handleInherit(statements, m):
files = m.group(1)
n = __word__.findall(files)
statements.append(InheritNode(m.group(1)))
def finalize(fn, d, variant = None):
for lazykey in bb.data.getVar("__lazy_assigned", d) or ():
if bb.data.getVar(lazykey, d) is None:
val = bb.data.getVarFlag(lazykey, "defaultval", d)
bb.data.setVar(lazykey, val, d)
bb.data.expandKeys(d)
bb.data.update_data(d)
code = []
@@ -366,7 +365,7 @@ def _expand_versions(versions):
def multi_finalize(fn, d):
appends = (d.getVar("__BBAPPEND", True) or "").split()
for append in appends:
logger.debug(2, "Appending .bbappend file %s to %s", append, fn)
bb.msg.debug(2, bb.msg.domain.Parsing, "Appending .bbappend file " + append + " to " + fn)
bb.parse.BBHandler.handle(append, d, True)
safe_d = d


@@ -27,12 +27,11 @@
from __future__ import absolute_import
import re, bb, os
import logging
import bb.build, bb.utils
import bb.fetch, bb.build, bb.utils
from bb import data
from . import ConfHandler
from .. import resolve_file, ast, logger
from .. import resolve_file, ast
from .ConfHandler import include, init
# For compatibility
@@ -65,8 +64,7 @@ IN_PYTHON_EOF = -9999999999999
def supports(fn, d):
"""Return True if fn has a supported extension"""
return os.path.splitext(fn)[-1] in [".bb", ".bbclass", ".inc"]
return fn[-3:] == ".bb" or fn[-8:] == ".bbclass" or fn[-4:] == ".inc"
def inherit(files, d):
__inherit_cache = data.getVar('__inherit_cache', d) or []
@@ -74,11 +72,11 @@ def inherit(files, d):
lineno = 0
for file in files:
file = data.expand(file, d)
if not os.path.isabs(file) and not file.endswith(".bbclass"):
if file[0] != "/" and file[-8:] != ".bbclass":
file = os.path.join('classes', '%s.bbclass' % file)
if not file in __inherit_cache:
logger.log(logging.DEBUG -1, "BB %s:%d: inheriting %s", fn, lineno, file)
bb.msg.debug(2, bb.msg.domain.Parsing, "BB %s:%d: inheriting %s" % (fn, lineno, file))
__inherit_cache.append( file )
data.setVar('__inherit_cache', __inherit_cache, d)
include(fn, file, d, "inherit")
@@ -117,12 +115,12 @@ def handle(fn, d, include):
if include == 0:
logger.debug(2, "BB %s: handle(data)", fn)
bb.msg.debug(2, bb.msg.domain.Parsing, "BB " + fn + ": handle(data)")
else:
logger.debug(2, "BB %s: handle(data, include)", fn)
bb.msg.debug(2, bb.msg.domain.Parsing, "BB " + fn + ": handle(data, include)")
base_name = os.path.basename(fn)
(root, ext) = os.path.splitext(base_name)
(root, ext) = os.path.splitext(os.path.basename(fn))
base_name = "%s%s" % (root, ext)
init(d)
if ext == ".bbclass":
@@ -172,7 +170,7 @@ def feeder(lineno, s, fn, root, statements):
if __infunc__:
if s == '}':
__body__.append('')
ast.handleMethod(statements, fn, lineno, __infunc__, __body__)
ast.handleMethod(statements, __infunc__, lineno, fn, __body__)
__infunc__ = ""
__body__ = []
else:
@@ -185,22 +183,16 @@ def feeder(lineno, s, fn, root, statements):
__body__.append(s)
return
else:
ast.handlePythonMethod(statements, fn, lineno, __inpython__,
root, __body__)
ast.handlePythonMethod(statements, __inpython__, root, __body__, fn)
__body__ = []
__inpython__ = False
if lineno == IN_PYTHON_EOF:
return
# fall through
# Skip empty lines
if s == '':
return
if s[0] == '#':
if len(__residue__) != 0 and __residue__[0][0] != "#":
bb.error("There is a comment on line %s of file %s (%s) which is in the middle of a multiline expression.\nBitbake used to ignore these but no longer does so, please fix your metadata as errors are likely as a result of this change." % (lineno, fn, s))
if s == '' or s[0] == '#': return # skip comments and empty lines
if s[-1] == '\\':
__residue__.append(s[:-1])
@@ -209,14 +201,10 @@ def feeder(lineno, s, fn, root, statements):
s = "".join(__residue__) + s
__residue__ = []
# Skip comments
if s[0] == '#':
return
m = __func_start_regexp__.match(s)
if m:
__infunc__ = m.group("func") or "__anonymous"
ast.handleMethodFlags(statements, fn, lineno, __infunc__, m)
ast.handleMethodFlags(statements, __infunc__, m)
return
m = __def_regexp__.match(s)
@@ -228,22 +216,22 @@ def feeder(lineno, s, fn, root, statements):
m = __export_func_regexp__.match(s)
if m:
ast.handleExportFuncs(statements, fn, lineno, m, classes)
ast.handleExportFuncs(statements, m, classes)
return
m = __addtask_regexp__.match(s)
if m:
ast.handleAddTask(statements, fn, lineno, m)
ast.handleAddTask(statements, m)
return
m = __addhandler_regexp__.match(s)
if m:
ast.handleBBHandlers(statements, fn, lineno, m)
ast.handleBBHandlers(statements, m)
return
m = __inherit_regexp__.match(s)
if m:
ast.handleInherit(statements, fn, lineno, m)
ast.handleInherit(statements, m)
return
return ConfHandler.feeder(lineno, s, fn, statements)


@@ -25,9 +25,8 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import re, bb.data, os
import logging
import bb.utils
from bb.parse import ParseError, resolve_file, ast, logger
from bb.parse import ParseError, resolve_file, ast
#__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}]+)\s*(?P<colon>:)?(?P<ques>\?)?=\s*(?P<apo>['\"]?)(?P<value>.*)(?P=apo)$")
__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<lazyques>\?\?=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?P<apo>['\"]?)(?P<value>.*)(?P=apo)$")
@@ -46,10 +45,10 @@ def supports(fn, d):
def include(oldfn, fn, data, error_out):
"""
    error_out If True a ParseError will be raised if the to be included
    config-files could not be included.
    """
    if oldfn == fn: # prevent infinite recursion
return None
import bb
@@ -69,7 +68,7 @@ def include(oldfn, fn, data, error_out):
except IOError:
if error_out:
raise ParseError("Could not %(error_out)s file %(fn)s" % vars() )
logger.debug(2, "CONF file '%s' not found", fn)
bb.msg.debug(2, bb.msg.domain.Parsing, "CONF file '%s' not found" % fn)
def handle(fn, data, include):
init(data)
@@ -113,22 +112,22 @@ def feeder(lineno, s, fn, statements):
m = __config_regexp__.match(s)
if m:
groupd = m.groupdict()
ast.handleData(statements, fn, lineno, groupd)
ast.handleData(statements, groupd)
return
m = __include_regexp__.match(s)
if m:
ast.handleInclude(statements, fn, lineno, m, False)
ast.handleInclude(statements, m, fn, lineno, False)
return
m = __require_regexp__.match(s)
if m:
ast.handleInclude(statements, fn, lineno, m, True)
ast.handleInclude(statements, m, fn, lineno, True)
return
m = __export_regexp__.match(s)
if m:
ast.handleExport(statements, fn, lineno, m)
ast.handleExport(statements, m)
return
    raise ParseError("%s:%d: unparsed line: '%s'" % (fn, lineno, s))


@@ -1,12 +1,6 @@
"""BitBake Persistent Data Store
Used to store data in a central location such that other threads/tasks can
access them at some future date. Acts as a convenience wrapper around sqlite,
currently, providing a key/value store accessed by 'domain'.
"""
# BitBake Persistent Data Store
#
# Copyright (C) 2007 Richard Purdie
# Copyright (C) 2010 Chris Larson <chris_larson@mentor.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -21,174 +15,119 @@ currently, providing a key/value store accessed by 'domain'.
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import collections
import logging
import os.path
import sys
import warnings
import bb.msg, bb.data, bb.utils
import bb, os
import bb.utils
try:
import sqlite3
except ImportError:
from pysqlite2 import dbapi2 as sqlite3
try:
from pysqlite2 import dbapi2 as sqlite3
except ImportError:
bb.msg.fatal(bb.msg.domain.PersistData, "Importing sqlite3 and pysqlite2 failed, please install one of them. Python 2.5 or a 'python-pysqlite2' like package is likely to be what you need.")
sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
raise Exception("sqlite3 version 3.3.0 or later is required.")
bb.msg.fatal(bb.msg.domain.PersistData, "sqlite3 version 3.3.0 or later is required.")
class PersistData:
"""
BitBake Persistent Data Store
logger = logging.getLogger("BitBake.PersistData")
Used to store data in a central location such that other threads/tasks can
access them at some future date.
The "domain" is used as a key to isolate each data pool and in this
implementation corresponds to an SQL table. The SQL table consists of a
simple key and value pair.
class SQLTable(collections.MutableMapping):
"""Object representing a table/domain in the database"""
def __init__(self, cursor, table):
self.cursor = cursor
self.table = table
Why sqlite? It handles all the locking issues for us.
"""
def __init__(self, d, persistent_database_connection):
if "connection" in persistent_database_connection:
self.cursor = persistent_database_connection["connection"].cursor()
return
self.cachedir = bb.data.getVar("PERSISTENT_DIR", d, True) or bb.data.getVar("CACHE", d, True)
if self.cachedir in [None, '']:
bb.msg.fatal(bb.msg.domain.PersistData, "Please set the 'PERSISTENT_DIR' or 'CACHE' variable.")
try:
os.stat(self.cachedir)
except OSError:
bb.utils.mkdirhier(self.cachedir)
self._execute("CREATE TABLE IF NOT EXISTS %s(key TEXT, value TEXT);"
% table)
self.cachefile = os.path.join(self.cachedir, "bb_persist_data.sqlite3")
bb.msg.debug(1, bb.msg.domain.PersistData, "Using '%s' as the persistent data cache" % self.cachefile)
def _execute(self, *query):
"""Execute a query, waiting to acquire a lock if necessary"""
count = 0
while True:
try:
return self.cursor.execute(*query)
except sqlite3.OperationalError as exc:
if 'database is locked' in str(exc) and count < 500:
count = count + 1
continue
raise
def __getitem__(self, key):
data = self._execute("SELECT * from %s where key=?;" %
self.table, [key])
for row in data:
return row[1]
def __delitem__(self, key):
self._execute("DELETE from %s where key=?;" % self.table, [key])
def __setitem__(self, key, value):
data = self._execute("SELECT * from %s where key=?;" %
self.table, [key])
exists = len(list(data))
if exists:
self._execute("UPDATE %s SET value=? WHERE key=?;" % self.table,
[value, key])
else:
self._execute("INSERT into %s(key, value) values (?, ?);" %
self.table, [key, value])
def __contains__(self, key):
return key in set(self)
def __len__(self):
data = self._execute("SELECT COUNT(key) FROM %s;" % self.table)
for row in data:
return row[0]
def __iter__(self):
data = self._execute("SELECT key FROM %s;" % self.table)
for row in data:
yield row[0]
def iteritems(self):
data = self._execute("SELECT * FROM %s;" % self.table)
for row in data:
yield row[0], row[1]
def itervalues(self):
data = self._execute("SELECT value FROM %s;" % self.table)
for row in data:
yield row[0]
class SQLData(object):
"""Object representing the persistent data"""
def __init__(self, filename):
bb.utils.mkdirhier(os.path.dirname(filename))
self.filename = filename
self.connection = sqlite3.connect(filename, timeout=30,
isolation_level=None)
self.cursor = self.connection.cursor()
self._tables = {}
def __getitem__(self, table):
if not isinstance(table, basestring):
raise TypeError("table argument must be a string, not '%s'" %
type(table))
if table in self._tables:
return self._tables[table]
else:
tableobj = self._tables[table] = SQLTable(self.cursor, table)
return tableobj
def __delitem__(self, table):
if table in self._tables:
del self._tables[table]
self.cursor.execute("DROP TABLE IF EXISTS %s;" % table)
class PersistData(object):
"""Deprecated representation of the bitbake persistent data store"""
def __init__(self, d):
warnings.warn("Use of PersistData will be deprecated in the future",
category=PendingDeprecationWarning,
stacklevel=2)
self.data = persist(d)
logger.debug(1, "Using '%s' as the persistent data cache",
self.data.filename)
connection = sqlite3.connect(self.cachefile, timeout=5, isolation_level=None)
persistent_database_connection["connection"] = connection
self.cursor = persistent_database_connection["connection"].cursor()
def addDomain(self, domain):
"""
Add a domain (pending deprecation)
Should be called before any domain is used
Creates it if it doesn't exist.
"""
return self.data[domain]
self._execute("CREATE TABLE IF NOT EXISTS %s(key TEXT, value TEXT);" % domain)
def delDomain(self, domain):
"""
Removes a domain and all the data it contains
"""
del self.data[domain]
self._execute("DROP TABLE IF EXISTS %s;" % domain)
def getKeyValues(self, domain):
"""
Return a list of key + value pairs for a domain
"""
return self.data[domain].items()
ret = {}
data = self._execute("SELECT key, value from %s;" % domain)
for row in data:
ret[str(row[0])] = str(row[1])
return ret
def getValue(self, domain, key):
"""
Return the value of a key for a domain
"""
return self.data[domain][key]
data = self._execute("SELECT * from %s where key=?;" % domain, [key])
for row in data:
return row[1]
def setValue(self, domain, key, value):
"""
Sets the value of a key for a domain
"""
self.data[domain][key] = value
data = self._execute("SELECT * from %s where key=?;" % domain, [key])
rows = 0
for row in data:
rows = rows + 1
if rows:
self._execute("UPDATE %s SET value=? WHERE key=?;" % domain, [value, key])
else:
self._execute("INSERT into %s(key, value) values (?, ?);" % domain, [key, value])
def delValue(self, domain, key):
"""
Deletes a key/value pair
"""
del self.data[domain][key]
self._execute("DELETE from %s where key=?;" % domain, [key])
def persist(d):
"""Convenience factory for construction of SQLData based upon metadata"""
cachedir = (bb.data.getVar("PERSISTENT_DIR", d, True) or
bb.data.getVar("CACHE", d, True))
if not cachedir:
logger.critical("Please set the 'PERSISTENT_DIR' or 'CACHE' variable")
sys.exit(1)
cachefile = os.path.join(cachedir, "bb_persist_data.sqlite3")
return SQLData(cachefile)
#
# We wrap the sqlite execute calls as on contended machines or single threaded
# systems we can have multiple processes trying to access the DB at once and it seems
# sqlite sometimes doesn't wait for the timeout. We therefore loop but put in an
# emergency brake too
#
def _execute(self, *query):
count = 0
while True:
try:
ret = self.cursor.execute(*query)
#print "Had to retry %s times" % count
return ret
except sqlite3.OperationalError as e:
if 'database is locked' in str(e) and count < 500:
count = count + 1
continue
raise
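# Illustrative usage of the new persist() API sketched above (hypothetical key):
#   pd = persist(d)                              # returns an SQLData instance
#   pd["BB_URI_HEADREVS"]["example-url"] = "42"  # SQLTable behaves like a dict
#   print(pd["BB_URI_HEADREVS"]["example-url"])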


@@ -1,109 +0,0 @@
import logging
import signal
import subprocess
logger = logging.getLogger('BitBake.Process')
def subprocess_setup():
# Python installs a SIGPIPE handler by default. This is usually not what
# non-Python subprocesses expect.
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
class CmdError(RuntimeError):
def __init__(self, command, msg=None):
self.command = command
self.msg = msg
def __str__(self):
if not isinstance(self.command, basestring):
cmd = subprocess.list2cmdline(self.command)
else:
cmd = self.command
msg = "Execution of '%s' failed" % cmd
if self.msg:
msg += ': %s' % self.msg
return msg
class NotFoundError(CmdError):
def __str__(self):
return CmdError.__str__(self) + ": command not found"
class ExecutionError(CmdError):
def __init__(self, command, exitcode, stdout = None, stderr = None):
CmdError.__init__(self, command)
self.exitcode = exitcode
self.stdout = stdout
self.stderr = stderr
def __str__(self):
message = ""
if self.stderr:
message += self.stderr
if self.stdout:
message += self.stdout
if message:
message = ":\n" + message
return (CmdError.__str__(self) +
" with exit code %s" % self.exitcode + message)
class Popen(subprocess.Popen):
defaults = {
"close_fds": True,
"preexec_fn": subprocess_setup,
"stdout": subprocess.PIPE,
"stderr": subprocess.STDOUT,
"stdin": subprocess.PIPE,
"shell": False,
}
def __init__(self, *args, **kwargs):
options = dict(self.defaults)
options.update(kwargs)
subprocess.Popen.__init__(self, *args, **options)
def _logged_communicate(pipe, log, input):
if pipe.stdin:
if input is not None:
pipe.stdin.write(input)
pipe.stdin.close()
bufsize = 512
outdata, errdata = [], []
while pipe.poll() is None:
if pipe.stdout is not None:
data = pipe.stdout.read(bufsize)
if data is not None:
outdata.append(data)
log.write(data)
if pipe.stderr is not None:
data = pipe.stderr.read(bufsize)
if data is not None:
errdata.append(data)
log.write(data)
return ''.join(outdata), ''.join(errdata)
def run(cmd, input=None, log=None, **options):
"""Convenience function to run a command and return its output, raising an
exception when the command fails"""
    if isinstance(cmd, basestring) and "shell" not in options:
options["shell"] = True
try:
pipe = Popen(cmd, **options)
except OSError, exc:
if exc.errno == 2:
raise NotFoundError(cmd)
else:
raise CmdError(cmd, exc)
if log:
stdout, stderr = _logged_communicate(pipe, log, input)
else:
stdout, stderr = pipe.communicate(input)
if pipe.returncode != 0:
raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
return stdout, stderr
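# Illustrative usage (hypothetical commands):
#   stdout, stderr = run(["echo", "hello"])     # list form, no shell
#   stdout, stderr = run("echo hello | wc -c")  # a string runs via the shell
# A non-zero exit status raises ExecutionError carrying the captured output.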


@@ -22,12 +22,9 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import re
import logging
from bb import data, utils
import bb
logger = logging.getLogger("BitBake.Provider")
class NoProvider(Exception):
"""Exception raised when no provider of a build dependency can be found"""
@@ -123,9 +120,9 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
if item:
itemstr = " (for item %s)" % item
if preferred_file is None:
logger.info("preferred version %s of %s not available%s", pv_str, pn, itemstr)
bb.msg.note(1, bb.msg.domain.Provider, "preferred version %s of %s not available%s" % (pv_str, pn, itemstr))
else:
logger.debug(1, "selecting %s as PREFERRED_VERSION %s of package %s%s", preferred_file, pv_str, pn, itemstr)
bb.msg.debug(1, bb.msg.domain.Provider, "selecting %s as PREFERRED_VERSION %s of package %s%s" % (preferred_file, pv_str, pn, itemstr))
return (preferred_ver, preferred_file)
@@ -192,7 +189,7 @@ def _filterProviders(providers, item, cfgData, dataCache):
pkg_pn[pn] = []
pkg_pn[pn].append(p)
logger.debug(1, "providers for %s are: %s", item, pkg_pn.keys())
bb.msg.debug(1, bb.msg.domain.Provider, "providers for %s are: %s" % (item, pkg_pn.keys()))
# First add PREFERRED_VERSIONS
for pn in pkg_pn:
@@ -209,7 +206,7 @@ def _filterProviders(providers, item, cfgData, dataCache):
eligible.append(preferred_versions[pn][1])
if len(eligible) == 0:
logger.error("no eligible providers for %s", item)
bb.msg.error(bb.msg.domain.Provider, "no eligible providers for %s" % item)
return 0
# If pn == item, give it a slight default preference
@@ -245,13 +242,13 @@ def filterProviders(providers, item, cfgData, dataCache):
for p in eligible:
pn = dataCache.pkg_fn[p]
if dataCache.preferred[item] == pn:
logger.verbose("selecting %s to satisfy %s due to PREFERRED_PROVIDERS", pn, item)
bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy %s due to PREFERRED_PROVIDERS" % (pn, item))
eligible.remove(p)
eligible = [p] + eligible
foundUnique = True
break
logger.debug(1, "sorted providers for %s are: %s", item, eligible)
bb.msg.debug(1, bb.msg.domain.Provider, "sorted providers for %s are: %s" % (item, eligible))
return eligible, foundUnique
@@ -267,31 +264,27 @@ def filterProvidersRunTime(providers, item, cfgData, dataCache):
# Should use dataCache.preferred here?
preferred = []
preferred_vars = []
pns = {}
for p in eligible:
pns[dataCache.pkg_fn[p]] = p
for p in eligible:
pn = dataCache.pkg_fn[p]
provides = dataCache.pn_provides[pn]
for provide in provides:
bb.msg.note(2, bb.msg.domain.Provider, "checking PREFERRED_PROVIDER_%s" % (provide))
prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % provide, cfgData, 1)
logger.verbose("checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
if prefervar in pns and pns[prefervar] not in preferred:
if prefervar == pn:
var = "PREFERRED_PROVIDER_%s = %s" % (provide, prefervar)
logger.verbose("selecting %s to satisfy runtime %s due to %s", prefervar, item, var)
bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy runtime %s due to %s" % (pn, item, var))
preferred_vars.append(var)
pref = pns[prefervar]
eligible.remove(pref)
eligible = [pref] + eligible
preferred.append(pref)
eligible.remove(p)
eligible = [p] + eligible
preferred.append(p)
break
numberPreferred = len(preferred)
if numberPreferred > 1:
logger.error("Trying to resolve runtime dependency %s resulted in conflicting PREFERRED_PROVIDER entries being found.\nThe providers found were: %s\nThe PREFERRED_PROVIDER entries resulting in this conflict were: %s", item, preferred, preferred_vars)
bb.msg.error(bb.msg.domain.Provider, "Conflicting PREFERRED_PROVIDER entries were found which resulted in an attempt to select multiple providers (%s) for runtime dependecy %s\nThe entries resulting in this conflict were: %s" % (preferred, item, preferred_vars))
logger.debug(1, "sorted providers for %s are: %s", item, eligible)
bb.msg.debug(1, bb.msg.domain.Provider, "sorted providers for %s are: %s" % (item, eligible))
return eligible, numberPreferred
@@ -321,7 +314,7 @@ def getRuntimeProviders(dataCache, rdepend):
try:
regexp = re.compile(pattern)
except:
logger.error("Error parsing regular expression '%s'", pattern)
bb.msg.error(bb.msg.domain.Provider, "Error parsing re expression: %s" % pattern)
raise
regexp_cache[pattern] = regexp
if regexp.match(rdepend):

File diff suppressed because it is too large.

View File

@@ -18,21 +18,28 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
This module implements a passthrough server for BitBake.
This module implements an xmlrpc server for BitBake.
Use register_idle_function() to add a function which the server
calls from within idle_commands when no requests are pending. Make sure
Use this by deriving a class from BitBakeXMLRPCServer and then adding
methods which you want to "export" via XMLRPC. If the methods have the
prefix xmlrpc_, then registering those functions will happen automatically,
if not, you need to call register_function.
Use register_idle_function() to add a function which the xmlrpc server
calls from within server_forever when no requests are pending. Make sure
that those functions are non-blocking or else you will introduce latency
in the server's main loop.
"""
import time
import bb
from bb.ui import uievent
import xmlrpclib
import pickle
import signal
DEBUG = False
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import inspect, select
class BitBakeServerCommands():
@@ -79,22 +86,18 @@ class BBUIEventQueue:
self.BBServer = BBServer
self.EventHandle = bb.event.register_UIHhandler(self)
def __popEvent(self):
if len(self.eventQueue) == 0:
return None
return self.eventQueue.pop(0)
def getEvent(self):
if len(self.eventQueue) == 0:
self.BBServer.idle_commands(0)
return self.__popEvent()
return None
return self.eventQueue.pop(0)
def waitEvent(self, delay):
event = self.__popEvent()
event = self.getEvent()
if event:
return event
self.BBServer.idle_commands(delay)
return self.__popEvent()
return self.getEvent()
def queue_event(self, event):
self.eventQueue.append(event)
@@ -102,10 +105,6 @@ class BBUIEventQueue:
def system_quit( self ):
bb.event.unregister_UIHhandler(self.EventHandle)
# Dummy signal handler to ensure we break out of sleep upon SIGCHLD
def chldhandler(signum, stackframe):
pass
class BitBakeServer():
# remove this when you're done with debugging
# allow_reuse_address = True
@@ -145,9 +144,7 @@ class BitBakeServer():
pass
if nextsleep is not None:
#print "Sleeping for %s (%s)" % (nextsleep, delay)
signal.signal(signal.SIGCHLD, chldhandler)
time.sleep(nextsleep)
signal.signal(signal.SIGCHLD, signal.SIG_DFL)
def server_exit(self):
# Tell idle functions we're exiting
@@ -163,22 +160,15 @@ class BitbakeServerInfo():
self.commands = server.commands
class BitBakeServerFork():
def __init__(self, cooker, server, serverinfo, logfile):
def __init__(self, serverinfo, command, logfile):
serverinfo.forkCommand = command
serverinfo.logfile = logfile
serverinfo.cooker = cooker
serverinfo.server = server
class BitbakeUILauch():
def launch(self, serverinfo, uifunc, *args):
return bb.cooker.server_main(serverinfo.cooker, uifunc, *args)
class BitBakeServerConnection():
def __init__(self, serverinfo):
self.server = serverinfo.server
self.connection = serverinfo.commands
self.events = bb.server.none.BBUIEventQueue(self.server)
for event in bb.event.ui_queue:
self.events.queue_event(event)
def terminate(self):
try:

View File

@@ -45,82 +45,6 @@ if sys.hexversion < 0x020600F0:
print("Sorry, python 2.6 or later is required for bitbake's XMLRPC mode")
sys.exit(1)
##
# The xmlrpclib.Transport class has undergone various changes in Python 2.7
# which break BitBake's XMLRPC implementation.
# To work around this we subclass Transport and have a copy/paste of method
# implementations from Python 2.6.6's xmlrpclib.
#
# Upstream Python bug is #8194 (http://bugs.python.org/issue8194)
# This bug is relevant for Python 2.7.0 and 2.7.1 but was fixed for
# Python > 2.7.2
##
class BBTransport(xmlrpclib.Transport):
def request(self, host, handler, request_body, verbose=0):
h = self.make_connection(host)
if verbose:
h.set_debuglevel(1)
self.send_request(h, handler, request_body)
self.send_host(h, host)
self.send_user_agent(h)
self.send_content(h, request_body)
errcode, errmsg, headers = h.getreply()
if errcode != 200:
raise ProtocolError(
host + handler,
errcode, errmsg,
headers
)
self.verbose = verbose
try:
sock = h._conn.sock
except AttributeError:
sock = None
return self._parse_response(h.getfile(), sock)
def make_connection(self, host):
import httplib
host, extra_headers, x509 = self.get_host_info(host)
return httplib.HTTP(host)
def _parse_response(self, file, sock):
p, u = self.getparser()
while 1:
if sock:
response = sock.recv(1024)
else:
response = file.read(1024)
if not response:
break
if self.verbose:
print "body:", repr(response)
p.feed(response)
file.close()
p.close()
return u.close()
def _create_server(host, port):
# Python 2.7.0 and 2.7.1 have a buggy Transport implementation
# For those versions of Python, and only those versions, use our
# own copy/paste BBTransport class.
if (2, 7, 0) <= sys.version_info < (2, 7, 2):
t = BBTransport()
s = xmlrpclib.Server("http://%s:%d/" % (host, port), transport=t, allow_none=True)
else:
s = xmlrpclib.Server("http://%s:%d/" % (host, port), allow_none=True)
return s
class BitBakeServerCommands():
def __init__(self, server, cooker):
self.cooker = cooker
@@ -130,8 +54,7 @@ class BitBakeServerCommands():
"""
Register a remote UI Event Handler
"""
s = _create_server(host, port)
s = xmlrpclib.Server("http://%s:%d" % (host, port), allow_none=True)
return bb.event.register_UIHhandler(s)
def unregisterEventHandler(self, handlerNum):
@@ -176,7 +99,6 @@ class BitBakeServer(SimpleXMLRPCServer):
#self.register_introspection_functions()
commands = BitBakeServerCommands(self, cooker)
self.autoregister_all_functions(commands, "")
self.cooker = cooker
def autoregister_all_functions(self, context, prefix):
"""
@@ -194,9 +116,6 @@ class BitBakeServer(SimpleXMLRPCServer):
self._idlefuns[function] = data
def serve_forever(self):
bb.cooker.server_main(self.cooker, self._serve_forever)
def _serve_forever(self):
"""
Serve Requests. Overloaded to honor a quit command
"""
@@ -245,19 +164,13 @@ class BitbakeServerInfo():
self.port = server.port
class BitBakeServerFork():
def __init__(self, cooker, server, serverinfo, logfile):
daemonize.createDaemon(server.serve_forever, logfile)
class BitbakeUILauch():
def launch(self, serverinfo, uifunc, *args):
return uifunc(*args)
def __init__(self, serverinfo, command, logfile):
daemonize.createDaemon(command, logfile)
class BitBakeServerConnection():
def __init__(self, serverinfo):
self.connection = _create_server(serverinfo.host, serverinfo.port)
self.connection = xmlrpclib.Server("http://%s:%s" % (serverinfo.host, serverinfo.port), allow_none=True)
self.events = uievent.BBUIEventQueue(self.connection)
for event in bb.event.ui_queue:
self.events.queue_event(event)
def terminate(self):
# Don't wait for server indefinitely

View File

@@ -180,9 +180,11 @@ class BitBakeShellCommands:
last_exception = Providers.NoProvider
except runqueue.TaskFailure as fnids:
for fnid in fnids:
print("ERROR: '%s' failed" % td.fn_index[fnid])
last_exception = runqueue.TaskFailure
except build.FuncFailed as e:
except build.EventException as e:
print("ERROR: Couldn't build '%s'" % names)
last_exception = e
@@ -245,7 +247,7 @@ class BitBakeShellCommands:
cooker.buildFile(bf, cmd)
except parse.ParseError:
print("ERROR: Unable to open or parse '%s'" % bf)
except build.FuncFailed as e:
except build.EventException as e:
print("ERROR: Couldn't build '%s'" % name)
last_exception = e
@@ -272,7 +274,9 @@ class BitBakeShellCommands:
bbfile = params[0]
print("SHELL: Parsing '%s'" % bbfile)
parse.update_mtime( bbfile )
cooker.parser.reparse(bbfile)
cooker.bb_cache.cacheValidUpdate(bbfile)
fromCache = cooker.bb_cache.loadData(bbfile, cooker.configuration.data, cooker.status)
cooker.bb_cache.sync()
if False: #fromCache:
print("SHELL: File has not been updated, not reparsing")
else:
@@ -441,7 +445,7 @@ SRC_URI = ""
name, var = params
bbfile = self._findProvider( name )
if bbfile is not None:
the_data = cache.Cache.loadDataFull(bbfile, cooker.configuration.data)
the_data = cooker.bb_cache.loadDataFull(bbfile, cooker.configuration.data)
value = the_data.getVar( var, 1 )
print(value)
else:

View File

@@ -1,64 +1,50 @@
import hashlib
import logging
import re
import bb.data
logger = logging.getLogger('BitBake.SigGen')
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info('Importing cPickle failed. Falling back to a very slow implementation.')
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")
def init(d):
def init(d, dumpsigs):
siggens = [obj for obj in globals().itervalues()
if type(obj) is type and issubclass(obj, SignatureGenerator)]
desired = bb.data.getVar("BB_SIGNATURE_HANDLER", d, True) or "noop"
for sg in siggens:
if desired == sg.name:
return sg(d)
return sg(d, dumpsigs)
break
else:
logger.error("Invalid signature generator '%s', using default 'noop'\n"
"Available generators: %s",
desired, ', '.join(obj.name for obj in siggens))
return SignatureGenerator(d)
bb.error("Invalid signature generator '%s', using default 'noop' generator" % desired)
bb.error("Available generators: %s" % ", ".join(obj.name for obj in siggens))
return SignatureGenerator(d, dumpsigs)
class SignatureGenerator(object):
"""
"""
name = "noop"
def __init__(self, data):
def __init__(self, data, dumpsigs):
return
def finalise(self, fn, d, varient):
def finalise(self, fn, d):
return
def get_taskhash(self, fn, task, deps, dataCache):
return 0
def set_taskdata(self, hashes, deps):
return
def stampfile(self, stampbase, file_name, taskname, extrainfo):
return ("%s.%s.%s" % (stampbase, taskname, extrainfo)).rstrip('.')
class SignatureGeneratorBasic(SignatureGenerator):
"""
"""
name = "basic"
def __init__(self, data):
def __init__(self, data, dumpsigs):
self.basehash = {}
self.taskhash = {}
self.taskdeps = {}
self.runtaskdeps = {}
self.gendeps = {}
self.lookupcache = {}
self.basewhitelist = set((data.getVar("BB_HASHBASE_WHITELIST", True) or "").split())
self.basewhitelist = (data.getVar("BB_HASHBASE_WHITELIST", True) or "").split()
self.taskwhitelist = data.getVar("BB_HASHTASK_WHITELIST", True) or None
if self.taskwhitelist:
@@ -66,33 +52,21 @@ class SignatureGeneratorBasic(SignatureGenerator):
else:
self.twl = None
self.dumpsigs = dumpsigs
def _build_data(self, fn, d):
tasklist, gendeps = bb.data.generate_dependencies(d)
taskdeps, gendeps = bb.data.generate_dependencies(d)
taskdeps = {}
basehash = {}
lookupcache = {}
for task in tasklist:
for task in taskdeps:
data = d.getVar(task, False)
lookupcache[task] = data
newdeps = gendeps[task]
seen = set()
while newdeps:
nextdeps = newdeps
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep in self.basewhitelist:
continue
newdeps |= gendeps[dep]
newdeps -= seen
alldeps = seen - self.basewhitelist
for dep in sorted(alldeps):
for dep in sorted(taskdeps[task]):
if dep in self.basewhitelist:
continue
if dep in lookupcache:
var = lookupcache[dep]
else:
@@ -100,14 +74,13 @@ class SignatureGeneratorBasic(SignatureGenerator):
lookupcache[dep] = var
if var:
data = data + var
if data is None:
bb.error("Task %s from %s seems to be empty?!" % (task, fn))
self.basehash[fn + "." + task] = hashlib.md5(data).hexdigest()
taskdeps[task] = sorted(alldeps)
#bb.note("Hash for %s is %s" % (task, tashhash[task]))
self.taskdeps[fn] = taskdeps
self.gendeps[fn] = gendeps
self.lookupcache[fn] = lookupcache
if self.dumpsigs:
self.taskdeps[fn] = taskdeps
self.gendeps[fn] = gendeps
self.lookupcache[fn] = lookupcache
return taskdeps
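One variant in the hunk above computes each task's full variable-dependency set with a simple worklist. A standalone sketch of that pattern, assuming gendeps maps each name to a set of names and whitelist is a set:

def transitive_deps(task, gendeps, whitelist):
    seen = set()
    newdeps = set(gendeps[task])
    while newdeps:
        nextdeps = newdeps - seen
        seen |= nextdeps
        newdeps = set()
        for dep in nextdeps:
            # whitelisted variables are not expanded further
            if dep not in whitelist:
                newdeps |= gendeps.get(dep, set())
    return seen - whitelist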
@@ -128,18 +101,14 @@ class SignatureGeneratorBasic(SignatureGenerator):
def get_taskhash(self, fn, task, deps, dataCache):
k = fn + "." + task
data = dataCache.basetaskhash[k]
self.runtaskdeps[k] = []
self.runtaskdeps[k] = deps
for dep in sorted(deps):
# We only manipulate the dependencies for packages not in the whitelist
if self.twl and not self.twl.search(dataCache.pkg_fn[fn]):
# then process the actual dependencies
dep_fn = re.search("(?P<fn>.*)\..*", dep).group('fn')
if self.twl.search(dataCache.pkg_fn[dep_fn]):
continue
if self.twl and self.twl.search(dataCache.pkg_fn[fn]):
#bb.note("Skipping %s" % dep)
continue
if dep not in self.taskhash:
bb.fatal("%s is not in taskhash, caller isn't calling in dependency order?", dep)
bb.fatal("%s is not in taskhash, caller isn't calling in dependency order?", dep)
data = data + self.taskhash[dep]
self.runtaskdeps[k].append(dep)
h = hashlib.md5(data).hexdigest()
self.taskhash[k] = h
#d.setVar("BB_TASKHASH_task-%s" % task, taskhash[task])
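The hashing above chains: a task's hash is the md5 of its base hash concatenated with the hashes of its dependencies, taken in sorted order so the result is stable. A self-contained sketch of that scheme (not the exact BitBake call):

import hashlib

def chained_hash(basehash, dep_hashes):
    # dep_hashes maps dependency keys to their already-computed task hashes
    data = basehash
    for dep in sorted(dep_hashes):
        data = data + dep_hashes[dep]
    return hashlib.md5(data).hexdigest()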
@@ -158,8 +127,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
else:
sigfile = stampbase + "." + task + ".sigbasedata" + "." + self.basehash[k]
bb.utils.mkdirhier(os.path.dirname(sigfile))
data = {}
data['basewhitelist'] = self.basewhitelist
data['taskwhitelist'] = self.taskwhitelist
@@ -190,23 +157,11 @@ class SignatureGeneratorBasic(SignatureGenerator):
if k not in self.taskhash:
continue
if dataCache.basetaskhash[k] != self.basehash[k]:
bb.error("Bitbake's cached basehash does not match the one we just generated (%s)!" % k)
bb.error("Bitbake's cached basehash does not match the one we just generated!")
bb.error("The mismatched hashes were %s and %s" % (dataCache.basetaskhash[k], self.basehash[k]))
self.dump_sigtask(fn, task, dataCache.stamp[fn], True)
class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
name = "basichash"
def stampfile(self, stampbase, fn, taskname, extrainfo):
if taskname != "do_setscene" and taskname.endswith("_setscene"):
k = fn + "." + taskname[:-9]
else:
k = fn + "." + taskname
h = self.taskhash[k]
return ("%s.%s.%s.%s" % (stampbase, taskname, h, extrainfo)).rstrip('.')
def dump_this_task(outfile, d):
import bb.parse
fn = d.getVar("BB_FILENAME", True)
task = "do_" + d.getVar("BB_CURRENTTASK", True)
bb.parse.siggen.dump_sigtask(fn, task, outfile, "customfile")
@@ -217,6 +172,10 @@ def compare_sigfiles(a, b):
p2 = pickle.Unpickler(file(b, "rb"))
b_data = p2.load()
#print "Checking"
#print str(a_data)
#print str(b_data)
def dict_diff(a, b):
sa = set(a.keys())
sb = set(b.keys())
@@ -227,7 +186,7 @@ def compare_sigfiles(a, b):
changed.add(i)
added = sa - sb
removed = sb - sa
return changed, added, removed
return changed, added, removed
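Illustrative behaviour of dict_diff() as defined above: 'changed' holds keys present in both dicts with differing values, 'added' holds keys only in the first argument, and 'removed' keys only in the second:

changed, added, removed = dict_diff({'A': 1, 'B': 2}, {'A': 1, 'B': 3, 'C': 4})
# changed == set(['B']); added == set([]); removed == set(['C'])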
if 'basewhitelist' in a_data and a_data['basewhitelist'] != b_data['basewhitelist']:
print "basewhitelist changed from %s to %s" % (a_data['basewhitelist'], b_data['basewhitelist'])
@@ -257,20 +216,18 @@ def compare_sigfiles(a, b):
if changed:
for dep in changed:
print "Variable %s value changed from %s to %s" % (dep, a_data['varvals'][dep], b_data['varvals'][dep])
#if added:
# print "Dependency on variable %s was added (value %s)" % (dep, b_data['gendeps'][dep])
#if removed:
# print "Dependency on Variable %s was removed (value %s)" % (dep, a_data['gendeps'][dep])
if 'runtaskhashes' in a_data and 'runtaskhashes' in b_data:
changed, added, removed = dict_diff(a_data['runtaskhashes'], b_data['runtaskhashes'])
if added:
for dep in added:
print "Dependency on task %s was added" % (dep)
if removed:
for dep in removed:
print "Dependency on task %s was removed" % (dep)
if changed:
for dep in changed:
if 'runtaskdeps' in a_data and 'runtaskdeps' in b_data and a_data['runtaskdeps'] != b_data['runtaskdeps']:
print "Tasks this task depends on changed from %s to %s" % (a_data['taskdeps'], b_data['taskdeps'])
if 'runtaskhashes' in a_data:
for dep in a_data['runtaskhashes']:
if a_data['runtaskhashes'][dep] != b_data['runtaskhashes'][dep]:
print "Hash for dependent task %s changed from %s to %s" % (dep, a_data['runtaskhashes'][dep], b_data['runtaskhashes'][dep])
elif 'runtaskdeps' in a_data and 'runtaskdeps' in b_data and sorted(a_data['runtaskdeps']) != sorted(b_data['runtaskdeps']):
print "Tasks this task depends on changed from %s to %s" % (sorted(a_data['runtaskdeps']), sorted(b_data['runtaskdeps']))
def dump_sigfile(a):
p1 = pickle.Unpickler(file(a, "rb"))
@@ -296,3 +253,8 @@ def dump_sigfile(a):
if 'runtaskhashes' in a_data:
for dep in a_data['runtaskhashes']:
print "Hash for dependent task %s is %s" % (dep, a_data['runtaskhashes'][dep])

View File

@@ -23,19 +23,20 @@ Task data collection and handling
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import logging
import re
import bb
logger = logging.getLogger("BitBake.TaskData")
def re_match_strings(target, strings):
"""
Whether the string 'target' matches any entry in 'strings',
where each entry may be a plain string or a regular expression
"""
return any(name == target or re.match(name, target)
for name in strings)
import re
for name in strings:
if (name==target or
re.search(name, target)!=None):
return True
return False
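Illustrative calls covering both cases re_match_strings() handles, a literal match and a pattern match (note one variant in the hunk uses re.match, the other re.search; both agree on these examples):

re_match_strings("glibc", ["glibc"])      # True: exact string match
re_match_strings("gcc-cross", ["gcc.*"])  # True: regular-expression match
re_match_strings("binutils", ["gcc.*"])   # False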
class TaskData:
"""
@@ -181,7 +182,7 @@ class TaskData:
if not fnid in self.depids:
dependids = {}
for depend in dataCache.deps[fn]:
logger.debug(2, "Added dependency %s for %s", depend, fn)
bb.msg.debug(2, bb.msg.domain.TaskData, "Added dependency %s for %s" % (depend, fn))
dependids[self.getbuild_id(depend)] = None
self.depids[fnid] = dependids.keys()
@@ -191,12 +192,12 @@ class TaskData:
rdepends = dataCache.rundeps[fn]
rrecs = dataCache.runrecs[fn]
for package in rdepends:
for rdepend in rdepends[package]:
logger.debug(2, "Added runtime dependency %s for %s", rdepend, fn)
for rdepend in bb.utils.explode_deps(rdepends[package]):
bb.msg.debug(2, bb.msg.domain.TaskData, "Added runtime dependency %s for %s" % (rdepend, fn))
rdependids[self.getrun_id(rdepend)] = None
for package in rrecs:
for rdepend in rrecs[package]:
logger.debug(2, "Added runtime recommendation %s for %s", rdepend, fn)
for rdepend in bb.utils.explode_deps(rrecs[package]):
bb.msg.debug(2, bb.msg.domain.TaskData, "Added runtime recommendation %s for %s" % (rdepend, fn))
rdependids[self.getrun_id(rdepend)] = None
self.rdepids[fnid] = rdependids.keys()
@@ -396,7 +397,7 @@ class TaskData:
fnid = self.getfn_id(fn)
if fnid in self.failed_fnids:
continue
logger.debug(2, "adding %s to satisfy %s", fn, item)
bb.msg.debug(2, bb.msg.domain.Provider, "adding %s to satisfy %s" % (fn, item))
self.add_build_target(fn, item)
self.add_tasks(fn, dataCache)
@@ -449,7 +450,7 @@ class TaskData:
fnid = self.getfn_id(fn)
if fnid in self.failed_fnids:
continue
logger.debug(2, "adding '%s' to satisfy runtime '%s'", fn, item)
bb.msg.debug(2, bb.msg.domain.Provider, "adding '%s' to satisfy runtime '%s'" % (fn, item))
self.add_runtime_target(fn, item)
self.add_tasks(fn, dataCache)
@@ -462,7 +463,7 @@ class TaskData:
"""
if fnid in self.failed_fnids:
return
logger.debug(1, "File '%s' is unbuildable, removing...", self.fn_index[fnid])
bb.msg.debug(1, bb.msg.domain.Provider, "File '%s' is unbuildable, removing..." % self.fn_index[fnid])
self.failed_fnids.append(fnid)
for target in self.build_targets:
if fnid in self.build_targets[target]:
@@ -484,12 +485,12 @@ class TaskData:
missing_list = [self.build_names_index[targetid]]
else:
missing_list = [self.build_names_index[targetid]] + missing_list
logger.verbose("Target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s", self.build_names_index[targetid], missing_list)
bb.msg.note(2, bb.msg.domain.Provider, "Target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s" % (self.build_names_index[targetid], missing_list))
self.failed_deps.append(targetid)
dependees = self.get_dependees(targetid)
for fnid in dependees:
self.fail_fnid(fnid, missing_list)
for taskid in xrange(len(self.tasks_idepends)):
for taskid in range(len(self.tasks_idepends)):
idepends = self.tasks_idepends[taskid]
for (idependid, idependtask) in idepends:
if idependid == targetid:
@@ -497,7 +498,7 @@ class TaskData:
if self.abort and targetid in self.external_targets:
target = self.build_names_index[targetid]
logger.error("Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s", target, missing_list)
bb.msg.error(bb.msg.domain.Provider, "Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s" % (target, missing_list))
raise bb.providers.NoProvider(target)
def remove_runtarget(self, targetid, missing_list = []):
@@ -510,7 +511,7 @@ class TaskData:
else:
missing_list = [self.run_names_index[targetid]] + missing_list
logger.info("Runtime target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s", self.run_names_index[targetid], missing_list)
bb.msg.note(1, bb.msg.domain.Provider, "Runtime target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s" % (self.run_names_index[targetid], missing_list))
self.failed_rdeps.append(targetid)
dependees = self.get_rdependees(targetid)
for fnid in dependees:
@@ -520,7 +521,7 @@ class TaskData:
"""
Resolve all unresolved build and runtime targets
"""
logger.info("Resolving any missing task queue dependencies")
bb.msg.note(1, bb.msg.domain.TaskData, "Resolving any missing task queue dependencies")
while True:
added = 0
for target in self.get_unresolved_build_targets(dataCache):
@@ -538,7 +539,7 @@ class TaskData:
added = added + 1
except bb.providers.NoRProvider:
self.remove_runtarget(self.getrun_id(target))
logger.debug(1, "Resolved " + str(added) + " extra dependencies")
bb.msg.debug(1, bb.msg.domain.TaskData, "Resolved " + str(added) + " extra dependencies")
if added == 0:
break
# self.dump_data()
@@ -547,40 +548,40 @@ class TaskData:
"""
Dump some debug information on the internal data structures
"""
logger.debug(3, "build_names:")
logger.debug(3, ", ".join(self.build_names_index))
bb.msg.debug(3, bb.msg.domain.TaskData, "build_names:")
bb.msg.debug(3, bb.msg.domain.TaskData, ", ".join(self.build_names_index))
logger.debug(3, "run_names:")
logger.debug(3, ", ".join(self.run_names_index))
bb.msg.debug(3, bb.msg.domain.TaskData, "run_names:")
bb.msg.debug(3, bb.msg.domain.TaskData, ", ".join(self.run_names_index))
logger.debug(3, "build_targets:")
for buildid in xrange(len(self.build_names_index)):
bb.msg.debug(3, bb.msg.domain.TaskData, "build_targets:")
for buildid in range(len(self.build_names_index)):
target = self.build_names_index[buildid]
targets = "None"
if buildid in self.build_targets:
targets = self.build_targets[buildid]
logger.debug(3, " (%s)%s: %s", buildid, target, targets)
bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s: %s" % (buildid, target, targets))
logger.debug(3, "run_targets:")
for runid in xrange(len(self.run_names_index)):
bb.msg.debug(3, bb.msg.domain.TaskData, "run_targets:")
for runid in range(len(self.run_names_index)):
target = self.run_names_index[runid]
targets = "None"
if runid in self.run_targets:
targets = self.run_targets[runid]
logger.debug(3, " (%s)%s: %s", runid, target, targets)
bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s: %s" % (runid, target, targets))
logger.debug(3, "tasks:")
for task in xrange(len(self.tasks_name)):
logger.debug(3, " (%s)%s - %s: %s",
task,
self.fn_index[self.tasks_fnid[task]],
self.tasks_name[task],
self.tasks_tdepends[task])
bb.msg.debug(3, bb.msg.domain.TaskData, "tasks:")
for task in range(len(self.tasks_name)):
bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s - %s: %s" % (
task,
self.fn_index[self.tasks_fnid[task]],
self.tasks_name[task],
self.tasks_tdepends[task]))
logger.debug(3, "dependency ids (per fn):")
bb.msg.debug(3, bb.msg.domain.TaskData, "dependency ids (per fn):")
for fnid in self.depids:
logger.debug(3, " %s %s: %s", fnid, self.fn_index[fnid], self.depids[fnid])
bb.msg.debug(3, bb.msg.domain.TaskData, " %s %s: %s" % (fnid, self.fn_index[fnid], self.depids[fnid]))
logger.debug(3, "runtime dependency ids (per fn):")
bb.msg.debug(3, bb.msg.domain.TaskData, "runtime dependency ids (per fn):")
for fnid in self.rdepids:
logger.debug(3, " %s %s: %s", fnid, self.fn_index[fnid], self.rdepids[fnid])
bb.msg.debug(3, bb.msg.domain.TaskData, " %s %s: %s" % (fnid, self.fn_index[fnid], self.rdepids[fnid]))

View File

@@ -1,5 +1,5 @@
#
# Gtk+ UI pieces for BitBake
# BitBake UI Implementation
#
# Copyright (C) 2006-2007 Richard Purdie
#

View File

@@ -1,137 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gobject
from bb.ui.crumbs.progress import ProgressBar
progress_total = 0
class HobHandler(gobject.GObject):
"""
This object does BitBake event handling for the hob gui.
"""
__gsignals__ = {
"machines-updated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"distros-updated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"generating-data" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"data-generated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
())
}
def __init__(self, taskmodel, server):
gobject.GObject.__init__(self)
self.model = taskmodel
self.server = server
self.current_command = None
self.building = False
self.command_map = {
"findConfigFilesDistro" : ("findConfigFiles", "MACHINE", "findConfigFilesMachine"),
"findConfigFilesMachine" : ("generateTargetsTree", "classes/image.bbclass", None),
"generateTargetsTree" : (None, None, None),
}
def run_next_command(self):
# FIXME: this is ugly and I *will* replace it
if self.current_command:
next_cmd = self.command_map[self.current_command]
command = next_cmd[0]
argument = next_cmd[1]
self.current_command = next_cmd[2]
if command == "generateTargetsTree":
self.emit("generating-data")
self.server.runCommand([command, argument])
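The command_map above is a small hand-rolled state machine: each value is a tuple of (command to run, its argument, the next state). A comment-only trace of the chain as written:

# current_command = "findConfigFilesDistro"
#   -> runCommand(["findConfigFiles", "MACHINE"]); next state is
#      "findConfigFilesMachine"
# current_command = "findConfigFilesMachine"
#   -> runCommand(["generateTargetsTree", "classes/image.bbclass"]), emitting
#      "generating-data" first; next state is None, so the chain stops.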
def handle_event(self, event, running_build, pbar=None):
if not event:
return
# If we're running a build, use the RunningBuild event handler
if self.building:
running_build.handle_event(event)
elif isinstance(event, bb.event.TargetsTreeGenerated):
self.emit("data-generated")
if event._model:
self.model.populate(event._model)
elif isinstance(event, bb.event.ConfigFilesFound):
var = event._variable
if var == "distro":
distros = event._values
distros.sort()
self.emit("distros-updated", distros)
elif var == "machine":
machines = event._values
machines.sort()
self.emit("machines-updated", machines)
elif isinstance(event, bb.command.CommandCompleted):
self.run_next_command()
elif isinstance(event, bb.event.CacheLoadStarted) and pbar:
pbar.set_title("Loading cache")
bb.ui.crumbs.hobeventhandler.progress_total = event.total
pbar.update(0, bb.ui.crumbs.hobeventhandler.progress_total)
elif isinstance(event, bb.event.CacheLoadProgress) and pbar:
pbar.update(event.current, bb.ui.crumbs.hobeventhandler.progress_total)
elif isinstance(event, bb.event.CacheLoadCompleted) and pbar:
pbar.update(bb.ui.crumbs.hobeventhandler.progress_total, bb.ui.crumbs.hobeventhandler.progress_total)
elif isinstance(event, bb.event.ParseStarted) and pbar:
pbar.set_title("Processing recipes")
bb.ui.crumbs.hobeventhandler.progress_total = event.total
pbar.update(0, bb.ui.crumbs.hobeventhandler.progress_total)
elif isinstance(event, bb.event.ParseProgress) and pbar:
pbar.update(event.current, bb.ui.crumbs.hobeventhandler.progress_total)
elif isinstance(event, bb.event.ParseCompleted) and pbar:
pbar.hide()
return
def event_handle_idle_func (self, eventHandler, running_build, pbar):
# Consume as many messages as we can in the time available to us
event = eventHandler.getEvent()
while event:
self.handle_event(event, running_build, pbar)
event = eventHandler.getEvent()
return True
def set_machine(self, machine):
self.server.runCommand(["setVariable", "MACHINE", machine])
self.current_command = "findConfigFilesMachine"
self.run_next_command()
def set_distro(self, distro):
self.server.runCommand(["setVariable", "DISTRO", distro])
def run_build(self, targets):
self.building = True
self.server.runCommand(["buildTargets", targets, "build"])
def cancel_build(self):
# Note: this may not be the right way to stop an in-progress build
self.server.runCommand(["stateStop"])

View File

@@ -1,20 +0,0 @@
import gtk
class ProgressBar(gtk.Dialog):
def __init__(self, parent):
gtk.Dialog.__init__(self, flags=(gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT))
self.set_title("Parsing metadata, please wait...")
self.set_default_size(500, 0)
self.set_transient_for(parent)
self.progress = gtk.ProgressBar()
self.vbox.pack_start(self.progress)
self.show_all()
def update(self, x, y):
self.progress.set_fraction(float(x)/float(y))
self.progress.set_text("%2d %%" % (x*100/y))
def pulse(self):
self.progress.set_text("Loading...")
self.progress.pulse()
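Illustrative usage of the ProgressBar dialog above (assuming a parent gtk.Window named window):

pbar = ProgressBar(window)
pbar.update(30, 100)   # determinate: fraction 0.3, text "30 %"
pbar.pulse()           # indeterminate: bounce the bar while loading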

View File

@@ -1,4 +1,3 @@
#
# BitBake Graphical GTK User Interface
#
@@ -21,20 +20,9 @@
import gtk
import gobject
import logging
import time
import urllib
import urllib2
class Colors(object):
OK = "#ffffff"
RUNNING = "#aaffaa"
WARNING ="#f88017"
ERROR = "#ffaaaa"
class RunningBuildModel (gtk.TreeStore):
(COL_LOG, COL_PACKAGE, COL_TASK, COL_MESSAGE, COL_ICON, COL_COLOR, COL_NUM_ACTIVE) = range(7)
(COL_TYPE, COL_PACKAGE, COL_TASK, COL_MESSAGE, COL_ICON, COL_ACTIVE) = (0, 1, 2, 3, 4, 5)
def __init__ (self):
gtk.TreeStore.__init__ (self,
gobject.TYPE_STRING,
@@ -42,8 +30,7 @@ class RunningBuildModel (gtk.TreeStore):
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_INT)
gobject.TYPE_BOOLEAN)
class RunningBuild (gobject.GObject):
__gsignals__ = {
@@ -61,7 +48,7 @@ class RunningBuild (gobject.GObject):
gobject.GObject.__init__ (self)
self.model = RunningBuildModel()
def handle_event (self, event, pbar=None):
def handle_event (self, event):
# Handle an event from the event queue, this may result in updating
# the model and thus the UI. Or it may be to tell us that the build
# has finished successfully (or not, as the case may be.)
@@ -76,42 +63,32 @@ class RunningBuild (gobject.GObject):
# for the message.
if hasattr(event, 'pid'):
pid = event.pid
if hasattr(event, 'process'):
pid = event.process
if pid in self.pids_to_task:
(package, task) = self.pids_to_task[pid]
parent = self.tasks_to_iter[(package, task)]
if pid and pid in self.pids_to_task:
(package, task) = self.pids_to_task[pid]
parent = self.tasks_to_iter[(package, task)]
if(isinstance(event, logging.LogRecord)):
if (event.msg.startswith ("Running task")):
return # don't add these to the list
if event.levelno >= logging.ERROR:
icon = "dialog-error"
color = Colors.ERROR
elif event.levelno >= logging.WARNING:
if isinstance(event, bb.msg.Msg):
# Set a pretty icon for the message based on its type.
if isinstance(event, bb.msg.MsgWarn):
icon = "dialog-warning"
color = Colors.WARNING
elif isinstance(event, bb.msg.MsgErr):
icon = "dialog-error"
else:
icon = None
color = Colors.OK
# if we know which package we belong to, we'll append onto its list.
# otherwise, we'll jump to the top of the master list
if parent:
tree_add = self.model.append
else:
tree_add = self.model.prepend
tree_add(parent,
(None,
package,
task,
event.getMessage(),
icon,
color,
0))
# Ignore the "Running task i of n .." messages
if (event._message.startswith ("Running task")):
return
# Add the message to the tree either at the top level if parent is
# None otherwise as a descendent of a task.
self.model.append (parent,
(event.__name__.split()[-1], # e.g. MsgWarn, MsgError
package,
task,
event._message,
icon,
False))
elif isinstance(event, bb.build.TaskStarted):
(package, task) = (event._package, event._task)
@@ -124,142 +101,68 @@ class RunningBuild (gobject.GObject):
if ((package, None) in self.tasks_to_iter):
parent = self.tasks_to_iter[(package, None)]
else:
parent = self.model.prepend(None, (None,
parent = self.model.append (None, (None,
package,
None,
"Package: %s" % (package),
None,
Colors.OK,
0))
False))
self.tasks_to_iter[(package, None)] = parent
# Because this parent package now has an active child mark it as
# such.
# @todo if parent is already in error, don't mark it green
self.model.set(parent, self.model.COL_ICON, "gtk-execute",
self.model.COL_COLOR, Colors.RUNNING)
self.model.set(parent, self.model.COL_ICON, "gtk-execute")
# Add an entry in the model for this task
i = self.model.append (parent, (None,
package,
task,
"Task: %s" % (task),
"gtk-execute",
Colors.RUNNING,
0))
# update the parent's active task count
num_active = self.model.get(parent, self.model.COL_NUM_ACTIVE)[0] + 1
self.model.set(parent, self.model.COL_NUM_ACTIVE, num_active)
None,
False))
# Save out the iter so that we can find it when we have a message
# that we need to attach to a task.
self.tasks_to_iter[(package, task)] = i
elif isinstance(event, bb.build.TaskBase):
current = self.tasks_to_iter[(package, task)]
parent = self.tasks_to_iter[(package, None)]
# Mark this task as active.
self.model.set(i, self.model.COL_ICON, "gtk-execute")
# remove this task from the parent's active count
num_active = self.model.get(parent, self.model.COL_NUM_ACTIVE)[0] - 1
self.model.set(parent, self.model.COL_NUM_ACTIVE, num_active)
elif isinstance(event, bb.build.Task):
if isinstance(event, bb.build.TaskFailed):
# Mark the task and parent as failed
icon = "dialog-error"
color = Colors.ERROR
# Mark the task as failed
i = self.tasks_to_iter[(package, task)]
self.model.set(i, self.model.COL_ICON, "dialog-error")
logfile = event.logfile
if logfile and os.path.exists(logfile):
with open(logfile) as f:
logdata = f.read()
self.model.append(current, ('pastebin', None, None, logdata, 'gtk-error', Colors.OK, 0))
for i in (current, parent):
self.model.set(i, self.model.COL_ICON, icon,
self.model.COL_COLOR, color)
else:
icon = None
color = Colors.OK
# Mark the task as inactive
self.model.set(current, self.model.COL_ICON, icon,
self.model.COL_COLOR, color)
# Mark the parent package as inactive, but make sure to
# preserve error and active states
# Mark the parent package as failed
i = self.tasks_to_iter[(package, None)]
if self.model.get(parent, self.model.COL_ICON) != 'dialog-error':
self.model.set(parent, self.model.COL_ICON, icon)
if num_active == 0:
self.model.set(parent, self.model.COL_COLOR, Colors.OK)
self.model.set(i, self.model.COL_ICON, "dialog-error")
else:
# Mark the task as inactive
i = self.tasks_to_iter[(package, task)]
self.model.set(i, self.model.COL_ICON, None)
# Mark the parent package as inactive
i = self.tasks_to_iter[(package, None)]
self.model.set(i, self.model.COL_ICON, None)
# Clear the iters and the pids since when the task goes away the
# pid will no longer be used for messages
del self.tasks_to_iter[(package, task)]
del self.pids_to_task[pid]
elif isinstance(event, bb.event.BuildStarted):
self.model.prepend(None, (None,
None,
None,
"Build Started (%s)" % time.strftime('%m/%d/%Y %H:%M:%S'),
None,
Colors.OK,
0))
elif isinstance(event, bb.event.BuildCompleted):
failures = int (event._failures)
self.model.prepend(None, (None,
None,
None,
"Build Completed (%s)" % time.strftime('%m/%d/%Y %H:%M:%S'),
None,
Colors.OK,
0))
# Emit the appropriate signal depending on the number of failures
if (failures >= 1):
if (failures > 1):
self.emit ("build-failed")
else:
self.emit ("build-succeeded")
elif isinstance(event, bb.event.CacheLoadStarted) and pbar:
pbar.set_title("Loading cache")
self.progress_total = event.total
pbar.update(0, self.progress_total)
elif isinstance(event, bb.event.CacheLoadProgress) and pbar:
pbar.update(event.current, self.progress_total)
elif isinstance(event, bb.event.CacheLoadCompleted) and pbar:
pbar.update(self.progress_total, self.progress_total)
elif isinstance(event, bb.event.ParseStarted) and pbar:
pbar.set_title("Processing recipes")
self.progress_total = event.total
pbar.update(0, self.progress_total)
elif isinstance(event, bb.event.ParseProgress) and pbar:
pbar.update(event.current, self.progress_total)
elif isinstance(event, bb.event.ParseCompleted) and pbar:
pbar.hide()
return
def do_pastebin(text):
url = 'http://pastebin.com/api_public.php'
params = {'paste_code': text, 'paste_format': 'text'}
req = urllib2.Request(url, urllib.urlencode(params))
response = urllib2.urlopen(req)
paste_url = response.read()
return paste_url
class RunningBuildTreeView (gtk.TreeView):
__gsignals__ = {
"button_press_event" : "override"
}
def __init__ (self):
gtk.TreeView.__init__ (self)
@@ -270,42 +173,6 @@ class RunningBuildTreeView (gtk.TreeView):
self.append_column (col)
# The message of the build.
self.message_renderer = gtk.CellRendererText ()
self.message_column = gtk.TreeViewColumn ("Message", self.message_renderer, text=3)
self.message_column.add_attribute(self.message_renderer, 'background', 5)
self.message_renderer.set_property('editable', 5)
self.append_column (self.message_column)
def do_button_press_event(self, event):
gtk.TreeView.do_button_press_event(self, event)
if event.button == 3:
selection = super(RunningBuildTreeView, self).get_selection()
(model, iter) = selection.get_selected()
if iter is not None:
can_paste = model.get(iter, model.COL_LOG)[0]
if can_paste == 'pastebin':
# build a simple menu with a pastebin option
menu = gtk.Menu()
menuitem = gtk.MenuItem("Send log to pastebin")
menu.append(menuitem)
menuitem.connect("activate", self.pastebin_handler, (model, iter))
menuitem.show()
menu.show()
menu.popup(None, None, None, event.button, event.time)
def pastebin_handler(self, widget, data):
"""
Send the log data to pastebin, then add the new paste url to the
clipboard.
"""
(model, iter) = data
paste_url = do_pastebin(model.get(iter, model.COL_MESSAGE)[0])
# @todo Provide visual feedback to the user that it is done and that
# it worked.
print paste_url
clipboard = gtk.clipboard_get()
clipboard.set_text(paste_url)
clipboard.store()
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Message", renderer, text=3)
self.append_column (col)

View File

@@ -1,346 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
class TaskListModel(gtk.ListStore):
"""
This class defines a gtk.ListStore subclass which will convert the output
of the bb.event.TargetsTreeGenerated event into a gtk.ListStore whilst also
providing convenience functions to access gtk.TreeModel subclasses which
provide filtered views of the data.
"""
(COL_NAME, COL_DESC, COL_LIC, COL_GROUP, COL_DEPS, COL_BINB, COL_TYPE, COL_INC) = range(8)
__gsignals__ = {
"tasklist-populated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
())
}
"""
"""
def __init__(self):
self.contents = None
self.tasks = None
self.packages = None
self.images = None
gtk.ListStore.__init__ (self,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_BOOLEAN)
"""
Create, if required, and return a filtered gtk.TreeModel
containing only the items which are to be included in the
image
"""
def contents_model(self):
if not self.contents:
self.contents = self.filter_new()
self.contents.set_visible_column(self.COL_INC)
return self.contents
"""
Helper function to determine whether an item is a task
"""
def task_model_filter(self, model, it):
if model.get_value(it, self.COL_TYPE) == 'task':
return True
else:
return False
"""
Create, if required, and return a filtered gtk.TreeModel
containing only the items which are tasks
"""
def tasks_model(self):
if not self.tasks:
self.tasks = self.filter_new()
self.tasks.set_visible_func(self.task_model_filter)
return self.tasks
"""
Helper function to determine whether an item is an image
"""
def image_model_filter(self, model, it):
if model.get_value(it, self.COL_TYPE) == 'image':
return True
else:
return False
"""
Create, if required, and return a filtered gtk.TreeModel
containing only the items which are images
"""
def images_model(self):
if not self.images:
self.images = self.filter_new()
self.images.set_visible_func(self.image_model_filter)
return self.images
"""
Helper function to determine whether an item is a package
"""
def package_model_filter(self, model, it):
if model.get_value(it, self.COL_TYPE) == 'package':
return True
else:
return False
"""
Create, if required, and return a filtered gtk.TreeModel
containing only the items which are packages
"""
def packages_model(self):
if not self.packages:
self.packages = self.filter_new()
self.packages.set_visible_func(self.package_model_filter)
return self.packages
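A sketch of how these filtered views are consumed (illustrative; assumes a populated TaskListModel instance named model):

packages = model.packages_model()   # only rows whose COL_TYPE is 'package'
for row in packages:
    print row[model.COL_NAME]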
"""
The populate() function takes as input the data from a
bb.event.TargetsTreeGenerated event and populates the TaskList.
Once the population is done it emits gsignal tasklist-populated
to notify any listeners that the model is ready
"""
def populate(self, event_model):
for item in event_model["pn"]:
atype = 'package'
name = item
summary = event_model["pn"][item]["summary"]
license = event_model["pn"][item]["license"]
group = event_model["pn"][item]["section"]
depends = event_model["depends"].get(item, "")
rdepends = event_model["rdepends-pn"].get(item, "")
depends = depends + rdepends
self.squish(depends)
deps = " ".join(depends)
if name.count('task-') > 0:
atype = 'task'
elif name.count('-image-') > 0:
atype = 'image'
self.set(self.append(), self.COL_NAME, name, self.COL_DESC, summary,
self.COL_LIC, license, self.COL_GROUP, group,
self.COL_DEPS, deps, self.COL_BINB, "",
self.COL_TYPE, atype, self.COL_INC, False)
self.emit("tasklist-populated")
"""
squish lst so that it doesn't contain any duplicates
"""
def squish(self, lst):
seen = {}
for l in lst:
seen[l] = 1
return seen.keys()
"""
Mark the item at path as not included
NOTE:
path should be a gtk.TreeModelPath into self (not a filtered model)
"""
def remove_item_path(self, path):
self[path][self.COL_BINB] = ""
self[path][self.COL_INC] = False
"""
"""
def mark(self, path):
name = self[path][self.COL_NAME]
it = self.get_iter_first()
removals = []
#print("Removing %s" % name)
self.remove_item_path(path)
# Remove all dependent packages, update binb
while it:
path = self.get_path(it)
# FIXME: need to ensure partial name matching doesn't happen, regexp?
if self[path][self.COL_INC] and self[path][self.COL_DEPS].count(name):
#print("%s depended on %s, marking for removal" % (self[path][self.COL_NAME], name))
# found a dependency, remove it
self.mark(path)
if self[path][self.COL_INC] and self[path][self.COL_BINB].count(name):
binb = self.find_alt_dependency(self[path][self.COL_NAME])
#print("%s was brought in by %s, binb set to %s" % (self[path][self.COL_NAME], name, binb))
self[path][self.COL_BINB] = binb
it = self.iter_next(it)
"""
"""
def sweep_up(self):
removals = []
it = self.get_iter_first()
while it:
path = self.get_path(it)
binb = self[path][self.COL_BINB]
if binb == "" or binb is None:
#print("Sweeping up %s" % self[path][self.COL_NAME])
if not path in removals:
removals.extend(path)
it = self.iter_next(it)
while removals:
path = removals.pop()
self.mark(path)
"""
Remove an item from the contents
"""
def remove_item(self, path):
self.mark(path)
self.sweep_up()
"""
Find the name of an item in the image contents which depends on the item
at contents_path returns either an item name (str) or None
NOTE:
contents_path must be a path in the self.contents gtk.TreeModel
"""
def find_alt_dependency(self, name):
it = self.get_iter_first()
while it:
# iterate all items in the model
path = self.get_path(it)
deps = self[path][self.COL_DEPS]
itname = self[path][self.COL_NAME]
inc = self[path][self.COL_INC]
if itname != name and inc and deps.count(name) > 0:
# if this item depends on the item, return this items name
#print("%s depends on %s" % (itname, name))
return itname
it = self.iter_next(it)
return ""
"""
Convert a path in self to a path in the filtered contents model
"""
def contents_path_for_path(self, path):
return self.contents.convert_child_path_to_path(path)
"""
Check the self.contents gtk.TreeModel for an item
where COL_NAME matches item_name
Returns True if a match is found, False otherwise
"""
def contents_includes_name(self, item_name):
it = self.contents.get_iter_first()
while it:
path = self.contents.get_path(it)
if self.contents[path][self.COL_NAME] == item_name:
return True
it = self.contents.iter_next(it)
return False
"""
Add this item, and any of its dependencies, to the image contents
"""
def include_item(self, item_path, binb=""):
name = self[item_path][self.COL_NAME]
deps = self[item_path][self.COL_DEPS]
cur_inc = self[item_path][self.COL_INC]
#print("Adding %s for %s dependency" % (name, binb))
if not cur_inc:
self[item_path][self.COL_INC] = True
self[item_path][self.COL_BINB] = binb
if deps:
#print("Dependencies of %s are %s" % (name, deps))
# add all of the deps and set their binb to this item
for dep in deps.split(" "):
# FIXME: this skipping virtuals can't be right? Unless we choose only to show target
# packages? In which case we should handle this server side...
# If the contents model doesn't already contain dep, add it
if not dep.startswith("virtual") and not self.contents_includes_name(dep):
path = self.find_path_for_item(dep)
if path:
self.include_item(path, name)
else:
pass
"""
Find the model path for the item_name
Returns the path in the model or None
"""
def find_path_for_item(self, item_name):
it = self.get_iter_first()
path = None
while it:
path = self.get_path(it)
if (self[path][self.COL_NAME] == item_name):
return path
else:
it = self.iter_next(it)
return None
"""
Empty self.contents by setting the include of each entry to None
"""
def reset(self):
it = self.contents.get_iter_first()
while it:
path = self.contents.get_path(it)
opath = self.contents.convert_path_to_child_path(path)
self[opath][self.COL_INC] = False
self[opath][self.COL_BINB] = ""
# As we've just removed the first item...
it = self.contents.get_iter_first()
"""
Returns True if one of the selected tasks is an image, False otherwise
"""
def targets_contains_image(self):
it = self.images.get_iter_first()
while it:
path = self.images.get_path(it)
inc = self.images[path][self.COL_INC]
if inc:
return True
it = self.images.iter_next(it)
return False
"""
Return a list of all selected items which are not -native or -cross
"""
def get_targets(self):
tasks = []
it = self.contents.get_iter_first()
while it:
path = self.contents.get_path(it)
name = self.contents[path][self.COL_NAME]
stype = self.contents[path][self.COL_TYPE]
if not name.count('-native') and not name.count('-cross'):
tasks.append(name)
it = self.contents.iter_next(it)
return tasks

View File

@@ -19,12 +19,8 @@
import gobject
import gtk
import Queue
import threading
import xmlrpclib
import bb
import bb.event
from bb.ui.crumbs.progress import ProgressBar
# Package Model
(COL_PKG_NAME) = (0)
@@ -33,7 +29,6 @@ from bb.ui.crumbs.progress import ProgressBar
(TYPE_DEP, TYPE_RDEP) = (0, 1)
(COL_DEP_TYPE, COL_DEP_PARENT, COL_DEP_PACKAGE) = (0, 1, 2)
class PackageDepView(gtk.TreeView):
def __init__(self, model, dep_type, label):
gtk.TreeView.__init__(self)
@@ -54,7 +49,6 @@ class PackageDepView(gtk.TreeView):
self.current = package
self.filter_model.refilter()
class PackageReverseDepView(gtk.TreeView):
def __init__(self, model, label):
gtk.TreeView.__init__(self)
@@ -72,7 +66,6 @@ class PackageReverseDepView(gtk.TreeView):
self.current = package
self.filter_model.refilter()
class DepExplorer(gtk.Window):
def __init__(self):
gtk.Window.__init__(self)
@@ -82,9 +75,7 @@ class DepExplorer(gtk.Window):
# Create the data models
self.pkg_model = gtk.ListStore(gobject.TYPE_STRING)
self.pkg_model.set_sort_column_id(COL_PKG_NAME, gtk.SORT_ASCENDING)
self.depends_model = gtk.ListStore(gobject.TYPE_INT, gobject.TYPE_STRING, gobject.TYPE_STRING)
self.depends_model.set_sort_column_id(COL_DEP_PACKAGE, gtk.SORT_ASCENDING)
pane = gtk.HPaned()
pane.set_position(250)
@@ -94,11 +85,9 @@ class DepExplorer(gtk.Window):
scrolled = gtk.ScrolledWindow()
scrolled.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
scrolled.set_shadow_type(gtk.SHADOW_IN)
self.pkg_treeview = gtk.TreeView(self.pkg_model)
self.pkg_treeview.get_selection().connect("changed", self.on_cursor_changed)
column = gtk.TreeViewColumn("Package", gtk.CellRendererText(), text=COL_PKG_NAME)
self.pkg_treeview.append_column(column)
self.pkg_treeview.append_column(gtk.TreeViewColumn("Package", gtk.CellRendererText(), text=COL_PKG_NAME))
pane.add1(scrolled)
scrolled.add(self.pkg_treeview)
@@ -164,6 +153,7 @@ class DepExplorer(gtk.Window):
def parse(depgraph, pkg_model, depends_model):
for package in depgraph["pn"]:
pkg_model.set(pkg_model.append(), COL_PKG_NAME, package)
@@ -181,6 +171,17 @@ def parse(depgraph, pkg_model, depends_model):
COL_DEP_PARENT, package,
COL_DEP_PACKAGE, rdepend)
class ProgressBar(gtk.Window):
def __init__(self):
gtk.Window.__init__(self)
self.set_title("Parsing .bb files, please wait...")
self.set_default_size(500, 0)
self.connect("delete-event", gtk.main_quit)
self.progress = gtk.ProgressBar()
self.add(self.progress)
self.show_all()
class gtkthread(threading.Thread):
quit = threading.Event()
@@ -195,8 +196,8 @@ class gtkthread(threading.Thread):
gtk.main()
gtkthread.quit.set()
def init(server, eventHandler):
def main(server, eventHandler):
try:
cmdline = server.runCommand(["getCmdLineAction"])
if not cmdline or cmdline[0] != "generateDotGraph":
@@ -216,83 +217,46 @@ def main(server, eventHandler):
gtkgui.start()
gtk.gdk.threads_enter()
pbar = ProgressBar()
dep = DepExplorer()
pbar = ProgressBar(dep)
pbar.connect("delete-event", gtk.main_quit)
gtk.gdk.threads_leave()
progress_total = 0
while True:
try:
event = eventHandler.waitEvent(0.25)
if gtkthread.quit.isSet():
server.runCommand(["stateStop"])
break
if event is None:
continue
if isinstance(event, bb.event.CacheLoadStarted):
progress_total = event.total
gtk.gdk.threads_enter()
pbar.set_title("Loading Cache")
pbar.update(0, progress_total)
gtk.gdk.threads_leave()
if isinstance(event, bb.event.CacheLoadProgress):
x = event.current
gtk.gdk.threads_enter()
pbar.update(x, progress_total)
gtk.gdk.threads_leave()
continue
if isinstance(event, bb.event.CacheLoadCompleted):
gtk.gdk.threads_enter()
pbar.update(progress_total, progress_total)
gtk.gdk.threads_leave()
continue
if isinstance(event, bb.event.ParseStarted):
progress_total = event.total
gtk.gdk.threads_enter()
pbar.set_title("Processing recipes")
pbar.update(0, progress_total)
gtk.gdk.threads_leave()
if isinstance(event, bb.event.ParseProgress):
x = event.current
x = event.sofar
y = event.total
if x == y:
print(("\nParsing finished. %d cached, %d parsed, %d skipped, %d masked, %d errors."
% ( event.cached, event.parsed, event.skipped, event.masked, event.errors)))
pbar.hide()
gtk.gdk.threads_enter()
pbar.update(x, progress_total)
pbar.progress.set_fraction(float(x)/float(y))
pbar.progress.set_text("%d/%d (%2d %%)" % (x, y, x*100/y))
gtk.gdk.threads_leave()
continue
if isinstance(event, bb.event.ParseCompleted):
pbar.hide()
continue
if isinstance(event, bb.event.DepTreeGenerated):
gtk.gdk.threads_enter()
parse(event._depgraph, dep.pkg_model, dep.depends_model)
gtk.gdk.threads_leave()
if isinstance(event, bb.command.CommandCompleted):
if isinstance(event, bb.command.CookerCommandCompleted):
continue
if isinstance(event, bb.command.CommandFailed):
if isinstance(event, bb.command.CookerCommandFailed):
print("Command execution failed: %s" % event.error)
return event.exitcode
if isinstance(event, bb.command.CommandExit):
return event.exitcode
break
if isinstance(event, bb.cooker.CookerExit):
break
continue
except EnvironmentError as ioerror:
# ignore interrupted io
if ioerror.args[0] == 4:
pass
except KeyboardInterrupt:
if shutdown == 2:
print("\nThird Keyboard Interrupt, exit.\n")

View File

@@ -22,34 +22,17 @@ import gobject
import gtk
import xmlrpclib
from bb.ui.crumbs.runningbuild import RunningBuildTreeView, RunningBuild
from bb.ui.crumbs.progress import ProgressBar
import Queue
def event_handle_idle_func (eventHandler, build, pbar):
def event_handle_idle_func (eventHandler, build):
# Consume as many messages as we can in the time available to us
event = eventHandler.getEvent()
while event:
build.handle_event (event, pbar)
build.handle_event (event)
event = eventHandler.getEvent()
return True
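This is the standard PyGTK event-pump pattern: registered via gobject.timeout_add(), the function is called periodically and is rescheduled for as long as it returns True, draining BitBake's event queue on each tick. Registration mirrors the call later in this file:

gobject.timeout_add(200, event_handle_idle_func, eventHandler, build)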
def scroll_tv_cb (model, path, iter, view):
view.scroll_to_cell (path)
# @todo hook these into the GUI so the user has feedback...
def running_build_failed_cb (running_build):
pass
def running_build_succeeded_cb (running_build):
pass
class MainWindow (gtk.Window):
def __init__ (self):
gtk.Window.__init__ (self, gtk.WINDOW_TOPLEVEL)
@@ -58,29 +41,21 @@ class MainWindow (gtk.Window):
scrolled_window = gtk.ScrolledWindow ()
self.add (scrolled_window)
self.cur_build_tv = RunningBuildTreeView()
self.connect("delete-event", gtk.main_quit)
self.set_default_size(640, 480)
scrolled_window.add (self.cur_build_tv)
def main (server, eventHandler):
def init (server, eventHandler):
gobject.threads_init()
gtk.gdk.threads_init()
window = MainWindow ()
window.show_all ()
pbar = ProgressBar(window)
pbar.connect("delete-event", gtk.main_quit)
# Create the object for the current build
running_build = RunningBuild ()
window.cur_build_tv.set_model (running_build.model)
running_build.model.connect("row-inserted", scroll_tv_cb, window.cur_build_tv)
running_build.connect ("build-succeeded", running_build_succeeded_cb)
running_build.connect ("build-failed", running_build_failed_cb)
try:
cmdline = server.runCommand(["getCmdLineAction"])
print(cmdline)
if not cmdline:
return 1
ret = server.runCommand(cmdline)
@@ -93,20 +68,9 @@ def main (server, eventHandler):
# Use a timeout function for probing the event queue to find out if we
# have a message waiting for us.
gobject.timeout_add (100,
gobject.timeout_add (200,
event_handle_idle_func,
eventHandler,
running_build,
pbar)
try:
gtk.main()
except EnvironmentError as ioerror:
# ignore interrupted io
if ioerror.args[0] == 4:
pass
except KeyboardInterrupt:
pass
finally:
server.runCommand(["stateStop"])
running_build)
gtk.main()

View File

@@ -1,596 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gobject
import gtk
from bb.ui.crumbs.progress import ProgressBar
from bb.ui.crumbs.tasklistmodel import TaskListModel
from bb.ui.crumbs.hobeventhandler import HobHandler
from bb.ui.crumbs.runningbuild import RunningBuildTreeView, RunningBuild
import xmlrpclib
import logging
import Queue
class MainWindow (gtk.Window):
def __init__(self, taskmodel, handler, curr_mach=None, curr_distro=None):
gtk.Window.__init__(self, gtk.WINDOW_TOPLEVEL)
self.model = taskmodel
self.model.connect("tasklist-populated", self.update_model)
self.curr_mach = curr_mach
self.curr_distro = curr_distro
self.handler = handler
self.set_border_width(10)
self.connect("delete-event", gtk.main_quit)
self.set_title("BitBake Image Creator")
self.set_default_size(700, 600)
self.build = RunningBuild()
self.build.connect("build-succeeded", self.running_build_succeeded_cb)
self.build.connect("build-failed", self.running_build_failed_cb)
createview = self.create_build_gui()
buildview = self.view_build_gui()
self.nb = gtk.Notebook()
self.nb.append_page(createview)
self.nb.append_page(buildview)
self.nb.set_current_page(0)
self.nb.set_show_tabs(False)
self.add(self.nb)
self.generating = False
def scroll_tv_cb(self, model, path, it, view):
view.scroll_to_cell(path)
def running_build_failed_cb(self, running_build):
# FIXME: handle this
return
def running_build_succeeded_cb(self, running_build):
label = gtk.Label("Build completed, start another build?")
dialog = gtk.Dialog("Build complete",
self,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT,
(gtk.STOCK_NO, gtk.RESPONSE_NO,
gtk.STOCK_YES, gtk.RESPONSE_YES))
dialog.vbox.pack_start(label)
label.show()
response = dialog.run()
dialog.destroy()
if not response == gtk.RESPONSE_YES:
self.model.reset() # NOTE: really?
self.nb.set_current_page(0)
return
def machine_combo_changed_cb(self, combo, handler):
mach = combo.get_active_text()
if mach != self.curr_mach:
self.curr_mach = mach
handler.set_machine(mach)
def update_machines(self, handler, machines):
active = 0
for machine in machines:
self.machine_combo.append_text(machine)
if machine == self.curr_mach:
self.machine_combo.set_active(active)
active = active + 1
self.machine_combo.connect("changed", self.machine_combo_changed_cb, handler)
def update_distros(self, handler, distros):
# FIXME: when we add UI for changing distro this will be used
return
def data_generated(self, handler):
self.generating = False
def spin_idle_func(self, pbar):
if self.generating:
pbar.pulse()
return True
else:
pbar.hide()
return False
def busy(self, handler):
self.generating = True
pbar = ProgressBar(self)
pbar.connect("delete-event", gtk.main_quit) # NOTE: questionable...
pbar.pulse()
gobject.timeout_add (200,
self.spin_idle_func,
pbar)
def update_model(self, model):
pkgsaz_model = gtk.TreeModelSort(self.model.packages_model())
pkgsaz_model.set_sort_column_id(self.model.COL_NAME, gtk.SORT_ASCENDING)
self.pkgsaz_tree.set_model(pkgsaz_model)
# FIXME: need to implement a custom sort function, as otherwise the column
# is re-ordered when toggling the inclusion state (COL_INC)
pkgsgrp_model = gtk.TreeModelSort(self.model.packages_model())
pkgsgrp_model.set_sort_column_id(self.model.COL_GROUP, gtk.SORT_ASCENDING)
self.pkgsgrp_tree.set_model(pkgsgrp_model)
self.contents_tree.set_model(self.model.contents_model())
self.images_tree.set_model(self.model.images_model())
self.tasks_tree.set_model(self.model.tasks_model())
def reset_clicked_cb(self, button):
label = gtk.Label("Are you sure you want to reset the image contents?")
dialog = gtk.Dialog("Confirm reset", self,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT,
(gtk.STOCK_CANCEL, gtk.RESPONSE_REJECT,
gtk.STOCK_OK, gtk.RESPONSE_ACCEPT))
dialog.vbox.pack_start(label)
label.show()
response = dialog.run()
dialog.destroy()
if (response == gtk.RESPONSE_ACCEPT):
self.model.reset()
return
def bake_clicked_cb(self, button):
if not self.model.targets_contains_image():
label = gtk.Label("No image was selected. Just build the selected packages?")
dialog = gtk.Dialog("Warning, no image selected",
self,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT,
(gtk.STOCK_NO, gtk.RESPONSE_NO,
gtk.STOCK_YES, gtk.RESPONSE_YES))
dialog.vbox.pack_start(label)
label.show()
response = dialog.run()
dialog.destroy()
if not response == gtk.RESPONSE_YES:
return
# Note: We could "squash" the targets list to only include things not brought in by an image
task_list = self.model.get_targets()
if len(task_list):
tasks = " ".join(task_list)
# TODO: show a confirmation dialog
print("Including these extra tasks in IMAGE_INSTALL: %s" % tasks)
else:
return
self.nb.set_current_page(1)
self.handler.run_build(task_list)
return
def advanced_expander_cb(self, expander, param):
return
def images(self):
self.images_tree = gtk.TreeView()
self.images_tree.set_headers_visible(True)
self.images_tree.set_headers_clickable(False)
self.images_tree.set_enable_search(True)
self.images_tree.set_search_column(0)
self.images_tree.get_selection().set_mode(gtk.SELECTION_NONE)
col = gtk.TreeViewColumn('Package')
col1 = gtk.TreeViewColumn('Description')
col2 = gtk.TreeViewColumn('License')
col3 = gtk.TreeViewColumn('Include')
col3.set_resizable(False)
self.images_tree.append_column(col)
self.images_tree.append_column(col1)
self.images_tree.append_column(col2)
self.images_tree.append_column(col3)
cell = gtk.CellRendererText()
cell1 = gtk.CellRendererText()
cell2 = gtk.CellRendererText()
cell3 = gtk.CellRendererToggle()
cell3.set_property('activatable', True)
cell3.connect("toggled", self.toggle_include_cb, self.images_tree)
col.pack_start(cell, True)
col1.pack_start(cell1, True)
col2.pack_start(cell2, True)
col3.pack_start(cell3, True)
col.set_attributes(cell, text=self.model.COL_NAME)
col1.set_attributes(cell1, text=self.model.COL_DESC)
col2.set_attributes(cell2, text=self.model.COL_LIC)
col3.set_attributes(cell3, active=self.model.COL_INC)
self.images_tree.show()
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
scroll.set_shadow_type(gtk.SHADOW_IN)
scroll.add(self.images_tree)
return scroll
def toggle_package(self, path, model):
# Convert path to path in original model
opath = model.convert_path_to_child_path(path)
# current include status
inc = self.model[opath][self.model.COL_INC]
if inc:
self.model.mark(opath)
self.model.sweep_up()
#self.model.remove_package_full(cpath)
else:
self.model.include_item(opath)
return
def remove_package_cb(self, cell, path):
model = self.model.contents_model()
label = gtk.Label("Are you sure you want to remove this item?")
dialog = gtk.Dialog("Confirm removal", self,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT,
(gtk.STOCK_CANCEL, gtk.RESPONSE_REJECT,
gtk.STOCK_OK, gtk.RESPONSE_ACCEPT))
dialog.vbox.pack_start(label)
label.show()
response = dialog.run()
dialog.destroy()
if (response == gtk.RESPONSE_ACCEPT):
self.toggle_package(path, model)
def toggle_include_cb(self, cell, path, tv):
model = tv.get_model()
self.toggle_package(path, model)
def toggle_pkg_include_cb(self, cell, path, tv):
# there's an extra layer of models in the packages case.
sort_model = tv.get_model()
cpath = sort_model.convert_path_to_child_path(path)
self.toggle_package(cpath, sort_model.get_model())
def pkgsaz(self):
self.pkgsaz_tree = gtk.TreeView()
self.pkgsaz_tree.set_headers_visible(True)
self.pkgsaz_tree.set_headers_clickable(True)
self.pkgsaz_tree.set_enable_search(True)
self.pkgsaz_tree.set_search_column(0)
self.pkgsaz_tree.get_selection().set_mode(gtk.SELECTION_NONE)
col = gtk.TreeViewColumn('Package')
col1 = gtk.TreeViewColumn('Description')
col1.set_resizable(True)
col2 = gtk.TreeViewColumn('License')
col2.set_resizable(True)
col3 = gtk.TreeViewColumn('Group')
col4 = gtk.TreeViewColumn('Include')
col4.set_resizable(False)
self.pkgsaz_tree.append_column(col)
self.pkgsaz_tree.append_column(col1)
self.pkgsaz_tree.append_column(col2)
self.pkgsaz_tree.append_column(col3)
self.pkgsaz_tree.append_column(col4)
cell = gtk.CellRendererText()
cell1 = gtk.CellRendererText()
cell1.set_property('width-chars', 20)
cell2 = gtk.CellRendererText()
cell2.set_property('width-chars', 20)
cell3 = gtk.CellRendererText()
cell4 = gtk.CellRendererToggle()
cell4.set_property('activatable', True)
cell4.connect("toggled", self.toggle_pkg_include_cb, self.pkgsaz_tree)
col.pack_start(cell, True)
col1.pack_start(cell1, True)
col2.pack_start(cell2, True)
col3.pack_start(cell3, True)
col4.pack_start(cell4, True)
col.set_attributes(cell, text=self.model.COL_NAME)
col1.set_attributes(cell1, text=self.model.COL_DESC)
col2.set_attributes(cell2, text=self.model.COL_LIC)
col3.set_attributes(cell3, text=self.model.COL_GROUP)
col4.set_attributes(cell4, active=self.model.COL_INC)
self.pkgsaz_tree.show()
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
scroll.set_shadow_type(gtk.SHADOW_IN)
scroll.add(self.pkgsaz_tree)
return scroll
def pkgsgrp(self):
self.pkgsgrp_tree = gtk.TreeView()
self.pkgsgrp_tree.set_headers_visible(True)
self.pkgsgrp_tree.set_headers_clickable(False)
self.pkgsgrp_tree.set_enable_search(True)
self.pkgsgrp_tree.set_search_column(0)
self.pkgsgrp_tree.get_selection().set_mode(gtk.SELECTION_NONE)
col = gtk.TreeViewColumn('Package')
col1 = gtk.TreeViewColumn('Description')
col1.set_resizable(True)
col2 = gtk.TreeViewColumn('License')
col2.set_resizable(True)
col3 = gtk.TreeViewColumn('Group')
col4 = gtk.TreeViewColumn('Include')
col4.set_resizable(False)
self.pkgsgrp_tree.append_column(col)
self.pkgsgrp_tree.append_column(col1)
self.pkgsgrp_tree.append_column(col2)
self.pkgsgrp_tree.append_column(col3)
self.pkgsgrp_tree.append_column(col4)
cell = gtk.CellRendererText()
cell1 = gtk.CellRendererText()
cell1.set_property('width-chars', 20)
cell2 = gtk.CellRendererText()
cell2.set_property('width-chars', 20)
cell3 = gtk.CellRendererText()
cell4 = gtk.CellRendererToggle()
cell4.set_property("activatable", True)
cell4.connect("toggled", self.toggle_pkg_include_cb, self.pkgsgrp_tree)
col.pack_start(cell, True)
col1.pack_start(cell1, True)
col2.pack_start(cell2, True)
col3.pack_start(cell3, True)
col4.pack_start(cell4, True)
col.set_attributes(cell, text=self.model.COL_NAME)
col1.set_attributes(cell1, text=self.model.COL_DESC)
col2.set_attributes(cell2, text=self.model.COL_LIC)
col3.set_attributes(cell3, text=self.model.COL_GROUP)
col4.set_attributes(cell4, active=self.model.COL_INC)
self.pkgsgrp_tree.show()
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
scroll.set_shadow_type(gtk.SHADOW_IN)
scroll.add(self.pkgsgrp_tree)
return scroll
def tasks(self):
self.tasks_tree = gtk.TreeView()
self.tasks_tree.set_headers_visible(True)
self.tasks_tree.set_headers_clickable(False)
self.tasks_tree.set_enable_search(True)
self.tasks_tree.set_search_column(0)
self.tasks_tree.get_selection().set_mode(gtk.SELECTION_NONE)
col = gtk.TreeViewColumn('Package')
col1 = gtk.TreeViewColumn('Description')
col2 = gtk.TreeViewColumn('Include')
col2.set_resizable(False)
self.tasks_tree.append_column(col)
self.tasks_tree.append_column(col1)
self.tasks_tree.append_column(col2)
cell = gtk.CellRendererText()
cell1 = gtk.CellRendererText()
cell2 = gtk.CellRendererToggle()
cell2.set_property('activatable', True)
cell2.connect("toggled", self.toggle_include_cb, self.tasks_tree)
col.pack_start(cell, True)
col1.pack_start(cell1, True)
col2.pack_start(cell2, True)
col.set_attributes(cell, text=self.model.COL_NAME)
col1.set_attributes(cell1, text=self.model.COL_DESC)
col2.set_attributes(cell2, active=self.model.COL_INC)
self.tasks_tree.show()
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
scroll.set_shadow_type(gtk.SHADOW_IN)
scroll.add(self.tasks_tree)
return scroll
def cancel_build(self, button):
label = gtk.Label("Do you really want to stop this build?")
dialog = gtk.Dialog("Cancel build",
self,
gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT,
(gtk.STOCK_NO, gtk.RESPONSE_NO,
gtk.STOCK_YES, gtk.RESPONSE_YES))
dialog.vbox.pack_start(label)
label.show()
response = dialog.run()
dialog.destroy()
if response == gtk.RESPONSE_YES:
self.handler.cancel_build()
return
def view_build_gui(self):
vbox = gtk.VBox(False, 6)
vbox.show()
build_tv = RunningBuildTreeView()
build_tv.show()
build_tv.set_model(self.build.model)
self.build.model.connect("row-inserted", self.scroll_tv_cb, build_tv)
scrolled_view = gtk.ScrolledWindow ()
scrolled_view.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
scrolled_view.add(build_tv)
scrolled_view.show()
vbox.pack_start(scrolled_view, expand=True, fill=True)
hbox = gtk.HBox(False, 6)
hbox.show()
vbox.pack_start(hbox, expand=False, fill=False)
cancel = gtk.Button(stock=gtk.STOCK_CANCEL)
cancel.connect("clicked", self.cancel_build)
cancel.show()
hbox.pack_end(cancel, expand=False, fill=False)
return vbox
def create_build_gui(self):
vbox = gtk.VBox(False, 6)
vbox.show()
hbox = gtk.HBox(False, 6)
hbox.show()
vbox.pack_start(hbox, expand=False, fill=False)
label = gtk.Label("Machine:")
label.show()
hbox.pack_start(label, expand=False, fill=False, padding=6)
self.machine_combo = gtk.combo_box_new_text()
self.machine_combo.set_active(0)
self.machine_combo.show()
self.machine_combo.set_tooltip_text("Selects the architecture of the target board for which you would like to build an image.")
hbox.pack_start(self.machine_combo, expand=False, fill=False, padding=6)
ins = gtk.Notebook()
vbox.pack_start(ins, expand=True, fill=True)
ins.set_show_tabs(True)
label = gtk.Label("Images")
label.show()
ins.append_page(self.images(), tab_label=label)
label = gtk.Label("Tasks")
label.show()
ins.append_page(self.tasks(), tab_label=label)
label = gtk.Label("Packages (by Group)")
label.show()
ins.append_page(self.pkgsgrp(), tab_label=label)
label = gtk.Label("Packages (by Name)")
label.show()
ins.append_page(self.pkgsaz(), tab_label=label)
ins.set_current_page(0)
ins.show_all()
hbox = gtk.HBox()
hbox.show()
vbox.pack_start(hbox, expand=False, fill=False)
label = gtk.Label("Image contents:")
label.show()
hbox.pack_start(label, expand=False, fill=False, padding=6)
con = self.contents()
con.show()
vbox.pack_start(con, expand=True, fill=True)
#advanced = gtk.Expander(label="Advanced")
#advanced.connect("notify::expanded", self.advanced_expander_cb)
#advanced.show()
#vbox.pack_start(advanced, expand=False, fill=False)
hbox = gtk.HBox()
hbox.show()
vbox.pack_start(hbox, expand=False, fill=False)
bake = gtk.Button("Bake")
bake.connect("clicked", self.bake_clicked_cb)
bake.show()
hbox.pack_end(bake, expand=False, fill=False, padding=6)
reset = gtk.Button("Reset")
reset.connect("clicked", self.reset_clicked_cb)
reset.show()
hbox.pack_end(reset, expand=False, fill=False, padding=6)
return vbox
def contents(self):
self.contents_tree = gtk.TreeView()
self.contents_tree.set_headers_visible(True)
self.contents_tree.get_selection().set_mode(gtk.SELECTION_NONE)
# allow searching in the package column
self.contents_tree.set_search_column(0)
col = gtk.TreeViewColumn('Package')
col.set_sort_column_id(0)
col1 = gtk.TreeViewColumn('Brought in by')
col1.set_resizable(True)
col2 = gtk.TreeViewColumn('Remove')
col2.set_expand(False)
self.contents_tree.append_column(col)
self.contents_tree.append_column(col1)
self.contents_tree.append_column(col2)
cell = gtk.CellRendererText()
cell1 = gtk.CellRendererText()
cell1.set_property('width-chars', 20)
cell2 = gtk.CellRendererToggle()
cell2.connect("toggled", self.remove_package_cb)
col.pack_start(cell, True)
col1.pack_start(cell1, True)
col2.pack_start(cell2, True)
col.set_attributes(cell, text=self.model.COL_NAME)
col1.set_attributes(cell1, text=self.model.COL_BINB)
col2.set_attributes(cell2, active=self.model.COL_INC)
self.contents_tree.show()
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
scroll.set_shadow_type(gtk.SHADOW_IN)
scroll.add(self.contents_tree)
return scroll
def main (server, eventHandler):
gobject.threads_init()
gtk.gdk.threads_init()
taskmodel = TaskListModel()
handler = HobHandler(taskmodel, server)
mach = server.runCommand(["getVariable", "MACHINE"])
distro = server.runCommand(["getVariable", "DISTRO"])
window = MainWindow(taskmodel, handler, mach, distro)
window.show_all ()
handler.connect("machines-updated", window.update_machines)
handler.connect("distros-updated", window.update_distros)
handler.connect("generating-data", window.busy)
handler.connect("data-generated", window.data_generated)
pbar = ProgressBar(window)
pbar.connect("delete-event", gtk.main_quit)
try:
# kick the whole thing off
handler.current_command = "findConfigFilesDistro"
server.runCommand(["findConfigFiles", "DISTRO"])
except xmlrpclib.Fault as x:
print("XMLRPC Fault getting commandline:\n %s" % x)
return 1
# This timeout function regularly probes the event queue to find out if we
# have any messages waiting for us.
gobject.timeout_add (100,
handler.event_handle_idle_func,
eventHandler,
window.build,
pbar)
try:
gtk.main()
except EnvironmentError as ioerror:
# ignore interrupted io
if ioerror.args[0] == 4:
pass
finally:
server.runCommand(["stateStop"])


@@ -22,49 +22,15 @@ from __future__ import division
import os
import sys
import itertools
import xmlrpclib
import logging
import progressbar
import bb.msg
from bb import ui
from bb.ui import uihelper
logger = logging.getLogger("BitBake")
interactive = sys.stdout.isatty()
class BBProgress(progressbar.ProgressBar):
def __init__(self, msg, maxval):
self.msg = msg
widgets = [progressbar.Percentage(), ' ', progressbar.Bar(), ' ',
progressbar.ETA()]
parsespin = itertools.cycle( r'|/-\\' )
progressbar.ProgressBar.__init__(self, maxval, [self.msg + ": "] + widgets)
class NonInteractiveProgress(object):
fobj = sys.stdout
def __init__(self, msg, maxval):
self.msg = msg
self.maxval = maxval
def start(self):
self.fobj.write("%s..." % self.msg)
self.fobj.flush()
return self
def update(self, value):
pass
def finish(self):
self.fobj.write("done.\n")
self.fobj.flush()
def new_progress(msg, maxval):
if interactive:
return BBProgress(msg, maxval)
else:
return NonInteractiveProgress(msg, maxval)
def main(server, eventHandler):
def init(server, eventHandler):
# Get values of variables which control our output
includelogs = server.runCommand(["getVariable", "BBINCLUDELOGS"])
@@ -72,13 +38,9 @@ def main(server, eventHandler):
helper = uihelper.BBUIHelper()
console = logging.StreamHandler(sys.stdout)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
console.setFormatter(format)
logger.addHandler(console)
try:
cmdline = server.runCommand(["getCmdLineAction"])
#print cmdline
if not cmdline:
return 1
ret = server.runCommand(cmdline)
@@ -89,9 +51,6 @@ def main(server, eventHandler):
print("XMLRPC Fault getting commandline:\n %s" % x)
return 1
parseprogress = None
cacheprogress = None
shutdown = 0
return_value = 0
while True:
@@ -99,6 +58,7 @@ def main(server, eventHandler):
event = eventHandler.waitEvent(0.25)
if event is None:
continue
#print event
helper.eventHandler(event)
if isinstance(event, bb.runqueue.runQueueExitWait):
if not shutdown:
@@ -107,21 +67,31 @@ def main(server, eventHandler):
activetasks, failedtasks = helper.getTasks()
if activetasks:
print("Waiting for %s active tasks to finish:" % len(activetasks))
for tasknum, task in enumerate(activetasks):
tasknum = 1
for task in activetasks:
print("%s: %s (pid %s)" % (tasknum, activetasks[task]["title"], task))
tasknum = tasknum + 1
if isinstance(event, logging.LogRecord):
if event.levelno >= format.ERROR:
return_value = 1
# For "normal" logging conditions, don't show note logs from tasks
# but do show them if the user has changed the default log level to
# include verbose/debug messages
if logger.getEffectiveLevel() > format.VERBOSE:
if event.taskpid != 0 and event.levelno <= format.NOTE:
continue
logger.handle(event)
if isinstance(event, bb.msg.MsgPlain):
print(event._message)
continue
if isinstance(event, bb.msg.MsgDebug):
print('DEBUG: ' + event._message)
continue
if isinstance(event, bb.msg.MsgNote):
print('NOTE: ' + event._message)
continue
if isinstance(event, bb.msg.MsgWarn):
print('WARNING: ' + event._message)
continue
if isinstance(event, bb.msg.MsgError):
return_value = 1
print('ERROR: ' + event._message)
continue
if isinstance(event, bb.msg.MsgFatal):
return_value = 1
print('FATAL: ' + event._message)
continue
if isinstance(event, bb.build.TaskFailed):
return_value = 1
logfile = event.logfile
@@ -147,47 +117,42 @@ def main(server, eventHandler):
for line in lines:
print(line)
if isinstance(event, bb.build.TaskBase):
logger.info(event._message)
continue
if isinstance(event, bb.event.ParseStarted):
parseprogress = new_progress("Parsing recipes", event.total).start()
print("NOTE: %s" % event._message)
continue
if isinstance(event, bb.event.ParseProgress):
parseprogress.update(event.current)
continue
if isinstance(event, bb.event.ParseCompleted):
parseprogress.finish()
print(("Parsing of %d .bb files complete (%d cached, %d parsed). %d targets, %d skipped, %d masked, %d errors."
% ( event.total, event.cached, event.parsed, event.virtuals, event.skipped, event.masked, event.errors)))
x = event.sofar
y = event.total
if os.isatty(sys.stdout.fileno()):
sys.stdout.write("\rNOTE: Handling BitBake files: %s (%04d/%04d) [%2d %%]" % ( next(parsespin), x, y, x*100//y ) )
sys.stdout.flush()
else:
if x == 1:
sys.stdout.write("Parsing .bb files, please wait...")
sys.stdout.flush()
if x == y:
sys.stdout.write("done.")
sys.stdout.flush()
if x == y:
print(("\nParsing of %d .bb files complete (%d cached, %d parsed). %d targets, %d skipped, %d masked, %d errors."
% ( event.total, event.cached, event.parsed, event.virtuals, event.skipped, event.masked, event.errors)))
continue
if isinstance(event, bb.event.CacheLoadStarted):
cacheprogress = new_progress("Loading cache", event.total).start()
continue
if isinstance(event, bb.event.CacheLoadProgress):
cacheprogress.update(event.current)
continue
if isinstance(event, bb.event.CacheLoadCompleted):
cacheprogress.finish()
print("Loaded %d entries from dependency cache." % event.num_entries)
continue
if isinstance(event, bb.command.CommandCompleted):
if isinstance(event, bb.command.CookerCommandCompleted):
break
if isinstance(event, bb.command.CommandFailed):
return_value = event.exitcode
logger.error("Command execution failed: %s", event.error)
break
if isinstance(event, bb.command.CommandExit):
if isinstance(event, bb.command.CookerCommandSetExitCode):
return_value = event.exitcode
continue
if isinstance(event, bb.command.CookerCommandFailed):
return_value = 1
print("Command execution failed: %s" % event.error)
break
if isinstance(event, bb.cooker.CookerExit):
break
if isinstance(event, bb.event.MultipleProviders):
logger.info("multiple providers are available for %s%s (%s)", event._is_runtime and "runtime " or "",
event._item,
", ".join(event._candidates))
logger.info("consider defining a PREFERRED_PROVIDER entry to match %s", event._item)
print("NOTE: multiple providers are available for %s%s (%s)" % (event._is_runtime and "runtime " or "",
event._item,
", ".join(event._candidates)))
print("NOTE: consider defining a PREFERRED_PROVIDER entry to match %s" % event._item)
continue
if isinstance(event, bb.event.NoProvider):
if event._runtime:
@@ -196,26 +161,9 @@ def main(server, eventHandler):
r = ""
if event._dependees:
logger.error("Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)", r, event._item, ", ".join(event._dependees), r)
print("ERROR: Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)" % (r, event._item, ", ".join(event._dependees), r))
else:
logger.error("Nothing %sPROVIDES '%s'", r, event._item)
continue
if isinstance(event, bb.runqueue.runQueueTaskStarted):
if event.noexec:
tasktype = 'noexec task'
else:
tasktype = 'task'
logger.info("Running %s %s of %s (ID: %s, %s)",
tasktype,
event.stats.completed + event.stats.active +
event.stats.failed + 1,
event.stats.total, event.taskid, event.taskstring)
continue
if isinstance(event, bb.runqueue.runQueueTaskFailed):
logger.error("Task %s (%s) failed with exit code '%s'",
event.taskid, event.taskstring, event.exitcode)
print("ERROR: Nothing %sPROVIDES '%s'" % (r, event._item))
continue
# ignore
@@ -227,12 +175,8 @@ def main(server, eventHandler):
bb.runqueue.runQueueExitWait)):
continue
logger.error("Unknown event: %s", event)
print("Unknown Event: %s" % event)
except EnvironmentError as ioerror:
# ignore interrupted io
if ioerror.args[0] == 4:
pass
except KeyboardInterrupt:
if shutdown == 2:
print("\nThird Keyboard Interrupt, exit.\n")


@@ -44,9 +44,8 @@
"""
from __future__ import division
import logging
import os, sys, curses, itertools, time
import bb
import xmlrpclib
@@ -247,35 +246,29 @@ class NCursesUI:
event = eventHandler.waitEvent(0.25)
if not event:
continue
helper.eventHandler(event)
#mw.appendText("%s\n" % event[0])
if isinstance(event, bb.build.TaskBase):
mw.appendText("NOTE: %s\n" % event._message)
if isinstance(event, logging.LogRecord):
mw.appendText(logging.getLevelName(event.levelno) + ': ' + event.getMessage() + '\n')
if isinstance(event, bb.event.CacheLoadStarted):
self.parse_total = event.total
if isinstance(event, bb.event.CacheLoadProgress):
x = event.current
y = self.parse_total
mw.setStatus("Loading Cache: %s [%2d %%]" % ( next(parsespin), x*100/y ) )
if isinstance(event, bb.event.CacheLoadCompleted):
mw.setStatus("Idle")
mw.appendText("Loaded %d entries from dependency cache.\n"
% ( event.num_entries))
if isinstance(event, bb.event.ParseStarted):
self.parse_total = event.total
if isinstance(event, bb.msg.MsgDebug):
mw.appendText('DEBUG: ' + event._message + '\n')
if isinstance(event, bb.msg.MsgNote):
mw.appendText('NOTE: ' + event._message + '\n')
if isinstance(event, bb.msg.MsgWarn):
mw.appendText('WARNING: ' + event._message + '\n')
if isinstance(event, bb.msg.MsgError):
mw.appendText('ERROR: ' + event._message + '\n')
if isinstance(event, bb.msg.MsgFatal):
mw.appendText('FATAL: ' + event._message + '\n')
if isinstance(event, bb.event.ParseProgress):
x = event.current
y = self.parse_total
mw.setStatus("Parsing Recipes: %s [%2d %%]" % ( next(parsespin), x*100/y ) )
if isinstance(event, bb.event.ParseCompleted):
mw.setStatus("Idle")
mw.appendText("Parsing finished. %d cached, %d parsed, %d skipped, %d masked.\n"
x = event.sofar
y = event.total
if x == y:
mw.setStatus("Idle")
mw.appendText("Parsing finished. %d cached, %d parsed, %d skipped, %d masked."
% ( event.cached, event.parsed, event.skipped, event.masked ))
else:
mw.setStatus("Parsing: %s (%04d/%04d) [%2d %%]" % ( next(parsespin), x, y, x*100//y ) )
# if isinstance(event, bb.build.TaskFailed):
# if event.logfile:
# if data.getVar("BBINCLUDELOGS", d):
@@ -295,16 +288,12 @@ class NCursesUI:
# else:
# bb.msg.error(bb.msg.domain.Build, "see log in %s" % logfile)
if isinstance(event, bb.command.CommandCompleted):
# stop so the user can see the result of the build, but
# also allow them to now exit with a single ^C
shutdown = 2
if isinstance(event, bb.command.CommandFailed):
if isinstance(event, bb.command.CookerCommandCompleted):
exitflag = True
if isinstance(event, bb.command.CookerCommandFailed):
mw.appendText("Command execution failed: %s" % event.error)
time.sleep(2)
exitflag = True
if isinstance(event, bb.command.CommandExit):
exitflag = True
if isinstance(event, bb.cooker.CookerExit):
exitflag = True
@@ -315,18 +304,13 @@ class NCursesUI:
if activetasks:
taw.appendText("Active Tasks:\n")
for task in activetasks.itervalues():
taw.appendText(task["title"] + '\n')
taw.appendText(task["title"])
if failedtasks:
taw.appendText("Failed Tasks:\n")
for task in failedtasks:
taw.appendText(task["title"] + '\n')
taw.appendText(task["title"])
curses.doupdate()
except EnvironmentError as ioerror:
# ignore interrupted io
if ioerror.args[0] == 4:
pass
except KeyboardInterrupt:
if shutdown == 2:
mw.appendText("Third Keyboard Interrupt, exit.\n")
@@ -340,7 +324,7 @@ class NCursesUI:
shutdown = shutdown + 1
pass
def main(server, eventHandler):
def init(server, eventHandler):
if not os.isatty(sys.stdout.fileno()):
print("FATAL: Unable to run 'ncurses' UI without a TTY.")
return
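
mw.setStatus() and friends ultimately repaint fixed regions of a curses screen, which is why the UI refuses to start without a TTY. A minimal sketch of painting such a status line (the status text is made up):

import curses

def demo(stdscr):
    # Draw a one-line, reverse-video status area on the bottom row,
    # in the spirit of mw.setStatus() above.
    height, width = stdscr.getmaxyx()
    status = "Parsing: / (0001/0100) [ 1 %]"
    stdscr.addstr(height - 1, 0, status[:width - 1], curses.A_REVERSE)
    stdscr.refresh()
    stdscr.getch()  # wait for a keypress before restoring the terminal

if __name__ == '__main__':
    curses.wrapper(demo)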


@@ -390,7 +390,7 @@ def running_build_failed_cb (running_build, manager):
print("build failed")
manager.notify_build_failed ()
def main (server, eventHandler):
def init (server, eventHandler):
# Initialise threading...
gobject.threads_init()
gtk.gdk.threads_init()


@@ -37,8 +37,8 @@ class BBUIEventQueue:
self.BBServer = BBServer
self.t = threading.Thread()
self.t.setDaemon(True)
self.t.run = self.startCallbackHandler
self.t.setDaemon(True)
self.t.run = self.startCallbackHandler
self.t.start()
def getEvent(self):
@@ -63,20 +63,17 @@ class BBUIEventQueue:
def queue_event(self, event):
self.eventQueueLock.acquire()
self.eventQueue.append(event)
self.eventQueue.append(pickle.loads(event))
self.eventQueueNotify.set()
self.eventQueueLock.release()
def send_event(self, event):
self.queue_event(pickle.loads(event))
def startCallbackHandler(self):
server = UIXMLRPCServer()
self.host, self.port = server.socket.getsockname()
self.host, self.port = server.socket.getsockname()
server.register_function( self.system_quit, "event.quit" )
server.register_function( self.send_event, "event.send" )
server.register_function( self.queue_event, "event.send" )
server.socket.settimeout(1)
self.EventHandle = self.BBServer.registerEventHandler(self.host, self.port)
@@ -86,7 +83,7 @@ class BBUIEventQueue:
server.handle_request()
server.server_close()
def system_quit( self ):
def system_quit( self ):
"""
Shut down the callback thread
"""
@@ -98,11 +95,11 @@ class BBUIEventQueue:
class UIXMLRPCServer (SimpleXMLRPCServer):
def __init__( self, interface = ("localhost", 0) ):
def __init__( self, interface = ("localhost", 0) ):
self.quit = False
SimpleXMLRPCServer.__init__( self,
interface,
requestHandler=SimpleXMLRPCRequestHandler,
SimpleXMLRPCServer.__init__( self,
interface,
requestHandler=SimpleXMLRPCRequestHandler,
logRequests=False, allow_none=True)
def get_request(self):
@@ -124,4 +121,4 @@ class UIXMLRPCServer (SimpleXMLRPCServer):
if request is None:
return
SimpleXMLRPCServer.process_request(self, request, client_address)
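
BBUIEventQueue pairs a lock-protected list with a threading.Event so a reader can sleep until an event arrives. A stripped-down sketch of that queue/notify shape:

import threading

class SimpleEventQueue(object):
    """Minimal sketch of the BBUIEventQueue locking pattern above."""
    def __init__(self):
        self.eventQueue = []
        self.eventQueueLock = threading.Lock()
        self.eventQueueNotify = threading.Event()

    def queue_event(self, event):
        self.eventQueueLock.acquire()
        self.eventQueue.append(event)
        self.eventQueueNotify.set()   # wake any waiting reader
        self.eventQueueLock.release()

    def getEvent(self):
        self.eventQueueLock.acquire()
        try:
            if not self.eventQueue:
                self.eventQueueNotify.clear()
                return None
            item = self.eventQueue.pop(0)
            if not self.eventQueue:
                self.eventQueueNotify.clear()
            return item
        finally:
            self.eventQueueLock.release()

    def waitEvent(self, delay):
        self.eventQueueNotify.wait(delay)
        return self.getEvent()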


@@ -17,8 +17,6 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import bb.build
class BBUIHelper:
def __init__(self):
self.needUpdate = False
@@ -37,6 +35,16 @@ class BBUIHelper:
self.failed_tasks.append( { 'title' : "%s %s" % (event._package, event._task)})
self.needUpdate = True
# Add runqueue event handling
#if isinstance(event, bb.runqueue.runQueueTaskCompleted):
# a = 1
#if isinstance(event, bb.runqueue.runQueueTaskStarted):
# a = 1
#if isinstance(event, bb.runqueue.runQueueTaskFailed):
# a = 1
#if isinstance(event, bb.runqueue.runQueueExitWait):
# a = 1
def getTasks(self):
self.needUpdate = False
return (self.running_tasks, self.failed_tasks)


@@ -21,14 +21,10 @@ BitBake Utility Functions
import re, fcntl, os, string, stat, shutil, time
import sys
import errno
import logging
import bb
import errno
import bb.msg
from commands import getstatusoutput
from contextlib import contextmanager
logger = logging.getLogger("BitBake.Util")
# Version comparison
separators = ".-"
@@ -94,7 +90,7 @@ def vercmp(ta, tb):
(ea, va, ra) = ta
(eb, vb, rb) = tb
r = int(ea or 0) - int(eb or 0)
r = int(ea)-int(eb)
if (r == 0):
r = vercmp_part(va, vb)
if (r == 0):
@@ -195,10 +191,10 @@ def vercmp_string(val1, val2):
val2 = val2[0].split('.')
# add back decimal point so that .03 does not become "3" !
for x in xrange(1, len(val1)):
for x in range(1, len(val1)):
if val1[x][0] == '0' :
val1[x] = '.' + val1[x]
for x in xrange(1, len(val2)):
for x in range(1, len(val2)):
if val2[x][0] == '0' :
val2[x] = '.' + val2[x]
@@ -215,10 +211,10 @@ def vercmp_string(val1, val2):
val2[-1] += '_' + val2_prepart
# The above code will extend version numbers out so they
# have the same number of digits.
for x in xrange(0, len(val1)):
for x in range(0, len(val1)):
cmp1 = relparse(val1[x])
cmp2 = relparse(val2[x])
for y in xrange(0, 3):
for y in range(0, 3):
myret = cmp1[y] - cmp2[y]
if myret != 0:
__vercmp_cache__[valkey] = myret
@@ -279,7 +275,7 @@ def explode_dep_versions(s):
return r
def join_deps(deps, commasep=True):
def join_deps(deps):
"""
Take the result from explode_dep_versions and generate a dependency string
"""
@@ -289,10 +285,18 @@ def join_deps(deps, commasep=True):
result.append(dep + " (" + deps[dep] + ")")
else:
result.append(dep)
if commasep:
return ", ".join(result)
else:
return " ".join(result)
return ", ".join(result)
def extend_deps(dest, src):
"""
Extend the results from explode_dep_versions by appending all of the items
in the second list, avoiding duplicates.
"""
for dep in src:
if dep not in dest:
dest[dep] = src[dep]
elif dest[dep] != src[dep]:
dest[dep] = src[dep]
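
join_deps and extend_deps both operate on the dictionary produced by explode_dep_versions: dependency name mapped to an optional version constraint. A small worked example of the merge-then-format flow (the package names and versions are made up):

# Dictionaries in the shape explode_dep_versions() is described as returning
deps = {"glibc": ">= 2.10", "libgcc1": None}
extra = {"libstdc++6": None, "glibc": ">= 2.10"}

# extend_deps-style merge: append items from the second dict
for dep in extra:
    if dep not in deps:
        deps[dep] = extra[dep]

# join_deps-style formatting back into a dependency string
result = []
for dep in deps:
    if deps[dep]:
        result.append(dep + " (" + deps[dep] + ")")
    else:
        result.append(dep)
print(", ".join(result))  # e.g. "glibc (>= 2.10), libgcc1, libstdc++6"
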
def _print_trace(body, line):
"""
@@ -300,12 +304,10 @@ def _print_trace(body, line):
"""
# print the environment of the method
min_line = max(1, line-4)
max_line = min(line + 4, len(body))
for i in xrange(min_line, max_line + 1):
if line == i:
logger.error(' *** %.4d:%s', i, body[i-1])
else:
logger.error(' %.4d:%s', i, body[i-1])
max_line = min(line + 4, len(body)-1)
for i in range(min_line, max_line + 1):
bb.msg.error(bb.msg.domain.Util, "\t%.4d:%s" % (i, body[i-1]) )
def better_compile(text, file, realfile, mode = "exec"):
"""
@@ -317,69 +319,50 @@ def better_compile(text, file, realfile, mode = "exec"):
except Exception as e:
# split the text into lines again
body = text.split('\n')
logger.error("Error in compiling python function in %s", realfile)
logger.error(str(e))
bb.msg.error(bb.msg.domain.Util, "Error in compiling python function in: %s" % (realfile))
bb.msg.error(bb.msg.domain.Util, str(e))
if e.lineno:
logger.error("The lines leading to this error were:")
logger.error("\t%d:%s:'%s'", e.lineno, e.__class__.__name__, body[e.lineno-1])
bb.msg.error(bb.msg.domain.Util, "The lines leading to this error were:")
bb.msg.error(bb.msg.domain.Util, "\t%d:%s:'%s'" % (e.lineno, e.__class__.__name__, body[e.lineno-1]))
_print_trace(body, e.lineno)
else:
logger.error("The function causing this error was:")
bb.msg.error(bb.msg.domain.Util, "The function causing this error was:")
for line in body:
logger.error(line)
bb.msg.error(bb.msg.domain.Util, line)
raise
def better_exec(code, context, text, realfile = "<code>"):
def better_exec(code, context, text, realfile):
"""
Similar to better_compile, better_exec will
print the lines that are responsible for the
error.
"""
import bb.parse
if not hasattr(code, "co_filename"):
code = better_compile(code, realfile, realfile)
try:
exec(code, _context, context)
except Exception:
except:
(t, value, tb) = sys.exc_info()
if t in [bb.parse.SkipPackage, bb.build.FuncFailed]:
raise
import traceback
exception = traceback.format_exception_only(t, value)
logger.error('Error executing a python function in %s:\n%s',
realfile, ''.join(exception))
# print the Header of the Error Message
bb.msg.error(bb.msg.domain.Util, "Error in executing python function in: %s" % realfile)
bb.msg.error(bb.msg.domain.Util, "Exception:%s Message:%s" % (t, value))
# Strip 'us' from the stack (better_exec call)
tb = tb.tb_next
textarray = text.split('\n')
linefailed = traceback.tb_lineno(tb)
import traceback
tbextract = traceback.extract_tb(tb)
tbformat = "\n".join(traceback.format_list(tbextract))
logger.error("The stack trace of python calls that resulted in this exception/failure was:")
for line in tbformat.split('\n'):
logger.error(line)
tbextract = "\n".join(traceback.format_list(tbextract))
bb.msg.error(bb.msg.domain.Util, "Traceback:")
for line in tbextract.split('\n'):
bb.msg.error(bb.msg.domain.Util, line)
logger.error("The code that was being executed was:")
_print_trace(textarray, linefailed)
logger.error("(file: '%s', lineno: %s, function: %s)", tbextract[0][0], tbextract[0][1], tbextract[0][2])
# See if this is a function we constructed and has calls back into other functions in
# "text". If so, try and improve the context of the error by diving down the trace
level = 0
nexttb = tb.tb_next
while nexttb is not None:
if tbextract[level][0] == tbextract[level+1][0] and tbextract[level+1][2] == tbextract[level][0]:
_print_trace(textarray, tbextract[level+1][1])
logger.error("(file: '%s', lineno: %s, function: %s)", tbextract[level+1][0], tbextract[level+1][1], tbextract[level+1][2])
else:
break
nexttb = tb.tb_next
level = level + 1
line = traceback.tb_lineno(tb)
bb.msg.error(bb.msg.domain.Util, "The lines leading to this error were:")
_print_trace( text.split('\n'), line )
raise
@@ -389,36 +372,16 @@ def simple_exec(code, context):
def better_eval(source, locals):
return eval(source, _context, locals)
@contextmanager
def fileslocked(files):
"""Context manager for locking and unlocking file locks."""
locks = []
if files:
for lockfile in files:
locks.append(bb.utils.lockfile(lockfile))
yield
for lock in locks:
bb.utils.unlockfile(lock)
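
A self-contained version of the fileslocked() context manager above, with a try/finally added so the locks are released even if the body raises; the lock path is arbitrary:

import fcntl
from contextlib import contextmanager

@contextmanager
def fileslocked(files):
    # Acquire an exclusive flock on each file, yield, then release.
    locks = []
    for name in files:
        lf = open(name, 'a+')
        fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
        locks.append(lf)
    try:
        yield
    finally:
        for lf in locks:
            fcntl.flock(lf.fileno(), fcntl.LOCK_UN)
            lf.close()

with fileslocked(["/tmp/demo.lock"]):
    pass  # exclusive lock held for the duration of the block
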
def lockfile(name, shared=False):
def lockfile(name):
"""
Use the file fn as a lock file, return when the lock has been acquired.
Returns a variable to pass to unlockfile().
"""
dirname = os.path.dirname(name)
mkdirhier(dirname)
if not os.access(dirname, os.W_OK):
logger.error("Unable to acquire lock '%s', directory is not writable",
name)
path = os.path.dirname(name)
if not os.path.isdir(path):
bb.msg.error(bb.msg.domain.Util, "Error, lockfile path does not exist!: %s" % path)
sys.exit(1)
op = fcntl.LOCK_EX
if shared:
op = fcntl.LOCK_SH
while True:
# If we leave the lockfiles lying around there is no problem
# but we should clean up after ourselves. This gives potential
@@ -431,31 +394,25 @@ def lockfile(name, shared=False):
# lock is the most likely to win it.
try:
lf = open(name, 'a+')
fileno = lf.fileno()
fcntl.flock(fileno, op)
statinfo = os.fstat(fileno)
lf = open(name, "a + ")
fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
statinfo = os.fstat(lf.fileno())
if os.path.exists(lf.name):
statinfo2 = os.stat(lf.name)
if statinfo.st_ino == statinfo2.st_ino:
return lf
lf.close()
except Exception:
# File no longer exists or changed, retry
lf.close
except Exception as e:
continue
def unlockfile(lf):
"""
Unlock a file locked using lockfile()
"""
try:
# If we had a shared lock, we need to promote to exclusive before
# removing the lockfile. Attempt this, ignore failures.
fcntl.flock(lf.fileno(), fcntl.LOCK_EX|fcntl.LOCK_NB)
os.unlink(lf.name)
except (IOError, OSError):
pass
os.unlink(lf.name)
fcntl.flock(lf.fileno(), fcntl.LOCK_UN)
lf.close()
lf.close
def md5_file(filename):
"""
@@ -489,25 +446,13 @@ def sha256_file(filename):
s.update(line)
return s.hexdigest()
def preserved_envvars_exported():
"""Variables which are taken from the environment and placed in and exported
from the metadata"""
def preserved_envvars_list():
return [
'BBPATH',
'BB_PRESERVE_ENV',
'BB_ENV_WHITELIST',
'BB_ENV_EXTRAWHITE',
'BB_TASKHASH',
'HOME',
'LOGNAME',
'PATH',
'PWD',
'SHELL',
'TERM',
'USER',
'USERNAME',
]
def preserved_envvars_exported_interactive():
"""Variables which are taken from the environment and placed in and exported
from the metadata, for interactive tasks"""
return [
'COLORTERM',
'DBUS_SESSION_BUS_ADDRESS',
'DESKTOP_SESSION',
@@ -517,26 +462,23 @@ def preserved_envvars_exported_interactive():
'GNOME_KEYRING_SOCKET',
'GPG_AGENT_INFO',
'GTK_RC_FILES',
'HOME',
'LANG',
'LOGNAME',
'PATH',
'PWD',
'SESSION_MANAGER',
'KRB5CCNAME',
'SHELL',
'SSH_AUTH_SOCK',
'TERM',
'USER',
'USERNAME',
'_',
'XAUTHORITY',
'XDG_DATA_DIRS',
'XDG_SESSION_COOKIE',
]
def preserved_envvars():
"""Variables which are taken from the environment and placed in the metadata"""
v = [
'BBPATH',
'BB_PRESERVE_ENV',
'BB_ENV_WHITELIST',
'BB_ENV_EXTRAWHITE',
'LANG',
'_',
]
return v + preserved_envvars_exported() + preserved_envvars_exported_interactive()
def filter_environment(good_vars):
"""
Create a pristine environment for bitbake. This will remove variables that
@@ -553,14 +495,10 @@ def filter_environment(good_vars):
del os.environ[key]
if len(removed_vars):
logger.debug(1, "Removed the following variables from the environment: %s", ", ".join(removed_vars))
bb.msg.debug(1, bb.msg.domain.Util, "Removed the following variables from the environment: %s" % (", ".join(removed_vars)))
return removed_vars
def create_interactive_env(d):
for k in preserved_envvars_exported_interactive():
os.setenv(k, bb.data.getVar(k, d, True))
def clean_environment():
"""
Clean up any spurious environment variables. This will remove any
@@ -570,7 +508,7 @@ def clean_environment():
if 'BB_ENV_WHITELIST' in os.environ:
good_vars = os.environ['BB_ENV_WHITELIST'].split()
else:
good_vars = preserved_envvars()
good_vars = preserved_envvars_list()
if 'BB_ENV_EXTRAWHITE' in os.environ:
good_vars.extend(os.environ['BB_ENV_EXTRAWHITE'].split())
filter_environment(good_vars)
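
clean_environment() builds a whitelist from BB_ENV_WHITELIST (plus BB_ENV_EXTRAWHITE) and hands it to filter_environment(). A reduced sketch of that filter, left inert because it mutates the calling process's environment:

import os

def filter_env(good_vars):
    # Delete every variable not on the whitelist; report what was removed.
    removed = [k for k in list(os.environ) if k not in good_vars]
    for k in removed:
        del os.environ[k]
    return removed

good = os.environ.get('BB_ENV_WHITELIST', 'PATH HOME LANG').split()
# removed = filter_env(good)  # destructive -- uncomment deliberately
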
@@ -593,20 +531,6 @@ def build_environment(d):
if export:
os.environ[var] = bb.data.getVar(var, d, True) or ""
def remove(path, recurse=False):
"""Equivalent to rm -f or rm -rf"""
if not path:
return
import os, errno, shutil, glob
for name in glob.glob(path):
try:
os.unlink(name)
except OSError as exc:
if recurse and exc.errno == errno.EISDIR:
shutil.rmtree(name)
elif exc.errno != errno.ENOENT:
raise
def prunedir(topdir):
# Delete everything reachable from the directory named in 'topdir'.
# CAUTION: This is dangerous!
@@ -632,13 +556,15 @@ def prune_suffix(var, suffixes, d):
return var.replace(suffix, "")
return var
def mkdirhier(directory):
def mkdirhier(dir):
"""Create a directory like 'mkdir -p', but does not complain if
directory already exists like os.makedirs
"""
bb.msg.debug(3, bb.msg.domain.Util, "mkdirhier(%s)" % dir)
try:
os.makedirs(directory)
os.makedirs(dir)
bb.msg.debug(2, bb.msg.domain.Util, "created " + dir)
except OSError as e:
if e.errno != errno.EEXIST:
raise e
@@ -772,23 +698,16 @@ def copyfile(src, dest, newmtime = None, sstat = None):
return False
if stat.S_ISREG(sstat[stat.ST_MODE]):
try:
srcchown = False
if not os.access(src, os.R_OK):
# Make sure we can read it
srcchown = True
os.chmod(src, sstat[stat.ST_MODE] | stat.S_IRUSR)
# For safety copy then move it over.
os.chmod(src, stat.S_IRUSR) # Make sure we can read it
try: # For safety copy then move it over.
shutil.copyfile(src, dest + "#new")
os.rename(dest + "#new", dest)
except Exception as e:
print('copyfile: copy', src, '->', dest, 'failed.', e)
return False
finally:
if srcchown:
os.chmod(src, sstat[stat.ST_MODE])
os.utime(src, (sstat[stat.ST_ATIME], sstat[stat.ST_MTIME]))
os.chmod(src, sstat[stat.ST_MODE])
os.utime(src, (sstat[stat.ST_ATIME], sstat[stat.ST_MTIME]))
else:
#we don't yet handle special, so we need to fall back to /bin/mv
@@ -831,24 +750,13 @@ def init_logger(logger, verbose, debug, debug_domains):
Set verbosity and debug levels in the logger
"""
if verbose:
logger.set_verbose(True)
if debug:
bb.msg.set_debug_level(debug)
elif verbose:
bb.msg.set_verbose(True)
logger.set_debug_level(debug)
else:
bb.msg.set_debug_level(0)
logger.set_debug_level(0)
if debug_domains:
bb.msg.set_debug_domains(debug_domains)
def to_boolean(string, default=None):
if not string:
return default
normalized = string.lower()
if normalized in ("y", "yes", "1", "true"):
return True
elif normalized in ("n", "no", "0", "false"):
return False
else:
raise ValueError("Invalid value for to_boolean: %s" % string)
logger.set_debug_domains(debug_domains)


@@ -1,384 +0,0 @@
#!/usr/bin/python
# -*- coding: iso-8859-1 -*-
#
# progressbar - Text progressbar library for python.
# Copyright (c) 2005 Nilton Volpato
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
"""Text progressbar library for python.
This library provides a text mode progressbar. This is typically used
to display the progress of a long running operation, providing a
visual clue that processing is underway.
The ProgressBar class manages the progress, and the format of the line
is given by a number of widgets. A widget is an object that may
display differently depending on the state of the progress. There are
three types of widget:
- a string, which always shows itself;
- a ProgressBarWidget, which may return a different value every time
its update method is called; and
- a ProgressBarWidgetHFill, which is like ProgressBarWidget, except it
expands to fill the remaining width of the line.
The progressbar module is very easy to use, yet very powerful. It also
automatically supports features like auto-resizing when available.
"""
from __future__ import division
__author__ = "Nilton Volpato"
__author_email__ = "first-name dot last-name @ gmail.com"
__date__ = "2006-05-07"
__version__ = "2.3-dev"
import sys, time, os
from array import array
try:
from fcntl import ioctl
import termios
except ImportError:
pass
import signal
try:
basestring
except NameError:
basestring = (str,)
class ProgressBarWidget(object):
"""This is an element of ProgressBar formatting.
The ProgressBar object will call its update method when an update
is needed. Its size may change between calls, but the results will
not be good if the size changes drastically and repeatedly.
"""
def update(self, pbar):
"""Returns the string representing the widget.
The parameter pbar is a reference to the calling ProgressBar,
where one can access attributes of the class for knowing how
the update must be made.
At least this function must be overridden."""
pass
class ProgressBarWidgetHFill(object):
"""This is a variable width element of ProgressBar formatting.
The ProgressBar object will call its update method, informing it of the
width this object must be made. This is like TeX \\hfill: it will
expand to fill the line. You can use more than one in the same
line, and they will all have the same width, and together will
fill the line.
"""
def update(self, pbar, width):
"""Returns the string representing the widget.
The parameter pbar is a reference to the calling ProgressBar,
where one can access attributes of the class for knowing how
the update must be made. The parameter width is the total
horizontal width the widget must have.
At least this function must be overridden."""
pass
class ETA(ProgressBarWidget):
"Widget for the Estimated Time of Arrival"
def format_time(self, seconds):
return time.strftime('%H:%M:%S', time.gmtime(seconds))
def update(self, pbar):
if pbar.currval == 0:
return 'ETA: --:--:--'
elif pbar.finished:
return 'Time: %s' % self.format_time(pbar.seconds_elapsed)
else:
elapsed = pbar.seconds_elapsed
eta = elapsed * pbar.maxval / pbar.currval - elapsed
return 'ETA: %s' % self.format_time(eta)
class FileTransferSpeed(ProgressBarWidget):
"Widget for showing the transfer speed (useful for file transfers)."
def __init__(self, unit='B'):
self.unit = unit
self.fmt = '%6.2f %s'
self.prefixes = ['', 'K', 'M', 'G', 'T', 'P']
def update(self, pbar):
if pbar.seconds_elapsed < 2e-6:#== 0:
bps = 0.0
else:
bps = pbar.currval / pbar.seconds_elapsed
spd = bps
for u in self.prefixes:
if spd < 1000:
break
spd /= 1000
return self.fmt % (spd, u + self.unit + '/s')
class RotatingMarker(ProgressBarWidget):
"A rotating marker for filling the bar of progress."
def __init__(self, markers='|/-\\'):
self.markers = markers
self.curmark = -1
def update(self, pbar):
if pbar.finished:
return self.markers[0]
self.curmark = (self.curmark + 1) % len(self.markers)
return self.markers[self.curmark]
class Percentage(ProgressBarWidget):
"Just the percentage done."
def update(self, pbar):
return '%3d%%' % pbar.percentage()
class SimpleProgress(ProgressBarWidget):
"Returns what is already done and the total, e.g.: '5 of 47'"
def __init__(self, sep=' of '):
self.sep = sep
def update(self, pbar):
return '%d%s%d' % (pbar.currval, self.sep, pbar.maxval)
class Bar(ProgressBarWidgetHFill):
"The bar of progress. It will stretch to fill the line."
def __init__(self, marker='#', left='|', right='|'):
self.marker = marker
self.left = left
self.right = right
def _format_marker(self, pbar):
if isinstance(self.marker, basestring):
return self.marker
else:
return self.marker.update(pbar)
def update(self, pbar, width):
percent = pbar.percentage()
cwidth = width - len(self.left) - len(self.right)
marked_width = int(percent * cwidth // 100)
m = self._format_marker(pbar)
bar = (self.left + (m * marked_width).ljust(cwidth) + self.right)
return bar
class ReverseBar(Bar):
"The reverse bar of progress, or bar of regress. :)"
def update(self, pbar, width):
percent = pbar.percentage()
cwidth = width - len(self.left) - len(self.right)
marked_width = int(percent * cwidth // 100)
m = self._format_marker(pbar)
bar = (self.left + (m*marked_width).rjust(cwidth) + self.right)
return bar
default_widgets = [Percentage(), ' ', Bar()]
class ProgressBar(object):
"""This is the ProgressBar class, it updates and prints the bar.
A common way of using it is like:
>>> pbar = ProgressBar().start()
>>> for i in xrange(100):
... # do something
... pbar.update(i+1)
...
>>> pbar.finish()
You can also use a progressbar as an iterator:
>>> progress = ProgressBar()
>>> for i in progress(some_iterable):
... # do something
...
But anything you want to do is possible (well, almost anything).
You can supply different widgets of any type in any order. And you
can even write your own widgets! There are many widgets already
shipped and you should experiment with them.
The term_width parameter must be an integer or None. In the latter case
it will try to guess it; if that fails, it will default to 80 columns.
When implementing a widget update method you may access any
attribute or function of the ProgressBar object calling the
widget's update method. The most important attributes you would
like to access are:
- currval: current value of the progress, 0 <= currval <= maxval
- maxval: maximum (and final) value of the progress
- finished: True if the bar has finished (reached 100%), False o/w
- start_time: the time when start() method of ProgressBar was called
- seconds_elapsed: seconds elapsed since start_time
- percentage(): percentage of the progress [0..100]. This is a method.
The attributes above are unlikely to change between different versions,
the other ones may change or cease to exist without notice, so try to rely
only on the ones documented above if you are extending the progress bar.
"""
__slots__ = ('currval', 'fd', 'finished', 'last_update_time', 'maxval',
'next_update', 'num_intervals', 'seconds_elapsed',
'signal_set', 'start_time', 'term_width', 'update_interval',
'widgets', '_iterable')
_DEFAULT_MAXVAL = 100
def __init__(self, maxval=None, widgets=default_widgets, term_width=None,
fd=sys.stderr):
self.maxval = maxval
self.widgets = widgets
self.fd = fd
self.signal_set = False
if term_width is not None:
self.term_width = term_width
else:
try:
self._handle_resize(None, None)
signal.signal(signal.SIGWINCH, self._handle_resize)
self.signal_set = True
except (SystemExit, KeyboardInterrupt):
raise
except:
self.term_width = int(os.environ.get('COLUMNS', 80)) - 1
self.currval = 0
self.finished = False
self.start_time = None
self.last_update_time = None
self.seconds_elapsed = 0
self._iterable = None
def __call__(self, iterable):
try:
self.maxval = len(iterable)
except TypeError:
# If the iterable has no length, then rely on the value provided
# by the user, otherwise fail.
if not (isinstance(self.maxval, (int, long)) and self.maxval > 0):
raise RuntimeError('Could not determine maxval from iterable. '
'You must explicitly provide a maxval.')
self._iterable = iter(iterable)
self.start()
return self
def __iter__(self):
return self
def next(self):
try:
next = self._iterable.next()
self.update(self.currval + 1)
return next
except StopIteration:
self.finish()
raise
def _handle_resize(self, signum, frame):
h, w = array('h', ioctl(self.fd, termios.TIOCGWINSZ, '\0' * 8))[:2]
self.term_width = w
def percentage(self):
"Returns the percentage of the progress."
return self.currval * 100.0 / self.maxval
def _format_widgets(self):
r = []
hfill_inds = []
num_hfill = 0
currwidth = 0
for i, w in enumerate(self.widgets):
if isinstance(w, ProgressBarWidgetHFill):
r.append(w)
hfill_inds.append(i)
num_hfill += 1
elif isinstance(w, basestring):
r.append(w)
currwidth += len(w)
else:
weval = w.update(self)
currwidth += len(weval)
r.append(weval)
for iw in hfill_inds:
widget_width = int((self.term_width - currwidth) // num_hfill)
r[iw] = r[iw].update(self, widget_width)
return r
def _format_line(self):
return ''.join(self._format_widgets()).ljust(self.term_width)
def _next_update(self):
return int((int(self.num_intervals *
(self.currval / self.maxval)) + 1) *
self.update_interval)
def _need_update(self):
"""Returns true when the progressbar should print an updated line.
You can override this method if you want finer grained control over
updates.
The current implementation is optimized to be as fast as possible and
as economical as possible in the number of updates. However, depending
on your usage you may want to do more updates. For instance, if your
progressbar stays in the same percentage for a long time, and you want
to update other widgets, like ETA, then you could return True after
some time has passed with no updates.
Ideally you could call self._format_line() and see if it's different
from the previous _format_line() call, but calling _format_line() takes
around 20 times more time than calling this implementation of
_need_update().
"""
return self.currval >= self.next_update
def update(self, value):
"Updates the progress bar to a new value."
assert 0 <= value <= self.maxval, '0 <= %d <= %d' % (value, self.maxval)
self.currval = value
if not self._need_update():
return
if self.start_time is None:
raise RuntimeError('You must call start() before calling update()')
now = time.time()
self.seconds_elapsed = now - self.start_time
self.next_update = self._next_update()
self.fd.write(self._format_line() + '\r')
self.last_update_time = now
def start(self):
"""Starts measuring time, and prints the bar at 0%.
It returns self so you can use it like this:
>>> pbar = ProgressBar().start()
>>> for i in xrange(100):
... # do something
... pbar.update(i+1)
...
>>> pbar.finish()
"""
if self.maxval is None:
self.maxval = self._DEFAULT_MAXVAL
assert self.maxval > 0
self.num_intervals = max(100, self.term_width)
self.update_interval = self.maxval / self.num_intervals
self.next_update = 0
self.start_time = self.last_update_time = time.time()
self.update(0)
return self
def finish(self):
"""Used to tell the progress is finished."""
self.finished = True
self.update(self.maxval)
self.fd.write('\n')
if self.signal_set:
signal.signal(signal.SIGWINCH, signal.SIG_DFL)
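
Tying the widgets above together, a typical driver loop looks like this (assuming the module is importable as progressbar, which is how knotty imports it earlier in this diff):

import time
from progressbar import ProgressBar, Percentage, Bar, ETA

# The same widget list BBProgress assembles in the knotty diff above.
widgets = [Percentage(), ' ', Bar(), ' ', ETA()]
pbar = ProgressBar(maxval=50, widgets=widgets).start()
for i in range(50):
    time.sleep(0.01)  # stand-in for real work
    pbar.update(i + 1)
pbar.finish()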


@@ -7,7 +7,6 @@
"""PLY grammar file.
"""
import os.path
import sys
import pyshlex
@@ -649,10 +648,7 @@ def p_error(p):
try:
import pyshtables
except ImportError:
outputdir = os.path.dirname(__file__)
if not os.access(outputdir, os.W_OK):
outputdir = ''
yacc.yacc(tabmodule = 'pyshtables', outputdir = outputdir, debug = 0)
yacc.yacc(tabmodule = 'pyshtables')
else:
yacc.yacc(tabmodule = 'pysh.pyshtables', write_tables = 0, debug = 0)
@@ -708,9 +704,6 @@ def format_commands(v):
if v.reverse_status:
name = '!' + name
return [name, format_commands(v.commands)]
elif isinstance(v, Case):
name = ['Case']
name += [v.name, format_commands(v.items)]
elif isinstance(v, SimpleCommand):
name = ['SimpleCommand']
if v.words:


@@ -1,64 +0,0 @@
# You must call this Makefile using the following form:
#
# make
# make html
# make pdf
# make tarball
# make clean
# make publish
#
# "make" creates the HTML, PDF, and tarballs.
# "make html" creates just the HTML
# "make pdf" creates just the PDF
# "make tarball" creates the tarball
# "make clean" removes the HTML and PDF files
# "make publish" pushes the HTML, PDF, figures, and stylesheet to the web server
#
XSLTOPTS = --stringparam html.stylesheet style.css \
--stringparam chapter.autolabel 1 \
--stringparam appendix.autolabel A \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
VER = 1.0
DOC = adt-manual
ALLPREQ = html pdf tarball
TARFILES = adt-manual.html adt-manual.pdf style.css figures/adt-title.png
MANUALS = $(DOC).html $(DOC).pdf
FIGURES = figures
STYLESHEET = *.css
##
# These URIs should be rewritten by your distribution's xml catalog to
# match your locally installed XSL stylesheets.
XSL_BASE_URI = http://docbook.sourceforge.net/release/xsl/current
XSL_XHTML_URI = $(XSL_BASE_URI)/xhtml/docbook.xsl
all: html pdf tarball
pdf:
../tools/poky-docbook-to-pdf adt-manual.xml ../template
html:
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
# xsltproc $(XSLTOPTS) -o adt-manual.html $(XSL_XHTML_URI) adt-manual.xml
xsltproc $(XSLTOPTS) -o adt-manual.html adt-manual-customization.xsl adt-manual.xml
tarball: html
cd $(DOC); tar -cvzf $(DOC).tgz $(TARFILES); cd ..
validate:
xmllint --postvalid --xinclude --noout adt-manual.xml
publish:
scp -r $(MANUALS) $(STYLESHEET) www.yoctoproject.org:/srv/www/www.yoctoproject.org-docs/$(VER)/$(DOC)
scp -r $(FIGURES) www.yoctoproject.org:/srv/www/www.yoctoproject.org-docs/$(VER)/$(DOC)/figures
clean:
rm -f $(MANUALS)

View File

@@ -1,66 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id='using-the-command-line'>
<title>Using the Command Line</title>
<para>
Recall that earlier we talked about how to use an existing toolchain
tarball that had been installed into <filename>/opt/poky</filename>,
which is outside of the Poky build environment
(see <xref linkend='using-an-existing-toolchain-tarball'>
“Using an Existing Toolchain Tarball”</xref>).
And, that sourcing your architecture-specific environment setup script
initializes a suitable development environment.
This setup occurs by adding the compiler, QEMU scripts, QEMU binary,
a special version of <filename>pkgconfig</filename> and other useful
utilities to the <filename>PATH</filename> variable.
Variables to assist pkgconfig and autotools are also defined so that,
for example, <filename>configure.sh</filename> can find pre-generated
test results for tests that need target hardware on which to run.
These conditions allow you to easily use the toolchain outside of the
Poky build environment on both autotools-based projects and
makefile-based projects.
</para>
<section id='autotools-based-projects'>
<title>Autotools-Based Projects</title>
<para>
For an autotools-based project you can use the cross-toolchain by just
passing the appropriate host option to <filename>configure.sh</filename>.
The host option you use is derived from the name of the environment setup
script in <filename>/opt/poky</filename> resulting from unpacking the
cross-toolchain tarball.
For example, the host option for an ARM-based target that uses the GNU EABI
is <filename>armv5te-poky-linux-gnueabi</filename>.
Note that the name of the script is
<filename>environment-setup-armv5te-poky-linux-gnueabi</filename>.
Thus, the following command works:
<literallayout class='monospaced'>
$ configure &dash;&dash;host=armv5te-poky-linux-gnueabi &dash;&dash;with-libtool-sysroot=&lt;sysroot-dir&gt;
</literallayout>
</para>
<para>
This single command updates your project and rebuilds it using the appropriate
cross-toolchain tools.
</para>
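<para>
As an illustrative sketch (the sysroot path is a placeholder), a complete
session after installing the toolchain tarball might look like the following:
<literallayout class='monospaced'>
$ source /opt/poky/environment-setup-armv5te-poky-linux-gnueabi
$ configure &dash;&dash;host=armv5te-poky-linux-gnueabi &dash;&dash;with-libtool-sysroot=&lt;sysroot-dir&gt;
$ make
</literallayout>
</para>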
</section>
<section id='makefile-based-projects'>
<title>Makefile-Based Projects</title>
<para>
For a makefile-based project you use the cross-toolchain by making sure
the Makefile uses the cross tools rather than the native ones.
You can do this by setting the following variables:
<literallayout class='monospaced'>
CC=arm-poky-linux-gnueabi-gcc
LD=arm-poky-linux-gnueabi-ld
CFLAGS="${CFLAGS} &dash;&dash;sysroot=&lt;sysroot-dir&gt;"
CXXFLAGS="${CXXFLAGS} &dash;&dash;sysroot=&lt;sysroot-dir&gt;"
</literallayout>
</para>
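<para>
Equivalently (a sketch; the tool prefix and sysroot path are placeholders),
the variables can be supplied directly on the <filename>make</filename>
command line:
<literallayout class='monospaced'>
$ make CC="arm-poky-linux-gnueabi-gcc &dash;&dash;sysroot=&lt;sysroot-dir&gt;" LD=arm-poky-linux-gnueabi-ld
</literallayout>
</para>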
</section>
</chapter>
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -1,435 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id='adt-eclipse'>
<title>Working Within Eclipse</title>
<para>
The Eclipse IDE is a popular development environment and it fully supports
development using Yocto Project.
When you install and configure the Eclipse Yocto Project Plug-in into
the Eclipse IDE you maximize your Yocto Project design experience.
Installing and configuring the Plug-in results in an environment that
has extensions specifically designed to let you more easily develop software.
These extensions allow for cross-compilation and deployment and execution of
your output into a QEMU emulation session.
You can also perform cross-debugging and profiling.
The environment also has a suite of tools that allows you to perform
remote profiling, tracing, collection of power data, collection of
latency data, and collection of performance data.
</para>
<para>
This section describes how to install and configure the Eclipse IDE
Yocto Plug-in and how to use it to develop your Yocto Project.
</para>
<section id='setting-up-the-eclipse-ide'>
<title>Setting Up the Eclipse IDE</title>
<para>
To develop within the Eclipse IDE you need to do the following:
<orderedlist>
<listitem><para>Be sure the optimal version of Eclipse IDE
is installed.</para></listitem>
<listitem><para>Install required Eclipse plug-ins prior to installing
the Eclipse Yocto Plug-in.</para></listitem>
<listitem><para>Configure the Eclipse Yocto Plug-in.</para></listitem>
</orderedlist>
</para>
<section id='installing-eclipse-ide'>
<title>Installing Eclipse IDE</title>
<para>
It is recommended that you have the Helios 3.6.1 version of the
Eclipse IDE installed on your development system.
If you don't have this version you can find it at
<ulink url='http://www.eclipse.org/downloads'></ulink>.
From that site, choose the Eclipse Classic version.
This version contains the Eclipse Platform, the Java Development
Tools (JDT), and the Plug-in Development Environment.
</para>
<para>
Once you have downloaded the tarball, extract it into a clean
directory and complete the installation.
</para>
<para>
One issue exists that you need to be aware of regarding the Java
Virtual Machine's garbage collection (GC) process.
The GC process does not clean up the permanent generation
space (PermGen).
This space stores meta-data descriptions of classes.
The default value is set too small and it could trigger an
out-of-memory error such as the following:
<literallayout class='monospaced'>
java.lang.OutOfMemoryError: PermGen space
</literallayout>
</para>
<para>
This error causes the application to hang.
</para>
<para>
To fix this issue you can use the &dash;vmargs option when you start
Eclipse to increase the size of the permanent generation space:
<literallayout class='monospaced'>
eclipse &dash;vmargs &dash;XX:PermSize=256M
</literallayout>
</para>
</section>
<section id='installing-required-plug-ins-and-the-eclipse-yocto-plug-in'>
<title>Installing Required Plug-ins and the Eclipse Yocto Plug-in</title>
<para>
Before installing the Yocto Plug-in you need to be sure that the
CDT 7.0, RSE 3.2, and Autotools plug-ins are all installed in the
following order.
After installing these three plug-ins, you can install the
Eclipse Yocto Plug-in.
Use the following URLs for the plug-ins:
<orderedlist>
<listitem><para><emphasis>CDT 7.0</emphasis>
<ulink url='http://download.eclipse.org/tools/cdt/releases/helios/'></ulink>:
For CDT main features select the checkbox so you get all items.
For CDT optional features expand the selections and check
“C/C++ Remote Launch”.</para></listitem>
<listitem><para><emphasis>RSE 3.2</emphasis>
<ulink url='http://download.eclipse.org/tm/updates/3.2'></ulink>:
Check the box next to “TM and RSE Main Features” so you select all
those items.
Note that all items in the main features depend on the 3.2.1 version.
Expand the items under “TM and RSE Uncategorized 3.2.1” and
select the following: “Remote System Explorer End-User Runtime”,
“Remote System Explorer Extended SDK”, “Remote System Explorer User Actions”,
“RSE Core”, “RSE Terminals UI”, and “Target Management Terminal”.</para></listitem>
<listitem><para><emphasis>Autotools</emphasis>
<ulink url='http://download.eclipse.org/technology/linuxtools/update/'></ulink>:
Expand the items under “Linux Tools” and select “Autotools support for
CDT (Incubation)”.</para></listitem>
<listitem><para><emphasis>Yocto Plug-in</emphasis>
<ulink url='http://www.yoctoproject.org/downloads/eclipse-plugin/1.0'></ulink>:
Check the box next to “Development tools &amp; SDKs for Yocto Linux”
to select all the items.</para></listitem>
</orderedlist>
</para>
<para>
Follow these general steps to install a plug-in:
<orderedlist>
<listitem><para>From within the Eclipse IDE select the
“Install New Software” item from the “Help” menu.</para></listitem>
<listitem><para>Click “Add…” in the “Work with:” area.</para></listitem>
<listitem><para>Enter the URL for the repository and leave the “Name”
field blank.</para></listitem>
<listitem><para>Check the boxes next to the software you need to
install and then complete the installation.
For information on the specific software packages you need to include,
see the previous list.</para></listitem>
</orderedlist>
</para>
</section>
<section id='configuring-the-plug-in'>
<title>Configuring the Plug-in</title>
<para>
Configuring the Eclipse Yocto Plug-in involves choosing the Cross
Compiler Options, selecting the Target Architecture, and choosing
the Target Options.
These settings are the default settings for all projects.
You do have opportunities to change them later if you choose to when
you configure the project.
See the “Configuring the Cross-Toolchains” section later in the manual.
</para>
<para>
To start, you need to do the following from within the Eclipse IDE:
<itemizedlist>
<listitem><para>Choose Windows -&gt; Preferences to display
the Preferences Dialog</para></listitem>
<listitem><para>Click “Yocto SDK”</para></listitem>
</itemizedlist>
</para>
<section id='configuring-the-cross-compiler-options'>
<title>Configuring the Cross-Compiler Options</title>
<para>
Choose between SDK Root Mode and Poky Tree Mode for Cross
Compiler Options.
<itemizedlist>
<listitem><para><emphasis>SDK Root Mode</emphasis> Select this mode
when you are not concerned with building an image or you do not have
a Poky build tree on your system.
For example, suppose you are an application developer and do not
need to build an image.
You just want to use an architecture-specific toolchain on an
existing kernel and root filesystem.
When you use SDK Root Mode you are using the toolchain installed
in the <filename>/opt/poky</filename> directory.</para></listitem>
<listitem><para><emphasis>Poky Tree Mode</emphasis> Select this mode
if you are concerned with building images for hardware or your
development environment already has a build tree.
In this case you likely already have a Poky build tree installed on
your system or you (or someone else) will be building one.
When you use the Poky Tree Mode you are using the toolchain bundled
inside the Poky build tree.
If you use this mode you must also supply the Poky Root Location
in the Preferences Dialog.</para></listitem>
</itemizedlist>
</para>
</section>
<section id='configuring-the-sysroot'>
<title>Configuring the Sysroot</title>
<para>
Specify the sysroot, which is used by both the QEMU user-space
NFS boot process and by the cross-toolchain regardless of the
mode you select (SDK Root Mode or Poky Tree Mode).
For example, the sysroot is the location into which you extract the
downloaded image's root filesystem through the ADT Installer.
</para>
</section>
<section id='selecting-the-target-architecture'>
<title>Selecting the Target Architecture</title>
<para>
Use the pull-down Target Architecture menu and select the
target architecture.
</para>
<para>
The Target Architecture is the type of hardware you are
going to use or emulate.
This pull-down menu should have the supported architectures.
If the architecture you need is not listed in the menu then you
will need to re-visit
<xref linkend='adt-prepare'>
“Preparing to Use the Application Development Toolkit (ADT)”</xref>
section earlier in this document.
</para>
</section>
<section id='choosing-the-target-options'>
<title>Choosing the Target Options</title>
<para>
You can choose to emulate hardware using the QEMU emulator, or you
can choose to use actual hardware.
<itemizedlist>
<listitem><para><emphasis>External HW</emphasis> Select this option
if you will be using actual hardware.</para></listitem>
<listitem><para><emphasis>QEMU</emphasis> Select this option if
you will be using the QEMU emulator.
If you are using the emulator you also need to locate the Kernel
and you can specify custom options.</para>
<para>In Poky Tree Mode the kernel you built will be located in the
Poky Build tree in <filename>tmp/deploy/images</filename> directory.
In SDK Root Mode the pre-built kernel you downloaded is located
in the directory you specified when you downloaded the image.</para>
<para>Most custom options are for advanced QEMU users to further
customize their QEMU instance.
These options are specified between paired angled brackets.
Some options must be specified outside the brackets.
In particular, the options <filename>serial</filename>,
<filename>nographic</filename>, and <filename>kvm</filename> must all
be outside the brackets.
Use the <filename>man qemu</filename> command to get help on all the options
and their use.
The following is an example:
<literallayout class='monospaced'>
serial &lt;-m 256 -full-screen&gt;
</literallayout>
</para>
<para>
Regardless of the mode, Sysroot is already defined in the “Sysroot”
field.</para></listitem>
</itemizedlist>
</para>
<para>
Click the “OK” button to save your plug-in configurations.
</para>
</section>
</section>
</section>
<section id='creating-the-project'>
<title>Creating the Project</title>
<para>
You can create two types of projects: Autotools-based or Makefile-based.
This section describes how to create autotools-based projects from within
the Eclipse IDE.
For information on creating projects in a terminal window see
<xref linkend='using-the-command-line'> “Using the Command Line”</xref>
section.
</para>
<para>
To create a project based on a Yocto template and then display the source code,
follow these steps:
<orderedlist>
<listitem><para>Select File -> New -> Project.</para></listitem>
<listitem><para>Double click “C/C++”.</para></listitem>
<listitem><para>Double click “C Project” to create the project.</para></listitem>
<listitem><para>Double click “Yocto SDK Project”.</para></listitem>
<listitem><para>Select “Hello World ANSI C Autotools Project”.
This is an Autotools-based project based on a Yocto Project template.</para></listitem>
<listitem><para>Put a name in the “Project name:” field.</para></listitem>
<listitem><para>Click “Next”.</para></listitem>
<listitem><para>Add information in the “Author” field.</para></listitem>
<listitem><para>Use “GNU General Public License v2.0” for the License.</para></listitem>
<listitem><para>Click “Finish”.</para></listitem>
<listitem><para>Answer “Yes” to the open perspective prompt.</para></listitem>
<listitem><para>In the Project Explorer expand your project.</para></listitem>
<listitem><para>Expand src.</para></listitem>
<listitem><para>Double click on your source file and the code appears
in the window.
This is the template.</para></listitem>
</orderedlist>
</para>
</section>
<section id='configuring-the-cross-toolchains'>
<title>Configuring the Cross-Toolchains</title>
<para>
The previous section, <xref linkend='configuring-the-cross-compiler-options'>
“Configuring the Cross-Compiler Options”</xref>, set up the default project
configurations.
You can change these settings for a given project by following these steps:
<orderedlist>
<listitem><para>Select Project -> Invoke Yocto Tools -> Reconfigure Yocto.
This brings up the project Yocto Settings Dialog.
Settings are inherited from the default project configuration.
The information in this dialogue is identical to that chosen earlier
for the Cross Compiler Option (SDK Root Mode or Poky Tree Mode),
the Target Architecture, and the Target Options.
The settings are inherited from the Yocto Plug-in configuration performed
after installing the plug-in.</para></listitem>
<listitem><para>Select Project -> Reconfigure Project.
This runs the <filename>autogen.sh</filename> in the workspace for your project.
The script runs <filename>libtoolize</filename>, <filename>aclocal</filename>,
<filename>autoconf</filename>, <filename>autoheader</filename>,
<filename>automake &dash;&dash;a</filename>, and
<filename>./configure</filename>.</para></listitem>
</orderedlist>
</para>
</section>
<section id='building-the-project'>
<title>Building the Project</title>
<para>
To build the project, select Project -&gt; Build Project.
You should see the console updated and you can note the cross-compiler you are using.
</para>
</section>
<section id='starting-qemu-in-user-space-nfs-mode'>
<title>Starting QEMU in User Space NFS Mode</title>
<para>
To start the QEMU emulator from within Eclipse, follow these steps:
<orderedlist>
<listitem><para>Select Run -> External Tools -> External Tools Configurations...
This selection brings up the External Tools Configurations Dialogue.</para></listitem>
<listitem><para>Go to the left navigation area and expand Program.
You should find the image listed.
For example, qemu-x86_64-poky-linux.</para></listitem>
<listitem><para>Click on the image.
This brings up a new environment in the main area of the External
Tools Configurations Dialogue.
The Main tab is selected.</para></listitem>
<listitem><para>Click “Run” next.
This brings up a shell window.</para></listitem>
<listitem><para>Enter your host root password in the shell window at the prompt.
This sets up a Tap 0 connection needed for running in user-space NFS mode.</para></listitem>
<listitem><para>Wait for QEMU to launch.</para></listitem>
<listitem><para>Once QEMU launches you need to determine the IP Address
for the user-space NFS.
You can do that by going to a terminal in the QEMU and entering the
<filename>ifconfig</filename> command.</para></listitem>
</orderedlist>
</para>
</section>
<section id='deploying-and-debugging-the-application'>
<title>Deploying and Debugging the Application</title>
<para>
Once QEMU is running you can deploy your application and use the emulator
to perform debugging.
Follow these steps to deploy the application.
<orderedlist>
<listitem><para>Select Run -> Debug Configurations...</para></listitem>
<listitem><para>In the left area expand “C/C++ Remote Application”.</para></listitem>
<listitem><para>Locate your project and select it to bring up a new
tabbed view in the Debug Configurations dialogue.</para></listitem>
<listitem><para>Enter the absolute path into which you want to deploy
the application.
Use the “Remote Absolute File Path for C/C++ Application” field.
For example, enter <filename>/usr/bin/&lt;programname&gt;</filename>.</para></listitem>
<listitem><para>Click on the Debugger tab to see the cross-tool debugger
you are using.</para></listitem>
<listitem><para>Create a new connection to the QEMU instance
by clicking on “new”.</para></listitem>
<listitem><para>Select “TCF”, which means Target Communication Framework.</para></listitem>
<listitem><para>Click “Next”.</para></listitem>
<listitem><para>Clear out the “host name” field and enter the IP Address
determined earlier.</para></listitem>
<listitem><para>Click Finish to close the new connections dialogue.</para></listitem>
<listitem><para>Use the drop-down menu now in the “Connection” field and pick
the IP Address you entered.</para></listitem>
<listitem><para>Click “Debug” to bring up a login screen and login.</para></listitem>
<listitem><para>Accept the debug perspective.</para></listitem>
</orderedlist>
</para>
</section>
<section id='running-user-space-tools'>
<title>Running User-Space Tools</title>
<para>
As mentioned earlier in the manual, several tools exist that enhance
your development experience.
These tools are aids in developing and debugging applications and images.
You can run these user-space tools from within the Yocto Eclipse
Plug-in through the Window -> YoctoTools menu.
</para>
<para>
Once you pick a tool you need to configure it for the remote target.
Every tool needs to have the connection configured.
You must select an existing TCF-based RSE connection to the remote target.
If one does not exist, click "New" to create one.
</para>
<para>
Here are some specifics about the remote tools:
<itemizedlist>
<listitem><para><emphasis>OProfile:</emphasis> Selecting this tool launches
the oprofile-viewer on the local host machine, which connects to the
oprofile-server running on the remote target.
The oprofile-viewer must be installed on the local host machine and the
oprofile-server must be installed on the remote target in order to use
this tool.
You can locate both the viewer and server from
<ulink url='http://git.yoctoproject.org/cgit/cgit.cgi/oprofileui/'></ulink>.
You need to compile and install the oprofile-viewer from the source code
on your local host machine.
The oprofile-server is installed by default in the image.</para></listitem>
<listitem><para><emphasis>Lttng-ust:</emphasis> Selecting this tool runs
"usttrace" on the remote target, transfers the output data back to the
local host machine and uses "lttv-gui" to graphically display the output.
The "lttv-gui" must be installed on the local host machine to use this tool.
For information on how to use "lttng" to trace an application, see
<ulink url='http://lttng.org/files/ust/manual/ust.html'></ulink>.</para>
<para>For "Application" you must supply the absolute path name of the
application to be traced by user mode lttng.
For example, typing <filename>/path/to/foo</filename> triggers
<filename>usttrace /path/to/foo</filename> on the remote target to trace the
program <filename>/path/to/foo</filename>.</para>
<para>"Argument" is passed to <filename>usttrace</filename>
running on the remote target.</para></listitem>
<listitem><para><emphasis>PowerTOP:</emphasis> Selecting this tool runs
"PowerTOP" on the remote target machine and displays the results in a
new view called "powertop".</para>
<para>"Time to gather data(sec):" is the time passed in seconds before data
is gathered from the remote target for analysis.</para>
<para>"show pids in wakeups list:" corresponds to the -p argument
passed to "powertop".</para></listitem>
<listitem><para><emphasis>LatencyTOP and Perf:</emphasis> "LatencyTOP"
identifies system latency, while "perf" monitors the system's
performance counter registers.
Selecting either of these tools causes an RSE terminal view to appear
from which you can run the tools.
Both tools refresh the entire screen to display results while they run.</para></listitem>
</itemizedlist>
</para>
</section>
</chapter>
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -1,117 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id='adt-intro'>
<title>Application Development Toolkit (ADT) User's Guide</title>
<para>
Welcome to the Application Development Toolkit User's Guide. This manual provides
information that lets you get going with the ADT to develop projects using the Yocto
Project.
</para>
<section id='book-intro'>
<title>Introducing the Application Development Toolkit (ADT)</title>
<para>
Fundamentally, the ADT consists of an architecture-specific cross-toolchain and
a matching sysroot that are both built by the Poky build system.
The toolchain and sysroot are based on a metadata configuration and extensions,
which allows you to cross develop for the target on the host machine.
</para>
<para>
Additionally, to provide an effective development platform, the Yocto Project
makes available and suggests other tools as part of the ADT.
These other tools include the Eclipse IDE Yocto Plug-in, an emulator (QEMU),
and various user-space tools that greatly enhance your development experience.
</para>
<para>
The resulting combination of the architecture-specific cross-toolchain and sysroot
along with these additional tools yields a custom-built, cross-development platform
for a user-targeted product.
</para>
<section id='the-cross-toolchain'>
<title>The Cross-Toolchain</title>
<para>
The cross-toolchain consists of a cross-compiler, cross-linker, and cross-debugger
that are all generated through a Poky build that is based on your metadata
configuration or extension for your targeted device.
The cross-toolchain works with a matching target sysroot.
</para>
</section>
<section id='sysroot'>
<title>Sysroot</title>
<para>
The matching target sysroot contains needed headers and libraries for generating
binaries that run on the target architecture.
The sysroot is based on the target root filesystem image that is built by
Poky and uses the same metadata configuration used to build the cross-toolchain.
</para>
</section>
<section id='the-qemu-emulator'>
<title>The QEMU Emulator</title>
<para>
The QEMU emulator allows you to simulate your hardware while running your
application or image.
QEMU can be installed in several ways: as part of the Poky tree, as part of
an ADT installation from a toolchain tarball, or through the ADT Installer.
</para>
</section>
<section id='user-space-tools'>
<title>User-Space Tools</title>
<para>
User-space tools are included as part of the distribution.
You will find these tools helpful during development.
The tools include LatencyTOP, PowerTOP, OProfile, Perf, SystemTap, and Lttng-ust.
These tools are common development tools for the Linux platform.
<itemizedlist>
<listitem><para><emphasis>LatencyTOP</emphasis> LatencyTOP focuses on latency
that causes skips in audio,
stutters in your desktop experience, or situations that overload your server
even when you have plenty of CPU power left.
You can find out more about LatencyTOP at
<ulink url='http://www.latencytop.org/'></ulink>.
</para></listitem>
<listitem><para><emphasis>PowerTOP</emphasis> Helps you determine what
software is using the most power.
You can find out more about PowerTOP at
<ulink url='http://www.linuxpowertop.org/'></ulink>.
</para></listitem>
<listitem><para><emphasis>OProfile</emphasis> A system-wide profiler for Linux
systems that is capable
of profiling all running code at low overhead.
You can find out more about OProfile at
<ulink url='http://oprofile.sourceforge.net/about/'></ulink>.
</para></listitem>
<listitem><para><emphasis>Perf</emphasis> Performance counters for Linux used
to keep track of certain
types of hardware and software events.
For more information on these types of counters see
<ulink url='https://perf.wiki.kernel.org/index.php'></ulink> and click
on “Perf tools.”
</para></listitem>
<listitem><para><emphasis>SystemTap</emphasis> A free software infrastructure
that simplifies
information gathering about a running Linux system.
This information helps you diagnose performance or functional problems.
SystemTap is not available as a user-space tool through the Yocto Eclipse IDE Plug-in.
See <ulink url='http://sourceware.org/systemtap'></ulink> for more information
on SystemTap.
</para></listitem>
<listitem><para><emphasis>Lttng-ust</emphasis> A User-space Tracer designed to
provide detailed information on user-space activity.
See <ulink url='http://lttng.org/ust'></ulink> for more information on Lttng-ust.
</para></listitem>
</itemizedlist>
</para>
</section>
</section>
</chapter>
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -1,8 +0,0 @@
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl" />
<!-- <xsl:param name="generate.toc" select="'article nop'"></xsl:param> -->
</xsl:stylesheet>

View File

@@ -1,70 +0,0 @@
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<book id='adt-manual' lang='en'
xmlns:xi="http://www.w3.org/2003/XInclude"
xmlns="http://docbook.org/ns/docbook"
>
<bookinfo>
<mediaobject>
<imageobject>
<imagedata fileref='figures/adt-title.png'
format='SVG'
align='left' scalefit='1' width='100%'/>
</imageobject>
</mediaobject>
<title></title>
<authorgroup>
<author>
<firstname>Jessica</firstname> <surname>Zhang</surname>
<affiliation>
<orgname>Intel Corporation</orgname>
</affiliation>
<email>jessica.zhang@intel.com</email>
</author>
</authorgroup>
<revhistory>
<revision>
<revnumber>1.0</revnumber>
<date>6 April 2011</date>
<revremark>Initial Document released with Yocto Project 1.0 on 6 April 2011.</revremark>
</revision>
</revhistory>
<copyright>
<year>2010-2011</year>
<holder>Linux Foundation</holder>
</copyright>
<legalnotice>
<para>
Permission is granted to copy, distribute and/or modify this document under
the terms of the <ulink type="http" url="http://creativecommons.org/licenses/by-sa/2.0/uk/">Creative Commons Attribution-Share Alike 2.0 UK: England &amp; Wales</ulink> as published by Creative Commons.
</para>
</legalnotice>
</bookinfo>
<xi:include href="adt-intro.xml"/>
<xi:include href="adt-prepare.xml"/>
<xi:include href="adt-package.xml"/>
<xi:include href="adt-eclipse.xml"/>
<xi:include href="adt-command.xml"/>
<!-- <index id='index'>
<title>Index</title>
</index>
-->
</book>
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -1,82 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id='adt-package'>
<title>Optionally Customizing the Development Packages Installation</title>
<para>
Because the Yocto Project is suited for embedded Linux development it is
likely that you will need to customize your development packages installation.
For example, if you are developing a minimal image then you might not need
certain packages (e.g. graphics support packages).
Thus, you would like to be able to remove those packages from your sysroot.
</para>
<section id='package-management-systems'>
<title>Package Management Systems</title>
<para>
The Yocto Project supports the generation of root filesystem files using
three different Package Management Systems (PMS):
<itemizedlist>
<listitem><para><emphasis>OPKG</emphasis> A less well known PMS whose use
originated in the OpenEmbedded and OpenWrt embedded Linux projects.
This PMS works with files packaged in an <filename>.ipk</filename> format.
See <ulink url='http://en.wikipedia.org/wiki/Opkg'></ulink> for more
information about OPKG.</para></listitem>
<listitem><para><emphasis>RPM</emphasis> A more widely known PMS intended for GNU/Linux
distributions.
This PMS works with files packaged in an <filename>.rpm</filename> format.
The Yocto Project currently installs through this PMS by default.
See <ulink url='http://en.wikipedia.org/wiki/RPM_Package_Manager'></ulink>
for more information about RPM.</para></listitem>
<listitem><para><emphasis>Debian</emphasis> The PMS for Debian-based systems
is built on many PMS tools.
The lower-level PMS tool dpkg forms the base of the Debian PMS.
For information on dpkg see
<ulink url='http://en.wikipedia.org/wiki/Dpkg'></ulink>.</para></listitem>
</itemizedlist>
</para>
</section>
<section id='configuring-the-pms'>
<title>Configuring the PMS</title>
<para>
Whichever PMS you are using you need to be sure that the
<filename>PACKAGE_CLASSES</filename> variable in the <filename>conf/local.conf</filename>
file is set to reflect that system.
The first value you choose for the variable specifies the package file format for the root
filesystem.
Additional values specify additional formats for convenience or testing.
See the configuration file for details.
</para>
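<para>
For instance (a sketch), to make the <filename>.ipk</filename> format the
root filesystem package format while also generating RPM packages for
testing, <filename>conf/local.conf</filename> could contain the following:
<literallayout class='monospaced'>
PACKAGE_CLASSES ?= "package_ipk package_rpm"
</literallayout>
</para>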
<para>
As an example, consider a scenario where you are using OPKG and you want to add
the libglade package to sysroot.
</para>
<para>
First, you should generate the ipk file for the libglade package and add it
into a working opkg repository.
Use these commands:
<literallayout class='monospaced'>
$ bitbake libglade
$ bitbake package-index
</literallayout>
</para>
<para>
Next, source the environment setup script.
Follow that by setting up the installation destination to point to your
sysroot as <filename>&lt;sysroot dir&gt;</filename>.
Finally, have an opkg configuration file <filename>&lt;conf file&gt;</filename>
that corresponds to the opkg repository you have just created.
The following command forms should now work:
<literallayout class='monospaced'>
$ opkg-cl -f &lt;conf file&gt; -o &lt;sysroot dir&gt; update
$ opkg-cl -f &lt;conf file&gt; -o &lt;sysroot dir&gt; --force-overwrite install libglade
$ opkg-cl -f &lt;conf file&gt; -o &lt;sysroot dir&gt; --force-overwrite install libglade-dbg
$ opkg-cl -f &lt;conf file&gt; -o &lt;sysroot dir&gt; --force-overwrite install libglade-dev
</literallayout>
</para>
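<para>
As a sketch of what <filename>&lt;conf file&gt;</filename> might contain
(the server URL, path, and priority are placeholders), an opkg source entry
takes the following form:
<literallayout class='monospaced'>
src/gz all http://&lt;server&gt;/&lt;path-to-ipk-repo&gt;
arch all 1
</literallayout>
</para>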
</section>
</chapter>
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -1,244 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id='adt-prepare'>
<title>Preparing to Use the Application Development Toolkit (ADT)</title>
<para>
In order to use the ADT it must be installed, the environment setup script must be
sourced, and the kernel and filesystem image specific to the target architecture must exist.
This section describes how to install the ADT, set up the environment, and provides
some reference information on kernels and filesystem images.
</para>
<section id='installing-the-adt'>
<title>Installing the ADT</title>
<para>
You can install the ADT in one of three ways.
However, we recommend configuring and running the ADT Installer script.
Running this script automates much of the process for you.
For example, the script allows you to install the QEMU emulator and
user-space NFS, define which root filesystem profiles to download,
and allows you to define the target sysroot location.
</para>
<note>
If you need to generate the ADT tarball you can do so using the following command:
<literallayout class='monospaced'>
$ bitbake adt-installer
</literallayout>
This command generates the file <filename>adt-installer.tar.bz2</filename>
in the <filename>../build/tmp/deploy/sdk</filename> directory.
</note>
<section id='configuring-and-running-the-adt-installer'>
<title>Configuring and Running the ADT Installer</title>
<para>
The ADT Installer is contained in a tarball that can be built using
<filename>bitbake adt-installer</filename>.
Yocto Project has a pre-built ADT Installer tarball that you can download
from <filename>tmp/deploy/sdk</filename> located in the build directory.
</para>
<note>
You can install and run the ADT Installer tarball in any directory you want.
</note>
<para>
Before running the ADT Installer you need to configure it by editing
the <filename>adt-installer.conf</filename> file, which is located in the
directory where the ADT Installer tarball was installed.
Your configurations determine which kernel and filesystem image are downloaded.
The following list describes the variables you can define for the ADT Installer.
For configuration values and restrictions see the comments in
the <filename>adt-installer.conf</filename> file:
<itemizedlist>
<listitem><para><filename>YOCTOADT_IPKG_REPO</filename> This area
includes the IPKG-based packages and the root filesystem upon which
the installation is based.
If you want to set up your own IPKG repository pointed to by
<filename>YOCTOADT_IPKG_REPO</filename>, you need to be sure that the
directory structure follows the same layout as the reference directory
set up at <ulink url='http://adtrepo.yoctoproject.org'></ulink>.
Also, your repository needs to be accessible through HTTP.
</para></listitem>
<listitem><para><filename>YOCTOADT_TARGETS</filename> The machine
target architectures for which you want to set up cross-development
environments.
</para></listitem>
<listitem><para><filename>YOCTOADT_QEMU</filename> Indicates whether
or not to install the emulator QEMU.
</para></listitem>
<listitem><para><filename>YOCTOADT_NFS_UTIL</filename> Indicates whether
or not to install user-mode NFS.
If you plan to use the Yocto Eclipse IDE plug-in against QEMU,
you should install NFS.
<note>
To boot QEMU images using our userspace NFS server, you need
to be running portmap or rpcbind.
If you are running rpcbind, you will also need to add the -i
option when rpcbind starts up.
Please make sure you understand the security implications of doing this.
Your firewall settings may also have to be modified to allow
NFS booting to work.
</note>
</para></listitem>
<listitem><para><filename>YOCTOADT_ROOTFS_&lt;arch&gt;</filename> - The root
filesystem images you want to download.
</para></listitem>
<listitem><para><filename>YOCTOADT_TARGET_SYSROOT_IMAGE_&lt;arch&gt;</filename> - The
root filesystem used to extract and create the target sysroot.
</para></listitem>
<listitem><para><filename>YOCTOADT_TARGET_SYSROOT_LOC_&lt;arch&gt;</filename> - The
location of the target sysroot that will be set up on the development machine.
</para></listitem>
</itemizedlist>
</para>
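<para>
As an illustrative sketch (the values shown are placeholders, not defaults),
an edited <filename>adt-installer.conf</filename> might contain entries such
as the following:
<literallayout class='monospaced'>
YOCTOADT_IPKG_REPO="http://adtrepo.yoctoproject.org/1.0"
YOCTOADT_TARGETS="arm x86"
YOCTOADT_QEMU="Y"
YOCTOADT_NFS_UTIL="Y"
YOCTOADT_ROOTFS_arm="minimal sato-sdk"
YOCTOADT_TARGET_SYSROOT_IMAGE_arm="sato-sdk"
YOCTOADT_TARGET_SYSROOT_LOC_arm="$HOME/test-yocto/arm"
</literallayout>
</para>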
<para>
After you have configured the <filename>adt-installer.conf</filename> file,
run the installer using the following command:
<literallayout class='monospaced'>
$ adt_installer
</literallayout>
</para>
<para>
Once the installer begins to run you are asked whether you want to run in
interactive or silent mode.
If you want to closely monitor the installation then choose “I” for interactive
mode rather than “S” for silent mode.
Follow the prompts from the script to complete the installation.
</para>
<para>
Once the installation completes, the cross-toolchain is installed in
<filename>/opt/poky/$SDKVERSION</filename>.
</para>
<para>
Before using the ADT you need to run the environment setup script for
your target architecture also located in <filename>/opt/poky/$SDKVERSION</filename>.
See the <xref linkend='setting-up-the-environment'>“Setting Up the Environment”</xref>
section for information.
</para>
</section>
<section id='using-an-existing-toolchain-tarball'>
<title>Using an Existing Toolchain Tarball</title>
<para>
If you do not want to use the ADT Installer you can install the toolchain
and the sysroot by hand.
Follow these steps:
<orderedlist>
<listitem><para>Locate and download the architecture-specific toolchain
tarball from <ulink url='http://autobuilder.yoctoproject.org/downloads/yocto-1.0'></ulink>.
Look in the toolchain folder and then open up the folder that matches your
host development system (i.e. 'i686' for 32-bit machines or 'x86_64'
for 64-bit machines).
Then, select the toolchain tarball whose name includes the appropriate
target architecture.
<note>
If you need to build the toolchain tarball use the
<filename>bitbake meta-toolchain</filename> command after you have
sourced the poky-build-init script.
The tarball will be located in the build directory at
<filename>tmp/deploy/sdk</filename> after the build.
</note>
</para></listitem>
<listitem><para>Make sure you are in the root directory and then expand
the tarball.
The tarball expands into the <filename>/opt/poky/$SDKVERSION</filename> directory.
</para></listitem>
<listitem><para>Set up the environment by sourcing the environment set up
script.
See the <xref linkend='setting-up-the-environment'>“Setting Up the Environment”</xref>
for information.
</para></listitem>
</orderedlist>
</para>
</section>
<section id='using-the-toolchain-from-within-the-build-tree'>
<title>Using the Toolchain from Within the Build Tree</title>
<para>
A final way of accessing the toolchain is from the build tree.
The build tree can be set up to contain the architecture-specific cross toolchain.
To populate the build tree with the toolchain you need to run the following command:
<literallayout class='monospaced'>
$ bitbake meta-ide-support
</literallayout>
</para>
<para>
Before running the command you need to be sure that the
<filename>conf/local.conf</filename> file in the build directory has
the desired architecture specified for the <filename>MACHINE</filename>
variable.
See the <filename>local.conf</filename> file for a list of values you
can supply for this variable.
You can populate the build tree with the cross-toolchains for more
than a single architecture.
You just need to edit the <filename>local.conf</filename> file and re-run
the BitBake command.
</para>
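<para>
For example (a sketch), to populate the build tree with the toolchain for
the emulated ARM machine, <filename>conf/local.conf</filename> would contain
the following line before the command is run:
<literallayout class='monospaced'>
MACHINE ?= "qemuarm"
</literallayout>
</para>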
<para>
Once the build tree has the toolchain you need to source the environment
setup script so that you can run the cross-tools without having to locate them.
See the <xref linkend='setting-up-the-environment'>“Setting Up the Environment”</xref>
for information.
</para>
</section>
</section>
<section id='setting-up-the-environment'>
<title>Setting Up the Environment</title>
<para>
Before you can use the cross-toolchain you need to set up the environment by
sourcing the environment setup script.
If you used adt_installer or used an existing ADT tarball to install the ADT,
then you can find this script in the <filename>/opt/poky/$SDKVERSION</filename>
directory.
If you are using the ADT from a Poky build tree, then look in the build
directory in <filename>tmp</filename> for the setup script.
</para>
<para>
Be sure to run the environment setup script that matches the architecture for
which you are developing.
Environment setup scripts begin with the string “environment-setup” and
include the architecture as part of their name.
For example, the environment setup script for a 64-bit IA-based architecture would
be the following:
<literallayout class='monospaced'>
/opt/poky/environment-setup-x86_64-poky-linux
</literallayout>
</para>
</section>
<section id='kernels-and-filesystem-images'>
<title>Kernels and Filesystem Images</title>
<para>
You will need to have a kernel and filesystem image to boot using your
hardware or the QEMU emulator.
That means you either have to build them or know where to get them.
You can find lots of details on how to get or build images and kernels for your
architecture in the "Yocto Project Quick Start" found at
<ulink url='http://www.yoctoproject.org/docs/yocto-quick-start/yocto-project-qs.html'></ulink>.
<note>
Yocto Project provides basic kernels and filesystem images for several
architectures (x86, x86-64, mips, powerpc, and arm) that can be used
unaltered in the QEMU emulator.
These kernels and filesystem images reside in the Yocto Project release
area - <ulink url='http://autobuilder.yoctoproject.org/downloads/yocto-1.0/'></ulink>
and are ideal for experimentation within Yocto Project.
</note>
</para>
</section>
</chapter>
<!--
vim: expandtab tw=80 ts=4
-->

Binary file not shown (image; before: 14 KiB)

Binary file not shown (image; before: 8.4 KiB)

View File

@@ -1,968 +0,0 @@
/*
Generic XHTML / DocBook XHTML CSS Stylesheet.
Browser wrangling and typographic design by
Oyvind Kolas / pippin@gimp.org
Customised for Poky by
Matthew Allum / mallum@o-hand.com
Thanks to:
Liam R. E. Quin
William Skaggs
Jakub Steiner
Structure
---------
The stylesheet is divided into the following sections:
Positioning
Margins, paddings, width, font-size, clearing.
Decorations
Borders, style
Colors
Colors
Graphics
Graphical backgrounds
Nasty IE tweaks
Workarounds needed to make it work in internet explorer,
currently makes the stylesheet non validating, but up until
this point it is validating.
Mozilla extensions
Transparency for footer
Rounded corners on boxes
*/
/*************** /
/ Positioning /
/ ***************/
body {
font-family: Verdana, Sans, sans-serif;
min-width: 640px;
width: 80%;
margin: 0em auto;
padding: 2em 5em 5em 5em;
color: #333;
}
.reviewer {
color: red;
}
h1,h2,h3,h4,h5,h6,h7 {
font-family: Arial, Sans;
color: #00557D;
clear: both;
}
h1 {
font-size: 2em;
text-align: left;
padding: 0em 0em 0em 0em;
margin: 2em 0em 0em 0em;
}
h2.subtitle {
margin: 0.10em 0em 3.0em 0em;
padding: 0em 0em 0em 0em;
font-size: 1.8em;
padding-left: 20%;
font-weight: normal;
font-style: italic;
}
h2 {
margin: 2em 0em 0.66em 0em;
padding: 0.5em 0em 0em 0em;
font-size: 1.5em;
font-weight: bold;
}
h3.subtitle {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
font-size: 142.14%;
text-align: right;
}
h3 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 140%;
font-weight: bold;
}
h4 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 120%;
font-weight: bold;
}
h5 {
margin: 1em 0em 0.5em 0em;
padding: 1em 0em 0em 0em;
font-size: 110%;
font-weight: bold;
}
h6 {
margin: 1em 0em 0em 0em;
padding: 1em 0em 0em 0em;
font-size: 80%;
font-weight: bold;
}
.authorgroup {
background-color: transparent;
background-repeat: no-repeat;
padding-top: 256px;
background-image: url("figures/adt-title.png");
background-position: left top;
margin-top: -256px;
padding-right: 50px;
margin-left: 0px;
text-align: right;
width: 740px;
}
h3.author {
margin: 0em 0em 0em 0em;
padding: 0em 0em 0em 0em;
font-weight: normal;
font-size: 100%;
color: #333;
clear: both;
}
.author tt.email {
font-size: 66%;
}
.titlepage hr {
width: 0em;
clear: both;
}
.revhistory {
padding-top: 2em;
clear: both;
}
.toc,
.list-of-tables,
.list-of-examples,
.list-of-figures {
padding: 1.33em 0em 2.5em 0em;
color: #00557D;
}
.toc p,
.list-of-tables p,
.list-of-figures p,
.list-of-examples p {
padding: 0em 0em 0em 0em;
padding: 0em 0em 0.3em;
margin: 1.5em 0em 0em 0em;
}
.toc p b,
.list-of-tables p b,
.list-of-figures p b,
.list-of-examples p b{
font-size: 100.0%;
font-weight: bold;
}
.toc dl,
.list-of-tables dl,
.list-of-figures dl,
.list-of-examples dl {
margin: 0em 0em 0.5em 0em;
padding: 0em 0em 0em 0em;
}
.toc dt {
margin: 0em 0em 0em 0em;
padding: 0em 0em 0em 0em;
}
.toc dd {
margin: 0em 0em 0em 2.6em;
padding: 0em 0em 0em 0em;
}
div.glossary dl,
div.variablelist dl {
}
.glossary dl dt,
.variablelist dl dt,
.variablelist dl dt span.term {
font-weight: normal;
width: 20em;
text-align: right;
}
.variablelist dl dt {
margin-top: 0.5em;
}
.glossary dl dd,
.variablelist dl dd {
margin-top: -1em;
margin-left: 25.5em;
}
.glossary dd p,
.variablelist dd p {
margin-top: 0em;
margin-bottom: 1em;
}
div.calloutlist table td {
padding: 0em 0em 0em 0em;
margin: 0em 0em 0em 0em;
}
div.calloutlist table td p {
margin-top: 0em;
margin-bottom: 1em;
}
div p.copyright {
text-align: left;
}
div.legalnotice p.legalnotice-title {
margin-bottom: 0em;
}
p {
line-height: 1.5em;
margin-top: 0em;
}
dl {
padding-top: 0em;
}
hr {
border: solid 1px;
}
.mediaobject,
.mediaobjectco {
text-align: center;
}
img {
border: none;
}
ul {
padding: 0em 0em 0em 1.5em;
}
ul li {
padding: 0em 0em 0em 0em;
}
ul li p {
text-align: left;
}
table {
width: 100%;
}
th {
padding: 0.25em;
text-align: left;
font-weight: normal;
vertical-align: top;
}
td {
padding: 0.25em;
vertical-align: top;
}
p a[id] {
margin: 0px;
padding: 0px;
display: inline;
background-image: none;
}
a {
text-decoration: underline;
color: #444;
}
pre {
overflow: auto;
}
a:hover {
text-decoration: underline;
/*font-weight: bold;*/
}
div.informalfigure,
div.informalexample,
div.informaltable,
div.figure,
div.table,
div.example {
margin: 1em 0em;
padding: 1em;
page-break-inside: avoid;
}
div.informalfigure p.title b,
div.informalexample p.title b,
div.informaltable p.title b,
div.figure p.title b,
div.example p.title b,
div.table p.title b{
padding-top: 0em;
margin-top: 0em;
font-size: 100%;
font-weight: normal;
}
.mediaobject .caption,
.mediaobject .caption p {
text-align: center;
font-size: 80%;
padding-top: 0.5em;
padding-bottom: 0.5em;
}
.epigraph {
padding-left: 55%;
margin-bottom: 1em;
}
.epigraph p {
text-align: left;
}
.epigraph .quote {
font-style: italic;
}
.epigraph .attribution {
font-style: normal;
text-align: right;
}
span.application {
font-style: italic;
}
.programlisting {
font-family: monospace;
font-size: 80%;
white-space: pre;
margin: 1.33em 0em;
padding: 1.33em;
}
.tip,
.warning,
.caution,
.note {
margin-top: 1em;
margin-bottom: 1em;
}
/* force full width of table within div */
.tip table,
.warning table,
.caution table,
.note table {
border: none;
width: 100%;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
padding: 0.8em 0.0em 0.0em 0.0em;
margin : 0em 0em 0em 0em;
}
.tip p,
.warning p,
.caution p,
.note p {
margin-top: 0.5em;
margin-bottom: 0.5em;
padding-right: 1em;
text-align: left;
}
.acronym {
text-transform: uppercase;
}
b.keycap,
.keycap {
padding: 0.09em 0.3em;
margin: 0em;
}
.itemizedlist li {
clear: none;
}
.filename {
font-size: medium;
font-family: Courier, monospace;
}
div.navheader, div.heading{
position: absolute;
left: 0em;
top: 0em;
width: 100%;
background-color: #cdf;
width: 100%;
}
div.navfooter, div.footing{
position: fixed;
left: 0em;
bottom: 0em;
background-color: #eee;
width: 100%;
}
div.navheader td,
div.navfooter td {
font-size: 66%;
}
div.navheader table th {
/*font-family: Georgia, Times, serif;*/
/*font-size: x-large;*/
font-size: 80%;
}
div.navheader table {
border-left: 0em;
border-right: 0em;
border-top: 0em;
width: 100%;
}
div.navfooter table {
border-left: 0em;
border-right: 0em;
border-bottom: 0em;
width: 100%;
}
div.navheader table td a,
div.navfooter table td a {
color: #777;
text-decoration: none;
}
/* normal text in the footer */
div.navfooter table td {
color: black;
}
div.navheader table td a:visited,
div.navfooter table td a:visited {
color: #444;
}
/* links in header and footer */
div.navheader table td a:hover,
div.navfooter table td a:hover {
text-decoration: underline;
background-color: transparent;
color: #33a;
}
div.navheader hr,
div.navfooter hr {
display: none;
}
.qandaset tr.question td p {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
}
.qandaset tr.answer td p {
margin: 0em 0em 1em 0em;
padding: 0em 0em 0em 0em;
}
.answer td {
padding-bottom: 1.5em;
}
.emphasis {
font-weight: bold;
}
/************* /
/ decorations /
/ *************/
.titlepage {
}
.part .title {
}
.subtitle {
border: none;
}
/*
h1 {
border: none;
}
h2 {
border-top: solid 0.2em;
border-bottom: solid 0.06em;
}
h3 {
border-top: 0em;
border-bottom: solid 0.06em;
}
h4 {
border: 0em;
border-bottom: solid 0.06em;
}
h5 {
border: 0em;
}
*/
.programlisting {
border: solid 1px;
}
div.figure,
div.table,
div.informalfigure,
div.informaltable,
div.informalexample,
div.example {
border: 1px solid;
}
.tip,
.warning,
.caution,
.note {
border: 1px solid;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
border-bottom: 1px solid;
}
.question td {
border-top: 1px solid black;
}
.answer {
}
b.keycap,
.keycap {
border: 1px solid;
}
div.navheader, div.heading{
border-bottom: 1px solid;
}
div.navfooter, div.footing{
border-top: 1px solid;
}
/********* /
/ colors /
/ *********/
body {
color: #333;
background: white;
}
a {
background: transparent;
}
a:hover {
background-color: #dedede;
}
h1,
h2,
h3,
h4,
h5,
h6,
h7,
h8 {
background-color: transparent;
}
hr {
border-color: #aaa;
}
.tip, .warning, .caution, .note {
border-color: #aaa;
}
.tip table th,
.warning table th,
.caution table th,
.note table th {
border-bottom-color: #aaa;
}
.warning {
background-color: #fea;
}
.caution {
background-color: #fea;
}
.tip {
background-color: #eff;
}
.note {
background-color: #dfc;
}
.glossary dl dt,
.variablelist dl dt,
.variablelist dl dt span.term {
color: #044;
}
div.figure,
div.table,
div.example,
div.informalfigure,
div.informaltable,
div.informalexample {
border-color: #aaa;
}
pre.programlisting {
color: black;
background-color: #fff;
border-color: #aaa;
border-width: 2px;
}
.guimenu,
.guilabel,
.guimenuitem {
background-color: #eee;
}
b.keycap,
.keycap {
background-color: #eee;
border-color: #999;
}
div.navheader {
border-color: black;
}
div.navfooter {
border-color: black;
}
/*********** /
/ graphics /
/ ***********/
/*
body {
background-image: url("images/body_bg.jpg");
background-attachment: fixed;
}
.navheader,
.note,
.tip {
background-image: url("images/note_bg.jpg");
background-attachment: fixed;
}
.warning,
.caution {
background-image: url("images/warning_bg.jpg");
background-attachment: fixed;
}
.figure,
.informalfigure,
.example,
.informalexample,
.table,
.informaltable {
background-image: url("images/figure_bg.jpg");
background-attachment: fixed;
}
*/
h1,
h2,
h3,
h4,
h5,
h6,
h7{
}
/*
Example of how to stick an image as part of the title.
div.article .titlepage .title
{
background-image: url("figures/white-on-black.png");
background-position: center;
background-repeat: repeat-x;
}
*/
div.preface .titlepage .title,
div.colophon .title,
div.chapter .titlepage .title,
div.article .titlepage .title
{
}
div.section div.section .titlepage .title,
div.sect2 .titlepage .title {
background: none;
}
h1.title {
background-color: transparent;
background-image: url("figures/yocto-project-bw.png");
background-repeat: no-repeat;
height: 256px;
text-indent: -9000px;
overflow:hidden;
}
h2.subtitle {
background-color: transparent;
text-indent: -9000px;
overflow:hidden;
width: 0px;
display: none;
}
/*************************************** /
/ pippin.gimp.org specific alterations /
/ ***************************************/
/*
div.heading, div.navheader {
color: #777;
font-size: 80%;
padding: 0;
margin: 0;
text-align: left;
position: absolute;
top: 0px;
left: 0px;
width: 100%;
height: 50px;
background: url('/gfx/heading_bg.png') transparent;
background-repeat: repeat-x;
background-attachment: fixed;
border: none;
}
div.heading a {
color: #444;
}
div.footing, div.navfooter {
border: none;
color: #ddd;
font-size: 80%;
text-align:right;
width: 100%;
padding-top: 10px;
position: absolute;
bottom: 0px;
left: 0px;
background: url('/gfx/footing_bg.png') transparent;
}
*/
/****************** /
/ nasty ie tweaks /
/ ******************/
/*
div.heading, div.navheader {
width:expression(document.body.clientWidth + "px");
}
div.footing, div.navfooter {
width:expression(document.body.clientWidth + "px");
margin-left:expression("-5em");
}
body {
padding:expression("4em 5em 0em 5em");
}
*/
/**************************************** /
/ mozilla vendor specific css extensions /
/ ****************************************/
/*
div.navfooter, div.footing{
-moz-opacity: 0.8em;
}
div.figure,
div.table,
div.informalfigure,
div.informaltable,
div.informalexample,
div.example,
.tip,
.warning,
.caution,
.note {
-moz-border-radius: 0.5em;
}
b.keycap,
.keycap {
-moz-border-radius: 0.3em;
}
*/
table tr td table tr td {
display: none;
}
hr {
display: none;
}
table {
border: 0em;
}
.photo {
float: right;
margin-left: 1.5em;
margin-bottom: 1.5em;
margin-top: 0em;
max-width: 17em;
border: 1px solid gray;
padding: 3px;
background: white;
}
.seperator {
padding-top: 2em;
clear: both;
}
#validators {
margin-top: 5em;
text-align: right;
color: #777;
}
@media print {
body {
font-size: 8pt;
}
.noprint {
display: none;
}
}
.tip,
.note {
background: #666666;
color: #fff;
padding: 20px;
margin: 20px;
}
.tip h3,
.note h3 {
padding: 0em;
margin: 0em;
font-size: 2em;
font-weight: bold;
color: #fff;
}
.tip a,
.note a {
color: #fff;
text-decoration: underline;
}

View File

@@ -1,58 +0,0 @@
# You must call this Makefile using the following form:
#
# make
# make html
# make pdf
# make tarball
# make clean
# make publish
#
# "make" creates the HTML, PDF, and tarballs.
# "make html" creates just the HTML
# "make pdf" creates just the PDF
# "make tarball" creates the tarball
# "make clean" removes the HTML and PDF files
# "make publish" pushes the HTML, PDF, figures, and stylesheet to the web server
#
XSLTOPTS = --stringparam html.stylesheet style.css \
--stringparam chapter.autolabel 1 \
--stringparam section.autolabel 1 \
--stringparam section.label.includes.component.label 1 \
--xinclude
VER = 1.0
DOC = bsp-guide
ALLPREQ = html pdf tarball
TARFILES = bsp-guide.html bsp-guide.pdf style.css figures/bsp-title.png
MANUALS = $(DOC).html $(DOC).pdf
FIGURES = figures
STYLESHEET = *.css
##
# These URIs should be rewritten by your distribution's xml catalog to
# match your locally installed XSL stylesheets.
XSL_BASE_URI = http://docbook.sourceforge.net/release/xsl/current
XSL_XHTML_URI = $(XSL_BASE_URI)/xhtml/docbook.xsl
all: html pdf tarball
pdf:
../tools/poky-docbook-to-pdf bsp-guide.xml ../template
html:
# See http://www.sagehill.net/docbookxsl/HtmlOutput.html
xsltproc $(XSLTOPTS) -o bsp-guide.html bsp-guide-customization.xsl bsp-guide.xml
tarball: html
cd $(DOC); tar -cvzf $(DOC).tgz $(TARFILES); cd ..
validate:
xmllint --postvalid --xinclude --noout bsp-guide.xml
publish:
scp -r $(MANUALS) $(STYLESHEET) www.yoctoproject.org:/srv/www/www.yoctoproject.org-docs/$(VER)/$(DOC)
scp -r $(FIGURES) www.yoctoproject.org:/srv/www/www.yoctoproject.org-docs/$(VER)/$(DOC)/figures
clean:
rm -f $(MANUALS)

View File

@@ -1,6 +0,0 @@
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl" />
</xsl:stylesheet>

Some files were not shown because too many files have changed in this diff.