Compare commits


47 Commits

Author SHA1 Message Date
Marcin Juszkiewicz
a340e3c7dc libice-native: update to 1.0.4 2009-03-12 09:04:45 +01:00
Koen Kooi
bd417f959e package bbclass: add an 'allow_links' param to get symlinks packaged, useful for splitting out libraries 2009-03-11 12:08:35 +01:00
Robert Schuster
947e8d5fd6 base.bbclass: Add subdir feature to SRC_URI entries (from OE) 2009-03-11 12:04:16 +01:00
Marcin Juszkiewicz
130ccf8b72 libice-native: added 1.0.3 2009-03-10 12:57:22 +01:00
Marcin Juszkiewicz
d0db241a0a libxt-native: added 1.0.5 2009-03-10 12:57:22 +01:00
Marcin Juszkiewicz
84d814b5e8 libsm-native: added 1.0.3 2009-03-10 12:57:22 +01:00
Marcin Juszkiewicz
f1a5c811b5 libxdmcp-native: fix PROVIDES to empty value (from trunk) 2009-03-10 12:57:21 +01:00
Marcin Juszkiewicz
f5b492949e libx11-native: fix PROVIDES to empty value (from trunk) 2009-03-10 12:57:21 +01:00
Marcin Juszkiewicz
dce1887d8b checksums.ini: merge with trunk 2009-02-26 16:32:49 +01:00
Marcin Juszkiewicz
deddd3a450 shared-mime-info: fixed dependencies for native version (from trunk) 2009-02-23 17:06:59 +01:00
Marcin Juszkiewicz
2a643be89d ldconfig-native: set $S to proper value 2009-02-23 11:32:32 +01:00
Marcin Juszkiewicz
2b91ffc371 bitbake.conf: update Poky Maintainer name to "Poky Team <poky@openedhand.com>" (from trunk) 2009-02-20 17:33:33 +01:00
Marcin Juszkiewicz
f29120d571 base.bbclass, bitbake.conf: add support for BP/BPN variables (backported from trunk)
commit 94c895aad5
Author: Richard Purdie <rpurdie@linux.intel.com>
Date:   Fri Jan 2 10:15:45 2009 +0000

bitbake.conf: Create BPN variable containing the pruned version of
PN with various suffixes removed and use this for S and FILESPATH.
This uses naming from OE but with improved code
2009-02-20 17:33:13 +01:00
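BP/BPN in brief: BPN is PN with build-variant suffixes pruned, and BP is "${BPN}-${PV}", so S and FILESPATH can be shared between a recipe and its -native/-cross variants. A minimal Python sketch of the pruning idea (the suffix list here is an assumption for illustration, not the exact bitbake.conf value):

    # Hypothetical sketch of BPN derivation: strip known build-variant
    # suffixes from PN. The real suffix list lives in bitbake.conf.
    SPECIAL_SUFFIXES = ("-native", "-cross", "-initial", "-intermediate", "-sdk")

    def prune_pn(pn):
        for suffix in SPECIAL_SUFFIXES:
            if pn.endswith(suffix):
                return pn[:-len(suffix)]
        return pn

    assert prune_pn("zlib-native") == "zlib"   # variant maps to base name
    assert prune_pn("zlib") == "zlib"          # plain names pass through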
Marcin Juszkiewicz
b5a58d32f0 bitbake.conf: add IMAGE_ROOTFS_SIZE (from OE) 2009-01-23 17:48:38 +00:00
Richard Purdie
7361e38803 exmap-console: Backport DEPENDS fix from trunk (from hrw) 2009-01-23 16:51:15 +00:00
Richard Purdie
2cf9a10026 bitbake.conf/image.bbclass: Backport the magic rootfs sizing code from trunk (from hrw) 2009-01-23 16:27:25 +00:00
Richard Purdie
ebe7499a5e libtool-native: Stage libltdl headers (backport from master) 2009-01-02 12:09:34 +00:00
Ross Burton
cd4c255c94 hicolor-icon-theme: add size/stock directories to hicolor for compatibility with OpenMoko 2008-12-23 15:19:28 +00:00
Ross Burton
e6fe4911cf hicolor-icon-theme: ship a custom index.theme which includes the Hildon icon sizes 2008-12-16 18:06:55 +00:00
Ross Burton
0111b3ac5f openmoko-icon-theme-standard2: add a compat package which symlinks the new icons into hicolor 2008-12-16 15:37:16 +00:00
Ross Burton
0e3639209f Bump matchbox-desktop srcrev
This version of Matchbox Desktop correctly recurses when loading desktop files,
so it handles Hildon-style application folders.
2008-12-04 15:28:13 +00:00
Ross Burton
fb265757b9 clutter: fix -examples package naming 2008-12-01 17:56:39 +00:00
Richard Purdie
893cc30c71 clutter-0.6: Bump PR
git-svn-id: https://svn.o-hand.com/repos/poky/trunk@5411 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-12-01 17:56:35 +00:00
Richard Purdie
e43ee1281d clutter-0.6: Bump PR
git-svn-id: https://svn.o-hand.com/repos/poky/trunk@5410 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-12-01 17:56:35 +00:00
Richard Purdie
7edfdb4961 clutter-cairo: Set S correctly
git-svn-id: https://svn.o-hand.com/repos/poky/trunk@5409 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-12-01 17:56:34 +00:00
Robert Bragg
bfb1d06d69 Some build fixes for clutter-cairo-0.6, and clutter-gtk_svn
git-svn-id: https://svn.o-hand.com/repos/poky/trunk@5394 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-12-01 17:56:34 +00:00
Robert Bragg
eea56a7f4d Merge Clutter packaging rewrite from master.
- This adds clutter-{gst,gtk,cairo}-0.8 recipes and clutter-{gst,gtk,cairo}-0.6 recipes.
- It removes the 0.4 recipes.
- It renames things so that the major.minor revision is now part of the package name.
  This lets us correctly specify SRCREVs for each branch, and allows parallel install.
- All the SRCREVs have been updated to the heads of their corresponding branches

git-svn-id: https://svn.o-hand.com/repos/poky/trunk@5384 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-12-01 17:56:33 +00:00
Ross Burton
13ac7cba37 Add gst-plugin-pulse 2008-12-01 17:56:29 +00:00
Ross Burton
1a5ccf1dde Fix compile failures on Intrepid 2008-12-01 17:56:15 +00:00
Ross Burton
64c2e76123 Fix qemu build on 2.6.27
Linux 2.6.27 removed linux/dirent.h, which qemu included. Change this to
include dirent.h.
2008-12-01 17:54:49 +00:00
Ross Burton
83b96f079c Add gnome-icon-theme
This is required by some of the OpenMoko packages.  No idea how it didn't get
into elroy...
2008-10-30 11:40:06 +00:00
Ross Burton
cb404c508d poky-image-openmoko.bb: remove matchbox-applet-startup-monitor
We don't ship matchbox-applet-startup-monitor, so remove it.
2008-10-23 15:08:02 +01:00
Ross Burton
b797b8034c poky-image-openmoko.bb: remove unused PR 2008-10-23 15:07:42 +01:00
Richard Purdie
d39962a9fa Merged revisions 5355 via svnmerge from
https://svn.o-hand.com/repos/poky/trunk

........
  r5355 | josh | 2008-10-01 00:05:39 +0100 (Wed, 01 Oct 2008) | 2 lines
  
  Fix a typo in the COMPATIBLE_MACHINE list
........


git-svn-id: https://svn.o-hand.com/repos/poky/branches/elroy@5377 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-10-02 08:49:56 +00:00
Richard Purdie
ac9664c930 Merged revisions 5310 via svnmerge from
https://svn.o-hand.com/repos/poky/trunk

........
  r5310 | richard | 2008-09-29 17:15:07 +0100 (Mon, 29 Sep 2008) | 1 line
  
  omap-3430: Generate jffs2 images
........


git-svn-id: https://svn.o-hand.com/repos/poky/branches/elroy@5312 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-09-29 16:54:34 +00:00
Richard Purdie
dcf7185bab Merged revisions 5309 via svnmerge from
https://svn.o-hand.com/repos/poky/trunk

........
  r5309 | richard | 2008-09-29 17:06:49 +0100 (Mon, 29 Sep 2008) | 1 line
  
  linux-omap-3430ldp: Build jffs2 support in.
........


git-svn-id: https://svn.o-hand.com/repos/poky/branches/elroy@5311 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-09-29 16:53:36 +00:00
Ross Burton
c40aba1b89 Merged revisions 5306 via svnmerge from
https://svn.o-hand.com/repos/poky/trunk

........
  r5306 | ross | 2008-09-29 13:49:37 +0100 (Mon, 29 Sep 2008) | 1 line
  
  libnotify: move from meta-extras to meta-openmoko
........


git-svn-id: https://svn.o-hand.com/repos/poky/branches/elroy@5307 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-09-29 12:50:33 +00:00
Ross Burton
c2d945ef1c Merged revisions 5294 via svnmerge from
https://svn.o-hand.com/repos/poky/trunk

........
  r5294 | ross | 2008-09-26 13:54:49 +0100 (Fri, 26 Sep 2008) | 1 line
  
  poky-fixed-revisions.inc: bump matchbox-wm-2 for work area fixes
........


git-svn-id: https://svn.o-hand.com/repos/poky/branches/elroy@5295 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-09-26 12:58:00 +00:00
Ross Burton
65dc3dd729 Merged revisions 5287 via svnmerge from
https://svn.o-hand.com/repos/poky/trunk

........
  r5287 | ross | 2008-09-26 10:29:52 +0100 (Fri, 26 Sep 2008) | 1 line
  
  clutter.inc: use eglnative on 3430sdp
........


git-svn-id: https://svn.o-hand.com/repos/poky/branches/elroy@5289 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-09-26 09:32:57 +00:00
Ross Burton
dbc3a1ecb3 Blocked revisions 5237-5277 via svnmerge
........
  r5237 | rob | 2008-09-23 11:38:54 +0100 (Tue, 23 Sep 2008) | 1 line
  
  glib-2.0: Add 2.18.1 (new stable release)
........
  r5238 | rob | 2008-09-23 12:21:52 +0100 (Tue, 23 Sep 2008) | 1 line
  
  glib-2.0: Update default revision to 2.18.1
........
  r5239 | rob | 2008-09-23 12:22:01 +0100 (Tue, 23 Sep 2008) | 1 line
  
  glib-2.0: Remove old bleeding version.
........
  r5240 | hrw | 2008-09-23 12:26:18 +0100 (Tue, 23 Sep 2008) | 1 line
  
  checksums.ini: added some entries
........
  r5241 | hrw | 2008-09-23 13:54:15 +0100 (Tue, 23 Sep 2008) | 1 line
  
  metacity: do not require gdk-pixbuf-csource
........
  r5242 | rob | 2008-09-23 15:18:17 +0100 (Tue, 23 Sep 2008) | 1 line
  
  iso-codes: Add iso-codes package
........
  r5243 | rob | 2008-09-23 15:23:46 +0100 (Tue, 23 Sep 2008) | 1 line
  
  intltool: Update to intltool 0.37.1
........
  r5244 | rob | 2008-09-23 15:29:10 +0100 (Tue, 23 Sep 2008) | 1 line
  
  iso-codes: Make PACKAGE_ARCH=all
........
  r5245 | hrw | 2008-09-23 17:32:24 +0100 (Tue, 23 Sep 2008) | 1 line
  
  mesa-dri: fix packaging so test apps will really land in own package
........
  r5246 | hrw | 2008-09-23 17:32:37 +0100 (Tue, 23 Sep 2008) | 1 line
  
  xf86-video-intel: mark as x86 only
........
  r5247 | hrw | 2008-09-23 17:42:03 +0100 (Tue, 23 Sep 2008) | 1 line
  
  mesa-xlib: added non-dri version of mesa
........
  r5248 | hrw | 2008-09-23 17:42:13 +0100 (Tue, 23 Sep 2008) | 1 line
  
  mesa-dri: make it non-default for targets other than EeePC 901
........
  r5249 | rob | 2008-09-23 17:57:48 +0100 (Tue, 23 Sep 2008) | 1 line
  
  libxklavier: add libxklavier 3.7 (XKB wrapper library)
........
  r5250 | rob | 2008-09-23 17:57:57 +0100 (Tue, 23 Sep 2008) | 3 lines
  
  gtk+: Add gtk+ 2.14.2
  
  (with rebased hardcoded_libtool.patch and new disable-gio-png-sniff-test.diff)
........
  r5251 | hrw | 2008-09-24 09:10:15 +0100 (Wed, 24 Sep 2008) | 1 line
  
  libtiff: set LICENSE
........
  r5252 | tf | 2008-09-24 09:56:27 +0100 (Wed, 24 Sep 2008) | 1 line
  
  added recipe for metacity-clutter
........
  r5253 | ross | 2008-09-24 14:29:51 +0100 (Wed, 24 Sep 2008) | 1 line
  
  gtk-industrial-engine: remove ancient package
........
  r5254 | ross | 2008-09-24 14:46:17 +0100 (Wed, 24 Sep 2008) | 1 line
  
  gtk-clearlooks-engine_0.6.2.bb: remove old version
........
  r5255 | ross | 2008-09-24 14:46:50 +0100 (Wed, 24 Sep 2008) | 1 line
  
  gtk-engines: set PACKAGES_DYNAMIC
........
  r5256 | ross | 2008-09-24 14:49:08 +0100 (Wed, 24 Sep 2008) | 1 line
  
  gtk-theme-darkilouche.bb: depend on gtk-engines so that clearlooks is built, and rdepend on the engine not the theme
........
  r5257 | hrw | 2008-09-24 15:10:25 +0100 (Wed, 24 Sep 2008) | 1 line
  
  gdk-pixbuf-csource: added 2.12.7
........
  r5258 | hrw | 2008-09-24 15:10:41 +0100 (Wed, 24 Sep 2008) | 1 line
  
  metacity(-clutter): use gdk-pixbuf-csource-native
........
  r5259 | hrw | 2008-09-24 15:20:08 +0100 (Wed, 24 Sep 2008) | 1 line
  
  jpeg: added native version
........
  r5260 | hrw | 2008-09-24 15:39:17 +0100 (Wed, 24 Sep 2008) | 1 line
  
  matchbox-themes-gtk: fixed dependencies after clearlooks cleanup
........
  r5261 | ross | 2008-09-24 15:43:58 +0100 (Wed, 24 Sep 2008) | 1 line
  
  gtk-engines-2.10.2: remove old release
........
  r5262 | ross | 2008-09-24 15:46:42 +0100 (Wed, 24 Sep 2008) | 1 line
  
  gtk-engines_2.14.0.bb: add PACKAGES_DYNAMIC
........
  r5263 | ross | 2008-09-24 15:47:09 +0100 (Wed, 24 Sep 2008) | 1 line
  
  gtk-engines-2.12: remove
........
  r5264 | rob | 2008-09-24 19:55:15 +0100 (Wed, 24 Sep 2008) | 1 line
  
  qemu-sdk: Build i386 QEMU for inclusion in the sdk.
........
  r5265 | rob | 2008-09-24 21:15:19 +0100 (Wed, 24 Sep 2008) | 1 line
  
  libx11-trim: Add missing dep on xf86bigfontproto
........
  r5266 | josh | 2008-09-25 10:50:05 +0100 (Thu, 25 Sep 2008) | 3 lines
  
  Initial support for netbooks with a poky-image-netbook(-live) image target.
  This needs much love from folk with UI and WM skills.
........
  r5267 | josh | 2008-09-25 14:27:10 +0100 (Thu, 25 Sep 2008) | 2 lines
  
  Netbooks will use Sato too for now.
........
  r5268 | hrw | 2008-09-25 15:09:07 +0100 (Thu, 25 Sep 2008) | 1 line
  
  qemu targets: added IMAGE_ROOTFS_SIZE for ext3 filesystems
........
  r5269 | hrw | 2008-09-25 15:09:22 +0100 (Thu, 25 Sep 2008) | 7 lines
  
  image.bbclass: make ext2/ext3 images autoresize
  
  The new variable IMAGE_EXTRA_SPACE contains the number of kilobytes to
  be added to the size of IMAGE_ROOTFS. The resulting size is then passed
  to the genext2fs util.
  
  As a result we do not have to specify a size for ROOTFS_SIZE anymore.
........
  r5270 | sameo | 2008-09-25 15:41:56 +0100 (Thu, 25 Sep 2008) | 4 lines
  
  acpid: Initial poky commit
  
  Needed on x86 machines.
........
  r5271 | sameo | 2008-09-25 15:47:22 +0100 (Thu, 25 Sep 2008) | 5 lines
  
  pm-utils: Initial commit
  
  This is a set of scripts usually needed by the ACPI 
  sleep/hibernate hooks.
........
  r5272 | sameo | 2008-09-25 15:58:13 +0100 (Thu, 25 Sep 2008) | 4 lines
  
  eee-acpi-scripts: Initial commit
  
  eeePC specific ACPI hooks.
........
  r5273 | sameo | 2008-09-25 16:06:11 +0100 (Thu, 25 Sep 2008) | 4 lines
  
  eee901: Add acpi and eee-acpi-scripts
  
  We can now suspend/resume the eee901 through the Fn keys.
........
  r5274 | richard | 2008-09-25 16:10:47 +0100 (Thu, 25 Sep 2008) | 1 line
  
  xcb-proto: Fix -dev and -dbg dependencies
........
  r5275 | josh | 2008-09-25 17:38:36 +0100 (Thu, 25 Sep 2008) | 2 lines
  
  Update fixed revision for the latest, greatest, metacity-clutter.
........
  r5276 | sameo | 2008-09-25 18:02:04 +0100 (Thu, 25 Sep 2008) | 2 lines
  
  eee-acpi-scripts: SRCREV should be in the distro config files
........
  r5277 | richard | 2008-09-25 21:38:26 +0100 (Thu, 25 Sep 2008) | 1 line
  
  eee-acpi-scripts: Set PV correctly
........


git-svn-id: https://svn.o-hand.com/repos/poky/branches/elroy@5288 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-09-26 09:30:10 +00:00
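The autoresize scheme described in r5269 above measures the rootfs, adds IMAGE_EXTRA_SPACE kilobytes of headroom, and hands the total to genext2fs. A rough Python sketch under those assumptions (the function name and exact invocation are illustrative, not the actual image.bbclass code):

    import subprocess

    def build_ext2_image(rootfs_dir, output, image_extra_space_kb=10240):
        # Measure the unpacked rootfs in kilobytes.
        used_kb = int(subprocess.check_output(["du", "-ks", rootfs_dir]).split()[0])
        # Add the configured headroom; genext2fs -b takes the size in 1KB blocks.
        size_kb = used_kb + image_extra_space_kb
        subprocess.check_call(["genext2fs", "-b", str(size_kb),
                               "-d", rootfs_dir, output])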
Ross Burton
0d09d7af3e Blocked revisions 5166-5168,5171-5176,5179-5180,5182,5184,5186,5188,5199-5202,5204-5234 via svnmerge
........
  r5166 | sameo | 2008-09-10 16:37:14 +0100 (Wed, 10 Sep 2008) | 2 lines
  
  linux-moblin2: Add moblin2 kernel
........
  r5167 | sameo | 2008-09-10 16:43:46 +0100 (Wed, 10 Sep 2008) | 2 lines
  
  eee901: Initial support
........
  r5168 | sameo | 2008-09-10 16:44:26 +0100 (Wed, 10 Sep 2008) | 2 lines
  
  formfactor: Add eee901 config file
........
  r5171 | sameo | 2008-09-10 22:12:46 +0100 (Wed, 10 Sep 2008) | 2 lines
  
  eee901: Add video kernel options, vesa for now.
........
  r5172 | sameo | 2008-09-10 22:19:19 +0100 (Wed, 10 Sep 2008) | 2 lines
  
  bootimg: Adding a rootfs to the disk image
........
  r5173 | sameo | 2008-09-10 22:21:23 +0100 (Wed, 10 Sep 2008) | 2 lines
  
  initrdscripts: Simple init files for initrd/initramfs images
........
  r5174 | sameo | 2008-09-10 22:24:24 +0100 (Wed, 10 Sep 2008) | 4 lines
  
  Added live USB poky images
  
  We support sato and minimal live USB image production.
........
  r5175 | richard | 2008-09-10 23:07:29 +0100 (Wed, 10 Sep 2008) | 1 line
  
  poky-image-live.inc: Only run the FSTYPES check when the task is run
........
  r5176 | sameo | 2008-09-10 23:21:00 +0100 (Wed, 10 Sep 2008) | 2 lines
  
  exmap-console: Bump to the latest SVN
........
  r5179 | richard | 2008-09-10 23:54:21 +0100 (Wed, 10 Sep 2008) | 1 line
  
  pokyABConfig.py: Add eee901 builds
........
  r5180 | sameo | 2008-09-11 09:59:00 +0100 (Thu, 11 Sep 2008) | 5 lines
  
  image-live: exclude from world builds
  
  We also remove a video kernel command line option, as this is platform 
  specific.
........
  r5182 | richard | 2008-09-11 15:37:35 +0100 (Thu, 11 Sep 2008) | 1 line
  
  local.conf.sample: Add comment about eee901
........
  r5184 | sameo | 2008-09-11 16:57:47 +0100 (Thu, 11 Sep 2008) | 2 lines
  
  rt2860: Support for the rt2860 RaLink 802.11n chipset
........
  r5186 | sameo | 2008-09-11 17:35:54 +0100 (Thu, 11 Sep 2008) | 2 lines
  
  eee901: Add wifi support
........
  r5188 | richard | 2008-09-11 21:13:05 +0100 (Thu, 11 Sep 2008) | 1 line
  
  rt2860: Set COMPATIBLE_MACHINE for now to avoid builds with elderly kernels
........
  r5199 | josh | 2008-09-18 19:01:51 +0100 (Thu, 18 Sep 2008) | 2 lines
  
  Make the eee use the correct DPI.
........
  r5200 | josh | 2008-09-18 19:02:38 +0100 (Thu, 18 Sep 2008) | 2 lines
  
  Native package for the OpenSuse build service client.
........
  r5201 | josh | 2008-09-18 19:03:20 +0100 (Thu, 18 Sep 2008) | 2 lines
  
  Fetch implementation for the OpenSuse build service.
........
  r5202 | richard | 2008-09-19 18:29:23 +0100 (Fri, 19 Sep 2008) | 1 line
  
  gcc: Add 4.3.2 recipes
........
  r5204 | richard | 2008-09-22 12:48:01 +0100 (Mon, 22 Sep 2008) | 1 line
  
  cdrtools-native: Don't look for headers in /usr/src/linux, that would be crazy
........
  r5205 | bob | 2008-09-22 14:33:19 +0100 (Mon, 22 Sep 2008) | 20 lines
  
  Adds recipes to support building X servers based on the xfree86 DDX instead
  of kdrive and building mesa. It's a big commit and it's still rather rough
  around the edges, but there is a desire to get this in early so people can
  review the work and help polish the changes.
  
  Some of the notable bits:
  • DRI support in mesa and the X server. (configured in machine conf via
    MACHINE_DRI_MODULES variable)
  • XCB backend for xlib
  • A fairly lite X server build with lots of legacy modules disabled.
  
  I'm sure there is plenty of other fairly low hanging fruit if we want to
  put more effort into reducing the size of the xserver build. Currently the
  server build comes in @ ~2.3MB vs a kdrive fbdev server build @ ~1MB. E.g
  xaa could be made conditional to save ~320K. Of course the kdrive server
  doesn't include glx stuff, which is a pretty big chunk.
  
  Also thanks to hrw, since I nabbed some patches from him for this, and RP,
  for various bits of Poky style advice.
........
  r5206 | hrw | 2008-09-22 14:50:05 +0100 (Mon, 22 Sep 2008) | 1 line
  
  libxdmcp-native: do not provide xdmcp
........
  r5207 | hrw | 2008-09-22 14:50:15 +0100 (Mon, 22 Sep 2008) | 1 line
  
  libx11-native: do not provide virtual/libx11
........
  r5208 | ross | 2008-09-22 14:53:19 +0100 (Mon, 22 Sep 2008) | 1 line
  
  Move libsoup to its own directory
........
  r5209 | hrw | 2008-09-22 14:58:57 +0100 (Mon, 22 Sep 2008) | 1 line
  
  libxkbfile-native: needed by xkbcomp-native
........
  r5210 | hrw | 2008-09-22 14:59:11 +0100 (Mon, 22 Sep 2008) | 1 line
  
  xkbcomp: added 1.0.5 required by xkeyboard-config
........
  r5211 | hrw | 2008-09-22 14:59:21 +0100 (Mon, 22 Sep 2008) | 1 line
  
  xkeyboard-config: add keymaps for X11
........
  r5212 | hrw | 2008-09-22 14:59:31 +0100 (Mon, 22 Sep 2008) | 1 line
  
  xkeyboard-config: provide 'xorg' rules which are a link to the 'base' ones
........
  r5213 | hrw | 2008-09-22 15:03:19 +0100 (Mon, 22 Sep 2008) | 1 line
  
  xf86-video-intel: add missing dependencies
........
  r5214 | hrw | 2008-09-22 15:46:13 +0100 (Mon, 22 Sep 2008) | 1 line
  
  libx11-trim: fix location on keysymdef.h
........
  r5215 | tf | 2008-09-22 16:37:55 +0100 (Mon, 22 Sep 2008) | 1 line
  
  added missing dri2proto dependency
........
  r5216 | josh | 2008-09-22 17:43:39 +0100 (Mon, 22 Sep 2008) | 2 lines
  
  Basic recipe for Metacity. Needs some tweaking to strip out themes into separate packages and strip some unneeded binaries.
........
  r5217 | ross | 2008-09-22 17:47:42 +0100 (Mon, 22 Sep 2008) | 1 line
  
  gnome-icon-theme: add
........
  r5218 | ross | 2008-09-22 17:48:52 +0100 (Mon, 22 Sep 2008) | 1 line
  
  libsoup-2.4: add libsoup 2.4
........
  r5219 | bob | 2008-09-22 18:19:02 +0100 (Mon, 22 Sep 2008) | 2 lines
  
  Some fixes for the xorg.conf for xserver-xf86-dri-lite
........
  r5220 | rob | 2008-09-22 19:32:20 +0100 (Mon, 22 Sep 2008) | 1 line
  
  Update to gtk-engines 2.14
........
  r5221 | rob | 2008-09-22 19:32:31 +0100 (Mon, 22 Sep 2008) | 1 line
  
  Add the Darkilouche dark theme
........
  r5222 | sameo | 2008-09-22 19:48:09 +0100 (Mon, 22 Sep 2008) | 2 lines
  
  libgdbus: Adding libgdbus as connman needs it
........
  r5223 | sameo | 2008-09-22 19:54:37 +0100 (Mon, 22 Sep 2008) | 2 lines
  
  libgdbus: Add latest git as SRCREV
........
  r5224 | sameo | 2008-09-22 19:56:51 +0100 (Mon, 22 Sep 2008) | 2 lines
  
  resolvconf: Adding resolvconf as connman needs it
........
  r5225 | sameo | 2008-09-22 20:02:30 +0100 (Mon, 22 Sep 2008) | 5 lines
  
  connman: Initial poky commit
  
  We're adding both connman the daemon and connman-gnome which is a gnome 
  applet.
........
  r5226 | hrw | 2008-09-22 20:08:58 +0100 (Mon, 22 Sep 2008) | 1 line
  
  libx11-trim: added missing dependencies
........
  r5227 | hrw | 2008-09-22 20:09:23 +0100 (Mon, 22 Sep 2008) | 1 line
  
  libxrender: added missing dependencies
........
  r5228 | bob | 2008-09-23 00:28:49 +0100 (Tue, 23 Sep 2008) | 5 lines
  
  increments task-poky revision, which fixed a dependency problem for me and
  may help with similar problems others are seeing with the new X builds. (The
  problem seemed to be related to the XSERVER variable which is referred to in
  task-poky.bb)
........
  r5229 | bob | 2008-09-23 00:40:14 +0100 (Tue, 23 Sep 2008) | 3 lines
  
  Adds eee901 specific support into clutter.inc and adds a new virtual/libgl
  for clutter to depend on which all mesa build variants provide.
........
  r5230 | bob | 2008-09-23 03:00:56 +0100 (Tue, 23 Sep 2008) | 2 lines
  
  Ensures the themes get packaged with metacity
........
  r5231 | bob | 2008-09-23 03:01:11 +0100 (Tue, 23 Sep 2008) | 3 lines
  
  Bumps the mesa-dri revision to 7.2 and adds a mesa-xdemos package including
  e.g glxinfo
........
  r5232 | bob | 2008-09-23 04:21:06 +0100 (Tue, 23 Sep 2008) | 2 lines
  
  Makes metacity install as an alternative x-window-manager
........
  r5233 | josh | 2008-09-23 04:30:23 +0100 (Tue, 23 Sep 2008) | 3 lines
  
  Add a bzip2-full-native recipe and make the python-native recipe depend on it.
  Yum requires bzip2 support in Python so our native Python package needs something to provide it.
........
  r5234 | rob | 2008-09-23 10:02:49 +0100 (Tue, 23 Sep 2008) | 1 line
  
  gnome-icon-theme: add an RRECOMMENDS on librsvg-gtk
........


git-svn-id: https://svn.o-hand.com/repos/poky/branches/elroy@5286 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-09-26 09:22:12 +00:00
Ross Burton
005f63f560 Merged revisions 5203,5235-5236,5278-5279 via svnmerge from
https://svn.o-hand.com/repos/poky/trunk

........
  r5203 | richard | 2008-09-19 18:32:35 +0100 (Fri, 19 Sep 2008) | 1 line
  
  tune-xscale.inc: Compile cairo for armv4 to avoid alignment trap issues with double instructions
........
  r5235 | ross | 2008-09-23 10:54:16 +0100 (Tue, 23 Sep 2008) | 1 line
  
  poky-fixed-revisions.inc: bump matchbox-desktop srcrev to fix icon loading bug
........
  r5236 | ross | 2008-09-23 11:14:38 +0100 (Tue, 23 Sep 2008) | 1 line
  
  poky-fixed-revisions.inc: fix typo
........
  r5278 | ross | 2008-09-25 21:52:29 +0100 (Thu, 25 Sep 2008) | 1 line
  
  dialer: specify revision instead of using autorev
........
  r5279 | richard | 2008-09-25 22:03:38 +0100 (Thu, 25 Sep 2008) | 1 line
  
  xserver-kdrive: Feed xrandr calls to the framebuffer driver in case it can do better than software rotation
........


git-svn-id: https://svn.o-hand.com/repos/poky/branches/elroy@5285 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-09-26 09:02:47 +00:00
Ross Burton
cd8b52418c Merged revisions 5189-5198 via svnmerge from
https://svn.o-hand.com/repos/poky/trunk

........
  r5189 | richard | 2008-09-11 23:41:08 +0100 (Thu, 11 Sep 2008) | 1 line
  
  local.conf.sample: Make the parallel threads documentation more visible and update with a quadcore example
........
  r5190 | richard | 2008-09-12 00:02:51 +0100 (Fri, 12 Sep 2008) | 1 line
  
  handbook/quickstart: Improve documentation on the options available in local.conf
........
  r5191 | richard | 2008-09-12 00:11:45 +0100 (Fri, 12 Sep 2008) | 1 line
  
  handbook/faq.xml: Add a QA about proxy server setup
........
  r5192 | ross | 2008-09-12 16:43:10 +0100 (Fri, 12 Sep 2008) | 1 line
  
  ref-variables.xml: Add POKY_EXTRA_INSTALL
........
  r5193 | ross | 2008-09-16 10:00:50 +0100 (Tue, 16 Sep 2008) | 1 line
  
  poky-fixed-revisions.inc: bump matchbox-wm-2
........
  r5194 | ross | 2008-09-16 16:36:32 +0100 (Tue, 16 Sep 2008) | 1 line
  
  poky-fixed-revisions: bump libowl srcrev
........
  r5195 | ross | 2008-09-16 17:24:31 +0100 (Tue, 16 Sep 2008) | 1 line
  
  poky-fixed-revisions.inc: bump matchbox-desktop srcrev
........
  r5196 | richard | 2008-09-16 20:14:49 +0100 (Tue, 16 Sep 2008) | 1 line
  
  bitbake parse/__init__.py: Add missing update_mtime function fixing bitbake shell reparse failures
........
  r5197 | richard | 2008-09-16 21:09:03 +0100 (Tue, 16 Sep 2008) | 1 line
  
  ConfHandler.py: revert accidental commit
........
  r5198 | ross | 2008-09-18 10:35:14 +0100 (Thu, 18 Sep 2008) | 1 line
  
  poky-eabi.conf: add dialer to as-needed blacklist
........


git-svn-id: https://svn.o-hand.com/repos/poky/branches/elroy@5284 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-09-26 08:58:28 +00:00
Ross Burton
75f3f9bb6d Merged revisions 5177-5178,5181,5183,5185,5187 via svnmerge from
https://svn.o-hand.com/repos/poky/trunk

........
  r5177 | richard | 2008-09-10 23:29:43 +0100 (Wed, 10 Sep 2008) | 1 line
  
  package.bbclass: Adjust to handle split packages already being present in PACKAGES
........
  r5178 | richard | 2008-09-10 23:32:22 +0100 (Wed, 10 Sep 2008) | 1 line
  
  gst-plugins-good: Remove bogus RPROVIDES and add to PACKAGES instead now the package class can handle this
........
  r5181 | richard | 2008-09-11 12:00:49 +0100 (Thu, 11 Sep 2008) | 1 line
  
  poky-eabi.inc/poky-fixed-revisions.inc: Bump the matchbox-wm-2 revision and remove from the asneeded blacklist
........
  r5183 | richard | 2008-09-11 16:43:25 +0100 (Thu, 11 Sep 2008) | 1 line
  
  eds-dbus: Add missing DEPENDS on libglade
........
  r5185 | ross | 2008-09-11 17:07:38 +0100 (Thu, 11 Sep 2008) | 1 line
  
  eds-dbus: package the glade files into libedataserverui
........
  r5187 | ross | 2008-09-11 17:57:47 +0100 (Thu, 11 Sep 2008) | 1 line
  
  eds-dbus: add libedataserverui to PACKAGES, fixing the borked packaging
........


git-svn-id: https://svn.o-hand.com/repos/poky/branches/elroy@5283 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-09-26 08:55:15 +00:00
Ross Burton
148459e611 Merged revisions 5169-5170 via svnmerge from
https://svn.o-hand.com/repos/poky/trunk

........
  r5169 | ross | 2008-09-10 17:17:54 +0100 (Wed, 10 Sep 2008) | 1 line
  
  gaku: clean up depends/recommends
........
  r5170 | richard | 2008-09-10 17:25:46 +0100 (Wed, 10 Sep 2008) | 20 lines
  
  bitbake hg fetcher: Add fix from Matt Hoosier
  
  The Mercurial fetcher right now will fail when used to incrementally
  fetch an update to a local clone of a repository already fetched at
  some prior revision. The culprit is the sequence:
  
   hg pull -r <rev>
   hg update -C <rev>
    
  A subtlety in the way that Mercurial stores its tags (in a normally
  version-controlled file called .hgtags) has the side-effect that a
  repository fetched at a tag "foo" will not actually contain a
  new-enough copy of the .hgtags file to be self-aware of the foo tag's
  existence.
    
  The solution is just to get all the changesets in the repository on
  incremental upgrades, so that the following "hg update" will be able
  to resolve the tag.
    
........


git-svn-id: https://svn.o-hand.com/repos/poky/branches/elroy@5282 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-09-26 08:46:30 +00:00
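The fix described in r5170 above amounts to dropping '-r <rev>' from the incremental pull, so that all changesets, including a .hgtags new enough to contain the requested tag, arrive before the update. A simplified Python sketch of that flow (a subprocess-based illustration, not the actual bitbake fetcher code):

    import os, subprocess

    def hg_fetch(repo_url, clone_dir, rev):
        if not os.path.isdir(clone_dir):
            subprocess.check_call(["hg", "clone", repo_url, clone_dir])
        else:
            # Pull everything rather than '-r <rev>': a tag's .hgtags entry
            # only exists in changesets newer than the tagged revision.
            subprocess.check_call(["hg", "pull"], cwd=clone_dir)
        subprocess.check_call(["hg", "update", "-C", rev], cwd=clone_dir)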
Ross Burton
a96187d660 Initialized merge tracking via "svnmerge" with revisions "1-5165" from
https://svn.o-hand.com/repos/poky/trunk


git-svn-id: https://svn.o-hand.com/repos/poky/branches/elroy@5281 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-09-26 08:44:52 +00:00
Ross Burton
0935caeded Branch for Elroy
git-svn-id: https://svn.o-hand.com/repos/poky/branches/elroy@5280 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-09-26 08:39:10 +00:00
6244 changed files with 1230927 additions and 404925 deletions

.gitignore

@@ -1,20 +0,0 @@
*.pyc
*.pyo
build*/conf/local.conf
build*/conf/bblayers.conf
build*/downloads
build*/tmp/
build*/sstate-cache
build*/pyshtables.py
pstage/
scripts/oe-git-proxy-socks
sources/
meta-*
!meta-skeleton
!meta-demoapps
*.swp
*.orig
*.rej
*~


@@ -5,10 +5,7 @@ bitbake/COPYING (GPLv2)
meta/COPYING.MIT (MIT)
meta-extras/COPYING.MIT (MIT)
which cover the components in those subdirectories. This means all
metadata is MIT licensed unless otherwise stated. Source code included
in tree for individual recipes is under the LICENSE stated in the .bb
file for those software projects unless otherwise stated.
which cover the components in those subdirectories.
License information for any other files is either explicitly stated
or defaults to GPL version 2.

README

@@ -1,29 +1,15 @@
Poky
====
Poky is an integration of various components to form a complete prepackaged
build system and development environment. It features support for building
customised embedded device style images. There are reference demo images
featuring a X11/Matchbox/GTK themed UI called Sato. The system supports
cross-architecture application development using QEMU emulation and a
standalone toolchain and SDK with IDE integration.
Poky platform builder is a combined cross build system and development
environment. It features support for building X11/Matchbox/GTK based
filesystem images for various embedded devices and boards. It also
supports cross-architecture application development using QEMU emulation
and a standalone toolchain and SDK with IDE integration.
Poky has an extensive handbook, the source of which is contained in
the handbook directory. For compiled HTML or pdf versions of this,
see the Poky website http://pokylinux.org.
Additional information on the specifics of hardware that Poky supports
is available in README.hardware. Further hardware support can easily be added
in the form of layers which extend the system's capabilities in a modular way.
As an integration layer Poky consists of several upstream projects such as
BitBake, OpenEmbedded-Core, Yocto documentation and various sources of information
e.g. for the hardware support. Poky is in turn a component of the Yocto Project.
The Yocto Project has extensive documentation about the system including a
reference manual which can be found at:
http://yoctoproject.org/community/documentation
OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "") and contains only emulated machine support.
For information about OpenEmbedded, see the OpenEmbedded website:
http://www.openembedded.org/
is available in README.hardware.

README.hardware

@@ -1,447 +1,436 @@
Poky Hardware README
====================
Poky Hardware Reference Guide
=============================
This file gives details about using Poky with different hardware reference
boards and consumer devices. A full list of target machines can be found by
looking in the meta/conf/machine/ directory. If in doubt about using Poky with
your hardware, consult the documentation for your board/device.
boards and consumer devices. A full list of target machines can be found by
looking in the meta/conf/machine/ directory. If in doubt about using Poky with
your hardware, consult the documentation for your board/device. To discuss
support for further hardware reference boards/devices please contact OpenedHand.
Support for additional devices is normally added by creating BSP layers - for
more information please see the Yocto Board Support Package (BSP) Developer's
Guide - documentation source is in documentation/bspguide or download the PDF
from:
http://yoctoproject.org/community/documentation
Support for machines other than QEMU may be moved out to separate BSP layers in
future versions.
QEMU Emulation Targets
======================
QEMU Emulation Images (qemuarm and qemux86)
===========================================
To simplify development Poky supports building images to work with the QEMU
emulator in system emulation mode. Several architectures are currently
supported:
* ARM (qemuarm)
* x86 (qemux86)
* x86-64 (qemux86-64)
* PowerPC (qemuppc)
* MIPS (qemumips)
Use of the QEMU images is covered in the Poky Reference Manual. The Poky
MACHINE setting corresponding to the target is given in brackets.
emulator in system emulation mode. Two architectures are currently supported,
ARM (via qemuarm) and x86 (via qemux86). Use of the QEMU images is covered
in the Poky Handbook.
Hardware Reference Boards
=========================
The following boards are supported by Poky's core layer:
The following boards are supported by Poky:
* Texas Instruments Beagleboard (beagleboard)
* Freescale MPC8315E-RDB (mpc8315e-rdb)
* Ubiquiti Networks RouterStation Pro (routerstationpro)
* Compulab CM-X270 (cm-x270)
* Compulab EM-X270 (em-x270)
* FreeScale iMX31ADS (mx31ads)
* Marvell PXA3xx Zylonite (zylonite)
* Logic iMX31 Lite Kit (mx31litekit)
* Phytec phyCORE-iMX31 (mx31phy)
For more information see the board's section below. The Poky MACHINE setting
For more information see board's section below. The Poky MACHINE setting
corresponding to the board is given in brackets.
Consumer Devices
================
The following consumer devices are supported by Poky's core layer:
The following consumer devices are supported by Poky:
* Intel Atom based PCs and devices (atom-pc)
* FIC Neo1973 GTA01 smartphone (fic-gta01)
* HTC Universal (htcuniversal)
* Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)
* Sharp Zaurus SL-C7x0 series (c7x0)
* Sharp Zaurus SL-C1000 (akita)
* Sharp Zaurus SL-C3x00 series (spitz)
For more information see the device's section below. The Poky MACHINE setting
corresponding to the device is given in brackets.
For more information see board's section below. The Poky MACHINE setting
corresponding to the board is given in brackets.
Poky Boot CD (bootcdx86)
========================
The Poky boot CD iso images are designed as a demonstration of the Poky
environment and to show the versatile image formats Poky can generate. It will
run on Pentium2 or greater PC style computers. The iso image can be
burnt to CD and then booted from.
Specific Hardware Documentation
===============================
Hardware Reference Boards
=========================
Intel Atom based PCs and devices (atom-pc)
==========================================
Compulab CM-X270 (cm-x270)
==========================
The atom-pc MACHINE is tested on the following platforms:
The bootloader on this board doesn't support writing jffs2 images directly to
NAND and normally uses a proprietary kernel flash driver. To allow the use of
jffs2 images, a two stage updating procedure is needed. Firstly, an initramfs
is booted which contains mtd utilities and this is then used to write the main
filesystem.
o Asus EeePC 901
o Acer Aspire One
o Toshiba NB305
o Intel Embedded Development Board 1-N450 (Black Sand)
It is assumed the board is connected to a network where a TFTP server is
available and that a serial terminal is available to communicate with the
bootloader (38400, 8N1). If a DHCP server is available the device will use it
to obtain an IP address. If not, run:
and is likely to work on many unlisted Atom based devices. The MACHINE type
supports ethernet, wifi, sound, and i915 graphics by default in addition to
common PC input devices, busses, and so on.
ARMmon > setip dhcp off
ARMmon > setip ip 192.168.1.203
ARMmon > setip mask 255.255.255.0
Depending on the device, it can boot from a traditional hard-disk, a USB device,
or over the network. Writing poky generated images to physical media is
straightforward with a caveat for USB devices. The following examples assume the
target boot device is /dev/sdb, be sure to verify this and use the correct
device as the following commands are run as root and are not reversible.
To reflash the kernel:
USB Device:
1. Build a live image. This image type consists of a simple filesystem
without a partition table, which is suitable for USB keys, and with the
default setup for the atom-pc machine, this image type is built
automatically for any image you build. For example:
ARMmon > download kernel tftp zimage 192.168.1.202
ARMmon > flash kernel
$ bitbake core-image-minimal
where zimage is the name of the kernel on the TFTP server and its IP address is
192.168.1.202. The names of the files must be all lowercase.
2. Use the "dd" utility to write the image to the raw block device. For
example:
To reflash the initrd/initramfs:
# dd if=core-image-minimal-atom-pc.hddimg of=/dev/sdb
ARMmon > download ramdisk tftp diskimage 192.168.1.202
ARMmon > flash ramdisk
If the device fails to boot with "Boot error" displayed, it is likely the BIOS
cannot understand the physical layout of the disk (or rather it expects a
particular layout and cannot handle anything else). There are two possible
solutions to this problem:
where diskimage is the name of the initramfs image (a cpio.gz file).
1. Change the BIOS USB Device setting to HDD mode. The label will vary by
device, but the idea is to force BIOS to read the Cylinder/Head/Sector
geometry from the device.
To boot the initramfs:
2. Without such an option, the BIOS generally boots the device in USB-ZIP
mode.
ARMmon > ramdisk on
ARMmon > bootos "console=ttyS0,38400 rdinit=/sbin/init"
a. Configure the USB device for USB-ZIP mode:
# mkdiskimage -4 /dev/sdb 0 63 62
To reflash the main image login to the system as user "root", then run:
Where 63 and 62 are the head and sector count as reported by fdisk.
Remove and reinsert the device to allow the kernel to detect the new
partition layout.
# ifconfig eth0 192.168.1.203
# tftp -g -r mainimage 192.168.1.202
# flash_eraseall /dev/mtd1
# nandwrite /dev/mtd1 mainimage
b. Copy the contents of the poky image to the USB-ZIP mode device:
which configures the network interface with the IP address 192.168.1.203,
downloads the "mainimage" file from the TFTP server at 192.168.1.202, erases
the flash and then writes the new image to the flash.
# mount -o loop core-image-minimal-atom-pc.hddimg /tmp/image
# mount /dev/sdb4 /tmp/usbkey
# cp -rf /tmp/image/* /tmp/usbkey
The main image can then be booted with:
c. Install the syslinux boot loader:
ARMmon > bootos "console=ttyS0,38400 root=/dev/mtdblock1 rootfstype=jffs2"
# syslinux /dev/sdb4
Note that the initramfs image is built by poky in a slightly different mode to
normal since it uses uclibc. To generate this use a command like:
Install the boot device in the target board and configure the BIOS to boot
from it.
IMAGE_FSTYPES=cpio.gz MACHINE=cm-x270 POKYLIBC=uclibc bitbake poky-image-minimal-mtdutils
For more details on the USB-ZIP scenario, see the syslinux documentation:
http://git.kernel.org/?p=boot/syslinux/syslinux.git;a=blob_plain;f=doc/usbkey.txt;hb=HEAD
Compulab EM-X270 (em-x270)
==========================
Texas Instruments Beagleboard (beagleboard)
===========================================
Fetch the "Linux - kernel and run-time image (Angstrom)" ZIP file from the
Compulab website. Inside the images directory of this ZIP file is another ZIP
file called 'LiveDisk.zip'. Extract this over a cleanly formatted vfat USB flash
drive. Replace the 'em_x270.img' file with the 'updater-em-x270.ext2' file.
The Beagleboard is an ARM Cortex-A8 development board with USB, DVI-D, S-Video,
2D/3D accelerated graphics, audio, serial, JTAG, and SD/MMC. The xM adds a
faster CPU, more RAM, an ethernet port, more USB ports, microSD, and removes
the NAND flash. The beagleboard MACHINE is tested on the following platforms:
Insert this USB disk into the supplied adapter and connect this to the
board. Whilst holding down the suspend button, press the reset button. The
board will now boot off the USB key and into a version of Angstrom. On the
desktop is an icon labelled "Updater". Run this program to launch the updater
that will flash the Poky kernel and rootfs to the board.
o Beagleboard C4
o Beagleboard xM rev A & B
The Beagleboard C4 has NAND, while the xM does not. For the sake of simplicity,
these instructions assume you have erased the NAND on the C4 so its boot
behavior matches that of the xM. To do this, issue the following commands from
the u-boot prompt (note that the unlock may be unnecessary depending on the
version of u-boot installed on your board and only one of the erase commands
will succeed):
FreeScale iMX31ADS (mx31ads)
===========================
# nand unlock
# nand erase
# nand erase.chip
The correct serial port is the top-most female connector to the right of the
ethernet socket.
To further tailor these instructions for your board, please refer to the
documentation at http://www.beagleboard.org.
For uploading data to RedBoot we are going to use TFTP. In this example we
assume that the TFTP server is on 192.168.9.1 and the board is on 192.168.9.2.
From a Linux system with access to the image files perform the following steps
as root, replacing mmcblk0* with the SD card device on your machine (such as sdc
if used via a USB card reader):
To set the IP address, run:
1. Partition and format an SD card:
# fdisk -lu /dev/mmcblk0
Disk /dev/mmcblk0: 3951 MB, 3951034368 bytes
255 heads, 63 sectors/track, 480 cylinders, total 7716864 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 * 63 144584 72261 c Win95 FAT32 (LBA)
/dev/mmcblk0p2 144585 465884 160650 83 Linux
ip_address -l 192.168.9.2/24 -h 192.168.9.1
# mkfs.vfat -F 16 -n "boot" /dev/mmcblk0p1
# mke2fs -j -L "root" /dev/mmcblk0p2
To download a kernel called "zimage" from the TFTP server, run:
The following assumes the SD card partition 1 and 2 are mounted at
/media/boot and /media/root respectively. Removing the card and reinserting
it will do just that on most modern Linux desktop environments.
The files referenced below are made available after the build in
build/tmp/deploy/images.
load -r -b 0x100000 zimage
2. Install the boot loaders
# cp MLO-beagleboard /media/boot/MLO
# cp u-boot-beagleboard.bin /media/boot/u-boot.bin
To write the kernel to flash run:
3. Install the root filesystem
# tar x -C /media/root -f core-image-$IMAGE_TYPE-beagleboard.tar.bz2
# tar x -C /media/root -f modules-$KERNEL_VERSION-beagleboard.tgz
fis create kernel
4. Install the kernel uImage
# cp uImage-beagleboard.bin /media/boot/uImage
To download a rootfs jffs2 image "rootfs" from the TFTP server, run:
5. Prepare a u-boot script to simplify the boot process
The Beagleboard can be made to boot at this point from the u-boot command
shell. To automate this process, generate a user.scr script as follows.
load -r -b 0x100000 rootfs
Install uboot-mkimage (from uboot-mkimage on Ubuntu or uboot-tools on Fedora).
To write the root filesystem to flash run:
Prepare a script config:
fis create root
# (cat << EOF
setenv bootcmd 'mmc init; fatload mmc 0:1 0x80300000 uImage; bootm 0x80300000'
setenv bootargs 'console=tty0 console=ttyO2,115200n8 root=/dev/mmcblk0p2 rootwait rootfstype=ext3 ro'
boot
EOF
) > serial-boot.cmd
# mkimage -A arm -O linux -T script -C none -a 0 -e 0 -n "Core Minimal" -d ./serial-boot.cmd ./boot.scr
# cp boot.scr /media/boot
To load and boot a kernel and rootfs from flash:
6. Unmount the SD partitions, insert the SD card into the Beagleboard, and
boot the Beagleboard
fis load kernel
exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/mtdblock2 rootfstype=jffs2 init=linuxrc ip=none"
Note: As of the 2.6.37 linux-yocto kernel recipe, the Beagleboard uses the
OMAP_SERIAL device (ttyO2). If you are using an older kernel, such as the
2.6.34 linux-yocto-stable, be sure to replace ttyO2 with ttyS2 above. You
should also override the machine SERIAL_CONSOLE in your local.conf in
order to setup the getty on the serial line:
To load and boot a kernel from a TFTP server with the rootfs over NFS:
SERIAL_CONSOLE_beagleboard = "115200 ttyS2"
load -r -b 0x100000 zimage
exec -b 0x100000 -l 0x200000 -c "noinitrd console=ttymxc0,115200 root=/dev/nfs nfsroot=192.168.9.1:/mnt/nfsmx31 rw ip=192.168.9.2::192.168.9.1:255.255.255.0"
The instructions above are for using the (default) NOR flash on the board;
there is also 128M of NAND flash. It is possible to install Poky to the NAND
flash which gives more space for the rootfs and instructions for using this are
given below. To switch to the NAND flash:
Freescale MPC8315E-RDB (mpc8315e-rdb)
=====================================
factive NAND
The MPC8315 PowerPC reference platform (MPC8315E-RDB) is aimed at hardware and
software development of network attached storage (NAS) and digital media server
applications. The MPC8315E-RDB features the PowerQUICC II Pro processor, which
includes a built-in security accelerator.
This will then restart RedBoot using the NAND rather than the NOR. If you
have not used the NAND before then it is unlikely that there will be a
partition table yet. You can get the list of partitions with 'fis list'.
(Note: you may find it easier to order MPC8315E-RDBA; this appears to be the
same board in an enclosure with accessories. In any case it is fully
compatible with the instructions given here.)
If this shows no partitions then you can create them with:
Setup instructions
------------------
fis init
You will need the following:
* NFS root setup on your workstation
* TFTP server installed on your workstation
* Null modem cable connected from your workstation to the first serial port
on the board
* Ethernet connected to the first ethernet port on the board
The output of 'fis list' should now show:
--- Preparation ---
Name FLASH addr Mem addr Length Entry point
RedBoot 0xE0000000 0xE0000000 0x00040000 0x00000000
FIS directory 0xE7FF4000 0xE7FF4000 0x00003000 0x00000000
RedBoot config 0xE7FF7000 0xE7FF7000 0x00001000 0x00000000
Note: if you have altered your board's ethernet MAC address(es) from the
defaults, or you need to do so because you want multiple boards on the same
network, then you will need to change the values in the dts file (patch
linux/arch/powerpc/boot/dts/mpc8315erdb.dts within the kernel source). If
you have left them at the factory default then you shouldn't need to do
anything here.
Partitions for the kernel and rootfs need to be created:
--- Booting from NFS root ---
fis create -l 0x1A0000 -e 0x00100000 kernel
fis create -l 0x5000000 -e 0x00100000 root
Load the kernel and dtb (device tree blob), and boot the system as follows:
You may now use the instructions above for flashing. However it is important
to note that the erase block size for the NAND is different to the NOR so the
JFFS erase size will need to be changed to 0x4000. Standard images are built
for NOR and you will need to build custom images for NAND.
1. Get the kernel (uImage-mpc8315e-rdb.bin) and dtb (uImage-mpc8315e-rdb.dtb)
files from the Poky build tmp/deploy directory, and make them available on
your TFTP server.
You will also need to update the kernel command line to use the correct root
filesystem. This should be '/dev/mtdblock7' if you adhere to the partitioning
scheme shown above. If this fails then you can double-check against the output
from the kernel when it evaluates the available mtd partitions.
2. Connect the board's first serial port to your workstation and then start up
your favourite serial terminal so that you will be able to interact with
the serial console. If you don't have a favourite, picocom is suggested:
$ picocom /dev/ttyUSB0 -b 115200
Marvell PXA3xx Zylonite (zylonite)
==================================
3. Power up or reset the board and press a key on the terminal when prompted
to get to the U-Boot command line
These instructions assume the Zylonite is connected to a machine running a TFTP
server at address 192.168.123.5 and that a serial link (38400 8N1) is available
to access the blob bootloader. The kernel is on the TFTP server as
"zylonite-kernel" and the root filesystem jffs2 file is "zylonite-rootfs" and
the images are to be saved in NAND flash.
4. Set up the environment in U-Boot:
The following commands setup blob:
=> setenv ipaddr <board ip>
=> setenv serverip <tftp server ip>
=> setenv bootargs root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:255.255.255.0:mpc8315e:eth0:off console=ttyS0,115200
blob> setip client 192.168.123.4
blob> setip server 192.168.123.5
5. Download the kernel and dtb, and boot:
To flash the kernel:
=> tftp 800000 uImage-mpc8315e-rdb.bin
=> tftp 780000 uImage-mpc8315e-rdb.dtb
=> bootm 800000 - 780000
blob> tftp zylonite-kernel
blob> nandwrite -j 0x80800000 0x60000 0x200000
To flash the rootfs:
Ubiquiti Networks RouterStation Pro (routerstationpro)
======================================================
blob> tftp zylonite-rootfs
blob> nanderase -j 0x260000 0x5000000
blob> nandwrite -j 0x80800000 0x260000 <length>
The RouterStation Pro is an Atheros AR7161 MIPS-based board. Geared towards
networking applications, it has all of the usual features as well as three
type IIIA mini-PCI slots and an on-board 3-port 10/100/1000 Ethernet switch,
in addition to the 10/100/1000 Ethernet WAN port which supports
Power-over-Ethernet.
(where <length> is the rootfs size which will be printed by the tftp step)
Setup instructions
------------------
To boot the board:
You will need the following:
* A serial cable - female to female (or female to male + gender changer)
NOTE: cable must be straight through, *not* a null modem cable.
* USB flash drive or hard disk that is able to be powered from the
board's USB port.
* tftp server installed on your workstation
blob> nkernel
blob> boot
NOTE: in the following instructions it is assumed that /dev/sdb corresponds
to the USB disk when it is plugged into your workstation. If this is not the
case in your setup then please be careful to substitute the correct device
name in all commands where appropriate.
--- Preparation ---
Logic iMX31 Lite Kit (mx31litekit)
==================================
1) Build an image (e.g. core-image-minimal) using "routerstationpro" as the
MACHINE
The easiest method to boot this board is to take an MMC/SD card and format
the first partition as ext2, then extract the poky image onto this as root.
Assuming the board is network connected, a TFTP server is available at
192.168.1.33 and a serial terminal is available (115200 8N1), the following
commands will boot a kernel called "mx31kern" from the TFTP server:
2) Partition the USB drive so that primary partition 1 is type Linux (83).
Minimum size depends on your root image size - core-image-minimal probably
only needs 8-16MB, other images will need more.
losh> ifconfig sm0 192.168.1.203 255.255.255.0 192.168.1.33
losh> load raw 0x80100000 0x200000 /tftp/192.168.1.33:mx31kern
losh> exec 0x80100000 -
# fdisk /dev/sdb
Command (m for help): p
Disk /dev/sdb: 4011 MB, 4011491328 bytes
124 heads, 62 sectors/track, 1019 cylinders, total 7834944 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009e87d
Phytec phyCORE-iMX31 (mx31phy)
==============================
Device Boot Start End Blocks Id System
/dev/sdb1 62 1952751 976345 83 Linux
Support for this board is currently being developed. Experimental jffs2
images and a suitable kernel are available and are known to work with the
board.
3) Format partition 1 on the USB as ext3
# mke2fs -j /dev/sdb1
Consumer Devices
================
4) Mount partition 1 and then extract the contents of
tmp/deploy/images/core-image-XXXX.tar.bz2 into it (preserving permissions).
FIC Neo1973 GTA01 smartphone (fic-gta01)
========================================
# mount /dev/sdb1 /media/sdb1
# cd /media/sdb1
# tar -xvjpf tmp/deploy/images/core-image-XXXX.tar.bz2
To install Poky on a GTA01 smartphone you will need the "dfu-util" tool
which you can build with the "bitbake dfu-util-native" command.
5) Unmount the USB drive and then plug it into the board's USB port
Flashing requires these steps:
6) Connect the board's serial port to your workstation and then start up
your favourite serial terminal so that you will be able to interact with
the serial console. If you don't have a favourite, picocom is suggested:
1. Power down the device.
2. Connect the device to the host machine via USB.
3. Hold AUX key and press Power key. There should be a bootmenu
on screen.
4. Run "dfu-util -l" to check if the phone is visible on the USB bus.
The output should look like this:
$ picocom /dev/ttyUSB0 -b 115200
dfu-util - (C) 2007 by OpenMoko Inc.
This program is Free Software and has ABSOLUTELY NO WARRANTY
7) Connect the network into eth0 (the one that is NOT the 3 port switch). If
you are using power-over-ethernet then the board will power up at this point.
Found Runtime: [0x1457:0x5119] devnum=19, cfg=0, intf=2, alt=0, name="USB Device Firmware Upgrade"
8) Start up the board, watch the serial console. Hit Ctrl+C to abort the
autostart if the board is configured that way (it is by default). The
bootloader's fconfig command can be used to disable autostart and configure
the IP settings if you need to change them (default IP is 192.168.1.20).
5. Flash the kernel with "dfu-util -a kernel -D uImage-2.6.21.6-moko11-r2-fic-gta01.bin"
6. Flash rootfs with "dfu-util -a rootfs -D <image>", where <image> is the
jffs2 image file to use as the root filesystem
(e.g. ./tmp/deploy/images/poky-image-sato-fic-gta01.jffs2)
9) Make the kernel (tmp/deploy/images/vmlinux-routerstationpro.bin) available
on the tftp server.
10) If you are going to write the kernel to flash (optional - see "Booting a
kernel directly" below for the alternative), remove the current kernel and
rootfs flash partitions. You can list the partitions using the following
bootloader command:
HTC Universal (htcuniversal)
============================
RedBoot> fis list
Note: HTC Universal support is highly experimental.
You can delete the existing kernel and rootfs with these commands:
On the HTC Universal, entirely replacing the Windows installation is not
supported, instead Poky is booted from an MMC/SD card from Windows. Once Poky
has booted, Windows is no longer in memory or active but when power is removed,
the user will be returned to windows and will need to return to Linux from
there.
RedBoot> fis delete kernel
RedBoot> fis delete rootfs
Once an MMC/SD card is available it is suggested it be split into two partitions,
one for a program called HaRET which lets you boot Linux from within Windows
and the second for the rootfs. The HaRET partition should be the first partition
on the card and be vfat formatted. It doesn't need to be large, just enough for
HaRET and a kernel (say 5MB max). The rootfs should be ext2 and is usually the
second partition. The first partition should be vfat so Windows recognises it;
if it doesn't, it has been known to reformat cards.
--- Booting a kernel directly ---
On the first partition you need three files:
1) Load the kernel using the following bootloader command:
* a HaRET binary (version 0.5.1 works well and a working version
should be part of the last Poky release)
* a kernel renamed to "zImage"
* a default.txt which contains:
RedBoot> load -m tftp -h <ip of tftp server> vmlinux-routerstationpro.bin
set kernel "zImage"
set mtype "855"
set cmdline "root=/dev/mmcblk0p2 rw console=ttyS0,115200n8 console=tty0 rootdelay=5 fbcon=rotate:1"
boot2
You should see a message on it being successfully loaded.
On the second partition the root file system is extracted as root. A different
partition layout or other kernel options can be changed in the default.txt file.
2) Execute the kernel:
When inserted into the device, Windows should see the card and let you browse
its contents using File Explorer. Running the HaRET binary will present a dialog
box (maybe after messages warning about running unsigned binaries) where you
select OK and you should then see Poky boot. Kernel messages can be seen by
adding psplash=false to the kernel commandline.
RedBoot> exec -c "console=ttyS0,115200 root=/dev/sda1 rw rootdelay=2 board=UBNT-RSPRO"
Note that specifying the command line with -c is important as linux-yocto does
not provide a default command line.
Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)
============================================================
--- Writing a kernel to flash ---
Note: Nokia tablet support is highly experimental.
1) Go to your tftp server and gzip the kernel you want in flash. It should
halve the size.
The Nokia internet tablet devices are OMAP based tablet formfactor devices
with large screens (800x480), wifi and touchscreen.
2) Load the kernel using the following bootloader command:
To flash images to these devices you need the "flasher" utility which can be
downloaded from http://tablets-dev.nokia.com/d3.php?f=flasher-3.0. This
utility needs to be run as root and the usb filesystem needs to be mounted
although most distributions will have done this for you. Once you have this
follow these steps:
RedBoot> load -r -b 0x80600000 -m tftp -h <ip of tftp server> vmlinux-routerstationpro.bin.gz
1. Power down the device.
2. Connect the device to the host machine via USB
(connecting power to the device doesn't hurt either).
3. Run "flasher -i"
4. Power on the device.
5. The program should give an indication it's found
a tablet device. If not, recheck the cables, make sure you're
root and usbfs/usbdevfs is mounted.
6. Run "flasher -r <image> -k <kernel> -f", where <image> is the
jffs2 image file to use as the root filesystem
(e.g. ./tmp/deploy/images/poky-image-sato-nokia800.jffs2)
and <kernel> is the kernel to use
(e.g. ./tmp/deploy/images/zImage-nokia800.bin).
7. Run "flasher -R" to reboot the device.
8. The device should boot into Poky.
This should output something similar to the following:
The nokia800 images and kernel will run on both the N800 and N810.
Raw file loaded 0x80600000-0x8087c537, assumed entry at 0x80600000
Calculate the length by subtracting the first number from the second number
and then rounding the result up to the nearest 0x1000.
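Putting steps 6 and 7 together, a typical flashing run (as root, using the
example image paths above) might look like:

# flasher -r ./tmp/deploy/images/poky-image-sato-nokia800.jffs2 \
          -k ./tmp/deploy/images/zImage-nokia800.bin -f
# flasher -R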
Sharp Zaurus SL-C7x0 series (c7x0)
==================================

The Sharp Zaurus c7x0 series (SL-C700, SL-C750, SL-C760, SL-C860, SL-7500)
are PXA25x based handheld PDAs with VGA screens. To install Poky images on
these devices follow these steps:

 1. Obtain an SD/MMC or CF card with a vfat or ext2 filesystem.
 2. Copy a jffs2 image file (e.g. poky-image-sato-c7x0.jffs2) onto the
    card as "initrd.bin":

    $ cp ./tmp/deploy/images/poky-image-sato-c7x0.jffs2 /path/to/my-cf-card/initrd.bin

 3. Copy a Linux kernel file (zImage-c7x0.bin) onto the card as
    "zImage.bin":

    $ cp ./tmp/deploy/images/zImage-c7x0.bin /path/to/my-cf-card/zImage.bin

 4. Copy an updater script (updater.sh.c7x0) onto the card
    as "updater.sh":

    $ cp ./tmp/deploy/images/updater.sh.c7x0 /path/to/my-cf-card/updater.sh

 5. Power down the Zaurus.
 6. Hold the "OK" key and power on the device. An update menu should appear
    (in Japanese).
 7. Choose "Update" (item 4).
 8. The next screen will ask for the source; choose the appropriate
    card (CF or SD).
 9. Make sure AC power is connected.
 10. The next screen asks for confirmation; choose "Yes" (the left button).
 11. The update process will start, flash the files on the card onto
     the device, and the device will then reboot into Poky.
Sharp Zaurus SL-C1000 (akita)
=============================

The Sharp Zaurus SL-C1000 is a PXA270 based device otherwise similar to the
c7x0. To install Poky images on this device follow the instructions for
the c7x0 but replace "c7x0" with "akita" where appropriate.
Sharp Zaurus SL-C3x00 series (spitz)
====================================

The Sharp Zaurus SL-C3x00 devices are PXA270 based devices similar
to akita but with an internal microdrive. The installation procedure
assumes a standard microdrive based device where the root (first)
partition has been enlarged to fit the image (at least 100MB,
400MB for the SDK).

The procedure is the same as for the c7x0 and akita models with the
following differences:

 1. Instead of a jffs2 image you need to copy a compressed tarball of the
    root filesystem (e.g. poky-image-sato-spitz.tar.gz) onto the
    card as "hdimage1.tgz":

    $ cp ./tmp/deploy/images/poky-image-sato-spitz.tar.gz /path/to/my-cf-card/hdimage1.tgz

 2. You additionally need to copy a special tar utility (gnu-tar) onto
    the card as "gnu-tar":

    $ cp ./tmp/deploy/images/gnu-tar /path/to/my-cf-card/gnu-tar

View File

@@ -1,8 +1,8 @@
Tim Ansell <mithro@mithis.net>
Phil Blundell <pb@handhelds.org>
Seb Frankengul <seb@frankengul.org>
Holger Freyther <holger@moiji-mobile.com>
Marcin Juszkiewicz <marcin@juszkiewicz.com.pl>
Holger Freyther <zecke@handhelds.org>
Marcin Juszkiewicz <hrw@hrw.one.pl>
Chris Larson <kergoth@handhelds.org>
Ulrich Luckas <luckas@musoft.de>
Mickey Lauer <mickey@Vanille.de>

View File

@@ -1,99 +1,5 @@
Changes in Bitbake 1.9.x:
- Add PE (Package Epoch) support from Philipp Zabel (pH5)
- Treat python functions the same as shell functions for logging
- Use TMPDIR/anonfunc as a __anonfunc temp directory (T)
- Catch truncated cache file errors
- Allow operations other than assignment on flag variables
- Add code to handle inter-task dependencies
- Fix cache errors when generating dotGraphs
- Make sure __inherit_cache is updated before calling include() (from Michael Krelin)
- Fix bug when target was in ASSUME_PROVIDED (#2236)
- Raise ParseError for filenames with multiple underscores instead of infinitely looping (#2062)
- Fix invalid regexp in BBMASK error handling (missing import) (#1124)
- Promote certain warnings from debug to note 2 level
- Update manual
- Correctly redirect stdin when forking
- If parsing errors are found, exit, too many users miss the errors
- Remove spurious PREFERRED_PROVIDER warnings
- svn fetcher: Add _buildsvncommand function
- Improve certain error messages
- Rewrite svn fetcher to make adding extra operations easier
as part of future SRCDATE="now" fixes
(requires new FETCHCMD_svn definition in bitbake.conf)
- Change SVNDIR layout to be more unique (fixes #2644 and #2624)
- Add ConfigParsed Event after configuration parsing is complete
- Add SRCREV support for svn fetcher
- data.emit_var() - only call getVar if we need the variable
- Stop generating the A variable (seems to be legacy code)
- Make sure intertask depends get processed correctly in recursive depends
- Add pn-PN to overrides when evaluating PREFERRED_VERSION
- Improve the progress indicator by skipping tasks that have
already run before starting the build rather than during it
- Add profiling option (-P)
- Add BB_SRCREV_POLICY variable (clear or cache) to control SRCREV cache
- Add SRCREV_FORMAT support
- Fix local fetcher's localpath return values
- Apply OVERRIDES before performing immediate expansions
- Allow the -b -e option combination to take regular expressions
- Fix handling of variables with expansion in the name using _append/_prepend
e.g. RRECOMMENDS_${PN}_append_xyz = "abc"
- Add plain message function to bb.msg
- Sort the list of providers before processing so dependency problems are
reproducible rather than effectively random
- Fix/improve bitbake -s output
- Add locking for fetchers so only one tries to fetch a given file at a given time
- Fix int(0)/None confusion in runqueue.py which causes random gaps in dependency chains
- Expand data in addtasks
- Print the list of missing DEPENDS,RDEPENDS for the "No buildable providers available for required...."
error message.
- Rework add_task to be more efficient (6% speedup, 7% number of function calls reduction)
- Sort digraph output to make builds more reproducible
- Split expandKeys into two for loops to benefit from the expand_cache (12% speedup)
- runqueue.py: Fix idepends handling to avoid dependency errors
- Clear the terminal TOSTOP flag if set (and warn the user)
- Fix regression from r653 and make SRCDATE/CVSDATE work for packages again
- Fix a bug in bb.decodeurl where http://some.where.com/somefile.tgz decoded to host="" (#1530)
- Warn about malformed PREFERRED_PROVIDERS (#1072)
- Add support for BB_NICE_LEVEL option (#1627)
- Psyco is used only on x86 as there is no support for other architectures.
- Sort initial providers list by default preference (#1145, #2024)
- Improve provider sorting so preferred versions have preference over latest versions (#768)
- Detect builds of tasks with overlapping providers and warn (will become a fatal error) (#1359)
- Add MULTI_PROVIDER_WHITELIST variable to allow known safe multiple providers to be listed
- Handle paths in svn fetcher module parameter
- Support the syntax "export VARIABLE"
- Add bzr fetcher
- Add support for cleaning directories before a task in the form:
do_taskname[cleandirs] = "dir"
- bzr fetcher tweaks from Robert Schuster (#2913)
- Add mercurial (hg) fetcher from Robert Schuster (#2913)
- Don't add duplicates to BBPATH
- Fix preferred_version return values (providers.py)
- Fix 'depends' flag splitting
- Fix unexport handling (#3135)
- Add bb.copyfile function similar to bb.movefile (and improve movefile error reporting)
- Allow multiple options for deptask flag
- Use git-fetch instead of git-pull removing any need for merges when
fetching (we don't care about the index). Fixes fetch errors.
- Add BB_GENERATE_MIRROR_TARBALLS option, set to 0 to make git fetches
faster at the expense of not creating mirror tarballs.
- SRCREV handling updates, improvements and fixes from Poky
- Add bb.utils.lockfile() and bb.utils.unlockfile() from Poky
- Add support for task selfstamp and lockfiles flags
- Disable task number acceleration since it can allow the tasks to run
out of sequence
- Improve runqueue code comments
- Add task scheduler abstraction and some example schedulers
- Improve circular dependency chain debugging code and user feedback
- Don't give a stacktrace for invalid tasks, have a user friendly message (#3431)
- Add support for "-e target" (#3432)
- Fix shell showdata command (#3259)
- Fix shell data updating problems (#1880)
- Properly raise errors for invalid source URI protocols
- Change the wget fetcher failure handling to avoid lockfile problems
- Add support for branches in git fetcher (Otavio Salvador, Michael Lauer)
- Make taskdata and runqueue errors more user friendly
- Add norecurse and fullpath options to cvs fetcher
Changes in BitBake 1.8.x:
- Fix -f (force) in conjunction with -b
- Fix exit code for build failures in --continue mode
- Fix git branch tags fetching
- Change parseConfigurationFile so it works on real data, not a copy
@@ -118,10 +24,8 @@ Changes in Bitbake 1.9.x:
how extensively stamps are looked at for validity
- When handling build target failures make sure idepends are checked and
failed where needed. Fixes --continue mode crashes.
- Fix -f (force) in conjunction with -b
- Fix problems with recrdeptask handling where some idepends weren't handled
correctly.
- Handle exit codes correctly (from pH5)
- Work around refs/HEAD issues with git over http (#3410)
- Add proxy support to the CVS fetcher (from Cyril Chemparathy)
- Improve runfetchcmd so errors are seen and various GIT variables are exported
@@ -137,47 +41,109 @@ Changes in Bitbake 1.9.x:
- Add PERSISTENT_DIR to store the PersistData in a persistent
directory != the cache dir.
- Add md5 and sha256 checksum generation functions to utils.py
- Correctly handle '-' characters in class names (#2958)
- Make sure expandKeys has been called on the data dictionary before running tasks
- Correctly add a task override in the form task-TASKNAME.
- Revert the '-' character fix in class names since it breaks things
- When a regexp fails to compile for PACKAGES_DYNAMIC, print a more useful error (#4444)
- Allow checking out CVS by date and time: just add HHmm to the SRCDATE.
- Move prunedir function to utils.py and add explode_dep_versions function
- Raise an exception if SRCREV == 'INVALID'
- Fix hg fetcher username/password handling and fix crash
- Fix PACKAGES_DYNAMIC handling of packages with '++' in the name
- Rename __depends to __base_depends after configuration parsing so we don't
recheck the validity of the config files time after time
- Add better environmental variable handling. By default it will now only pass certain
whitelisted variables into the data store. If BB_PRESERVE_ENV is set bitbake will use
all variables from the environment. If BB_ENV_WHITELIST is set, that whitelist will be
used instead of the internal bitbake one. Alternatively, BB_ENV_EXTRAWHITE can be used
to extend the internal whitelist.
- Perforce fetcher fix to use commandline options instead of being overridden by the environment
- bb.utils.prunedir can cope with symlinks to directories without exceptions
- use @rev when doing a svn checkout
- Add osc fetcher (from Joshua Lock in Poky)
- When SRCREV autorevisioning for a recipe is in use, don't cache the recipe
- Add tryaltconfigs option to control whether bitbake tries using alternative providers
to fulfil failed dependencies. It defaults to off, changing the default since this
behaviour confuses many users and isn't often useful.
- Improve lock file function error handling
- Add username handling to the git fetcher (Robert Bragg)
- Add support for HTTP_PROXY and HTTP_PROXY_IGNORE variables to the wget fetcher
- Export more variables to the fetcher commands to allow ssh checkouts and checkouts through
proxies to work better. (from Poky)
- Also allow user and pswd options in SRC_URIs globally (from Poky)
- Improve proxy handling when using mirrors (from Poky)
- Add bb.utils.prune_suffix function
- Fix hg checkouts of specific revisions (from Poky)
- Fix wget fetching of urls with parameters specified (from Poky)
- Add username handling to git fetcher (from Poky)
- Set HOME environmental variable when running fetcher commands (from Poky)
- Make sure allowed variables inherited from the environment are exported again (from Poky)
- When running a stage task in bbshell, run populate_staging, not the stage task (from Poky)
- Fix + character escaping from PACKAGES_DYNAMIC (thanks Otavio Salvador)
- Addition of BBCLASSEXTEND support for allowing one recipe to provide multiple targets (from Poky)
Changes in BitBake 1.8.10:
- Psyco is available only for x86 - do not use it on other architectures.
- Fix a bug in bb.decodeurl where http://some.where.com/somefile.tgz decoded to host="" (#1530)
- Warn about malformed PREFERRED_PROVIDERS (#1072)
- Add support for BB_NICE_LEVEL option (#1627)
- Sort initial providers list by default preference (#1145, #2024)
- Improve provider sorting so preferred versions have preference over latest versions (#768)
- Detect builds of tasks with overlapping providers and warn (will become a fatal error) (#1359)
- Add MULTI_PROVIDER_WHITELIST variable to allow known safe multiple providers to be listed
- Handle paths in svn fetcher module parameter
- Support the syntax "export VARIABLE"
- Add bzr fetcher
- Add support for cleaning directories before a task in the form:
do_taskname[cleandirs] = "dir"
- bzr fetcher tweaks from Robert Schuster (#2913)
- Add mercurial (hg) fetcher from Robert Schuster (#2913)
- Fix bogus preferred_version return values
- Fix 'depends' flag splitting
- Fix unexport handling (#3135)
- Add bb.copyfile function similar to bb.movefile (and improve movefile error reporting)
- Allow multiple options for deptask flag
- Use git-fetch instead of git-pull removing any need for merges when
fetching (we don't care about the index). Fixes fetch errors.
- Add BB_GENERATE_MIRROR_TARBALLS option, set to 0 to make git fetches
faster at the expense of not creating mirror tarballs.
- SRCREV handling updates, improvements and fixes from Poky
- Add bb.utils.lockfile() and bb.utils.unlockfile() from Poky
- Add support for task selfstamp and lockfiles flags
- Disable task number acceleration since it can allow the tasks to run
out of sequence
- Improve runqueue code comments
- Add task scheduler abstraction and some example schedulers
- Improve circular dependency chain debugging code and user feedback
- Don't give a stacktrace for invalid tasks, have a user friendly message (#3431)
- Add support for "-e target" (#3432)
- Fix shell showdata command (#3259)
- Fix shell data updating problems (#1880)
- Properly raise errors for invalid source URI protocols
- Change the wget fetcher failure handling to avoid lockfile problems
- Add git branch support
- Add support for branches in git fetcher (Otavio Salvador, Michael Lauer)
- Make taskdata and runqueue errors more user friendly
- Add norecurse and fullpath options to cvs fetcher
Changes in Bitbake 1.8.8:
- Rewrite svn fetcher to make adding extra operations easier
as part of future SRCDATE="now" fixes
(requires new FETCHCMD_svn definition in bitbake.conf)
- Change SVNDIR layout to be more unique (fixes #2644 and #2624)
- Import persistent data store from trunk
- Sync fetcher code with that in trunk, adding SRCREV support for svn
- Add ConfigParsed Event after configuration parsing is complete
- data.emit_var() - only call getVar if we need the variable
- Stop generating the A variable (seems to be legacy code)
- Make sure intertask depends get processed correctly in recursive depends
- Add pn-PN to overrides when evaluating PREFERRED_VERSION
- Improve the progress indicator by skipping tasks that have
already run before starting the build rather than during it
- Add profiling option (-P)
- Add BB_SRCREV_POLICY variable (clear or cache) to control SRCREV cache
- Add SRCREV_FORMAT support
- Fix local fetcher's localpath return values
- Apply OVERRIDES before performing immediate expansions
- Allow the -b -e option combination to take regular expressions
- Add plain message function to bb.msg
- Sort the list of providers before processing so dependency problems are
reproducible rather than effectively random
- Add locking for fetchers so only one tries to fetch a given file at a given time
- Fix int(0)/None confusion in runqueue.py which causes random gaps in dependency chains
- Fix handling of variables with expansion in the name using _append/_prepend
e.g. RRECOMMENDS_${PN}_append_xyz = "abc"
- Expand data in addtasks
- Print the list of missing DEPENDS,RDEPENDS for the "No buildable providers available for required...."
error message.
- Rework add_task to be more efficient (6% speedup, 7% number of function calls reduction)
- Sort digraph output to make builds more reproducible
- Split expandKeys into two for loops to benefit from the expand_cache (12% speedup)
- runqueue.py: Fix idepends handling to avoid dependency errors
- Clear the terminal TOSTOP flag if set (and warn the user)
- Fix regression from r653 and make SRCDATE/CVSDATE work for packages again
Changes in Bitbake 1.8.6:
- Correctly redirect stdin when forking
- If parsing errors are found, exit, too many users miss the errors
- Remove spurious PREFERRED_PROVIDER warnings
Changes in Bitbake 1.8.4:
- Make sure __inherit_cache is updated before calling include() (from Michael Krelin)
- Fix bug when target was in ASSUME_PROVIDED (#2236)
- Raise ParseError for filenames with multiple underscores instead of infinitely looping (#2062)
- Fix invalid regexp in BBMASK error handling (missing import) (#1124)
- Don't run build sanity checks on incomplete builds
- Promote certain warnings from debug to note 2 level
- Update manual
Changes in Bitbake 1.8.2:
- Catch truncated cache file errors
- Add PE (Package Epoch) support from Philipp Zabel (pH5)
- Add code to handle inter-task dependencies
- Allow operations other than assignment on flag variables
- Fix cache errors when generating dotGraphs
Changes in Bitbake 1.8.0:
- Release 1.7.x as a stable series

52
bitbake/MANIFEST Normal file
View File

@@ -0,0 +1,52 @@
AUTHORS
COPYING
ChangeLog
MANIFEST
setup.py
bin/bitdoc
bin/bbimage
bin/bitbake
lib/bb/__init__.py
lib/bb/build.py
lib/bb/cache.py
lib/bb/cooker.py
lib/bb/COW.py
lib/bb/data.py
lib/bb/data_smart.py
lib/bb/event.py
lib/bb/fetch/__init__.py
lib/bb/fetch/bzr.py
lib/bb/fetch/cvs.py
lib/bb/fetch/git.py
lib/bb/fetch/hg.py
lib/bb/fetch/local.py
lib/bb/fetch/perforce.py
lib/bb/fetch/ssh.py
lib/bb/fetch/svk.py
lib/bb/fetch/svn.py
lib/bb/fetch/wget.py
lib/bb/manifest.py
lib/bb/methodpool.py
lib/bb/msg.py
lib/bb/parse/__init__.py
lib/bb/parse/parse_py/__init__.py
lib/bb/parse/parse_py/BBHandler.py
lib/bb/parse/parse_py/ConfHandler.py
lib/bb/persist_data.py
lib/bb/providers.py
lib/bb/runqueue.py
lib/bb/shell.py
lib/bb/taskdata.py
lib/bb/utils.py
setup.py
doc/COPYING.GPL
doc/COPYING.MIT
doc/bitbake.1
doc/manual/html.css
doc/manual/Makefile
doc/manual/usermanual.xml
contrib/bbdev.sh
contrib/vim/syntax/bitbake.vim
contrib/vim/ftdetect/bitbake.vim
conf/bitbake.conf
classes/base.bbclass

View File

@@ -22,243 +22,113 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys, logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),
'lib'))
import optparse
import warnings
from traceback import format_exception
try:
import bb
except RuntimeError as exc:
sys.exit(str(exc))
from bb import event
import bb.msg
import sys, os, getopt, re, time, optparse
sys.path.insert(0,os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb
from bb import cooker
from bb import ui
from bb import server
__version__ = "1.15.1"
logger = logging.getLogger("BitBake")
__version__ = "1.8.11"
class BBConfiguration(object):
#============================================================================#
# BBOptions
#============================================================================#
class BBConfiguration( object ):
"""
Manages build options and configurations for one run
"""
def __init__(self, options):
def __init__( self, options ):
for key, val in options.__dict__.items():
setattr(self, key, val)
self.pkgs_to_build = []
setattr( self, key, val )
def get_ui(config):
if config.ui:
interface = config.ui
else:
interface = 'knotty'
try:
# Dynamically load the UI based on the ui name. Although we
# suggest a fixed set this allows you to have flexibility in which
# ones are available.
module = __import__("bb.ui", fromlist = [interface])
return getattr(module, interface).main
except AttributeError:
sys.exit("FATAL: Invalid user interface '%s' specified.\n"
"Valid interfaces: depexp, goggle, ncurses, knotty [default]." % interface)
# Display bitbake/OE warnings via the BitBake.Warnings logger, ignoring others
warnlog = logging.getLogger("BitBake.Warnings")
_warnings_showwarning = warnings.showwarning
def _showwarning(message, category, filename, lineno, file=None, line=None):
if file is not None:
if _warnings_showwarning is not None:
_warnings_showwarning(message, category, filename, lineno, file, line)
else:
s = warnings.formatwarning(message, category, filename, lineno)
warnlog.warn(s)
warnings.showwarning = _showwarning
warnings.filterwarnings("ignore")
warnings.filterwarnings("default", module="(<string>$|(oe|bb)\.)")
warnings.filterwarnings("ignore", category=PendingDeprecationWarning)
warnings.filterwarnings("ignore", category=ImportWarning)
warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")
#============================================================================#
# main
#============================================================================#
def main():
parser = optparse.OptionParser(
version = "BitBake Build Tool Core version %s, %%prog version %s" % (bb.__version__, __version__),
usage = """%prog [options] [package ...]
parser = optparse.OptionParser( version = "BitBake Build Tool Core version %s, %%prog version %s" % ( bb.__version__, __version__ ),
usage = """%prog [options] [package ...]
Executes the specified task (default is 'build') for a given set of BitBake files.
It expects that BBFILES is defined, which is a space separated list of files to
be executed. BBFILES does support wildcards.
Default BBFILES are the .bb files in the current directory.""")
Default BBFILES are the .bb files in the current directory.""" )
parser.add_option("-b", "--buildfile", help = "execute the task against this .bb file, rather than a package from BBFILES. Does not handle any dependencies.",
action = "store", dest = "buildfile", default = None)
parser.add_option( "-b", "--buildfile", help = "execute the task against this .bb file, rather than a package from BBFILES.",
action = "store", dest = "buildfile", default = None )
parser.add_option("-k", "--continue", help = "continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other dependencies of these targets can be processed all the same.",
action = "store_false", dest = "abort", default = True)
parser.add_option( "-k", "--continue", help = "continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other dependencies of these targets can be processed all the same.",
action = "store_false", dest = "abort", default = True )
parser.add_option("-a", "--tryaltconfigs", help = "continue with builds by trying to use alternative providers where possible.",
action = "store_true", dest = "tryaltconfigs", default = False)
parser.add_option( "-f", "--force", help = "force run of specified cmd, regardless of stamp status",
action = "store_true", dest = "force", default = False )
parser.add_option("-f", "--force", help = "force run of specified cmd, regardless of stamp status",
action = "store_true", dest = "force", default = False)
parser.add_option( "-i", "--interactive", help = "drop into the interactive mode also called the BitBake shell.",
action = "store_true", dest = "interactive", default = False )
parser.add_option("-c", "--cmd", help = "Specify task to execute. Note that this only executes the specified task for the providee and the packages it depends on, i.e. 'compile' does not implicitly call stage for the dependencies (IOW: use only if you know what you are doing). Depending on the base.bbclass a listtasks tasks is defined and will show available tasks",
action = "store", dest = "cmd")
parser.add_option( "-c", "--cmd", help = "Specify task to execute. Note that this only executes the specified task for the providee and the packages it depends on, i.e. 'compile' does not implicitly call stage for the dependencies (IOW: use only if you know what you are doing). Depending on the base.bbclass a listtasks tasks is defined and will show available tasks",
action = "store", dest = "cmd" )
parser.add_option("-r", "--read", help = "read the specified file before bitbake.conf",
action = "append", dest = "prefile", default = [])
parser.add_option( "-r", "--read", help = "read the specified file before bitbake.conf",
action = "append", dest = "file", default = [] )
parser.add_option("-R", "--postread", help = "read the specified file after bitbake.conf",
action = "append", dest = "postfile", default = [])
parser.add_option( "-v", "--verbose", help = "output more chit-chat to the terminal",
action = "store_true", dest = "verbose", default = False )
parser.add_option("-v", "--verbose", help = "output more chit-chat to the terminal",
action = "store_true", dest = "verbose", default = False)
parser.add_option("-D", "--debug", help = "Increase the debug level. You can specify this more than once.",
parser.add_option( "-D", "--debug", help = "Increase the debug level. You can specify this more than once.",
action = "count", dest="debug", default = 0)
parser.add_option("-n", "--dry-run", help = "don't execute, just go through the motions",
action = "store_true", dest = "dry_run", default = False)
parser.add_option( "-n", "--dry-run", help = "don't execute, just go through the motions",
action = "store_true", dest = "dry_run", default = False )
parser.add_option("-S", "--dump-signatures", help = "don't execute, just dump out the signature construction information",
action = "store_true", dest = "dump_signatures", default = False)
parser.add_option( "-p", "--parse-only", help = "quit after parsing the BB files (developers only)",
action = "store_true", dest = "parse_only", default = False )
parser.add_option("-p", "--parse-only", help = "quit after parsing the BB files (developers only)",
action = "store_true", dest = "parse_only", default = False)
parser.add_option( "-d", "--disable-psyco", help = "disable using the psyco just-in-time compiler (not recommended)",
action = "store_true", dest = "disable_psyco", default = False )
parser.add_option("-s", "--show-versions", help = "show current and preferred versions of all packages",
action = "store_true", dest = "show_versions", default = False)
parser.add_option( "-s", "--show-versions", help = "show current and preferred versions of all packages",
action = "store_true", dest = "show_versions", default = False )
parser.add_option("-e", "--environment", help = "show the global or per-package environment (this is what used to be bbread)",
action = "store_true", dest = "show_environment", default = False)
parser.add_option( "-e", "--environment", help = "show the global or per-package environment (this is what used to be bbread)",
action = "store_true", dest = "show_environment", default = False )
parser.add_option("-g", "--graphviz", help = "emit the dependency trees of the specified packages in the dot syntax",
action = "store_true", dest = "dot_graph", default = False)
parser.add_option( "-g", "--graphviz", help = "emit the dependency trees of the specified packages in the dot syntax",
action = "store_true", dest = "dot_graph", default = False )
parser.add_option("-I", "--ignore-deps", help = """Assume these dependencies don't exist and are already provided (equivalent to ASSUME_PROVIDED). Useful to make dependency graphs more appealing""",
action = "append", dest = "extra_assume_provided", default = [])
parser.add_option( "-I", "--ignore-deps", help = """Stop processing at the given list of dependencies when generating dependency graphs. This can help to make the graph more appealing""",
action = "append", dest = "ignored_dot_deps", default = [] )
parser.add_option("-l", "--log-domains", help = """Show debug logging for the specified logging domains""",
action = "append", dest = "debug_domains", default = [])
parser.add_option( "-l", "--log-domains", help = """Show debug logging for the specified logging domains""",
action = "append", dest = "debug_domains", default = [] )
parser.add_option("-P", "--profile", help = "profile the command and print a report",
action = "store_true", dest = "profile", default = False)
parser.add_option( "-P", "--profile", help = "profile the command and print a report",
action = "store_true", dest = "profile", default = False )
parser.add_option("-u", "--ui", help = "userinterface to use",
action = "store", dest = "ui")
parser.add_option("-t", "--servertype", help = "Choose which server to use, none, process or xmlrpc",
action = "store", dest = "servertype")
parser.add_option("", "--revisions-changed", help = "Set the exit code depending on whether upstream floating revisions have changed or not",
action = "store_true", dest = "revisions_changed", default = False)
parser.add_option("", "--server-only", help = "Run bitbake without UI, the frontend can connect with bitbake server itself",
action = "store_true", dest = "server_only", default = False)
parser.add_option("-B", "--bind", help = "The name/address for the bitbake server to bind to",
action = "store", dest = "bind", default = False)
options, args = parser.parse_args(sys.argv)
configuration = BBConfiguration(options)
configuration.pkgs_to_build = []
configuration.pkgs_to_build.extend(args[1:])
ui_main = get_ui(configuration)
# Server type can be xmlrpc, process or none currently, if nothing is specified,
# the default server is process
if configuration.servertype:
server_type = configuration.servertype
else:
server_type = 'process'
try:
module = __import__("bb.server", fromlist = [server_type])
server = getattr(module, server_type)
except AttributeError:
sys.exit("FATAL: Invalid server type '%s' specified.\n"
"Valid interfaces: xmlrpc, process [default], none." % servertype)
if configuration.server_only:
if configuration.servertype != "xmlrpc":
sys.exit("FATAL: If '--server-only' is defined, we must set the servertype as 'xmlrpc'.\n")
if not configuration.bind:
sys.exit("FATAL: The '--server-only' option requires a name/address to bind to with the -B option.\n")
if configuration.bind and configuration.servertype != "xmlrpc":
sys.exit("FATAL: If '-B' or '--bind' is defined, we must set the servertype as 'xmlrpc'.\n")
# Save a logfile for cooker into the current working directory. When the
# server is daemonized this logfile will be truncated.
cooker_logfile = os.path.join(os.getcwd(), "cooker.log")
bb.msg.init_msgconfig(configuration.verbose, configuration.debug,
configuration.debug_domains)
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
logger.addHandler(handler)
# Before we start modifying the environment we should take a pristine
# copy for possible later use
initialenv = os.environ.copy()
# Clear away any spurious environment variables. But don't wipe the
# environment totally. This is necessary to ensure the correct operation
# of the UIs (e.g. for DISPLAY, etc.)
bb.utils.clean_environment()
server = server.BitBakeServer()
if configuration.bind:
server.initServer((configuration.bind, 0))
else:
server.initServer()
idle = server.getServerIdleCB()
cooker = bb.cooker.BBCooker(configuration, idle, initialenv)
cooker.parseCommandLine()
server.addcooker(cooker)
server.saveConnectionDetails()
server.detach(cooker_logfile)
# Should no longer need to ever reference cooker
del cooker
logger.removeHandler(handler)
if not configuration.server_only:
# Setup a connection to the server (cooker)
server_connection = server.establishConnection()
cooker = bb.cooker.BBCooker(configuration)
if configuration.profile:
try:
return server.launchUI(ui_main, server_connection.connection, server_connection.events)
finally:
bb.event.ui_queue = []
server_connection.terminate()
else:
print("server address: %s, server port: %s" % (server.serverinfo.host, server.serverinfo.port))
import cProfile as profile
except:
import profile
return 1
profile.runctx("cooker.cook()", globals(), locals(), "profile.log")
import pstats
p = pstats.Stats('profile.log')
p.sort_stats('time')
p.print_stats()
p.print_callers()
p.sort_stats('cumulative')
p.print_stats()
else:
cooker.cook()
if __name__ == "__main__":
try:
ret = main()
except Exception:
ret = 1
import traceback
traceback.print_exc(5)
sys.exit(ret)
main()

View File

@@ -1,12 +0,0 @@
#!/usr/bin/env python
import os
import sys
import warnings
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb.siggen
if len(sys.argv) > 2:
bb.siggen.compare_sigfiles(sys.argv[1], sys.argv[2])
else:
bb.siggen.dump_sigfile(sys.argv[1])
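Judging from the argument handling above, the script is invoked as
(paths are placeholders):

$ bitbake-diffsigs <sigfile1> <sigfile2>    (compare two signature files)
$ bitbake-diffsigs <sigfile>                (dump a single signature file)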

View File

@@ -1,593 +0,0 @@
#!/usr/bin/env python
# This script has subcommands which operate against your bitbake layers, either
# displaying useful information, or acting against them.
# See the help output for details on available commands.
# Copyright (C) 2011 Mentor Graphics Corporation
# Copyright (C) 2012 Intel Corporation
import cmd
import logging
import os
import sys
import fnmatch
from collections import defaultdict
bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
sys.path[0:0] = [os.path.join(topdir, 'lib')]
import bb.cache
import bb.cooker
import bb.providers
import bb.utils
from bb.cooker import state
import bb.fetch2
logger = logging.getLogger('BitBake')
def main(args):
# Set up logging
console = logging.StreamHandler(sys.stdout)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
bb.msg.addDefaultlogFilter(console)
console.setFormatter(format)
logger.addHandler(console)
initialenv = os.environ.copy()
bb.utils.clean_environment()
cmds = Commands(initialenv)
if args:
# Allow user to specify e.g. show-layers instead of show_layers
args = [args[0].replace('-', '_')] + args[1:]
cmds.onecmd(' '.join(args))
else:
cmds.do_help('')
return cmds.returncode
class Commands(cmd.Cmd):
def __init__(self, initialenv):
cmd.Cmd.__init__(self)
self.returncode = 0
self.config = Config(parse_only=True)
self.cooker = bb.cooker.BBCooker(self.config,
self.register_idle_function,
initialenv)
self.config_data = self.cooker.configuration.data
bb.providers.logger.setLevel(logging.ERROR)
self.cooker_data = None
self.bblayers = (self.config_data.getVar('BBLAYERS', True) or "").split()
def register_idle_function(self, function, data):
pass
def prepare_cooker(self):
sys.stderr.write("Parsing recipes..")
logger.setLevel(logging.WARNING)
try:
while self.cooker.state in (state.initial, state.parsing):
self.cooker.updateCache()
except KeyboardInterrupt:
self.cooker.shutdown()
self.cooker.updateCache()
sys.exit(2)
logger.setLevel(logging.INFO)
sys.stderr.write("done.\n")
self.cooker_data = self.cooker.status
self.cooker_data.appends = self.cooker.appendlist
def check_prepare_cooker(self):
if not self.cooker_data:
self.prepare_cooker()
def default(self, line):
"""Handle unrecognised commands"""
sys.stderr.write("Unrecognised command or option\n")
self.do_help('')
def do_help(self, topic):
"""display general help or help on a specified command"""
if topic:
sys.stdout.write('%s: ' % topic)
cmd.Cmd.do_help(self, topic.replace('-', '_'))
else:
sys.stdout.write("usage: bitbake-layers <command> [arguments]\n\n")
sys.stdout.write("Available commands:\n")
procnames = self.get_names()
for procname in procnames:
if procname[:3] == 'do_':
sys.stdout.write(" %s\n" % procname[3:].replace('_', '-'))
doc = getattr(self, procname).__doc__
if doc:
sys.stdout.write(" %s\n" % doc.splitlines()[0])
def do_show_layers(self, args):
"""show current configured layers"""
self.check_prepare_cooker()
logger.plain('')
logger.plain("%s %s %s" % ("layer".ljust(20), "path".ljust(40), "priority"))
logger.plain('=' * 74)
for layerdir in self.bblayers:
layername = self.get_layer_name(layerdir)
layerpri = 0
for layer, _, regex, pri in self.cooker.status.bbfile_config_priorities:
if regex.match(os.path.join(layerdir, 'test')):
layerpri = pri
break
logger.plain("%s %s %d" % (layername.ljust(20), layerdir.ljust(40), layerpri))
def version_str(self, pe, pv, pr = None):
verstr = "%s" % pv
if pr:
verstr = "%s-%s" % (verstr, pr)
if pe:
verstr = "%s:%s" % (pe, verstr)
return verstr
def do_show_overlayed(self, args):
"""list overlayed recipes (where the same recipe exists in another layer that has a higher layer priority)
usage: show-overlayed [-f] [-s]
Lists the names of overlayed recipes and the available versions in each
layer, with the preferred version first. Note that skipped recipes that
are overlayed will also be listed, with a " (skipped)" suffix.
Options:
-f instead of the default formatting, list filenames of higher priority
recipes with the ones they overlay indented underneath
-s only list overlayed recipes where the version is the same
"""
self.check_prepare_cooker()
show_filenames = False
show_same_ver_only = False
for arg in args.split():
if arg == '-f':
show_filenames = True
elif arg == '-s':
show_same_ver_only = True
else:
sys.stderr.write("show-overlayed: invalid option %s\n" % arg)
self.do_help('')
return
items_listed = self.list_recipes('Overlayed recipes', None, True, show_same_ver_only, show_filenames, True)
# Check for overlayed .bbclass files
classes = defaultdict(list)
for layerdir in self.bblayers:
classdir = os.path.join(layerdir, 'classes')
if os.path.exists(classdir):
for classfile in os.listdir(classdir):
if os.path.splitext(classfile)[1] == '.bbclass':
classes[classfile].append(classdir)
# Locating classes and other files is a bit more complicated than recipes -
# layer priority is not a factor; instead BitBake uses the first matching
# file in BBPATH, which is manipulated directly by each layer's
# conf/layer.conf in turn, thus the order of layers in bblayers.conf is a
# factor - however, each layer.conf is free to either prepend or append to
# BBPATH (or indeed do crazy stuff with it). Thus the order in BBPATH might
# not be exactly the order present in bblayers.conf either.
bbpath = str(self.config_data.getVar('BBPATH', True))
overlayed_class_found = False
for (classfile, classdirs) in classes.items():
if len(classdirs) > 1:
if not overlayed_class_found:
logger.plain('=== Overlayed classes ===')
overlayed_class_found = True
mainfile = bb.utils.which(bbpath, os.path.join('classes', classfile))
if show_filenames:
logger.plain('%s' % mainfile)
else:
# We effectively have to guess the layer here
logger.plain('%s:' % classfile)
mainlayername = '?'
for layerdir in self.bblayers:
classdir = os.path.join(layerdir, 'classes')
if mainfile.startswith(classdir):
mainlayername = self.get_layer_name(layerdir)
logger.plain(' %s' % mainlayername)
for classdir in classdirs:
fullpath = os.path.join(classdir, classfile)
if fullpath != mainfile:
if show_filenames:
print(' %s' % fullpath)
else:
print(' %s' % self.get_layer_name(os.path.dirname(classdir)))
if overlayed_class_found:
items_listed = True
if not items_listed:
logger.note('No overlayed files found')
def do_show_recipes(self, args):
"""list available recipes, showing the layer they are provided by
usage: show-recipes [-f] [-m] [pnspec]
Lists the names of overlayed recipes and the available versions in each
layer, with the preferred version first. Optionally you may specify
pnspec to match a specified recipe name (supports wildcards). Note that
skipped recipes will also be listed, with a " (skipped)" suffix.
Options:
-f instead of the default formatting, list filenames of higher priority
recipes with other available recipes indented underneath
-m only list where multiple recipes (in the same layer or different
layers) exist for the same recipe name
"""
self.check_prepare_cooker()
show_filenames = False
show_multi_provider_only = False
pnspec = None
title = 'Available recipes:'
for arg in args.split():
if arg == '-f':
show_filenames = True
elif arg == '-m':
show_multi_provider_only = True
elif not arg.startswith('-'):
pnspec = arg
title = 'Available recipes matching %s:' % pnspec
else:
sys.stderr.write("show-recipes: invalid option %s\n" % arg)
self.do_help('')
return
self.list_recipes(title, pnspec, False, False, show_filenames, show_multi_provider_only)
def list_recipes(self, title, pnspec, show_overlayed_only, show_same_ver_only, show_filenames, show_multi_provider_only):
pkg_pn = self.cooker.status.pkg_pn
(latest_versions, preferred_versions) = bb.providers.findProviders(self.cooker.configuration.data, self.cooker.status, pkg_pn)
allproviders = bb.providers.allProviders(self.cooker.status)
# Ensure we list skipped recipes
# We are largely guessing about PN, PV and the preferred version here,
# but we have no choice since skipped recipes are not fully parsed
skiplist = self.cooker.skiplist.keys()
skiplist.sort( key=lambda fileitem: self.cooker.calc_bbfile_priority(fileitem) )
skiplist.reverse()
for fn in skiplist:
recipe_parts = os.path.splitext(os.path.basename(fn))[0].split('_')
p = recipe_parts[0]
if len(recipe_parts) > 1:
ver = (None, recipe_parts[1], None)
else:
ver = (None, 'unknown', None)
allproviders[p].append((ver, fn))
if not p in pkg_pn:
pkg_pn[p] = 'dummy'
preferred_versions[p] = (ver, fn)
def print_item(f, pn, ver, layer, ispref):
if f in skiplist:
skipped = ' (skipped)'
else:
skipped = ''
if show_filenames:
if ispref:
logger.plain("%s%s", f, skipped)
else:
logger.plain(" %s%s", f, skipped)
else:
if ispref:
logger.plain("%s:", pn)
logger.plain(" %s %s%s", layer.ljust(20), ver, skipped)
preffiles = []
items_listed = False
for p in sorted(pkg_pn):
if pnspec:
if not fnmatch.fnmatch(p, pnspec):
continue
if len(allproviders[p]) > 1 or not show_multi_provider_only:
pref = preferred_versions[p]
preffile = bb.cache.Cache.virtualfn2realfn(pref[1])[0]
if preffile not in preffiles:
preflayer = self.get_file_layer(preffile)
multilayer = False
same_ver = True
provs = []
for prov in allproviders[p]:
provfile = bb.cache.Cache.virtualfn2realfn(prov[1])[0]
provlayer = self.get_file_layer(provfile)
provs.append((provfile, provlayer, prov[0]))
if provlayer != preflayer:
multilayer = True
if prov[0] != pref[0]:
same_ver = False
if (multilayer or not show_overlayed_only) and (same_ver or not show_same_ver_only):
if not items_listed:
logger.plain('=== %s ===' % title)
items_listed = True
print_item(preffile, p, self.version_str(pref[0][0], pref[0][1]), preflayer, True)
for (provfile, provlayer, provver) in provs:
if provfile != preffile:
print_item(provfile, p, self.version_str(provver[0], provver[1]), provlayer, False)
# Ensure we don't show two entries for BBCLASSEXTENDed recipes
preffiles.append(preffile)
return items_listed
def do_flatten(self, args):
"""flattens layer configuration into a separate output directory.
usage: flatten [layer1 layer2 [layer3]...] <outputdir>
Takes the specified layers (or all layers in the current layer
configuration if none are specified) and builds a "flattened" directory
containing the contents of all layers, with any overlayed recipes removed
and bbappends appended to the corresponding recipes. Note that some manual
cleanup may still be necessary afterwards, in particular:
* where non-recipe files (such as patches) are overwritten (the flatten
command will show a warning for these)
* where anything beyond the normal layer setup has been added to
layer.conf (only the lowest priority number layer's layer.conf is used)
* overridden/appended items from bbappends will need to be tidied up
* when the flattened layers do not have the same directory structure (the
flatten command should show a warning when this will cause a problem)
Warning: if you flatten several layers where another layer is intended to
be used "inbetween" them (in layer priority order) such that recipes /
bbappends in the layers interact, and then attempt to use the new output
layer together with that other layer, you may no longer get the same
build results (as the layer priority order has effectively changed).
"""
arglist = args.split()
if len(arglist) < 1:
logger.error('Please specify an output directory')
self.do_help('flatten')
return
if len(arglist) == 2:
logger.error('If you specify layers to flatten you must specify at least two')
self.do_help('flatten')
return
outputdir = arglist[-1]
if os.path.exists(outputdir) and os.listdir(outputdir):
logger.error('Directory %s exists and is non-empty, please clear it out first' % outputdir)
return
self.check_prepare_cooker()
layers = self.bblayers
if len(arglist) > 2:
layernames = arglist[:-1]
found_layernames = []
found_layerdirs = []
for layerdir in layers:
layername = self.get_layer_name(layerdir)
if layername in layernames:
found_layerdirs.append(layerdir)
found_layernames.append(layername)
for layername in layernames:
if not layername in found_layernames:
logger.error('Unable to find layer %s in current configuration, please run "%s show-layers" to list configured layers' % (layername, os.path.basename(sys.argv[0])))
return
layers = found_layerdirs
else:
layernames = []
# Ensure a specified path matches our list of layers
def layer_path_match(path):
for layerdir in layers:
if path.startswith(os.path.join(layerdir, '')):
return layerdir
return None
appended_recipes = []
for layer in layers:
overlayed = []
for f in self.cooker.overlayed.iterkeys():
for of in self.cooker.overlayed[f]:
if of.startswith(layer):
overlayed.append(of)
logger.plain('Copying files from %s...' % layer )
for root, dirs, files in os.walk(layer):
for f1 in files:
f1full = os.sep.join([root, f1])
if f1full in overlayed:
logger.plain(' Skipping overlayed file %s' % f1full )
else:
ext = os.path.splitext(f1)[1]
if ext != '.bbappend':
fdest = f1full[len(layer):]
fdest = os.path.normpath(os.sep.join([outputdir,fdest]))
bb.utils.mkdirhier(os.path.dirname(fdest))
if os.path.exists(fdest):
if f1 == 'layer.conf' and root.endswith('/conf'):
logger.plain(' Skipping layer config file %s' % f1full )
continue
else:
logger.warn('Overwriting file %s', fdest)
bb.utils.copyfile(f1full, fdest)
if ext == '.bb':
if f1 in self.cooker_data.appends:
appends = self.cooker_data.appends[f1]
if appends:
logger.plain(' Applying appends to %s' % fdest )
for appendname in appends:
if layer_path_match(appendname):
self.apply_append(appendname, fdest)
appended_recipes.append(f1)
# Take care of when some layers are excluded and yet we have included bbappends for those recipes
for recipename in self.cooker_data.appends.iterkeys():
if recipename not in appended_recipes:
appends = self.cooker_data.appends[recipename]
first_append = None
for appendname in appends:
layer = layer_path_match(appendname)
if layer:
if first_append:
self.apply_append(appendname, first_append)
else:
fdest = appendname[len(layer):]
fdest = os.path.normpath(os.sep.join([outputdir,fdest]))
bb.utils.mkdirhier(os.path.dirname(fdest))
bb.utils.copyfile(appendname, fdest)
first_append = fdest
# Get the regex for the first layer in our list (which is where the conf/layer.conf file will
# have come from)
first_regex = None
layerdir = layers[0]
for layername, pattern, regex, _ in self.cooker.status.bbfile_config_priorities:
if regex.match(os.path.join(layerdir, 'test')):
first_regex = regex
break
if first_regex:
# Find the BBFILES entries that match (which will have come from this conf/layer.conf file)
bbfiles = str(self.config_data.getVar('BBFILES', True)).split()
bbfiles_layer = []
for item in bbfiles:
if first_regex.match(item):
newpath = os.path.join(outputdir, item[len(layerdir)+1:])
bbfiles_layer.append(newpath)
if bbfiles_layer:
# Check that all important layer files match BBFILES
for root, dirs, files in os.walk(outputdir):
for f1 in files:
ext = os.path.splitext(f1)[1]
if ext in ['.bb', '.bbappend']:
f1full = os.sep.join([root, f1])
entry_found = False
for item in bbfiles_layer:
if fnmatch.fnmatch(f1full, item):
entry_found = True
break
if not entry_found:
logger.warning("File %s does not match the flattened layer's BBFILES setting, you may need to edit conf/layer.conf or move the file elsewhere" % f1full)
def get_file_layer(self, filename):
for layer, _, regex, _ in self.cooker.status.bbfile_config_priorities:
if regex.match(filename):
for layerdir in self.bblayers:
if regex.match(os.path.join(layerdir, 'test')):
return self.get_layer_name(layerdir)
return "?"
def get_layer_name(self, layerdir):
return os.path.basename(layerdir.rstrip(os.sep))
def apply_append(self, appendname, recipename):
appendfile = open(appendname, 'r')
recipefile = open(recipename, 'a')
recipefile.write('\n')
recipefile.write('##### bbappended from %s #####\n' % self.get_file_layer(appendname))
recipefile.writelines(appendfile.readlines())
recipefile.close()
appendfile.close()
def do_show_appends(self, args):
"""list bbappend files and recipe files they apply to
usage: show-appends
Recipes are listed with the bbappends that apply to them as subitems.
"""
self.check_prepare_cooker()
if not self.cooker_data.appends:
logger.plain('No append files found')
return
logger.plain('State of append files:')
pnlist = list(self.cooker_data.pkg_pn.keys())
pnlist.sort()
for pn in pnlist:
self.show_appends_for_pn(pn)
self.show_appends_for_skipped()
def show_appends_for_pn(self, pn):
filenames = self.cooker_data.pkg_pn[pn]
best = bb.providers.findBestProvider(pn,
self.cooker.configuration.data,
self.cooker_data,
self.cooker_data.pkg_pn)
best_filename = os.path.basename(best[3])
self.show_appends_output(filenames, best_filename)
def show_appends_for_skipped(self):
filenames = [os.path.basename(f)
for f in self.cooker.skiplist.iterkeys()]
self.show_appends_output(filenames, None, " (skipped)")
def show_appends_output(self, filenames, best_filename, name_suffix = ''):
appended, missing = self.get_appends_for_files(filenames)
if appended:
for basename, appends in appended:
logger.plain('%s%s:', basename, name_suffix)
for append in appends:
logger.plain(' %s', append)
if best_filename:
if best_filename in missing:
logger.warn('%s: missing append for preferred version',
best_filename)
self.returncode |= 1
def get_appends_for_files(self, filenames):
appended, notappended = [], []
for filename in filenames:
_, cls = bb.cache.Cache.virtualfn2realfn(filename)
if cls:
continue
basename = os.path.basename(filename)
appends = self.cooker_data.appends.get(basename)
if appends:
appended.append((basename, list(appends)))
else:
notappended.append(basename)
return appended, notappended
class Config(object):
def __init__(self, **options):
self.pkgs_to_build = []
self.debug_domains = []
self.extra_assume_provided = []
self.prefile = []
self.postfile = []
self.debug = 0
self.__dict__.update(options)
def __getattr__(self, attribute):
try:
return super(Config, self).__getattribute__(attribute)
except AttributeError:
return None
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]) or 0)
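From the command implementations above, typical invocations look like:

$ bitbake-layers show-layers
$ bitbake-layers show-appends
$ bitbake-layers show-overlayed -f
$ bitbake-layers flatten <outputdir>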

View File

@@ -1,55 +0,0 @@
#!/usr/bin/env python
import os
import sys,logging
import optparse
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),'lib'))
import prserv
import prserv.serv
__version__="1.0.0"
PRHOST_DEFAULT='0.0.0.0'
PRPORT_DEFAULT=8585
def main():
parser = optparse.OptionParser(
version="Bitbake PR Service Core version %s, %%prog version %s" % (prserv.__version__, __version__),
usage = "%prog < --start | --stop > [options]")
parser.add_option("-f", "--file", help="database filename(default: prserv.sqlite3)", action="store",
dest="dbfile", type="string", default="prserv.sqlite3")
parser.add_option("-l", "--log", help="log filename(default: prserv.log)", action="store",
dest="logfile", type="string", default="prserv.log")
parser.add_option("--loglevel", help="logging level, i.e. CRITICAL, ERROR, WARNING, INFO, DEBUG",
action = "store", type="string", dest="loglevel", default = "INFO")
parser.add_option("--start", help="start daemon",
action="store_true", dest="start")
parser.add_option("--stop", help="stop daemon",
action="store_true", dest="stop")
parser.add_option("--host", help="ip address to bind", action="store",
dest="host", type="string", default=PRHOST_DEFAULT)
parser.add_option("--port", help="port number(default: 8585)", action="store",
dest="port", type="int", default=PRPORT_DEFAULT)
options, args = parser.parse_args(sys.argv)
prserv.init_logger(os.path.abspath(options.logfile),options.loglevel)
if options.start:
ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile))
elif options.stop:
ret=prserv.serv.stop_daemon(options.host, options.port)
else:
ret=parser.print_help()
return ret
if __name__ == "__main__":
try:
ret = main()
except Exception:
ret = 1
import traceback
traceback.print_exc(5)
sys.exit(ret)
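From the option definitions above, typical invocations look like (the host
and port shown are the built-in defaults):

$ bitbake-prserv --start --host 0.0.0.0 --port 8585
$ bitbake-prserv --stop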

View File

@@ -1,119 +0,0 @@
#!/usr/bin/env python

import os
import sys
import warnings
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))

import bb
from bb import fetch2
import logging

logger = logging.getLogger("BitBake")

try:
    import cPickle as pickle
except ImportError:
    import pickle
    bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")

class BBConfiguration(object):
    """
    Manages build options and configurations for one run
    """

    def __init__(self, **options):
        self.data = {}
        self.file = []
        self.cmd = None
        self.dump_signatures = True
        self.prefile = []
        self.postfile = []
        self.parse_only = True

    def __getattr__(self, attribute):
        try:
            return super(BBConfiguration, self).__getattribute__(attribute)
        except AttributeError:
            return None

_warnings_showwarning = warnings.showwarning
def _showwarning(message, category, filename, lineno, file=None, line=None):
    """Display python warning messages using bb.msg"""
    if file is not None:
        if _warnings_showwarning is not None:
            _warnings_showwarning(message, category, filename, lineno, file, line)
    else:
        s = warnings.formatwarning(message, category, filename, lineno)
        s = s.split("\n")[0]
        bb.msg.warn(None, s)

warnings.showwarning = _showwarning
warnings.simplefilter("ignore", DeprecationWarning)

import bb.event
import bb.cooker

buildfile = sys.argv[1]
taskname = sys.argv[2]
if len(sys.argv) >= 4:
    dryrun = sys.argv[3]
else:
    dryrun = False

if len(sys.argv) >= 5:
    hashfile = sys.argv[4]
    p = pickle.Unpickler(file(hashfile, "rb"))
    hashdata = p.load()
else:
    hashdata = None

handler = bb.event.LogHandler()
logger.addHandler(handler)

#An example to make debug log messages show up
#bb.msg.init_msgconfig(True, 3, [])

console = logging.StreamHandler(sys.stdout)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
bb.msg.addDefaultlogFilter(console)
console.setFormatter(format)

def worker_fire(event, d):
    if isinstance(event, logging.LogRecord):
        console.handle(event)
bb.event.worker_fire = worker_fire
bb.event.worker_pid = os.getpid()

initialenv = os.environ.copy()
config = BBConfiguration()

def register_idle_function(self, function, data):
    pass

cooker = bb.cooker.BBCooker(config, register_idle_function, initialenv)
config_data = cooker.configuration.data
cooker.status = config_data
cooker.handleCollections(config_data.getVar("BBFILE_COLLECTIONS", 1))

fn, cls = bb.cache.Cache.virtualfn2realfn(buildfile)
buildfile = cooker.matchFile(fn)
fn = bb.cache.Cache.realfn2virtual(buildfile, cls)

cooker.buildSetVars()

# Load data into the cache for fn and parse the loaded cache data
the_data = bb.cache.Cache.loadDataFull(fn, cooker.get_file_appends(fn), cooker.configuration.data)

if taskname.endswith("_setscene"):
    the_data.setVarFlag(taskname, "quieterrors", "1")

if hashdata:
    bb.parse.siggen.set_taskdata(hashdata["hashes"], hashdata["deps"])
    for h in hashdata["hashes"]:
        the_data.setVar("BBHASH_%s" % h, hashdata["hashes"][h])
    for h in hashdata["deps"]:
        the_data.setVar("BBHASHDEPS_%s" % h, hashdata["deps"][h])

ret = 0
if dryrun != "True":
    ret = bb.build.exec_task(fn, taskname, the_data)
sys.exit(ret)
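Read together with the sys.argv handling above, the expected invocation is roughly the following; the paths and task name are illustrative only, and the script's installed name (bitbake-runtask) is an assumption:

    bitbake-runtask /path/to/recipes/foo_1.0.bb do_compile True /tmp/taskhashes.pickle

The first two arguments (buildfile and taskname) are required; passing the literal string "True" as the third argument enables dry-run mode (the task is set up but exec_task is skipped), and the optional fourth argument names a pickled hash/dependency dump that is fed to bb.parse.siggen.set_taskdata().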

View File

@@ -20,7 +20,7 @@
import optparse, os, sys
# bitbake
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
import bb
import bb.parse
from string import split, join
@@ -48,7 +48,7 @@ class HTMLFormatter:
        From pydoc... almost identical at least
        """
        while pairs:
            (a, b) = pairs[0]
            (a,b) = pairs[0]
            text = join(split(text, a), b)
            pairs = pairs[1:]
        return text
@@ -87,7 +87,7 @@ class HTMLFormatter:
return txt + ",".join(txts)
def groups(self, item):
def groups(self,item):
"""
Create HTML to link to related groups
"""
@@ -99,12 +99,12 @@ class HTMLFormatter:
txt = "<p><b>See also:</b><br>"
txts = []
for group in item.groups():
txts.append( """<a href="group%s.html">%s</a> """ % (group, group) )
txts.append( """<a href="group%s.html">%s</a> """ % (group,group) )
return txt + ",".join(txts)
def createKeySite(self, item):
def createKeySite(self,item):
"""
Create a site for a key. It contains the header/navigator, a heading,
the description, links to related keys and to the groups.
@@ -149,7 +149,8 @@ class HTMLFormatter:
"""
groups = ""
sorted_groups = sorted(doc.groups())
sorted_groups = doc.groups()
sorted_groups.sort()
for group in sorted_groups:
groups += """<a href="group%s.html">%s</a><br>""" % (group, group)
@@ -184,7 +185,8 @@ class HTMLFormatter:
        Create Overview of all available keys
        """
        keys = ""
        sorted_keys = sorted(doc.doc_keys())
        sorted_keys = doc.doc_keys()
        sorted_keys.sort()
        for key in sorted_keys:
            keys += """<a href="key%s.html">%s</a><br>""" % (key, key)
@@ -212,7 +214,7 @@ class HTMLFormatter:
description += "<h2 Description of Grozp %s</h2>" % gr
description += _description
items.sort(lambda x, y:cmp(x.name(), y.name()))
items.sort(lambda x,y:cmp(x.name(),y.name()))
for group in items:
groups += """<a href="key%s.html">%s</a><br>""" % (group.name(), group.name())
@@ -341,7 +343,7 @@ class DocumentationItem:
    def addGroup(self, group):
        self._groups.append(group)

    def addRelation(self, relation):
    def addRelation(self,relation):
        self._related.append(relation)

    def sort(self):
@@ -394,7 +396,7 @@ class Documentation:
"""
return self.__groups.keys()
def group_content(self, group_name):
def group_content(self,group_name):
"""
Return a list of keys/names that are in a specefic
group or the empty list
@@ -410,7 +412,7 @@ def parse_cmdline(args):
    Parse the CMD line and return the result as an n-tuple
    """
    parser = optparse.OptionParser( version = "Bitbake Documentation Tool Core version %s, %%prog version %s" % (bb.__version__, __version__))
    parser = optparse.OptionParser( version = "Bitbake Documentation Tool Core version %s, %%prog version %s" % (bb.__version__,__version__))
    usage = """%prog [options]
Create a set of html pages (documentation) for a bitbake.conf....
@@ -426,12 +428,13 @@ Create a set of html pages (documentation) for a bitbake.conf....
parser.add_option( "-D", "--debug", help = "Increase the debug level",
action = "count", dest = "debug", default = 0 )
parser.add_option( "-v", "--verbose", help = "output more chit-char to the terminal",
parser.add_option( "-v","--verbose", help = "output more chit-char to the terminal",
action = "store_true", dest = "verbose", default = False )
options, args = parser.parse_args( sys.argv )
bb.msg.init_msgconfig(options.verbose, options.debug)
if options.debug:
bb.msg.set_debug_level(options.debug)
return options.config, options.output
@@ -440,7 +443,7 @@ def main():
    The main Method
    """
    (config_file, output_dir) = parse_cmdline( sys.argv )
    (config_file,output_dir) = parse_cmdline( sys.argv )

    # right, let us load the file now
    try:
@@ -450,8 +453,6 @@ def main():
    except bb.parse.ParseError:
        bb.fatal( "Unable to parse %s" % config_file )

    if isinstance(documentation, dict):
        documentation = documentation[""]

    # Assuming we have the file loaded now, we will initialize the 'tree'
    doc = Documentation()
@@ -462,7 +463,7 @@ def main():
    state_group = 2

    for key in bb.data.keys(documentation):
        data = documentation.getVarFlag(key, "doc")
        data = bb.data.getVarFlag(key, "doc", documentation)
        if not data:
            continue
@@ -496,7 +497,7 @@ def main():
        doc.insert_doc_item(doc_ins)

    # let us create the HTML now
    bb.utils.mkdirhier(output_dir)
    bb.mkdirhier(output_dir)
    os.chdir(output_dir)

    # Let us create the sites now. We do it in the following order

View File

@@ -1,24 +1,4 @@
" Vim filetype detection file
" Language: BitBake
" Author: Ricardo Salveti <rsalveti@rsalveti.net>
" Copyright: Copyright (C) 2008 Ricardo Salveti <rsalveti@rsalveti.net>
" Licence: You may redistribute this under the same terms as Vim itself
"
" This sets up the syntax highlighting for BitBake files, like .bb, .bbclass and .inc
if &compatible || version < 600
    finish
endif
" .bb and .bbclass
au BufNewFile,BufRead *.b{b,bclass} set filetype=bitbake
" .inc
au BufNewFile,BufRead *.inc set filetype=bitbake
" .conf
au BufNewFile,BufRead *.conf
\ if (match(expand("%:p:h"), "conf") > 0) |
\ set filetype=bitbake |
\ endif
au BufNewFile,BufRead *.bb setfiletype bitbake
au BufNewFile,BufRead *.bbclass setfiletype bitbake
au BufNewFile,BufRead *.inc setfiletype bitbake
" au BufNewFile,BufRead *.conf setfiletype bitbake

View File

@@ -1 +0,0 @@
set sts=4 sw=4 et

View File

@@ -1,85 +0,0 @@
" Vim plugin file
" Purpose: Create a template for new bb files
" Author: Ricardo Salveti <rsalveti@gmail.com>
" Copyright: Copyright (C) 2008 Ricardo Salveti <rsalveti@gmail.com>
"
" This file is licensed under the MIT license, see COPYING.MIT in
" this source distribution for the terms.
"
" Based on the gentoo-syntax package
"
" Will try to use git to find the user name and email
if &compatible || v:version < 600
    finish
endif
fun! <SID>GetUserName()
    let l:user_name = system("git-config --get user.name")
    if v:shell_error
        return "Unknown User"
    else
        return substitute(l:user_name, "\n", "", "")
    endif
endfun

fun! <SID>GetUserEmail()
    let l:user_email = system("git-config --get user.email")
    if v:shell_error
        return "unknown@user.org"
    else
        return substitute(l:user_email, "\n", "", "")
    endif
endfun
fun! BBHeader()
    let l:current_year = strftime("%Y")
    let l:user_name = <SID>GetUserName()
    let l:user_email = <SID>GetUserEmail()
    0 put ='# Copyright (C) ' . l:current_year .
                \ ' ' . l:user_name . ' <' . l:user_email . '>'
    put ='# Released under the MIT license (see COPYING.MIT for the terms)'
    $
endfun

fun! NewBBTemplate()
    let l:paste = &paste
    set nopaste

    " Get the header
    call BBHeader()

    " Now the bb template
    put ='DESCRIPTION = \"\"'
    put ='HOMEPAGE = \"\"'
    put ='LICENSE = \"\"'
    put ='SECTION = \"\"'
    put ='DEPENDS = \"\"'
    put ='PR = \"r0\"'
    put =''
    put ='SRC_URI = \"\"'

    " Go to the first place to edit
    0
    /^DESCRIPTION =/
    exec "normal 2f\""

    if l:paste == 1
        set paste
    endif
endfun
if !exists("g:bb_create_on_empty")
let g:bb_create_on_empty = 1
endif
" disable in case of vimdiff
if v:progname =~ "vimdiff"
let g:bb_create_on_empty = 0
endif
augroup NewBB
au BufNewFile *.bb
\ if g:bb_create_on_empty |
\ call NewBBTemplate() |
\ endif
augroup END

View File

@@ -1,123 +1,120 @@
" Vim syntax file
" Language: BitBake bb/bbclasses/inc
" Author: Chris Larson <kergoth@handhelds.org>
" Ricardo Salveti <rsalveti@rsalveti.net>
" Copyright: Copyright (C) 2004 Chris Larson <kergoth@handhelds.org>
" Copyright (C) 2008 Ricardo Salveti <rsalveti@rsalveti.net>
"
" Copyright (C) 2004 Chris Larson <kergoth@handhelds.org>
" This file is licensed under the MIT license, see COPYING.MIT in
" this source distribution for the terms.
"
" Syntax highlighting for bb, bbclasses and inc files.
" Language: BitBake
" Maintainer: Chris Larson <kergoth@handhelds.org>
" Filenames: *.bb, *.bbclass
if version < 600
    syntax clear
elseif exists("b:current_syntax")
    finish
endif
syn case match
" Catch incorrect syntax (only matches if nothing else does)
"
" It's an entirely new type, just has specific syntax in shell and python code
syn match bbUnmatched "."
if &compatible || v:version < 600
    finish
endif

if exists("b:current_syntax")
    finish
endif
" Other
syn match bbComment "^#.*$" display contains=bbTodo
syn keyword bbTodo TODO FIXME XXX contained
syn match bbDelimiter "[(){}=]" contained
syn match bbQuote /['"]/ contained
syn match bbArrayBrackets "[\[\]]" contained
" BitBake strings
syn match bbContinue "\\$"
syn region bbString matchgroup=bbQuote start=/"/ skip=/\\$/ excludenl end=/"/ contained keepend contains=bbTodo,bbContinue,bbVarDeref
syn region bbString matchgroup=bbQuote start=/'/ skip=/\\$/ excludenl end=/'/ contained keepend contains=bbTodo,bbContinue,bbVarDeref
" BitBake variable metadata
syn keyword bbExportFlag export contained nextgroup=bbIdentifier skipwhite
syn match bbVarDeref "${[a-zA-Z0-9\-_\.]\+}" contained
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.]\+\(_[${}a-zA-Z0-9\-_\.]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq
syn match bbIdentifier "[a-zA-Z0-9\-_\.]\+" display contained
"syn keyword bbVarEq = display contained nextgroup=bbVarValue
syn match bbVarEq "\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)" contained nextgroup=bbVarValue
syn match bbVarValue ".*$" contained contains=bbString,bbVarDeref
" BitBake variable metadata flags
syn match bbVarFlagDef "^\([a-zA-Z0-9\-_\.]\+\)\(\[[a-zA-Z0-9\-_\.]\+\]\)\@=" contains=bbIdentifier nextgroup=bbVarFlagFlag
syn region bbVarFlagFlag matchgroup=bbArrayBrackets start="\[" end="\]\s*\(=\)\@=" keepend excludenl contained contains=bbIdentifier nextgroup=bbVarEq
"syn match bbVarFlagFlag "\[\([a-zA-Z0-9\-_\.]\+\)\]\s*\(=\)\@=" contains=bbIdentifier nextgroup=bbVarEq
" Functions!
syn match bbFunction "\h\w*" display contained
" BitBake python metadata
syn include @python syntax/python.vim
if exists("b:current_syntax")
unlet b:current_syntax
endif
" BitBake syntax
syn keyword bbPythonFlag python contained nextgroup=bbFunction
syn match bbPythonFuncDef "^\(python\s\+\)\(\w\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbPythonFlag,bbFunction,bbDelimiter nextgroup=bbPythonFuncRegion skipwhite
syn region bbPythonFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@python
"hi def link bbPythonFuncRegion Comment
" Matching case
syn case match
" Indicates the error when nothing is matched
syn match bbUnmatched "."
" Comments
syn cluster bbCommentGroup contains=bbTodo,@Spell
syn keyword bbTodo COMBAK FIXME TODO XXX contained
syn match bbComment "#.*$" contains=@bbCommentGroup
" String helpers
syn match bbQuote +['"]+ contained
syn match bbDelimiter "[(){}=]" contained
syn match bbArrayBrackets "[\[\]]" contained
" BitBake strings
syn match bbContinue "\\$"
syn region bbString matchgroup=bbQuote start=+"+ skip=+\\$+ excludenl end=+"+ contained keepend contains=bbTodo,bbContinue,bbVarDeref,bbVarPyValue,@Spell
syn region bbString matchgroup=bbQuote start=+'+ skip=+\\$+ excludenl end=+'+ contained keepend contains=bbTodo,bbContinue,bbVarDeref,bbVarPyValue,@Spell
" Vars definition
syn match bbExport "^export" nextgroup=bbIdentifier skipwhite
syn keyword bbExportFlag export contained nextgroup=bbIdentifier skipwhite
syn match bbIdentifier "[a-zA-Z0-9\-_\.\/\+]\+" display contained
syn match bbVarDeref "${[a-zA-Z0-9\-_\.\/\+]\+}" contained
syn match bbVarEq "\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)" contained nextgroup=bbVarValue
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+]\+\(_[${}a-zA-Z0-9\-_\.\/\+]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq
syn match bbVarValue ".*$" contained contains=bbString,bbVarDeref,bbVarPyValue
syn region bbVarPyValue start=+${@+ skip=+\\$+ excludenl end=+}+ contained contains=@python
" Vars metadata flags
syn match bbVarFlagDef "^\([a-zA-Z0-9\-_\.]\+\)\(\[[a-zA-Z0-9\-_\.]\+\]\)\@=" contains=bbIdentifier nextgroup=bbVarFlagFlag
syn region bbVarFlagFlag matchgroup=bbArrayBrackets start="\[" end="\]\s*\(=\)\@=" keepend excludenl contained contains=bbIdentifier nextgroup=bbVarEq
" Includes and requires
syn keyword bbInclude inherit include require contained
syn match bbIncludeRest ".*$" contained contains=bbString,bbVarDeref
syn match bbIncludeLine "^\(inherit\|include\|require\)\s\+" contains=bbInclude nextgroup=bbIncludeRest
" Add taks and similar
syn keyword bbStatement addtask addhandler after before EXPORT_FUNCTIONS contained
syn match bbStatementRest ".*$" skipwhite contained contains=bbStatement
syn match bbStatementLine "^\(addtask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
" OE Important Functions
syn keyword bbOEFunctions do_fetch do_unpack do_patch do_configure do_compile do_stage do_install do_package contained
" Generic Functions
syn match bbFunction "\h[0-9A-Za-z_-]*" display contained contains=bbOEFunctions
" BitBake shell metadata
syn include @shell syntax/sh.vim
if exists("b:current_syntax")
unlet b:current_syntax
endif
syn keyword bbShFakeRootFlag fakeroot contained
syn match bbShFuncDef "^\(fakeroot\s*\)\?\([0-9A-Za-z_-]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbDelimiter nextgroup=bbShFuncRegion skipwhite
syn region bbShFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@shell
" BitBake python metadata
syn keyword bbPyFlag python contained
syn match bbPyFuncDef "^\(python\s\+\)\([0-9A-Za-z_-]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbPyFlag,bbFunction,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
syn region bbPyFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@python
syn keyword bbFakerootFlag fakeroot contained nextgroup=bbFunction
syn match bbShellFuncDef "^\(fakeroot\s*\)\?\(\w\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbFakerootFlag,bbFunction,bbDelimiter nextgroup=bbShellFuncRegion skipwhite
syn region bbShellFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" keepend contained contains=@shell
"hi def link bbShellFuncRegion Comment
" BitBake 'def'd python functions
syn keyword bbPyDef def contained
syn region bbPyDefRegion start='^\(def\s\+\)\([0-9A-Za-z_-]\+\)\(\s*(.*)\s*\):\s*$' end='^\(\s\|$\)\@!' contains=@python
syn keyword bbDef def contained
syn region bbDefRegion start='^def\s\+\w\+\s*([^)]*)\s*:\s*$' end='^\(\s\|$\)\@!' contains=@python
" Highlighting Definitions
hi def link bbUnmatched Error
hi def link bbInclude Include
hi def link bbTodo Todo
hi def link bbComment Comment
hi def link bbQuote String
hi def link bbString String
hi def link bbDelimiter Keyword
hi def link bbArrayBrackets Statement
hi def link bbContinue Special
hi def link bbExport Type
hi def link bbExportFlag Type
hi def link bbIdentifier Identifier
hi def link bbVarDeref PreProc
hi def link bbVarDef Identifier
hi def link bbVarValue String
hi def link bbShFakeRootFlag Type
hi def link bbFunction Function
hi def link bbPyFlag Type
hi def link bbPyDef Statement
hi def link bbStatement Statement
hi def link bbStatementRest Identifier
hi def link bbOEFunctions Special
hi def link bbVarPyValue PreProc
" BitBake statements
syn keyword bbStatement include inherit require addtask addhandler EXPORT_FUNCTIONS display contained
syn match bbStatementLine "^\(include\|inherit\|require\|addtask\|addhandler\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
syn match bbStatementRest ".*$" contained contains=bbString,bbVarDeref
" Highlight
"
hi def link bbArrayBrackets Statement
hi def link bbUnmatched Error
hi def link bbVarDeref String
hi def link bbContinue Special
hi def link bbDef Statement
hi def link bbPythonFlag Type
hi def link bbExportFlag Type
hi def link bbFakerootFlag Type
hi def link bbStatement Statement
hi def link bbString String
hi def link bbTodo Todo
hi def link bbComment Comment
hi def link bbOperator Operator
hi def link bbError Error
hi def link bbFunction Function
hi def link bbDelimiter Delimiter
hi def link bbIdentifier Identifier
hi def link bbVarEq Operator
hi def link bbQuote String
hi def link bbVarValue String
let b:current_syntax = "bb"

View File

@@ -32,7 +32,7 @@ command.
\fBbitbake\fP is a program that executes the specified task (default is 'build')
for a given set of BitBake files.
.br
It expects that BBFILES is defined, which is a space separated list of files to
It expects that BBFILES is defined, which is a space seperated list of files to
be executed. BBFILES does support wildcards.
.br
Default BBFILES are the .bb files in the current directory.
@@ -54,9 +54,6 @@ continue as much as possible after an error. While the target that failed, and
those that depend on it, cannot be remade, the other dependencies of these
targets can be processed all the same.
.TP
.B \-a, \-\-tryaltconfigs
continue with builds by trying to use alternative providers where possible.
.TP
.B \-f, \-\-force
force run of specified cmd, regardless of stamp status
.TP
@@ -67,7 +64,7 @@ drop into the interactive mode also called the BitBake shell.
Specify task to execute. Note that this only executes the specified task for
the providee and the packages it depends on, i.e. 'compile' does not implicitly
call stage for the dependencies (IOW: use only if you know what you are doing).
Depending on the base.bbclass a listtasks task is defined and will show
Depending on the base.bbclass a listtaks tasks is defined and will show
available tasks.
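.br
For example, an illustrative invocation (flags as defined elsewhere in this page; the recipe name is hypothetical):
.br
.B bitbake \-b foo_1.0.bb \-c listtasks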
.TP
.B \-rFILE, \-\-read=FILE
@@ -85,6 +82,9 @@ don't execute, just go through the motions
.B \-p, \-\-parse-only
quit after parsing the BB files (developers only)
.TP
.B \-d, \-\-disable-psyco
disable using the psyco just-in-time compiler (not recommended)
.TP
.B \-s, \-\-show-versions
show current and preferred versions of all packages
.TP
@@ -97,13 +97,12 @@ emit the dependency trees of the specified packages in the dot syntax
.B \-IIGNORED\_DOT\_DEPS, \-\-ignore-deps=IGNORED_DOT_DEPS
Stop processing at the given list of dependencies when generating dependency
graphs. This can help to make the graph more appealing
.TP
.B \-lDEBUG_DOMAINS, \-\-log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
.TP
.B \-P, \-\-profile
profile the command and print a report
.TP
.\"
.\" Next option is only in BitBake 1.7.x (trunk)
.\"
.\".TP
.\".B \-lDEBUG_DOMAINS, \-\-log-domains=DEBUG_DOMAINS
.\"Show debug logging for the specified logging domains
.SH AUTHORS
BitBake was written by

View File

@@ -45,7 +45,7 @@ endif
	$(call command,xsltproc --stringparam base.dir $@/ $(if $(htmlcssfile),--stringparam html.stylesheet $(htmlcssfile)) $(htmlxsl) $(manual),XSLTPROC $@ $(manual))

$(xmltotypes): $(manual)
	$(call command,xmlto --with-dblatex --extensions -o $(topdir)/$@ $@ $(manual),XMLTO $@ $(manual))
	$(call command,xmlto --extensions -o $(topdir)/$@ $@ $(manual),XMLTO $@ $(manual))

clean:
	rm -rf $(cleanfiles)

View File

@@ -12,10 +12,9 @@
<corpauthor>BitBake Team</corpauthor>
</authorgroup>
<copyright>
<year>2004, 2005, 2006, 2011</year>
<year>2004, 2005, 2006</year>
<holder>Chris Larson</holder>
<holder>Phil Blundell</holder>
<holder>Richard Purdie</holder>
</copyright>
<legalnotice>
<para>This work is licensed under the Creative Commons Attribution License. To view a copy of this license, visit <ulink url="http://creativecommons.org/licenses/by/2.5/">http://creativecommons.org/licenses/by/2.5/</ulink> or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.</para>
@@ -27,10 +26,10 @@
<title>Overview</title>
<para>BitBake is, at its simplest, a tool for executing
tasks and managing metadata. As such, its similarities to GNU make and other
build tools are readily apparent. It was inspired by Portage, the package management system used by the Gentoo Linux distribution. BitBake is the basis of the <ulink url="http://www.openembedded.org/">OpenEmbedded</ulink> project, which is being used to build and maintain a number of embedded Linux distributions/projects such as Angstrom and the Yocto project.</para>
build tools are readily apparent. It was inspired by Portage, the package management system used by the Gentoo Linux distribution. BitBake is the basis of the <ulink url="http://www.openembedded.org/">OpenEmbedded</ulink> project, which is being used to build and maintain a number of embedded Linux distributions, including OpenZaurus and Familiar.</para>
</section>
<section>
<title>Background and goals</title>
<title>Background and Goals</title>
<para>Prior to BitBake, no other build tool adequately met
the needs of an aspiring embedded Linux distribution. All of the
buildsystems used by traditional desktop Linux distributions lacked
@@ -38,14 +37,14 @@ important functionality, and none of the ad-hoc
<emphasis>buildroot</emphasis> systems, prevalent in the
embedded space, were scalable or maintainable.</para>
<para>Some important original goals for BitBake were:
<para>Some important goals for BitBake were:
<itemizedlist>
<listitem><para>Handle crosscompilation.</para></listitem>
<listitem><para>Handle interpackage dependencies (build time on target architecture, build time on native architecture, and runtime).</para></listitem>
<listitem><para>Support running any number of tasks within a given package, including, but not limited to, fetching upstream sources, unpacking them, patching them, configuring them, et cetera.</para></listitem>
<listitem><para>Must be Linux distribution agnostic (both build and target).</para></listitem>
<listitem><para>Must be linux distribution agnostic (both build and target).</para></listitem>
<listitem><para>Must be architecture agnostic</para></listitem>
<listitem><para>Must support multiple build and target operating systems (including Cygwin, the BSDs, etc).</para></listitem>
<listitem><para>Must support multiple build and target operating systems (including cygwin, the BSDs, etc).</para></listitem>
<listitem><para>Must be able to be self contained, rather than tightly integrated into the build machine's root filesystem.</para></listitem>
<listitem><para>There must be a way to handle conditional metadata (on target architecture, operating system, distribution, machine).</para></listitem>
<listitem><para>It must be easy for the person using the tools to supply their own local metadata and packages to operate against.</para></listitem>
@@ -54,18 +53,10 @@ between multiple projects using BitBake for their
builds.</para></listitem>
<listitem><para>Should provide an inheritance mechanism to
share common metadata between many packages.</para></listitem>
<listitem><para>Et cetera...</para></listitem>
</itemizedlist>
</para>
<para>Over time it has become apparent that some further requirements were necessary:
<itemizedlist>
<listitem><para>Handle variants of a base recipe (native, sdk, multilib).</para></listitem>
<listitem><para>Able to split metadata into layers and allow layers to override each other.</para></listitem>
<listitem><para>Allow representation of a given set of input variables to a task as a checksum.</para></listitem>
<listitem><para>Based on that checksum, allow acceleration of builds with prebuilt components.</para></listitem>
</itemizedlist>
</para>
<para>BitBake satisfies all the original requirements and many more, with extensions being made to the basic functionality to reflect the additional requirements. Flexibility and power have always been the priorities. It is highly extensible, supporting embedded Python code and execution of any arbitrary tasks.</para>
<para>BitBake satisfies all these and many more. Flexibility and power have always been the priorities. It is highly extensible, supporting embedded Python code and execution of any arbitrary tasks.</para>
</section>
</chapter>
<chapter>
@@ -97,17 +88,6 @@ share common metadata between many packages.</para></listitem>
<varname>B</varname> = "pre${A}post"</screen></para>
<para>This results in <varname>A</varname> containing <literal>aval</literal> and <varname>B</varname> containing <literal>preavalpost</literal>.</para>
</section>
<section>
<title>Setting a default value (?=)</title>
<para><screen><varname>A</varname> ?= "aval"</screen></para>
<para>If <varname>A</varname> is set before the above is called, it will retain its previous value. If <varname>A</varname> is unset prior to the above call, <varname>A</varname> will be set to <literal>aval</literal>. Note that this assignment is immediate, so if there are multiple ?= assignments to a single variable, the first of those will be used.</para>
</section>
<section>
<title>Setting a weak default value (??=)</title>
<para><screen><varname>A</varname> ??= "somevalue"
<varname>A</varname> ??= "someothervalue"</screen></para>
<para>If <varname>A</varname> is set before the above, it will retain that value. If <varname>A</varname> is unset prior to the above, <varname>A</varname> will be set to <literal>someothervalue</literal>. This is a lazy/weak assignment in that the assignment does not occur until the end of the parsing process, so that the last, rather than the first, ??= assignment to a given variable will be used. Any other setting of A using = or ?= will however override the value set with ??=</para>
</section>
<section>
<title>Immediate variable expansion (:=)</title>
<para>:= results in a variable's contents being expanded immediately, rather than when the variable is actually used.</para>
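<para>A minimal illustrative sketch (variable names are arbitrary):</para>
<para><screen><varname>B</varname> = "bval"
<varname>A</varname> := "pre${B}post"
<varname>B</varname> = "otherval"</screen></para>
<para>Here <varname>A</varname> is expanded at the := assignment and so holds <literal>prebvalpost</literal>, unaffected by the later change to <varname>B</varname>.</para>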
@@ -134,12 +114,12 @@ share common metadata between many packages.</para></listitem>
<varname>B</varname> .= "additionaldata"
<varname>C</varname> = "cval"
<varname>C</varname> =. "test"</screen></para>
<para>In this example, <varname>B</varname> is now <literal>bvaladditionaldata</literal> and <varname>C</varname> is <literal>testcval</literal>. In contrast to the above appending and prepending operators, no additional space
<para>In this example, <varname>B</varname> is now <literal>bvaladditionaldata</literal> and <varname>C</varname> is <literal>testcval</literal>. In contrast to the above Appending and Prepending operators no additional space
will be introduced.</para>
</section>
<section>
<title>Conditional metadata set</title>
<para>OVERRIDES is a <quote>:</quote> separated variable containing each item you want to satisfy conditions. So, if you have a variable which is conditional on <quote>arm</quote>, and <quote>arm</quote> is in OVERRIDES, then the <quote>arm</quote> specific version of the variable is used rather than the non-conditional version. Example:</para>
<para>OVERRIDES is a <quote>:</quote> seperated variable containing each item you want to satisfy conditions. So, if you have a variable which is conditional on <quote>arm</quote>, and <quote>arm</quote> is in OVERRIDES, then the <quote>arm</quote> specific version of the variable is used rather than the non-conditional version. Example:</para>
<para><screen><varname>OVERRIDES</varname> = "architecture:os:machine"
<varname>TEST</varname> = "defaultvalue"
<varname>TEST_os</varname> = "osspecificvalue"
@@ -156,12 +136,12 @@ will be introduced.</para>
</section>
<section>
<title>Inclusion</title>
<para>Next, there is the <literal>include</literal> directive, which causes BitBake to parse whatever file you specify, and insert it at that location, which is not unlike <command>make</command>. However, if the path specified on the <literal>include</literal> line is a relative path, BitBake will locate the first one it can find within <envar>BBPATH</envar>.</para>
<para>Next, there is the <literal>include</literal> directive, which causes BitBake to parse in whatever file you specify, and insert it at that location, which is not unlike <command>make</command>. However, if the path specified on the <literal>include</literal> line is a relative path, BitBake will locate the first one it can find within <envar>BBPATH</envar>.</para>
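<para>An illustrative use, assuming a hypothetical machine configuration file found on <envar>BBPATH</envar>:</para>
<para><screen>include conf/machine/${MACHINE}.conf</screen></para>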
</section>
<section>
<title>Requiring inclusion</title>
<title>Requiring Inclusion</title>
<para>In contrast to the <literal>include</literal> directive, <literal>require</literal> will
raise a ParseError if the file to be included cannot be found. Otherwise it will behave just like the <literal>
raise a ParseError if the to be included file can not be found. Otherwise it will behave just like the <literal>
include</literal> directive.</para>
</section>
<section>
@@ -180,13 +160,13 @@ include</literal> directive.</para>
import time
print time.strftime('%Y%m%d', time.gmtime())
}</screen></para>
<para>This is similar to the previous example, but flags it as Python so that BitBake knows it is Python code.</para>
<para>This is similar to the previous example, but flags it as python so that BitBake knows it is python code.</para>
</section>
<section>
<title>Defining Python functions into the global Python namespace</title>
<title>Defining python functions into the global python namespace</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para><screen>def get_depends(bb, d):
if d.getVar('SOMECONDITION', True):
if bb.data.getVar('SOMECONDITION', d, True):
return "dependencywithcond"
else:
return "dependency"
@@ -196,119 +176,51 @@ include</literal> directive.</para>
<para>This would result in <varname>DEPENDS</varname> containing <literal>dependencywithcond</literal>.</para>
</section>
<section>
<title>Variable flags</title>
<para>Variables can have associated flags which provide a way of tagging extra information onto a variable. Several flags are used internally by BitBake but they can be used externally too if needed. The standard operations mentioned above also work on flags.</para>
<title>Variable Flags</title>
<para>Variables can have associated flags which provide a way of tagging extra information onto a variable. Several flags are used internally by bitbake but they can be used externally too if needed. The standard operations mentioned above also work on flags.</para>
<para><screen><varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] = "value"</screen></para>
<para>In this example, <varname>VARIABLE</varname> has a flag, <varname>SOMEFLAG</varname> which is set to <literal>value</literal>.</para>
</section>
<section>
<title>Inheritance</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>The <literal>inherit</literal> directive is a means of specifying what classes of functionality your .bb requires. It is a rudimentary form of inheritance. For example, you can easily abstract out the tasks involved in building a package that uses autoconf and automake, and put that into a bbclass for your packages to make use of. A given bbclass is located by searching for classes/filename.bbclass in <envar>BBPATH</envar>, where filename is what you inherited.</para>
<para>The <literal>inherit</literal> directive is a means of specifying what classes of functionality your .bb requires. It is a rudamentary form of inheritence. For example, you can easily abstract out the tasks involved in building a package that uses autoconf and automake, and put that into a bbclass for your packages to make use of. A given bbclass is located by searching for classes/filename.oeclass in <envar>BBPATH</envar>, where filename is what you inherited.</para>
</section>
<section>
<title>Tasks</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>In BitBake, each step that needs to be run for a given .bb is known as a task. There is a command <literal>addtask</literal> to add new tasks (must be a defined Python executable metadata and must start with <quote>do_</quote>) and describe intertask dependencies.</para>
<para>In BitBake, each step that needs to be run for a given .bb is known as a task. There is a command <literal>addtask</literal> to add new tasks (must be a defined python executable metadata and must start with <quote>do_</quote>) and describe intertask dependencies.</para>
<para><screen>python do_printdate () {
import time
print time.strftime('%Y%m%d', time.gmtime())
}
addtask printdate before do_build</screen></para>
<para>This defines the necessary Python function and adds it as a task which is now a dependency of do_build, the default task. If anyone executes the do_build task, that will result in do_printdate being run first.</para>
<para>This defines the necessary python function and adds it as a task which is now a dependency of do_build (the default task). If anyone executes the do_build task, that will result in do_printdate being run first.</para>
</section>
<section>
<title>Task Flags</title>
<para>Tasks support a number of flags which control various functionality of the task. These are as follows:</para>
<para>'dirs' - directories which should be created before the task runs</para>
<para>'cleandirs' - directories which should be created before the task runs but which should be empty</para>
<para>'noexec' - marks the task as being empty, with no execution required. These are used as dependency placeholders, or when added tasks need to be subsequently disabled.</para>
<para>'nostamp' - don't generate a stamp file for a task. This means the task is always re-executed.</para>
<para>'fakeroot' - this task needs to be run in a fakeroot environment, obtained by adding the variables in FAKEROOTENV to the environment.</para>
<para>'umask' - the umask to run the task under.</para>
<para>For the 'deptask', 'rdeptask', 'recdeptask' and 'recrdeptask' flags please see the dependencies section.</para>
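<para>Purely as an illustration, typical flag settings look like this (the task and directory choices are examples only):</para>
<para><screen>do_install[dirs] = "${D}"
do_listtasks[nostamp] = "1"
do_install[fakeroot] = "1"</screen></para>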
</section>
<section>
<title>Events</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>BitBake allows installation of event handlers. Events are triggered at certain points during operation, such as the beginning of operation against a given .bb, the start of a given task, task failure, task success, et cetera. The intent is to make it easy to do things like email notification on build failure.</para>
<para>BitBake allows to install event handlers. Events are triggered at certain points during operation, such as, the beginning of operation against a given .bb, the start of a given task, task failure, task success, et cetera. The intent was to make it easy to do things like email notifications on build failure.</para>
<para><screen>addhandler myclass_eventhandler
python myclass_eventhandler() {
from bb.event import getName
from bb.event import NotHandled, getName
from bb import data
print("The name of the Event is %s" % getName(e))
print("The file we run for is %s" % data.getVar('FILE', e.data, True))
print "The name of the Event is %s" % getName(e)
print "The file we run for is %s" % data.getVar('FILE', e.data, True)
return NotHandled
}
</screen></para><para>
This event handler gets called every time an event is triggered. A global variable <varname>e</varname> is defined. <varname>e</varname>.data contains an instance of bb.data. With the getName(<varname>e</varname>)
method one can get the name of the triggered event.</para><para>The above event handler prints the name
of the event and the content of the <varname>FILE</varname> variable.</para>
</section>
<section>
<title>Variants</title>
<para>Two BitBake features exist to facilitate the creation of multiple buildable incarnations from a single recipe file.</para>
<para>The first is <varname>BBCLASSEXTEND</varname>. This variable is a space separated list of classes used to "extend" the recipe for each variant. As an example, setting <screen>BBCLASSEXTEND = "native"</screen> results in a second incarnation of the current recipe being available. This second incarnation will have the "native" class inherited.</para>
<para>The second feature is <varname>BBVERSIONS</varname>. This variable allows a single recipe to build multiple versions of a project from a single recipe file, and allows you to specify conditional metadata (using the <varname>OVERRIDES</varname> mechanism) for a single version, or an optionally named range of versions:</para>
<para><screen>BBVERSIONS = "1.0 2.0 git"
SRC_URI_git = "git://someurl/somepath.git"</screen></para>
<para><screen>BBVERSIONS = "1.0.[0-6]:1.0.0+ \
1.0.[7-9]:1.0.7+"
SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;patch=1"</screen></para>
<para>Note that the name of the range will default to the original version of the recipe, so given OE, a recipe file of foo_1.0.0+.bb will default the name of its versions to 1.0.0+. This is useful, as the range name is not only placed into overrides; it's also made available for the metadata to use in the form of the <varname>BPV</varname> variable, for use in file:// search paths (<varname>FILESPATH</varname>).</para>
</section>
</section>
<section>
<title>Variable interaction: Worked Examples</title>
<para>Despite the documentation of the different forms of variable definition above, it can be hard to work out what happens when variable operators are combined. This section documents some common questions people have regarding the way variables interact.</para>
<section>
<title>Override and append ordering</title>
<para>There is often confusion about which order overrides and the various append operators take effect.</para>
<para><screen><varname>OVERRIDES</varname> = "foo"
<varname>A_foo_append</varname> = "X"</screen></para>
<para>In this case, X is unconditionally appended to the variable <varname>A_foo</varname>. Since foo is an override, A_foo would then replace <varname>A</varname>.</para>
<para><screen><varname>OVERRIDES</varname> = "foo"
<varname>A</varname> = "X"
<varname>A_append_foo</varname> = "Y"</screen></para>
<para>In this case, Y is appended to the variable <varname>A</varname> only when foo is in OVERRIDES, so the value of <varname>A</varname> would become XY (NB: no spaces are appended).</para>
<para><screen><varname>OVERRIDES</varname> = "foo"
<varname>A_foo_append</varname> = "X"
<varname>A_foo_append</varname> += "Y"</screen></para>
<para>This behaves as per the first case above, but the value of <varname>A</varname> would be "X Y" instead of just "X".</para>
<para><screen><varname>A</varname> = "1"
<varname>A_append</varname> = "2"
<varname>A_append</varname> = "3"
<varname>A</varname> += "4"
<varname>A</varname> .= "5"</screen></para>
<para>Would ultimately result in <varname>A</varname> taking the value "1 4523" since the _append operator executes at the same time as the expansion of other overrides.</para>
</section>
<section>
<title>Key Expansion</title>
<para>Key expansion happens at the data store finalisation time just before overrides are expanded.</para>
<para><screen><varname>A${B}</varname> = "X"
<varname>B</varname> = "2"
<varname>A2</varname> = "Y"</screen></para>
<para>So in this case <varname>A2</varname> would take the value of "X".</para>
</section>
</section>
<section>
<title>Dependency handling</title>
<para>BitBake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifying task dependencies is therefore needed.</para>
<title>Dependency Handling</title>
<para>Bitbake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifying task dependencies is therefore needed.</para>
<section>
<title>Dependencies internal to the .bb file</title>
<para>Where the dependencies are internal to a given .bb file, the dependencies are handled by the previously detailed addtask directive.</para>
@@ -316,26 +228,26 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
<section>
<title>DEPENDS</title>
<para>DEPENDS lists build time dependencies. The 'deptask' flag for tasks is used to signify the task of each item listed in DEPENDS which must have completed before that task can be executed.</para>
<para>DEPENDS is taken to specify build time dependencies. The 'deptask' flag for tasks is used to signify the task of each DEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_configure[deptask] = "do_populate_staging"</screen></para>
<para>means the do_populate_staging task of each item in DEPENDS must have completed before do_configure can execute.</para>
</section>
<section>
<title>RDEPENDS</title>
<para>RDEPENDS lists runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each item listed in RDEPENDS which must have completed before that task can be executed.</para>
<para>RDEPENDS is taken to specify runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each RDEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_package_write[rdeptask] = "do_package"</screen></para>
<para>means the do_package task of each item in RDEPENDS must have completed before do_package_write can execute.</para>
</section>
<section>
<title>Recursive DEPENDS</title>
<para>These are specified with the 'recdeptask' flag and are used to signify the task(s) of each DEPENDS which must have completed before that task can be executed. It applies recursively, so the DEPENDS of each item in the original DEPENDS must be met and so on.</para>
<para>These are specified with the 'recdeptask' flag and are used to signify the task(s) of each DEPENDS which must have completed before that task can be executed. It applies recursively so also, the DEPENDS of each item in the original DEPENDS must be met and so on.</para>
</section>
<section>
<title>Recursive RDEPENDS</title>
<para>These are specified with the 'recrdeptask' flag and are used to signify the task(s) of each RDEPENDS which must have completed before that task can be executed. It applies recursively, so the RDEPENDS of each item in the original RDEPENDS must be met and so on. It also runs all DEPENDS first.</para>
<para>These are specified with the 'recrdeptask' flag and are used to signify the task(s) of each RDEPENDS which must have completed before that task can be executed. It applies recursively so also, the RDEPENDS of each item in the original RDEPENDS must be met and so on. It also runs all DEPENDS first too.</para>
</section>
<section>
<title>Inter task</title>
<title>Inter Task</title>
<para>The 'depends' flag for tasks is a more generic form which allows an interdependency on specific tasks, rather than specifying the data in DEPENDS or RDEPENDS.</para>
<para><screen>do_patch[depends] = "quilt-native:do_populate_staging"</screen></para>
<para>means the do_populate_staging task of the target quilt-native must have completed before the do_patch can execute.</para>
@@ -345,56 +257,33 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
<section>
<title>Parsing</title>
<section>
<title>Configuration files</title>
<para>The first kind of metadata in BitBake is configuration metadata. This metadata is global, and therefore affects <emphasis>all</emphasis> packages and tasks which are executed.</para>
<para>BitBake will first search the current working directory for an optional "conf/bblayers.conf" configuration file. This file is expected to contain a BBLAYERS variable which is a space-delimited list of 'layer' directories. For each directory in this list, a "conf/layer.conf" file will be searched for and parsed, with the LAYERDIR variable being set to the directory where the layer was found. The idea is that these files will set up BBPATH and other variables correctly for a given build directory automatically for the user.</para>
<para>BitBake will then expect to find 'conf/bitbake.conf' somewhere in the user specified <envar>BBPATH</envar>. That configuration file generally has include directives to pull in any other metadata (generally files specific to architecture, machine, <emphasis>local</emphasis> and so on).</para>
<title>Configuration Files</title>
<para>The first of the classifications of metadata in BitBake is configuration metadata. This metadata is global, and therefore affects <emphasis>all</emphasis> packages and tasks which are executed. Currently, BitBake has hardcoded knowledge of a single configuration file. It expects to find 'conf/bitbake.conf' somewhere in the user specified <envar>BBPATH</envar>. That configuration file generally has include directives to pull in any other metadata (generally files specific to architecture, machine, <emphasis>local</emphasis> and so on).</para>
<para>Only variable definitions and include directives are allowed in .conf files.</para>
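<para>A minimal conf/bblayers.conf of the kind described above might read as follows; the layer paths are placeholders:</para>
<para><screen><varname>BBLAYERS</varname> = "/home/user/poky/meta /home/user/poky/meta-custom"</screen></para>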
</section>
<section>
<title>Classes</title>
<para>BitBake classes are our rudimentary inheritance mechanism. As briefly mentioned in the metadata introduction, they're parsed when an <literal>inherit</literal> directive is encountered, and they are located in classes/ relative to the directories in <envar>BBPATH</envar>.</para>
<para>BitBake classes are our rudamentary inheritence mechanism. As briefly mentioned in the metadata introduction, they're parsed when an <literal>inherit</literal> directive is encountered, and they are located in classes/ relative to the dirs in <envar>BBPATH</envar>.</para>
</section>
<section>
<title>.bb files</title>
<para>A BitBake (.bb) file is a logical unit of tasks to be executed. Normally this is a package to be built. Inter-.bb dependencies are obeyed. The files themselves are located via the <varname>BBFILES</varname> variable, which is set to a space separated list of .bb files, and does handle wildcards.</para>
<title>.bb Files</title>
<para>A BitBake (.bb) file is a logical unit of tasks to be executed. Normally this is a package to be built. Inter-.bb dependencies are obeyed. The files themselves are located via the <varname>BBFILES</varname> variable, which is set to a space seperated list of .bb files, and does handle wildcards.</para>
</section>
</section>
</chapter>
<chapter>
<title>File download support</title>
<title>File Download support</title>
<section>
<title>Overview</title>
<para>BitBake provides support for downloading files; this procedure is called fetching and it is handled by the fetch and fetch2 modules. At this point the original fetch code is considered to be replaced by fetch2, and this manual relates only to the fetch2 codebase.</para>
<para>The SRC_URI is normally used to tell BitBake which files to fetch. The next sections will describe the available fetchers and their options. Each fetcher honors a set of variables and per-URI parameters separated by a <quote>;</quote> consisting of a key and a value. The semantics of the variables and parameters are defined by the fetcher. BitBake tries to have consistent semantics between the different fetchers.
<para>BitBake provides support to download files; this procedure is called fetching. The SRC_URI is normally used to indicate to BitBake which files to fetch. The next sections will describe the available fetchers and the options they have. Each Fetcher honors a set of Variables and
per-URI parameters separated by a <quote>;</quote> consisting of a key and a value. The semantics of the Variables and Parameters are defined by the Fetcher. BitBake tries to have consistent semantics between the different Fetchers.
</para>
<para>The overall fetch process is that first, fetches are attempted from PREMIRRORS. If those don't work, the original SRC_URI is attempted, and if that fails, BitBake will fall back to MIRRORS. Cross URLs are supported, so it's possible to mirror a git repository on an http server as a tarball, for example. Some example commonly used mirror definitions are:</para>
<para><screen><varname>PREMIRRORS</varname> ?= "\
bzr://.*/.* http://somemirror.org/sources/ \n \
cvs://.*/.* http://somemirror.org/sources/ \n \
git://.*/.* http://somemirror.org/sources/ \n \
hg://.*/.* http://somemirror.org/sources/ \n \
osc://.*/.* http://somemirror.org/sources/ \n \
p4://.*/.* http://somemirror.org/sources/ \n \
svk://.*/.* http://somemirror.org/sources/ \n \
svn://.*/.* http://somemirror.org/sources/ \n"
<varname>MIRRORS</varname> =+ "\
ftp://.*/.* http://somemirror.org/sources/ \n \
http://.*/.* http://somemirror.org/sources/ \n \
https://.*/.* http://somemirror.org/sources/ \n"</screen></para>
<para>Non-local downloaded output is placed into the directory specified by the <varname>DL_DIR</varname>. For non-local archive downloads the code can verify sha256 and md5 checksums for the download to ensure the file has been downloaded correctly. These may be specified either in the form <varname>SRC_URI[md5sum]</varname> for the md5 checksum and <varname>SRC_URI[sha256sum]</varname> for the sha256 checksum, or as parameters on the SRC_URI such as SRC_URI="http://example.com/foobar.tar.bz2;md5sum=4a8e0f237e961fd7785d19d07fdb994d". If <varname>BB_STRICT_CHECKSUM</varname> is set, any download without a checksum will trigger an error message. In cases where multiple files are listed in SRC_URI, the name parameter is used to assign names to the urls and these are then specified in the checksums in the form SRC_URI[name.sha256sum].</para>
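<para>As a purely illustrative sketch of the named-checksum form (the URLs are placeholders, and the sums stand in for the real values of the files):</para>
<para><screen><varname>SRC_URI</varname> = "http://example.com/foo.tar.bz2;name=foo http://example.com/bar.tar.bz2;name=bar"
SRC_URI[foo.md5sum] = "4a8e0f237e961fd7785d19d07fdb994d"
SRC_URI[bar.sha256sum] = "<replaceable>sha256 of bar.tar.bz2</replaceable>"</screen></para>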
</section>
<section>
<title>Local file fetcher</title>
<para>The URN for the local file fetcher is <emphasis>file</emphasis>. The filename can be either absolute or relative. If the filename is relative, <varname>FILESPATH</varname> and failing that <varname>FILESDIR</varname> will be used to find the appropriate relative file. The metadata usually extend these variables to include variations of the values in <varname>OVERRIDES</varname>. Single files and complete directories can be specified.
<title>Local File Fetcher</title>
<para>The URN for the Local File Fetcher is <emphasis>file</emphasis>. The filename can be either absolute or relative. If the filename is relative <varname>FILESPATH</varname> and <varname>FILESDIR</varname> will be used to find the appropriate relative file depending on the <varname>OVERRIDES</varname>. Single files and complete directories can be specified.
<screen><varname>SRC_URI</varname>= "file://relativefile.patch"
<varname>SRC_URI</varname>= "file://relativefile.patch;this=ignored"
<varname>SRC_URI</varname>= "file:///Users/ich/very_important_software"
@@ -403,11 +292,10 @@ https://.*/.* http://somemirror.org/sources/ \n"</screen></para>
</section>
<section>
<title>CVS fetcher</title>
<para>The URN for the CVS fetcher is <emphasis>cvs</emphasis>. This fetcher honors the variables <varname>CVSDIR</varname>, <varname>SRCDATE</varname>, <varname>FETCHCOMMAND_cvs</varname>, <varname>UPDATECOMMAND_cvs</varname>. <varname>DL_DIR</varname> specifies where a temporary checkout is saved. <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build). <varname>FETCHCOMMAND</varname> and <varname>UPDATECOMMAND</varname> specify which executables to use for the CVS checkout or update.
<title>CVS File Fetcher</title>
<para>The URN for the CVS Fetcher is <emphasis>cvs</emphasis>. This Fetcher honors the variables <varname>DL_DIR</varname>, <varname>SRCDATE</varname>, <varname>FETCHCOMMAND_cvs</varname>, <varname>UPDATECOMMAND_cvs</varname>. <varname>DL_DIR</varname> specifies where a temporary checkout is saved, <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build), <varname>FETCHCOMMAND</varname> and <varname>UPDATECOMMAND</varname> specify which executables should be used when doing the CVS checkout or update.
</para>
<para>The supported parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname> and <varname>scmdata</varname>. The <varname>module</varname> specifies which module to check out, the <varname>tag</varname> describes which CVS TAG should be used for the checkout. By default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration to checkout a specific date. The special value of "now" will cause the checkout to be updated on every build.<varname>method</varname> is by default <emphasis>pserver</emphasis>. If <emphasis>ext</emphasis> is used the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally, <varname>localdir</varname> is used to checkout into a special directory relative to <varname>CVSDIR</varname>.
<para>The supported Parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname>. The <varname>module</varname> specifies which module to check out, the <varname>tag</varname> describes which CVS TAG should be used for the checkout; by default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration to checkout a specific date. The special value of "now" will cause the checkout to be updated on every build. <varname>method</varname> is by default <emphasis>pserver</emphasis>; if <emphasis>ext</emphasis> is used, the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally, <varname>localdir</varname> is used to checkout into a special directory relative to <varname>CVSDIR</varname>.
<screen><varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
<varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
</screen>
@@ -415,22 +303,32 @@ https://.*/.* http://somemirror.org/sources/ \n"</screen></para>
</section>
<section>
<title>HTTP/FTP fetcher</title>
<para>The URNs for the HTTP/FTP fetcher are <emphasis>http</emphasis>, <emphasis>https</emphasis> and <emphasis>ftp</emphasis>. This fetcher honors the variables <varname>FETCHCOMMAND_wget</varname>. <varname>FETCHCOMMAND</varname> contains the command used for fetching. <quote>${URI}</quote> and <quote>${FILES}</quote> will be replaced by the URI and basename of the file to be fetched.
<title>HTTP/FTP Fetcher</title>
<para>The URNs for the HTTP/FTP fetcher are <emphasis>http</emphasis>, <emphasis>https</emphasis> and <emphasis>ftp</emphasis>. This Fetcher honors the variables <varname>DL_DIR</varname>, <varname>FETCHCOMMAND_wget</varname>, <varname>PREMIRRORS</varname>, <varname>MIRRORS</varname>. The <varname>DL_DIR</varname> defines where to store the fetched file, and <varname>FETCHCOMMAND</varname> contains the command used for fetching. <quote>${URI}</quote> and <quote>${FILES}</quote> will be replaced by the URI and basename of the file to be fetched. <varname>PREMIRRORS</varname>
will be tried first when fetching a file; if that fails, the actual file will be tried, and finally all <varname>MIRRORS</varname> will be tried.
</para>
<para><screen><varname>SRC_URI</varname> = "http://oe.handhelds.org/not_there.aac"
<varname>SRC_URI</varname> = "ftp://oe.handhelds.org/not_there_as_well.aac"
<varname>SRC_URI</varname> = "ftp://you@oe.handheld.sorg/home/you/secret.plan"
<para>The only supported Parameter is <varname>md5sum</varname>. After a fetch the <varname>md5sum</varname> of the file will be calculated and the two sums will be compared.
</para>
<para><screen><varname>SRC_URI</varname> = "http://oe.handhelds.org/not_there.aac;md5sum=12343"
<varname>SRC_URI</varname> = "ftp://oe.handhelds.org/not_there_as_well.aac;md5sum=1234"
<varname>SRC_URI</varname> = "ftp://you@oe.handheld.sorg/home/you/secret.plan;md5sum=1234"
</screen></para>
</section>
<section>
<title>SVN fetcher</title>
<para>The URN for the SVN fetcher is <emphasis>svn</emphasis>.
<title>SVK Fetcher</title>
<para>
<emphasis>Currently NOT supported</emphasis>
</para>
<para>This fetcher honors the variables <varname>FETCHCOMMAND_svn</varname>, <varname>SVNDIR</varname>, <varname>SRCREV</varname>. <varname>FETCHCOMMAND</varname> contains the subversion command. <varname>SRCREV</varname> specifies which revision to use when doing the fetching.
</section>
<section>
<title>SVN fetcher</title>
<para>The URN for the SVN fetcher is <emphasis>svn</emphasis>.
</para>
<para>This fetcher honors the variables <varname>FETCHCOMMAND_svn</varname>, <varname>DL_DIR</varname> and <varname>SRCDATE</varname>. <varname>FETCHCOMMAND</varname> contains the subversion command, <varname>DL_DIR</varname> is the directory where tarballs will be saved, and <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value <quote>now</quote> will cause the checkout to be updated on every build).
</para>
<para>The supported parameters are <varname>proto</varname>, <varname>rev</varname> and <varname>scmdata</varname>. <varname>proto</varname> is the Subversion protocol, <varname>rev</varname> is the Subversion revision. If <varname>scmdata</varname> is set to <quote>keep</quote>, the <quote>.svn</quote> directories will be available during compile-time.
</para>
<para><screen><varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn;module=vip;proto=http;rev=667"
<varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn/;module=opie;proto=svn+ssh;date=20060126"
</section>
<section>
<title>GIT fetcher</title>
<para>The URN for the GIT fetcher is <emphasis>git</emphasis>.
</para>
<para>The variables <varname>DL_DIR</varname> and <varname>GITDIR</varname> are used: <varname>DL_DIR</varname> stores the checked-out version, while <varname>GITDIR</varname> is the base directory where the git tree is cloned to.
</para>
<para>The parameters are <emphasis>tag</emphasis>, <emphasis>protocol</emphasis> and <emphasis>scmdata</emphasis>. <emphasis>tag</emphasis> is a Git tag, the default is <quote>master</quote>. <emphasis>protocol</emphasis> is the Git protocol to use; it defaults to <quote>git</quote> if a hostname is set, otherwise to <quote>file</quote>. If <emphasis>scmdata</emphasis> is set to <quote>keep</quote>, the <quote>.git</quote> directory will be available during compile-time.
</para>
<para><screen><varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
<varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
<chapter>
<title>The BitBake command</title>
<section>
<title>bbread</title>
<para>bbread is a command for displaying BitBake metadata. When run with no arguments, it has the core parse 'conf/bitbake.conf' (as located via <varname>BBPATH</varname>) and displays the result. If you supply a file on the command line, such as a .bb, it is parsed afterwards, using the aforementioned configuration metadata.</para>
<para><emphasis>NOTE: the standalone bbread command has been removed. Use bitbake -e instead of bbread.
</emphasis></para>
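<para>For example, the global environment can be dumped, or a single recipe's environment filtered for one variable (<quote>blah</quote> is a placeholder recipe name):</para>
<para><screen><prompt>$ </prompt>bitbake -e
<prompt>$ </prompt>bitbake -e blah | grep ^PV=</screen></para>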
</section>
<section>
<title>bitbake</title>
<section>
<title>Introduction</title>
<para>bitbake is the primary command in the system. It facilitates executing tasks in a single .bb file, or executing a given task on a set of multiple .bb files, accounting for interdependencies amongst them.</para>
</section>
<section>
<title>Usage and syntax</title>
<para>
<screen><prompt>$ </prompt>bitbake --help
usage: bitbake [options] [package ...]
Executes the specified task (default is 'build') for a given set of BitBake files.
It expects that BBFILES is defined, which is a space separated list of files to
be executed. BBFILES does support wildcards.
Default BBFILES are the .bb files in the current directory.
...
it depends on, i.e. 'compile' does not implicitly call
stage for the dependencies (IOW: use only if you know
what you are doing). Depending on the base.bbclass a
listtasks task is defined and will show available
tasks
-r FILE, --read=FILE read the specified file before bitbake.conf
-v, --verbose output more chit-chat to the terminal
...
than once.
-n, --dry-run don't execute, just go through the motions
-p, --parse-only quit after parsing the BB files (developers only)
-d, --disable-psyco disable using the psyco just-in-time compiler (not
recommended)
-s, --show-versions show current and preferred versions of all packages
-e, --environment show the global or per-package environment (this is
what used to be bbread)
...
the graph more appealing
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
-P, --profile profile the command and print a report
</screen>
</para>
<para>
<example>
<title>Executing a task against a single .bb</title>
<para>Executing tasks for a single file is relatively simple. You specify the file in question, and BitBake parses it and executes the specified task (or <quote>build</quote> by default). It obeys intertask dependencies when doing so.</para>
<para><quote>clean</quote> task:</para>
<para><screen><prompt>$ </prompt>bitbake -b blah_1.0.bb -c clean</screen></para>
<para><quote>build</quote> task:</para>
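<para><screen><prompt>$ </prompt>bitbake -b blah_1.0.bb</screen></para>
<para>Assuming base.bbclass defines the <quote>listtasks</quote> task mentioned in the help output above, the available tasks can also be listed:</para>
<para><screen><prompt>$ </prompt>bitbake -b blah_1.0.bb -c listtasks</screen></para>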
<para>
<example>
<title>Executing tasks against a set of .bb files</title>
<para>There are a number of additional complexities introduced when one wants to manage multiple .bb files. Clearly there needs to be a way to tell BitBake what files are available, and of those, which we want to execute at this time. There also needs to be a way for each .bb to express its dependencies, both for build time and runtime. There must be a way for the user to express their preferences when multiple .bb's provide the same functionality, or when there are multiple versions of a .bb.</para>
<para>The next section, Metadata, outlines how to specify such things.</para>
<para>Note that the bitbake command, when not using --buildfile, accepts a <varname>PROVIDER</varname>, not a filename or anything else. By default, a .bb generally PROVIDES its packagename, packagename-version, and packagename-version-revision.</para>
<screen><prompt>$ </prompt>bitbake blah</screen>
<screen><prompt>$ </prompt>bitbake blah-1.0</screen>
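<screen><prompt>$ </prompt>bitbake blah-1.0-r0</screen>
</example>
</para>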
<example>
<title>Generating dependency graphs</title>
<para>BitBake is able to generate dependency graphs using the dot syntax. These graphs can be converted
to images using the <application>dot</application> application from <ulink url="http://www.graphviz.org">Graphviz</ulink>.
Two files will be written into the current working directory: <emphasis>depends.dot</emphasis>, containing dependency information at the package level, and <emphasis>task-depends.dot</emphasis>, containing a breakdown of the dependencies at the task level. Common dependencies can be omitted from the graph with <prompt>-I depend</prompt>, which can lead to more readable graphs; this way, <varname>DEPENDS</varname> from inherited classes such as base.bbclass can be removed from the graph.</para>
<screen><prompt>$ </prompt>bitbake -g blah</screen>
<screen><prompt>$ </prompt>bitbake -g -I virtual/whatever -I bloom blah</screen>
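<para>The resulting dot files can then be rendered to an image, for example:</para>
<screen><prompt>$ </prompt>dot -Tpng -o depends.png depends.dot</screen>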
</example>
</section>
<section>
<title>Special variables</title>
<para>Certain variables affect BitBake operation:</para>
<section>
<title><varname>BB_NUMBER_THREADS</varname></title>
<para> The number of threads BitBake should run at once (default: 1).</para>
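<para>For example, to let BitBake run four tasks in parallel, a .conf file could set:</para>
<para><screen>BB_NUMBER_THREADS = "4"</screen></para>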
</section>
</section>
<section>
<title>Metadata</title>
<para>As you may have seen in the usage information, or in the information about .bb files, the <varname>BBFILES</varname> variable is how the BitBake tool locates its files. This variable is a space separated list of files that are available, and supports wildcards.
<example>
<title>Setting BBFILES</title>
<programlisting><varname>BBFILES</varname> = "/path/to/bbfiles/*.bb"</programlisting>
</example></para>
<para>With regard to dependencies, it expects the .bb to define a <varname>DEPENDS</varname> variable, which contains a space separated list of <quote>package names</quote>, which themselves are the <varname>PN</varname> variable. The <varname>PN</varname> variable is, in general, set to a component of the .bb filename by default.</para>
<example>
<title>Depending on another .bb</title>
<para>a.bb:
<screen>DEPENDS += "package-b"</screen>
</para>
</example>
<example>
<title>Using PROVIDES</title>
<para>This example shows the usage of the <varname>PROVIDES</varname> variable, which allows a given .bb to specify what functionality it provides.</para>
<para>package1.bb:
<screen>PROVIDES += "virtual/package"</screen>
</para>
<para>package3.bb:
<screen>PROVIDES += "virtual/package"</screen>
</para>
<para>As you can see, we have two different .bb's that provide the same functionality (virtual/package). Clearly, there needs to be a way for the person running BitBake to control which of those providers gets used. There is, indeed, such a way.</para>
<para>The following would go into a .conf file, to select package1:
<screen>PREFERRED_PROVIDER_virtual/package = "package1"</screen>
</para>
</example>
<example>
<title>Specifying version preference</title>
<para>When there are multiple <quote>versions</quote> of a given package, BitBake defaults to selecting the most recent version, unless otherwise specified. If the .bb in question has a <varname>DEFAULT_PREFERENCE</varname> set lower than the other .bb's (default is 0), then it will not be selected. This allows the person or persons maintaining the repository of .bb files to specify their preference for the default selected version. In addition, the user can specify their preferred version.</para>
<para>If the first .bb is named <filename>a_1.1.bb</filename>, then the <varname>PN</varname> variable will be set to <quote>a</quote>, and the <varname>PV</varname> variable will be set to 1.1.</para>
<para>If we then have an <filename>a_1.2.bb</filename>, BitBake will choose 1.2 by default. However, if we define the following variable in a .conf that BitBake parses, we can change that.
<screen>PREFERRED_VERSION_a = "1.1"</screen>
</para>
</example>
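<example>
<title>Lowering a version's default preference</title>
<para>Building on the description above, a .bb that should not be selected by default can set a preference below the default of 0, for example:</para>
<screen>DEFAULT_PREFERENCE = "-1"</screen>
</example>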
<screen>BBFILE_PRIORITY_upstream = "5"
BBFILE_PRIORITY_local = "10"</screen>
</example>
</section>
</section>
</chapter>
</book>


@@ -3,7 +3,7 @@
#
# This is a copy on write dictionary and set which abuses classes to try and be nice and fast.
#
# Copyright (C) 2006 Tim Amsell
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -18,31 +18,31 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
#Please Note:
# Be careful when using mutable types (i.e. Dicts and Lists) - operations involving these are SLOW.
# Assign a file to __warn__ to get warnings about slow operations.
#
from __future__ import print_function
from inspect import getmro
import copy
import types
ImmutableTypes = (
types.NoneType,
bool,
complex,
float,
int,
long,
tuple,
frozenset,
basestring
)
import types, sets
types.ImmutableTypes = tuple([ \
types.BooleanType, \
types.ComplexType, \
types.FloatType, \
types.IntType, \
types.LongType, \
types.NoneType, \
types.TupleType, \
sets.ImmutableSet] + \
list(types.StringTypes))
MUTABLE = "__mutable__"
class COWMeta(type):
pass
class COWDictMeta(COWMeta):
__warn__ = False
__hasmutable__ = False
@@ -61,12 +61,12 @@ class COWDictMeta(COWMeta):
__call__ = cow
def __setitem__(cls, key, value):
if not isinstance(value, ImmutableTypes):
if not isinstance(value, types.ImmutableTypes):
if not isinstance(value, COWMeta):
cls.__hasmutable__ = True
key += MUTABLE
setattr(cls, key, value)
def __getmutable__(cls, key, readonly=False):
nkey = key + MUTABLE
try:
@@ -79,10 +79,10 @@ class COWDictMeta(COWMeta):
return value
if not cls.__warn__ is False and not isinstance(value, COWMeta):
print("Warning: Doing a copy because %s is a mutable type." % key, file=cls.__warn__)
print >> cls.__warn__, "Warning: Doing a copy because %s is a mutable type." % key
try:
value = value.copy()
except AttributeError as e:
except AttributeError, e:
value = copy.copy(value)
setattr(cls, nkey, value)
return value
@@ -100,13 +100,13 @@ class COWDictMeta(COWMeta):
value = getattr(cls, key)
except AttributeError:
value = cls.__getmutable__(key, readonly)
# This is for values which have been deleted
if value is cls.__marker__:
raise AttributeError("key %s does not exist." % key)
return value
except AttributeError as e:
except AttributeError, e:
if not default is cls.__getmarker__:
return default
@@ -120,9 +120,6 @@ class COWDictMeta(COWMeta):
key += MUTABLE
delattr(cls, key)
def __contains__(cls, key):
return cls.has_key(key)
def has_key(cls, key):
value = cls.__getreadonly__(key, cls.__marker__)
if value is cls.__marker__:
@@ -132,7 +129,7 @@ class COWDictMeta(COWMeta):
def iter(cls, type, readonly=False):
for key in dir(cls):
if key.startswith("__"):
continue
if key.endswith(MUTABLE):
key = key[:-len(MUTABLE)]
@@ -158,11 +155,11 @@ class COWDictMeta(COWMeta):
return cls.iter("keys")
def itervalues(cls, readonly=False):
if not cls.__warn__ is False and cls.__hasmutable__ and readonly is False:
print("Warning: If you arn't going to change any of the values call with True.", file=cls.__warn__)
print >> cls.__warn__, "Warning: If you arn't going to change any of the values call with True."
return cls.iter("values", readonly)
def iteritems(cls, readonly=False):
if not cls.__warn__ is False and cls.__hasmutable__ and readonly is False:
print("Warning: If you arn't going to change any of the values call with True.", file=cls.__warn__)
print >> cls.__warn__, "Warning: If you arn't going to change any of the values call with True."
return cls.iter("items", readonly)
class COWSetMeta(COWDictMeta):
@@ -181,13 +178,13 @@ class COWSetMeta(COWDictMeta):
def remove(cls, value):
COWDictMeta.__delitem__(cls, repr(hash(value)))
def __in__(cls, value):
return COWDictMeta.has_key(repr(hash(value)))
def iterkeys(cls):
raise TypeError("sets don't have keys")
def iteritems(cls):
raise TypeError("sets don't have 'items'")
@@ -204,120 +201,120 @@ if __name__ == "__main__":
import sys
COWDictBase.__warn__ = sys.stderr
a = COWDictBase()
print("a", a)
print "a", a
a['a'] = 'a'
a['b'] = 'b'
a['dict'] = {}
b = a.copy()
print("b", b)
print "b", b
b['c'] = 'b'
print()
print
print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems():
print(x)
print()
print x
print
b['dict']['a'] = 'b'
b['a'] = 'c'
print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems():
print(x)
print()
print x
print
try:
b['dict2']
except KeyError as e:
print("Okay!")
except KeyError, e:
print "Okay!"
a['set'] = COWSetBase()
a['set'].add("o1")
a['set'].add("o1")
a['set'].add("o2")
print("a", a)
print "a", a
for x in a['set'].itervalues():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b['set'].itervalues():
print(x)
print()
print x
print
b['set'].add('o3')
print("a", a)
print "a", a
for x in a['set'].itervalues():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b['set'].itervalues():
print(x)
print()
print x
print
a['set2'] = set()
a['set2'].add("o1")
a['set2'].add("o1")
a['set2'].add("o2")
print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems(readonly=True):
print(x)
print()
print x
print
del b['b']
try:
print(b['b'])
print b['b']
except KeyError:
print("Yay! deleted key raises error")
print "Yay! deleted key raises error"
if b.has_key('b'):
print("Boo!")
print "Boo!"
else:
print("Yay - has_key with delete works!")
print("a", a)
print "Yay - has_key with delete works!"
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems(readonly=True):
print(x)
print()
print x
print
b.__revertitem__('b')
print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems(readonly=True):
print(x)
print()
print x
print
b.__revertitem__('dict')
print("a", a)
print "a", a
for x in a.iteritems():
print(x)
print("--")
print("b", b)
print x
print "--"
print "b", b
for x in b.iteritems(readonly=True):
print(x)
print()
print x
print


@@ -25,54 +25,25 @@
#
#Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import sys
import logging
import shlex
import bb
import bb.msg
import bb.process
from contextlib import nested
from bb import data, event, utils
bblogger = logging.getLogger('BitBake')
logger = logging.getLogger('BitBake.Build')
NULL = open(os.devnull, 'r+')
# When we execute a python function we'd like certain things
# in all namespaces, hence we add them to __builtins__
# If we do not do this and use the exec globals, they will
# not be available to subfunctions.
__builtins__['bb'] = bb
__builtins__['os'] = os
from bb import data, fetch, event, mkdirhier, utils
import bb, os
# events
class FuncFailed(Exception):
def __init__(self, name = None, logfile = None):
self.logfile = logfile
self.name = name
if name:
self.msg = 'Function failed: %s' % name
else:
self.msg = "Function failed"
"""Executed function failed"""
def __str__(self):
if self.logfile and os.path.exists(self.logfile):
msg = ("%s (see %s for further information)" %
(self.msg, self.logfile))
else:
msg = self.msg
return msg
class EventException(Exception):
"""Exception which is associated with an Event."""
def __init__(self, msg, event):
self.args = msg, event
class TaskBase(event.Event):
"""Base class for task events"""
def __init__(self, t, d ):
self._task = t
self._package = d.getVar("PF", 1)
event.Event.__init__(self)
self._message = "package %s: task %s: %s" % (d.getVar("PF", 1), t, bb.event.getName(self)[4:])
event.Event.__init__(self, d)
def getTask(self):
return self._task
@@ -91,336 +62,256 @@ class TaskSucceeded(TaskBase):
class TaskFailed(TaskBase):
"""Task execution failed"""
def __init__(self, task, logfile, metadata, errprinted = False):
self.logfile = logfile
self.errprinted = errprinted
super(TaskFailed, self).__init__(task, metadata)
class TaskFailedSilent(TaskBase):
"""Task execution failed (silently)"""
def __init__(self, task, logfile, metadata):
self.logfile = logfile
super(TaskFailedSilent, self).__init__(task, metadata)
class TaskInvalid(TaskBase):
def __init__(self, task, metadata):
super(TaskInvalid, self).__init__(task, metadata)
self._message = "No such task '%s'" % task
class LogTee(object):
def __init__(self, logger, outfile):
self.outfile = outfile
self.logger = logger
self.name = self.outfile.name
def write(self, string):
self.logger.plain(string)
self.outfile.write(string)
def __enter__(self):
self.outfile.__enter__()
return self
def __exit__(self, *excinfo):
self.outfile.__exit__(*excinfo)
def __repr__(self):
return '<LogTee {0}>'.format(self.name)
class InvalidTask(TaskBase):
"""Invalid Task"""
# functions
def exec_func(func, d, dirs = None):
"""Execute an BB 'function'"""
body = data.getVar(func, d)
if not body:
if body is None:
logger.warn("Function %s doesn't exist", func)
return
flags = data.getVarFlags(func, d)
cleandirs = flags.get('cleandirs')
if cleandirs:
for cdir in data.expand(cleandirs, d).split():
bb.utils.remove(cdir, True)
for item in ['deps', 'check', 'interactive', 'python', 'cleandirs', 'dirs', 'lockfiles', 'fakeroot']:
if not item in flags:
flags[item] = None
if dirs is None:
dirs = flags.get('dirs')
if dirs:
dirs = data.expand(dirs, d).split()
ispython = flags['python']
cleandirs = (data.expand(flags['cleandirs'], d) or "").split()
for cdir in cleandirs:
os.system("rm -rf %s" % cdir)
if dirs:
for adir in dirs:
bb.utils.mkdirhier(adir)
dirs = data.expand(dirs, d)
else:
dirs = (data.expand(flags['dirs'], d) or "").split()
for adir in dirs:
mkdirhier(adir)
if len(dirs) > 0:
adir = dirs[-1]
else:
adir = data.getVar('B', d, 1)
bb.utils.mkdirhier(adir)
ispython = flags.get('python')
lockflag = flags.get('lockfiles')
if lockflag:
lockfiles = [data.expand(f, d) for f in lockflag.split()]
else:
lockfiles = None
tempdir = data.getVar('T', d, 1)
bb.utils.mkdirhier(tempdir)
runfile = os.path.join(tempdir, 'run.{0}.{1}'.format(func, os.getpid()))
with bb.utils.fileslocked(lockfiles):
if ispython:
exec_func_python(func, d, runfile, cwd=adir)
else:
exec_func_shell(func, d, runfile, cwd=adir)
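# Template wrapping the body of a python task in a real function definition,
# so it can be compiled and executed with the datastore 'd' in scope.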
_functionfmt = """
def {function}(d):
{body}
{function}(d)
"""
logformatter = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
def exec_func_python(func, d, runfile, cwd=None):
"""Execute a python BB 'function'"""
bbfile = d.getVar('FILE', True)
code = _functionfmt.format(function=func, body=d.getVar(func, True))
bb.utils.mkdirhier(os.path.dirname(runfile))
with open(runfile, 'w') as script:
script.write(code)
if cwd:
try:
olddir = os.getcwd()
except OSError:
olddir = None
os.chdir(cwd)
try:
comp = utils.better_compile(code, func, bbfile)
utils.better_exec(comp, {"d": d}, code, bbfile)
except:
if sys.exc_info()[0] in (bb.parse.SkipPackage, bb.build.FuncFailed):
raise
prevdir = os.getcwd()
except OSError:
prevdir = data.getVar('TOPDIR', d, True)
if adir and os.access(adir, os.F_OK):
os.chdir(adir)
raise FuncFailed(func, None)
finally:
if cwd and olddir:
try:
os.chdir(olddir)
except OSError:
pass
locks = []
lockfiles = (data.expand(flags['lockfiles'], d) or "").split()
for lock in lockfiles:
locks.append(bb.utils.lockfile(lock))
def exec_func_shell(function, d, runfile, cwd=None):
"""Execute a shell function from the metadata
if flags['python']:
exec_func_python(func, d)
else:
exec_func_shell(func, d, flags)
for lock in locks:
bb.utils.unlockfile(lock)
if os.path.exists(prevdir):
os.chdir(prevdir)
def exec_func_python(func, d):
"""Execute a python BB 'function'"""
import re, os
bbfile = bb.data.getVar('FILE', d, 1)
tmp = "def " + func + "():\n%s" % data.getVar(func, d)
tmp += '\n' + func + '()'
comp = utils.better_compile(tmp, func, bbfile)
prevdir = os.getcwd()
g = {} # globals
g['bb'] = bb
g['os'] = os
g['d'] = d
utils.better_exec(comp, g, tmp, bbfile)
if os.path.exists(prevdir):
os.chdir(prevdir)
def exec_func_shell(func, d, flags):
"""Execute a shell BB 'function' Returns true if execution was successful.
For this, it creates a bash shell script in the tmp dectory, writes the local
data into it and finally executes. The output of the shell will end in a log file and stdout.
Note on directory behavior. The 'dirs' varflag should contain a list
of the directories you need created prior to execution. The last
item in the list is where we will chdir/cd to.
"""
import sys
# Don't let the emitted shell script override PWD
d.delVarFlag('PWD', 'export')
deps = flags['deps']
check = flags['check']
interact = flags['interactive']
if check in globals():
if globals()[check](func, deps):
return
with open(runfile, 'w') as script:
script.write('#!/bin/sh -e\n')
data.emit_func(function, script, d)
global logfile
t = data.getVar('T', d, 1)
if not t:
return 0
mkdirhier(t)
logfile = "%s/log.%s.%s" % (t, func, str(os.getpid()))
runfile = "%s/run.%s.%s" % (t, func, str(os.getpid()))
if bb.msg.loggerVerboseLogs:
script.write("set -x\n")
if cwd:
script.write("cd %s\n" % cwd)
script.write("%s\n" % function)
f = open(runfile, "w")
f.write("#!/bin/sh -e\n")
if bb.msg.debug_level['default'] > 0: f.write("set -x\n")
data.emit_env(f, d)
f.write("cd %s\n" % os.getcwd())
if func: f.write("%s\n" % func)
f.close()
os.chmod(runfile, 0775)
if not func:
bb.msg.error(bb.msg.domain.Build, "Function not specified")
raise FuncFailed()
cmd = runfile
if d.getVarFlag(function, 'fakeroot'):
fakerootcmd = d.getVar('FAKEROOT', True)
if fakerootcmd:
cmd = [fakerootcmd, runfile]
if bb.msg.loggerDefaultVerbose:
logfile = LogTee(logger, sys.stdout)
else:
logfile = sys.stdout
try:
bb.process.run(cmd, shell=False, stdin=NULL, log=logfile)
except bb.process.CmdError:
logfn = d.getVar('BB_LOGFILE', True)
raise FuncFailed(function, logfn)
def _task_data(fn, task, d):
localdata = data.createCopy(d)
localdata.setVar('BB_FILENAME', fn)
localdata.setVar('BB_CURRENTTASK', task[3:])
localdata.setVar('OVERRIDES', 'task-%s:%s' %
(task[3:], d.getVar('OVERRIDES', False)))
localdata.finalize()
data.expandKeys(localdata)
return localdata
def _exec_task(fn, task, d, quieterr):
"""Execute a BB 'task'
Execution of a task involves a bit more setup than executing a function,
running it with its own local metadata, and with some useful variables set.
"""
if not data.getVarFlag(task, 'task', d):
event.fire(TaskInvalid(task, d), d)
logger.error("No such task: %s" % task)
return 1
logger.debug(1, "Executing task %s", task)
localdata = _task_data(fn, task, d)
tempdir = localdata.getVar('T', True)
if not tempdir:
bb.fatal("T variable not set, unable to build")
bb.utils.mkdirhier(tempdir)
loglink = os.path.join(tempdir, 'log.{0}'.format(task))
logfn = os.path.join(tempdir, 'log.{0}.{1}'.format(task, os.getpid()))
if loglink:
bb.utils.remove(loglink)
try:
os.symlink(logfn, loglink)
except OSError:
pass
prefuncs = localdata.getVarFlag(task, 'prefuncs', expand=True)
postfuncs = localdata.getVarFlag(task, 'postfuncs', expand=True)
class ErrorCheckHandler(logging.Handler):
def __init__(self):
self.triggered = False
logging.Handler.__init__(self, logging.ERROR)
def emit(self, record):
self.triggered = True
# Handle logfiles
# open logs
si = file('/dev/null', 'r')
try:
logfile = file(logfn, 'w')
except OSError:
logger.exception("Opening log file '%s'", logfn)
if bb.msg.debug_level['default'] > 0:
so = os.popen("tee \"%s\"" % logfile, "w")
else:
so = file(logfile, 'w')
except OSError, e:
bb.msg.error(bb.msg.domain.Build, "opening log file: %s" % e)
pass
# Dup the existing fds so we don't lose them
osi = [os.dup(sys.stdin.fileno()), sys.stdin.fileno()]
oso = [os.dup(sys.stdout.fileno()), sys.stdout.fileno()]
ose = [os.dup(sys.stderr.fileno()), sys.stderr.fileno()]
se = so
# Replace those fds with our own
os.dup2(si.fileno(), osi[1])
os.dup2(logfile.fileno(), oso[1])
os.dup2(logfile.fileno(), ose[1])
if not interact:
# dup the existing fds so we don't lose them
osi = [os.dup(sys.stdin.fileno()), sys.stdin.fileno()]
oso = [os.dup(sys.stdout.fileno()), sys.stdout.fileno()]
ose = [os.dup(sys.stderr.fileno()), sys.stderr.fileno()]
# Ensure python logging goes to the logfile
handler = logging.StreamHandler(logfile)
handler.setFormatter(logformatter)
# Always enable full debug output into task logfiles
handler.setLevel(logging.DEBUG - 2)
bblogger.addHandler(handler)
# replace those fds with our own
os.dup2(si.fileno(), osi[1])
os.dup2(so.fileno(), oso[1])
os.dup2(se.fileno(), ose[1])
errchk = ErrorCheckHandler()
bblogger.addHandler(errchk)
localdata.setVar('BB_LOGFILE', logfn)
event.fire(TaskStarted(task, localdata), localdata)
# execute function
prevdir = os.getcwd()
if flags['fakeroot']:
maybe_fakeroot = "PATH=\"%s\" fakeroot " % bb.data.getVar("PATH", d, 1)
else:
maybe_fakeroot = ''
lang_environment = "LC_ALL=C "
ret = os.system('%s%ssh -e %s' % (lang_environment, maybe_fakeroot, runfile))
try:
for func in (prefuncs or '').split():
exec_func(func, localdata)
exec_func(task, localdata)
for func in (postfuncs or '').split():
exec_func(func, localdata)
except FuncFailed as exc:
if quieterr:
event.fire(TaskFailedSilent(task, logfn, localdata), localdata)
else:
errprinted = errchk.triggered
logger.error(str(exc))
event.fire(TaskFailed(task, logfn, localdata, errprinted), localdata)
return 1
finally:
sys.stdout.flush()
sys.stderr.flush()
os.chdir(prevdir)
except:
pass
bblogger.removeHandler(handler)
# Restore the backup fds
if not interact:
# restore the backups
os.dup2(osi[0], osi[1])
os.dup2(oso[0], oso[1])
os.dup2(ose[0], ose[1])
# Close the backup fds
# close our logs
si.close()
so.close()
se.close()
# close the backup fds
os.close(osi[0])
os.close(oso[0])
os.close(ose[0])
si.close()
logfile.close()
if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
logger.debug(2, "Zero size logfn %s, removing", logfn)
bb.utils.remove(logfn)
bb.utils.remove(loglink)
event.fire(TaskSucceeded(task, localdata), localdata)
if ret==0:
if bb.msg.debug_level['default'] > 0:
os.remove(runfile)
# os.remove(logfile)
return
else:
bb.msg.error(bb.msg.domain.Build, "function %s failed" % func)
if data.getVar("BBINCLUDELOGS", d):
bb.msg.error(bb.msg.domain.Build, "log data follows (%s)" % logfile)
number_of_lines = data.getVar("BBINCLUDELOGS_LINES", d)
if number_of_lines:
os.system('tail -n%s %s' % (number_of_lines, logfile))
else:
f = open(logfile, "r")
while True:
l = f.readline()
if l == '':
break
l = l.rstrip()
print '| %s' % l
f.close()
else:
bb.msg.error(bb.msg.domain.Build, "see log in %s" % logfile)
raise FuncFailed( logfile )
if not localdata.getVarFlag(task, 'nostamp') and not localdata.getVarFlag(task, 'selfstamp'):
make_stamp(task, localdata)
return 0
def exec_task(task, d):
"""Execute an BB 'task'
def exec_task(fn, task, d):
try:
quieterr = False
if d.getVarFlag(task, "quieterrors") is not None:
quieterr = True
The primary difference between executing a task versus executing
a function is that a task exists in the task digraph, and therefore
has dependencies amongst other tasks."""
return _exec_task(fn, task, d, quieterr)
except Exception:
from traceback import format_exc
if not quieterr:
logger.error("Build of %s failed" % (task))
logger.error(format_exc())
failedevent = TaskFailed(task, None, d, True)
event.fire(failedevent, d)
return 1
# Check whether this is a valid task
if not data.getVarFlag(task, 'task', d):
raise EventException("No such task", InvalidTask(task, d))
def stamp_internal(taskname, d, file_name):
try:
bb.msg.debug(1, bb.msg.domain.Build, "Executing task %s" % task)
old_overrides = data.getVar('OVERRIDES', d, 0)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', 'task-%s:%s' % (task[3:], old_overrides), localdata)
data.update_data(localdata)
data.expandKeys(localdata)
event.fire(TaskStarted(task, localdata))
exec_func(task, localdata)
event.fire(TaskSucceeded(task, localdata))
except FuncFailed, reason:
bb.msg.note(1, bb.msg.domain.Build, "Task failed: %s" % reason )
failedevent = TaskFailed(task, d)
event.fire(failedevent)
raise EventException("Function failed in task: %s" % reason, failedevent)
# make stamp, or cause event and raise exception
if not data.getVarFlag(task, 'nostamp', d) and not data.getVarFlag(task, 'selfstamp', d):
make_stamp(task, d)
def extract_stamp(d, fn):
"""
Extracts the stamp format, which is either a data dictionary (fn unset)
or a dataCache entry (fn set).
"""
if fn:
return d.stamp[fn]
return data.getVar('STAMP', d, 1)
def stamp_internal(task, d, file_name):
"""
Internal stamp helper function
Removes any stamp for the given task
Makes sure the stamp directory exists
Returns the stamp path+filename
In the bitbake core, d can be a CacheData and file_name will be set.
When called in task context, d will be a data store, file_name will not be set
"""
taskflagname = taskname
if taskname.endswith("_setscene") and taskname != "do_setscene":
taskflagname = taskname.replace("_setscene", "")
if file_name:
stamp = d.stamp_base[file_name].get(taskflagname) or d.stamp[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVarFlag(taskflagname, 'stamp-base', True) or d.getVar('STAMP', True)
file_name = d.getVar('BB_FILENAME', True)
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""
stamp = extract_stamp(d, file_name)
if not stamp:
return
stamp = bb.parse.siggen.stampfile(stamp, file_name, taskname, extrainfo)
bb.utils.mkdirhier(os.path.dirname(stamp))
stamp = "%s.%s" % (stamp, task)
mkdirhier(os.path.dirname(stamp))
# Remove the file and recreate to force timestamp
# change on broken NFS filesystems
if os.access(stamp, os.F_OK):
os.remove(stamp)
return stamp
def make_stamp(task, d, file_name = None):
@@ -429,33 +320,16 @@ def make_stamp(task, d, file_name = None):
(d can be a data dict or dataCache)
"""
stamp = stamp_internal(task, d, file_name)
# Remove the file and recreate to force timestamp
# change on broken NFS filesystems
if stamp:
bb.utils.remove(stamp)
f = open(stamp, "w")
f.close()
# If we're in task context, write out a signature file for each task
# as it completes
if not task.endswith("_setscene") and task != "do_setscene" and not file_name:
file_name = d.getVar('BB_FILENAME', True)
bb.parse.siggen.dump_sigtask(file_name, task, d.getVar('STAMP', True), True)
def del_stamp(task, d, file_name = None):
"""
Removes a stamp for a given task
(d can be a data dict or dataCache)
"""
stamp = stamp_internal(task, d, file_name)
bb.utils.remove(stamp)
def stampfile(taskname, d, file_name = None):
"""
Return the stamp for a given task
(d can be a data dict or dataCache)
"""
return stamp_internal(taskname, d, file_name)
stamp_internal(task, d, file_name)
def add_tasks(tasklist, d):
task_deps = data.getVar('_task_deps', d)
@@ -473,7 +347,7 @@ def add_tasks(tasklist, d):
if not task in task_deps['tasks']:
task_deps['tasks'].append(task)
flags = data.getVarFlags(task, d)
def getTask(name):
if not name in task_deps:
task_deps[name] = {}
@@ -485,9 +359,6 @@ def add_tasks(tasklist, d):
getTask('rdeptask')
getTask('recrdeptask')
getTask('nostamp')
getTask('fakeroot')
getTask('noexec')
getTask('umask')
task_deps['parents'][task] = []
for dep in flags['deps']:
dep = data.expand(dep, d)
@@ -502,3 +373,4 @@ def remove_task(task, kill, d):
If kill is 1, also remove tasks that depend on this task."""
data.delVarFlag(task, 'task', d)


@@ -28,434 +28,150 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import logging
from collections import defaultdict
import os, re
import bb.data
import bb.utils
logger = logging.getLogger("BitBake.Cache")
from sets import Set
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")
__cache_version__ = "143"
__cache_version__ = "129"
def getCacheFile(path, filename, data_hash):
return os.path.join(path, filename + "." + data_hash)
# RecipeInfoCommon defines common data retrieving methods
# from meta data for caches. CoreRecipeInfo as well as other
# Extra RecipeInfo needs to inherit this class
class RecipeInfoCommon(object):
@classmethod
def listvar(cls, var, metadata):
return cls.getvar(var, metadata).split()
@classmethod
def intvar(cls, var, metadata):
return int(cls.getvar(var, metadata) or 0)
@classmethod
def depvar(cls, var, metadata):
return bb.utils.explode_deps(cls.getvar(var, metadata))
@classmethod
def pkgvar(cls, var, packages, metadata):
return dict((pkg, cls.depvar("%s_%s" % (var, pkg), metadata))
for pkg in packages)
@classmethod
def taskvar(cls, var, tasks, metadata):
return dict((task, cls.getvar("%s_task-%s" % (var, task), metadata))
for task in tasks)
@classmethod
def flaglist(cls, flag, varlist, metadata):
return dict((var, metadata.getVarFlag(var, flag, True))
for var in varlist)
@classmethod
def getvar(cls, var, metadata):
return metadata.getVar(var, True) or ''
class CoreRecipeInfo(RecipeInfoCommon):
__slots__ = ()
cachefile = "bb_cache.dat"
def __init__(self, filename, metadata):
self.file_depends = metadata.getVar('__depends', False)
self.timestamp = bb.parse.cached_mtime(filename)
self.variants = self.listvar('__VARIANTS', metadata) + ['']
self.appends = self.listvar('__BBAPPEND', metadata)
self.nocache = self.getvar('__BB_DONT_CACHE', metadata)
self.skipreason = self.getvar('__SKIPPED', metadata)
if self.skipreason:
self.pn = self.getvar('PN', metadata) or bb.parse.BBHandler.vars_from_file(filename,metadata)[0]
self.skipped = True
self.provides = self.depvar('PROVIDES', metadata)
self.rprovides = self.depvar('RPROVIDES', metadata)
return
self.tasks = metadata.getVar('__BBTASKS', False)
self.pn = self.getvar('PN', metadata)
self.packages = self.listvar('PACKAGES', metadata)
if not self.pn in self.packages:
self.packages.append(self.pn)
self.basetaskhashes = self.taskvar('BB_BASEHASH', self.tasks, metadata)
self.hashfilename = self.getvar('BB_HASHFILENAME', metadata)
self.file_depends = metadata.getVar('__depends', False)
self.task_deps = metadata.getVar('_task_deps', False) or {'tasks': [], 'parents': {}}
self.skipped = False
self.pe = self.getvar('PE', metadata)
self.pv = self.getvar('PV', metadata)
self.pr = self.getvar('PR', metadata)
self.defaultpref = self.intvar('DEFAULT_PREFERENCE', metadata)
self.broken = self.getvar('BROKEN', metadata)
self.not_world = self.getvar('EXCLUDE_FROM_WORLD', metadata)
self.stamp = self.getvar('STAMP', metadata)
self.stamp_base = self.flaglist('stamp-base', self.tasks, metadata)
self.stamp_extrainfo = self.flaglist('stamp-extra-info', self.tasks, metadata)
self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
self.depends = self.depvar('DEPENDS', metadata)
self.provides = self.depvar('PROVIDES', metadata)
self.rdepends = self.depvar('RDEPENDS', metadata)
self.rprovides = self.depvar('RPROVIDES', metadata)
self.rrecommends = self.depvar('RRECOMMENDS', metadata)
self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)
self.rdepends_pkg = self.pkgvar('RDEPENDS', self.packages, metadata)
self.rrecommends_pkg = self.pkgvar('RRECOMMENDS', self.packages, metadata)
self.inherits = self.getvar('__inherit_cache', metadata)
self.fakerootenv = self.getvar('FAKEROOTENV', metadata)
self.fakerootdirs = self.getvar('FAKEROOTDIRS', metadata)
self.fakerootnoenv = self.getvar('FAKEROOTNOENV', metadata)
@classmethod
def init_cacheData(cls, cachedata):
# CacheData in Core RecipeInfo Class
cachedata.task_deps = {}
cachedata.pkg_fn = {}
cachedata.pkg_pn = defaultdict(list)
cachedata.pkg_pepvpr = {}
cachedata.pkg_dp = {}
cachedata.stamp = {}
cachedata.stamp_base = {}
cachedata.stamp_extrainfo = {}
cachedata.fn_provides = {}
cachedata.pn_provides = defaultdict(list)
cachedata.all_depends = []
cachedata.deps = defaultdict(list)
cachedata.packages = defaultdict(list)
cachedata.providers = defaultdict(list)
cachedata.rproviders = defaultdict(list)
cachedata.packages_dynamic = defaultdict(list)
cachedata.rundeps = defaultdict(lambda: defaultdict(list))
cachedata.runrecs = defaultdict(lambda: defaultdict(list))
cachedata.possible_world = []
cachedata.universe_target = []
cachedata.hashfn = {}
cachedata.basetaskhash = {}
cachedata.inherits = {}
cachedata.fakerootenv = {}
cachedata.fakerootnoenv = {}
cachedata.fakerootdirs = {}
def add_cacheData(self, cachedata, fn):
cachedata.task_deps[fn] = self.task_deps
cachedata.pkg_fn[fn] = self.pn
cachedata.pkg_pn[self.pn].append(fn)
cachedata.pkg_pepvpr[fn] = (self.pe, self.pv, self.pr)
cachedata.pkg_dp[fn] = self.defaultpref
cachedata.stamp[fn] = self.stamp
cachedata.stamp_base[fn] = self.stamp_base
cachedata.stamp_extrainfo[fn] = self.stamp_extrainfo
provides = [self.pn]
for provide in self.provides:
if provide not in provides:
provides.append(provide)
cachedata.fn_provides[fn] = provides
for provide in provides:
cachedata.providers[provide].append(fn)
if provide not in cachedata.pn_provides[self.pn]:
cachedata.pn_provides[self.pn].append(provide)
for dep in self.depends:
if dep not in cachedata.deps[fn]:
cachedata.deps[fn].append(dep)
if dep not in cachedata.all_depends:
cachedata.all_depends.append(dep)
rprovides = self.rprovides
for package in self.packages:
cachedata.packages[package].append(fn)
rprovides += self.rprovides_pkg[package]
for rprovide in rprovides:
cachedata.rproviders[rprovide].append(fn)
for package in self.packages_dynamic:
cachedata.packages_dynamic[package].append(fn)
# Build hash of runtime depends and recommends
for package in self.packages + [self.pn]:
cachedata.rundeps[fn][package] = list(self.rdepends) + self.rdepends_pkg[package]
cachedata.runrecs[fn][package] = list(self.rrecommends) + self.rrecommends_pkg[package]
# Collect files we may need for possible world-dep
# calculations
if not self.broken and not self.not_world:
cachedata.possible_world.append(fn)
# create a collection of all targets for sanity checking
# tasks, such as upstream versions, license, and tools for
# task and image creation.
cachedata.universe_target.append(self.pn)
cachedata.hashfn[fn] = self.hashfilename
for task, taskhash in self.basetaskhashes.iteritems():
identifier = '%s.%s' % (fn, task)
cachedata.basetaskhash[identifier] = taskhash
cachedata.inherits[fn] = self.inherits
cachedata.fakerootenv[fn] = self.fakerootenv
cachedata.fakerootnoenv[fn] = self.fakerootnoenv
cachedata.fakerootdirs[fn] = self.fakerootdirs
class Cache(object):
class Cache:
"""
BitBake Cache implementation
"""
def __init__(self, cooker):
def __init__(self, data, data_hash, caches_array):
# Pass caches_array information into Cache Constructor
# It will be used in later for deciding whether we
# need extra cache file dump/load support
self.caches_array = caches_array
self.cachedir = data.getVar("CACHE", True)
self.clean = set()
self.checked = set()
self.cachedir = bb.data.getVar("CACHE", cooker.configuration.data, True)
self.clean = {}
self.checked = {}
self.depends_cache = {}
self.data = None
self.data_fn = None
self.cacheclean = True
self.data_hash = data_hash
if self.cachedir in [None, '']:
self.has_cache = False
logger.info("Not using a cache. "
"Set CACHE = <directory> to enable.")
return
self.has_cache = True
self.cachefile = getCacheFile(self.cachedir, "bb_cache.dat", self.data_hash)
logger.debug(1, "Using cache in '%s'", self.cachedir)
bb.utils.mkdirhier(self.cachedir)
cache_ok = True
if self.caches_array:
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
cache_ok = cache_ok and os.path.exists(cachefile)
cache_class.init_cacheData(self)
if cache_ok:
self.load_cachefile()
elif os.path.isfile(self.cachefile):
logger.info("Out of date cache found, rebuilding...")
def load_cachefile(self):
# First, use the core cache file information for
# validity checking
with open(self.cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
bb.msg.note(1, bb.msg.domain.Cache, "Not using a cache. Set CACHE = <directory> to enable.")
else:
self.has_cache = True
self.cachefile = os.path.join(self.cachedir,"bb_cache.dat")
bb.msg.debug(1, bb.msg.domain.Cache, "Using cache in '%s'" % self.cachedir)
try:
cache_ver = pickled.load()
bitbake_ver = pickled.load()
except Exception:
logger.info('Invalid cache, rebuilding...')
return
os.stat( self.cachedir )
except OSError:
bb.mkdirhier( self.cachedir )
if cache_ver != __cache_version__:
logger.info('Cache version mismatch, rebuilding...')
return
elif bitbake_ver != bb.__version__:
logger.info('Bitbake version mismatch, rebuilding...')
return
if not self.has_cache:
return
# If any of configuration.data's dependencies are newer than the
# cache there isn't even any point in loading it...
newest_mtime = 0
deps = bb.data.getVar("__depends", cooker.configuration.data, True)
for f,old_mtime in deps:
if old_mtime > newest_mtime:
newest_mtime = old_mtime
cachesize = 0
previous_progress = 0
previous_percent = 0
if bb.parse.cached_mtime_noerror(self.cachefile) >= newest_mtime:
try:
p = pickle.Unpickler(file(self.cachefile, "rb"))
self.depends_cache, version_data = p.load()
if version_data['CACHE_VER'] != __cache_version__:
raise ValueError, 'Cache Version Mismatch'
if version_data['BITBAKE_VER'] != bb.__version__:
raise ValueError, 'Bitbake Version Mismatch'
except EOFError:
bb.msg.note(1, bb.msg.domain.Cache, "Truncated cache found, rebuilding...")
self.depends_cache = {}
except:
bb.msg.note(1, bb.msg.domain.Cache, "Invalid cache found, rebuilding...")
self.depends_cache = {}
else:
bb.msg.note(1, bb.msg.domain.Cache, "Out of date cache found, rebuilding...")
# Calculate the correct cachesize of all those cache files
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
cachesize += os.fstat(cachefile.fileno()).st_size
bb.event.fire(bb.event.CacheLoadStarted(cachesize), self.data)
def getVar(self, var, fn, exp = 0):
"""
Gets the value of a variable
(similar to getVar in the data class)
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
while cachefile:
try:
key = pickled.load()
value = pickled.load()
except Exception:
break
if self.depends_cache.has_key(key):
self.depends_cache[key].append(value)
else:
self.depends_cache[key] = [value]
# only fire events on even percentage boundaries
current_progress = cachefile.tell() + previous_progress
current_percent = 100 * current_progress / cachesize
if current_percent > previous_percent:
previous_percent = current_percent
bb.event.fire(bb.event.CacheLoadProgress(current_progress, cachesize),
self.data)
previous_progress += current_progress
# Note: the depends cache count corresponds to the number of parsed files.
# The same file may have several caches but is still regarded as one item in the cache.
bb.event.fire(bb.event.CacheLoadCompleted(cachesize,
len(self.depends_cache)),
self.data)
@staticmethod
def virtualfn2realfn(virtualfn):
"""
Convert a virtual file name to a real one + the associated subclass keyword
There are two scenarios:
1. We have cached data - serve from depends_cache[fn]
2. We're learning what data to cache - serve from data
backend but add a copy of the data to the cache.
"""
if fn in self.clean:
return self.depends_cache[fn][var]
fn = virtualfn
cls = ""
if virtualfn.startswith('virtual:'):
elems = virtualfn.split(':')
cls = ":".join(elems[1:-1])
fn = elems[-1]
return (fn, cls)
if not fn in self.depends_cache:
self.depends_cache[fn] = {}
@staticmethod
def realfn2virtual(realfn, cls):
"""
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if cls == "":
return realfn
return "virtual:" + cls + ":" + realfn
if fn != self.data_fn:
# We're trying to access data in the cache which doesn't exist
# yet setData hasn't been called to setup the right access. Very bad.
bb.msg.error(bb.msg.domain.Cache, "Parsing error data_fn %s and fn %s don't match" % (self.data_fn, fn))
@classmethod
def loadDataFull(cls, virtualfn, appends, cfgData):
self.cacheclean = False
result = bb.data.getVar(var, self.data, exp)
self.depends_cache[fn][var] = result
return result
def setData(self, fn, data):
"""
Called to prime bb_cache ready to learn which variables to cache.
Will be followed by calls to self.getVar which aren't cached
but can be fulfilled from self.data.
"""
self.data_fn = fn
self.data = data
# Make sure __depends makes the depends_cache
self.getVar("__depends", fn, True)
self.depends_cache[fn]["CACHETIMESTAMP"] = bb.parse.cached_mtime(fn)
def loadDataFull(self, fn, cfgData):
"""
Return a complete set of data for fn.
To do this, we need to parse the file.
"""
bb.msg.debug(1, bb.msg.domain.Cache, "Parsing %s (full)" % fn)
(fn, virtual) = cls.virtualfn2realfn(virtualfn)
bb_data, skipped = self.load_bbfile(fn, cfgData)
return bb_data
logger.debug(1, "Parsing %s (full)", fn)
def loadData(self, fn, cfgData):
"""
Load a subset of data for fn.
If the cached data is valid we do nothing,
To do this, we need to parse the file and set the system
to record the variables accessed.
Return the cache status and whether the file was skipped when parsed
"""
if fn not in self.checked:
self.cacheValidUpdate(fn)
if self.cacheValid(fn):
if "SKIPPED" in self.depends_cache[fn]:
return True, True
return True, False
cfgData.setVar("__ONLYFINALISE", virtual or "default")
bb_data = cls.load_bbfile(fn, appends, cfgData)
return bb_data[virtual]
bb.msg.debug(1, bb.msg.domain.Cache, "Parsing %s" % fn)
@classmethod
def parse(cls, filename, appends, configdata, caches_array):
"""Parse the specified filename, returning the recipe information"""
infos = []
datastores = cls.load_bbfile(filename, appends, configdata)
depends = set()
for variant, data in sorted(datastores.iteritems(),
key=lambda i: i[0],
reverse=True):
virtualfn = cls.realfn2virtual(filename, variant)
depends |= (data.getVar("__depends", False) or set())
if depends and not variant:
data.setVar("__depends", depends)
bb_data, skipped = self.load_bbfile(fn, cfgData)
self.setData(fn, bb_data)
return False, skipped
info_array = []
for cache_class in caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
info = cache_class(filename, data)
info_array.append(info)
infos.append((virtualfn, info_array))
return infos
def load(self, filename, appends, configdata):
"""Obtain the recipe information for the specified filename,
using cached values if available, otherwise parsing.
Note that if it does parse to obtain the info, it will not
automatically add the information to the cache or to your
CacheData. Use the add or add_info method to do so after
running this, or use loadData instead."""
cached = self.cacheValid(filename, appends)
if cached:
infos = []
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
info_array = self.depends_cache[filename]
for variant in info_array[0].variants:
virtualfn = self.realfn2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
else:
logger.debug(1, "Parsing %s", filename)
return self.parse(filename, appends, configdata, self.caches_array)
return cached, infos
def loadData(self, fn, appends, cfgData, cacheData):
"""Load the recipe info for the specified filename,
parsing and adding to the cache if necessary, and adding
the recipe information to the supplied CacheData instance."""
skipped, virtuals = 0, 0
cached, infos = self.load(fn, appends, cfgData)
for virtualfn, info_array in infos:
if info_array[0].skipped:
logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
skipped += 1
else:
self.add_info(virtualfn, info_array, cacheData, not cached)
virtuals += 1
return cached, skipped, virtuals
def cacheValid(self, fn, appends):
def cacheValid(self, fn):
"""
Is the cache valid for fn?
Fast version, no timestamps checked.
"""
if fn not in self.checked:
self.cacheValidUpdate(fn, appends)
# Is cache enabled?
if not self.has_cache:
return False
@@ -463,7 +179,7 @@ class Cache(object):
return True
return False
def cacheValidUpdate(self, fn, appends):
def cacheValidUpdate(self, fn):
"""
Is the cache valid for fn?
Make thorough (slower) checks including timestamps.
@@ -472,86 +188,71 @@ class Cache(object):
if not self.has_cache:
return False
self.checked.add(fn)
self.checked[fn] = ""
# Pretend we're clean so getVar works
self.clean[fn] = ""
# File isn't in depends_cache
if not fn in self.depends_cache:
logger.debug(2, "Cache: %s is not cached", fn)
return False
mtime = bb.parse.cached_mtime_noerror(fn)
# Check file still exists
if mtime == 0:
logger.debug(2, "Cache: %s no longer exists", fn)
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s is not cached" % fn)
self.remove(fn)
return False
mtime = bb.parse.cached_mtime_noerror(fn)
# Check file still exists
if mtime == 0:
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s not longer exists" % fn)
self.remove(fn)
return False
info_array = self.depends_cache[fn]
# Check the file's timestamp
if mtime != info_array[0].timestamp:
logger.debug(2, "Cache: %s changed", fn)
if mtime != self.getVar("CACHETIMESTAMP", fn, True):
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s changed" % fn)
self.remove(fn)
return False
# Check dependencies are still valid
depends = info_array[0].file_depends
depends = self.getVar("__depends", fn, True)
if depends:
for f, old_mtime in depends:
for f,old_mtime in depends:
fmtime = bb.parse.cached_mtime_noerror(f)
# Check if file still exists
if old_mtime != 0 and fmtime == 0:
logger.debug(2, "Cache: %s's dependency %s was removed",
fn, f)
if fmtime == 0:
self.remove(fn)
return False
if (fmtime != old_mtime):
logger.debug(2, "Cache: %s's dependency %s changed",
fn, f)
bb.msg.debug(2, bb.msg.domain.Cache, "Cache: %s's dependency %s changed" % (fn, f))
self.remove(fn)
return False
if appends != info_array[0].appends:
logger.debug(2, "Cache: appends for %s changed", fn)
bb.note("%s to %s" % (str(appends), str(info_array[0].appends)))
self.remove(fn)
return False
#bb.msg.debug(2, bb.msg.domain.Cache, "Depends Cache: %s is clean" % fn)
if not fn in self.clean:
self.clean[fn] = ""
invalid = False
for cls in info_array[0].variants:
virtualfn = self.realfn2virtual(fn, cls)
self.clean.add(virtualfn)
if virtualfn not in self.depends_cache:
logger.debug(2, "Cache: %s is not cached", virtualfn)
invalid = True
# If any one of the variants is not present, mark as invalid for all
if invalid:
for cls in info_array[0].variants:
virtualfn = self.realfn2virtual(fn, cls)
if virtualfn in self.clean:
logger.debug(2, "Cache: Removing %s from cache", virtualfn)
self.clean.remove(virtualfn)
if fn in self.clean:
logger.debug(2, "Cache: Marking %s as not clean", fn)
self.clean.remove(fn)
return False
self.clean.add(fn)
return True
def skip(self, fn):
"""
Mark a fn as skipped
Called from the parser
"""
if not fn in self.depends_cache:
self.depends_cache[fn] = {}
self.depends_cache[fn]["SKIPPED"] = "1"
def remove(self, fn):
"""
Remove a fn from the cache
Called from the parser in error cases
"""
bb.msg.debug(1, bb.msg.domain.Cache, "Removing %s from cache" % fn)
if fn in self.depends_cache:
logger.debug(1, "Removing %s from cache", fn)
del self.depends_cache[fn]
if fn in self.clean:
logger.debug(1, "Marking %s as unclean", fn)
self.clean.remove(fn)
del self.clean[fn]
def sync(self):
"""
@@ -563,110 +264,151 @@ class Cache(object):
return
if self.cacheclean:
logger.debug(2, "Cache is clean, not saving.")
bb.msg.note(1, bb.msg.domain.Cache, "Cache is clean, not saving.")
return
file_dict = {}
pickler_dict = {}
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class_name = cache_class.__name__
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
file_dict[cache_class_name] = open(cachefile, "wb")
pickler_dict[cache_class_name] = pickle.Pickler(file_dict[cache_class_name], pickle.HIGHEST_PROTOCOL)
pickler_dict['CoreRecipeInfo'].dump(__cache_version__)
pickler_dict['CoreRecipeInfo'].dump(bb.__version__)
version_data = {}
version_data['CACHE_VER'] = __cache_version__
version_data['BITBAKE_VER'] = bb.__version__
try:
for key, info_array in self.depends_cache.iteritems():
for info in info_array:
if isinstance(info, RecipeInfoCommon):
cache_class_name = info.__class__.__name__
pickler_dict[cache_class_name].dump(key)
pickler_dict[cache_class_name].dump(info)
finally:
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class_name = cache_class.__name__
file_dict[cache_class_name].close()
p = pickle.Pickler(file(self.cachefile, "wb" ), -1 )
p.dump([self.depends_cache, version_data])
del self.depends_cache
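Note that sync() writes one pickle per RecipeInfo subclass rather than a single monolithic file; a hedged illustration, assuming a 'cache' instance (CoreRecipeInfo's cachefile name is assumed, not shown in this excerpt):
for cache_class in cache.caches_array:
    if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
        print("%s -> %s" % (cache_class.__name__, cache_class.cachefile))
# e.g. CoreRecipeInfo -> bb_cache.dat          (assumed)
#      HobRecipeInfo  -> bb_extracache_HobRecipeInfo.dat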
@staticmethod
def mtime(cachefile):
def mtime(self, cachefile):
return bb.parse.cached_mtime_noerror(cachefile)
def add_info(self, filename, info_array, cacheData, parsed=None):
if isinstance(info_array[0], CoreRecipeInfo) and (not info_array[0].skipped):
cacheData.add_from_recipeinfo(filename, info_array)
if not self.has_cache:
return
if (info_array[0].skipped or 'SRCREVINACTION' not in info_array[0].pv) and not info_array[0].nocache:
if parsed:
self.cacheclean = False
self.depends_cache[filename] = info_array
def add(self, file_name, data, cacheData, parsed=None):
def handle_data(self, file_name, cacheData):
"""
Save data we need into the cache
"""
realfn = self.virtualfn2realfn(file_name)[0]
pn = self.getVar('PN', file_name, True)
pe = self.getVar('PE', file_name, True) or "0"
pv = self.getVar('PV', file_name, True)
pr = self.getVar('PR', file_name, True)
dp = int(self.getVar('DEFAULT_PREFERENCE', file_name, True) or "0")
depends = bb.utils.explode_deps(self.getVar("DEPENDS", file_name, True) or "")
packages = (self.getVar('PACKAGES', file_name, True) or "").split()
packages_dynamic = (self.getVar('PACKAGES_DYNAMIC', file_name, True) or "").split()
rprovides = (self.getVar("RPROVIDES", file_name, True) or "").split()
info_array = []
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
info_array.append(cache_class(realfn, data))
self.add_info(file_name, info_array, cacheData, parsed)
cacheData.task_deps[file_name] = self.getVar("_task_deps", file_name, True)
@staticmethod
def load_bbfile(bbfile, appends, config):
# build PackageName to FileName lookup table
if pn not in cacheData.pkg_pn:
cacheData.pkg_pn[pn] = []
cacheData.pkg_pn[pn].append(file_name)
cacheData.stamp[file_name] = self.getVar('STAMP', file_name, True)
# build FileName to PackageName lookup table
cacheData.pkg_fn[file_name] = pn
cacheData.pkg_pepvpr[file_name] = (pe,pv,pr)
cacheData.pkg_dp[file_name] = dp
provides = [pn]
for provide in (self.getVar("PROVIDES", file_name, True) or "").split():
if provide not in provides:
provides.append(provide)
# Build forward and reverse provider hashes
# Forward: virtual -> [filenames]
# Reverse: PN -> [virtuals]
if pn not in cacheData.pn_provides:
cacheData.pn_provides[pn] = []
cacheData.fn_provides[file_name] = provides
for provide in provides:
if provide not in cacheData.providers:
cacheData.providers[provide] = []
cacheData.providers[provide].append(file_name)
if not provide in cacheData.pn_provides[pn]:
cacheData.pn_provides[pn].append(provide)
cacheData.deps[file_name] = []
for dep in depends:
if not dep in cacheData.deps[file_name]:
cacheData.deps[file_name].append(dep)
if not dep in cacheData.all_depends:
cacheData.all_depends.append(dep)
# Build reverse hash for PACKAGES, so runtime dependencies
# can be resolved (RDEPENDS, RRECOMMENDS etc.)
for package in packages:
if not package in cacheData.packages:
cacheData.packages[package] = []
cacheData.packages[package].append(file_name)
rprovides += (self.getVar("RPROVIDES_%s" % package, file_name, 1) or "").split()
for package in packages_dynamic:
if not package in cacheData.packages_dynamic:
cacheData.packages_dynamic[package] = []
cacheData.packages_dynamic[package].append(file_name)
for rprovide in rprovides:
if not rprovide in cacheData.rproviders:
cacheData.rproviders[rprovide] = []
cacheData.rproviders[rprovide].append(file_name)
# Build hash of runtime depends and recommends
if not file_name in cacheData.rundeps:
cacheData.rundeps[file_name] = {}
if not file_name in cacheData.runrecs:
cacheData.runrecs[file_name] = {}
rdepends = self.getVar('RDEPENDS', file_name, True) or ""
rrecommends = self.getVar('RRECOMMENDS', file_name, True) or ""
for package in packages + [pn]:
if not package in cacheData.rundeps[file_name]:
cacheData.rundeps[file_name][package] = []
if not package in cacheData.runrecs[file_name]:
cacheData.runrecs[file_name][package] = []
cacheData.rundeps[file_name][package] = rdepends + " " + (self.getVar("RDEPENDS_%s" % package, file_name, True) or "")
cacheData.runrecs[file_name][package] = rrecommends + " " + (self.getVar("RRECOMMENDS_%s" % package, file_name, True) or "")
# Collect files we may need for possible world-dep
# calculations
if not self.getVar('BROKEN', file_name, True) and not self.getVar('EXCLUDE_FROM_WORLD', file_name, True):
cacheData.possible_world.append(file_name)
def load_bbfile(self, bbfile, config):
"""
Load and parse one .bb build file
Return the data and whether parsing resulted in the file being skipped
"""
chdir_back = False
from bb import data, parse
import bb
from bb import utils, data, parse, debug, event, fatal
# expand tmpdir to include this topdir
data.setVar('TMPDIR', data.getVar('TMPDIR', config, 1) or "", config)
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
parse.cached_mtime_noerror(bbfile_loc)
if bb.parse.cached_mtime_noerror(bbfile_loc):
os.chdir(bbfile_loc)
bb_data = data.init_db(config)
# The ConfHandler first looks if there is a TOPDIR and if not
# then it would call getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not data.getVar('TOPDIR', bb_data):
chdir_back = True
data.setVar('TOPDIR', bbfile_loc, bb_data)
try:
if appends:
data.setVar('__BBAPPEND', " ".join(appends), bb_data)
bb_data = parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)
return bb_data
bb_data = parse.handle(bbfile, bb_data) # read .bb data
os.chdir(oldpath)
return bb_data, False
except bb.parse.SkipPackage:
os.chdir(oldpath)
return bb_data, True
except:
if chdir_back:
os.chdir(oldpath)
os.chdir(oldpath)
raise
def init(cooker):
"""
The Objective: Cache the minimum amount of data possible yet get to the
stage of building packages (i.e. tryBuild) without reparsing any .bb files.
To do this, we intercept getVar calls and only cache the variables we see
being accessed. We rely on the cache getVar calls being made for all
variables bitbake might need to use to reach this stage. For each cached
file we need to track:
* Its mtime
@@ -676,31 +418,48 @@ def init(cooker):
Files causing parsing errors are evicted from the cache.
"""
return Cache(cooker.configuration.data, cooker.configuration.data_hash)
return Cache(cooker)
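A hedged lifecycle sketch tying init() to the CacheData class defined just below; 'cooker' and 'recipe_files' are assumptions from context:
cache = init(cooker)                          # wraps the Cache constructor
cachedata = CacheData(cache.caches_array)
for fn in recipe_files:                       # hypothetical list of .bb paths
    cache.loadData(fn, [], cooker.configuration.data, cachedata)
cache.sync()                                  # persist depends_cache to disk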
class CacheData(object):
#============================================================================#
# CacheData
#============================================================================#
class CacheData:
"""
The data structures we compile from the cached data
"""
def __init__(self, caches_array):
self.caches_array = caches_array
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class.init_cacheData(self)
# Direct cache variables
def __init__(self):
"""
Direct cache variables
(from Cache.handle_data)
"""
self.providers = {}
self.rproviders = {}
self.packages = {}
self.packages_dynamic = {}
self.possible_world = []
self.pkg_pn = {}
self.pkg_fn = {}
self.pkg_pepvpr = {}
self.pkg_dp = {}
self.pn_provides = {}
self.fn_provides = {}
self.all_depends = []
self.deps = {}
self.rundeps = {}
self.runrecs = {}
self.task_queues = {}
self.task_deps = {}
self.stamp = {}
self.preferred = {}
self.tasks = {}
# Indirect Cache variables (set elsewhere)
"""
Indirect Cache variables
(set elsewhere)
"""
self.ignored_dependencies = []
self.world_target = set()
self.world_target = Set()
self.bbfile_priority = {}
def add_from_recipeinfo(self, fn, info_array):
for info in info_array:
info.add_cacheData(self, fn)
self.bbfile_config_priorities = []


@@ -1,57 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Extra RecipeInfo classes are all defined in this file. Currently
# only Hob (the Image Creator) requests some extra fields, so
# HobRecipeInfo is defined. It is named HobRecipeInfo because it
# was introduced by 'hob'. Users can also introduce other
# RecipeInfo classes, or simply use those already defined.
# In a following patch, this newly defined extra RecipeInfo
# will be dynamically loaded and used for loading/saving the extra
# cache fields.
# Copyright (C) 2011, Intel Corporation. All rights reserved.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from bb.cache import RecipeInfoCommon
class HobRecipeInfo(RecipeInfoCommon):
__slots__ = ()
classname = "HobRecipeInfo"
# please override this member with the correct data cache file
# such as (bb_cache.dat, bb_extracache_hob.dat)
cachefile = "bb_extracache_" + classname +".dat"
def __init__(self, filename, metadata):
self.summary = self.getvar('SUMMARY', metadata)
self.license = self.getvar('LICENSE', metadata)
self.section = self.getvar('SECTION', metadata)
self.description = self.getvar('DESCRIPTION', metadata)
@classmethod
def init_cacheData(cls, cachedata):
# CacheData in Hob RecipeInfo Class
cachedata.summary = {}
cachedata.license = {}
cachedata.section = {}
cachedata.description = {}
def add_cacheData(self, cachedata, fn):
cachedata.summary[fn] = self.summary
cachedata.license[fn] = self.license
cachedata.section[fn] = self.section
cachedata.description[fn] = self.description
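Following the HobRecipeInfo pattern above, a user-defined extra cache could look like this hedged sketch (the class and its field are entirely hypothetical):
from bb.cache import RecipeInfoCommon

class LicenseRecipeInfo(RecipeInfoCommon):
    __slots__ = ()

    classname = "LicenseRecipeInfo"
    cachefile = "bb_extracache_" + classname + ".dat"

    def __init__(self, filename, metadata):
        self.license = self.getvar('LICENSE', metadata)

    @classmethod
    def init_cacheData(cls, cachedata):
        # per-recipe field, keyed by filename, as in HobRecipeInfo above
        cachedata.license = {}

    def add_cacheData(self, cachedata, fn):
        cachedata.license[fn] = self.license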


@@ -1,375 +0,0 @@
import ast
import codegen
import logging
import os.path
import bb.utils, bb.data
from itertools import chain
from pysh import pyshyacc, pyshlex, sherrors
logger = logging.getLogger('BitBake.CodeParser')
PARSERCACHE_VERSION = 2
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info('Importing cPickle failed. Falling back to a very slow implementation.')
def check_indent(codestr):
"""If the code is indented, add a top level piece of code to 'remove' the indentation"""
i = 0
while codestr[i] in ["\n", "\t", " "]:
i = i + 1
if i == 0:
return codestr
if codestr[i-1] == "\t" or codestr[i-1] == " ":
return "if 1:\n" + codestr
return codestr
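A quick hedged illustration of check_indent() on a made-up indented fragment:
print(check_indent("    x = 1\n"))
# -> "if 1:\n    x = 1\n", which compile() will accept as top-level code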
pythonparsecache = {}
shellparsecache = {}
def parser_cachefile(d):
cachedir = (d.getVar("PERSISTENT_DIR", True) or
d.getVar("CACHE", True))
if cachedir in [None, '']:
return None
bb.utils.mkdirhier(cachedir)
cachefile = os.path.join(cachedir, "bb_codeparser.dat")
logger.debug(1, "Using cache in '%s' for codeparser cache", cachefile)
return cachefile
def parser_cache_init(d):
global pythonparsecache
global shellparsecache
cachefile = parser_cachefile(d)
if not cachefile:
return
try:
p = pickle.Unpickler(file(cachefile, "rb"))
data, version = p.load()
except:
return
if version != PARSERCACHE_VERSION:
return
pythonparsecache = data[0]
shellparsecache = data[1]
def parser_cache_save(d):
cachefile = parser_cachefile(d)
if not cachefile:
return
glf = bb.utils.lockfile(cachefile + ".lock", shared=True)
i = os.getpid()
lf = None
while not lf:
shellcache = {}
pythoncache = {}
lf = bb.utils.lockfile(cachefile + ".lock." + str(i), retry=False)
if not lf or os.path.exists(cachefile + "-" + str(i)):
if lf:
bb.utils.unlockfile(lf)
lf = None
i = i + 1
continue
try:
p = pickle.Unpickler(file(cachefile, "rb"))
data, version = p.load()
except (IOError, EOFError, ValueError):
data, version = None, None
if version != PARSERCACHE_VERSION:
shellcache = shellparsecache
pythoncache = pythonparsecache
else:
for h in pythonparsecache:
if h not in data[0]:
pythoncache[h] = pythonparsecache[h]
for h in shellparsecache:
if h not in data[1]:
shellcache[h] = shellparsecache[h]
p = pickle.Pickler(file(cachefile + "-" + str(i), "wb"), -1)
p.dump([[pythoncache, shellcache], PARSERCACHE_VERSION])
bb.utils.unlockfile(lf)
bb.utils.unlockfile(glf)
def parser_cache_savemerge(d):
cachefile = parser_cachefile(d)
if not cachefile:
return
glf = bb.utils.lockfile(cachefile + ".lock")
try:
p = pickle.Unpickler(file(cachefile, "rb"))
data, version = p.load()
except (IOError, EOFError):
data, version = None, None
if version != PARSERCACHE_VERSION:
data = [{}, {}]
for f in [y for y in os.listdir(os.path.dirname(cachefile)) if y.startswith(os.path.basename(cachefile) + '-')]:
f = os.path.join(os.path.dirname(cachefile), f)
try:
p = pickle.Unpickler(file(f, "rb"))
extradata, version = p.load()
except (IOError, EOFError):
extradata, version = [{}, {}], None
if version != PARSERCACHE_VERSION:
continue
for h in extradata[0]:
if h not in data[0]:
data[0][h] = extradata[0][h]
for h in extradata[1]:
if h not in data[1]:
data[1][h] = extradata[1][h]
os.unlink(f)
p = pickle.Pickler(file(cachefile, "wb"), -1)
p.dump([data, PARSERCACHE_VERSION])
bb.utils.unlockfile(glf)
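Taken together, the three functions above imply a lifecycle like this hedged sketch ('d' being a configuration datastore, assumed from context):
parser_cache_init(d)        # main process: seed the in-memory caches
# ... parsing workers run, then each calls:
parser_cache_save(d)        # write a per-PID delta beside bb_codeparser.dat
# ... once the workers are done, the main process calls:
parser_cache_savemerge(d)   # fold every per-PID delta back into the cache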
Logger = logging.getLoggerClass()
class BufferedLogger(Logger):
def __init__(self, name, level=0, target=None):
Logger.__init__(self, name)
self.setLevel(level)
self.buffer = []
self.target = target
def handle(self, record):
self.buffer.append(record)
def flush(self):
for record in self.buffer:
self.target.handle(record)
self.buffer = []
class PythonParser():
getvars = ("d.getVar", "bb.data.getVar", "data.getVar")
execfuncs = ("bb.build.exec_func", "bb.build.exec_task")
def warn(self, func, arg):
"""Warn about calls of bitbake APIs which pass a non-literal
argument for the variable name, as we're not able to track such
a reference.
"""
try:
funcstr = codegen.to_source(func)
argstr = codegen.to_source(arg)
except TypeError:
self.log.debug(2, 'Failed to convert function and argument to source form')
else:
self.log.debug(1, self.unhandled_message % (funcstr, argstr))
def visit_Call(self, node):
name = self.called_node_name(node.func)
if name in self.getvars:
if isinstance(node.args[0], ast.Str):
self.var_references.add(node.args[0].s)
else:
self.warn(node.func, node.args[0])
elif name in self.execfuncs:
if isinstance(node.args[0], ast.Str):
self.var_execs.add(node.args[0].s)
else:
self.warn(node.func, node.args[0])
elif name and isinstance(node.func, (ast.Name, ast.Attribute)):
self.execs.add(name)
def called_node_name(self, node):
"""Given a called node, return its original string form"""
components = []
while node:
if isinstance(node, ast.Attribute):
components.append(node.attr)
node = node.value
elif isinstance(node, ast.Name):
components.append(node.id)
return '.'.join(reversed(components))
else:
break
def __init__(self, name, log):
self.var_references = set()
self.var_execs = set()
self.execs = set()
self.references = set()
self.log = BufferedLogger('BitBake.Data.%s' % name, logging.DEBUG, log)
self.unhandled_message = "in call of %s, argument '%s' is not a string literal"
self.unhandled_message = "while parsing %s, %s" % (name, self.unhandled_message)
def parse_python(self, node):
h = hash(str(node))
if h in pythonparsecache:
self.references = pythonparsecache[h]["refs"]
self.execs = pythonparsecache[h]["execs"]
return
code = compile(check_indent(str(node)), "<string>", "exec",
ast.PyCF_ONLY_AST)
for n in ast.walk(code):
if n.__class__.__name__ == "Call":
self.visit_Call(n)
self.references.update(self.var_references)
self.references.update(self.var_execs)
pythonparsecache[h] = {}
pythonparsecache[h]["refs"] = self.references
pythonparsecache[h]["execs"] = self.execs
class ShellParser():
def __init__(self, name, log):
self.funcdefs = set()
self.allexecs = set()
self.execs = set()
self.log = BufferedLogger('BitBake.Data.%s' % name, logging.DEBUG, log)
self.unhandled_template = "unable to handle non-literal command '%s'"
self.unhandled_template = "while parsing %s, %s" % (name, self.unhandled_template)
def parse_shell(self, value):
"""Parse the supplied shell code in a string, returning the external
commands it executes.
"""
h = hash(str(value))
if h in shellparsecache:
self.execs = shellparsecache[h]["execs"]
return self.execs
try:
tokens, _ = pyshyacc.parse(value, eof=True, debug=False)
except pyshlex.NeedMore:
raise sherrors.ShellSyntaxError("Unexpected EOF")
for token in tokens:
self.process_tokens(token)
self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)
shellparsecache[h] = {}
shellparsecache[h]["execs"] = self.execs
return self.execs
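And the shell-side equivalent, again with a made-up function name and fragment:
parser = ShellParser("do_install", logger)
cmds = parser.parse_shell("install -d ${D}/usr/bin\nbbnote done")
# cmds -> set(["install", "bbnote"]); shell functions defined inside the
# fragment are excluded via self.funcdefs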
def process_tokens(self, tokens):
"""Process a supplied portion of the syntax tree as returned by
pyshyacc.parse.
"""
def function_definition(value):
self.funcdefs.add(value.name)
return [value.body], None
def case_clause(value):
# Element 0 of each item in the case is the list of patterns, and
# Element 1 of each item in the case is the list of commands to be
# executed when that pattern matches.
words = chain(*[item[0] for item in value.items])
cmds = chain(*[item[1] for item in value.items])
return cmds, words
def if_clause(value):
main = chain(value.cond, value.if_cmds)
rest = value.else_cmds
if isinstance(rest, tuple) and rest[0] == "elif":
return chain(main, if_clause(rest[1]))
else:
return chain(main, rest)
def simple_command(value):
return None, chain(value.words, (assign[1] for assign in value.assigns))
token_handlers = {
"and_or": lambda x: ((x.left, x.right), None),
"async": lambda x: ([x], None),
"brace_group": lambda x: (x.cmds, None),
"for_clause": lambda x: (x.cmds, x.items),
"function_definition": function_definition,
"if_clause": lambda x: (if_clause(x), None),
"pipeline": lambda x: (x.commands, None),
"redirect_list": lambda x: ([x.cmd], None),
"subshell": lambda x: (x.cmds, None),
"while_clause": lambda x: (chain(x.condition, x.cmds), None),
"until_clause": lambda x: (chain(x.condition, x.cmds), None),
"simple_command": simple_command,
"case_clause": case_clause,
}
for token in tokens:
name, value = token
try:
more_tokens, words = token_handlers[name](value)
except KeyError:
raise NotImplementedError("Unsupported token type " + name)
if more_tokens:
self.process_tokens(more_tokens)
if words:
self.process_words(words)
def process_words(self, words):
"""Process a set of 'words' in pyshyacc parlance, which includes
extraction of executed commands from $() blocks, as well as grabbing
the command name argument.
"""
words = list(words)
for word in list(words):
wtree = pyshlex.make_wordtree(word[1])
for part in wtree:
if not isinstance(part, list):
continue
if part[0] in ('`', '$('):
command = pyshlex.wordtree_as_string(part[1:-1])
self.parse_shell(command)
if word[0] in ("cmd_name", "cmd_word"):
if word in words:
words.remove(word)
usetoken = False
for word in words:
if word[0] in ("cmd_name", "cmd_word") or \
(usetoken and word[0] == "TOKEN"):
if "=" in word[1]:
usetoken = True
continue
cmd = word[1]
if cmd.startswith("$"):
self.log.debug(1, self.unhandled_template % cmd)
elif cmd == "eval":
command = " ".join(word for _, word in words[1:])
self.parse_shell(command)
else:
self.allexecs.add(cmd)
break


@@ -1,371 +0,0 @@
"""
BitBake 'Command' module
Provide an interface to interact with the bitbake server through 'commands'
"""
# Copyright (C) 2006-2007 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
The bitbake server takes 'commands' from its UI/commandline.
Commands are either synchronous or asynchronous.
Async commands return data to the client in the form of events.
Sync commands must only return data through the function return value
and must not trigger events, directly or indirectly.
Commands are queued in a CommandQueue
"""
import bb.event
import bb.cooker
class CommandCompleted(bb.event.Event):
pass
class CommandExit(bb.event.Event):
def __init__(self, exitcode):
bb.event.Event.__init__(self)
self.exitcode = int(exitcode)
class CommandFailed(CommandExit):
def __init__(self, message):
self.error = message
CommandExit.__init__(self, 1)
class Command:
"""
A queue of asynchronous commands for bitbake
"""
def __init__(self, cooker):
self.cooker = cooker
self.cmds_sync = CommandsSync()
self.cmds_async = CommandsAsync()
# FIXME Add lock for this
self.currentAsyncCommand = None
def runCommand(self, commandline):
try:
command = commandline.pop(0)
if command in CommandsSync.__dict__:
# Can run synchronous commands straight away
return getattr(CommandsSync, command)(self.cmds_sync, self, commandline)
if self.currentAsyncCommand is not None:
return "Busy (%s in progress)" % self.currentAsyncCommand[0]
if command not in CommandsAsync.__dict__:
return "No such command"
self.currentAsyncCommand = (command, commandline)
self.cooker.server_registration_cb(self.cooker.runCommands, self.cooker)
return True
except:
import traceback
return traceback.format_exc()
def runAsyncCommand(self):
try:
if self.currentAsyncCommand is not None:
(command, options) = self.currentAsyncCommand
commandmethod = getattr(CommandsAsync, command)
needcache = getattr( commandmethod, "needcache" )
if (needcache and self.cooker.state in
(bb.cooker.state.initial, bb.cooker.state.parsing)):
self.cooker.updateCache()
return True
else:
commandmethod(self.cmds_async, self, options)
return False
else:
return False
except KeyboardInterrupt as exc:
self.finishAsyncCommand("Interrupted")
return False
except SystemExit as exc:
arg = exc.args[0]
if isinstance(arg, basestring):
self.finishAsyncCommand(arg)
else:
self.finishAsyncCommand("Exited with %s" % arg)
return False
except Exception as exc:
import traceback
if isinstance(exc, bb.BBHandledException):
self.finishAsyncCommand("")
else:
self.finishAsyncCommand(traceback.format_exc())
return False
def finishAsyncCommand(self, msg=None, code=None):
if msg:
bb.event.fire(CommandFailed(msg), self.cooker.configuration.event_data)
elif code:
bb.event.fire(CommandExit(code), self.cooker.configuration.event_data)
else:
bb.event.fire(CommandCompleted(), self.cooker.configuration.event_data)
self.currentAsyncCommand = None
class CommandsSync:
"""
A class of synchronous commands
These should run quickly so as not to hurt interactive performance.
These must not influence any running synchronous command.
"""
def stateShutdown(self, command, params):
"""
Trigger cooker 'shutdown' mode
"""
command.cooker.shutdown()
def stateStop(self, command, params):
"""
Stop the cooker
"""
command.cooker.stop()
def getCmdLineAction(self, command, params):
"""
Get any command parsed from the commandline
"""
return command.cooker.commandlineAction
def getVariable(self, command, params):
"""
Read the value of a variable from configuration.data
"""
varname = params[0]
expand = True
if len(params) > 1:
expand = params[1]
return command.cooker.configuration.data.getVar(varname, expand)
def setVariable(self, command, params):
"""
Set the value of variable in configuration.data
"""
varname = params[0]
value = params[1]
command.cooker.configuration.data.setVar(varname, value)
def initCooker(self, command, params):
"""
Init the cooker to initial state with nothing parsed
"""
command.cooker.initialize()
def resetCooker(self, command, params):
"""
Reset the cooker to its initial state, thus forcing a reparse for
any async command that has the needcache property set to True
"""
command.cooker.reset()
def getCpuCount(self, command, params):
"""
Get the CPU count on the bitbake server
"""
return bb.utils.cpu_count()
def triggerEvent(self, command, params):
"""
Trigger a certain event
"""
event = params[0]
bb.event.fire(eval(event), command.cooker.configuration.data)
class CommandsAsync:
"""
A class of asynchronous commands
These functions communicate via generated events.
Any function that requires metadata parsing should be here.
"""
def buildFile(self, command, params):
"""
Build a single specified .bb file
"""
bfile = params[0]
task = params[1]
command.cooker.buildFile(bfile, task)
buildFile.needcache = False
def buildTargets(self, command, params):
"""
Build a set of targets
"""
pkgs_to_build = params[0]
task = params[1]
command.cooker.buildTargets(pkgs_to_build, task)
buildTargets.needcache = True
def generateDepTreeEvent(self, command, params):
"""
Generate an event containing the dependency information
"""
pkgs_to_build = params[0]
task = params[1]
command.cooker.generateDepTreeEvent(pkgs_to_build, task)
command.finishAsyncCommand()
generateDepTreeEvent.needcache = True
def generateDotGraph(self, command, params):
"""
Dump dependency information to disk as .dot files
"""
pkgs_to_build = params[0]
task = params[1]
command.cooker.generateDotGraphFiles(pkgs_to_build, task)
command.finishAsyncCommand()
generateDotGraph.needcache = True
def generateTargetsTree(self, command, params):
"""
Generate a tree of buildable targets.
If klass is provided ensure all recipes that inherit the class are
included in the package list.
If pkg_list provided use that list (plus any extras brought in by
klass) rather than generating a tree for all packages.
Add a new option "resolve" to indicate if we need to resolve the
replacement for "virtual/xxx" like pn.
"""
klass = params[0]
resolve = False
if len(params) > 2:
pkg_list = params[1]
resolve = params[2]
elif len(params) > 1:
pkg_list = params[1]
else:
pkg_list = []
command.cooker.generateTargetsTree(klass, pkg_list, resolve)
command.finishAsyncCommand()
generateTargetsTree.needcache = True
def findCoreBaseFiles(self, command, params):
"""
Find certain files in COREBASE directory. i.e. Layers
"""
subdir = params[0]
filename = params[1]
command.cooker.findCoreBaseFiles(subdir, filename)
command.finishAsyncCommand()
findCoreBaseFiles.needcache = False
def findConfigFiles(self, command, params):
"""
Find config files which provide appropriate values
for the passed configuration variable. i.e. MACHINE
"""
varname = params[0]
command.cooker.findConfigFiles(varname)
command.finishAsyncCommand()
findConfigFiles.needcache = False
def findFilesMatchingInDir(self, command, params):
"""
Find implementation files matching the specified pattern
in the requested subdirectory of a BBPATH
"""
pattern = params[0]
directory = params[1]
command.cooker.findFilesMatchingInDir(pattern, directory)
command.finishAsyncCommand()
findFilesMatchingInDir.needcache = False
def findConfigFilePath(self, command, params):
"""
Find the path of the requested configuration file
"""
configfile = params[0]
command.cooker.findConfigFilePath(configfile)
command.finishAsyncCommand()
findConfigFilePath.needcache = False
def showVersions(self, command, params):
"""
Show the currently selected versions
"""
command.cooker.showVersions()
command.finishAsyncCommand()
showVersions.needcache = True
def showEnvironmentTarget(self, command, params):
"""
Print the environment of a target recipe
(needs the cache to work out which recipe to use)
"""
pkg = params[0]
command.cooker.showEnvironment(None, pkg)
command.finishAsyncCommand()
showEnvironmentTarget.needcache = True
def showEnvironment(self, command, params):
"""
Print the standard environment
or if specified the environment for a specified recipe
"""
bfile = params[0]
command.cooker.showEnvironment(bfile)
command.finishAsyncCommand()
showEnvironment.needcache = False
def parseFiles(self, command, params):
"""
Parse the .bb files
"""
command.cooker.updateCache()
command.finishAsyncCommand()
parseFiles.needcache = True
def reparseFiles(self, command, params):
"""
Reparse .bb files
"""
command.cooker.reparseFiles()
command.finishAsyncCommand()
reparseFiles.needcache = True
def compareRevisions(self, command, params):
"""
Parse the .bb files
"""
if bb.fetch.fetcher_compare_revisions(command.cooker.configuration.data):
command.finishAsyncCommand(code=1)
else:
command.finishAsyncCommand()
compareRevisions.needcache = True
def parseConfigurationFiles(self, command, params):
"""
Parse the configuration files
"""
prefiles = params[0]
postfiles = params[1]
command.cooker.parseConfigurationFiles(prefiles, postfiles)
command.finishAsyncCommand()
parseConfigurationFiles.needcache = False


@@ -1,28 +0,0 @@
"""Code pulled from future python versions, here for compatibility"""
def total_ordering(cls):
"""Class decorator that fills in missing ordering methods"""
convert = {
'__lt__': [('__gt__', lambda self, other: other < self),
('__le__', lambda self, other: not other < self),
('__ge__', lambda self, other: not self < other)],
'__le__': [('__ge__', lambda self, other: other <= self),
('__lt__', lambda self, other: not other <= self),
('__gt__', lambda self, other: not self <= other)],
'__gt__': [('__lt__', lambda self, other: other > self),
('__ge__', lambda self, other: not other > self),
('__le__', lambda self, other: not self > other)],
'__ge__': [('__le__', lambda self, other: other >= self),
('__gt__', lambda self, other: not other >= self),
('__lt__', lambda self, other: not self >= other)]
}
roots = set(dir(cls)) & set(convert)
if not roots:
raise ValueError('must define at least one ordering operation: < > <= >=')
root = max(roots) # prefer __lt__ to __le__ to __gt__ to __ge__
for opname, opfunc in convert[root]:
if opname not in roots:
opfunc.__name__ = opname
opfunc.__doc__ = getattr(int, opname).__doc__
setattr(cls, opname, opfunc)
return cls
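For instance, a class defining only __lt__ and __eq__ gains the remaining operators; a minimal sketch (Python 2-era, matching the surrounding code):
@total_ordering
class Version(object):
    def __init__(self, num):
        self.num = num
    def __lt__(self, other):
        return self.num < other.num
    def __eq__(self, other):
        return self.num == other.num

assert Version(1) <= Version(2)   # __le__ synthesised from __lt__
assert Version(3) > Version(2)    # __gt__ likewise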


@@ -1,190 +0,0 @@
"""
Python Daemonizing helper
Configurable daemon behaviors:
1.) The current working directory set to the "/" directory.
2.) The current file creation mode mask set to 0.
3.) Close all open files (1024).
4.) Redirect standard I/O streams to "/dev/null".
A failed call to fork() now raises an exception.
References:
1) Advanced Programming in the Unix Environment: W. Richard Stevens
2) Unix Programming Frequently Asked Questions:
http://www.erlenstar.demon.co.uk/unix/faq_toc.html
Modified to allow a function to be daemonized and return for
bitbake use by Richard Purdie
"""
__author__ = "Chad J. Schroeder"
__copyright__ = "Copyright (C) 2005 Chad J. Schroeder"
__version__ = "0.2"
# Standard Python modules.
import os # Miscellaneous OS interfaces.
import sys # System-specific parameters and functions.
# Default daemon parameters.
# File mode creation mask of the daemon.
# For BitBake's children, we do want to inherit the parent umask.
UMASK = None
# Default maximum for the number of available file descriptors.
MAXFD = 1024
# The standard I/O file descriptors are redirected to /dev/null by default.
if (hasattr(os, "devnull")):
REDIRECT_TO = os.devnull
else:
REDIRECT_TO = "/dev/null"
def createDaemon(function, logfile):
"""
Detach a process from the controlling terminal and run it in the
background as a daemon, returning control to the caller.
"""
try:
# Fork a child process so the parent can exit. This returns control to
# the command-line or shell. It also guarantees that the child will not
# be a process group leader, since the child receives a new process ID
# and inherits the parent's process group ID. This step is required
# to ensure that the next call to os.setsid is successful.
pid = os.fork()
except OSError as e:
raise Exception("%s [%d]" % (e.strerror, e.errno))
if (pid == 0): # The first child.
# To become the session leader of this new session and the process group
# leader of the new process group, we call os.setsid(). The process is
# also guaranteed not to have a controlling terminal.
os.setsid()
# Is ignoring SIGHUP necessary?
#
# It's often suggested that the SIGHUP signal should be ignored before
# the second fork to avoid premature termination of the process. The
# reason is that when the first child terminates, all processes, e.g.
# the second child, in the orphaned group will be sent a SIGHUP.
#
# "However, as part of the session management system, there are exactly
# two cases where SIGHUP is sent on the death of a process:
#
# 1) When the process that dies is the session leader of a session that
# is attached to a terminal device, SIGHUP is sent to all processes
# in the foreground process group of that terminal device.
# 2) When the death of a process causes a process group to become
# orphaned, and one or more processes in the orphaned group are
# stopped, then SIGHUP and SIGCONT are sent to all members of the
# orphaned group." [2]
#
# The first case can be ignored since the child is guaranteed not to have
# a controlling terminal. The second case isn't so easy to dismiss.
# The process group is orphaned when the first child terminates and
# POSIX.1 requires that every STOPPED process in an orphaned process
# group be sent a SIGHUP signal followed by a SIGCONT signal. Since the
# second child is not STOPPED though, we can safely forego ignoring the
# SIGHUP signal. In any case, there are no ill-effects if it is ignored.
#
# import signal # Set handlers for asynchronous events.
# signal.signal(signal.SIGHUP, signal.SIG_IGN)
try:
# Fork a second child and exit immediately to prevent zombies. This
# causes the second child process to be orphaned, making the init
# process responsible for its cleanup. And, since the first child is
# a session leader without a controlling terminal, it's possible for
# it to acquire one by opening a terminal in the future (System V-
# based systems). This second fork guarantees that the child is no
# longer a session leader, preventing the daemon from ever acquiring
# a controlling terminal.
pid = os.fork() # Fork a second child.
except OSError as e:
raise Exception("%s [%d]" % (e.strerror, e.errno))
if (pid == 0): # The second child.
# We probably don't want the file mode creation mask inherited from
# the parent, so we give the child complete control over permissions.
if UMASK is not None:
os.umask(UMASK)
else:
# Parent (the first child) of the second child.
os._exit(0)
else:
# exit() or _exit()?
# _exit is like exit(), but it doesn't call any functions registered
# with atexit (and on_exit) or any registered signal handlers. It also
# closes any open file descriptors. Using exit() may cause all stdio
# streams to be flushed twice and any temporary files may be unexpectedly
# removed. It's therefore recommended that child branches of a fork()
# and the parent branch(es) of a daemon use _exit().
return
# Close all open file descriptors. This prevents the child from keeping
# open any file descriptors inherited from the parent. There is a variety
# of methods to accomplish this task. Three are listed below.
#
# Try the system configuration variable, SC_OPEN_MAX, to obtain the maximum
# number of open file descriptors to close. If it doesn't exist, use
# the default value (configurable).
#
# try:
# maxfd = os.sysconf("SC_OPEN_MAX")
# except (AttributeError, ValueError):
# maxfd = MAXFD
#
# OR
#
# if (os.sysconf_names.has_key("SC_OPEN_MAX")):
# maxfd = os.sysconf("SC_OPEN_MAX")
# else:
# maxfd = MAXFD
#
# OR
#
# Use the getrlimit method to retrieve the maximum file descriptor number
# that can be opened by this process. If there is no limit on the
# resource, use the default value.
#
import resource # Resource usage information.
maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
if (maxfd == resource.RLIM_INFINITY):
maxfd = MAXFD
# Iterate through and close all file descriptors.
# for fd in range(0, maxfd):
# try:
# os.close(fd)
# except OSError: # ERROR, fd wasn't open to begin with (ignored)
# pass
# Redirect the standard I/O file descriptors to the specified file. Since
# the daemon has no controlling terminal, most daemons redirect stdin,
# stdout, and stderr to /dev/null. This is done to prevent side-effects
# from reads and writes to the standard I/O file descriptors.
# This call to open is guaranteed to return the lowest file descriptor,
# which will be 0 (stdin), since it was closed above.
# os.open(REDIRECT_TO, os.O_RDWR) # standard input (0)
# Duplicate standard input to standard output and standard error.
# os.dup2(0, 1) # standard output (1)
# os.dup2(0, 2) # standard error (2)
si = file('/dev/null', 'r')
so = file(logfile, 'w')
se = so
# Replace those fds with our own
os.dup2(si.fileno(), sys.stdin.fileno())
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())
function()
os._exit(0)
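A hedged usage sketch; the worker function and log path are made up:
def serve_forever():
    # hypothetical long-running worker, executed in the daemonised child
    import time
    while True:
        time.sleep(1)

createDaemon(serve_forever, "/tmp/bitbake-daemon.log")
# the parent returns immediately; the grandchild runs serve_forever()
# with stdout/stderr redirected to the logfile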


@@ -11,7 +11,7 @@ operations. At night the cookie monster came by and
suggested 'give me cookies on setting the variables and
things will work out'. Taking this suggestion into account
applying the skills from the not yet passed 'Entwurf und
Analyse von Algorithmen' lecture and the cookie
monster seems to be right. We will track setVar more carefully
to have faster update_data and expandKeys operations.
@@ -37,42 +37,36 @@ the speed is more critical here.
#
#Based on functions from the base bb module, Copyright 2003 Holger Schurig
import sys, os, re
import sys, os, re, time, types
if sys.argv[0][-5:] == "pydoc":
path = os.path.dirname(os.path.dirname(sys.argv[1]))
else:
path = os.path.dirname(os.path.dirname(sys.argv[0]))
sys.path.insert(0, path)
from itertools import groupby
sys.path.insert(0,path)
from bb import data_smart
from bb import codeparser
import bb
logger = data_smart.logger
_dict_type = data_smart.DataSmart
def init():
"""Return a new object representing the Bitbake data"""
return _dict_type()
def init_db(parent = None):
"""Return a new object representing the Bitbake data,
optionally based on an existing object"""
if parent:
return parent.createCopy()
else:
return _dict_type()
def createCopy(source):
"""Link the source set to the destination
If one does not find the value in the destination set,
search will go on to the source set to get the value.
Value from source are copy-on-write. i.e. any try to
modify one of them will end up putting the modified value
in the destination set.
"""
return source.createCopy()
"""Link the source set to the destination
If one does not find the value in the destination set,
search will go on to the source set to get the value.
Value from source are copy-on-write. i.e. any try to
modify one of them will end up putting the modified value
in the destination set.
"""
return source.createCopy()
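A hedged sketch of the copy-on-write behaviour described above:
d = init()
setVar('GREETING', 'hello', d)
child = createCopy(d)               # lookups fall through to d...
setVar('GREETING', 'hi', child)     # ...until the child is written to
assert getVar('GREETING', d) == 'hello'
assert getVar('GREETING', child) == 'hi'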
def initVar(var, d):
"""Non-destructive var init for data structure"""
@@ -80,34 +74,91 @@ def initVar(var, d):
def setVar(var, value, d):
"""Set a variable to a given value"""
d.setVar(var, value)
"""Set a variable to a given value
Example:
>>> d = init()
>>> setVar('TEST', 'testcontents', d)
>>> print getVar('TEST', d)
testcontents
"""
d.setVar(var,value)
def getVar(var, d, exp = 0):
"""Gets the value of a variable"""
return d.getVar(var, exp)
"""Gets the value of a variable
Example:
>>> d = init()
>>> setVar('TEST', 'testcontents', d)
>>> print getVar('TEST', d)
testcontents
"""
return d.getVar(var,exp)
def renameVar(key, newkey, d):
"""Renames a variable from key to newkey"""
"""Renames a variable from key to newkey
Example:
>>> d = init()
>>> setVar('TEST', 'testcontents', d)
>>> renameVar('TEST', 'TEST2', d)
>>> print getVar('TEST2', d)
testcontents
"""
d.renameVar(key, newkey)
def delVar(var, d):
"""Removes a variable from the data set"""
"""Removes a variable from the data set
Example:
>>> d = init()
>>> setVar('TEST', 'testcontents', d)
>>> print getVar('TEST', d)
testcontents
>>> delVar('TEST', d)
>>> print getVar('TEST', d)
None
"""
d.delVar(var)
def setVarFlag(var, flag, flagvalue, d):
"""Set a flag for a given variable to a given value"""
d.setVarFlag(var, flag, flagvalue)
"""Set a flag for a given variable to a given value
Example:
>>> d = init()
>>> setVarFlag('TEST', 'python', 1, d)
>>> print getVarFlag('TEST', 'python', d)
1
"""
d.setVarFlag(var,flag,flagvalue)
def getVarFlag(var, flag, d):
"""Gets given flag from given var"""
return d.getVarFlag(var, flag)
"""Gets given flag from given var
Example:
>>> d = init()
>>> setVarFlag('TEST', 'python', 1, d)
>>> print getVarFlag('TEST', 'python', d)
1
"""
return d.getVarFlag(var,flag)
def delVarFlag(var, flag, d):
"""Removes a given flag from the variable's flags"""
d.delVarFlag(var, flag)
"""Removes a given flag from the variable's flags
Example:
>>> d = init()
>>> setVarFlag('TEST', 'testflag', 1, d)
>>> print getVarFlag('TEST', 'testflag', d)
1
>>> delVarFlag('TEST', 'testflag', d)
>>> print getVarFlag('TEST', 'testflag', d)
None
"""
d.delVarFlag(var,flag)
def setVarFlags(var, flags, d):
"""Set the flags for a given variable
@@ -116,27 +167,115 @@ def setVarFlags(var, flags, d):
setVarFlags will not clear previous
flags. Think of this method as
addVarFlags
Example:
>>> d = init()
>>> myflags = {}
>>> myflags['test'] = 'blah'
>>> setVarFlags('TEST', myflags, d)
>>> print getVarFlag('TEST', 'test', d)
blah
"""
d.setVarFlags(var, flags)
d.setVarFlags(var,flags)
def getVarFlags(var, d):
"""Gets a variable's flags"""
"""Gets a variable's flags
Example:
>>> d = init()
>>> setVarFlag('TEST', 'test', 'blah', d)
>>> print getVarFlags('TEST', d)['test']
blah
"""
return d.getVarFlags(var)
def delVarFlags(var, d):
"""Removes a variable's flags"""
"""Removes a variable's flags
Example:
>>> data = init()
>>> setVarFlag('TEST', 'testflag', 1, data)
>>> print getVarFlag('TEST', 'testflag', data)
1
>>> delVarFlags('TEST', data)
>>> print getVarFlags('TEST', data)
None
"""
d.delVarFlags(var)
def keys(d):
"""Return a list of keys in d"""
"""Return a list of keys in d
Example:
>>> d = init()
>>> setVar('TEST', 1, d)
>>> setVar('MOO' , 2, d)
>>> setVarFlag('TEST', 'test', 1, d)
>>> keys(d)
['TEST', 'MOO']
"""
return d.keys()
def getData(d):
"""Returns the data object used"""
return d
def setData(newData, d):
"""Sets the data object to the supplied value"""
d = newData
##
## Cookie Monster's query functions
##
def _get_override_vars(d, override):
"""
Internal!!!
Get the Names of Variables that have a specific
override. This function returns a iterable
Set or an empty list
"""
return []
def _get_var_flags_triple(d):
"""
Internal!!!
"""
return []
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
def expand(s, d, varname = None):
"""Variable expansion using the data store"""
"""Variable expansion using the data store.
Example:
Standard expansion:
>>> d = init()
>>> setVar('A', 'sshd', d)
>>> print expand('/usr/bin/${A}', d)
/usr/bin/sshd
Python expansion:
>>> d = init()
>>> print expand('result: ${@37 * 72}', d)
result: 2664
Shell expansion:
>>> d = init()
>>> print expand('${TARGET_MOO}', d)
${TARGET_MOO}
>>> setVar('TARGET_MOO', 'yupp', d)
>>> print expand('${TARGET_MOO}',d)
yupp
>>> setVar('SRC_URI', 'http://somebug.${TARGET_MOO}', d)
>>> delVar('TARGET_MOO', d)
>>> print expand('${SRC_URI}', d)
http://somebug.${TARGET_MOO}
"""
return d.expand(s, varname)
def expandKeys(alterdata, readdata = None):
@@ -153,25 +292,54 @@ def expandKeys(alterdata, readdata = None):
continue
todolist[key] = ekey
# These two for loops are split for performance to maximise the
# usefulness of the expand cache
for key in todolist:
ekey = todolist[key]
renameVar(key, ekey, alterdata)
def inheritFromOS(d, savedenv, permitted):
"""Inherit variables from the initial environment."""
exportlist = bb.utils.preserved_envvars_exported()
for s in savedenv.keys():
if s in permitted:
def expandData(alterdata, readdata = None):
"""For each variable in alterdata, expand it, and update the var contents.
Replacements use data from readdata.
Example:
>>> a=init()
>>> b=init()
>>> setVar("dlmsg", "dl_dir is ${DL_DIR}", a)
>>> setVar("DL_DIR", "/path/to/whatever", b)
>>> expandData(a, b)
>>> print getVar("dlmsg", a)
dl_dir is /path/to/whatever
"""
if readdata == None:
readdata = alterdata
for key in keys(alterdata):
val = getVar(key, alterdata)
if type(val) is not types.StringType:
continue
expanded = expand(val, readdata)
# print "key is %s, val is %s, expanded is %s" % (key, val, expanded)
if val != expanded:
setVar(key, expanded, alterdata)
import os
def inheritFromOS(d):
"""Inherit variables from the environment."""
# fakeroot needs to be able to set these
non_inherit_vars = [ "LD_LIBRARY_PATH", "LD_PRELOAD" ]
for s in os.environ.keys():
if not s in non_inherit_vars:
try:
setVar(s, getVar(s, savedenv, True), d)
if s in exportlist:
setVarFlag(s, "export", True, d)
setVar(s, os.environ[s], d)
setVarFlag(s, 'matchesenv', '1', d)
except TypeError:
pass
import sys
def emit_var(var, o=sys.__stdout__, d = init(), all=False):
"""Emit a variable to be sourced by a shell."""
if getVarFlag(var, "python", d):
@@ -187,15 +355,20 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
if all:
oval = getVar(var, d, 0)
val = getVar(var, d, 1)
except (KeyboardInterrupt, bb.build.FuncFailed):
except KeyboardInterrupt:
raise
except Exception as exc:
o.write('# expansion of %s threw %s: %s\n' % (var, exc.__class__.__name__, str(exc)))
except:
excname = str(sys.exc_info()[0])
if excname == "bb.build.FuncFailed":
raise
o.write('# expansion of %s threw %s\n' % (var, excname))
return 0
if all:
commentVal = re.sub('\n', '\n#', str(oval))
o.write('# %s=%s\n' % (var, commentVal))
o.write('# %s=%s\n' % (var, oval))
if type(val) is not types.StringType:
return 0
if (var.find("-") != -1 or var.find(".") != -1 or var.find('{') != -1 or var.find('}') != -1 or var.find('+') != -1) and not all:
return 0
@@ -204,13 +377,15 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
if unexport:
o.write('unset %s\n' % varExpanded)
return 1
if getVarFlag(var, 'matchesenv', d):
return 0
val = val.rstrip()
if not val:
return 0
val = str(val)
if func:
# NOTE: should probably check for unbalanced {} within the var
o.write("%s() {\n%s\n}\n" % (varExpanded, val))
@@ -222,122 +397,174 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
# if we're going to output this within doublequotes,
# to a shell, we need to escape the quotes in the var
alter = re.sub('"', '\\"', val.strip())
alter = re.sub('\n', ' \\\n', alter)
o.write('%s="%s"\n' % (varExpanded, alter))
return 0
return 1
def emit_env(o=sys.__stdout__, d = init(), all=False):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
isfunc = lambda key: bool(d.getVarFlag(key, "func"))
keys = sorted((key for key in d.keys() if not key.startswith("__")), key=isfunc)
grouped = groupby(keys, isfunc)
for isfunc, keys in grouped:
for key in keys:
emit_var(key, o, d, all and not isfunc) and o.write('\n')
env = keys(d)
def exported_keys(d):
return (key for key in d.keys() if not key.startswith('__') and
d.getVarFlag(key, 'export') and
not d.getVarFlag(key, 'unexport'))
for e in env:
if getVarFlag(e, "func", d):
continue
emit_var(e, o, d, all) and o.write('\n')
def exported_vars(d):
for key in exported_keys(d):
try:
value = d.getVar(key, True)
except Exception:
pass
if value is not None:
yield key, str(value)
def emit_func(func, o=sys.__stdout__, d = init()):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""
keys = (key for key in d.keys() if not key.startswith("__") and not d.getVarFlag(key, "func"))
for key in keys:
emit_var(key, o, d, False) and o.write('\n')
emit_var(func, o, d, False) and o.write('\n')
newdeps = bb.codeparser.ShellParser(func, logger).parse_shell(d.getVar(func, True))
seen = set()
while newdeps:
deps = newdeps
seen |= deps
newdeps = set()
for dep in deps:
if d.getVarFlag(dep, "func"):
emit_var(dep, o, d, False) and o.write('\n')
newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep, True))
newdeps -= seen
for e in env:
if not getVarFlag(e, "func", d):
continue
emit_var(e, o, d) and o.write('\n')
def update_data(d):
"""Performs final steps upon the datastore, including application of overrides"""
d.finalize()
"""Modifies the environment vars according to local overrides and commands.
Examples:
Appending to a variable:
>>> d = init()
>>> setVar('TEST', 'this is a', d)
>>> setVar('TEST_append', ' test', d)
>>> setVar('TEST_append', ' of the emergency broadcast system.', d)
>>> update_data(d)
>>> print getVar('TEST', d)
this is a test of the emergency broadcast system.
def build_dependencies(key, keys, shelldeps, vardepvals, d):
deps = set()
vardeps = d.getVarFlag(key, "vardeps", True)
try:
value = d.getVar(key, False)
if key in vardepvals:
value = d.getVarFlag(key, "vardepvalue", True)
elif d.getVarFlag(key, "func"):
if d.getVarFlag(key, "python"):
parsedvar = d.expandWithRefs(value, key)
parser = bb.codeparser.PythonParser(key, logger)
parser.parse_python(parsedvar.value)
deps = deps | parser.references
else:
parsedvar = d.expandWithRefs(value, key)
parser = bb.codeparser.ShellParser(key, logger)
parser.parse_shell(parsedvar.value)
deps = deps | shelldeps
if vardeps is None:
parser.log.flush()
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
else:
parser = d.expandWithRefs(value, key)
deps |= parser.references
deps = deps | (keys & parser.execs)
deps |= set((vardeps or "").split())
deps -= set((d.getVarFlag(key, "vardepsexclude", True) or "").split())
except:
bb.note("Error expanding variable %s" % key)
raise
return deps, value
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
#d.setVarFlag(key, "vardeps", deps)
Prepending to a variable:
>>> setVar('TEST', 'virtual/libc', d)
>>> setVar('TEST_prepend', 'virtual/tmake ', d)
>>> setVar('TEST_prepend', 'virtual/patcher ', d)
>>> update_data(d)
>>> print getVar('TEST', d)
virtual/patcher virtual/tmake virtual/libc
def generate_dependencies(d):
Overrides:
>>> setVar('TEST_arm', 'target', d)
>>> setVar('TEST_ramses', 'machine', d)
>>> setVar('TEST_local', 'local', d)
>>> setVar('OVERRIDES', 'arm', d)
keys = set(key for key in d.keys() if not key.startswith("__"))
shelldeps = set(key for key in keys if d.getVarFlag(key, "export") and not d.getVarFlag(key, "unexport"))
vardepvals = set(key for key in keys if d.getVarFlag(key, "vardepvalue"))
>>> setVar('TEST', 'original', d)
>>> update_data(d)
>>> print getVar('TEST', d)
target
deps = {}
values = {}
>>> setVar('OVERRIDES', 'arm:ramses:local', d)
>>> setVar('TEST', 'original', d)
>>> update_data(d)
>>> print getVar('TEST', d)
local
CopyMonster:
>>> e = d.createCopy()
>>> setVar('TEST_foo', 'foo', e)
>>> update_data(e)
>>> print getVar('TEST', e)
local
>>> setVar('OVERRIDES', 'arm:ramses:local:foo', e)
>>> update_data(e)
>>> print getVar('TEST', e)
foo
>>> f = d.createCopy()
>>> setVar('TEST_moo', 'something', f)
>>> setVar('OVERRIDES', 'moo:arm:ramses:local:foo', e)
>>> update_data(e)
>>> print getVar('TEST', e)
foo
>>> h = init()
>>> setVar('SRC_URI', 'file://append.foo;patch=1 ', h)
>>> g = h.createCopy()
>>> setVar('SRC_URI_append_arm', 'file://other.foo;patch=1', g)
>>> setVar('OVERRIDES', 'arm:moo', g)
>>> update_data(g)
>>> print getVar('SRC_URI', g)
file://append.foo;patch=1 file://other.foo;patch=1
"""
bb.msg.debug(2, bb.msg.domain.Data, "update_data()")
# now ask the cookie monster for help
#print "Cookie Monster"
#print "Append/Prepend %s" % d._special_values
#print "Overrides %s" % d._seen_overrides
overrides = (getVar('OVERRIDES', d, 1) or "").split(':') or []
#
# Well let us see what breaks here. We used to iterate
# over each variable and apply the override and then
# do the line expanding.
# If we have bad luck - which we will have - the keys
# were in some order that was important for this
# method, an order which we no longer have.
# Anyway we will fix that and write test cases this
# time.
#
# First we apply all overrides
# Then we will handle _append and _prepend
#
for o in overrides:
# calculate '_'+override
l = len(o)+1
# see if one should even try
if not d._seen_overrides.has_key(o):
continue
vars = d._seen_overrides[o]
for var in vars:
name = var[:-l]
try:
d[name] = d[var]
except:
bb.msg.note(1, bb.msg.domain.Data, "Untracked delVar")
# now on to the appends and prepends
if d._special_values.has_key('_append'):
appends = d._special_values['_append'] or []
for append in appends:
for (a, o) in getVarFlag(append, '_append', d) or []:
# maybe the OVERRIDE was not yet added so keep the append
if (o and o in overrides) or not o:
delVarFlag(append, '_append', d)
if o and not o in overrides:
continue
sval = getVar(append,d) or ""
sval+=a
setVar(append, sval, d)
if d._special_values.has_key('_prepend'):
prepends = d._special_values['_prepend'] or []
for prepend in prepends:
for (a, o) in getVarFlag(prepend, '_prepend', d) or []:
# maybe the OVERRIDE was not yet added so keep the prepend
if (o and o in overrides) or not o:
delVarFlag(prepend, '_prepend', d)
if o and not o in overrides:
continue
sval = a + (getVar(prepend,d) or "")
setVar(prepend, sval, d)
tasklist = d.getVar('__BBTASKS') or []
for task in tasklist:
deps[task], values[task] = build_dependencies(task, keys, shelldeps, vardepvals, d)
newdeps = deps[task]
seen = set()
while newdeps:
nextdeps = newdeps
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep not in deps:
deps[dep], values[dep] = build_dependencies(dep, keys, shelldeps, vardepvals, d)
newdeps |= deps[dep]
newdeps -= seen
#print "For %s: %s" % (task, str(deps[task]))
return tasklist, deps, values
def inherits_class(klass, d):
val = getVar('__inherit_cache', d) or []
if os.path.join('classes', '%s.bbclass' % klass) in val:
return True
return False
def _test():
"""Start a doctest run on this module"""
import doctest
from bb import data
doctest.testmod(data)
if __name__ == "__main__":
_test()


@@ -28,87 +28,21 @@ BitBake build tools.
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import copy, re
from collections import MutableMapping
import logging
import hashlib
import bb, bb.codeparser
from bb import utils
from bb.COW import COWDictBase
import copy, os, re, sys, time, types
import bb
from bb import utils, methodpool
from COW import COWDictBase
from sets import Set
from new import classobj
logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = ["_append", "_prepend"]
__setvar_keyword__ = ["_append","_prepend"]
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend)(_(?P<add>.*))?')
__expand_var_regexp__ = re.compile(r"\${[^{}]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
class VariableParse:
def __init__(self, varname, d, val = None):
self.varname = varname
self.d = d
self.value = val
self.references = set()
self.execs = set()
def var_sub(self, match):
key = match.group()[2:-1]
if self.varname and key:
if self.varname == key:
raise Exception("variable %s references itself!" % self.varname)
var = self.d.getVar(key, 1)
if var is not None:
self.references.add(key)
return var
else:
return match.group()
def python_sub(self, match):
code = match.group()[3:-1]
codeobj = compile(code.strip(), self.varname or "<expansion>", "eval")
parser = bb.codeparser.PythonParser(self.varname, logger)
parser.parse_python(code)
if self.varname:
vardeps = self.d.getVarFlag(self.varname, "vardeps", True)
if vardeps is None:
parser.log.flush()
else:
parser.log.flush()
self.references |= parser.references
self.execs |= parser.execs
value = utils.better_eval(codeobj, DataContext(self.d))
return str(value)
class DataContext(dict):
def __init__(self, metadata, **kwargs):
self.metadata = metadata
dict.__init__(self, **kwargs)
self['d'] = metadata
def __missing__(self, key):
value = self.metadata.getVar(key, True)
if value is None or self.metadata.getVarFlag(key, 'func'):
raise KeyError(key)
else:
return value
class ExpansionError(Exception):
def __init__(self, varname, expression, exception):
self.expression = expression
self.variablename = varname
self.exception = exception
self.msg = "Failure expanding variable %s, expression was %s which triggered exception %s: %s" % (varname, expression, type(exception).__name__, exception)
Exception.__init__(self, self.msg)
self.args = (varname, expression, exception)
def __str__(self):
return self.msg
class DataSmart(MutableMapping):
class DataSmart:
def __init__(self, special = COWDictBase.copy(), seen = COWDictBase.copy() ):
self.dict = {}
@@ -118,115 +52,68 @@ class DataSmart(MutableMapping):
self.expand_cache = {}
def expandWithRefs(self, s, varname):
def expand(self,s, varname):
def var_sub(match):
key = match.group()[2:-1]
if varname and key:
if varname == key:
raise Exception("variable %s references itself!" % varname)
var = self.getVar(key, 1)
if var is not None:
return var
else:
return match.group()
if not isinstance(s, basestring): # sanity check
return VariableParse(varname, self, s)
def python_sub(match):
import bb
code = match.group()[3:-1]
locals()['d'] = self
s = eval(code)
if type(s) == types.IntType: s = str(s)
return s
if type(s) is not types.StringType: # sanity check
return s
if varname and varname in self.expand_cache:
return self.expand_cache[varname]
varparse = VariableParse(varname, self)
while s.find('${') != -1:
olds = s
try:
s = __expand_var_regexp__.sub(varparse.var_sub, s)
s = __expand_python_regexp__.sub(varparse.python_sub, s)
if s == olds:
break
except ExpansionError:
s = __expand_var_regexp__.sub(var_sub, s)
s = __expand_python_regexp__.sub(python_sub, s)
if s == olds: break
if type(s) is not types.StringType: # sanity check
bb.msg.error(bb.msg.domain.Data, 'expansion of %s returned non-string %s' % (olds, s))
except KeyboardInterrupt:
raise
except:
bb.msg.note(1, bb.msg.domain.Data, "%s:%s while evaluating:\n%s" % (sys.exc_info()[0], sys.exc_info()[1], s))
raise
except Exception as exc:
raise ExpansionError(varname, s, exc)
varparse.value = s
if varname:
self.expand_cache[varname] = varparse
self.expand_cache[varname] = s
return varparse
def expand(self, s, varname = None):
return self.expandWithRefs(s, varname).value
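# Expansion works by looping regex substitution over ${VAR} references
# until the string reaches a fixed point, so nested references resolve
# in as many passes as they need. Standalone illustration (expand_str is
# a made-up helper, not part of DataSmart):
#
#     import re
#     def expand_str(s, env):
#         pat = re.compile(r"\$\{([^{}@][^{}]*)\}")
#         old = None
#         while s != old:
#             old = s
#             s = pat.sub(lambda m: env.get(m.group(1), m.group(0)), s)
#         return s
#
#     expand_str("${A}/${B}", {"A": "x", "B": "${A}y"})  ->  "x/xy"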
def finalize(self):
"""Performs final steps upon the datastore, including application of overrides"""
overrides = (self.getVar("OVERRIDES", True) or "").split(":") or []
#
# Well, let us see what breaks here. We used to iterate
# over each variable, apply the override and then
# do the line expanding.
# If we have bad luck - which we will have - the keys
# were in some order that was important for this
# method, an order we don't have anymore.
# Anyway, we will fix that and write test cases this
# time.
#
# First we apply all overrides
# Then we will handle _append and _prepend
#
for o in overrides:
# calculate '_'+override
l = len(o) + 1
# see if one should even try
if o not in self._seen_overrides:
continue
vars = self._seen_overrides[o].copy()
for var in vars:
name = var[:-l]
try:
self.setVar(name, self.getVar(var, False))
self.delVar(var)
except Exception:
logger.info("Untracked delVar")
# now on to the appends and prepends
for op in __setvar_keyword__:
if op in self._special_values:
appends = self._special_values[op] or []
for append in appends:
keep = []
for (a, o) in self.getVarFlag(append, op) or []:
if o and not o in overrides:
keep.append((a ,o))
continue
if op == "_append":
sval = self.getVar(append, False) or ""
sval += a
self.setVar(append, sval)
elif op == "_prepend":
sval = a + (self.getVar(append, False) or "")
self.setVar(append, sval)
# We save overrides that may be applied at some later stage
if keep:
self.setVarFlag(append, op, keep)
else:
self.delVarFlag(append, op)
return s
def initVar(self, var):
self.expand_cache = {}
if not var in self.dict:
self.dict[var] = {}
def _findVar(self, var):
dest = self.dict
while dest:
if var in dest:
return dest[var]
def _findVar(self,var):
_dest = self.dict
if "_data" not in dest:
while (_dest and var not in _dest):
if not "_data" in _dest:
_dest = None
break
dest = dest["_data"]
_dest = _dest["_data"]
if _dest and var in _dest:
return _dest[var]
return None
def _makeShadowCopy(self, var):
if var in self.dict:
@@ -239,7 +126,7 @@ class DataSmart(MutableMapping):
else:
self.initVar(var)
def setVar(self, var, value):
def setVar(self,var,value):
self.expand_cache = {}
match = __setvar_regexp__.match(var)
if match and match.group("keyword") in __setvar_keyword__:
@@ -254,91 +141,74 @@ class DataSmart(MutableMapping):
# pay the cookie monster
try:
self._special_values[keyword].add( base )
except KeyError:
self._special_values[keyword] = set()
except:
self._special_values[keyword] = Set()
self._special_values[keyword].add( base )
return
if not var in self.dict:
self._makeShadowCopy(var)
if self.getVarFlag(var, 'matchesenv'):
self.delVarFlag(var, 'matchesenv')
self.setVarFlag(var, 'export', 1)
# more cookies for the cookie monster
if '_' in var:
override = var[var.rfind('_')+1:]
if len(override) > 0:
if override not in self._seen_overrides:
self._seen_overrides[override] = set()
self._seen_overrides[override].add( var )
if not self._seen_overrides.has_key(override):
self._seen_overrides[override] = Set()
self._seen_overrides[override].add( var )
# setting var
self.dict[var]["content"] = value
def getVar(self, var, expand=False, noweakdefault=False):
value = self.getVarFlag(var, "content", False, noweakdefault)
def getVar(self,var,exp):
value = self.getVarFlag(var,"content")
# Call expand() separately to make use of the expand cache
if expand and value:
return self.expand(value, var)
if exp and value:
return self.expand(value,var)
return value
def renameVar(self, key, newkey):
"""
Rename the variable key to newkey
"""
val = self.getVar(key, 0)
if val is not None:
self.setVar(newkey, val)
if val is None:
return
self.setVar(newkey, val)
for i in ('_append', '_prepend'):
src = self.getVarFlag(key, i)
if src is None:
continue
dest = self.getVarFlag(newkey, i) or []
src = self.getVarFlag(key, i) or []
dest.extend(src)
self.setVarFlag(newkey, i, dest)
if i in self._special_values and key in self._special_values[i]:
if self._special_values.has_key(i) and key in self._special_values[i]:
self._special_values[i].remove(key)
self._special_values[i].add(newkey)
self.delVar(key)
def appendVar(self, key, value):
value = (self.getVar(key, False) or "") + value
self.setVar(key, value)
def prependVar(self, key, value):
value = value + (self.getVar(key, False) or "")
self.setVar(key, value)
def delVar(self, var):
def delVar(self,var):
self.expand_cache = {}
self.dict[var] = {}
if '_' in var:
override = var[var.rfind('_')+1:]
if override and override in self._seen_overrides and var in self._seen_overrides[override]:
self._seen_overrides[override].remove(var)
def setVarFlag(self, var, flag, flagvalue):
def setVarFlag(self,var,flag,flagvalue):
if not var in self.dict:
self._makeShadowCopy(var)
self.dict[var][flag] = flagvalue
def getVarFlag(self, var, flag, expand=False, noweakdefault=False):
def getVarFlag(self,var,flag):
local_var = self._findVar(var)
value = None
if local_var:
if flag in local_var:
value = copy.copy(local_var[flag])
elif flag == "content" and "defaultval" in local_var and not noweakdefault:
value = copy.copy(local_var["defaultval"])
if expand and value:
value = self.expand(value, None)
return value
return copy.copy(local_var[flag])
return None
def delVarFlag(self, var, flag):
def delVarFlag(self,var,flag):
local_var = self._findVar(var)
if not local_var:
return
@@ -348,29 +218,21 @@ class DataSmart(MutableMapping):
if var in self.dict and flag in self.dict[var]:
del self.dict[var][flag]
def appendVarFlag(self, key, flag, value):
value = (self.getVarFlag(key, flag, False) or "") + value
self.setVarFlag(key, flag, value)
def prependVarFlag(self, key, flag, value):
value = value + (self.getVarFlag(key, flag, False) or "")
self.setVarFlag(key, flag, value)
def setVarFlags(self, var, flags):
def setVarFlags(self,var,flags):
if not var in self.dict:
self._makeShadowCopy(var)
for i in flags:
for i in flags.keys():
if i == "content":
continue
self.dict[var][i] = flags[i]
def getVarFlags(self, var):
def getVarFlags(self,var):
local_var = self._findVar(var)
flags = {}
if local_var:
for i in local_var:
for i in local_var.keys():
if i == "content":
continue
flags[i] = local_var[i]
@@ -380,7 +242,7 @@ class DataSmart(MutableMapping):
return flags
def delVarFlags(self, var):
def delVarFlags(self,var):
if not var in self.dict:
self._makeShadowCopy(var)
@@ -406,69 +268,25 @@ class DataSmart(MutableMapping):
return data
def expandVarref(self, variable, parents=False):
"""Find all references to variable in the data and expand it
in place, optionally descending to parent datastores."""
if parents:
keys = iter(self)
else:
keys = self.localkeys()
ref = '${%s}' % variable
value = self.getVar(variable, False)
for key in keys:
referrervalue = self.getVar(key, False)
if referrervalue and ref in referrervalue:
self.setVar(key, referrervalue.replace(ref, value))
def localkeys(self):
for key in self.dict:
if key != '_data':
yield key
def __iter__(self):
def keylist(d):
klist = set()
for key in d:
if key == "_data":
continue
if not d[key]:
continue
klist.add(key)
# Dictionary Methods
def keys(self):
def _keys(d, mykey):
if "_data" in d:
klist |= keylist(d["_data"])
_keys(d["_data"],mykey)
return klist
for key in d.keys():
if key != "_data":
mykey[key] = None
keytab = {}
_keys(self.dict,keytab)
return keytab.keys()
for k in keylist(self.dict):
yield k
def __getitem__(self,item):
#print "Warning deprecated"
return self.getVar(item, False)
def __len__(self):
return len(frozenset(self))
def __setitem__(self,var,data):
#print "Warning deprecated"
self.setVar(var,data)
def __getitem__(self, item):
value = self.getVar(item, False)
if value is None:
raise KeyError(item)
else:
return value
def __setitem__(self, var, value):
self.setVar(var, value)
def __delitem__(self, var):
self.delVar(var)
def get_hash(self):
data = ""
config_whitelist = set((self.getVar("BB_HASHCONFIG_WHITELIST", True) or "").split())
keys = set(key for key in iter(self) if not key.startswith("__"))
for key in keys:
if key in config_whitelist:
continue
value = self.getVar(key, False) or ""
data = data + key + ': ' + str(value) + '\n'
return hashlib.md5(data).hexdigest()

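The datastore above is a chain of dicts linked through the reserved "_data" key: createCopy layers a fresh dict over its parent, and _findVar walks outward until it finds the variable. A minimal sketch of that lookup chain, independent of the COW machinery (find_var is an illustrative name):

def find_var(d, var):
    while d is not None:
        if var in d:
            return d[var]
        d = d.get('_data')  # fall through to the parent datastore
    return None

parent = {'PN': {'content': 'bitbake'}}
child = {'_data': parent, 'PV': {'content': '1.0'}}
print(find_var(child, 'PN'))  # -> {'content': 'bitbake'}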
View File

@@ -22,180 +22,100 @@ BitBake build tools.
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os, sys
import warnings
try:
import cPickle as pickle
except ImportError:
import pickle
import logging
import atexit
import traceback
import os, re
import bb.utils
# This is the pid for which we should generate the event. This is set when
# the runqueue forks off.
worker_pid = 0
worker_pipe = None
logger = logging.getLogger('BitBake.Event')
class Event(object):
class Event:
"""Base class for events"""
type = "Event"
def __init__(self):
self.pid = worker_pid
def __init__(self, d):
self._data = d
def getData(self):
return self._data
def setData(self, data):
self._data = data
data = property(getData, setData, None, "data property")
NotHandled = 0
Handled = 1
Registered = 10
AlreadyRegistered = 14
# Internal
_handlers = {}
_ui_handlers = {}
_ui_handler_seq = 0
_handlers = []
_handlers_dict = {}
# For compatibility
bb.utils._context["NotHandled"] = NotHandled
bb.utils._context["Handled"] = Handled
def tmpHandler(event):
"""Default handler for code events"""
return NotHandled
def execute_handler(name, handler, event, d):
event.data = d
try:
ret = handler(event)
except bb.parse.SkipPackage:
raise
except Exception:
etype, value, tb = sys.exc_info()
logger.error("Execution of event handler '%s' failed" % name,
exc_info=(etype, value, tb.tb_next))
raise
except SystemExit as exc:
if exc.code != 0:
logger.error("Execution of event handler '%s' failed" % name)
raise
finally:
del event.data
def defaultTmpHandler():
tmp = "def tmpHandler(e):\n\t\"\"\"heh\"\"\"\n\treturn NotHandled"
comp = bb.utils.better_compile(tmp, "tmpHandler(e)", "bb.event.defaultTmpHandler")
return comp
if ret is not None:
warnings.warn("Using Handled/NotHandled in event handlers is deprecated",
DeprecationWarning, stacklevel = 2)
def fire_class_handlers(event, d):
if isinstance(event, logging.LogRecord):
return
for name, handler in _handlers.iteritems():
try:
execute_handler(name, handler, event, d)
except Exception:
continue
ui_queue = []
@atexit.register
def print_ui_queue():
"""If we're exiting before a UI has been spawned, display any queued
LogRecords to the console."""
logger = logging.getLogger("BitBake")
if not _ui_handlers:
from bb.msg import BBLogFormatter
console = logging.StreamHandler(sys.stdout)
console.setFormatter(BBLogFormatter("%(levelname)s: %(message)s"))
logger.handlers = [console]
for event in ui_queue:
if isinstance(event, logging.LogRecord):
logger.handle(event)
def fire_ui_handlers(event, d):
if not _ui_handlers:
# No UI handlers registered yet, queue up the messages
ui_queue.append(event)
return
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
try:
# We use pickle here since it better handles object instances
# which xmlrpc's marshaller does not. Events *must* be serializable
# by pickle.
if hasattr(_ui_handlers[h].event, "sendpickle"):
_ui_handlers[h].event.sendpickle((pickle.dumps(event)))
else:
_ui_handlers[h].event.send(event)
except:
errors.append(h)
for h in errors:
del _ui_handlers[h]
def fire(event, d):
def fire(event):
"""Fire off an Event"""
for h in _handlers:
if type(h).__name__ == "code":
exec(h)
if tmpHandler(event) == Handled:
return Handled
else:
if h(event) == Handled:
return Handled
return NotHandled
# We can fire class handlers in the worker process context and this is
# desired so they get the task based datastore.
# UI handlers need to be fired in the server context so we defer this. They
# don't have a datastore so the datastore context isn't a problem.
fire_class_handlers(event, d)
if worker_pid != 0:
worker_fire(event, d)
else:
fire_ui_handlers(event, d)
def worker_fire(event, d):
data = "<event>" + pickle.dumps(event) + "</event>"
worker_pipe.write(data)
def fire_from_worker(event, d):
if not event.startswith("<event>") or not event.endswith("</event>"):
print("Error, not an event %s" % event)
return
event = pickle.loads(event[7:-8])
fire_ui_handlers(event, d)
noop = lambda _: None
def register(name, handler):
"""Register an Event handler"""
# already registered
if name in _handlers:
if name in _handlers_dict:
return AlreadyRegistered
if handler is not None:
# handle string containing python code
if isinstance(handler, basestring):
tmp = "def %s(e):\n%s" % (name, handler)
try:
code = compile(tmp, "%s(e)" % name, "exec")
except SyntaxError:
logger.error("Unable to register event handler '%s':\n%s", name,
''.join(traceback.format_exc(limit=0)))
_handlers[name] = noop
return
env = {}
bb.utils.simple_exec(code, env)
func = bb.utils.better_eval(name, env)
_handlers[name] = func
# handle string containing python code
if type(handler).__name__ == "str":
_registerCode(handler)
else:
_handlers[name] = handler
_handlers.append(handler)
_handlers_dict[name] = 1
return Registered
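# String handlers are turned into real functions by wrapping the code in
# a def, compiling it, executing it in a scratch namespace and pulling
# the function back out. Condensed illustration (make_handler is a
# made-up name):
#
#     def make_handler(name, body):
#         src = "def %s(e):\n%s" % (name, body)
#         code = compile(src, "<handler %s>" % name, "exec")
#         env = {}
#         exec(code, env)
#         return env[name]
#
#     make_handler("on_event", "    return e * 2")(21)  ->  42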
def _registerCode(handlerStr):
"""Register a 'code' Event.
Deprecated interface; call register instead.
Expects to be passed python code as a string, which will
be passed in turn to compile() and then exec(). Note that
the code will be within a function, so should have had
appropriate tabbing put in place."""
tmp = "def tmpHandler(e):\n%s" % handlerStr
comp = bb.utils.better_compile(tmp, "tmpHandler(e)", "bb.event._registerCode")
# prevent duplicate registration
_handlers.append(comp)
def remove(name, handler):
"""Remove an Event handler"""
_handlers.pop(name)
def register_UIHhandler(handler):
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
return _ui_handler_seq
_handlers_dict.pop(name)
if type(handler).__name__ == "str":
return _removeCode(handler)
else:
_handlers.remove(handler)
def unregister_UIHhandler(handlerNum):
if handlerNum in _ui_handlers:
del _ui_handlers[handlerNum]
return
def _removeCode(handlerStr):
"""Remove a 'code' Event handler
Deprecated interface; call remove instead."""
tmp = "def tmpHandler(e):\n%s" % handlerStr
comp = bb.utils.better_compile(tmp, "tmpHandler(e)", "bb.event._removeCode")
_handlers.remove(comp)
def getName(e):
"""Returns the name of a class or class instance"""
@@ -204,48 +124,16 @@ def getName(e):
else:
return e.__name__
class OperationStarted(Event):
"""An operation has begun"""
def __init__(self, msg = "Operation Started"):
Event.__init__(self)
self.msg = msg
class OperationCompleted(Event):
"""An operation has completed"""
def __init__(self, total, msg = "Operation Completed"):
Event.__init__(self)
self.total = total
self.msg = msg
class OperationProgress(Event):
"""An operation is in progress"""
def __init__(self, current, total, msg = "Operation in Progress"):
Event.__init__(self)
self.current = current
self.total = total
self.msg = msg + ": %s/%s" % (current, total);
class ConfigParsed(Event):
"""Configuration Parsing Complete"""
class RecipeEvent(Event):
def __init__(self, fn):
self.fn = fn
Event.__init__(self)
class RecipePreFinalise(RecipeEvent):
""" Recipe Parsing Complete but not yet finalised"""
class RecipeParsed(RecipeEvent):
""" Recipe Parsing Complete """
class StampUpdate(Event):
"""Trigger for any adjustment of the stamp files to happen"""
def __init__(self, targets, stampfns):
def __init__(self, targets, stampfns, d):
self._targets = targets
self._stampfns = stampfns
Event.__init__(self)
Event.__init__(self, d)
def getStampPrefix(self):
return self._stampfns
@@ -256,13 +144,29 @@ class StampUpdate(Event):
stampPrefix = property(getStampPrefix)
targets = property(getTargets)
class PkgBase(Event):
"""Base class for package events"""
def __init__(self, t, d):
self._pkg = t
Event.__init__(self, d)
def getPkg(self):
return self._pkg
def setPkg(self, pkg):
self._pkg = pkg
pkg = property(getPkg, setPkg, None, "pkg property")
class BuildBase(Event):
"""Base class for bbmake run events"""
def __init__(self, n, p, failures = 0):
def __init__(self, n, p, c, failures = 0):
self._name = n
self._pkgs = p
Event.__init__(self)
Event.__init__(self, c)
self._failures = failures
def getPkgs(self):
@@ -294,34 +198,56 @@ class BuildBase(Event):
cfg = property(getCfg, setCfg, None, "cfg property")
class DepBase(PkgBase):
"""Base class for dependency events"""
def __init__(self, t, data, d):
self._dep = d
PkgBase.__init__(self, t, data)
def getDep(self):
return self._dep
def setDep(self, dep):
self._dep = dep
dep = property(getDep, setDep, None, "dep property")
class PkgStarted(PkgBase):
"""Package build started"""
class BuildStarted(BuildBase, OperationStarted):
class PkgFailed(PkgBase):
"""Package build failed"""
class PkgSucceeded(PkgBase):
"""Package build completed"""
class BuildStarted(BuildBase):
"""bbmake build run started"""
def __init__(self, n, p, failures = 0):
OperationStarted.__init__(self, "Building Started")
BuildBase.__init__(self, n, p, failures)
class BuildCompleted(BuildBase, OperationCompleted):
class BuildCompleted(BuildBase):
"""bbmake build run completed"""
def __init__(self, total, n, p, failures = 0):
if not failures:
OperationCompleted.__init__(self, total, "Building Succeeded")
else:
OperationCompleted.__init__(self, total, "Building Failed")
BuildBase.__init__(self, n, p, failures)
class UnsatisfiedDep(DepBase):
"""Unsatisfied Dependency"""
class RecursiveDep(DepBase):
"""Recursive Dependency"""
class NoProvider(Event):
"""No Provider for an Event"""
def __init__(self, item, runtime=False, dependees=None, reasons=[]):
Event.__init__(self)
def __init__(self, item, data,runtime=False):
Event.__init__(self, data)
self._item = item
self._runtime = runtime
self._dependees = dependees
self._reasons = reasons
def getItem(self):
return self._item
@@ -332,8 +258,8 @@ class NoProvider(Event):
class MultipleProviders(Event):
"""Multiple Providers"""
def __init__(self, item, candidates, runtime = False):
Event.__init__(self)
def __init__(self, item, candidates, data, runtime = False):
Event.__init__(self, data)
self._item = item
self._candidates = candidates
self._is_runtime = runtime
@@ -355,165 +281,3 @@ class MultipleProviders(Event):
Get the possible Candidates for a PROVIDER.
"""
return self._candidates
class ParseStarted(OperationStarted):
"""Recipe parsing for the runqueue has begun"""
def __init__(self, total):
OperationStarted.__init__(self, "Recipe parsing Started")
self.total = total
class ParseCompleted(OperationCompleted):
"""Recipe parsing for the runqueue has completed"""
def __init__(self, cached, parsed, skipped, masked, virtuals, errors, total):
OperationCompleted.__init__(self, total, "Recipe parsing Completed")
self.cached = cached
self.parsed = parsed
self.skipped = skipped
self.virtuals = virtuals
self.masked = masked
self.errors = errors
self.sofar = cached + parsed
class ParseProgress(OperationProgress):
"""Recipe parsing progress"""
def __init__(self, current, total):
OperationProgress.__init__(self, current, total, "Recipe parsing")
class CacheLoadStarted(OperationStarted):
"""Loading of the dependency cache has begun"""
def __init__(self, total):
OperationStarted.__init__(self, "Loading cache Started")
self.total = total
class CacheLoadProgress(OperationProgress):
"""Cache loading progress"""
def __init__(self, current, total):
OperationProgress.__init__(self, current, total, "Loading cache")
class CacheLoadCompleted(OperationCompleted):
"""Cache loading is complete"""
def __init__(self, total, num_entries):
OperationCompleted.__init__(self, total, "Loading cache Completed")
self.num_entries = num_entries
class TreeDataPreparationStarted(OperationStarted):
"""Tree data preparation started"""
def __init__(self):
OperationStarted.__init__(self, "Preparing tree data Started")
class TreeDataPreparationProgress(OperationProgress):
"""Tree data preparation is in progress"""
def __init__(self, current, total):
OperationProgress.__init__(self, current, total, "Preparing tree data")
class TreeDataPreparationCompleted(OperationCompleted):
"""Tree data preparation completed"""
def __init__(self, total):
OperationCompleted.__init__(self, total, "Preparing tree data Completed")
class DepTreeGenerated(Event):
"""
Event when a dependency tree has been generated
"""
def __init__(self, depgraph):
Event.__init__(self)
self._depgraph = depgraph
class TargetsTreeGenerated(Event):
"""
Event when a set of buildable targets has been generated
"""
def __init__(self, model):
Event.__init__(self)
self._model = model
class FilesMatchingFound(Event):
"""
Event when a list of files matching the supplied pattern has
been generated
"""
def __init__(self, pattern, matches):
Event.__init__(self)
self._pattern = pattern
self._matches = matches
class CoreBaseFilesFound(Event):
"""
Event when a list of appropriate config files has been generated
"""
def __init__(self, paths):
Event.__init__(self)
self._paths = paths
class ConfigFilesFound(Event):
"""
Event when a list of appropriate config files has been generated
"""
def __init__(self, variable, values):
Event.__init__(self)
self._variable = variable
self._values = values
class ConfigFilePathFound(Event):
"""
Event when a path for a config file has been found
"""
def __init__(self, path):
Event.__init__(self)
self._path = path
class MsgBase(Event):
"""Base class for messages"""
def __init__(self, msg):
self._message = msg
Event.__init__(self)
class MsgDebug(MsgBase):
"""Debug Message"""
class MsgNote(MsgBase):
"""Note Message"""
class MsgWarn(MsgBase):
"""Warning Message"""
class MsgError(MsgBase):
"""Error Message"""
class MsgFatal(MsgBase):
"""Fatal Message"""
class MsgPlain(MsgBase):
"""General output"""
class LogHandler(logging.Handler):
"""Dispatch logging messages as bitbake events"""
def emit(self, record):
if record.exc_info:
etype, value, tb = record.exc_info
if hasattr(tb, 'tb_next'):
tb = list(bb.exceptions.extract_traceback(tb, context=3))
record.bb_exc_info = (etype, value, tb)
record.exc_info = None
fire(record, None)
def filter(self, record):
record.taskpid = worker_pid
return True
class RequestPackageInfo(Event):
"""
Event to request package information
"""
class PackageInfo(Event):
"""
Package information for GUI
"""
def __init__(self, pkginfolist):
Event.__init__(self)
self._pkginfolist = pkginfolist

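worker_fire and fire_from_worker above move events between processes by pickling them inside "<event>"..."</event>" framing; the slicing [7:-8] strips exactly those markers. A toy round-trip of that framing protocol (standalone, using bytes for clarity):

import pickle

def frame(event):
    return b"<event>" + pickle.dumps(event) + b"</event>"

def unframe(data):
    assert data.startswith(b"<event>") and data.endswith(b"</event>")
    return pickle.loads(data[7:-8])

print(unframe(frame({'msg': 'Building Started'})))  # -> {'msg': 'Building Started'}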
View File

@@ -1,84 +0,0 @@
from __future__ import absolute_import
import inspect
import traceback
import bb.namedtuple_with_abc
from collections import namedtuple
class TracebackEntry(namedtuple.abc):
"""Pickleable representation of a traceback entry"""
_fields = 'filename lineno function args code_context index'
_header = ' File "{0.filename}", line {0.lineno}, in {0.function}{0.args}'
def format(self, formatter=None):
if not self.code_context:
return self._header.format(self) + '\n'
formatted = [self._header.format(self) + ':\n']
for lineindex, line in enumerate(self.code_context):
if formatter:
line = formatter(line)
if lineindex == self.index:
formatted.append(' >%s' % line)
else:
formatted.append(' %s' % line)
return formatted
def __str__(self):
return ''.join(self.format())
def _get_frame_args(frame):
"""Get the formatted arguments and class (if available) for a frame"""
arginfo = inspect.getargvalues(frame)
if not arginfo.args:
return '', None
firstarg = arginfo.args[0]
if firstarg == 'self':
self = arginfo.locals['self']
cls = self.__class__.__name__
arginfo.args.pop(0)
del arginfo.locals['self']
else:
cls = None
formatted = inspect.formatargvalues(*arginfo)
return formatted, cls
def extract_traceback(tb, context=1):
frames = inspect.getinnerframes(tb, context)
for frame, filename, lineno, function, code_context, index in frames:
formatted_args, cls = _get_frame_args(frame)
if cls:
function = '%s.%s' % (cls, function)
yield TracebackEntry(filename, lineno, function, formatted_args,
code_context, index)
def format_extracted(extracted, formatter=None, limit=None):
if limit:
extracted = extracted[-limit:]
formatted = []
for tracebackinfo in extracted:
formatted.extend(tracebackinfo.format(formatter))
return formatted
def format_exception(etype, value, tb, context=1, limit=None, formatter=None):
formatted = ['Traceback (most recent call last):\n']
if hasattr(tb, 'tb_next'):
tb = extract_traceback(tb, context)
formatted.extend(format_extracted(tb, formatter, limit))
formatted.extend(traceback.format_exception_only(etype, value))
return formatted
def to_string(exc):
if isinstance(exc, SystemExit):
if not isinstance(exc.code, basestring):
return 'Exited with "%d"' % exc.code
return str(exc)

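The module above mirrors the stdlib traceback formatting but over pickleable entries that can cross process boundaries. For comparison, the equivalent of its final formatting step using only the standard library:

import sys, traceback

try:
    1 / 0
except ZeroDivisionError:
    etype, value, tb = sys.exc_info()
    lines = ['Traceback (most recent call last):\n']
    lines += traceback.format_tb(tb)
    lines += traceback.format_exception_only(etype, value)
    print(''.join(lines))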
View File

@@ -24,21 +24,15 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
from __future__ import absolute_import
from __future__ import print_function
import os, re
import logging
import os, re, fcntl
import bb
from bb import data
from bb import persist_data
from bb import utils
__version__ = "1"
logger = logging.getLogger("BitBake.Fetch")
class MalformedUrl(Exception):
"""Exception raised when encountering an invalid url"""
try:
import cPickle as pickle
except ImportError:
import pickle
class FetchError(Exception):
"""Exception raised when a download fails"""
@@ -58,138 +52,57 @@ class MD5SumError(Exception):
class InvalidSRCREV(Exception):
"""Exception raised when an invalid SRCREV is encountered"""
def decodeurl(url):
Decodes a URL into the tokens (scheme, network location, path,
user, password, parameters).
"""
m = re.compile('(?P<type>[^:]*)://((?P<user>.+)@)?(?P<location>[^;]+)(;(?P<parm>.*))?').match(url)
if not m:
raise MalformedUrl(url)
type = m.group('type')
location = m.group('location')
if not location:
raise MalformedUrl(url)
user = m.group('user')
parm = m.group('parm')
locidx = location.find('/')
if locidx != -1 and type.lower() != 'file':
host = location[:locidx]
path = location[locidx:]
else:
host = ""
path = location
if user:
m = re.compile('(?P<user>[^:]+)(:?(?P<pswd>.*))').match(user)
if m:
user = m.group('user')
pswd = m.group('pswd')
else:
user = ''
pswd = ''
p = {}
if parm:
for s in parm.split(';'):
s1, s2 = s.split('=')
p[s1] = s2
return (type, host, path, user, pswd, p)
def encodeurl(decoded):
"""Encodes a URL from tokens (scheme, network location, path,
user, password, parameters).
"""
(type, host, path, user, pswd, p) = decoded
if not type or not path:
raise MissingParameterError("Type or path url components missing when encoding %s" % decoded)
url = '%s://' % type
if user:
url += "%s" % user
if pswd:
url += ":%s" % pswd
url += "@"
if host:
url += "%s" % host
url += "%s" % path
if p:
for parm in p:
url += ";%s=%s" % (parm, p[parm])
return url
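# Worked example: decodeurl and encodeurl are inverses, so a round trip
# preserves the URL's components (illustrative URL; values derived from
# the regex above):
#
#     decodeurl("cvs://anon@cvs.example.org/cvs;module=foo")
#       -> ('cvs', 'cvs.example.org', '/cvs', 'anon', '', {'module': 'foo'})
#     encodeurl(('cvs', 'cvs.example.org', '/cvs', 'anon', '', {'module': 'foo'}))
#       -> 'cvs://anon@cvs.example.org/cvs;module=foo'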
def uri_replace(uri, uri_find, uri_replace, d):
# bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: operating on %s" % uri)
if not uri or not uri_find or not uri_replace:
logger.debug(1, "uri_replace: passed an undefined value, not replacing")
uri_decoded = list(decodeurl(uri))
uri_find_decoded = list(decodeurl(uri_find))
uri_replace_decoded = list(decodeurl(uri_replace))
result_decoded = ['', '', '', '', '', {}]
bb.msg.debug(1, bb.msg.domain.Fetcher, "uri_replace: passed an undefined value, not replacing")
uri_decoded = list(bb.decodeurl(uri))
uri_find_decoded = list(bb.decodeurl(uri_find))
uri_replace_decoded = list(bb.decodeurl(uri_replace))
result_decoded = ['','','','','',{}]
for i in uri_find_decoded:
loc = uri_find_decoded.index(i)
result_decoded[loc] = uri_decoded[loc]
if isinstance(i, basestring):
import types
if type(i) == types.StringType:
import re
if (re.match(i, uri_decoded[loc])):
result_decoded[loc] = re.sub(i, uri_replace_decoded[loc], uri_decoded[loc])
if uri_find_decoded.index(i) == 2:
if d:
localfn = bb.fetch.localpath(uri, d)
if localfn:
result_decoded[loc] = os.path.join(os.path.dirname(result_decoded[loc]), os.path.basename(bb.fetch.localpath(uri, d)))
result_decoded[loc] = os.path.dirname(result_decoded[loc]) + "/" + os.path.basename(bb.fetch.localpath(uri, d))
# bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: matching %s against %s and replacing with %s" % (i, uri_decoded[loc], uri_replace_decoded[loc]))
else:
# bb.msg.note(1, bb.msg.domain.Fetcher, "uri_replace: no match")
return uri
return encodeurl(result_decoded)
# else:
# for j in i.keys():
# FIXME: apply replacements against options
return bb.encodeurl(result_decoded)
methods = []
urldata_cache = {}
saved_headrevs = {}
def fetcher_init(d):
"""
Called to initialize the fetchers once the configuration data is known.
Called to initilize the fetchers once the configuration data is known
Calls before this must not hit the cache.
"""
# When to drop SCM head revisions controlled by user policy
srcrev_policy = d.getVar('BB_SRCREV_POLICY', 1) or "clear"
pd = persist_data.PersistData(d)
# When to drop SCM head revisions controled by user policy
srcrev_policy = bb.data.getVar('BB_SRCREV_POLICY', d, 1) or "clear"
if srcrev_policy == "cache":
logger.debug(1, "Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Keeping SRCREV cache due to cache policy of: %s" % srcrev_policy)
elif srcrev_policy == "clear":
logger.debug(1, "Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
revs = persist_data.persist('BB_URI_HEADREVS', d)
try:
bb.fetch.saved_headrevs = revs.items()
except:
pass
revs.clear()
bb.msg.debug(1, bb.msg.domain.Fetcher, "Clearing SRCREV cache due to cache policy of: %s" % srcrev_policy)
pd.delDomain("BB_URI_HEADREVS")
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
for m in methods:
if hasattr(m, "init"):
m.init(d)
def fetcher_compare_revisions(d):
"""
Compare the revisions in the persistent cache with current values and
return true/false on whether they've changed.
"""
data = persist_data.persist('BB_URI_HEADREVS', d).items()
data2 = bb.fetch.saved_headrevs
changed = False
for key in data:
if key not in data2 or data2[key] != data[key]:
logger.debug(1, "%s changed", key)
changed = True
return True
else:
logger.debug(2, "%s did not change", key)
return False
bb.msg.fatal(bb.msg.domain.Fetcher, "Invalid SRCREV cache policy of: %s" % srcrev_policy)
# Make sure our domains exist
pd.addDomain("BB_URI_HEADREVS")
pd.addDomain("BB_URI_LOCALCOUNT")
# Function call order is usually:
# 1. init
@@ -199,8 +112,7 @@ def fetcher_compare_revisions(d):
def init(urls, d, setup = True):
urldata = {}
fn = d.getVar('FILE', 1)
fn = bb.data.getVar('FILE', d, 1)
if fn in urldata_cache:
urldata = urldata_cache[fn]
@@ -211,135 +123,63 @@ def init(urls, d, setup = True):
if setup:
for url in urldata:
if not urldata[url].setup:
urldata[url].setup_localpath(d)
urldata_cache[fn] = urldata
return urldata
def mirror_from_string(data):
return [ i.split() for i in (data or "").replace('\\n','\n').split('\n') if i ]
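# Worked example: the PREMIRRORS/MIRRORS text format is one
# "<find> <replace>" pair per line, with a literal "\n" sequence also
# accepted as a line separator:
#
#     mirror_from_string("http://.*/.* file:///mirror/\\nftp://.*/.* http://fallback/")
#       -> [['http://.*/.*', 'file:///mirror/'], ['ftp://.*/.*', 'http://fallback/']]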
def verify_checksum(u, ud, d):
"""
verify the MD5 and SHA256 checksum for downloaded src
return value:
- True: checksum matched
- False: checksum unmatched
if checksum is missing in recipes file, "BB_STRICT_CHECKSUM" decide the return value.
if BB_STRICT_CHECKSUM = "1" then return false as unmatched, otherwise return true as
matched
"""
if not ud.type in ["http", "https", "ftp", "ftps"]:
return
md5data = bb.utils.md5_file(ud.localpath)
sha256data = bb.utils.sha256_file(ud.localpath)
if (ud.md5_expected == None or ud.sha256_expected == None):
logger.warn('Missing SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"\nSRC_URI[%s] = "%s"',
ud.localpath, ud.md5_name, md5data,
ud.sha256_name, sha256data)
if d.getVar("BB_STRICT_CHECKSUM", True) == "1":
raise FetchError("No checksum specified for %s." % u)
return
if (ud.md5_expected != md5data or ud.sha256_expected != sha256data):
logger.error('The checksums for "%s" did not match.\n'
' MD5: expected "%s", got "%s"\n'
' SHA256: expected "%s", got "%s"\n',
ud.localpath, ud.md5_expected, md5data,
ud.sha256_expected, sha256data)
raise FetchError("%s checksum mismatch." % u)
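# The check above boils down to computing fresh digests of the download
# and comparing them against the SRC_URI[md5sum]/SRC_URI[sha256sum]
# flags from the recipe. Standalone sketch of the digest side
# (file_digests is a made-up helper):
#
#     import hashlib
#     def file_digests(path):
#         data = open(path, 'rb').read()
#         return hashlib.md5(data).hexdigest(), hashlib.sha256(data).hexdigest()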
def go(d, urls = None):
def go(d):
"""
Fetch all urls
init must have previously been called
"""
if not urls:
urls = d.getVar("SRC_URI", 1).split()
urldata = init(urls, d, True)
urldata = init([], d, True)
for u in urls:
for u in urldata:
ud = urldata[u]
m = ud.method
localpath = ""
if ud.localfile:
if not m.forcefetch(u, ud, d) and os.path.exists(ud.md5):
# File already present along with md5 stamp file
# Touch md5 file to show activity
try:
os.utime(ud.md5, None)
except:
# Errors aren't fatal here
pass
continue
lf = bb.utils.lockfile(ud.lockfile)
if not m.forcefetch(u, ud, d) and os.path.exists(ud.md5):
# If someone else fetched this before we got the lock,
# notice and don't try again
try:
os.utime(ud.md5, None)
except:
# Errors aren't fatal here
pass
bb.utils.unlockfile(lf)
continue
m.go(u, ud, d)
if ud.localfile:
if not m.forcefetch(u, ud, d):
Fetch.write_md5sum(u, ud, d)
bb.utils.unlockfile(lf)
if not ud.localfile:
continue
lf = bb.utils.lockfile(ud.lockfile)
if m.try_premirror(u, ud, d):
# First try fetching uri, u, from PREMIRRORS
mirrors = mirror_from_string(d.getVar('PREMIRRORS', True))
localpath = try_mirrors(d, u, mirrors, False, m.forcefetch(u, ud, d))
elif os.path.exists(ud.localfile):
localpath = ud.localfile
# Need to re-test forcefetch() which will return true if our copy is too old
if m.forcefetch(u, ud, d) or not localpath:
# Next try fetching from the original uri, u
try:
m.go(u, ud, d)
localpath = ud.localpath
except FetchError:
# Remove any incomplete file
bb.utils.remove(ud.localpath)
# Finally, try fetching uri, u, from MIRRORS
mirrors = mirror_from_string(d.getVar('MIRRORS', True))
localpath = try_mirrors (d, u, mirrors)
if not localpath or not os.path.exists(localpath):
raise FetchError("Unable to fetch URL %s from any source." % u)
ud.localpath = localpath
if os.path.exists(ud.md5):
# Touch the md5 file to show active use of the download
try:
os.utime(ud.md5, None)
except:
# Errors aren't fatal here
pass
else:
# Only check the checksums if we've not seen this item before
verify_checksum(u, ud, d)
Fetch.write_md5sum(u, ud, d)
bb.utils.unlockfile(lf)
def checkstatus(d, urls = None):
def checkstatus(d):
"""
Check all urls exist upstream
init must have previously been called
"""
urldata = init([], d, True)
if not urls:
urls = urldata
for u in urls:
for u in urldata:
ud = urldata[u]
m = ud.method
logger.debug(1, "Testing URL %s", u)
# First try checking uri, u, from PREMIRRORS
mirrors = mirror_from_string(d.getVar('PREMIRRORS', True))
ret = try_mirrors(d, u, mirrors, True)
bb.msg.note(1, bb.msg.domain.Fetcher, "Testing URL %s" % u)
ret = m.checkstatus(u, ud, d)
if not ret:
# Next try checking from the original uri, u
try:
ret = m.checkstatus(u, ud, d)
except:
# Finally, try checking uri, u, from MIRRORS
mirrors = mirror_from_string(d.getVar('MIRRORS', True))
ret = try_mirrors (d, u, mirrors, True)
if not ret:
raise FetchError("URL %s doesn't work" % u)
bb.msg.fatal(bb.msg.domain.Fetcher, "URL %s doesn't work" % u)
def localpaths(d):
"""
@@ -349,30 +189,27 @@ def localpaths(d):
urldata = init([], d, True)
for u in urldata:
ud = urldata[u]
local.append(ud.localpath)
return local
srcrev_internal_call = False
def get_autorev(d):
return get_srcrev(d)
def get_srcrev(d):
"""
Return the version string for the current package
(usually to be used as PV)
Most packages usually only have one SCM so we just pass on the call.
In the multi SCM case, we build a value based on SRCREV_FORMAT which must
have been set.
"""
#
# Ugly code alert. localpath in the fetchers will try to evaluate SRCREV which
# could translate into a call to here. If it does, we need to catch this
# and provide some way so it knows get_srcrev is active instead of being
# some number etc. hence the srcrev_internal_call tracking and the magic
# "SRCREVINACTION" return value.
#
# Neater solutions welcome!
@@ -382,31 +219,28 @@ def get_srcrev(d):
scms = []
# Only call setup_localpath on URIs which supports_srcrev()
urldata = init(d.getVar('SRC_URI', 1).split(), d, False)
# Only call setup_localpath on URIs which suppports_srcrev()
urldata = init(bb.data.getVar('SRC_URI', d, 1).split(), d, False)
for u in urldata:
ud = urldata[u]
if ud.method.supports_srcrev():
if ud.method.suppports_srcrev():
if not ud.setup:
ud.setup_localpath(d)
scms.append(u)
if len(scms) == 0:
logger.error("SRCREV was used yet no valid SCM was found in SRC_URI")
bb.msg.error(bb.msg.domain.Fetcher, "SRCREV was used yet no valid SCM was found in SRC_URI")
raise ParameterError
if d.getVar('BB_SRCREV_POLICY', True) != "cache":
d.setVar('__BB_DONT_CACHE', '1')
if len(scms) == 1:
return urldata[scms[0]].method.sortable_revision(scms[0], urldata[scms[0]], d)
#
# Multiple SCMs are in SRC_URI so we resort to SRCREV_FORMAT
#
format = d.getVar('SRCREV_FORMAT', 1)
format = bb.data.getVar('SRCREV_FORMAT', d, 1)
if not format:
logger.error("The SRCREV_FORMAT variable must be set when multiple SCMs are used.")
bb.msg.error(bb.msg.domain.Fetcher, "The SRCREV_FORMAT variable must be set when multiple SCMs are used.")
raise ParameterError
for scm in scms:
@@ -419,7 +253,7 @@ def get_srcrev(d):
def localpath(url, d, cache = True):
"""
Called from the parser with cache=False since the cache isn't ready
at this point. Also called from classes in OE e.g. patch.bbclass
"""
ud = init([url], d)
@@ -438,31 +272,28 @@ def runfetchcmd(cmd, d, quiet = False):
# rather than host provided
# Also include some other variables.
# FIXME: Should really include all export variables?
exportvars = ['PATH', 'GIT_PROXY_COMMAND', 'GIT_PROXY_HOST',
'GIT_PROXY_PORT', 'GIT_CONFIG', 'http_proxy', 'ftp_proxy',
'https_proxy', 'no_proxy', 'ALL_PROXY', 'all_proxy',
'KRB5CCNAME', 'SSH_AUTH_SOCK', 'SSH_AGENT_PID', 'HOME']
exportvars = ['PATH', 'GIT_PROXY_HOST', 'GIT_PROXY_PORT', 'GIT_PROXY_COMMAND']
for var in exportvars:
val = data.getVar(var, d, True)
if val:
cmd = 'export ' + var + '=\"%s\"; %s' % (val, cmd)
cmd = 'export ' + var + '=%s; %s' % (val, cmd)
logger.debug(1, "Running %s", cmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % cmd)
# redirect stderr to stdout
stdout_handle = os.popen(cmd + " 2>&1", "r")
output = ""
while True:
while 1:
line = stdout_handle.readline()
if not line:
break
if not quiet:
print(line, end=' ')
print line,
output += line
status = stdout_handle.close() or 0
signal = status >> 8
exitstatus = status & 0xff
@@ -473,75 +304,16 @@ def runfetchcmd(cmd, d, quiet = False):
return output
def try_mirrors(d, uri, mirrors, check = False, force = False):
"""
Try to use a mirrored version of the sources.
This method will be automatically called before the fetchers go.
d Is a bb.data instance
uri is the original uri we're trying to download
mirrors is the list of mirrors we're going to try
"""
fpath = os.path.join(data.getVar("DL_DIR", d, 1), os.path.basename(uri))
if not check and os.access(fpath, os.R_OK) and not force:
logger.debug(1, "%s already exists, skipping checkout.", fpath)
return fpath
ld = d.createCopy()
for (find, replace) in mirrors:
newuri = uri_replace(uri, find, replace, ld)
if newuri != uri:
try:
ud = FetchData(newuri, ld)
except bb.fetch.NoMethodError:
logger.debug(1, "No method for %s", uri)
continue
ud.setup_localpath(ld)
try:
if check:
found = ud.method.checkstatus(newuri, ud, ld)
if found:
return found
else:
ud.method.go(newuri, ud, ld)
return ud.localpath
except (bb.fetch.MissingParameterError,
bb.fetch.FetchError,
bb.fetch.MD5SumError):
import sys
(type, value, traceback) = sys.exc_info()
logger.debug(2, "Mirror fetch failure: %s", value)
bb.utils.remove(ud.localpath)
continue
return None
class FetchData(object):
"""
A class which represents the fetcher state for a given URI.
"""
def __init__(self, url, d):
self.localfile = ""
(self.type, self.host, self.path, self.user, self.pswd, self.parm) = decodeurl(data.expand(url, d))
(self.type, self.host, self.path, self.user, self.pswd, self.parm) = bb.decodeurl(data.expand(url, d))
self.date = Fetch.getSRCDate(self, d)
self.url = url
if not self.user and "user" in self.parm:
self.user = self.parm["user"]
if not self.pswd and "pswd" in self.parm:
self.pswd = self.parm["pswd"]
self.setup = False
if "name" in self.parm:
self.md5_name = "%s.md5sum" % self.parm["name"]
self.sha256_name = "%s.sha256sum" % self.parm["name"]
else:
self.md5_name = "md5sum"
self.sha256_name = "sha256sum"
self.md5_expected = d.getVarFlag("SRC_URI", self.md5_name)
self.sha256_expected = d.getVarFlag("SRC_URI", self.sha256_name)
for m in methods:
if m.supports(url, self, d):
self.method = m
@@ -553,36 +325,15 @@ class FetchData(object):
if "localpath" in self.parm:
# if user sets localpath for file, use it instead.
self.localpath = self.parm["localpath"]
self.basename = os.path.basename(self.localpath)
else:
premirrors = d.getVar('PREMIRRORS', True)
local = ""
if premirrors and self.url:
aurl = self.url.split(";")[0]
mirrors = mirror_from_string(premirrors)
for (find, replace) in mirrors:
if replace.startswith("file://"):
path = aurl.split("://")[1]
path = path.split(";")[0]
local = replace.split("://")[1] + os.path.basename(path)
if local == aurl or not os.path.exists(local) or os.path.isdir(local):
local = ""
self.localpath = local
if not local:
try:
bb.fetch.srcrev_internal_call = True
self.localpath = self.method.localpath(self.url, self, d)
finally:
bb.fetch.srcrev_internal_call = False
# We have to clear data's internal caches since the cached value of SRCREV is now wrong.
# Horrible...
bb.data.delVar("ISHOULDNEVEREXIST", d)
if self.localpath is not None:
# Note: These files should always be in DL_DIR whereas localpath may not be.
basepath = bb.data.expand("${DL_DIR}/%s" % os.path.basename(self.localpath), d)
self.md5 = basepath + '.md5'
self.lockfile = basepath + '.lock'
bb.fetch.srcrev_internal_call = True
self.localpath = self.method.localpath(self.url, self, d)
bb.fetch.srcrev_internal_call = False
# We have to clear data's internal caches since the cached value of SRCREV is now wrong.
# Horrible...
bb.data.delVar("ISHOULDNEVEREXIST", d)
self.md5 = self.localpath + '.md5'
self.lockfile = self.localpath + '.lock'
class Fetch(object):
@@ -600,17 +351,10 @@ class Fetch(object):
def localpath(self, url, urldata, d):
"""
Return the local filename of a given url assuming a successful fetch.
Can also set up variables in urldata for use in go (saving code duplication
and duplicate code execution)
"""
return url
def _strip_leading_slashes(self, relpath):
"""
Remove leading slash as os.path.join can't cope
"""
while os.path.isabs(relpath):
relpath = relpath[1:]
return relpath
def setUrls(self, urls):
self.__urls = urls
@@ -626,7 +370,7 @@ class Fetch(object):
"""
return False
def supports_srcrev(self):
def suppports_srcrev(self):
"""
The fetcher supports auto source revisions (SRCREV)
"""
@@ -639,23 +383,12 @@ class Fetch(object):
"""
raise NoMethodError("Missing implementation for url")
def try_premirror(self, url, urldata, d):
"""
Should premirrors be used?
"""
if urldata.method.forcefetch(url, urldata, d):
return True
elif os.path.exists(urldata.md5) and os.path.exists(urldata.localfile):
return False
else:
return True
def checkstatus(self, url, urldata, d):
"""
Check the status of a URL
Assumes localpath was called first
"""
logger.info("URL %s could not be checked for status since no method exists.", url)
bb.msg.note(1, bb.msg.domain.Fetcher, "URL %s could not be checked for status since no method exists." % url)
return True
def getSRCDate(urldata, d):
@@ -679,8 +412,8 @@ class Fetch(object):
"""
Return:
a) a source revision if specified
b) True if auto srcrev is in action
c) False otherwise
"""
if 'rev' in ud.parm:
@@ -692,45 +425,58 @@ class Fetch(object):
rev = None
if 'name' in ud.parm:
pn = data.getVar("PN", d, 1)
rev = data.getVar("SRCREV_%s_pn-%s" % (ud.parm['name'], pn), d, 1)
if not rev:
rev = data.getVar("SRCREV_pn-%s_%s" % (pn, ud.parm['name']), d, 1)
if not rev:
rev = data.getVar("SRCREV_%s" % (ud.parm['name']), d, 1)
rev = data.getVar("SRCREV_pn-" + pn + "_" + ud.parm['name'], d, 1)
if not rev:
rev = data.getVar("SRCREV", d, 1)
if rev == "INVALID":
raise InvalidSRCREV("Please set SRCREV to a valid value")
if not rev:
return False
if rev == "SRCREVINACTION":
if rev is "SRCREVINACTION":
return True
return rev
srcrev_internal_helper = staticmethod(srcrev_internal_helper)
def localcount_internal_helper(ud, d):
def try_mirror(d, tarfn):
"""
Return:
a) a locked localcount if specified
b) None otherwise
Try to use a mirrored version of the sources. We do this
to avoid massive loads on foreign cvs and svn servers.
This method will be used by the different fetcher
implementations.
d Is a bb.data instance
tarfn is the name of the tarball
"""
tarpath = os.path.join(data.getVar("DL_DIR", d, 1), tarfn)
if os.access(tarpath, os.R_OK):
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists, skipping checkout." % tarfn)
return True
localcount = None
if 'name' in ud.parm:
pn = data.getVar("PN", d, 1)
localcount = data.getVar("LOCALCOUNT_" + ud.parm['name'], d, 1)
if not localcount:
localcount = data.getVar("LOCALCOUNT", d, 1)
return localcount
pn = data.getVar('PN', d, True)
src_tarball_stash = None
if pn:
src_tarball_stash = (data.getVar('SRC_TARBALL_STASH_%s' % pn, d, True) or data.getVar('CVS_TARBALL_STASH_%s' % pn, d, True) or data.getVar('SRC_TARBALL_STASH', d, True) or data.getVar('CVS_TARBALL_STASH', d, True) or "").split()
localcount_internal_helper = staticmethod(localcount_internal_helper)
for stash in src_tarball_stash:
fetchcmd = data.getVar("FETCHCOMMAND_mirror", d, True) or data.getVar("FETCHCOMMAND_wget", d, True)
uri = stash + tarfn
bb.msg.note(1, bb.msg.domain.Fetcher, "fetch " + uri)
fetchcmd = fetchcmd.replace("${URI}", uri)
ret = os.system(fetchcmd)
if ret == 0:
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetched %s from tarball stash, skipping checkout" % tarfn)
return True
return False
try_mirror = staticmethod(try_mirror)
def verify_md5sum(ud, got_sum):
"""
Verify the md5sum we wanted with the one we got
"""
wanted_sum = ud.parm.get('md5sum')
wanted_sum = None
if 'md5sum' in ud.parm:
wanted_sum = ud.parm['md5sum']
if not wanted_sum:
return True
@@ -755,68 +501,53 @@ class Fetch(object):
if not hasattr(self, "_latest_revision"):
raise ParameterError
revs = persist_data.persist('BB_URI_HEADREVS', d)
key = self.generate_revision_key(url, ud, d)
try:
return revs[key]
except KeyError:
revs[key] = rev = self._latest_revision(url, ud, d)
return rev
pd = persist_data.PersistData(d)
key = self._revision_key(url, ud, d)
rev = pd.getValue("BB_URI_HEADREVS", key)
if rev != None:
return str(rev)
rev = self._latest_revision(url, ud, d)
pd.setValue("BB_URI_HEADREVS", key, rev)
return rev
def sortable_revision(self, url, ud, d):
"""
"""
if hasattr(self, "_sortable_revision"):
return self._sortable_revision(url, ud, d)
localcounts = persist_data.persist('BB_URI_LOCALCOUNT', d)
key = self.generate_revision_key(url, ud, d)
pd = persist_data.PersistData(d)
key = self._revision_key(url, ud, d)
latest_rev = self._build_revision(url, ud, d)
last_rev = localcounts.get(key + '_rev')
uselocalcount = d.getVar("BB_LOCALCOUNT_OVERRIDE", True) or False
count = None
if uselocalcount:
count = Fetch.localcount_internal_helper(ud, d)
if count is None:
count = localcounts.get(key + '_count')
last_rev = pd.getValue("BB_URI_LOCALCOUNT", key + "_rev")
count = pd.getValue("BB_URI_LOCALCOUNT", key + "_count")
if last_rev == latest_rev:
return str(count + "+" + latest_rev)
buildindex_provided = hasattr(self, "_sortable_buildindex")
if buildindex_provided:
count = self._sortable_buildindex(url, ud, d, latest_rev)
if count is None:
count = "0"
elif uselocalcount or buildindex_provided:
count = str(count)
else:
count = str(int(count) + 1)
localcounts[key + '_rev'] = latest_rev
localcounts[key + '_count'] = count
pd.setValue("BB_URI_LOCALCOUNT", key + "_rev", latest_rev)
pd.setValue("BB_URI_LOCALCOUNT", key + "_count", count)
return str(count + "+" + latest_rev)
def generate_revision_key(self, url, ud, d):
key = self._revision_key(url, ud, d)
return "%s-%s" % (key, d.getVar("PN", True) or "")
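# Illustration of the scheme above: sortable_revision returns
# "<count>+<rev>", where the persisted counter is bumped each time the
# SCM's head revision moves. So if the cache holds count "2" for a key
# and the head changes to "abc123", the result becomes "3+abc123" - the
# counter only ever increases, giving PV a component that moves forward
# even though SCM hashes themselves don't sort.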
from . import cvs
from . import git
from . import local
from . import svn
from . import wget
from . import svk
from . import ssh
from . import perforce
from . import bzr
from . import hg
from . import osc
from . import repo
import cvs
import git
import local
import svn
import wget
import svk
import ssh
import perforce
import bzr
import hg
methods.append(local.Local())
methods.append(wget.Wget())
@@ -828,5 +559,3 @@ methods.append(ssh.SSH())
methods.append(perforce.Perforce())
methods.append(bzr.Bzr())
methods.append(hg.Hg())
methods.append(osc.Osc())
methods.append(repo.Repo())

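Taken together, go() above tries PREMIRRORS first, then the original upstream URI, then MIRRORS, keeping the first source that succeeds. A compressed sketch of that fallback ladder (fetch_with_fallbacks and its fetch callback are illustrative, not BitBake API; failures are modelled as IOError):

def fetch_with_fallbacks(uri, premirrors, mirrors, fetch):
    # Try each candidate in order; the first success wins.
    for candidate in premirrors + [uri] + mirrors:
        try:
            return fetch(candidate)
        except IOError:
            continue
    raise IOError("Unable to fetch URL %s from any source." % uri)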
View File

@@ -25,10 +25,12 @@ BitBake 'Fetch' implementation for bzr.
import os
import sys
import logging
import bb
from bb import data
from bb.fetch import Fetch, FetchError, runfetchcmd, logger
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd
class Bzr(Fetch):
def supports(self, url, ud, d):
@@ -37,20 +39,23 @@ class Bzr(Fetch):
def localpath (self, url, ud, d):
# Create paths to bzr checkouts
relpath = self._strip_leading_slashes(ud.path)
relpath = ud.path
if relpath.startswith('/'):
# Remove leading slash as os.path.join can't cope
relpath = relpath[1:]
ud.pkgdir = os.path.join(data.expand('${BZRDIR}', d), ud.host, relpath)
revision = Fetch.srcrev_internal_helper(ud, d)
if revision is True:
ud.revision = self.latest_revision(url, ud, d)
elif revision:
ud.revision = revision
if not ud.revision:
ud.revision = self.latest_revision(url, ud, d)
ud.localfile = data.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision), d)
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def _buildbzrcommand(self, ud, d, command):
@@ -61,21 +66,23 @@ class Bzr(Fetch):
basecmd = data.expand('${FETCHCMD_bzr}', d)
proto = ud.parm.get('proto', 'http')
proto = "http"
if "proto" in ud.parm:
proto = ud.parm["proto"]
bzrroot = ud.host + ud.path
options = []
if command == "revno":
if command is "revno":
bzrcmd = "%s revno %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
else:
if ud.revision:
options.append("-r %s" % ud.revision)
if command == "fetch":
if command is "fetch":
bzrcmd = "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
elif command == "update":
elif command is "update":
bzrcmd = "%s pull %s --overwrite" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid bzr command %s" % command)
@@ -85,31 +92,29 @@ class Bzr(Fetch):
def go(self, loc, ud, d):
"""Fetch url"""
# try to use the tarball stash
if Fetch.try_mirror(d, ud.localfile):
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping bzr checkout." % ud.localpath)
return
if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
bzrcmd = self._buildbzrcommand(ud, d, "update")
logger.debug(1, "BZR Update %s", loc)
bb.msg.debug(1, bb.msg.domain.Fetcher, "BZR Update %s" % loc)
os.chdir(os.path.join (ud.pkgdir, os.path.basename(ud.path)))
runfetchcmd(bzrcmd, d)
else:
bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
os.system("rm -rf %s" % os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)))
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
logger.debug(1, "BZR Checkout %s", loc)
bb.utils.mkdirhier(ud.pkgdir)
bb.msg.debug(1, bb.msg.domain.Fetcher, "BZR Checkout %s" % loc)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", bzrcmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % bzrcmd)
runfetchcmd(bzrcmd, d)
os.chdir(ud.pkgdir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.bzr' --exclude '.bzrtags'"
# tar them up to a defined filename
try:
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(ud.pkgdir)), d)
runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.basename(ud.pkgdir)), d)
except:
t, v, tb = sys.exc_info()
try:
@@ -118,7 +123,7 @@ class Bzr(Fetch):
pass
raise t, v, tb
def supports_srcrev(self):
def suppports_srcrev(self):
return True
def _revision_key(self, url, ud, d):
@@ -131,7 +136,7 @@ class Bzr(Fetch):
"""
Return the latest upstream revision number
"""
logger.debug(2, "BZR fetcher hitting network for %s", url)
bb.msg.debug(2, bb.msg.domain.Fetcher, "BZR fetcher hitting network for %s" % url)
output = runfetchcmd(self._buildbzrcommand(ud, d, "revno"), d, True)
@@ -146,3 +151,4 @@ class Bzr(Fetch):
def _build_revision(self, url, ud, d):
return ud.revision

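For reference, a minimal SRC_URI sketch for the bzr fetcher above, built only from the parameters its code parses (proto, scmdata); the host and path are hypothetical:

    # proto defaults to "http"; scmdata=keep stops .bzr being excluded from the tarball
    SRC_URI = "bzr://bzr.example.org/trunk/myproject;proto=http"
    SRC_URI = "bzr://bzr.example.org/trunk/myproject;scmdata=keep"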

@@ -26,11 +26,12 @@ BitBake build tools.
#Based on functions from the base bb module, Copyright 2003 Holger Schurig
#
import os
import logging
import os, re
import bb
from bb import data
from bb.fetch import Fetch, FetchError, MissingParameterError, logger
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
class Cvs(Fetch):
"""
@@ -40,14 +41,16 @@ class Cvs(Fetch):
"""
Check to see if a given url can be fetched with cvs.
"""
return ud.type in ['cvs']
return ud.type in ['cvs', 'pserver']
def localpath(self, url, ud, d):
if not "module" in ud.parm:
raise MissingParameterError("cvs method needs a 'module' parameter")
ud.module = ud.parm["module"]
ud.tag = ud.parm.get('tag', "")
ud.tag = ""
if 'tag' in ud.parm:
ud.tag = ud.parm['tag']
# Override the default date in certain cases
if 'date' in ud.parm:
@@ -74,9 +77,22 @@ class Cvs(Fetch):
def go(self, loc, ud, d):
method = ud.parm.get('method', 'pserver')
localdir = ud.parm.get('localdir', ud.module)
cvs_port = ud.parm.get('port', '')
# try to use the tarball stash
if not self.forcefetch(loc, ud, d) and Fetch.try_mirror(d, ud.localfile):
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping cvs checkout." % ud.localpath)
return
method = "pserver"
if "method" in ud.parm:
method = ud.parm["method"]
localdir = ud.module
if "localdir" in ud.parm:
localdir = ud.parm["localdir"]
cvs_port = ""
if "port" in ud.parm:
cvs_port = ud.parm["port"]
cvs_rsh = None
if method == "ext":
@@ -102,11 +118,7 @@ class Cvs(Fetch):
if 'norecurse' in ud.parm:
options.append("-l")
if ud.date:
# treat YYYYMMDDHHMM specially for CVS
if len(ud.date) == 12:
options.append("-D \"%s %s:%s UTC\"" % (ud.date[0:8], ud.date[8:10], ud.date[10:12]))
else:
options.append("-D \"%s UTC\"" % ud.date)
options.append("-D \"%s UTC\"" % ud.date)
if ud.tag:
options.append("-r %s" % ud.tag)
@@ -125,44 +137,38 @@ class Cvs(Fetch):
cvsupdatecmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvsupdatecmd)
# create module directory
logger.debug(2, "Fetch: checking for module directory")
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory")
pkg = data.expand('${PN}', d)
pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)
moddir = os.path.join(pkgdir, localdir)
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
logger.info("Update " + loc)
moddir = os.path.join(pkgdir,localdir)
if os.access(os.path.join(moddir,'CVS'), os.R_OK):
bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
# update sources there
os.chdir(moddir)
myret = os.system(cvsupdatecmd)
else:
logger.info("Fetch " + loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(pkgdir)
bb.mkdirhier(pkgdir)
os.chdir(pkgdir)
logger.debug(1, "Running %s", cvscmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % cvscmd)
myret = os.system(cvscmd)
if myret != 0 or not os.access(moddir, os.R_OK):
try:
os.rmdir(moddir)
except OSError:
pass
pass
raise FetchError(ud.module)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude 'CVS'"
# tar them up to a defined filename
if 'fullpath' in ud.parm:
os.chdir(pkgdir)
myret = os.system("tar %s -czf %s %s" % (tar_flags, ud.localpath, localdir))
myret = os.system("tar -czf %s %s" % (ud.localpath, localdir))
else:
os.chdir(moddir)
os.chdir('..')
myret = os.system("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(moddir)))
myret = os.system("tar -czf %s %s" % (ud.localpath, os.path.basename(moddir)))
if myret != 0:
try:

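A sketch of SRC_URIs the cvs fetcher above accepts, based on the parameters it reads (module is required; method defaults to pserver); hosts and module names are hypothetical:

    SRC_URI = "cvs://anonymous@cvs.example.org/cvsroot;module=myproject;tag=RELEASE_1_0"
    # the "ext" method goes through rsh/ssh; date pins the checkout
    SRC_URI = "cvs://user@cvs.example.org/cvsroot;module=myproject;method=ext;rsh=ssh;date=20090301"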

@@ -20,22 +20,15 @@ BitBake 'Fetch' git implementation
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import os, re
import bb
import bb.persist_data
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import runfetchcmd
from bb.fetch import logger
class Git(Fetch):
"""Class to fetch a module or modules from git repositories"""
def init(self, d):
#
# Only enable _sortable revision if the key is set
#
if d.getVar("BB_GIT_CLONE_FOR_SRCREV", True):
self._sortable_buildindex = self._sortable_buildindex_disabled
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with git.
@@ -44,296 +37,97 @@ class Git(Fetch):
def localpath(self, url, ud, d):
ud.proto = "rsync"
if 'protocol' in ud.parm:
ud.proto = ud.parm['protocol']
elif not ud.host:
ud.proto = 'file'
else:
ud.proto = "rsync"
ud.branch = ud.parm.get("branch", "master")
gitsrcname = '%s%s' % (ud.host, ud.path.replace('/', '.'))
ud.mirrortarball = 'git_%s.tar.gz' % (gitsrcname)
ud.clonedir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)
tag = Fetch.srcrev_internal_helper(ud, d)
if tag is True:
ud.tag = self.latest_revision(url, ud, d)
ud.tag = self.latest_revision(url, ud, d)
elif tag:
ud.tag = tag
if not ud.tag or ud.tag == "master":
if not ud.tag:
ud.tag = self.latest_revision(url, ud, d)
if ud.tag == "master":
ud.tag = self.latest_revision(url, ud, d)
subdir = ud.parm.get("subpath", "")
if subdir != "":
if subdir.endswith("/"):
subdir = subdir[:-1]
subdirpath = os.path.join(ud.path, subdir);
else:
subdirpath = ud.path;
if 'fullclone' in ud.parm:
ud.localfile = ud.mirrortarball
else:
ud.localfile = data.expand('git_%s%s_%s.tar.gz' % (ud.host, subdirpath.replace('/', '.'), ud.tag), d)
ud.basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
if 'noclone' in ud.parm:
ud.localfile = None
return None
ud.localfile = data.expand('git_%s%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.tag), d)
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def forcefetch(self, url, ud, d):
if 'fullclone' in ud.parm:
return True
if 'noclone' in ud.parm:
return False
if os.path.exists(ud.localpath):
return False
if not self._contains_ref(ud.tag, d):
return True
return False
def try_premirror(self, u, ud, d):
if 'noclone' in ud.parm:
return False
if os.path.exists(ud.clonedir):
return False
if os.path.exists(ud.localpath):
return False
return True
def go(self, loc, ud, d):
"""Fetch url"""
if ud.user:
username = ud.user + '@'
else:
username = ""
if Fetch.try_mirror(d, ud.localfile):
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists (or was stashed). Skipping git checkout." % ud.localpath)
return
repofile = os.path.join(data.getVar("DL_DIR", d, 1), ud.mirrortarball)
gitsrcname = '%s%s' % (ud.host, ud.path.replace('/', '.'))
repofilename = 'git_%s.tar.gz' % (gitsrcname)
repofile = os.path.join(data.getVar("DL_DIR", d, 1), repofilename)
repodir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)
coname = '%s' % (ud.tag)
codir = os.path.join(ud.clonedir, coname)
codir = os.path.join(repodir, coname)
# If we have no existing clone and no mirror tarball, try and obtain one
if not os.path.exists(ud.clonedir) and not os.path.exists(repofile):
try:
Fetch.try_mirrors(ud.mirrortarball)
except:
pass
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.clonedir) and os.path.exists(repofile):
bb.utils.mkdirhier(ud.clonedir)
os.chdir(ud.clonedir)
runfetchcmd("tar -xzf %s" % (repofile), d)
# If the repo still doesn't exist, fallback to cloning it
if not os.path.exists(ud.clonedir):
runfetchcmd("%s clone -n %s://%s%s%s %s" % (ud.basecmd, ud.proto, username, ud.host, ud.path, ud.clonedir), d)
os.chdir(ud.clonedir)
# Update the checkout if needed
if not self._contains_ref(ud.tag, d) or 'fullclone' in ud.parm:
# Remove all but the .git directory
runfetchcmd("rm * -Rf", d)
if 'fullclone' in ud.parm:
runfetchcmd("%s fetch --all" % (ud.basecmd), d)
if not os.path.exists(repodir):
if Fetch.try_mirror(d, repofilename):
bb.mkdirhier(repodir)
os.chdir(repodir)
runfetchcmd("tar -xzf %s" % (repofile), d)
else:
runfetchcmd("%s fetch %s://%s%s%s %s" % (ud.basecmd, ud.proto, username, ud.host, ud.path, ud.branch), d)
runfetchcmd("%s fetch --tags %s://%s%s%s" % (ud.basecmd, ud.proto, username, ud.host, ud.path), d)
runfetchcmd("%s prune-packed" % ud.basecmd, d)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)
runfetchcmd("git clone -n %s://%s%s %s" % (ud.proto, ud.host, ud.path, repodir), d)
# Generate a mirror tarball if needed
os.chdir(ud.clonedir)
os.chdir(repodir)
# Remove all but the .git directory
runfetchcmd("rm * -Rf", d)
runfetchcmd("git fetch %s://%s%s %s" % (ud.proto, ud.host, ud.path, ud.branch), d)
runfetchcmd("git fetch --tags %s://%s%s" % (ud.proto, ud.host, ud.path), d)
runfetchcmd("git prune-packed", d)
runfetchcmd("git pack-redundant --all | xargs -r rm", d)
os.chdir(repodir)
mirror_tarballs = data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True)
if mirror_tarballs != "0" or 'fullclone' in ud.parm:
logger.info("Creating tarball of git repository")
if mirror_tarballs != "0":
bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git repository")
runfetchcmd("tar -czf %s %s" % (repofile, os.path.join(".", ".git", "*") ), d)
if 'fullclone' in ud.parm:
return
if os.path.exists(codir):
bb.utils.prunedir(codir)
subdir = ud.parm.get("subpath", "")
if subdir != "":
if subdir.endswith("/"):
subdirbase = os.path.basename(subdir[:-1])
else:
subdirbase = os.path.basename(subdir)
else:
subdirbase = ""
if subdir != "":
readpathspec = ":%s" % (subdir)
codir = os.path.join(codir, "git")
coprefix = os.path.join(codir, subdirbase, "")
else:
readpathspec = ""
coprefix = os.path.join(codir, "git", "")
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
runfetchcmd("%s clone -n %s %s" % (ud.basecmd, ud.clonedir, coprefix), d)
os.chdir(coprefix)
runfetchcmd("%s checkout -q -f %s%s" % (ud.basecmd, ud.tag, readpathspec), d)
else:
bb.utils.mkdirhier(codir)
os.chdir(ud.clonedir)
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.tag, readpathspec), d)
runfetchcmd("%s checkout-index -q -f --prefix=%s -a" % (ud.basecmd, coprefix), d)
bb.mkdirhier(codir)
os.chdir(repodir)
runfetchcmd("git read-tree %s" % (ud.tag), d)
runfetchcmd("git checkout-index -q -f --prefix=%s -a" % (os.path.join(codir, "git", "")), d)
os.chdir(codir)
logger.info("Creating tarball of git checkout")
bb.msg.note(1, bb.msg.domain.Fetcher, "Creating tarball of git checkout")
runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.join(".", "*") ), d)
os.chdir(ud.clonedir)
os.chdir(repodir)
bb.utils.prunedir(codir)
def supports_srcrev(self):
def suppports_srcrev(self):
return True
def _contains_ref(self, tag, d):
basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
output = runfetchcmd("%s log --pretty=oneline -n 1 %s -- 2> /dev/null | wc -l" % (basecmd, tag), d, quiet=True)
return output.split()[0] != "0"
def _revision_key(self, url, ud, d, branch=False):
def _revision_key(self, url, ud, d):
"""
Return a unique key for the url
"""
key = 'git:' + ud.host + ud.path.replace('/', '.')
if branch:
return key + ud.branch
else:
return key
def generate_revision_key(self, url, ud, d, branch=False):
key = self._revision_key(url, ud, d, branch)
return "%s-%s" % (key, d.getVar("PN", True) or "")
return "git:" + ud.host + ud.path.replace('/', '.')
def _latest_revision(self, url, ud, d):
"""
Compute the HEAD revision for the url
"""
if ud.user:
username = ud.user + '@'
else:
username = ""
basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
cmd = "%s ls-remote %s://%s%s%s %s" % (basecmd, ud.proto, username, ud.host, ud.path, ud.branch)
output = runfetchcmd(cmd, d, True)
if not output:
raise bb.fetch.FetchError("Fetch command %s gave empty output\n" % (cmd))
output = runfetchcmd("git ls-remote %s://%s%s %s" % (ud.proto, ud.host, ud.path, ud.branch), d, True)
return output.split()[0]
def latest_revision(self, url, ud, d):
"""
Look in the cache for the latest revision, if not present ask the SCM.
"""
revs = bb.persist_data.persist('BB_URI_HEADREVS', d)
key = self.generate_revision_key(url, ud, d, branch=True)
try:
return revs[key]
except KeyError:
# Compatibility with old key format, no branch included
oldkey = self.generate_revision_key(url, ud, d, branch=False)
try:
rev = revs[oldkey]
except KeyError:
rev = self._latest_revision(url, ud, d)
else:
del revs[oldkey]
revs[key] = rev
return rev
def sortable_revision(self, url, ud, d):
"""
"""
localcounts = bb.persist_data.persist('BB_URI_LOCALCOUNT', d)
key = self.generate_revision_key(url, ud, d, branch=True)
oldkey = self.generate_revision_key(url, ud, d, branch=False)
latest_rev = self._build_revision(url, ud, d)
last_rev = localcounts.get(key + '_rev')
if last_rev is None:
last_rev = localcounts.get(oldkey + '_rev')
if last_rev is not None:
del localcounts[oldkey + '_rev']
localcounts[key + '_rev'] = last_rev
uselocalcount = d.getVar("BB_LOCALCOUNT_OVERRIDE", True) or False
count = None
if uselocalcount:
count = Fetch.localcount_internal_helper(ud, d)
if count is None:
count = localcounts.get(key + '_count')
if count is None:
count = localcounts.get(oldkey + '_count')
if count is not None:
del localcounts[oldkey + '_count']
localcounts[key + '_count'] = count
if last_rev == latest_rev:
return str(count + "+" + latest_rev)
buildindex_provided = hasattr(self, "_sortable_buildindex")
if buildindex_provided:
count = self._sortable_buildindex(url, ud, d, latest_rev)
if count is None:
count = "0"
elif uselocalcount or buildindex_provided:
count = str(count)
else:
count = str(int(count) + 1)
localcounts[key + '_rev'] = latest_rev
localcounts[key + '_count'] = count
return str(count + "+" + latest_rev)
def _build_revision(self, url, ud, d):
return ud.tag
def _sortable_buildindex_disabled(self, url, ud, d, rev):
"""
Return a suitable buildindex for the revision specified. This is done by counting revisions
using "git rev-list" which may or may not work in different circumstances.
"""
cwd = os.getcwd()
# Check if we have the rev already
if not os.path.exists(ud.clonedir):
print("no repo")
self.go(None, ud, d)
if not os.path.exists(ud.clonedir):
logger.error("GIT repository for %s doesn't exist in %s, cannot get sortable buildnumber, using old value", url, ud.clonedir)
return None
os.chdir(ud.clonedir)
if not self._contains_ref(rev, d):
self.go(None, ud, d)
output = runfetchcmd("%s rev-list %s -- 2> /dev/null | wc -l" % (ud.basecmd, rev), d, quiet=True)
os.chdir(cwd)
buildindex = "%s" % output.split()[0]
logger.debug(1, "GIT repository for %s in %s is returning %s revisions in rev-list before %s", url, ud.clonedir, buildindex, rev)
return buildindex

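For illustration, SRC_URI forms matching the parameters handled by the git fetcher above (protocol, branch, subpath, fullclone); the repository URL is hypothetical:

    SRC_URI = "git://git.example.org/myproject.git;protocol=git;branch=master"
    # package only a subdirectory of the tree
    SRC_URI = "git://git.example.org/myproject.git;protocol=git;subpath=src"
    # keep the full mirror tarball as the download itself
    SRC_URI = "git://git.example.org/myproject.git;protocol=git;fullclone=1"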

@@ -24,29 +24,23 @@ BitBake 'Fetch' implementation for mercurial DRCS (hg).
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import os, re
import sys
import logging
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd
from bb.fetch import logger
class Hg(Fetch):
"""Class to fetch from mercurial repositories"""
"""Class to fetch a from mercurial repositories"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with mercurial.
"""
return ud.type in ['hg']
def forcefetch(self, url, ud, d):
revTag = ud.parm.get('rev', 'tip')
return revTag == "tip"
def localpath(self, url, ud, d):
if not "module" in ud.parm:
raise MissingParameterError("hg method needs a 'module' parameter")
@@ -54,20 +48,15 @@ class Hg(Fetch):
ud.module = ud.parm["module"]
# Create paths to mercurial checkouts
relpath = self._strip_leading_slashes(ud.path)
relpath = ud.path
if relpath.startswith('/'):
# Remove leading slash as os.path.join can't cope
relpath = relpath[1:]
ud.pkgdir = os.path.join(data.expand('${HGDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
else:
tag = Fetch.srcrev_internal_helper(ud, d)
if tag is True:
ud.revision = self.latest_revision(url, ud, d)
elif tag:
ud.revision = tag
else:
ud.revision = self.latest_revision(url, ud, d)
ud.localfile = data.expand('%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)
@@ -81,14 +70,16 @@ class Hg(Fetch):
basecmd = data.expand('${FETCHCMD_hg}', d)
proto = ud.parm.get('proto', 'http')
proto = "http"
if "proto" in ud.parm:
proto = ud.parm["proto"]
host = ud.host
if proto == "file":
host = "/"
ud.host = "localhost"
if not ud.user:
if ud.user == None:
hgroot = host + ud.path
else:
hgroot = ud.user + "@" + host + ud.path
@@ -117,41 +108,36 @@ class Hg(Fetch):
def go(self, loc, ud, d):
"""Fetch url"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
# try to use the tarball stash
if Fetch.try_mirror(d, ud.localfile):
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping hg checkout." % ud.localpath)
return
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
updatecmd = self._buildhgcommand(ud, d, "pull")
logger.info("Update " + loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % updatecmd)
runfetchcmd(updatecmd, d)
updatecmd = self._buildhgcommand(ud, d, "update")
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % updatecmd)
runfetchcmd(updatecmd, d)
else:
fetchcmd = self._buildhgcommand(ud, d, "fetch")
logger.info("Fetch " + loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", fetchcmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % fetchcmd)
runfetchcmd(fetchcmd, d)
# Even when we clone (fetch), we still need to update as hg's clone
# won't checkout the specified revision if it's on a branch
updatecmd = self._buildhgcommand(ud, d, "update")
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
runfetchcmd(updatecmd, d)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.hg' --exclude '.hgrags'"
os.chdir(ud.pkgdir)
try:
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d)
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
except:
t, v, tb = sys.exc_info()
try:
@@ -159,22 +145,3 @@ class Hg(Fetch):
except OSError:
pass
raise t, v, tb
def supports_srcrev(self):
return True
def _latest_revision(self, url, ud, d):
"""
Compute tip revision for the url
"""
output = runfetchcmd(self._buildhgcommand(ud, d, "info"), d)
return output.strip()
def _build_revision(self, url, ud, d):
return ud.revision
def _revision_key(self, url, ud, d):
"""
Return a unique key for the url
"""
return "hg:" + ud.moddir

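A minimal, hypothetical SRC_URI for the mercurial fetcher above; module is required, while rev and proto are optional parameters read by the code shown:

    SRC_URI = "hg://hg.example.org/repos;module=myproject;rev=tip"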

@@ -25,18 +25,17 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import os, re
import bb
import bb.utils
from bb import data
from bb.fetch import Fetch
class Local(Fetch):
def supports(self, url, urldata, d):
"""
Check to see if a given url represents a local fetch.
Check to see if a given url can be fetched with cvs.
"""
return urldata.type in ['file']
return urldata.type in ['file','patch']
def localpath(self, url, urldata, d):
"""
@@ -48,7 +47,7 @@ class Local(Fetch):
if path[0] != "/":
filespath = data.getVar('FILESPATH', d, 1)
if filespath:
newpath = bb.utils.which(filespath, path)
newpath = bb.which(filespath, path)
if not newpath:
filesdir = data.getVar('FILESDIR', d, 1)
if filesdir:
@@ -66,8 +65,8 @@ class Local(Fetch):
Check the status of the url
"""
if urldata.localpath.find("*") != -1:
logger.info("URL %s looks like a glob and was therefore not checked.", url)
return True
bb.msg.note(1, bb.msg.domain.Fetcher, "URL %s looks like a glob and was therefore not checked." % url)
return True
if os.path.exists(urldata.localpath):
return True
return True
return False

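An illustrative use of the local fetcher above; relative paths are looked up via FILESPATH/FILESDIR as the code shows, and the file names are hypothetical:

    SRC_URI = "file://fix-build.patch"
    SRC_URI = "file:///opt/sources/myproject-1.0.tar.gz"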

@@ -1,143 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
Bitbake "Fetch" implementation for osc (Opensuse build service client).
Based on the svn "Fetch" implementation.
"""
import os
import sys
import logging
import bb
from bb import data
from bb import utils
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd
class Osc(Fetch):
"""Class to fetch a module or modules from Opensuse build server
repositories."""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with osc.
"""
return ud.type in ['osc']
def localpath(self, url, ud, d):
if not "module" in ud.parm:
raise MissingParameterError("osc method needs a 'module' parameter.")
ud.module = ud.parm["module"]
# Create paths to osc checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(data.expand('${OSCDIR}', d), ud.host)
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
else:
pv = data.getVar("PV", d, 0)
rev = Fetch.srcrev_internal_helper(ud, d)
if rev and rev != True:
ud.revision = rev
else:
ud.revision = ""
ud.localfile = data.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision), d)
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def _buildosccommand(self, ud, d, command):
"""
Build up an osc commandline based on ud
command is "fetch", "update", "info"
"""
basecmd = data.expand('${FETCHCMD_osc}', d)
proto = ud.parm.get('proto', 'ocs')
options = []
config = "-c %s" % self.generate_config(ud, d)
if ud.revision:
options.append("-r %s" % ud.revision)
coroot = self._strip_leading_slashes(ud.path)
if command is "fetch":
osccmd = "%s %s co %s/%s %s" % (basecmd, config, coroot, ud.module, " ".join(options))
elif command is "update":
osccmd = "%s %s up %s" % (basecmd, config, " ".join(options))
else:
raise FetchError("Invalid osc command %s" % command)
return osccmd
def go(self, loc, ud, d):
"""
Fetch url
"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(data.expand('${OSCDIR}', d), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ loc)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", oscupdatecmd)
runfetchcmd(oscupdatecmd, d)
else:
oscfetchcmd = self._buildosccommand(ud, d, "fetch")
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", oscfetchcmd)
runfetchcmd(oscfetchcmd, d)
os.chdir(os.path.join(ud.pkgdir + ud.path))
# tar them up to a defined filename
try:
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
except:
t, v, tb = sys.exc_info()
try:
os.unlink(ud.localpath)
except OSError:
pass
raise t, v, tb
def supports_srcrev(self):
return False
def generate_config(self, ud, d):
"""
Generate a .oscrc to be used for this run.
"""
config_path = os.path.join(data.expand('${OSCDIR}', d), "oscrc")
bb.utils.remove(config_path)
f = open(config_path, 'w')
f.write("[general]\n")
f.write("apisrv = %s\n" % ud.host)
f.write("scheme = http\n")
f.write("su-wrapper = su -c\n")
f.write("build-root = %s\n" % data.expand('${WORKDIR}', d))
f.write("urllist = http://moblin-obs.jf.intel.com:8888/build/%(project)s/%(repository)s/%(buildarch)s/:full/%(name)s.rpm\n")
f.write("extra-pkgs = gzip\n")
f.write("\n")
f.write("[%s]\n" % ud.host)
f.write("user = %s\n" % ud.parm["user"])
f.write("pass = %s\n" % ud.parm["pswd"])
f.close()
return config_path

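A sketch, with hypothetical names, of an SRC_URI for the osc fetcher above; module is required, and the user and pswd parameters are written into the generated oscrc:

    SRC_URI = "osc://api.example.org/ProjectName;module=mypackage;rev=5;user=myuser;pswd=secret"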

@@ -25,28 +25,26 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
from future_builtins import zip
import os
import logging
import os, re
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import logger
from bb.fetch import MissingParameterError
class Perforce(Fetch):
def supports(self, url, ud, d):
return ud.type in ['p4']
def doparse(url, d):
def doparse(url,d):
parm = {}
path = url.split("://")[1]
delim = path.find("@");
if delim != -1:
(user, pswd, host, port) = path.split('@')[0].split(":")
(user,pswd,host,port) = path.split('@')[0].split(":")
path = path.split('@')[1]
else:
(host, port) = data.getVar('P4PORT', d).split(':')
(host,port) = data.getVar('P4PORT', d).split(':')
user = ""
pswd = ""
@@ -56,28 +54,27 @@ class Perforce(Fetch):
plist = path.split(';')
for item in plist:
if item.count('='):
(key, value) = item.split('=')
(key,value) = item.split('=')
keys.append(key)
values.append(value)
parm = dict(zip(keys, values))
parm = dict(zip(keys,values))
path = "//" + path.split(';')[0]
host += ":%s" % (port)
parm["cset"] = Perforce.getcset(d, path, host, user, pswd, parm)
return host, path, user, pswd, parm
return host,path,user,pswd,parm
doparse = staticmethod(doparse)
def getcset(d, depot, host, user, pswd, parm):
p4opt = ""
def getcset(d, depot,host,user,pswd,parm):
if "cset" in parm:
return parm["cset"];
if user:
p4opt += " -u %s" % (user)
data.setVar('P4USER', user, d)
if pswd:
p4opt += " -P %s" % (pswd)
data.setVar('P4PASSWD', pswd, d)
if host:
p4opt += " -p %s" % (host)
data.setVar('P4PORT', host, d)
p4date = data.getVar("P4DATE", d, 1)
if "revision" in parm:
@@ -88,19 +85,19 @@ class Perforce(Fetch):
depot += "@%s" % (p4date)
p4cmd = data.getVar('FETCHCOMMAND_p4', d, 1)
logger.debug(1, "Running %s%s changes -m 1 %s", p4cmd, p4opt, depot)
p4file = os.popen("%s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s changes -m 1 %s" % (p4cmd, depot))
p4file = os.popen("%s changes -m 1 %s" % (p4cmd,depot))
cset = p4file.readline().strip()
logger.debug(1, "READ %s", cset)
bb.msg.debug(1, bb.msg.domain.Fetcher, "READ %s" % (cset))
if not cset:
return -1
return cset.split(' ')[1]
getcset = staticmethod(getcset)
def localpath(self, url, ud, d):
def localpath(self, url, ud, d):
(host, path, user, pswd, parm) = Perforce.doparse(url, d)
(host,path,user,pswd,parm) = Perforce.doparse(url,d)
# If a label is specified, we use that as our filename
@@ -113,11 +110,12 @@ class Perforce(Fetch):
if which != -1:
base = path[:which]
base = self._strip_leading_slashes(base)
if base[0] == "/":
base = base[1:]
cset = Perforce.getcset(d, path, host, user, pswd, parm)
ud.localfile = data.expand('%s+%s+%s.tar.gz' % (host, base.replace('/', '.'), cset), d)
ud.localfile = data.expand('%s+%s+%s.tar.gz' % (host,base.replace('/', '.'), cset), d)
return os.path.join(data.getVar("DL_DIR", d, 1), ud.localfile)
@@ -126,60 +124,67 @@ class Perforce(Fetch):
Fetch urls
"""
(host, depot, user, pswd, parm) = Perforce.doparse(loc, d)
# try to use the tarball stash
if Fetch.try_mirror(d, ud.localfile):
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping perforce checkout." % ud.localpath)
return
(host,depot,user,pswd,parm) = Perforce.doparse(loc, d)
if depot.find('/...') != -1:
path = depot[:depot.find('/...')]
else:
path = depot
module = parm.get('module', os.path.basename(path))
if "module" in parm:
module = parm["module"]
else:
module = os.path.basename(path)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "p4:%s" % data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
# Get the p4 command
p4opt = ""
if user:
p4opt += " -u %s" % (user)
data.setVar('P4USER', user, localdata)
if pswd:
p4opt += " -P %s" % (pswd)
data.setVar('P4PASSWD', pswd, localdata)
if host:
p4opt += " -p %s" % (host)
data.setVar('P4PORT', host, localdata)
p4cmd = data.getVar('FETCHCOMMAND', localdata, 1)
# create temp directory
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: creating temporary directory")
bb.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oep4.XXXXXX', localdata), localdata)
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, 1) or "false")
tmpfile = tmppipe.readline().strip()
if not tmpfile:
logger.error("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.")
bb.error("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.")
raise FetchError(module)
if "label" in parm:
depot = "%s@%s" % (depot, parm["label"])
depot = "%s@%s" % (depot,parm["label"])
else:
cset = Perforce.getcset(d, depot, host, user, pswd, parm)
depot = "%s@%s" % (depot, cset)
depot = "%s@%s" % (depot,cset)
os.chdir(tmpfile)
logger.info("Fetch " + loc)
logger.info("%s%s files %s", p4cmd, p4opt, depot)
p4file = os.popen("%s%s files %s" % (p4cmd, p4opt, depot))
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "%s files %s" % (p4cmd, depot))
p4file = os.popen("%s files %s" % (p4cmd, depot))
if not p4file:
logger.error("Fetch: unable to get the P4 files from %s", depot)
bb.error("Fetch: unable to get the P4 files from %s" % (depot))
raise FetchError(module)
count = 0
for file in p4file:
for file in p4file:
list = file.split()
if list[2] == "delete":
@@ -188,11 +193,11 @@ class Perforce(Fetch):
dest = list[0][len(path)+1:]
where = dest.find("#")
os.system("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module, dest[:where], list[0]))
os.system("%s print -o %s/%s %s" % (p4cmd, module,dest[:where],list[0]))
count = count + 1
if count == 0:
logger.error("Fetch: No files gathered from the P4 fetch")
bb.error("Fetch: No files gathered from the P4 fetch")
raise FetchError(module)
myret = os.system("tar -czf %s %s" % (ud.localpath, module))
@@ -203,4 +208,6 @@ class Perforce(Fetch):
pass
raise FetchError(module)
# cleanup
bb.utils.prunedir(tmpfile)
os.system('rm -rf %s' % tmpfile)

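Hypothetical SRC_URIs matching what doparse() above accepts: either a bare depot path with the port taken from P4PORT, or user:pswd:host:port inline before the '@':

    SRC_URI = "p4://depot/myproject/...;module=myproject"
    SRC_URI = "p4://user:passwd:p4.example.org:1666@depot/myproject/...;label=RELEASE_1_0"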

@@ -1,98 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake "Fetch" repo (git) implementation
"""
# Copyright (C) 2009 Tom Rini <trini@embeddedalley.com>
#
# Based on git.py which is:
#Copyright (C) 2005 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import runfetchcmd
class Repo(Fetch):
"""Class to fetch a module or modules from repo (git) repositories"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with repo.
"""
return ud.type in ["repo"]
def localpath(self, url, ud, d):
"""
We don"t care about the git rev of the manifests repository, but
we do care about the manifest to use. The default is "default".
We also care about the branch or tag to be used. The default is
"master".
"""
ud.proto = ud.parm.get('protocol', 'git')
ud.branch = ud.parm.get('branch', 'master')
ud.manifest = ud.parm.get('manifest', 'default.xml')
if not ud.manifest.endswith('.xml'):
ud.manifest += '.xml'
ud.localfile = data.expand("repo_%s%s_%s_%s.tar.gz" % (ud.host, ud.path.replace("/", "."), ud.manifest, ud.branch), d)
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def go(self, loc, ud, d):
"""Fetch url"""
if os.access(os.path.join(data.getVar("DL_DIR", d, True), ud.localfile), os.R_OK):
logger.debug(1, "%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
return
gitsrcname = "%s%s" % (ud.host, ud.path.replace("/", "."))
repodir = data.getVar("REPODIR", d, True) or os.path.join(data.getVar("DL_DIR", d, True), "repo")
codir = os.path.join(repodir, gitsrcname, ud.manifest)
if ud.user:
username = ud.user + "@"
else:
username = ""
bb.utils.mkdirhier(os.path.join(codir, "repo"))
os.chdir(os.path.join(codir, "repo"))
if not os.path.exists(os.path.join(codir, "repo", ".repo")):
runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d)
runfetchcmd("repo sync", d)
os.chdir(codir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.repo' --exclude '.git'"
# Create a cache
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.join(".", "*") ), d)
def supports_srcrev(self):
return False
def _build_revision(self, url, ud, d):
return ud.manifest
def _want_sortable_revision(self, url, ud, d):
return False

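For reference, a hypothetical SRC_URI for the repo fetcher above, spelling out its three parameters and their documented defaults (protocol=git, branch=master, manifest=default.xml):

    SRC_URI = "repo://android.example.org/platform/manifest;protocol=git;branch=release-1;manifest=default.xml"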

@@ -37,9 +37,11 @@ IETF secsh internet draft:
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import re, os
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
__pattern__ = re.compile(r'''
@@ -114,5 +116,5 @@ class SSH(Fetch):
(exitstatus, output) = commands.getstatusoutput(cmd)
if exitstatus != 0:
print(output)
print output
raise FetchError('Unable to fetch %s' % url)


@@ -25,20 +25,18 @@ This implementation is for svk. It is based on the svn implementation
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import logging
import os, re
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import logger
class Svk(Fetch):
"""Class to fetch a module or modules from svk repositories"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with svk.
Check to see if a given url can be fetched with cvs.
"""
return ud.type in ['svk']
@@ -48,41 +46,48 @@ class Svk(Fetch):
else:
ud.module = ud.parm["module"]
ud.revision = ud.parm.get('rev', "")
ud.revision = ""
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
ud.localfile = data.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ud.date), d)
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
def forcefetch(self, url, ud, d):
return ud.date == "now"
if (ud.date == "now"):
return True
return False
def go(self, loc, ud, d):
"""Fetch urls"""
if not self.forcefetch(loc, ud, d) and Fetch.try_mirror(d, ud.localfile):
return
svkroot = ud.host + ud.path
svkcmd = "svk co -r {%s} %s/%s" % (ud.date, svkroot, ud.module)
svkcmd = "svk co -r {%s} %s/%s" % (date, svkroot, ud.module)
if ud.revision:
svkcmd = "svk co -r %s %s/%s" % (ud.revision, svkroot, ud.module)
svkcmd = "svk co -r %s/%s" % (ud.revision, svkroot, ud.module)
# create temp directory
localdata = data.createCopy(d)
data.update_data(localdata)
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: creating temporary directory")
bb.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oesvk.XXXXXX', localdata), localdata)
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, 1) or "false")
tmpfile = tmppipe.readline().strip()
if not tmpfile:
logger.error("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.")
bb.msg.error(bb.msg.domain.Fetcher, "Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.")
raise FetchError(ud.module)
# check out sources there
os.chdir(tmpfile)
logger.info("Fetch " + loc)
logger.debug(1, "Running %s", svkcmd)
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svkcmd)
myret = os.system(svkcmd)
if myret != 0:
try:
@@ -101,4 +106,4 @@ class Svk(Fetch):
pass
raise FetchError(ud.module)
# cleanup
bb.utils.prunedir(tmpfile)
os.system('rm -rf %s' % tmpfile)

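A minimal sketch for the svk fetcher above, with hypothetical host and module; rev pins a revision, otherwise the date-based checkout shown in the code is used:

    SRC_URI = "svk://svk.example.org/depotpath;module=myproject;rev=1234"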

@@ -23,16 +23,14 @@ BitBake 'Fetch' implementation for svn.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import os, re
import sys
import logging
import bb
from bb import data
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import MissingParameterError
from bb.fetch import runfetchcmd
from bb.fetch import logger
class Svn(Fetch):
"""Class to fetch a module or modules from svn repositories"""
@@ -49,7 +47,10 @@ class Svn(Fetch):
ud.module = ud.parm["module"]
# Create paths to svn checkouts
relpath = self._strip_leading_slashes(ud.path)
relpath = ud.path
if relpath.startswith('/'):
# Remove leading slash as os.path.join can't cope
relpath = relpath[1:]
ud.pkgdir = os.path.join(data.expand('${SVNDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
@@ -77,7 +78,7 @@ class Svn(Fetch):
ud.revision = rev
ud.date = ""
else:
ud.revision = ""
ud.revision = ""
ud.localfile = data.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ud.date), d)
@@ -91,7 +92,9 @@ class Svn(Fetch):
basecmd = data.expand('${FETCHCMD_svn}', d)
proto = ud.parm.get('proto', 'svn')
proto = "svn"
if "proto" in ud.parm:
proto = ud.parm["proto"]
svn_rsh = None
if proto == "svn+ssh" and "rsh" in ud.parm:
@@ -111,15 +114,13 @@ class Svn(Fetch):
if command is "info":
svncmd = "%s info %s %s://%s/%s/" % (basecmd, " ".join(options), proto, svnroot, ud.module)
else:
suffix = ""
if ud.revision:
options.append("-r %s" % ud.revision)
suffix = "@%s" % (ud.revision)
elif ud.date:
options.append("-r {%s}" % ud.date)
if command is "fetch":
svncmd = "%s co %s %s://%s/%s%s %s" % (basecmd, " ".join(options), proto, svnroot, ud.module, suffix, ud.module)
svncmd = "%s co %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, svnroot, ud.module, ud.module)
elif command is "update":
svncmd = "%s update %s" % (basecmd, " ".join(options))
else:
@@ -133,34 +134,33 @@ class Svn(Fetch):
def go(self, loc, ud, d):
"""Fetch url"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
# try to use the tarball stash
if Fetch.try_mirror(d, ud.localfile):
bb.msg.debug(1, bb.msg.domain.Fetcher, "%s already exists or was mirrored, skipping svn checkout." % ud.localpath)
return
bb.msg.debug(2, bb.msg.domain.Fetcher, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(ud.moddir, '.svn'), os.R_OK):
svnupdatecmd = self._buildsvncommand(ud, d, "update")
logger.info("Update " + loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "Update " + loc)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", svnupdatecmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svnupdatecmd)
runfetchcmd(svnupdatecmd, d)
else:
svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
logger.info("Fetch " + loc)
bb.msg.note(1, bb.msg.domain.Fetcher, "Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
bb.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", svnfetchcmd)
bb.msg.debug(1, bb.msg.domain.Fetcher, "Running %s" % svnfetchcmd)
runfetchcmd(svnfetchcmd, d)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.svn'"
os.chdir(ud.pkgdir)
# tar them up to a defined filename
try:
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d)
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d)
except:
t, v, tb = sys.exc_info()
try:
@@ -169,7 +169,7 @@ class Svn(Fetch):
pass
raise t, v, tb
def supports_srcrev(self):
def suppports_srcrev(self):
return True
def _revision_key(self, url, ud, d):
@@ -182,7 +182,7 @@ class Svn(Fetch):
"""
Return the latest upstream revision number
"""
logger.debug(2, "SVN fetcher hitting network for %s", url)
bb.msg.debug(2, bb.msg.domain.Fetcher, "SVN fetcher hitting network for %s" % url)
output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "info"), d, True)

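Illustrative SRC_URIs for the svn fetcher above, using the parameters its code reads (module is required, proto defaults to svn, rsh applies only with svn+ssh); hosts are hypothetical:

    SRC_URI = "svn://svn.example.org/repos;module=trunk;proto=http"
    SRC_URI = "svn://svn.example.org/repos;module=trunk;proto=svn+ssh;rsh=ssh"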

@@ -25,26 +25,26 @@ BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import logging
import os, re
import bb
import urllib
from bb import data
from bb.fetch import Fetch, FetchError, encodeurl, decodeurl, logger, runfetchcmd
from bb.fetch import Fetch
from bb.fetch import FetchError
from bb.fetch import uri_replace
class Wget(Fetch):
"""Class to fetch urls via 'wget'"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with wget.
Check to see if a given url can be fetched with cvs.
"""
return ud.type in ['http', 'https', 'ftp']
return ud.type in ['http','https','ftp']
def localpath(self, url, ud, d):
url = encodeurl([ud.type, ud.host, ud.path, ud.user, ud.pswd, {}])
url = bb.encodeurl([ud.type, ud.host, ud.path, ud.user, ud.pswd, {}])
ud.basename = os.path.basename(ud.path)
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
ud.localfile = data.expand(os.path.basename(url), d)
return os.path.join(data.getVar("DL_DIR", d, True), ud.localfile)
@@ -60,21 +60,18 @@ class Wget(Fetch):
else:
fetchcmd = data.getVar("FETCHCOMMAND", d, 1)
uri = uri.split(";")[0]
uri_decoded = list(decodeurl(uri))
uri_type = uri_decoded[0]
uri_host = uri_decoded[1]
fetchcmd = fetchcmd.replace("${URI}", uri.split(";")[0])
bb.msg.note(1, bb.msg.domain.Fetcher, "fetch " + uri)
fetchcmd = fetchcmd.replace("${URI}", uri)
fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
logger.info("fetch " + uri)
logger.debug(2, "executing " + fetchcmd)
runfetchcmd(fetchcmd, d)
bb.msg.debug(2, bb.msg.domain.Fetcher, "executing " + fetchcmd)
ret = os.system(fetchcmd)
if ret != 0:
return False
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath) and not checkonly:
logger.debug(2, "The fetch command for %s returned success but %s doesn't exist?...", uri, ud.localpath)
if not os.path.exists(ud.localpath):
bb.msg.debug(2, bb.msg.domain.Fetcher, "The fetch command for %s returned success but %s doesn't exist?..." % (uri, ud.localpath))
return False
return True
@@ -83,9 +80,24 @@ class Wget(Fetch):
data.setVar('OVERRIDES', "wget:" + data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
premirrors = [ i.split() for i in (data.getVar('PREMIRRORS', localdata, 1) or "").split('\n') if i ]
for (find, replace) in premirrors:
newuri = uri_replace(uri, find, replace, d)
if newuri != uri:
if fetch_uri(newuri, ud, localdata):
return True
if fetch_uri(uri, ud, localdata):
return True
# try mirrors
mirrors = [ i.split() for i in (data.getVar('MIRRORS', localdata, 1) or "").split('\n') if i ]
for (find, replace) in mirrors:
newuri = uri_replace(uri, find, replace, d)
if newuri != uri:
if fetch_uri(newuri, ud, localdata):
return True
raise FetchError(uri)
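A trivial, hypothetical SRC_URI for the wget fetcher above; note that the rewritten go() first tries PREMIRRORS, then the original URI, then MIRRORS:

    SRC_URI = "http://downloads.example.org/releases/myproject-1.0.tar.gz"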

File diff suppressed because it is too large


@@ -1,143 +0,0 @@
"""
BitBake 'Fetch' implementation for bzr.
"""
# Copyright (C) 2007 Ross Burton
# Copyright (C) 2007 Richard Purdie
#
# Classes for obtaining upstream sources for the
# BitBake build tools.
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class Bzr(FetchMethod):
def supports(self, url, ud, d):
return ud.type in ['bzr']
def urldata_init(self, ud, d):
"""
init bzr specific variable within url data
"""
# Create paths to bzr checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(data.expand('${BZRDIR}', d), ud.host, relpath)
ud.setup_revisons(d)
if not ud.revision:
ud.revision = self.latest_revision(ud.url, ud, d)
ud.localfile = data.expand('bzr_%s_%s_%s.tar.gz' % (ud.host, ud.path.replace('/', '.'), ud.revision), d)
def _buildbzrcommand(self, ud, d, command):
"""
Build up a bzr commandline based on ud
command is "fetch", "update", "revno"
"""
basecmd = data.expand('${FETCHCMD_bzr}', d)
proto = ud.parm.get('proto', 'http')
bzrroot = ud.host + ud.path
options = []
if command == "revno":
bzrcmd = "%s revno %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
else:
if ud.revision:
options.append("-r %s" % ud.revision)
if command == "fetch":
bzrcmd = "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
elif command == "update":
bzrcmd = "%s pull %s --overwrite" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid bzr command %s" % command, ud.url)
return bzrcmd
def download(self, loc, ud, d):
"""Fetch url"""
if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
bzrcmd = self._buildbzrcommand(ud, d, "update")
logger.debug(1, "BZR Update %s", loc)
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
os.chdir(os.path.join (ud.pkgdir, os.path.basename(ud.path)))
runfetchcmd(bzrcmd, d)
else:
bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
bzrcmd = self._buildbzrcommand(ud, d, "fetch")
bb.fetch2.check_network_access(d, bzrcmd, ud.url)
logger.debug(1, "BZR Checkout %s", loc)
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", bzrcmd)
runfetchcmd(bzrcmd, d)
os.chdir(ud.pkgdir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.bzr' --exclude '.bzrtags'"
# tar them up to a defined filename
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(ud.pkgdir)), d, cleanup = [ud.localpath])
def supports_srcrev(self):
return True
def _revision_key(self, url, ud, d, name):
"""
Return a unique key for the url
"""
return "bzr:" + ud.pkgdir
def _latest_revision(self, url, ud, d, name):
"""
Return the latest upstream revision number
"""
logger.debug(2, "BZR fetcher hitting network for %s", url)
bb.fetch2.check_network_access(d, self._buildbzrcommand(ud, d, "revno"), ud.url)
output = runfetchcmd(self._buildbzrcommand(ud, d, "revno"), d, True)
return output.strip()
def _sortable_revision(self, url, ud, d):
"""
Return a sortable revision number which in our case is the revision number
"""
return self._build_revision(url, ud, d)
def _build_revision(self, url, ud, d):
return ud.revision


@@ -1,181 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations
Classes for obtaining upstream sources for the
BitBake build tools.
"""
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
#Based on functions from the base bb module, Copyright 2003 Holger Schurig
#
import os
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod, FetchError, MissingParameterError, logger
from bb.fetch2 import runfetchcmd
class Cvs(FetchMethod):
"""
Class to fetch a module or modules from cvs repositories
"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with cvs.
"""
return ud.type in ['cvs']
def urldata_init(self, ud, d):
if not "module" in ud.parm:
raise MissingParameterError("module", ud.url)
ud.module = ud.parm["module"]
ud.tag = ud.parm.get('tag', "")
# Override the default date in certain cases
if 'date' in ud.parm:
ud.date = ud.parm['date']
elif ud.tag:
ud.date = ""
norecurse = ''
if 'norecurse' in ud.parm:
norecurse = '_norecurse'
fullpath = ''
if 'fullpath' in ud.parm:
fullpath = '_fullpath'
ud.localfile = data.expand('%s_%s_%s_%s%s%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.tag, ud.date, norecurse, fullpath), d)
def need_update(self, url, ud, d):
if (ud.date == "now"):
return True
if not os.path.exists(ud.localpath):
return True
return False
def download(self, loc, ud, d):
method = ud.parm.get('method', 'pserver')
localdir = ud.parm.get('localdir', ud.module)
cvs_port = ud.parm.get('port', '')
cvs_rsh = None
if method == "ext":
if "rsh" in ud.parm:
cvs_rsh = ud.parm["rsh"]
if method == "dir":
cvsroot = ud.path
else:
cvsroot = ":" + method
cvsproxyhost = data.getVar('CVS_PROXY_HOST', d, True)
if cvsproxyhost:
cvsroot += ";proxy=" + cvsproxyhost
cvsproxyport = data.getVar('CVS_PROXY_PORT', d, True)
if cvsproxyport:
cvsroot += ";proxyport=" + cvsproxyport
cvsroot += ":" + ud.user
if ud.pswd:
cvsroot += ":" + ud.pswd
cvsroot += "@" + ud.host + ":" + cvs_port + ud.path
options = []
if 'norecurse' in ud.parm:
options.append("-l")
if ud.date:
# treat YYYYMMDDHHMM specially for CVS
if len(ud.date) == 12:
options.append("-D \"%s %s:%s UTC\"" % (ud.date[0:8], ud.date[8:10], ud.date[10:12]))
else:
options.append("-D \"%s UTC\"" % ud.date)
if ud.tag:
options.append("-r %s" % ud.tag)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "cvs:%s" % data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
data.setVar('CVSROOT', cvsroot, localdata)
data.setVar('CVSCOOPTS', " ".join(options), localdata)
data.setVar('CVSMODULE', ud.module, localdata)
cvscmd = data.getVar('FETCHCOMMAND', localdata, True)
cvsupdatecmd = data.getVar('UPDATECOMMAND', localdata, True)
if cvs_rsh:
cvscmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvscmd)
cvsupdatecmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvsupdatecmd)
# create module directory
logger.debug(2, "Fetch: checking for module directory")
pkg = data.expand('${PN}', d)
pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)
moddir = os.path.join(pkgdir, localdir)
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
logger.info("Update " + loc)
bb.fetch2.check_network_access(d, cvsupdatecmd, ud.url)
# update sources there
os.chdir(moddir)
cmd = cvsupdatecmd
else:
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(pkgdir)
os.chdir(pkgdir)
logger.debug(1, "Running %s", cvscmd)
bb.fetch2.check_network_access(d, cvscmd, ud.url)
cmd = cvscmd
runfetchcmd(cmd, d, cleanup = [moddir])
if not os.access(moddir, os.R_OK):
raise FetchError("Directory %s was not readable despite sucessful fetch?!" % moddir, ud.url)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude 'CVS'"
# tar them up to a defined filename
if 'fullpath' in ud.parm:
os.chdir(pkgdir)
cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, localdir)
else:
os.chdir(moddir)
os.chdir('..')
cmd = "tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.basename(moddir))
runfetchcmd(cmd, d, cleanup = [ud.localpath])
def clean(self, ud, d):
""" Clean CVS Files and tarballs """
pkg = data.expand('${PN}', d)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "cvs:%s" % data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
pkgdir = os.path.join(data.expand('${CVSDIR}', localdata), pkg)
bb.utils.remove(pkgdir, True)
bb.utils.remove(ud.localpath)


@@ -1,330 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' git implementation
The git fetcher supports SRC_URIs of the form:
SRC_URI = "git://some.host/somepath;OptionA=xxx;OptionB=xxx;..."
Supported SRC_URI options are:
- branch
The git branch to retrieve from. The default is "master".
This option also supports fetching multiple branches, separated by
commas; in that case the name option must carry the same number of
names, which are used to select the SRCREV for each branch,
e.g:
SRC_URI="git://some.host/somepath;branch=branchX,branchY;name=nameX,nameY"
SRCREV_nameX = "xxxxxxxxxxxxxxxxxxxx"
SRCREV_nameY = "YYYYYYYYYYYYYYYYYYYY"
- tag
The git tag to retrieve. The default is "master"
- protocol
The method to use to access the repository. Common options are "git",
"http", "file" and "rsync". The default is "git"
- rebaseable
rebaseable indicates that the upstream git repo may rebase in the future,
and the current revision may disappear from the upstream repo. This option
reminds the fetcher to preserve the local cache carefully for future use.
The default value is "0"; set rebaseable=1 for a rebaseable git repo.
- nocheckout
Don't check out source code when unpacking. Set this option for recipes
that have their own routine to check out code.
The default is "0"; set nocheckout=1 if needed.
- bareclone
Create a bare clone of the source code and don't check out the source code
when unpacking. Set this option for recipes that have their own routine to
check out code and tracking-branch requirements.
The default is "0"; set bareclone=1 if needed.
"""
#Copyright (C) 2005 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class Git(FetchMethod):
"""Class to fetch a module or modules from git repositories"""
def init(self, d):
#
# Only enable _sortable revision if the key is set
#
if d.getVar("BB_GIT_CLONE_FOR_SRCREV", True):
self._sortable_buildindex = self._sortable_buildindex_disabled
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with git.
"""
return ud.type in ['git']
def urldata_init(self, ud, d):
"""
init git specific variable within url data
so that the git method like latest_revision() can work
"""
if 'protocol' in ud.parm:
ud.proto = ud.parm['protocol']
elif not ud.host:
ud.proto = 'file'
else:
ud.proto = "git"
if not ud.proto in ('git', 'file', 'ssh', 'http', 'https', 'rsync'):
raise bb.fetch2.ParameterError("Invalid protocol type", ud.url)
ud.nocheckout = ud.parm.get("nocheckout","0") == "1"
ud.rebaseable = ud.parm.get("rebaseable","0") == "1"
# bareclone implies nocheckout
ud.bareclone = ud.parm.get("bareclone","0") == "1"
if ud.bareclone:
ud.nocheckout = 1
branches = ud.parm.get("branch", "master").split(',')
if len(branches) != len(ud.names):
raise bb.fetch2.ParameterError("The number of name and branch parameters is not balanced", ud.url)
ud.branches = {}
for name in ud.names:
branch = branches[ud.names.index(name)]
ud.branches[name] = branch
ud.basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
ud.write_tarballs = ((data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True) or "0") != "0") or ud.rebaseable
ud.setup_revisons(d)
for name in ud.names:
# Ensure anything that doesn't look like a 40-character sha1 revision is translated into one
if not ud.revisions[name] or len(ud.revisions[name]) != 40 or (False in [c in "abcdef0123456789" for c in ud.revisions[name]]):
ud.branches[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud.url, ud, d, name)
gitsrcname = '%s%s' % (ud.host.replace(':','.'), ud.path.replace('/', '.'))
# for a rebaseable git repo, it is necessary to keep a mirror tarball
# per revision, so that even if the revision disappears from the
# upstream repo in the future, the mirror will remain intact and still
# contain the revision
if ud.rebaseable:
for name in ud.names:
gitsrcname = gitsrcname + '_' + ud.revisions[name]
ud.mirrortarball = 'git2_%s.tar.gz' % (gitsrcname)
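# e.g. for a rebaseable repo this yields a tarball name like
# git2_git.example.com.project.git_<revision>.tar.gz (illustrative)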
ud.fullmirror = os.path.join(data.getVar("DL_DIR", d, True), ud.mirrortarball)
ud.clonedir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)
ud.localfile = ud.clonedir
def localpath(self, url, ud, d):
return ud.clonedir
def need_update(self, u, ud, d):
if not os.path.exists(ud.clonedir):
return True
os.chdir(ud.clonedir)
for name in ud.names:
if not self._contains_ref(ud.revisions[name], d):
return True
if ud.write_tarballs and not os.path.exists(ud.fullmirror):
return True
return False
def try_premirror(self, u, ud, d):
# If we don't do this, updating an existing checkout with only premirrors
# is not possible
if d.getVar("BB_FETCH_PREMIRRORONLY", True) is not None:
return True
if os.path.exists(ud.clonedir):
return False
return True
def download(self, loc, ud, d):
"""Fetch url"""
if ud.user:
username = ud.user + '@'
else:
username = ""
ud.repochanged = not os.path.exists(ud.fullmirror)
# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.clonedir) and os.path.exists(ud.fullmirror):
bb.utils.mkdirhier(ud.clonedir)
os.chdir(ud.clonedir)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d)
repourl = "%s://%s%s%s" % (ud.proto, username, ud.host, ud.path)
# If the repo still doesn't exist, fallback to cloning it
if not os.path.exists(ud.clonedir):
clone_cmd = "%s clone --bare --mirror %s %s" % (ud.basecmd, repourl, ud.clonedir)
bb.fetch2.check_network_access(d, clone_cmd)
runfetchcmd(clone_cmd, d)
os.chdir(ud.clonedir)
# Update the checkout if needed
needupdate = False
for name in ud.names:
if not self._contains_ref(ud.revisions[name], d):
needupdate = True
if needupdate:
try:
runfetchcmd("%s remote prune origin" % ud.basecmd, d)
runfetchcmd("%s remote rm origin" % ud.basecmd, d)
except bb.fetch2.FetchError:
logger.debug(1, "No Origin")
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d)
fetch_cmd = "%s fetch -f --prune %s refs/*:refs/*" % (ud.basecmd, repourl)
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
runfetchcmd(fetch_cmd, d)
runfetchcmd("%s prune-packed" % ud.basecmd, d)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)
ud.repochanged = True
def build_mirror_data(self, url, ud, d):
# Generate a mirror tarball if needed
if ud.write_tarballs and (ud.repochanged or not os.path.exists(ud.fullmirror)):
os.chdir(ud.clonedir)
logger.info("Creating tarball of git repository")
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, os.path.join(".") ), d)
def unpack(self, ud, destdir, d):
""" unpack the downloaded src to destdir"""
subdir = ud.parm.get("subpath", "")
if subdir != "":
readpathspec = ":%s" % (subdir)
def_destsuffix = "%s/" % os.path.basename(subdir)
else:
readpathspec = ""
def_destsuffix = "git/"
destsuffix = ud.parm.get("destsuffix", def_destsuffix)
destdir = os.path.join(destdir, destsuffix)
if os.path.exists(destdir):
bb.utils.prunedir(destdir)
cloneflags = "-s -n"
if ud.bareclone:
cloneflags += " --mirror"
runfetchcmd("git clone %s %s/ %s" % (cloneflags, ud.clonedir, destdir), d)
if not ud.nocheckout:
os.chdir(destdir)
if subdir != "":
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d)
else:
runfetchcmd("%s checkout %s" % (ud.basecmd, ud.revisions[ud.names[0]]), d)
return True
def clean(self, ud, d):
""" clean the git directory """
bb.utils.remove(ud.localpath, True)
bb.utils.remove(ud.fullmirror)
def supports_srcrev(self):
return True
def _contains_ref(self, tag, d):
basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
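# "git log -n 1 <rev> --" prints exactly one line when the revision exists
# locally, so the piped "wc -l" yields "1" if present and "0" otherwise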
cmd = "%s log --pretty=oneline -n 1 %s -- 2> /dev/null | wc -l" % (basecmd, tag)
output = runfetchcmd(cmd, d, quiet=True)
if len(output.split()) > 1:
raise bb.fetch2.FetchError("The command '%s' gave output with more than one line unexpectedly, output: '%s'" % (cmd, output))
return output.split()[0] != "0"
def _revision_key(self, url, ud, d, name):
"""
Return a unique key for the url
"""
return "git:" + ud.host + ud.path.replace('/', '.') + ud.branches[name]
def _latest_revision(self, url, ud, d, name):
"""
Compute the HEAD revision for the url
"""
if ud.user:
username = ud.user + '@'
else:
username = ""
basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
cmd = "%s ls-remote %s://%s%s%s %s" % \
(basecmd, ud.proto, username, ud.host, ud.path, ud.branches[name])
bb.fetch2.check_network_access(d, cmd)
output = runfetchcmd(cmd, d, True)
if not output:
raise bb.fetch2.FetchError("The command %s gave empty output unexpectedly" % cmd, url)
return output.split()[0]
def _build_revision(self, url, ud, d, name):
return ud.revisions[name]
def _sortable_buildindex_disabled(self, url, ud, d, rev):
"""
Return a suitable buildindex for the revision specified. This is done by counting revisions
using "git rev-list" which may or may not work in different circumstances.
"""
cwd = os.getcwd()
# Check if we have the rev already
if not os.path.exists(ud.clonedir):
logger.debug(1, "GIT repository for %s does not exist in %s. \
Downloading.", url, ud.clonedir)
self.download(None, ud, d)
if not os.path.exists(ud.clonedir):
logger.error("GIT repository for %s does not exist in %s after \
download. Cannot get sortable buildnumber, using \
old value", url, ud.clonedir)
return None
os.chdir(ud.clonedir)
if not self._contains_ref(rev, d):
self.download(None, ud, d)
output = runfetchcmd("%s rev-list %s -- 2> /dev/null | wc -l" % (ud.basecmd, rev), d, quiet=True)
os.chdir(cwd)
buildindex = "%s" % output.split()[0]
logger.debug(1, "GIT repository for %s in %s is returning %s revisions in rev-list before %s", url, ud.clonedir, buildindex, rev)
return buildindex
def checkstatus(self, uri, ud, d):
fetchcmd = "%s ls-remote %s" % (ud.basecmd, uri)
try:
runfetchcmd(fetchcmd, d, quiet=True)
return True
except FetchError:
return False

View File

@@ -1,176 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for mercurial DRCS (hg).
"""
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2004 Marcin Juszkiewicz
# Copyright (C) 2007 Robert Schuster
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class Hg(FetchMethod):
"""Class to fetch from mercurial repositories"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with mercurial.
"""
return ud.type in ['hg']
def urldata_init(self, ud, d):
"""
init hg specific variables within url data
"""
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
ud.module = ud.parm["module"]
# Create paths to mercurial checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(data.expand('${HGDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.setup_revisons(d)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
elif not ud.revision:
ud.revision = self.latest_revision(ud.url, ud, d)
ud.localfile = data.expand('%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)
def need_update(self, url, ud, d):
revTag = ud.parm.get('rev', 'tip')
if revTag == "tip":
return True
if not os.path.exists(ud.localpath):
return True
return False
def _buildhgcommand(self, ud, d, command):
"""
Build up an hg commandline based on ud
command is "fetch", "pull", "update" or "info"
"""
basecmd = data.expand('${FETCHCMD_hg}', d)
proto = ud.parm.get('proto', 'http')
host = ud.host
if proto == "file":
host = "/"
ud.host = "localhost"
if not ud.user:
hgroot = host + ud.path
else:
hgroot = ud.user + "@" + host + ud.path
if command == "info":
return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, ud.module)
options = []
if ud.revision:
options.append("-r %s" % ud.revision)
if command == "fetch":
cmd = "%s clone %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
elif command == "pull":
# do not pass the options list; limiting the pull to a rev would cause
# the local repo not to contain it, and the immediately following
# "update" command would crash
cmd = "%s pull" % (basecmd)
elif command == "update":
cmd = "%s update -C %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid hg command %s" % command, ud.url)
return cmd
def download(self, loc, ud, d):
"""Fetch url"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
updatecmd = self._buildhgcommand(ud, d, "pull")
logger.info("Update " + loc)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
bb.fetch2.check_network_access(d, updatecmd, ud.url)
runfetchcmd(updatecmd, d)
else:
fetchcmd = self._buildhgcommand(ud, d, "fetch")
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd, ud.url)
runfetchcmd(fetchcmd, d)
# Even when we clone (fetch), we still need to update as hg's clone
# won't checkout the specified revision if it's on a branch
updatecmd = self._buildhgcommand(ud, d, "update")
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
runfetchcmd(updatecmd, d)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.hg' --exclude '.hgrags'"
os.chdir(ud.pkgdir)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d, cleanup = [ud.localpath])
def supports_srcrev(self):
return True
def _latest_revision(self, url, ud, d, name):
"""
Compute tip revision for the url
"""
bb.fetch2.check_network_access(d, self._buildhgcommand(ud, d, "info"))
output = runfetchcmd(self._buildhgcommand(ud, d, "info"), d)
return output.strip()
def _build_revision(self, url, ud, d, name):
return ud.revision
def _revision_key(self, url, ud, d, name):
"""
Return a unique key for the url
"""
return "hg:" + ud.moddir

View File

@@ -1,91 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations
Classes for obtaining upstream sources for the
BitBake build tools.
"""
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import bb
import bb.utils
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import logger
class Local(FetchMethod):
def supports(self, url, urldata, d):
"""
Check to see if a given url represents a local fetch.
"""
return urldata.type in ['file']
def urldata_init(self, ud, d):
# We don't set localfile as for this fetcher the file is already local!
ud.basename = os.path.basename(ud.url.split("://")[1].split(";")[0])
return
def localpath(self, url, urldata, d):
"""
Return the local filename of a given url assuming a successful fetch.
"""
path = url.split("://")[1]
path = path.split(";")[0]
newpath = path
if path[0] != "/":
filespath = data.getVar('FILESPATH', d, True)
if filespath:
newpath = bb.utils.which(filespath, path)
if not newpath:
filesdir = data.getVar('FILESDIR', d, True)
if filesdir:
newpath = os.path.join(filesdir, path)
if not os.path.exists(newpath) and path.find("*") == -1:
dldirfile = os.path.join(data.getVar("DL_DIR", d, True), os.path.basename(path))
return dldirfile
return newpath
def need_update(self, url, ud, d):
if url.find("*") != -1:
return False
if os.path.exists(ud.localpath):
return False
return True
def download(self, url, urldata, d):
"""Fetch urls (no-op for Local method)"""
# no need to fetch local files, we'll deal with them in place.
return 1
def checkstatus(self, url, urldata, d):
"""
Check the status of the url
"""
if urldata.localpath.find("*") != -1:
logger.info("URL %s looks like a glob and was therefore not checked.", url)
return True
if os.path.exists(urldata.localpath):
return True
return False
def clean(self, urldata, d):
return

View File

@@ -1,135 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
Bitbake "Fetch" implementation for osc (Opensuse build service client).
Based on the svn "Fetch" implementation.
"""
import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class Osc(FetchMethod):
"""Class to fetch a module or modules from Opensuse build server
repositories."""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with osc.
"""
return ud.type in ['osc']
def urldata_init(self, ud, d):
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
ud.module = ud.parm["module"]
# Create paths to osc checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(data.expand('${OSCDIR}', d), ud.host)
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
else:
pv = data.getVar("PV", d, 0)
rev = bb.fetch2.srcrev_internal_helper(ud, d)
if rev and rev != True:
ud.revision = rev
else:
ud.revision = ""
ud.localfile = data.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision), d)
def _buildosccommand(self, ud, d, command):
"""
Build up an osc commandline based on ud
command is "fetch", "update", "info"
"""
basecmd = data.expand('${FETCHCMD_osc}', d)
proto = ud.parm.get('proto', 'osc')
options = []
config = "-c %s" % self.generate_config(ud, d)
if ud.revision:
options.append("-r %s" % ud.revision)
coroot = self._strip_leading_slashes(ud.path)
if command == "fetch":
osccmd = "%s %s co %s/%s %s" % (basecmd, config, coroot, ud.module, " ".join(options))
elif command == "update":
osccmd = "%s %s up %s" % (basecmd, config, " ".join(options))
else:
raise FetchError("Invalid osc command %s" % command, ud.url)
return osccmd
def download(self, loc, ud, d):
"""
Fetch url
"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(data.expand('${OSCDIR}', d), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ loc)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", oscupdatecmd)
bb.fetch2.check_network_access(d, oscupdatecmd, ud.url)
runfetchcmd(oscupdatecmd, d)
else:
oscfetchcmd = self._buildosccommand(ud, d, "fetch")
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", oscfetchcmd)
bb.fetch2.check_network_access(d, oscfetchcmd, ud.url)
runfetchcmd(oscfetchcmd, d)
os.chdir(os.path.join(ud.pkgdir + ud.path))
# tar them up to a defined filename
runfetchcmd("tar -czf %s %s" % (ud.localpath, ud.module), d, cleanup = [ud.localpath])
def supports_srcrev(self):
return False
def generate_config(self, ud, d):
"""
Generate a .oscrc to be used for this run.
"""
config_path = os.path.join(data.expand('${OSCDIR}', d), "oscrc")
if os.path.exists(config_path):
os.remove(config_path)
f = open(config_path, 'w')
f.write("[general]\n")
f.write("apisrv = %s\n" % ud.host)
f.write("scheme = http\n")
f.write("su-wrapper = su -c\n")
f.write("build-root = %s\n" % data.expand('${WORKDIR}', d))
f.write("urllist = http://moblin-obs.jf.intel.com:8888/build/%(project)s/%(repository)s/%(buildarch)s/:full/%(name)s.rpm\n")
f.write("extra-pkgs = gzip\n")
f.write("\n")
f.write("[%s]\n" % ud.host)
f.write("user = %s\n" % ud.parm["user"])
f.write("pass = %s\n" % ud.parm["pswd"])
f.close()
return config_path

View File

@@ -1,196 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations
Classes for obtaining upstream sources for the
BitBake build tools.
"""
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
from future_builtins import zip
import os
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
class Perforce(FetchMethod):
def supports(self, url, ud, d):
return ud.type in ['p4']
def doparse(url, d):
parm = {}
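# Expected URL shape (illustrative):
#   p4://user:pswd:host:port@//depot/path;revision=N
# the credentials/host block before '@' is optional; P4PORT is used otherwise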
path = url.split("://")[1]
delim = path.find("@")
if delim != -1:
(user, pswd, host, port) = path.split('@')[0].split(":")
path = path.split('@')[1]
else:
(host, port) = data.getVar('P4PORT', d).split(':')
user = ""
pswd = ""
if path.find(";") != -1:
keys=[]
values=[]
plist = path.split(';')
for item in plist:
if item.count('='):
(key, value) = item.split('=')
keys.append(key)
values.append(value)
parm = dict(zip(keys, values))
path = "//" + path.split(';')[0]
host += ":%s" % (port)
parm["cset"] = Perforce.getcset(d, path, host, user, pswd, parm)
return host, path, user, pswd, parm
doparse = staticmethod(doparse)
def getcset(d, depot, host, user, pswd, parm):
p4opt = ""
if "cset" in parm:
return parm["cset"];
if user:
p4opt += " -u %s" % (user)
if pswd:
p4opt += " -P %s" % (pswd)
if host:
p4opt += " -p %s" % (host)
p4date = data.getVar("P4DATE", d, True)
if "revision" in parm:
depot += "#%s" % (parm["revision"])
elif "label" in parm:
depot += "@%s" % (parm["label"])
elif p4date:
depot += "@%s" % (p4date)
p4cmd = data.getVar('FETCHCOMMAND_p4', d, True)
logger.debug(1, "Running %s%s changes -m 1 %s", p4cmd, p4opt, depot)
p4file = os.popen("%s%s changes -m 1 %s" % (p4cmd, p4opt, depot))
cset = p4file.readline().strip()
logger.debug(1, "READ %s", cset)
if not cset:
return -1
return cset.split(' ')[1]
getcset = staticmethod(getcset)
def urldata_init(self, ud, d):
(host, path, user, pswd, parm) = Perforce.doparse(ud.url, d)
# If a label is specified, we use that as our filename
if "label" in parm:
ud.localfile = "%s.tar.gz" % (parm["label"])
return
base = path
which = path.find('/...')
if which != -1:
base = path[:which]
base = self._strip_leading_slashes(base)
cset = Perforce.getcset(d, path, host, user, pswd, parm)
ud.localfile = data.expand('%s+%s+%s.tar.gz' % (host, base.replace('/', '.'), cset), d)
def download(self, loc, ud, d):
"""
Fetch urls
"""
(host, depot, user, pswd, parm) = Perforce.doparse(loc, d)
if depot.find('/...') != -1:
path = depot[:depot.find('/...')]
else:
path = depot
module = parm.get('module', os.path.basename(path))
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "p4:%s" % data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
# Get the p4 command
p4opt = ""
if user:
p4opt += " -u %s" % (user)
if pswd:
p4opt += " -P %s" % (pswd)
if host:
p4opt += " -p %s" % (host)
p4cmd = data.getVar('FETCHCOMMAND', localdata, True)
# create temp directory
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oep4.XXXXXX', localdata), localdata)
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmppipe.readline().strip()
if not tmpfile:
raise FetchError("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.", loc)
if "label" in parm:
depot = "%s@%s" % (depot, parm["label"])
else:
cset = Perforce.getcset(d, depot, host, user, pswd, parm)
depot = "%s@%s" % (depot, cset)
os.chdir(tmpfile)
logger.info("Fetch " + loc)
logger.info("%s%s files %s", p4cmd, p4opt, depot)
p4file = os.popen("%s%s files %s" % (p4cmd, p4opt, depot))
if not p4file:
raise FetchError("Fetch: unable to get the P4 files from %s" % depot, loc)
count = 0
for file in p4file:
list = file.split()
if list[2] == "delete":
continue
dest = list[0][len(path)+1:]
where = dest.find("#")
os.system("%s%s print -o %s/%s %s" % (p4cmd, p4opt, module, dest[:where], list[0]))
count = count + 1
if count == 0:
logger.error("Fetch: No files gathered from the P4 fetch")
raise FetchError("Fetch: No files gathered from the P4 fetch", loc)
runfetchcmd("tar -czf %s %s" % (ud.localpath, module), d, cleanup = [ud.localpath])
# cleanup
bb.utils.prunedir(tmpfile)

View File

@@ -1,98 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake "Fetch" repo (git) implementation
"""
# Copyright (C) 2009 Tom Rini <trini@embeddedalley.com>
#
# Based on git.py which is:
#Copyright (C) 2005 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class Repo(FetchMethod):
"""Class to fetch a module or modules from repo (git) repositories"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with repo.
"""
return ud.type in ["repo"]
def urldata_init(self, ud, d):
"""
We don"t care about the git rev of the manifests repository, but
we do care about the manifest to use. The default is "default".
We also care about the branch or tag to be used. The default is
"master".
"""
ud.proto = ud.parm.get('protocol', 'git')
ud.branch = ud.parm.get('branch', 'master')
ud.manifest = ud.parm.get('manifest', 'default.xml')
if not ud.manifest.endswith('.xml'):
ud.manifest += '.xml'
ud.localfile = data.expand("repo_%s%s_%s_%s.tar.gz" % (ud.host, ud.path.replace("/", "."), ud.manifest, ud.branch), d)
def download(self, loc, ud, d):
"""Fetch url"""
if os.access(os.path.join(data.getVar("DL_DIR", d, True), ud.localfile), os.R_OK):
logger.debug(1, "%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
return
gitsrcname = "%s%s" % (ud.host, ud.path.replace("/", "."))
repodir = data.getVar("REPODIR", d, True) or os.path.join(data.getVar("DL_DIR", d, True), "repo")
codir = os.path.join(repodir, gitsrcname, ud.manifest)
if ud.user:
username = ud.user + "@"
else:
username = ""
bb.utils.mkdirhier(os.path.join(codir, "repo"))
os.chdir(os.path.join(codir, "repo"))
if not os.path.exists(os.path.join(codir, "repo", ".repo")):
bb.fetch2.check_network_access(d, "repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), ud.url)
runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d)
bb.fetch2.check_network_access(d, "repo sync %s" % ud.url, ud.url)
runfetchcmd("repo sync", d)
os.chdir(codir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.repo' --exclude '.git'"
# Create a cache
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, os.path.join(".", "*") ), d)
def supports_srcrev(self):
return False
def _build_revision(self, url, ud, d):
return ud.manifest
def _want_sortable_revision(self, url, ud, d):
return False

View File

@@ -1,120 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
'''
BitBake 'Fetch' implementations
This implementation is for Secure Shell (SSH), and attempts to comply with the
IETF secsh internet draft:
http://tools.ietf.org/wg/secsh/draft-ietf-secsh-scp-sftp-ssh-uri/
Currently does not support the sftp parameters, as this uses scp
Also does not support the 'fingerprint' connection parameter.
'''
# Copyright (C) 2006 OpenedHand Ltd.
#
#
# Based in part on svk.py:
# Copyright (C) 2006 Holger Hans Peter Freyther
# Based on svn.py:
# Copyright (C) 2003, 2004 Chris Larson
# Based on functions from the base bb module:
# Copyright 2003 Holger Schurig
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import re, os
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
__pattern__ = re.compile(r'''
\s* # Skip leading whitespace
ssh:// # scheme
( # Optional username/password block
(?P<user>\S+) # username
(:(?P<pass>\S+))? # colon followed by the password (optional)
)?
(?P<cparam>(;[^;]+)*)? # connection parameters block (optional)
@
(?P<host>\S+?) # non-greedy match of the host
(:(?P<port>[0-9]+))? # colon followed by the port (optional)
/
(?P<path>[^;]+) # path on the remote system, may be absolute or relative,
# and may include the use of '~' to reference the remote home
# directory
(?P<sparam>(;[^;]+)*)? # parameters block (optional)
$
''', re.VERBOSE)
class SSH(FetchMethod):
'''Class to fetch a module or modules via Secure Shell'''
def supports(self, url, urldata, d):
return __pattern__.match(url) is not None
def localpath(self, url, urldata, d):
m = __pattern__.match(urldata.url)
path = m.group('path')
host = m.group('host')
lpath = os.path.join(data.getVar('DL_DIR', d, True), host, os.path.basename(path))
return lpath
def download(self, url, urldata, d):
dldir = data.getVar('DL_DIR', d, True)
m = __pattern__.match(url)
path = m.group('path')
host = m.group('host')
port = m.group('port')
user = m.group('user')
password = m.group('pass')
ldir = os.path.join(dldir, host)
lpath = os.path.join(ldir, os.path.basename(path))
if not os.path.exists(ldir):
os.makedirs(ldir)
if port:
port = '-P %s' % port
else:
port = ''
if user:
fr = user
if password:
fr += ':%s' % password
fr += '@%s' % host
else:
fr = host
fr += ':%s' % path
import commands
cmd = 'scp -B -r %s %s %s/' % (
port,
commands.mkarg(fr),
commands.mkarg(ldir)
)
bb.fetch2.check_network_access(d, cmd, urldata.url)
runfetchcmd(cmd, d)

View File

@@ -1,97 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations
This implementation is for svk. It is based on the svn implementation
"""
# Copyright (C) 2006 Holger Hans Peter Freyther
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
class Svk(FetchMethod):
"""Class to fetch a module or modules from svk repositories"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with svk.
"""
return ud.type in ['svk']
def urldata_init(self, ud, d):
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
else:
ud.module = ud.parm["module"]
ud.revision = ud.parm.get('rev', "")
ud.localfile = data.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ud.date), d)
def need_update(self, url, ud, d):
if ud.date == "now":
return True
if not os.path.exists(ud.localpath):
return True
return False
def download(self, loc, ud, d):
"""Fetch urls"""
svkroot = ud.host + ud.path
svkcmd = "svk co -r {%s} %s/%s" % (ud.date, svkroot, ud.module)
if ud.revision:
svkcmd = "svk co -r %s %s/%s" % (ud.revision, svkroot, ud.module)
# create temp directory
localdata = data.createCopy(d)
data.update_data(localdata)
logger.debug(2, "Fetch: creating temporary directory")
bb.utils.mkdirhier(data.expand('${WORKDIR}', localdata))
data.setVar('TMPBASE', data.expand('${WORKDIR}/oesvk.XXXXXX', localdata), localdata)
tmppipe = os.popen(data.getVar('MKTEMPDIRCMD', localdata, True) or "false")
tmpfile = tmppipe.readline().strip()
if not tmpfile:
logger.error("Fetch: unable to create temporary directory")
raise FetchError("Fetch: unable to create temporary directory.. make sure 'mktemp' is in the PATH.", loc)
# check out sources there
os.chdir(tmpfile)
logger.info("Fetch " + loc)
logger.debug(1, "Running %s", svkcmd)
runfetchcmd(svkcmd, d, cleanup = [tmpfile])
os.chdir(os.path.join(tmpfile, os.path.dirname(ud.module)))
# tar them up to a defined filename
runfetchcmd("tar -czf %s %s" % (ud.localpath, os.path.basename(ud.module)), d, cleanup = [ud.localpath])
# cleanup
bb.utils.prunedir(tmpfile)

View File

@@ -1,182 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for svn.
"""
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2004 Marcin Juszkiewicz
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import sys
import logging
import bb
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
class Svn(FetchMethod):
"""Class to fetch a module or modules from svn repositories"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with svn.
"""
return ud.type in ['svn']
def urldata_init(self, ud, d):
"""
init svn specific variables within url data
"""
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
ud.module = ud.parm["module"]
# Create paths to svn checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(data.expand('${SVNDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.setup_revisons(d)
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
ud.localfile = data.expand('%s_%s_%s_%s_.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)
def _buildsvncommand(self, ud, d, command):
"""
Build up an svn commandline based on ud
command is "fetch", "update", "info"
"""
basecmd = data.expand('${FETCHCMD_svn}', d)
proto = ud.parm.get('proto', 'svn')
svn_rsh = None
if proto == "svn+ssh" and "rsh" in ud.parm:
svn_rsh = ud.parm["rsh"]
svnroot = ud.host + ud.path
options = []
if ud.user:
options.append("--username %s" % ud.user)
if ud.pswd:
options.append("--password %s" % ud.pswd)
if command == "info":
svncmd = "%s info %s %s://%s/%s/" % (basecmd, " ".join(options), proto, svnroot, ud.module)
else:
suffix = ""
if ud.revision:
options.append("-r %s" % ud.revision)
suffix = "@%s" % (ud.revision)
if command == "fetch":
svncmd = "%s co %s %s://%s/%s%s %s" % (basecmd, " ".join(options), proto, svnroot, ud.module, suffix, ud.module)
elif command == "update":
svncmd = "%s update %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid svn command %s" % command, ud.url)
if svn_rsh:
svncmd = "svn_RSH=\"%s\" %s" % (svn_rsh, svncmd)
return svncmd
def download(self, loc, ud, d):
"""Fetch url"""
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")
if os.access(os.path.join(ud.moddir, '.svn'), os.R_OK):
svnupdatecmd = self._buildsvncommand(ud, d, "update")
logger.info("Update " + loc)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", svnupdatecmd)
bb.fetch2.check_network_access(d, svnupdatecmd, ud.url)
runfetchcmd(svnupdatecmd, d)
else:
svnfetchcmd = self._buildsvncommand(ud, d, "fetch")
logger.info("Fetch " + loc)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
logger.debug(1, "Running %s", svnfetchcmd)
bb.fetch2.check_network_access(d, svnfetchcmd, ud.url)
runfetchcmd(svnfetchcmd, d)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.svn'"
os.chdir(ud.pkgdir)
# tar them up to a defined filename
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d, cleanup = [ud.localpath])
def clean(self, ud, d):
""" Clean SVN specific files and dirs """
bb.utils.remove(ud.localpath)
bb.utils.remove(ud.moddir, True)
def supports_srcrev(self):
return True
def _revision_key(self, url, ud, d, name):
"""
Return a unique key for the url
"""
return "svn:" + ud.moddir
def _latest_revision(self, url, ud, d, name):
"""
Return the latest upstream revision number
"""
bb.fetch2.check_network_access(d, self._buildsvncommand(ud, d, "info"))
output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "info"), d, True)
revision = None
for line in output.splitlines():
if "Last Changed Rev" in line:
revision = line.split(":")[1].strip()
return revision
def _sortable_revision(self, url, ud, d):
"""
Return a sortable revision number which in our case is the revision number
"""
return self._build_revision(url, ud, d)
def _build_revision(self, url, ud, d):
return ud.revision

View File

@@ -1,92 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementations
Classes for obtaining upstream sources for the
BitBake build tools.
"""
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import os
import logging
import bb
import urllib
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import encodeurl
from bb.fetch2 import decodeurl
from bb.fetch2 import logger
from bb.fetch2 import runfetchcmd
class Wget(FetchMethod):
"""Class to fetch urls via 'wget'"""
def supports(self, url, ud, d):
"""
Check to see if a given url can be fetched with wget.
"""
return ud.type in ['http', 'https', 'ftp']
def urldata_init(self, ud, d):
ud.basename = os.path.basename(ud.path)
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
def download(self, uri, ud, d, checkonly = False):
"""Fetch urls"""
def fetch_uri(uri, ud, d):
if checkonly:
fetchcmd = data.getVar("CHECKCOMMAND", d, True)
elif os.path.exists(ud.localpath):
# the file exists, but we didn't complete it, so try again
fetchcmd = data.getVar("RESUMECOMMAND", d, True)
else:
fetchcmd = data.getVar("FETCHCOMMAND", d, True)
uri = uri.split(";")[0]
uri_decoded = list(decodeurl(uri))
uri_type = uri_decoded[0]
uri_host = uri_decoded[1]
fetchcmd = fetchcmd.replace("${URI}", uri.split(";")[0])
fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
if not checkonly:
logger.info("fetch " + uri)
logger.debug(2, "executing " + fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd)
runfetchcmd(fetchcmd, d, quiet=checkonly)
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath) and not checkonly:
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, ud.localpath), uri)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "wget:" + data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)
fetch_uri(uri, ud, localdata)
return True
def checkstatus(self, uri, ud, d):
return self.download(uri, ud, d, True)

bitbake/lib/bb/manifest.py (144 lines) Normal file
View File

@@ -0,0 +1,144 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2003, 2004 Chris Larson
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os, sys
import bb, bb.data
def getfields(line):
fields = {}
fieldmap = ( "pkg", "src", "dest", "type", "mode", "uid", "gid", "major", "minor", "start", "inc", "count" )
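# A manifest line supplies these fields in order, using "-" for unset ones,
# e.g. (illustrative): mypkg files/foo.conf ${sysconfdir}/foo.conf f 0644 - - - - - - -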
for f in xrange(len(fieldmap)):
fields[fieldmap[f]] = None
if not line:
return None
splitline = line.split()
if not len(splitline):
return None
try:
for f in xrange(len(fieldmap)):
if splitline[f] == '-':
continue
fields[fieldmap[f]] = splitline[f]
except IndexError:
pass
return fields
def parse (mfile, d):
manifest = []
while 1:
line = mfile.readline()
if not line:
break
if line.startswith("#"):
continue
fields = getfields(line)
if not fields:
continue
manifest.append(fields)
return manifest
def emit (func, manifest, d):
#str = "%s () {\n" % func
str = ""
for line in manifest:
emittedline = emit_line(func, line, d)
if not emittedline:
continue
str += emittedline + "\n"
# str += "}\n"
return str
def mangle (func, line, d):
import copy
newline = copy.copy(line)
src = bb.data.expand(newline["src"], d)
if src:
if not os.path.isabs(src):
src = "${WORKDIR}/" + src
dest = newline["dest"]
if not dest:
return
if dest.startswith("/"):
dest = dest[1:]
if func == "do_install":
dest = "${D}/" + dest
elif func is "do_populate":
dest = "${WORKDIR}/install/" + newline["pkg"] + "/" + dest
elif func is "do_stage":
varmap = {}
varmap["${bindir}"] = "${STAGING_DIR}/${HOST_SYS}/bin"
varmap["${libdir}"] = "${STAGING_DIR}/${HOST_SYS}/lib"
varmap["${includedir}"] = "${STAGING_DIR}/${HOST_SYS}/include"
varmap["${datadir}"] = "${STAGING_DATADIR}"
matched = 0
for key in varmap.keys():
if dest.startswith(key):
dest = varmap[key] + "/" + dest[len(key):]
matched = 1
if not matched:
newline = None
return
else:
newline = None
return
newline["src"] = src
newline["dest"] = dest
return newline
def emit_line (func, line, d):
import copy
newline = copy.deepcopy(line)
newline = mangle(func, newline, d)
if not newline:
return None
str = ""
type = newline["type"]
mode = newline["mode"]
src = newline["src"]
dest = newline["dest"]
if type == "d":
str = "install -d "
if mode:
str += "-m %s " % mode
str += dest
elif type == "f":
if not src:
return None
if dest.endswith("/"):
str = "install -d "
str += dest + "\n"
str += "install "
else:
str = "install -D "
if mode:
str += "-m %s " % mode
str += src + " " + dest
del newline
return str

View File

@@ -27,7 +27,7 @@
a method pool to do this task.
This pool will be used to compile and execute the functions. It
will be smart enough to
"""
from bb.utils import better_compile, better_exec
@@ -43,8 +43,8 @@ def insert_method(modulename, code, fn):
Add code of a module should be added. The methods
will be simply added, no checking will be done
"""
comp = better_compile(code, modulename, fn )
better_exec(comp, None, code, fn)
comp = better_compile(code, "<bb>", fn )
better_exec(comp, __builtins__, code, fn)
# now some instrumentation
code = comp.co_names
@@ -59,7 +59,7 @@ def insert_method(modulename, code, fn):
def check_insert_method(modulename, code, fn):
"""
Add the code if it wasn't added before. The module
name will be used for that
Variables:
@modulename a short name e.g. base.bbclass
@@ -81,4 +81,4 @@ def get_parsed_dict():
"""
shortcut
"""
return _parsed_methods

View File

@@ -1,237 +0,0 @@
#!/usr/bin/env python
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Copyright (C) 2012 Robert Yang
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os, logging, re, sys
import bb
logger = logging.getLogger("BitBake.Monitor")
def printErr(info):
logger.error("%s\n Disk space monitor will NOT be enabled" % info)
def convertGMK(unit):
""" Convert the space unit G, M, K, the unit is case-insensitive """
unitG = re.match('([1-9][0-9]*)[gG]\s?$', unit)
if unitG:
return int(unitG.group(1)) * (1024 ** 3)
unitM = re.match('([1-9][0-9]*)[mM]\s?$', unit)
if unitM:
return int(unitM.group(1)) * (1024 ** 2)
unitK = re.match('([1-9][0-9]*)[kK]\s?$', unit)
if unitK:
return int(unitK.group(1)) * 1024
unitN = re.match('([1-9][0-9]*)\s?$', unit)
if unitN:
return int(unitN.group(1))
else:
return None
def getMountedDev(path):
""" Get the device mounted at the path, uses /proc/mounts """
# Get the mount point of the filesystem containing path
# st_dev is the ID of device containing file
parentDev = os.stat(path).st_dev
currentDev = parentDev
# When the current directory's device differs from its parent's,
# the current directory is a mount point
while parentDev == currentDev:
mountPoint = path
# Use dirname to get the parent directory
path = os.path.dirname(path)
# Stop once we reach "/"
if path == mountPoint:
break
parentDev= os.stat(path).st_dev
try:
with open("/proc/mounts", "r") as ifp:
for line in ifp:
procLines = line.rstrip('\n').split()
if procLines[1] == mountPoint:
return procLines[0]
except EnvironmentError:
pass
return None
def getDiskData(BBDirs, configuration):
"""Prepare disk data for disk space monitor"""
# Save the device IDs; the IDs need to be unique (they are the dictionary's
# keys), so that when more than one directory is located on the same
# device, we only monitor it once
devDict = {}
for pathSpaceInode in BBDirs.split():
# The input format is: "action,dir,space,inode"; action and dir are
# required, space and inode are optional
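# e.g. BB_DISKMON_DIRS = "STOPTASKS,${TMPDIR},1G,100K" (illustrative value)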
pathSpaceInodeRe = re.match('([^,]*),([^,]*),([^,]*),?(.*)', pathSpaceInode)
if not pathSpaceInodeRe:
printErr("Invalid value in BB_DISKMON_DIRS: %s" % pathSpaceInode)
return None
action = pathSpaceInodeRe.group(1)
if action not in ("ABORT", "STOPTASKS", "WARN"):
printErr("Unknown disk space monitor action: %s" % action)
return None
path = os.path.realpath(pathSpaceInodeRe.group(2))
if not path:
printErr("Invalid path value in BB_DISKMON_DIRS: %s" % pathSpaceInode)
return None
# The disk space or inode is optional, but it should have a correct
# value once it is specified
minSpace = pathSpaceInodeRe.group(3)
if minSpace:
minSpace = convertGMK(minSpace)
if not minSpace:
printErr("Invalid disk space value in BB_DISKMON_DIRS: %s" % pathSpaceInodeRe.group(3))
return None
else:
# Leave as None when not specified
minSpace = None
minInode = pathSpaceInodeRe.group(4)
if minInode:
minInode = convertGMK(minInode)
if not minInode:
printErr("Invalid inode value in BB_DISKMON_DIRS: %s" % pathSpaceInodeRe.group(4))
return None
else:
# Leave as None when not specified
minInode = None
if minSpace is None and minInode is None:
printErr("No disk space or inode value in found BB_DISKMON_DIRS: %s" % pathSpaceInode)
return None
# mkdir for the directory since it may not exist, for example the
# DL_DIR may not exist at the very beginning
if not os.path.exists(path):
bb.utils.mkdirhier(path)
mountedDev = getMountedDev(path)
devDict[mountedDev] = action, path, minSpace, minInode
return devDict
def getInterval(configuration):
""" Get the disk space interval """
interval = configuration.getVar("BB_DISKMON_WARNINTERVAL", 1)
if not interval:
# The default value is 50M and 5K.
return 50 * 1024 * 1024, 5 * 1024
else:
# The disk space or inode interval is optional, but it should
# have a correct value once it is specified
intervalRe = re.match('([^,]*),?\s*(.*)', interval)
if intervalRe:
intervalSpace = intervalRe.group(1)
if intervalSpace:
intervalSpace = convertGMK(intervalSpace)
if not intervalSpace:
printErr("Invalid disk space interval value in BB_DISKMON_WARNINTERVAL: %s" % intervalRe.group(1))
return None, None
intervalInode = intervalRe.group(2)
if intervalInode:
intervalInode = convertGMK(intervalInode)
if not intervalInode:
printErr("Invalid disk inode interval value in BB_DISKMON_WARNINTERVAL: %s" % intervalRe.group(2))
return None, None
return intervalSpace, intervalInode
else:
printErr("Invalid interval value in BB_DISKMON_WARNINTERVAL: %s" % interval)
return None, None
class diskMonitor:
"""Prepare the disk space monitor data"""
def __init__(self, configuration):
self.enableMonitor = False
BBDirs = configuration.getVar("BB_DISKMON_DIRS", 1) or None
if BBDirs:
self.devDict = getDiskData(BBDirs, configuration)
if self.devDict:
self.spaceInterval, self.inodeInterval = getInterval(configuration)
if self.spaceInterval and self.inodeInterval:
self.enableMonitor = True
# These save the previous free space and inode counts; we use them
# to avoid printing too many warning messages
self.preFreeS = {}
self.preFreeI = {}
# This is for STOPTASKS and ABORT, to avoid printing the message
# repeatedly while waiting for the tasks to finish
self.checked = {}
for dev in self.devDict:
self.preFreeS[dev] = 0
self.preFreeI[dev] = 0
self.checked[dev] = False
if self.spaceInterval is None and self.inodeInterval is None:
self.enableMonitor = False
def check(self, rq):
""" Take action for the monitor """
if self.enableMonitor:
for dev in self.devDict:
st = os.statvfs(self.devDict[dev][1])
# The free space, a floating point number
freeSpace = st.f_bavail * st.f_frsize
if self.devDict[dev][2] and freeSpace < self.devDict[dev][2]:
# Always show warning, the self.checked would always be False if the action is WARN
if self.preFreeS[dev] == 0 or self.preFreeS[dev] - freeSpace > self.spaceInterval and not self.checked[dev]:
logger.warn("The free space of %s is running low (%.3fGB left)" % (dev, freeSpace / 1024 / 1024 / 1024.0))
self.preFreeS[dev] = freeSpace
if self.devDict[dev][0] == "STOPTASKS" and not self.checked[dev]:
logger.error("No new tasks can be excuted since the disk space monitor action is \"STOPTASKS\"!")
self.checked[dev] = True
rq.finish_runqueue(False)
elif self.devDict[dev][0] == "ABORT" and not self.checked[dev]:
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
self.checked[dev] = True
rq.finish_runqueue(True)
# The free inodes, a floating point number
freeInode = st.f_favail
if self.devDict[dev][3] and freeInode < self.devDict[dev][3]:
# Always show warning, the self.checked would always be False if the action is WARN
if self.preFreeI[dev] == 0 or self.preFreeI[dev] - freeInode > self.inodeInterval and not self.checked[dev]:
logger.warn("The free inode of %s is running low (%.3fK left)" % (dev, freeInode / 1024.0))
self.preFreeI[dev] = freeInode
if self.devDict[dev][0] == "STOPTASKS" and not self.checked[dev]:
logger.error("No new tasks can be excuted since the disk space monitor action is \"STOPTASKS\"!")
self.checked[dev] = True
rq.finish_runqueue(False)
elif self.devDict[dev][0] == "ABORT" and not self.checked[dev]:
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
self.checked[dev] = True
rq.finish_runqueue(True)
return

View File

@@ -22,125 +22,108 @@ Message handling infrastructure for bitbake
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import sys
import logging
import collections
from itertools import groupby
import warnings
import bb
import bb.event
import sys, os, re, bb
from bb import utils, event
class BBLogFormatter(logging.Formatter):
"""Formatter which ensures that our 'plain' messages (logging.INFO + 1) are used as is"""
debug_level = {}
DEBUG3 = logging.DEBUG - 2
DEBUG2 = logging.DEBUG - 1
DEBUG = logging.DEBUG
VERBOSE = logging.INFO - 1
NOTE = logging.INFO
PLAIN = logging.INFO + 1
ERROR = logging.ERROR
WARNING = logging.WARNING
CRITICAL = logging.CRITICAL
verbose = False
levelnames = {
DEBUG3 : 'DEBUG',
DEBUG2 : 'DEBUG',
DEBUG : 'DEBUG',
VERBOSE: 'NOTE',
NOTE : 'NOTE',
PLAIN : '',
WARNING : 'WARNING',
ERROR : 'ERROR',
CRITICAL: 'ERROR',
}
def getLevelName(self, levelno):
try:
return self.levelnames[levelno]
except KeyError:
self.levelnames[levelno] = value = 'Level %d' % levelno
return value
def format(self, record):
record.levelname = self.getLevelName(record.levelno)
if record.levelno == self.PLAIN:
msg = record.getMessage()
else:
msg = logging.Formatter.format(self, record)
if hasattr(record, 'bb_exc_info'):
etype, value, tb = record.bb_exc_info
formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
msg += '\n' + ''.join(formatted)
return msg
class BBLogFilter(object):
def __init__(self, handler, level, debug_domains):
self.stdlevel = level
self.debug_domains = debug_domains
loglevel = level
for domain in debug_domains:
if debug_domains[domain] < loglevel:
loglevel = debug_domains[domain]
handler.setLevel(loglevel)
handler.addFilter(self)
def filter(self, record):
if record.levelno >= self.stdlevel:
return True
if record.name in self.debug_domains and record.levelno >= self.debug_domains[record.name]:
return True
return False
domain = bb.utils.Enum(
'Build',
'Cache',
'Collection',
'Data',
'Depends',
'Fetcher',
'Parsing',
'PersistData',
'Provider',
'RunQueue',
'TaskData',
'Util')
class MsgBase(bb.event.Event):
"""Base class for messages"""
def __init__(self, msg, d ):
self._message = msg
event.Event.__init__(self, d)
class MsgDebug(MsgBase):
"""Debug Message"""
class MsgNote(MsgBase):
"""Note Message"""
class MsgWarn(MsgBase):
"""Warning Message"""
class MsgError(MsgBase):
"""Error Message"""
class MsgFatal(MsgBase):
"""Fatal Message"""
class MsgPlain(MsgBase):
"""General output"""
#
# Message control functions
#
loggerDefaultDebugLevel = 0
loggerDefaultVerbose = False
loggerVerboseLogs = False
loggerDefaultDomains = []
def set_debug_level(level):
bb.msg.debug_level = {}
for domain in bb.msg.domain:
bb.msg.debug_level[domain] = level
bb.msg.debug_level['default'] = level
def init_msgconfig(verbose, debug, debug_domains = []):
"""
Set default verbosity and debug levels config the logger
"""
bb.msg.loggerDefaultDebugLevel = debug
bb.msg.loggerDefaultVerbose = verbose
if verbose:
bb.msg.loggerVerboseLogs = True
bb.msg.loggerDefaultDomains = debug_domains
def set_verbose(level):
bb.msg.verbose = level
def addDefaultlogFilter(handler):
debug = loggerDefaultDebugLevel
verbose = loggerDefaultVerbose
domains = loggerDefaultDomains
if debug:
level = BBLogFormatter.DEBUG - debug + 1
elif verbose:
level = BBLogFormatter.VERBOSE
else:
level = BBLogFormatter.NOTE
debug_domains = {}
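# groupby counts consecutive repetitions of a domain name; each repetition
# deepens that domain's debug level by one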
for (domainarg, iterator) in groupby(domains):
dlevel = len(tuple(iterator))
debug_domains["BitBake.%s" % domainarg] = logging.DEBUG - dlevel + 1
BBLogFilter(handler, level, debug_domains)
def set_debug_domains(domains):
for domain in domains:
found = False
for ddomain in bb.msg.domain:
if domain == str(ddomain):
bb.msg.debug_level[ddomain] = bb.msg.debug_level[ddomain] + 1
found = True
if not found:
bb.msg.warn(None, "Logging domain %s is not valid, ignoring" % domain)
#
# Message handling functions
#
def fatal(msgdomain, msg):
if msgdomain:
logger = logging.getLogger("BitBake.%s" % msgdomain)
else:
logger = logging.getLogger("BitBake")
logger.critical(msg)
def debug(level, domain, msg, fn = None):
bb.event.fire(MsgDebug(msg, None))
if not domain:
domain = 'default'
if debug_level[domain] >= level:
print 'DEBUG: ' + msg
def note(level, domain, msg, fn = None):
bb.event.fire(MsgNote(msg, None))
if not domain:
domain = 'default'
if level == 1 or verbose or debug_level[domain] >= 1:
print 'NOTE: ' + msg
def warn(domain, msg, fn = None):
bb.event.fire(MsgWarn(msg, None))
print 'WARNING: ' + msg
def error(domain, msg, fn = None):
bb.event.fire(MsgError(msg, None))
print 'ERROR: ' + msg
def fatal(domain, msg, fn = None):
bb.event.fire(MsgFatal(msg, None))
print 'ERROR: ' + msg
sys.exit(1)
def plain(msg, fn = None):
bb.event.fire(MsgPlain(msg, None))
print msg

View File

@@ -1,255 +0,0 @@
# http://code.activestate.com/recipes/577629-namedtupleabc-abstract-base-class-mix-in-for-named/
#!/usr/bin/env python
# Copyright (c) 2011 Jan Kaliszewski (zuo). Available under the MIT License.
"""
namedtuple_with_abc.py:
* named tuple mix-in + ABC (abstract base class) recipe,
* works under Python 2.6, 2.7 as well as 3.x.
Import this module to patch collections.namedtuple() factory function
-- enriching it with the 'abc' attribute (an abstract base class + mix-in
for named tuples) and decorating it with a wrapper that registers each
newly created named tuple as a subclass of namedtuple.abc.
How to import:
import collections, namedtuple_with_abc
or:
import namedtuple_with_abc
from collections import namedtuple
# ^ in this variant you must import namedtuple function
# *after* importing namedtuple_with_abc module
or simply:
from namedtuple_with_abc import namedtuple
Simple usage example:
class Credentials(namedtuple.abc):
_fields = 'username password'
def __str__(self):
return ('{0.__class__.__name__}'
'(username={0.username}, password=...)'.format(self))
print(Credentials("alice", "Alice's password"))
For more advanced examples -- see below the "if __name__ == '__main__':".
"""
import collections
from abc import ABCMeta, abstractproperty
from functools import wraps
from sys import version_info
__all__ = ('namedtuple',)
_namedtuple = collections.namedtuple
class _NamedTupleABCMeta(ABCMeta):
'''The metaclass for the abstract base class + mix-in for named tuples.'''
def __new__(mcls, name, bases, namespace):
fields = namespace.get('_fields')
for base in bases:
if fields is not None:
break
fields = getattr(base, '_fields', None)
if not isinstance(fields, abstractproperty):
basetuple = _namedtuple(name, fields)
bases = (basetuple,) + bases
namespace.pop('_fields', None)
namespace.setdefault('__doc__', basetuple.__doc__)
namespace.setdefault('__slots__', ())
return ABCMeta.__new__(mcls, name, bases, namespace)
exec(
# Python 2.x metaclass declaration syntax
"""class _NamedTupleABC(object):
'''The abstract base class + mix-in for named tuples.'''
__metaclass__ = _NamedTupleABCMeta
_fields = abstractproperty()""" if version_info[0] < 3 else
# Python 3.x metaclass declaration syntax
"""class _NamedTupleABC(metaclass=_NamedTupleABCMeta):
'''The abstract base class + mix-in for named tuples.'''
_fields = abstractproperty()"""
)
_namedtuple.abc = _NamedTupleABC
#_NamedTupleABC.register(type(version_info)) # (and similar, in the future...)
@wraps(_namedtuple)
def namedtuple(*args, **kwargs):
'''Named tuple factory with namedtuple.abc subclass registration.'''
cls = _namedtuple(*args, **kwargs)
_NamedTupleABC.register(cls)
return cls
collections.namedtuple = namedtuple
if __name__ == '__main__':
'''Examples and explanations'''
# Simple usage
class MyRecord(namedtuple.abc):
_fields = 'x y z' # such form will be transformed into ('x', 'y', 'z')
def _my_custom_method(self):
return list(self._asdict().items())
# (the '_fields' attribute belongs to the named tuple public API anyway)
rec = MyRecord(1, 2, 3)
print(rec)
print(rec._my_custom_method())
print(rec._replace(y=222))
print(rec._replace(y=222)._my_custom_method())
# Custom abstract classes...
class MyAbstractRecord(namedtuple.abc):
def _my_custom_method(self):
return list(self._asdict().items())
try:
MyAbstractRecord() # (abstract classes cannot be instantiated)
except TypeError as exc:
print(exc)
class AnotherAbstractRecord(MyAbstractRecord):
def __str__(self):
return '<<<{0}>>>'.format(super(AnotherAbstractRecord,
self).__str__())
# ...and their non-abstract subclasses
class MyRecord2(MyAbstractRecord):
_fields = 'a, b'
class MyRecord3(AnotherAbstractRecord):
_fields = 'p', 'q', 'r'
rec2 = MyRecord2('foo', 'bar')
print(rec2)
print(rec2._my_custom_method())
print(rec2._replace(b=222))
print(rec2._replace(b=222)._my_custom_method())
rec3 = MyRecord3('foo', 'bar', 'baz')
print(rec3)
print(rec3._my_custom_method())
print(rec3._replace(q=222))
print(rec3._replace(q=222)._my_custom_method())
# You can also subclass non-abstract ones...
class MyRecord33(MyRecord3):
def __str__(self):
return '< {0!r}, ..., {0!r} >'.format(self.p, self.r)
rec33 = MyRecord33('foo', 'bar', 'baz')
print(rec33)
print(rec33._my_custom_method())
print(rec33._replace(q=222))
print(rec33._replace(q=222)._my_custom_method())
# ...and even override the magic '_fields' attribute again
class MyRecord345(MyRecord3):
_fields = 'e f g h i j k'
rec345 = MyRecord345(1, 2, 3, 4, 3, 2, 1)
print(rec345)
print(rec345._my_custom_method())
print(rec345._replace(f=222))
print(rec345._replace(f=222)._my_custom_method())
# Mixing-in some other classes is also possible:
class MyMixIn(object):
def method(self):
return "MyMixIn.method() called"
def _my_custom_method(self):
return "MyMixIn._my_custom_method() called"
def count(self, item):
return "MyMixIn.count({0}) called".format(item)
def _asdict(self): # (cannot override a namedtuple method, see below)
return "MyMixIn._asdict() called"
class MyRecord4(MyRecord33, MyMixIn): # mix-in on the right
_fields = 'j k l x'
class MyRecord5(MyMixIn, MyRecord33): # mix-in on the left
_fields = 'j k l x y'
rec4 = MyRecord4(1, 2, 3, 2)
print(rec4)
print(rec4.method())
print(rec4._my_custom_method()) # MyRecord33's
print(rec4.count(2)) # tuple's
print(rec4._replace(k=222))
print(rec4._replace(k=222).method())
print(rec4._replace(k=222)._my_custom_method()) # MyRecord33's
print(rec4._replace(k=222).count(8)) # tuple's
rec5 = MyRecord5(1, 2, 3, 2, 1)
print(rec5)
print(rec5.method())
print(rec5._my_custom_method()) # MyMixIn's
print(rec5.count(2)) # MyMixIn's
print(rec5._replace(k=222))
print(rec5._replace(k=222).method())
print(rec5._replace(k=222)._my_custom_method()) # MyMixIn's
print(rec5._replace(k=222).count(2)) # MyMixIn's
# Note that the standard namedtuple methods cannot be
# overridden by a foreign mix-in -- even if the mix-in is declared
# as the leftmost base class (but, obviously, you can override them
# in the defined class or its subclasses):
print(rec4._asdict()) # (returns a dict, not "MyMixIn._asdict() called")
print(rec5._asdict()) # (returns a dict, not "MyMixIn._asdict() called")
class MyRecord6(MyRecord33):
_fields = 'j k l x y z'
def _asdict(self):
return "MyRecord6._asdict() called"
rec6 = MyRecord6(1, 2, 3, 1, 2, 3)
print(rec6._asdict()) # (this returns "MyRecord6._asdict() called")
# All these record classes are real subclasses of namedtuple.abc:
assert issubclass(MyRecord, namedtuple.abc)
assert issubclass(MyAbstractRecord, namedtuple.abc)
assert issubclass(AnotherAbstractRecord, namedtuple.abc)
assert issubclass(MyRecord2, namedtuple.abc)
assert issubclass(MyRecord3, namedtuple.abc)
assert issubclass(MyRecord33, namedtuple.abc)
assert issubclass(MyRecord345, namedtuple.abc)
assert issubclass(MyRecord4, namedtuple.abc)
assert issubclass(MyRecord5, namedtuple.abc)
assert issubclass(MyRecord6, namedtuple.abc)
# ...but abstract ones are not subclasses of tuple
# (and this is what you probably want):
assert not issubclass(MyAbstractRecord, tuple)
assert not issubclass(AnotherAbstractRecord, tuple)
assert issubclass(MyRecord, tuple)
assert issubclass(MyRecord2, tuple)
assert issubclass(MyRecord3, tuple)
assert issubclass(MyRecord33, tuple)
assert issubclass(MyRecord345, tuple)
assert issubclass(MyRecord4, tuple)
assert issubclass(MyRecord5, tuple)
assert issubclass(MyRecord6, tuple)
# Named tuple classes created with namedtuple() factory function
# (in the "traditional" way) are registered as "virtual" subclasses
# of namedtuple.abc:
MyTuple = namedtuple('MyTuple', 'a b c')
mt = MyTuple(1, 2, 3)
assert issubclass(MyTuple, namedtuple.abc)
assert isinstance(mt, namedtuple.abc)


@@ -24,58 +24,42 @@ File parsers for the BitBake build tools.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
__all__ = [ 'ParseError', 'SkipPackage', 'cached_mtime', 'mark_dependency',
'supports', 'handle', 'init' ]
handlers = []
import os
import stat
import logging
import bb
import bb.utils
import bb.siggen
logger = logging.getLogger("BitBake.Parsing")
import bb, os
class ParseError(Exception):
"""Exception raised when parsing fails"""
def __init__(self, msg, filename, lineno=0):
self.msg = msg
self.filename = filename
self.lineno = lineno
Exception.__init__(self, msg, filename, lineno)
def __str__(self):
if self.lineno:
return "ParseError at %s:%d: %s" % (self.filename, self.lineno, self.msg)
else:
return "ParseError in %s: %s" % (self.filename, self.msg)
class SkipPackage(Exception):
"""Exception raised to skip this package"""
__mtime_cache = {}
def cached_mtime(f):
if f not in __mtime_cache:
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
if not __mtime_cache.has_key(f):
__mtime_cache[f] = os.stat(f)[8]
return __mtime_cache[f]
def cached_mtime_noerror(f):
if f not in __mtime_cache:
if not __mtime_cache.has_key(f):
try:
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
__mtime_cache[f] = os.stat(f)[8]
except OSError:
return 0
return __mtime_cache[f]
def update_mtime(f):
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
__mtime_cache[f] = os.stat(f)[8]
return __mtime_cache[f]
def mark_dependency(d, f):
if f.startswith('./'):
f = "%s/%s" % (os.getcwd(), f[2:])
deps = d.getVar('__depends') or set()
deps.update([(f, cached_mtime(f))])
d.setVar('__depends', deps)
deps = bb.data.getVar('__depends', d) or []
deps.append( (f, cached_mtime(f)) )
bb.data.setVar('__depends', deps, d)
def supports(fn, data):
"""Returns true if we have a handler for this file, false otherwise"""
@@ -89,55 +73,12 @@ def handle(fn, data, include = 0):
for h in handlers:
if h['supports'](fn, data):
return h['handle'](fn, data, include)
raise ParseError("not a BitBake file", fn)
raise ParseError("%s is not a BitBake file" % fn)
def init(fn, data):
for h in handlers:
if h['supports'](fn):
return h['init'](data)
def init_parser(d):
bb.parse.siggen = bb.siggen.init(d)
def resolve_file(fn, d):
if not os.path.isabs(fn):
bbpath = d.getVar("BBPATH", True)
newfn = bb.utils.which(bbpath, fn)
if not newfn:
raise IOError("file %s not found in %s" % (fn, bbpath))
fn = newfn
logger.debug(2, "LOAD %s", fn)
return fn
# Used by OpenEmbedded metadata
__pkgsplit_cache__={}
def vars_from_file(mypkg, d):
if not mypkg:
return (None, None, None)
if mypkg in __pkgsplit_cache__:
return __pkgsplit_cache__[mypkg]
myfile = os.path.splitext(os.path.basename(mypkg))
parts = myfile[0].split('_')
__pkgsplit_cache__[mypkg] = parts
if len(parts) > 3:
raise ParseError("Unable to generate default variables from filename (too many underscores)", mypkg)
exp = 3 - len(parts)
tmplist = []
while exp != 0:
exp -= 1
tmplist.append(None)
parts.extend(tmplist)
return parts
def get_file_depends(d):
'''Return the dependent files'''
dep_files = []
depends = d.getVar('__depends', True) or set()
depends = depends.union(d.getVar('__base_depends', True) or set())
for (fn, _) in depends:
dep_files.append(os.path.abspath(fn))
return " ".join(dep_files)
from bb.parse.parse_py import __version__, ConfHandler, BBHandler
from parse_py import __version__, ConfHandler, BBHandler
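
vars_from_file() above derives (PN, PV, PR) from a recipe filename, padding missing fields with None. A standalone sketch of the same split; the helper name is invented for illustration:

import os

def split_recipe_name(path):
    # mirrors vars_from_file(): stem split on '_', padded to three fields
    parts = os.path.splitext(os.path.basename(path))[0].split('_')
    return parts + [None] * (3 - len(parts))

print(split_recipe_name("/recipes/foo_1.0_r0.bb"))  # ['foo', '1.0', 'r0']
print(split_recipe_name("bar_2.3.bb"))              # ['bar', '2.3', None]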


@@ -1,473 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
AbstractSyntaxTree classes for the Bitbake language
"""
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2003, 2004 Phil Blundell
# Copyright (C) 2009 Holger Hans Peter Freyther
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from __future__ import absolute_import
from future_builtins import filter
import re
import string
import logging
import bb
import itertools
from bb import methodpool
from bb.parse import logger
__parsed_methods__ = bb.methodpool.get_parsed_dict()
_bbversions_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")
class StatementGroup(list):
def eval(self, data):
for statement in self:
statement.eval(data)
class AstNode(object):
def __init__(self, filename, lineno):
self.filename = filename
self.lineno = lineno
class IncludeNode(AstNode):
def __init__(self, filename, lineno, what_file, force):
AstNode.__init__(self, filename, lineno)
self.what_file = what_file
self.force = force
def eval(self, data):
"""
Include the file and evaluate the statements
"""
s = data.expand(self.what_file)
logger.debug(2, "CONF %s:%s: including %s", self.filename, self.lineno, s)
# TODO: Cache those includes... maybe not here though
if self.force:
bb.parse.ConfHandler.include(self.filename, s, self.lineno, data, "include required")
else:
bb.parse.ConfHandler.include(self.filename, s, self.lineno, data, False)
class ExportNode(AstNode):
def __init__(self, filename, lineno, var):
AstNode.__init__(self, filename, lineno)
self.var = var
def eval(self, data):
data.setVarFlag(self.var, "export", 1)
class DataNode(AstNode):
"""
Various data related updates. For the sake of sanity
we have one class doing all this. This means that all
this needs to be re-evaluated... we might be able to do
that faster with multiple classes.
"""
def __init__(self, filename, lineno, groupd):
AstNode.__init__(self, filename, lineno)
self.groupd = groupd
def getFunc(self, key, data):
if 'flag' in self.groupd and self.groupd['flag'] != None:
return data.getVarFlag(key, self.groupd['flag'], noweakdefault=True)
else:
return data.getVar(key, noweakdefault=True)
def eval(self, data):
groupd = self.groupd
key = groupd["var"]
if "exp" in groupd and groupd["exp"] != None:
data.setVarFlag(key, "export", 1)
if "ques" in groupd and groupd["ques"] != None:
val = self.getFunc(key, data)
if val == None:
val = groupd["value"]
elif "colon" in groupd and groupd["colon"] != None:
e = data.createCopy()
bb.data.update_data(e)
val = e.expand(groupd["value"], key + "[:=]")
elif "append" in groupd and groupd["append"] != None:
val = "%s %s" % ((self.getFunc(key, data) or ""), groupd["value"])
elif "prepend" in groupd and groupd["prepend"] != None:
val = "%s %s" % (groupd["value"], (self.getFunc(key, data) or ""))
elif "postdot" in groupd and groupd["postdot"] != None:
val = "%s%s" % ((self.getFunc(key, data) or ""), groupd["value"])
elif "predot" in groupd and groupd["predot"] != None:
val = "%s%s" % (groupd["value"], (self.getFunc(key, data) or ""))
else:
val = groupd["value"]
if 'flag' in groupd and groupd['flag'] != None:
data.setVarFlag(key, groupd['flag'], val)
elif groupd["lazyques"]:
data.setVarFlag(key, "defaultval", val)
else:
data.setVar(key, val)
class MethodNode(AstNode):
def __init__(self, filename, lineno, func_name, body):
AstNode.__init__(self, filename, lineno)
self.func_name = func_name
self.body = body
def eval(self, data):
if self.func_name == "__anonymous":
funcname = ("__anon_%s_%s" % (self.lineno, self.filename.translate(string.maketrans('/.+-', '____'))))
if not funcname in bb.methodpool._parsed_fns:
text = "def %s(d):\n" % (funcname) + '\n'.join(self.body)
bb.methodpool.insert_method(funcname, text, self.filename)
anonfuncs = data.getVar('__BBANONFUNCS') or []
anonfuncs.append(funcname)
data.setVar('__BBANONFUNCS', anonfuncs)
else:
data.setVarFlag(self.func_name, "func", 1)
data.setVar(self.func_name, '\n'.join(self.body))
class PythonMethodNode(AstNode):
def __init__(self, filename, lineno, function, define, body):
AstNode.__init__(self, filename, lineno)
self.function = function
self.define = define
self.body = body
def eval(self, data):
# Note we will add root to parsedmethods after having parsed
# 'this' file. This means we will not parse methods from
# bb classes twice
text = '\n'.join(self.body)
if not bb.methodpool.parsed_module(self.define):
bb.methodpool.insert_method(self.define, text, self.filename)
data.setVarFlag(self.function, "func", 1)
data.setVarFlag(self.function, "python", 1)
data.setVar(self.function, text)
class MethodFlagsNode(AstNode):
def __init__(self, filename, lineno, key, m):
AstNode.__init__(self, filename, lineno)
self.key = key
self.m = m
def eval(self, data):
if data.getVar(self.key):
# clean up old version of this piece of metadata, as its
# flags could cause problems
data.setVarFlag(self.key, 'python', None)
data.setVarFlag(self.key, 'fakeroot', None)
if self.m.group("py") is not None:
data.setVarFlag(self.key, "python", "1")
else:
data.delVarFlag(self.key, "python")
if self.m.group("fr") is not None:
data.setVarFlag(self.key, "fakeroot", "1")
else:
data.delVarFlag(self.key, "fakeroot")
class ExportFuncsNode(AstNode):
def __init__(self, filename, lineno, fns, classes):
AstNode.__init__(self, filename, lineno)
self.n = fns.split()
self.classes = classes
def eval(self, data):
for f in self.n:
allvars = []
allvars.append(f)
allvars.append(self.classes[-1] + "_" + f)
vars = [[ allvars[0], allvars[1] ]]
if len(self.classes) > 1 and self.classes[-2] is not None:
allvars.append(self.classes[-2] + "_" + f)
vars = []
vars.append([allvars[2], allvars[1]])
vars.append([allvars[0], allvars[2]])
for (var, calledvar) in vars:
if data.getVar(var) and not data.getVarFlag(var, 'export_func'):
continue
if data.getVar(var):
data.setVarFlag(var, 'python', None)
data.setVarFlag(var, 'func', None)
for flag in [ "func", "python" ]:
if data.getVarFlag(calledvar, flag):
data.setVarFlag(var, flag, data.getVarFlag(calledvar, flag))
for flag in [ "dirs" ]:
if data.getVarFlag(var, flag):
data.setVarFlag(calledvar, flag, data.getVarFlag(var, flag))
if data.getVarFlag(calledvar, "python"):
data.setVar(var, "\tbb.build.exec_func('" + calledvar + "', d)\n")
else:
data.setVar(var, "\t" + calledvar + "\n")
data.setVarFlag(var, 'export_func', '1')
class AddTaskNode(AstNode):
def __init__(self, filename, lineno, func, before, after):
AstNode.__init__(self, filename, lineno)
self.func = func
self.before = before
self.after = after
def eval(self, data):
var = self.func
if self.func[:3] != "do_":
var = "do_" + self.func
data.setVarFlag(var, "task", 1)
bbtasks = data.getVar('__BBTASKS') or []
if not var in bbtasks:
bbtasks.append(var)
data.setVar('__BBTASKS', bbtasks)
existing = data.getVarFlag(var, "deps") or []
if self.after is not None:
# set up deps for function
for entry in self.after.split():
if entry not in existing:
existing.append(entry)
data.setVarFlag(var, "deps", existing)
if self.before is not None:
# set up things that depend on this func
for entry in self.before.split():
existing = data.getVarFlag(entry, "deps") or []
if var not in existing:
data.setVarFlag(entry, "deps", [var] + existing)
class BBHandlerNode(AstNode):
def __init__(self, filename, lineno, fns):
AstNode.__init__(self, filename, lineno)
self.hs = fns.split()
def eval(self, data):
bbhands = data.getVar('__BBHANDLERS') or []
for h in self.hs:
bbhands.append(h)
data.setVarFlag(h, "handler", 1)
data.setVar('__BBHANDLERS', bbhands)
class InheritNode(AstNode):
def __init__(self, filename, lineno, classes):
AstNode.__init__(self, filename, lineno)
self.classes = classes
def eval(self, data):
bb.parse.BBHandler.inherit(self.classes, self.filename, self.lineno, data)
def handleInclude(statements, filename, lineno, m, force):
statements.append(IncludeNode(filename, lineno, m.group(1), force))
def handleExport(statements, filename, lineno, m):
statements.append(ExportNode(filename, lineno, m.group(1)))
def handleData(statements, filename, lineno, groupd):
statements.append(DataNode(filename, lineno, groupd))
def handleMethod(statements, filename, lineno, func_name, body):
statements.append(MethodNode(filename, lineno, func_name, body))
def handlePythonMethod(statements, filename, lineno, funcname, root, body):
statements.append(PythonMethodNode(filename, lineno, funcname, root, body))
def handleMethodFlags(statements, filename, lineno, key, m):
statements.append(MethodFlagsNode(filename, lineno, key, m))
def handleExportFuncs(statements, filename, lineno, m, classes):
statements.append(ExportFuncsNode(filename, lineno, m.group(1), classes))
def handleAddTask(statements, filename, lineno, m):
func = m.group("func")
before = m.group("before")
after = m.group("after")
if func is None:
return
statements.append(AddTaskNode(filename, lineno, func, before, after))
def handleBBHandlers(statements, filename, lineno, m):
statements.append(BBHandlerNode(filename, lineno, m.group(1)))
def handleInherit(statements, filename, lineno, m):
classes = m.group(1)
statements.append(InheritNode(filename, lineno, classes.split()))
def finalize(fn, d, variant = None):
all_handlers = {}
for var in d.getVar('__BBHANDLERS') or []:
# try to add the handler
handler = d.getVar(var)
bb.event.register(var, handler)
bb.event.fire(bb.event.RecipePreFinalise(fn), d)
bb.data.expandKeys(d)
bb.data.update_data(d)
code = []
for funcname in d.getVar("__BBANONFUNCS") or []:
code.append("%s(d)" % funcname)
bb.utils.simple_exec("\n".join(code), {"d": d})
bb.data.update_data(d)
tasklist = d.getVar('__BBTASKS') or []
bb.build.add_tasks(tasklist, d)
bb.parse.siggen.finalise(fn, d, variant)
d.setVar('BBINCLUDED', bb.parse.get_file_depends(d))
bb.event.fire(bb.event.RecipeParsed(fn), d)
def _create_variants(datastores, names, function):
def create_variant(name, orig_d, arg = None):
new_d = bb.data.createCopy(orig_d)
function(arg or name, new_d)
datastores[name] = new_d
for variant, variant_d in datastores.items():
for name in names:
if not variant:
# Based on main recipe
create_variant(name, variant_d)
else:
create_variant("%s-%s" % (variant, name), variant_d, name)
def _expand_versions(versions):
def expand_one(version, start, end):
for i in xrange(start, end + 1):
ver = _bbversions_re.sub(str(i), version, 1)
yield ver
versions = iter(versions)
while True:
try:
version = next(versions)
except StopIteration:
break
range_ver = _bbversions_re.search(version)
if not range_ver:
yield version
else:
newversions = expand_one(version, int(range_ver.group("from")),
int(range_ver.group("to")))
versions = itertools.chain(newversions, versions)
def multi_finalize(fn, d):
appends = (d.getVar("__BBAPPEND", True) or "").split()
for append in appends:
logger.debug(2, "Appending .bbappend file %s to %s", append, fn)
bb.parse.BBHandler.handle(append, d, True)
onlyfinalise = d.getVar("__ONLYFINALISE", False)
safe_d = d
d = bb.data.createCopy(safe_d)
try:
finalize(fn, d)
except bb.parse.SkipPackage as e:
d.setVar("__SKIPPED", e.args[0])
datastores = {"": safe_d}
versions = (d.getVar("BBVERSIONS", True) or "").split()
if versions:
pv = orig_pv = d.getVar("PV", True)
baseversions = {}
def verfunc(ver, d, pv_d = None):
if pv_d is None:
pv_d = d
overrides = d.getVar("OVERRIDES", True).split(":")
pv_d.setVar("PV", ver)
overrides.append(ver)
bpv = baseversions.get(ver) or orig_pv
pv_d.setVar("BPV", bpv)
overrides.append(bpv)
d.setVar("OVERRIDES", ":".join(overrides))
versions = list(_expand_versions(versions))
for pos, version in enumerate(list(versions)):
try:
pv, bpv = version.split(":", 2)
except ValueError:
pass
else:
versions[pos] = pv
baseversions[pv] = bpv
if pv in versions and not baseversions.get(pv):
versions.remove(pv)
else:
pv = versions.pop()
# This is necessary because our existing main datastore
# has already been finalized with the old PV, we need one
# that's been finalized with the new PV.
d = bb.data.createCopy(safe_d)
verfunc(pv, d, safe_d)
try:
finalize(fn, d)
except bb.parse.SkipPackage as e:
d.setVar("__SKIPPED", e.args[0])
_create_variants(datastores, versions, verfunc)
extended = d.getVar("BBCLASSEXTEND", True) or ""
if extended:
# the following is to support bbextends with arguments, for e.g. multilib
# an example is as follows:
# BBCLASSEXTEND = "multilib:lib32"
# it will create foo-lib32, inheriting multilib.bbclass and set
# BBEXTENDCURR to "multilib" and BBEXTENDVARIANT to "lib32"
extendedmap = {}
variantmap = {}
for ext in extended.split():
eext = ext.split(':', 2)
if len(eext) > 1:
extendedmap[ext] = eext[0]
variantmap[ext] = eext[1]
else:
extendedmap[ext] = ext
pn = d.getVar("PN", True)
def extendfunc(name, d):
if name != extendedmap[name]:
d.setVar("BBEXTENDCURR", extendedmap[name])
d.setVar("BBEXTENDVARIANT", variantmap[name])
else:
d.setVar("PN", "%s-%s" % (pn, name))
bb.parse.BBHandler.inherit([extendedmap[name]], fn, 0, d)
safe_d.setVar("BBCLASSEXTEND", extended)
_create_variants(datastores, extendedmap.keys(), extendfunc)
for variant, variant_d in datastores.iteritems():
if variant:
try:
if not onlyfinalise or variant in onlyfinalise:
finalize(fn, variant_d, variant)
except bb.parse.SkipPackage as e:
variant_d.setVar("__SKIPPED", e.args[0])
if len(datastores) > 1:
variants = filter(None, datastores.iterkeys())
safe_d.setVar("__VARIANTS", " ".join(variants))
datastores[""] = d
return datastores
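
BBVERSIONS entries can carry numeric ranges such as "2.[0-2]"; _expand_versions() above flattens them lazily. A standalone copy of that generator for experimentation:

import itertools
import re

_bbversions_re = re.compile(r"\[(?P<from>[0-9]+)-(?P<to>[0-9]+)\]")

def expand_versions(versions):
    # same logic as _expand_versions() above
    def expand_one(version, start, end):
        for i in range(start, end + 1):
            yield _bbversions_re.sub(str(i), version, 1)
    versions = iter(versions)
    while True:
        try:
            version = next(versions)
        except StopIteration:
            break
        range_ver = _bbversions_re.search(version)
        if not range_ver:
            yield version
        else:
            versions = itertools.chain(
                expand_one(version, int(range_ver.group("from")),
                           int(range_ver.group("to"))),
                versions)

print(list(expand_versions(["1.0", "2.[0-2]"])))
# -> ['1.0', '2.0', '2.1', '2.2']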


@@ -11,7 +11,7 @@
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2003, 2004 Phil Blundell
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
@@ -25,18 +25,12 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from __future__ import absolute_import
import re, bb, os
import logging
import bb.build, bb.utils
from bb import data
import re, bb, os, sys, time
import bb.fetch, bb.build, bb.utils
from bb import data, fetch, methodpool
from . import ConfHandler
from .. import resolve_file, ast, logger
from .ConfHandler import include, init
# For compatibility
bb.deprecate_import(__name__, "bb.parse", ["vars_from_file"])
from ConfHandler import include, localpath, obtain, init
from bb.parse import ParseError
__func_start_regexp__ = re.compile( r"(((?P<py>python)|(?P<fr>fakeroot))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$" )
__inherit_regexp__ = re.compile( r"inherit\s+(.+)" )
@@ -45,7 +39,7 @@ __addtask_regexp__ = re.compile("addtask\s+(?P<func>\w+)\s*((before\s*(?P<
__addhandler_regexp__ = re.compile( r"addhandler\s+(.+)" )
__def_regexp__ = re.compile( r"def\s+(\w+).*:" )
__python_func_regexp__ = re.compile( r"(\s+.*)|(^$)" )
__word__ = re.compile(r"\S+")
__infunc__ = ""
__inpython__ = False
@@ -53,8 +47,6 @@ __body__ = []
__classname__ = ""
classes = [ None, ]
cached_statements = {}
# We need to indicate EOF to the feeder. This code is so messy that
# factoring it out to a close_parse_file method is out of the question.
# We will use the IN_PYTHON_EOF as an indicator to just close the method
@@ -62,65 +54,42 @@ cached_statements = {}
# The two parts using it are tightly integrated anyway
IN_PYTHON_EOF = -9999999999999
__parsed_methods__ = methodpool.get_parsed_dict()
def supports(fn, d):
"""Return True if fn has a supported extension"""
return os.path.splitext(fn)[-1] in [".bb", ".bbclass", ".inc"]
localfn = localpath(fn, d)
return localfn[-3:] == ".bb" or localfn[-8:] == ".bbclass" or localfn[-4:] == ".inc"
def inherit(files, fn, lineno, d):
def inherit(files, d):
__inherit_cache = data.getVar('__inherit_cache', d) or []
fn = ""
lineno = 0
files = data.expand(files, d)
for file in files:
file = data.expand(file, d)
if not os.path.isabs(file) and not file.endswith(".bbclass"):
if file[0] != "/" and file[-8:] != ".bbclass":
file = os.path.join('classes', '%s.bbclass' % file)
if not file in __inherit_cache:
logger.log(logging.DEBUG -1, "BB %s:%d: inheriting %s", fn, lineno, file)
bb.msg.debug(2, bb.msg.domain.Parsing, "BB %s:%d: inheriting %s" % (fn, lineno, file))
__inherit_cache.append( file )
data.setVar('__inherit_cache', __inherit_cache, d)
include(fn, file, lineno, d, "inherit")
include(fn, file, d, "inherit")
__inherit_cache = data.getVar('__inherit_cache', d) or []
def get_statements(filename, absolute_filename, base_name):
global cached_statements
try:
return cached_statements[absolute_filename]
except KeyError:
file = open(absolute_filename, 'r')
statements = ast.StatementGroup()
lineno = 0
while True:
lineno = lineno + 1
s = file.readline()
if not s: break
s = s.rstrip()
feeder(lineno, s, filename, base_name, statements)
if __inpython__:
# add a blank line to close out any python definition
feeder(IN_PYTHON_EOF, "", filename, base_name, statements)
if filename.endswith(".bbclass") or filename.endswith(".inc"):
cached_statements[absolute_filename] = statements
return statements
def handle(fn, d, include):
def handle(fn, d, include = 0):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __infunc__, __body__, __residue__
__body__ = []
__infunc__ = ""
__classname__ = ""
__residue__ = []
if include == 0:
logger.debug(2, "BB %s: handle(data)", fn)
bb.msg.debug(2, bb.msg.domain.Parsing, "BB " + fn + ": handle(data)")
else:
logger.debug(2, "BB %s: handle(data, include)", fn)
bb.msg.debug(2, bb.msg.domain.Parsing, "BB " + fn + ": handle(data, include)")
base_name = os.path.basename(fn)
(root, ext) = os.path.splitext(base_name)
(root, ext) = os.path.splitext(os.path.basename(fn))
base_name = "%s%s" % (root,ext)
init(d)
if ext == ".bbclass":
@@ -136,41 +105,106 @@ def handle(fn, d, include):
else:
oldfile = None
abs_fn = resolve_file(fn, d)
fn = obtain(fn, d)
bbpath = (data.getVar('BBPATH', d, 1) or '').split(':')
if not os.path.isabs(fn):
f = None
for p in bbpath:
j = os.path.join(p, fn)
if os.access(j, os.R_OK):
abs_fn = j
f = open(j, 'r')
break
if f is None:
raise IOError("file not found")
else:
f = open(fn,'r')
abs_fn = fn
if ext != ".bbclass":
dname = os.path.dirname(abs_fn)
if bbpath[0] != dname:
bbpath.insert(0, dname)
data.setVar('BBPATH', ":".join(bbpath), d)
if include:
bb.parse.mark_dependency(d, abs_fn)
# actual loading
statements = get_statements(fn, abs_fn, base_name)
# DONE WITH PARSING... time to evaluate
if ext != ".bbclass":
data.setVar('FILE', abs_fn, d)
statements.eval(d)
data.setVar('FILE', fn, d)
lineno = 0
while 1:
lineno = lineno + 1
s = f.readline()
if not s: break
s = s.rstrip()
feeder(lineno, s, fn, base_name, d)
if __inpython__:
# add a blank line to close out any python definition
feeder(IN_PYTHON_EOF, "", fn, base_name, d)
if ext == ".bbclass":
classes.remove(__classname__)
else:
if include == 0:
return ast.multi_finalize(fn, d)
data.expandKeys(d)
data.update_data(d)
anonqueue = data.getVar("__anonqueue", d, 1) or []
body = [x['content'] for x in anonqueue]
flag = { 'python' : 1, 'func' : 1 }
data.setVar("__anonfunc", "\n".join(body), d)
data.setVarFlags("__anonfunc", flag, d)
from bb import build
try:
t = data.getVar('T', d)
data.setVar('T', '${TMPDIR}/', d)
build.exec_func("__anonfunc", d)
data.delVar('T', d)
if t:
data.setVar('T', t, d)
except Exception, e:
bb.msg.debug(1, bb.msg.domain.Parsing, "Exception when executing anonymous function: %s" % e)
raise
data.delVar("__anonqueue", d)
data.delVar("__anonfunc", d)
set_additional_vars(fn, d, include)
data.update_data(d)
all_handlers = {}
for var in data.getVar('__BBHANDLERS', d) or []:
# try to add the handler
handler = data.getVar(var,d)
bb.event.register(var, handler)
tasklist = data.getVar('__BBTASKS', d) or []
bb.build.add_tasks(tasklist, d)
bbpath.pop(0)
if oldfile:
d.setVar("FILE", oldfile)
bb.data.setVar("FILE", oldfile, d)
# we have parsed the bb class now
if ext == ".bbclass" or ext == ".inc":
bb.methodpool.get_parsed_dict()[base_name] = 1
__parsed_methods__[base_name] = 1
return d
def feeder(lineno, s, fn, root, statements):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__, __infunc__, __body__, classes, bb, __residue__
def feeder(lineno, s, fn, root, d):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __def_regexp__, __python_func_regexp__, __inpython__,__infunc__, __body__, classes, bb, __residue__
if __infunc__:
if s == '}':
__body__.append('')
ast.handleMethod(statements, fn, lineno, __infunc__, __body__)
data.setVar(__infunc__, '\n'.join(__body__), d)
data.setVarFlag(__infunc__, "func", 1, d)
if __infunc__ == "__anonymous":
anonqueue = bb.data.getVar("__anonqueue", d) or []
anonitem = {}
anonitem["content"] = bb.data.getVar("__anonymous", d)
anonitem["flags"] = bb.data.getVarFlags("__anonymous", d)
anonqueue.append(anonitem)
bb.data.setVar("__anonqueue", anonqueue, d)
bb.data.delVarFlags("__anonymous", d)
bb.data.delVar("__anonymous", d)
__infunc__ = ""
__body__ = []
else:
@@ -183,69 +217,200 @@ def feeder(lineno, s, fn, root, statements):
__body__.append(s)
return
else:
ast.handlePythonMethod(statements, fn, lineno, __inpython__,
root, __body__)
# Note we will add root to parsedmethods after having parsed
# 'this' file. This means we will not parse methods from
# bb classes twice
if not root in __parsed_methods__:
text = '\n'.join(__body__)
methodpool.insert_method( root, text, fn )
funcs = data.getVar('__functions__', d) or {}
if not funcs.has_key( root ):
funcs[root] = text
else:
funcs[root] = "%s\n%s" % (funcs[root], text)
data.setVar('__functions__', funcs, d)
__body__ = []
__inpython__ = False
if lineno == IN_PYTHON_EOF:
return
if s and s[0] == '#':
if len(__residue__) != 0 and __residue__[0][0] != "#":
bb.error("There is a comment on line %s of file %s (%s) which is in the middle of a multiline expression.\nBitbake used to ignore these but no longer does so, please fix your metadata as errors are likely as a result of this change." % (lineno, fn, s))
# fall through
if s and s[-1] == '\\':
if s == '' or s[0] == '#': return # skip comments and empty lines
if s[-1] == '\\':
__residue__.append(s[:-1])
return
s = "".join(__residue__) + s
__residue__ = []
# Skip empty lines
if s == '':
return
# Skip comments
if s[0] == '#':
return
m = __func_start_regexp__.match(s)
if m:
__infunc__ = m.group("func") or "__anonymous"
ast.handleMethodFlags(statements, fn, lineno, __infunc__, m)
key = __infunc__
if data.getVar(key, d):
# clean up old version of this piece of metadata, as its
# flags could cause problems
data.setVarFlag(key, 'python', None, d)
data.setVarFlag(key, 'fakeroot', None, d)
if m.group("py") is not None:
data.setVarFlag(key, "python", "1", d)
else:
data.delVarFlag(key, "python", d)
if m.group("fr") is not None:
data.setVarFlag(key, "fakeroot", "1", d)
else:
data.delVarFlag(key, "fakeroot", d)
return
m = __def_regexp__.match(s)
if m:
__body__.append(s)
__inpython__ = m.group(1)
__inpython__ = True
return
m = __export_func_regexp__.match(s)
if m:
ast.handleExportFuncs(statements, fn, lineno, m, classes)
fns = m.group(1)
n = __word__.findall(fns)
for f in n:
allvars = []
allvars.append(f)
allvars.append(classes[-1] + "_" + f)
vars = [[ allvars[0], allvars[1] ]]
if len(classes) > 1 and classes[-2] is not None:
allvars.append(classes[-2] + "_" + f)
vars = []
vars.append([allvars[2], allvars[1]])
vars.append([allvars[0], allvars[2]])
for (var, calledvar) in vars:
if data.getVar(var, d) and not data.getVarFlag(var, 'export_func', d):
continue
if data.getVar(var, d):
data.setVarFlag(var, 'python', None, d)
data.setVarFlag(var, 'func', None, d)
for flag in [ "func", "python" ]:
if data.getVarFlag(calledvar, flag, d):
data.setVarFlag(var, flag, data.getVarFlag(calledvar, flag, d), d)
for flag in [ "dirs" ]:
if data.getVarFlag(var, flag, d):
data.setVarFlag(calledvar, flag, data.getVarFlag(var, flag, d), d)
if data.getVarFlag(calledvar, "python", d):
data.setVar(var, "\tbb.build.exec_func('" + calledvar + "', d)\n", d)
else:
data.setVar(var, "\t" + calledvar + "\n", d)
data.setVarFlag(var, 'export_func', '1', d)
return
m = __addtask_regexp__.match(s)
if m:
ast.handleAddTask(statements, fn, lineno, m)
func = m.group("func")
before = m.group("before")
after = m.group("after")
if func is None:
return
var = "do_" + func
data.setVarFlag(var, "task", 1, d)
bbtasks = data.getVar('__BBTASKS', d) or []
if not var in bbtasks:
bbtasks.append(var)
data.setVar('__BBTASKS', bbtasks, d)
existing = data.getVarFlag(var, "deps", d) or []
if after is not None:
# set up deps for function
for entry in after.split():
if entry not in existing:
existing.append(entry)
data.setVarFlag(var, "deps", existing, d)
if before is not None:
# set up things that depend on this func
for entry in before.split():
existing = data.getVarFlag(entry, "deps", d) or []
if var not in existing:
data.setVarFlag(entry, "deps", [var] + existing, d)
return
m = __addhandler_regexp__.match(s)
if m:
ast.handleBBHandlers(statements, fn, lineno, m)
fns = m.group(1)
hs = __word__.findall(fns)
bbhands = data.getVar('__BBHANDLERS', d) or []
for h in hs:
bbhands.append(h)
data.setVarFlag(h, "handler", 1, d)
data.setVar('__BBHANDLERS', bbhands, d)
return
m = __inherit_regexp__.match(s)
if m:
ast.handleInherit(statements, fn, lineno, m)
files = m.group(1)
n = __word__.findall(files)
inherit(n, d)
return
return ConfHandler.feeder(lineno, s, fn, statements)
from bb.parse import ConfHandler
return ConfHandler.feeder(lineno, s, fn, d)
__pkgsplit_cache__={}
def vars_from_file(mypkg, d):
if not mypkg:
return (None, None, None)
if mypkg in __pkgsplit_cache__:
return __pkgsplit_cache__[mypkg]
myfile = os.path.splitext(os.path.basename(mypkg))
parts = myfile[0].split('_')
__pkgsplit_cache__[mypkg] = parts
if len(parts) > 3:
raise ParseError("Unable to generate default variables from the filename: %s (too many underscores)" % mypkg)
exp = 3 - len(parts)
tmplist = []
while exp != 0:
exp -= 1
tmplist.append(None)
parts.extend(tmplist)
return parts
def set_additional_vars(file, d, include):
"""Deduce rest of variables, e.g. ${A} out of ${SRC_URI}"""
return
# Nothing seems to use this variable
#bb.msg.debug(2, bb.msg.domain.Parsing, "BB %s: set_additional_vars" % file)
#src_uri = data.getVar('SRC_URI', d, 1)
#if not src_uri:
# return
#a = (data.getVar('A', d, 1) or '').split()
#from bb import fetch
#try:
# ud = fetch.init(src_uri.split(), d)
# a += fetch.localpaths(d, ud)
#except fetch.NoMethodError:
# pass
#except bb.MalformedUrl,e:
# raise ParseError("Unable to generate local paths for SRC_URI due to malformed uri: %s" % e)
#del fetch
#data.setVar('A', " ".join(a), d)
# Add us to the handlers list
from .. import handlers
from bb.parse import handlers
handlers.append({'supports': supports, 'handle': handle, 'init': init})
del handlers
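
To see how feeder() classifies function definitions, the __func_start_regexp__ pattern above can be exercised on its own; the sample lines are invented:

import re

# the same pattern as __func_start_regexp__ above
func_start = re.compile(
    r"(((?P<py>python)|(?P<fr>fakeroot))\s*)*"
    r"(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$")

for line in ["do_fetch() {", "python do_compile() {", "fakeroot do_install () {"]:
    m = func_start.match(line)
    # group('func') is the task name; 'py'/'fr' flag python/fakeroot functions
    print("%s -> func=%s python=%s fakeroot=%s"
          % (line, m.group("func"), bool(m.group("py")), bool(m.group("fr"))))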


@@ -10,7 +10,7 @@
# Copyright (C) 2003, 2004 Chris Larson
# Copyright (C) 2003, 2004 Phil Blundell
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
@@ -24,70 +24,129 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import re, os
import logging
import bb.utils
from bb.parse import ParseError, resolve_file, ast, logger
import re, bb.data, os, sys
from bb.parse import ParseError
__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<lazyques>\?\?=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?P<apo>['\"])(?P<value>.*)(?P=apo)$")
#__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}]+)\s*(?P<colon>:)?(?P<ques>\?)?=\s*(?P<apo>['\"]?)(?P<value>.*)(?P=apo)$")
__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?P<apo>['\"]?)(?P<value>.*)(?P=apo)$")
__include_regexp__ = re.compile( r"include\s+(.+)" )
__require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/]+)$" )
__export_regexp__ = re.compile( r"export\s+(.+)" )
def init(data):
topdir = data.getVar('TOPDIR')
if not topdir:
data.setVar('TOPDIR', os.getcwd())
if not bb.data.getVar('TOPDIR', data):
bb.data.setVar('TOPDIR', os.getcwd(), data)
if not bb.data.getVar('BBPATH', data):
bb.data.setVar('BBPATH', os.path.join(sys.prefix, 'share', 'bitbake'), data)
def supports(fn, d):
return fn[-5:] == ".conf"
return localpath(fn, d)[-5:] == ".conf"
def include(oldfn, fn, lineno, data, error_out):
def localpath(fn, d):
if os.path.exists(fn):
return fn
if "://" not in fn:
return fn
localfn = None
try:
localfn = bb.fetch.localpath(fn, d, False)
except bb.MalformedUrl:
pass
if not localfn:
return fn
return localfn
def obtain(fn, data):
import sys, bb
fn = bb.data.expand(fn, data)
localfn = bb.data.expand(localpath(fn, data), data)
if localfn != fn:
dldir = bb.data.getVar('DL_DIR', data, 1)
if not dldir:
bb.msg.debug(1, bb.msg.domain.Parsing, "obtain: DL_DIR not defined")
return localfn
bb.mkdirhier(dldir)
try:
bb.fetch.init([fn], data)
except bb.fetch.NoMethodError:
(type, value, traceback) = sys.exc_info()
bb.msg.debug(1, bb.msg.domain.Parsing, "obtain: no method: %s" % value)
return localfn
try:
bb.fetch.go(data)
except bb.fetch.MissingParameterError:
(type, value, traceback) = sys.exc_info()
bb.msg.debug(1, bb.msg.domain.Parsing, "obtain: missing parameters: %s" % value)
return localfn
except bb.fetch.FetchError:
(type, value, traceback) = sys.exc_info()
bb.msg.debug(1, bb.msg.domain.Parsing, "obtain: failed: %s" % value)
return localfn
return localfn
def include(oldfn, fn, data, error_out):
"""
error_out: A string indicating the verb (e.g. "include", "inherit") to be
used in a ParseError that will be raised if the file to be included could
not be included. Specify False to avoid raising an error in this case.
error_out: if True, a ParseError will be raised if the file to be included cannot be found
"""
if oldfn == fn: # prevent infinite recursion
if oldfn == fn: # prevent infinate recursion
return None
import bb
fn = data.expand(fn)
oldfn = data.expand(oldfn)
if not os.path.isabs(fn):
dname = os.path.dirname(oldfn)
bbpath = "%s:%s" % (dname, data.getVar("BBPATH", 1))
abs_fn = bb.utils.which(bbpath, fn)
if abs_fn:
fn = abs_fn
fn = bb.data.expand(fn, data)
oldfn = bb.data.expand(oldfn, data)
from bb.parse import handle
try:
ret = handle(fn, data, True)
except IOError:
if error_out:
raise ParseError("Could not %(error_out)s file %(fn)s" % vars(), oldfn, lineno)
logger.debug(2, "CONF file '%s' not found", fn)
raise ParseError("Could not %(error_out)s file %(fn)s" % vars() )
bb.msg.debug(2, bb.msg.domain.Parsing, "CONF file '%s' not found" % fn)
def handle(fn, data, include):
def handle(fn, data, include = 0):
if include:
inc_string = "including"
else:
inc_string = "reading"
init(data)
if include == 0:
bb.data.inheritFromOS(data)
oldfile = None
else:
oldfile = data.getVar('FILE')
oldfile = bb.data.getVar('FILE', data)
abs_fn = resolve_file(fn, data)
f = open(abs_fn, 'r')
fn = obtain(fn, data)
if not os.path.isabs(fn):
f = None
bbpath = bb.data.getVar("BBPATH", data, 1) or []
for p in bbpath.split(":"):
currname = os.path.join(p, fn)
if os.access(currname, os.R_OK):
f = open(currname, 'r')
abs_fn = currname
bb.msg.debug(2, bb.msg.domain.Parsing, "CONF %s %s" % (inc_string, currname))
break
if f is None:
raise IOError("file '%s' not found" % fn)
else:
f = open(fn,'r')
bb.msg.debug(1, bb.msg.domain.Parsing, "CONF %s %s" % (inc_string,fn))
abs_fn = fn
if include:
bb.parse.mark_dependency(data, abs_fn)
statements = ast.StatementGroup()
lineno = 0
while True:
bb.data.setVar('FILE', fn, data)
while 1:
lineno = lineno + 1
s = f.readline()
if not s: break
@@ -96,42 +155,72 @@ def handle(fn, data, include):
s = s.rstrip()
if s[0] == '#': continue # skip comments
while s[-1] == '\\':
s2 = f.readline().strip()
s2 = f.readline()[:-1].strip()
lineno = lineno + 1
s = s[:-1] + s2
feeder(lineno, s, fn, statements)
feeder(lineno, s, fn, data)
# DONE WITH PARSING... time to evaluate
data.setVar('FILE', abs_fn)
statements.eval(data)
if oldfile:
data.setVar('FILE', oldfile)
bb.data.setVar('FILE', oldfile, data)
return data
def feeder(lineno, s, fn, statements):
def feeder(lineno, s, fn, data):
def getFunc(groupd, key, data):
if 'flag' in groupd and groupd['flag'] != None:
return bb.data.getVarFlag(key, groupd['flag'], data)
else:
return bb.data.getVar(key, data)
m = __config_regexp__.match(s)
if m:
groupd = m.groupdict()
ast.handleData(statements, fn, lineno, groupd)
key = groupd["var"]
if "exp" in groupd and groupd["exp"] != None:
bb.data.setVarFlag(key, "export", 1, data)
if "ques" in groupd and groupd["ques"] != None:
val = getFunc(groupd, key, data)
if val == None:
val = groupd["value"]
elif "colon" in groupd and groupd["colon"] != None:
e = data.createCopy()
bb.data.update_data(e)
val = bb.data.expand(groupd["value"], e)
elif "append" in groupd and groupd["append"] != None:
val = "%s %s" % ((getFunc(groupd, key, data) or ""), groupd["value"])
elif "prepend" in groupd and groupd["prepend"] != None:
val = "%s %s" % (groupd["value"], (getFunc(groupd, key, data) or ""))
elif "postdot" in groupd and groupd["postdot"] != None:
val = "%s%s" % ((getFunc(groupd, key, data) or ""), groupd["value"])
elif "predot" in groupd and groupd["predot"] != None:
val = "%s%s" % (groupd["value"], (getFunc(groupd, key, data) or ""))
else:
val = groupd["value"]
if 'flag' in groupd and groupd['flag'] != None:
bb.msg.debug(3, bb.msg.domain.Parsing, "setVarFlag(%s, %s, %s, data)" % (key, groupd['flag'], val))
bb.data.setVarFlag(key, groupd['flag'], val, data)
else:
bb.data.setVar(key, val, data)
return
m = __include_regexp__.match(s)
if m:
ast.handleInclude(statements, fn, lineno, m, False)
s = bb.data.expand(m.group(1), data)
bb.msg.debug(3, bb.msg.domain.Parsing, "CONF %s:%d: including %s" % (fn, lineno, s))
include(fn, s, data, False)
return
m = __require_regexp__.match(s)
if m:
ast.handleInclude(statements, fn, lineno, m, True)
s = bb.data.expand(m.group(1), data)
include(fn, s, data, "include required")
return
m = __export_regexp__.match(s)
if m:
ast.handleExport(statements, fn, lineno, m)
bb.data.setVarFlag(m.group(1), "export", 1, data)
return
raise ParseError("unparsed line: '%s'" % s, fn, lineno);
raise ParseError("%s:%d: unparsed line: '%s'" % (fn, lineno, s));
# Add us to the handlers list
from bb.parse import handlers
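
The revised __config_regexp__ above does most of the configuration parsing; matching a few sample assignments shows which named groups fire (the variable names are illustrative):

import re

# the same pattern as the new __config_regexp__ above
config_line = re.compile(r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<lazyques>\?\?=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?P<apo>['\"])(?P<value>.*)(?P=apo)$")

for line in ['FOO = "bar"', 'FOO ?= "default"', 'FOO[doc] = "a flag"', 'FOO += "more"']:
    g = config_line.match(line).groupdict()
    print("%s -> var=%s flag=%s value=%r ques=%s append=%s"
          % (line, g["var"], g["flag"], g["value"],
             g["ques"] is not None, g["append"] is not None))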


@@ -25,9 +25,9 @@ File parsers for the BitBake build tools.
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
from __future__ import absolute_import
from . import ConfHandler
from . import BBHandler
__version__ = '1.0'
__all__ = [ 'ConfHandler', 'BBHandler']
import ConfHandler
import BBHandler


@@ -1,12 +1,6 @@
"""BitBake Persistent Data Store
Used to store data in a central location such that other threads/tasks can
access them at some future date. Acts as a convenience wrapper around sqlite,
currently, providing a key/value store accessed by 'domain'.
"""
# BitBake Persistent Data Store
#
# Copyright (C) 2007 Richard Purdie
# Copyright (C) 2010 Chris Larson <chris_larson@mentor.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -21,190 +15,96 @@ currently, providing a key/value store accessed by 'domain'.
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import collections
import logging
import os.path
import sys
import warnings
from bb.compat import total_ordering
from collections import Mapping
import bb, os
try:
import sqlite3
except ImportError:
from pysqlite2 import dbapi2 as sqlite3
try:
from pysqlite2 import dbapi2 as sqlite3
except ImportError:
bb.msg.fatal(bb.msg.domain.PersistData, "Importing sqlite3 and pysqlite2 failed, please install one of them. Python 2.5 or a 'python-pysqlite2' like package is likely to be what you need.")
sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
raise Exception("sqlite3 version 3.3.0 or later is required.")
bb.msg.fatal(bb.msg.domain.PersistData, "sqlite3 version 3.3.0 or later is required.")
class PersistData:
"""
BitBake Persistent Data Store
logger = logging.getLogger("BitBake.PersistData")
if hasattr(sqlite3, 'enable_shared_cache'):
try:
sqlite3.enable_shared_cache(True)
except sqlite3.OperationalError:
pass
Used to store data in a central location such that other threads/tasks can
access them at some future date.
The "domain" is used as a key to isolate each data pool and in this
implementation corresponds to an SQL table. The SQL table consists of a
simple key and value pair.
@total_ordering
class SQLTable(collections.MutableMapping):
"""Object representing a table/domain in the database"""
def __init__(self, cachefile, table):
self.cachefile = cachefile
self.table = table
self.cursor = connect(self.cachefile)
self._execute("CREATE TABLE IF NOT EXISTS %s(key TEXT, value TEXT);"
% table)
def _execute(self, *query):
"""Execute a query, waiting to acquire a lock if necessary"""
count = 0
while True:
try:
return self.cursor.execute(*query)
except sqlite3.OperationalError as exc:
if 'database is locked' in str(exc) and count < 500:
count = count + 1
self.cursor.close()
self.cursor = connect(self.cachefile)
continue
raise
def __enter__(self):
self.cursor.__enter__()
return self
def __exit__(self, *excinfo):
self.cursor.__exit__(*excinfo)
def __getitem__(self, key):
data = self._execute("SELECT * from %s where key=?;" %
self.table, [key])
for row in data:
return row[1]
raise KeyError(key)
def __delitem__(self, key):
if key not in self:
raise KeyError(key)
self._execute("DELETE from %s where key=?;" % self.table, [key])
def __setitem__(self, key, value):
if not isinstance(key, basestring):
raise TypeError('Only string keys are supported')
elif not isinstance(value, basestring):
raise TypeError('Only string values are supported')
data = self._execute("SELECT * from %s where key=?;" %
self.table, [key])
exists = len(list(data))
if exists:
self._execute("UPDATE %s SET value=? WHERE key=?;" % self.table,
[value, key])
else:
self._execute("INSERT into %s(key, value) values (?, ?);" %
self.table, [key, value])
def __contains__(self, key):
return key in set(self)
def __len__(self):
data = self._execute("SELECT COUNT(key) FROM %s;" % self.table)
for row in data:
return row[0]
def __iter__(self):
data = self._execute("SELECT key FROM %s;" % self.table)
return (row[0] for row in data)
def __lt__(self, other):
if not isinstance(other, Mapping):
raise NotImplemented
return len(self) < len(other)
def values(self):
return list(self.itervalues())
def itervalues(self):
data = self._execute("SELECT value FROM %s;" % self.table)
return (row[0] for row in data)
def items(self):
return list(self.iteritems())
def iteritems(self):
return self._execute("SELECT * FROM %s;" % self.table)
def clear(self):
self._execute("DELETE FROM %s;" % self.table)
def has_key(self, key):
return key in self
class PersistData(object):
"""Deprecated representation of the bitbake persistent data store"""
Why sqlite? It handles all the locking issues for us.
"""
def __init__(self, d):
warnings.warn("Use of PersistData is deprecated. Please use "
"persist(domain, d) instead.",
category=DeprecationWarning,
stacklevel=2)
self.cachedir = bb.data.getVar("PERSISTENT_DIR", d, True) or bb.data.getVar("CACHE", d, True)
if self.cachedir in [None, '']:
bb.msg.fatal(bb.msg.domain.PersistData, "Please set the 'PERSISTENT_DIR' or 'CACHE' variable.")
try:
os.stat(self.cachedir)
except OSError:
bb.mkdirhier(self.cachedir)
self.data = persist(d)
logger.debug(1, "Using '%s' as the persistent data cache",
self.data.filename)
self.cachefile = os.path.join(self.cachedir,"bb_persist_data.sqlite3")
bb.msg.debug(1, bb.msg.domain.PersistData, "Using '%s' as the persistent data cache" % self.cachefile)
self.connection = sqlite3.connect(self.cachefile, timeout=5, isolation_level=None)
def addDomain(self, domain):
"""
Add a domain (pending deprecation)
Should be called before any domain is used
Creates it if it doesn't exist.
"""
return self.data[domain]
self.connection.execute("CREATE TABLE IF NOT EXISTS %s(key TEXT, value TEXT);" % domain)
def delDomain(self, domain):
"""
Removes a domain and all the data it contains
"""
del self.data[domain]
def getKeyValues(self, domain):
"""
Return a list of key + value pairs for a domain
"""
return self.data[domain].items()
self.connection.execute("DROP TABLE IF EXISTS %s;" % domain)
def getValue(self, domain, key):
"""
Return the value of a key for a domain
"""
return self.data[domain][key]
data = self.connection.execute("SELECT * from %s where key=?;" % domain, [key])
for row in data:
return row[1]
def setValue(self, domain, key, value):
"""
Sets the value of a key for a domain
"""
self.data[domain][key] = value
data = self.connection.execute("SELECT * from %s where key=?;" % domain, [key])
rows = 0
for row in data:
rows = rows + 1
if rows:
self._execute("UPDATE %s SET value=? WHERE key=?;" % domain, [value, key])
else:
self._execute("INSERT into %s(key, value) values (?, ?);" % domain, [key, value])
def delValue(self, domain, key):
"""
Deletes a key/value pair
"""
del self.data[domain][key]
self._execute("DELETE from %s where key=?;" % domain, [key])
def connect(database):
return sqlite3.connect(database, timeout=5, isolation_level=None)
def _execute(self, *query):
while True:
try:
self.connection.execute(*query)
return
except sqlite3.OperationalError, e:
if 'database is locked' in str(e):
continue
raise
def persist(domain, d):
"""Convenience factory for SQLTable objects based upon metadata"""
import bb.utils
cachedir = (d.getVar("PERSISTENT_DIR", True) or
d.getVar("CACHE", True))
if not cachedir:
logger.critical("Please set the 'PERSISTENT_DIR' or 'CACHE' variable")
sys.exit(1)
bb.utils.mkdirhier(cachedir)
cachefile = os.path.join(cachedir, "bb_persist_data.sqlite3")
return SQLTable(cachefile, domain)
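
persist() above hands back an SQLTable, a dict-like wrapper over one sqlite table. A minimal sketch of the same upsert pattern against an in-memory database; the table name and keys are illustrative:

import sqlite3

conn = sqlite3.connect(":memory:", timeout=5, isolation_level=None)
conn.execute("CREATE TABLE IF NOT EXISTS testdomain(key TEXT, value TEXT);")

def set_value(key, value):
    # as in SQLTable.__setitem__ above: UPDATE when the key exists, INSERT otherwise
    exists = list(conn.execute("SELECT * from testdomain where key=?;", [key]))
    if exists:
        conn.execute("UPDATE testdomain SET value=? WHERE key=?;", [value, key])
    else:
        conn.execute("INSERT into testdomain(key, value) values (?, ?);", [key, value])

set_value("srcrev", "abc123")
set_value("srcrev", "def456")
print(list(conn.execute("SELECT * FROM testdomain;")))  # -> [('srcrev', 'def456')]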


@@ -1,109 +0,0 @@
import logging
import signal
import subprocess
logger = logging.getLogger('BitBake.Process')
def subprocess_setup():
# Python installs a SIGPIPE handler by default. This is usually not what
# non-Python subprocesses expect.
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
class CmdError(RuntimeError):
def __init__(self, command, msg=None):
self.command = command
self.msg = msg
def __str__(self):
if not isinstance(self.command, basestring):
cmd = subprocess.list2cmdline(self.command)
else:
cmd = self.command
msg = "Execution of '%s' failed" % cmd
if self.msg:
msg += ': %s' % self.msg
return msg
class NotFoundError(CmdError):
def __str__(self):
return CmdError.__str__(self) + ": command not found"
class ExecutionError(CmdError):
def __init__(self, command, exitcode, stdout = None, stderr = None):
CmdError.__init__(self, command)
self.exitcode = exitcode
self.stdout = stdout
self.stderr = stderr
def __str__(self):
message = ""
if self.stderr:
message += self.stderr
if self.stdout:
message += self.stdout
if message:
message = ":\n" + message
return (CmdError.__str__(self) +
" with exit code %s" % self.exitcode + message)
class Popen(subprocess.Popen):
defaults = {
"close_fds": True,
"preexec_fn": subprocess_setup,
"stdout": subprocess.PIPE,
"stderr": subprocess.STDOUT,
"stdin": subprocess.PIPE,
"shell": False,
}
def __init__(self, *args, **kwargs):
options = dict(self.defaults)
options.update(kwargs)
subprocess.Popen.__init__(self, *args, **options)
def _logged_communicate(pipe, log, input):
if pipe.stdin:
if input is not None:
pipe.stdin.write(input)
pipe.stdin.close()
bufsize = 512
outdata, errdata = [], []
while pipe.poll() is None:
if pipe.stdout is not None:
data = pipe.stdout.read(bufsize)
if data is not None:
outdata.append(data)
log.write(data)
if pipe.stderr is not None:
data = pipe.stderr.read(bufsize)
if data is not None:
errdata.append(data)
log.write(data)
return ''.join(outdata), ''.join(errdata)
def run(cmd, input=None, log=None, **options):
"""Convenience function to run a command and return its output, raising an
exception when the command fails"""
if isinstance(cmd, basestring) and not "shell" in options:
options["shell"] = True
try:
pipe = Popen(cmd, **options)
except OSError as exc:
if exc.errno == 2:
raise NotFoundError(cmd)
else:
raise CmdError(cmd, exc)
if log:
stdout, stderr = _logged_communicate(pipe, log, input)
else:
stdout, stderr = pipe.communicate(input)
if pipe.returncode != 0:
raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
return stdout, stderr
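
Typical use of run() above, assuming the module is importable as bb.process; 'echo' and '/bin/false' are just illustrative commands:

from bb import process

stdout, stderr = process.run("echo hello")  # a string command implies shell=True
print(stdout.strip())                       # -> hello

try:
    process.run(["/bin/false"])
except process.ExecutionError as exc:
    print("failed with exit code %s" % exc.exitcode)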


@@ -21,56 +21,17 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import re
import logging
import os, re
from bb import data, utils
from collections import defaultdict
import bb
logger = logging.getLogger("BitBake.Provider")
class NoProvider(bb.BBHandledException):
class NoProvider(Exception):
"""Exception raised when no provider of a build dependency can be found"""
class NoRProvider(bb.BBHandledException):
class NoRProvider(Exception):
"""Exception raised when no provider of a runtime dependency can be found"""
def findProviders(cfgData, dataCache, pkg_pn = None):
"""
Convenience function to get latest and preferred providers in pkg_pn
"""
if not pkg_pn:
pkg_pn = dataCache.pkg_pn
# Need to ensure data store is expanded
localdata = data.createCopy(cfgData)
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
preferred_versions = {}
latest_versions = {}
for pn in pkg_pn:
(last_ver, last_file, pref_ver, pref_file) = findBestProvider(pn, localdata, dataCache, pkg_pn)
preferred_versions[pn] = (pref_ver, pref_file)
latest_versions[pn] = (last_ver, last_file)
return (latest_versions, preferred_versions)
def allProviders(dataCache):
"""
Find all providers for each pn
"""
all_providers = defaultdict(list)
for (fn, pn) in dataCache.pkg_fn.items():
ver = dataCache.pkg_pepvpr[fn]
all_providers[pn].append((ver, fn))
return all_providers
def sortPriorities(pn, dataCache, pkg_pn = None):
"""
Reorder pkg_pn by file priority and default preference
@@ -89,27 +50,19 @@ def sortPriorities(pn, dataCache, pkg_pn = None):
if preference not in priorities[priority]:
priorities[priority][preference] = []
priorities[priority][preference].append(f)
pri_list = priorities.keys()
pri_list.sort(lambda a, b: a - b)
tmp_pn = []
for pri in sorted(priorities, lambda a, b: a - b):
for pri in pri_list:
pref_list = priorities[pri].keys()
pref_list.sort(lambda a, b: b - a)
tmp_pref = []
for pref in sorted(priorities[pri], lambda a, b: b - a):
for pref in pref_list:
tmp_pref.extend(priorities[pri][pref])
tmp_pn = [tmp_pref] + tmp_pn
return tmp_pn
def preferredVersionMatch(pe, pv, pr, preferred_e, preferred_v, preferred_r):
"""
Check if the version pe,pv,pr is the preferred one.
If a preferred version is defined and it ends with '%', then pv has to start with that version once the '%' is removed
"""
if (pr == preferred_r or preferred_r == None):
if (pe == preferred_e or preferred_e == None):
if preferred_v == pv:
return True
if preferred_v != None and preferred_v.endswith('%') and pv.startswith(preferred_v[:len(preferred_v)-1]):
return True
return False
def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
"""
@@ -120,10 +73,10 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
preferred_ver = None
localdata = data.createCopy(cfgData)
localdata.setVar('OVERRIDES', "%s:pn-%s:%s" % (data.getVar('OVERRIDES', localdata), pn, pn))
bb.data.setVar('OVERRIDES', "pn-%s:%s:%s" % (pn, pn, data.getVar('OVERRIDES', localdata)), localdata)
bb.data.update_data(localdata)
preferred_v = localdata.getVar('PREFERRED_VERSION', True)
preferred_v = bb.data.getVar('PREFERRED_VERSION_%s' % pn, localdata, True)
if preferred_v:
m = re.match('(\d+:)*(.*)(_.*)*', preferred_v)
if m:
@@ -142,8 +95,8 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
for file_set in pkg_pn:
for f in file_set:
pe, pv, pr = dataCache.pkg_pepvpr[f]
if preferredVersionMatch(pe, pv, pr, preferred_e, preferred_v, preferred_r):
pe,pv,pr = dataCache.pkg_pepvpr[f]
if preferred_v == pv and (preferred_r == pr or preferred_r == None) and (preferred_e == pe or preferred_e == None):
preferred_file = f
preferred_ver = (pe, pv, pr)
break
@@ -159,21 +112,9 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
if item:
itemstr = " (for item %s)" % item
if preferred_file is None:
logger.info("preferred version %s of %s not available%s", pv_str, pn, itemstr)
available_vers = []
for file_set in pkg_pn:
for f in file_set:
pe, pv, pr = dataCache.pkg_pepvpr[f]
ver_str = pv
if pe:
ver_str = "%s:%s" % (pe, ver_str)
if not ver_str in available_vers:
available_vers.append(ver_str)
if available_vers:
available_vers.sort()
logger.info("versions of %s available: %s", pn, ' '.join(available_vers))
bb.msg.note(1, bb.msg.domain.Provider, "preferred version %s of %s not available%s" % (pv_str, pn, itemstr))
else:
logger.debug(1, "selecting %s as PREFERRED_VERSION %s of package %s%s", preferred_file, pv_str, pn, itemstr)
bb.msg.debug(1, bb.msg.domain.Provider, "selecting %s as PREFERRED_VERSION %s of package %s%s" % (preferred_file, pv_str, pn, itemstr))
return (preferred_ver, preferred_file)
@@ -187,7 +128,7 @@ def findLatestProvider(pn, cfgData, dataCache, file_set):
latest_p = 0
latest_f = None
for file_name in file_set:
pe, pv, pr = dataCache.pkg_pepvpr[file_name]
pe,pv,pr = dataCache.pkg_pepvpr[file_name]
dp = dataCache.pkg_dp[file_name]
if (latest is None) or ((latest_p == dp) and (utils.vercmp(latest, (pe, pv, pr)) < 0)) or (dp > latest_p):
@@ -220,14 +161,14 @@ def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
def _filterProviders(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
environment variables and previous build results
"""
eligible = []
preferred_versions = {}
sortpkg_pn = {}
# The order of providers depends on the order of the files on the disk
# up to here. Sort pkg_pn to make dependency issues reproducible rather
# than effectively random.
providers.sort()
@@ -240,24 +181,24 @@ def _filterProviders(providers, item, cfgData, dataCache):
pkg_pn[pn] = []
pkg_pn[pn].append(p)
logger.debug(1, "providers for %s are: %s", item, pkg_pn.keys())
bb.msg.debug(1, bb.msg.domain.Provider, "providers for %s are: %s" % (item, pkg_pn.keys()))
# First add PREFERRED_VERSIONS
for pn in pkg_pn:
for pn in pkg_pn.keys():
sortpkg_pn[pn] = sortPriorities(pn, dataCache, pkg_pn)
preferred_versions[pn] = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn[pn], item)
if preferred_versions[pn][1]:
eligible.append(preferred_versions[pn][1])
# Now add latest versions
for pn in sortpkg_pn:
# Now add latest versions
for pn in pkg_pn.keys():
if pn in preferred_versions and preferred_versions[pn][1]:
continue
preferred_versions[pn] = findLatestProvider(pn, cfgData, dataCache, sortpkg_pn[pn][0])
eligible.append(preferred_versions[pn][1])
if len(eligible) == 0:
logger.error("no eligible providers for %s", item)
bb.msg.error(bb.msg.domain.Provider, "no eligible providers for %s" % item)
return 0
# If pn == item, give it a slight default preference
@@ -277,14 +218,14 @@ def _filterProviders(providers, item, cfgData, dataCache):
def filterProviders(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
environment variables and previous build results
Takes a "normal" target item
"""
eligible = _filterProviders(providers, item, cfgData, dataCache)
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % item, 1)
prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % item, cfgData, 1)
if prefervar:
dataCache.preferred[item] = prefervar
@@ -293,19 +234,19 @@ def filterProviders(providers, item, cfgData, dataCache):
for p in eligible:
pn = dataCache.pkg_fn[p]
if dataCache.preferred[item] == pn:
logger.verbose("selecting %s to satisfy %s due to PREFERRED_PROVIDERS", pn, item)
bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy %s due to PREFERRED_PROVIDERS" % (pn, item))
eligible.remove(p)
eligible = [p] + eligible
foundUnique = True
break
logger.debug(1, "sorted providers for %s are: %s", item, eligible)
bb.msg.debug(1, bb.msg.domain.Provider, "sorted providers for %s are: %s" % (item, eligible))
return eligible, foundUnique
def filterProvidersRunTime(providers, item, cfgData, dataCache):
"""
Take a list of providers and filter/reorder according to the
environment variables and previous build results
Takes a "runtime" target item
"""
@@ -315,36 +256,29 @@ def filterProvidersRunTime(providers, item, cfgData, dataCache):
# Should use dataCache.preferred here?
preferred = []
preferred_vars = []
pns = {}
for p in eligible:
pns[dataCache.pkg_fn[p]] = p
for p in eligible:
pn = dataCache.pkg_fn[p]
provides = dataCache.pn_provides[pn]
for provide in provides:
prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % provide, 1)
logger.debug(1, "checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
if prefervar in pns and pns[prefervar] not in preferred:
var = "PREFERRED_PROVIDER_%s = %s" % (provide, prefervar)
logger.verbose("selecting %s to satisfy runtime %s due to %s", prefervar, item, var)
prefervar = bb.data.getVar('PREFERRED_PROVIDER_%s' % provide, cfgData, 1)
if prefervar == pn:
var = "PREFERRED_PROVIDERS_%s = %s" % (provide, prefervar)
bb.msg.note(2, bb.msg.domain.Provider, "selecting %s to satisfy runtime %s due to %s" % (pn, item, var))
preferred_vars.append(var)
pref = pns[prefervar]
eligible.remove(pref)
eligible = [pref] + eligible
preferred.append(pref)
eligible.remove(p)
eligible = [p] + eligible
preferred.append(p)
break
numberPreferred = len(preferred)
if numberPreferred > 1:
logger.error("Trying to resolve runtime dependency %s resulted in conflicting PREFERRED_PROVIDER entries being found.\nThe providers found were: %s\nThe PREFERRED_PROVIDER entries resulting in this conflict were: %s", item, preferred, preferred_vars)
bb.msg.error(bb.msg.domain.Provider, "Conflicting PREFERRED_PROVIDERS entries were found which resulted in an attempt to select multiple providers (%s) for runtime dependency %s\nThe entries resulting in this conflict were: %s" % (preferred, item, preferred_vars))
logger.debug(1, "sorted providers for %s are: %s", item, eligible)
bb.msg.debug(1, bb.msg.domain.Provider, "sorted providers for %s are: %s" % (item, eligible))
return eligible, numberPreferred
regexp_cache = {}
def getRuntimeProviders(dataCache, rdepend):
"""
Return any providers of runtime dependency
@@ -352,7 +286,7 @@ def getRuntimeProviders(dataCache, rdepend):
rproviders = []
if rdepend in dataCache.rproviders:
rproviders += dataCache.rproviders[rdepend]
if rdepend in dataCache.packages:
rproviders += dataCache.packages[rdepend]
@@ -363,15 +297,11 @@ def getRuntimeProviders(dataCache, rdepend):
# Only search dynamic packages if we can't find anything in other variables
for pattern in dataCache.packages_dynamic:
pattern = pattern.replace('+', "\+")
if pattern in regexp_cache:
regexp = regexp_cache[pattern]
else:
try:
regexp = re.compile(pattern)
except:
logger.error("Error parsing regular expression '%s'", pattern)
raise
regexp_cache[pattern] = regexp
try:
regexp = re.compile(pattern)
except:
bb.msg.error(bb.msg.domain.Provider, "Error parsing re expression: %s" % pattern)
raise
if regexp.match(rdepend):
rproviders += dataCache.packages_dynamic[pattern]
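The preferredVersionMatch()/findPreferredProvider() pair earlier in this file implements the PREFERRED_VERSION '%' wildcard: an exact version match, or a prefix match when the preferred version ends with '%'. A standalone sketch of just that rule (not the BitBake function itself):

# Standalone illustration of the '%' wildcard rule described above.
def version_match(pv, preferred_v):
    if preferred_v is None:
        return True
    if preferred_v.endswith('%'):
        # '1.0.%' matches any version starting with '1.0.'
        return pv.startswith(preferred_v[:-1])
    return pv == preferred_v

assert version_match("1.0.4", "1.0.4")
assert version_match("1.0.4", "1.0.%")
assert not version_match("1.1.0", "1.0.%")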

View File

@@ -1,710 +0,0 @@
# builtin.py - builtins and utilities definitions for pysh.
#
# Copyright 2007 Patrick Mezard
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.
"""Builtin and internal utilities implementations.
- Beware not to use the python interpreter environment as if it were the shell
environment. For instance, a command's working directory must be explicitly
handled through env['PWD'] instead of relying on the python working directory.
"""
import errno
import optparse
import os
import re
import subprocess
import sys
import time
def has_subprocess_bug():
return getattr(subprocess, 'list2cmdline') and \
( subprocess.list2cmdline(['']) == '' or \
subprocess.list2cmdline(['foo|bar']) == 'foo|bar')
# Detect python bug 1634343: "subprocess swallows empty arguments under win32"
# <http://sourceforge.net/tracker/index.php?func=detail&aid=1634343&group_id=5470&atid=105470>
# Also detect: "[ 1710802 ] subprocess must escape redirection characters under win32"
# <http://sourceforge.net/tracker/index.php?func=detail&aid=1710802&group_id=5470&atid=105470>
if has_subprocess_bug():
import subprocess_fix
subprocess.list2cmdline = subprocess_fix.list2cmdline
from sherrors import *
class NonExitingParser(optparse.OptionParser):
"""OptionParser's default behaviour upon error is to print the error message and
exit. Raise a UtilityError instead.
"""
def error(self, msg):
raise UtilityError(msg)
#-------------------------------------------------------------------------------
# set special builtin
#-------------------------------------------------------------------------------
OPT_SET = NonExitingParser(usage="set - set or unset options and positional parameters")
OPT_SET.add_option( '-f', action='store_true', dest='has_f', default=False,
help='The shell shall disable pathname expansion.')
OPT_SET.add_option('-e', action='store_true', dest='has_e', default=False,
help="""When this option is on, if a simple command fails for any of the \
reasons listed in Consequences of Shell Errors or returns an exit status \
value >0, and is not part of the compound list following a while, until, \
or if keyword, and is not a part of an AND or OR list, and is not a \
pipeline preceded by the ! reserved word, then the shell shall immediately \
exit.""")
OPT_SET.add_option('-x', action='store_true', dest='has_x', default=False,
help="""The shell shall write to standard error a trace for each command \
after it expands the command and before it executes it. It is unspecified \
whether the command that turns tracing off is traced.""")
def builtin_set(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
option, args = OPT_SET.parse_args(args)
env = interp.get_env()
if option.has_f:
env.set_opt('-f')
if option.has_e:
env.set_opt('-e')
if option.has_x:
env.set_opt('-x')
return 0
#-------------------------------------------------------------------------------
# shift special builtin
#-------------------------------------------------------------------------------
def builtin_shift(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
params = interp.get_env().get_positional_args()
if args:
try:
n = int(args[0])
if n > len(params):
raise ValueError()
except ValueError:
return 1
else:
n = 1
params[:n] = []
interp.get_env().set_positional_args(params)
return 0
#-------------------------------------------------------------------------------
# export special builtin
#-------------------------------------------------------------------------------
OPT_EXPORT = NonExitingParser(usage="export - set the export attribute for variables")
OPT_EXPORT.add_option('-p', action='store_true', dest='has_p', default=False)
def builtin_export(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
option, args = OPT_EXPORT.parse_args(args)
if option.has_p:
raise NotImplementedError()
for arg in args:
try:
name, value = arg.split('=', 1)
except ValueError:
name, value = arg, None
env = interp.get_env().export(name, value)
return 0
#-------------------------------------------------------------------------------
# return special builtin
#-------------------------------------------------------------------------------
def builtin_return(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
res = 0
if args:
try:
res = int(args[0])
except ValueError:
res = 0
if not 0<=res<=255:
res = 0
# BUG: should be last executed command exit code
raise ReturnSignal(res)
#-------------------------------------------------------------------------------
# trap special builtin
#-------------------------------------------------------------------------------
def builtin_trap(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
if len(args) < 2:
stderr.write('trap: usage: trap [[arg] signal_spec ...]\n')
return 2
action = args[0]
for sig in args[1:]:
try:
env.traps[sig] = action
except Exception as e:
stderr.write('trap: %s\n' % str(e))
return 0
#-------------------------------------------------------------------------------
# unset special builtin
#-------------------------------------------------------------------------------
OPT_UNSET = NonExitingParser("unset - unset values and attributes of variables and functions")
OPT_UNSET.add_option( '-f', action='store_true', dest='has_f', default=False)
OPT_UNSET.add_option( '-v', action='store_true', dest='has_v', default=False)
def builtin_unset(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
option, args = OPT_UNSET.parse_args(args)
status = 0
env = interp.get_env()
for arg in args:
try:
if option.has_f:
env.remove_function(arg)
else:
del env[arg]
except KeyError:
pass
except VarAssignmentError:
status = 1
return status
#-------------------------------------------------------------------------------
# wait special builtin
#-------------------------------------------------------------------------------
def builtin_wait(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
return interp.wait([int(arg) for arg in args])
#-------------------------------------------------------------------------------
# cat utility
#-------------------------------------------------------------------------------
def utility_cat(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
if not args:
args = ['-']
status = 0
for arg in args:
if arg == '-':
data = stdin.read()
else:
path = os.path.join(env['PWD'], arg)
try:
f = file(path, 'rb')
try:
data = f.read()
finally:
f.close()
except IOError as e:
if e.errno != errno.ENOENT:
raise
status = 1
continue
stdout.write(data)
stdout.flush()
return status
#-------------------------------------------------------------------------------
# cd utility
#-------------------------------------------------------------------------------
OPT_CD = NonExitingParser("cd - change the working directory")
def utility_cd(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
option, args = OPT_CD.parse_args(args)
env = interp.get_env()
directory = None
printdir = False
if not args:
home = env.get('HOME')
if home:
# Unspecified, do nothing
return 0
else:
directory = home
elif len(args)==1:
directory = args[0]
if directory=='-':
if 'OLDPWD' not in env:
raise UtilityError("OLDPWD not set")
printdir = True
directory = env['OLDPWD']
else:
raise UtilityError("too many arguments")
curpath = None
# Absolute directories will be handled correctly by the os.path.join call.
if not directory.startswith('.') and not directory.startswith('..'):
cdpaths = env.get('CDPATH', '.').split(';')
for cdpath in cdpaths:
p = os.path.join(cdpath, directory)
if os.path.isdir(p):
curpath = p
break
if curpath is None:
curpath = directory
curpath = os.path.join(env['PWD'], directory)
env['OLDPWD'] = env['PWD']
env['PWD'] = curpath
if printdir:
stdout.write('%s\n' % curpath)
return 0
#-------------------------------------------------------------------------------
# colon utility
#-------------------------------------------------------------------------------
def utility_colon(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
return 0
#-------------------------------------------------------------------------------
# echo utility
#-------------------------------------------------------------------------------
def utility_echo(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
# Echo only takes arguments, no options. Use printf if you need fancy stuff.
output = ' '.join(args) + '\n'
stdout.write(output)
stdout.flush()
return 0
#-------------------------------------------------------------------------------
# egrep utility
#-------------------------------------------------------------------------------
# egrep is usually a shell script.
# Unfortunately, pysh does not support shell scripts *with arguments* right now,
# so the redirection is implemented here, assuming grep is available.
def utility_egrep(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
return run_command('grep', ['-E'] + args, interp, env, stdin, stdout,
stderr, debugflags)
#-------------------------------------------------------------------------------
# env utility
#-------------------------------------------------------------------------------
def utility_env(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
if args and args[0]=='-i':
raise NotImplementedError('env: -i option is not implemented')
i = 0
for arg in args:
if '=' not in arg:
break
# Update the current environment
name, value = arg.split('=', 1)
env[name] = value
i += 1
if args[i:]:
# Find then execute the specified interpreter
utility = env.find_in_path(args[i])
if not utility:
return 127
args[i:i+1] = utility
name = args[i]
args = args[i+1:]
try:
return run_command(name, args, interp, env, stdin, stdout, stderr,
debugflags)
except UtilityError:
stderr.write('env: failed to execute %s' % ' '.join([name]+args))
return 126
else:
for pair in env.get_variables().iteritems():
stdout.write('%s=%s\n' % pair)
return 0
#-------------------------------------------------------------------------------
# exit utility
#-------------------------------------------------------------------------------
def utility_exit(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
res = None
if args:
try:
res = int(args[0])
except ValueError:
res = None
if not 0<=res<=255:
res = None
if res is None:
# BUG: should be last executed command exit code
res = 0
raise ExitSignal(res)
#-------------------------------------------------------------------------------
# fgrep utility
#-------------------------------------------------------------------------------
# see egrep
def utility_fgrep(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
return run_command('grep', ['-F'] + args, interp, env, stdin, stdout,
stderr, debugflags)
#-------------------------------------------------------------------------------
# gunzip utility
#-------------------------------------------------------------------------------
# see egrep
def utility_gunzip(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
return run_command('gzip', ['-d'] + args, interp, env, stdin, stdout,
stderr, debugflags)
#-------------------------------------------------------------------------------
# kill utility
#-------------------------------------------------------------------------------
def utility_kill(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
for arg in args:
pid = int(arg)
status = subprocess.call(['pskill', '/T', str(pid)],
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
# pskill is asynchronous, hence the stupid polling loop
while 1:
p = subprocess.Popen(['pslist', str(pid)],
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
output = p.communicate()[0]
if ('process %d was not' % pid) in output:
break
time.sleep(1)
return status
#-------------------------------------------------------------------------------
# mkdir utility
#-------------------------------------------------------------------------------
OPT_MKDIR = NonExitingParser("mkdir - make directories.")
OPT_MKDIR.add_option('-p', action='store_true', dest='has_p', default=False)
def utility_mkdir(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
# TODO: implement umask
# TODO: implement proper utility error report
option, args = OPT_MKDIR.parse_args(args)
for arg in args:
path = os.path.join(env['PWD'], arg)
if option.has_p:
try:
os.makedirs(path)
except IOError as e:
if e.errno != errno.EEXIST:
raise
else:
os.mkdir(path)
return 0
#-------------------------------------------------------------------------------
# netstat utility
#-------------------------------------------------------------------------------
def utility_netstat(name, args, interp, env, stdin, stdout, stderr, debugflags):
# Do you really expect me to implement netstat?
# This empty form is enough for Mercurial tests since it's
# supposed to generate nothing upon success. Faking this test
# is not a big deal either.
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
return 0
#-------------------------------------------------------------------------------
# pwd utility
#-------------------------------------------------------------------------------
OPT_PWD = NonExitingParser("pwd - return working directory name")
OPT_PWD.add_option('-L', action='store_true', dest='has_L', default=True,
help="""If the PWD environment variable contains an absolute pathname of \
the current directory that does not contain the filenames dot or dot-dot, \
pwd shall write this pathname to standard output. Otherwise, the -L option \
shall behave as the -P option.""")
OPT_PWD.add_option('-P', action='store_true', dest='has_L', default=False,
help="""The absolute pathname written shall not contain filenames that, in \
the context of the pathname, refer to files of type symbolic link.""")
def utility_pwd(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
option, args = OPT_PWD.parse_args(args)
stdout.write('%s\n' % env['PWD'])
return 0
#-------------------------------------------------------------------------------
# printf utility
#-------------------------------------------------------------------------------
RE_UNESCAPE = re.compile(r'(\\x[a-zA-Z0-9]{2}|\\[0-7]{1,3}|\\.)')
def utility_printf(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
def replace(m):
assert m.group()
g = m.group()[1:]
if g.startswith('x'):
return chr(int(g[1:], 16))
if len(g) <= 3 and len([c for c in g if c in '01234567']) == len(g):
# Yay, an octal number
return chr(int(g, 8))
return {
'a': '\a',
'b': '\b',
'f': '\f',
'n': '\n',
'r': '\r',
't': '\t',
'v': '\v',
'\\': '\\',
}.get(g)
# Convert escape sequences
format = re.sub(RE_UNESCAPE, replace, args[0])
stdout.write(format % tuple(args[1:]))
return 0
#-------------------------------------------------------------------------------
# true utility
#-------------------------------------------------------------------------------
def utility_true(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
return 0
#-------------------------------------------------------------------------------
# sed utility
#-------------------------------------------------------------------------------
RE_SED = re.compile(r'^s(.).*\1[a-zA-Z]*$')
# cygwin sed fails with some expressions when they do not end with a single space.
# see unit tests for details. Interestingly, the same expressions work perfectly
# in the cygwin shell.
def utility_sed(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
# Scan pattern arguments and append a space if necessary
for i in xrange(len(args)):
if not RE_SED.search(args[i]):
continue
args[i] = args[i] + ' '
return run_command(name, args, interp, env, stdin, stdout,
stderr, debugflags)
#-------------------------------------------------------------------------------
# sleep utility
#-------------------------------------------------------------------------------
def utility_sleep(name, args, interp, env, stdin, stdout, stderr, debugflags):
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
time.sleep(int(args[0]))
return 0
#-------------------------------------------------------------------------------
# sort utility
#-------------------------------------------------------------------------------
OPT_SORT = NonExitingParser("sort - sort, merge, or sequence check text files")
def utility_sort(name, args, interp, env, stdin, stdout, stderr, debugflags):
def sort(path):
if path == '-':
lines = stdin.readlines()
else:
try:
f = file(path)
try:
lines = f.readlines()
finally:
f.close()
except IOError as e:
stderr.write(str(e) + '\n')
return 1
if lines and lines[-1][-1]!='\n':
lines[-1] = lines[-1] + '\n'
return lines
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
option, args = OPT_SORT.parse_args(args)
alllines = []
if len(args)<=0:
args += ['-']
# Load all files lines
curdir = os.getcwd()
try:
os.chdir(env['PWD'])
for path in args:
alllines += sort(path)
finally:
os.chdir(curdir)
alllines.sort()
for line in alllines:
stdout.write(line)
return 0
#-------------------------------------------------------------------------------
# hg utility
#-------------------------------------------------------------------------------
hgcommands = [
'add',
'addremove',
'commit', 'ci',
'debugrename',
'debugwalk',
'falabala', # Dummy command used in a mercurial test
'incoming',
'locate',
'pull',
'push',
'qinit',
'remove', 'rm',
'rename', 'mv',
'revert',
'showconfig',
'status', 'st',
'strip',
]
def rewriteslashes(name, args):
# Several hg commands output file paths, rewrite the separators
if len(args) > 1 and name.lower().endswith('python') \
and args[0].endswith('hg'):
for cmd in hgcommands:
if cmd in args[1:]:
return True
# svn output contains many paths with OS specific separators.
# Normalize these to unix paths.
base = os.path.basename(name)
if base.startswith('svn'):
return True
return False
def rewritehg(output):
if not output:
return output
# Rewrite os specific messages
output = output.replace(': The system cannot find the file specified',
': No such file or directory')
output = re.sub(': Access is denied.*$', ': Permission denied', output)
output = output.replace(': No connection could be made because the target machine actively refused it',
': Connection refused')
return output
def run_command(name, args, interp, env, stdin, stdout,
stderr, debugflags):
# Execute the command
if 'debug-utility' in debugflags:
print interp.log(' '.join([name, str(args), interp['PWD']]) + '\n')
hgbin = interp.options().hgbinary
ishg = hgbin and ('hg' in name or args and 'hg' in args[0])
unixoutput = 'cygwin' in name or ishg
exec_env = env.get_variables()
try:
# BUG: comparing file descriptors is clearly not a reliable way to tell
# whether they point to the same underlying object. But within pysh's
# limited scope this is usually right; we do not expect complicated
# redirections beyond the usual 2>&1.
# Still, there is one case we cannot deal with: when stdout and stderr
# are redirected *by the pysh caller*. This is the reason for the
# --redirect pysh() option.
# Now, we want to know whether they are the same because we sometimes
# need to transform the command output, mostly removing CR-LF to ensure
# that it is unix-like. Cygwin utilities are a special case because they
# explicitly set their output streams to binary mode, so we have nothing
# to do. For all other commands, we have to guess whether they are
# sending text data, in which case the transformation must be done.
# Again, the NUL character test is unreliable but should be enough for
# the hg tests.
redirected = stdout.fileno()==stderr.fileno()
if not redirected:
p = subprocess.Popen([name] + args, cwd=env['PWD'], env=exec_env,
stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
else:
p = subprocess.Popen([name] + args, cwd=env['PWD'], env=exec_env,
stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = p.communicate()
except WindowsError as e:
raise UtilityError(str(e))
if not unixoutput:
def encode(s):
if '\0' in s:
return s
return s.replace('\r\n', '\n')
else:
encode = lambda s: s
if rewriteslashes(name, args):
encode1_ = encode
def encode(s):
s = encode1_(s)
s = s.replace('\\\\', '\\')
s = s.replace('\\', '/')
return s
if ishg:
encode2_ = encode
def encode(s):
return rewritehg(encode2_(s))
stdout.write(encode(out))
if not redirected:
stderr.write(encode(err))
return p.returncode
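The printf utility above converts C-style escape sequences with a single regular expression plus a replacement callback. A standalone sketch of that technique, reusing the same pattern:

import re

# Same pattern as RE_UNESCAPE above: \xHH, \NNN (octal), or \<char>.
RE_UNESCAPE = re.compile(r'(\\x[a-zA-Z0-9]{2}|\\[0-7]{1,3}|\\.)')

def unescape(s):
    def replace(m):
        g = m.group()[1:]
        if g.startswith('x'):
            return chr(int(g[1:], 16))       # hexadecimal escape
        if all(c in '01234567' for c in g):  # octal escape
            return chr(int(g, 8))
        return {'a': '\a', 'b': '\b', 'f': '\f', 'n': '\n',
                'r': '\r', 't': '\t', 'v': '\v', '\\': '\\'}.get(g, g)
    return RE_UNESCAPE.sub(replace, s)

print(unescape(r'col1\tcol2\n'))  # expands the tab and newline escapes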

File diff suppressed because it is too large

View File

@@ -1,116 +0,0 @@
#! /usr/bin/env python
import sys
from _lsprof import Profiler, profiler_entry
__all__ = ['profile', 'Stats']
def profile(f, *args, **kwds):
"""XXX docstring"""
p = Profiler()
p.enable(subcalls=True, builtins=True)
try:
f(*args, **kwds)
finally:
p.disable()
return Stats(p.getstats())
class Stats(object):
"""XXX docstring"""
def __init__(self, data):
self.data = data
def sort(self, crit="inlinetime"):
"""XXX docstring"""
if crit not in profiler_entry.__dict__:
raise ValueError("Can't sort by %s" % crit)
self.data.sort(lambda b, a: cmp(getattr(a, crit),
getattr(b, crit)))
for e in self.data:
if e.calls:
e.calls.sort(lambda b, a: cmp(getattr(a, crit),
getattr(b, crit)))
def pprint(self, top=None, file=None, limit=None, climit=None):
"""XXX docstring"""
if file is None:
file = sys.stdout
d = self.data
if top is not None:
d = d[:top]
cols = "% 12s %12s %11.4f %11.4f %s\n"
hcols = "% 12s %12s %12s %12s %s\n"
cols2 = "+%12s %12s %11.4f %11.4f + %s\n"
file.write(hcols % ("CallCount", "Recursive", "Total(ms)",
"Inline(ms)", "module:lineno(function)"))
count = 0
for e in d:
file.write(cols % (e.callcount, e.reccallcount, e.totaltime,
e.inlinetime, label(e.code)))
count += 1
if limit is not None and count == limit:
return
ccount = 0
if e.calls:
for se in e.calls:
file.write(cols % ("+%s" % se.callcount, se.reccallcount,
se.totaltime, se.inlinetime,
"+%s" % label(se.code)))
count += 1
ccount += 1
if limit is not None and count == limit:
return
if climit is not None and ccount == climit:
break
def freeze(self):
"""Replace all references to code objects with string
descriptions; this makes it possible to pickle the instance."""
# this code is probably rather ickier than it needs to be!
for i in range(len(self.data)):
e = self.data[i]
if not isinstance(e.code, str):
self.data[i] = type(e)((label(e.code),) + e[1:])
if e.calls:
for j in range(len(e.calls)):
se = e.calls[j]
if not isinstance(se.code, str):
e.calls[j] = type(se)((label(se.code),) + se[1:])
_fn2mod = {}
def label(code):
if isinstance(code, str):
return code
try:
mname = _fn2mod[code.co_filename]
except KeyError:
for k, v in sys.modules.items():
if v is None:
continue
if not hasattr(v, '__file__'):
continue
if not isinstance(v.__file__, str):
continue
if v.__file__.startswith(code.co_filename):
mname = _fn2mod[code.co_filename] = k
break
else:
mname = _fn2mod[code.co_filename] = '<%s>'%code.co_filename
return '%s:%d(%s)' % (mname, code.co_firstlineno, code.co_name)
if __name__ == '__main__':
import os
sys.argv = sys.argv[1:]
if not sys.argv:
print >> sys.stderr, "usage: lsprof.py <script> <arguments...>"
sys.exit(2)
sys.path.insert(0, os.path.abspath(os.path.dirname(sys.argv[0])))
stats = profile(execfile, sys.argv[0], globals(), locals())
stats.sort()
stats.pprint()
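A usage sketch for the profile()/Stats helpers above, assuming the module is importable as lsprof (as its __main__ block suggests), running under Python 2 with the C-level _lsprof module available:

import lsprof  # assumed module name, see note above

def work():
    return sum(i * i for i in range(100000))

stats = lsprof.profile(work)  # run work() under the profiler
stats.sort("inlinetime")      # hottest functions first
stats.pprint(top=10)          # print the ten most expensive entries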

View File

@@ -1,167 +0,0 @@
# pysh.py - command processing for pysh.
#
# Copyright 2007 Patrick Mezard
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.
import optparse
import os
import sys
import interp
SH_OPT = optparse.OptionParser(prog='pysh', usage="%prog [OPTIONS]", version='0.1')
SH_OPT.add_option('-c', action='store_true', dest='command_string', default=None,
help='A string that shall be interpreted by the shell as one or more commands')
SH_OPT.add_option('--redirect-to', dest='redirect_to', default=None,
help='Redirect script commands stdout and stderr to the specified file')
# See utility_command in builtin.py about the reason for this flag.
SH_OPT.add_option('--redirected', dest='redirected', action='store_true', default=False,
help='Tell the interpreter that stdout and stderr are actually the same objects, which is really stdout')
SH_OPT.add_option('--debug-parsing', action='store_true', dest='debug_parsing', default=False,
help='Trace PLY execution')
SH_OPT.add_option('--debug-tree', action='store_true', dest='debug_tree', default=False,
help='Display the generated syntax tree.')
SH_OPT.add_option('--debug-cmd', action='store_true', dest='debug_cmd', default=False,
help='Trace command execution before parameters expansion and exit status.')
SH_OPT.add_option('--debug-utility', action='store_true', dest='debug_utility', default=False,
help='Trace utility calls, after parameters expansions')
SH_OPT.add_option('--ast', action='store_true', dest='ast', default=False,
help='Encoded commands to execute in a subprocess')
SH_OPT.add_option('--profile', action='store_true', default=False,
help='Profile pysh run')
def split_args(args):
# Separate shell arguments from command ones
# Just stop at the first argument not starting with a dash. I know, this is completely
# broken: it ignores files starting with a dash and may mistake option values for the
# command file. This is not supposed to happen for now.
command_index = len(args)
for i,arg in enumerate(args):
if not arg.startswith('-'):
command_index = i
break
return args[:command_index], args[command_index:]
def fixenv(env):
path = env.get('PATH')
if path is not None:
parts = path.split(os.pathsep)
# Remove Windows utilities from PATH, they are useless at best and
# some of them (find) may be confused with other utilities.
parts = [p for p in parts if 'system32' not in p.lower()]
env['PATH'] = os.pathsep.join(parts)
if env.get('HOME') is None:
# Several utilities, including cvsps, cannot work without
# a defined HOME directory.
env['HOME'] = os.path.expanduser('~')
return env
def _sh(cwd, shargs, cmdargs, options, debugflags=None, env=None):
if os.environ.get('PYSH_TEXT') != '1':
import msvcrt
for fp in (sys.stdin, sys.stdout, sys.stderr):
msvcrt.setmode(fp.fileno(), os.O_BINARY)
hgbin = os.environ.get('PYSH_HGTEXT') != '1'
if debugflags is None:
debugflags = []
if options.debug_parsing: debugflags.append('debug-parsing')
if options.debug_utility: debugflags.append('debug-utility')
if options.debug_cmd: debugflags.append('debug-cmd')
if options.debug_tree: debugflags.append('debug-tree')
if env is None:
env = fixenv(dict(os.environ))
if cwd is None:
cwd = os.getcwd()
if not cmdargs:
# Nothing to do
return 0
ast = None
command_file = None
if options.command_string:
input = cmdargs[0]
if not options.ast:
input += '\n'
else:
args, input = interp.decodeargs(input), None
env, ast = args
cwd = env.get('PWD', cwd)
else:
command_file = cmdargs[0]
arguments = cmdargs[1:]
prefix = interp.resolve_shebang(command_file, ignoreshell=True)
if prefix:
input = ' '.join(prefix + [command_file] + arguments)
else:
# Read commands from file
f = file(command_file)
try:
# Trailing newline to help the parser
input = f.read() + '\n'
finally:
f.close()
redirect = None
try:
if options.redirected:
stdout = sys.stdout
stderr = stdout
elif options.redirect_to:
redirect = open(options.redirect_to, 'wb')
stdout = redirect
stderr = redirect
else:
stdout = sys.stdout
stderr = sys.stderr
# TODO: set arguments to environment variables
opts = interp.Options()
opts.hgbinary = hgbin
ip = interp.Interpreter(cwd, debugflags, stdout=stdout, stderr=stderr,
opts=opts)
try:
# Export given environment in shell object
for k,v in env.iteritems():
ip.get_env().export(k,v)
return ip.execute_script(input, ast, scriptpath=command_file)
finally:
ip.close()
finally:
if redirect is not None:
redirect.close()
def sh(cwd=None, args=None, debugflags=None, env=None):
if args is None:
args = sys.argv[1:]
shargs, cmdargs = split_args(args)
options, shargs = SH_OPT.parse_args(shargs)
if options.profile:
import lsprof
p = lsprof.Profiler()
p.enable(subcalls=True)
try:
return _sh(cwd, shargs, cmdargs, options, debugflags, env)
finally:
p.disable()
stats = lsprof.Stats(p.getstats())
stats.sort()
stats.pprint(top=10, file=sys.stderr, climit=5)
else:
return _sh(cwd, shargs, cmdargs, options, debugflags, env)
def main():
sys.exit(sh())
if __name__=='__main__':
main()
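A hypothetical invocation of the pysh entry point above; note that pysh targets Windows-era Python 2 (msvcrt, print statements), so this is illustrative rather than portable:

import pysh  # assumed importable as-is, see note above

# Equivalent of: pysh -c "echo hello && echo world"
status = pysh.sh(args=['-c', 'echo hello && echo world'])
print("exit status: %s" % status)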

View File

@@ -1,888 +0,0 @@
# pyshlex.py - PLY compatible lexer for pysh.
#
# Copyright 2007 Patrick Mezard
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.
# TODO:
# - review all "char in 'abc'" snippets: the empty string can be matched
# - test line continuations within quoted/expansion strings
# - eof is buggy wrt sublexers
# - the lexer cannot really work in pull mode as it would be required to run
# PLY in pull mode. It was designed to work incrementally and it would not be
# that hard to enable pull mode.
import re
try:
s = set()
del s
except NameError:
from Set import Set as set
from ply import lex
from sherrors import *
class NeedMore(Exception):
pass
def is_blank(c):
return c in (' ', '\t')
_RE_DIGITS = re.compile(r'^\d+$')
def are_digits(s):
return _RE_DIGITS.search(s) is not None
_OPERATORS = dict([
('&&', 'AND_IF'),
('||', 'OR_IF'),
(';;', 'DSEMI'),
('<<', 'DLESS'),
('>>', 'DGREAT'),
('<&', 'LESSAND'),
('>&', 'GREATAND'),
('<>', 'LESSGREAT'),
('<<-', 'DLESSDASH'),
('>|', 'CLOBBER'),
('&', 'AMP'),
(';', 'COMMA'),
('<', 'LESS'),
('>', 'GREATER'),
('(', 'LPARENS'),
(')', 'RPARENS'),
])
#Make a function to silence pychecker "Local variable shadows global"
def make_partial_ops():
partials = {}
for k in _OPERATORS:
for i in range(1, len(k)+1):
partials[k[:i]] = None
return partials
_PARTIAL_OPERATORS = make_partial_ops()
def is_partial_op(s):
"""Return True if s matches a non-empty subpart of an operator starting
at its first character.
"""
return s in _PARTIAL_OPERATORS
def is_op(s):
"""If s matches an operator, returns the operator identifier. Return None
otherwise.
"""
return _OPERATORS.get(s)
_RESERVEDS = dict([
('if', 'If'),
('then', 'Then'),
('else', 'Else'),
('elif', 'Elif'),
('fi', 'Fi'),
('do', 'Do'),
('done', 'Done'),
('case', 'Case'),
('esac', 'Esac'),
('while', 'While'),
('until', 'Until'),
('for', 'For'),
('{', 'Lbrace'),
('}', 'Rbrace'),
('!', 'Bang'),
('in', 'In'),
('|', 'PIPE'),
])
def get_reserved(s):
return _RESERVEDS.get(s)
_RE_NAME = re.compile(r'^[0-9a-zA-Z_]+$')
def is_name(s):
return _RE_NAME.search(s) is not None
def find_chars(seq, chars):
for i,v in enumerate(seq):
if v in chars:
return i,v
return -1, None
class WordLexer:
"""WordLexer parses quoted or expansion expressions and returns an expression
tree. The input string can be any well-formed sequence beginning with a quoting
or expansion character. Embedded expressions are handled recursively. The
resulting tree is made of lists and strings. Lists represent quoted or
expansion expressions. Each list's first element is the opening separator and
its last one the closing separator. In between can be any number of strings
or lists for sub-expressions. Non-quoted/expansion expressions can be written
as strings or as lists with empty strings as starting and ending delimiters.
"""
NAME_CHARSET = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_'
NAME_CHARSET = dict(zip(NAME_CHARSET, NAME_CHARSET))
SPECIAL_CHARSET = '@*#?-$!0'
#Characters which can be escaped depends on the current delimiters
ESCAPABLE = {
'`': set(['$', '\\', '`']),
'"': set(['$', '\\', '`', '"']),
"'": set(),
}
def __init__(self, heredoc = False):
# _buffer is the unprocessed input characters buffer
self._buffer = []
# _stack is empty or contains a quoted list being processed
# (this is the DFS path to the quoted expression being evaluated).
self._stack = []
self._escapable = None
# True when parsing unquoted here documents
self._heredoc = heredoc
def add(self, data, eof=False):
"""Feed the lexer with more data. If the quoted expression can be
delimited, return a tuple (expr, remaining) containing the expression
tree and the unconsumed data.
Otherwise, raise NeedMore.
"""
self._buffer += list(data)
self._parse(eof)
result = self._stack[0]
remaining = ''.join(self._buffer)
self._stack = []
self._buffer = []
return result, remaining
def _is_escapable(self, c, delim=None):
if delim is None:
if self._heredoc:
# Backslashes work as if they were double quoted in unquoted
# here-documents
delim = '"'
else:
if len(self._stack)<=1:
return True
delim = self._stack[-2][0]
escapables = self.ESCAPABLE.get(delim, None)
return escapables is None or c in escapables
def _parse_squote(self, buf, result, eof):
if not buf:
raise NeedMore()
try:
pos = buf.index("'")
except ValueError:
raise NeedMore()
result[-1] += ''.join(buf[:pos])
result += ["'"]
return pos+1, True
def _parse_bquote(self, buf, result, eof):
if not buf:
raise NeedMore()
if buf[0]=='\n':
#Remove line continuations
result[:] = ['', '', '']
elif self._is_escapable(buf[0]):
result[-1] += buf[0]
result += ['']
else:
#Keep as such
result[:] = ['', '\\'+buf[0], '']
return 1, True
def _parse_dquote(self, buf, result, eof):
if not buf:
raise NeedMore()
pos, sep = find_chars(buf, '$\\`"')
if pos==-1:
raise NeedMore()
result[-1] += ''.join(buf[:pos])
if sep=='"':
result += ['"']
return pos+1, True
else:
#Keep everything until the separator and defer processing
return pos, False
def _parse_command(self, buf, result, eof):
if not buf:
raise NeedMore()
chars = '$\\`"\''
if result[0] == '$(':
chars += ')'
pos, sep = find_chars(buf, chars)
if pos == -1:
raise NeedMore()
result[-1] += ''.join(buf[:pos])
if (result[0]=='$(' and sep==')') or (result[0]=='`' and sep=='`'):
result += [sep]
return pos+1, True
else:
return pos, False
def _parse_parameter(self, buf, result, eof):
if not buf:
raise NeedMore()
pos, sep = find_chars(buf, '$\\`"\'}')
if pos==-1:
raise NeedMore()
result[-1] += ''.join(buf[:pos])
if sep=='}':
result += [sep]
return pos+1, True
else:
return pos, False
def _parse_dollar(self, buf, result, eof):
sep = result[0]
if sep=='$':
if not buf:
#TODO: handle empty $
raise NeedMore()
if buf[0]=='(':
if len(buf)==1:
raise NeedMore()
if buf[1]=='(':
result[0] = '$(('
buf[:2] = []
else:
result[0] = '$('
buf[:1] = []
elif buf[0]=='{':
result[0] = '${'
buf[:1] = []
else:
if buf[0] in self.SPECIAL_CHARSET:
result[-1] = buf[0]
read = 1
else:
for read,c in enumerate(buf):
if c not in self.NAME_CHARSET:
break
else:
if not eof:
raise NeedMore()
read += 1
result[-1] += ''.join(buf[0:read])
if not result[-1]:
result[:] = ['', result[0], '']
else:
result += ['']
return read,True
sep = result[0]
if sep=='$(':
parsefunc = self._parse_command
elif sep=='${':
parsefunc = self._parse_parameter
else:
raise NotImplementedError()
pos, closed = parsefunc(buf, result, eof)
return pos, closed
def _parse(self, eof):
buf = self._buffer
stack = self._stack
recurse = False
while 1:
if not stack or recurse:
if not buf:
raise NeedMore()
if buf[0] not in ('"\\`$\''):
raise ShellSyntaxError('Invalid quoted string sequence')
stack.append([buf[0], ''])
buf[:1] = []
recurse = False
result = stack[-1]
if result[0]=="'":
parsefunc = self._parse_squote
elif result[0]=='\\':
parsefunc = self._parse_bquote
elif result[0]=='"':
parsefunc = self._parse_dquote
elif result[0]=='`':
parsefunc = self._parse_command
elif result[0][0]=='$':
parsefunc = self._parse_dollar
else:
raise NotImplementedError()
read, closed = parsefunc(buf, result, eof)
buf[:read] = []
if closed:
if len(stack)>1:
#Merge in parent expression
parsed = stack.pop()
stack[-1] += [parsed]
stack[-1] += ['']
else:
break
else:
recurse = True
def normalize_wordtree(wtree):
"""Fold back every literal sequence (delimited with empty strings) into
parent sequence.
"""
def normalize(wtree):
result = []
for part in wtree[1:-1]:
if isinstance(part, list):
part = normalize(part)
if part[0]=='':
#Move the part content back at current level
result += part[1:-1]
continue
elif not part:
#Remove empty strings
continue
result.append(part)
if not result:
result = ['']
return [wtree[0]] + result + [wtree[-1]]
return normalize(wtree)
def make_wordtree(token, here_document=False):
"""Parse a delimited token and return a tree similar to the ones returned by
WordLexer. token may contain any combinations of expansion/quoted fields and
non-ones.
"""
tree = ['']
remaining = token
delimiters = '\\$`'
if not here_document:
delimiters += '\'"'
while 1:
pos, sep = find_chars(remaining, delimiters)
if pos==-1:
tree += [remaining, '']
return normalize_wordtree(tree)
tree.append(remaining[:pos])
remaining = remaining[pos:]
try:
result, remaining = WordLexer(heredoc = here_document).add(remaining, True)
except NeedMore:
raise ShellSyntaxError('Invalid token "%s"' % token)
tree.append(result)
def wordtree_as_string(wtree):
"""Rewrite an expression tree generated by make_wordtree as string."""
def visit(node, output):
for child in node:
if isinstance(child, list):
visit(child, output)
else:
output.append(child)
output = []
visit(wtree, output)
return ''.join(output)
def unquote_wordtree(wtree):
"""Fold the word tree while removing quotes everywhere. Other expansion
sequences are joined as such.
"""
def unquote(wtree):
unquoted = []
if wtree[0] in ('', "'", '"', '\\'):
wtree = wtree[1:-1]
for part in wtree:
if isinstance(part, list):
part = unquote(part)
unquoted.append(part)
return ''.join(unquoted)
return unquote(wtree)
class HereDocLexer:
"""HereDocLexer delimits the here-document content: everything from the starting
newline (not included) up to the closing delimiter line (included).
"""
def __init__(self, op, delim):
assert op in ('<<', '<<-')
if not delim:
raise ShellSyntaxError('invalid here document delimiter %s' % str(delim))
self._op = op
self._delim = delim
self._buffer = []
self._token = []
def add(self, data, eof):
"""If the here-document was delimited, return a tuple (content, remaining).
Raise NeedMore() otherwise.
"""
self._buffer += list(data)
self._parse(eof)
token = ''.join(self._token)
remaining = ''.join(self._buffer)
self._token, self._remaining = [], []
return token, remaining
def _parse(self, eof):
while 1:
#Look for first unescaped newline. Quotes may be ignored
escaped = False
for i,c in enumerate(self._buffer):
if escaped:
escaped = False
elif c=='\\':
escaped = True
elif c=='\n':
break
else:
i = -1
if i==-1 or self._buffer[i]!='\n':
if not eof:
raise NeedMore()
#No more data, maybe the last line is closing delimiter
line = ''.join(self._buffer)
eol = ''
self._buffer[:] = []
else:
line = ''.join(self._buffer[:i])
eol = self._buffer[i]
self._buffer[:i+1] = []
if self._op=='<<-':
line = line.lstrip('\t')
if line==self._delim:
break
self._token += [line, eol]
if i==-1:
break
class Token:
#TODO: check this is still in use
OPERATOR = 'OPERATOR'
WORD = 'WORD'
def __init__(self):
self.value = ''
self.type = None
def __getitem__(self, key):
#Behave like a two elements tuple
if key==0:
return self.type
if key==1:
return self.value
raise IndexError(key)
class HereDoc:
def __init__(self, op, name=None):
self.op = op
self.name = name
self.pendings = []
TK_COMMA = 'COMMA'
TK_AMPERSAND = 'AMP'
TK_OP = 'OP'
TK_TOKEN = 'TOKEN'
TK_COMMENT = 'COMMENT'
TK_NEWLINE = 'NEWLINE'
TK_IONUMBER = 'IO_NUMBER'
TK_ASSIGNMENT = 'ASSIGNMENT_WORD'
TK_HERENAME = 'HERENAME'
class Lexer:
"""Main lexer.
Call add() until the script AST is returned.
"""
# Here-document handling makes the whole thing more complex because they basically
# force tokens to be reordered: here-content must come right after the operator
# and the here-document name, while some other tokens might be following the
# here-document expression on the same line.
#
# So, here-doc states are basically:
# *self._state==ST_NORMAL
# - self._heredoc.op is None: no here-document
# - self._heredoc.op is not None but name is: here-document operator matched,
# waiting for the document name/delimiter
# - self._heredoc.op and name are not None: here-document is ready, following
# tokens are being stored and will be pushed again when the document is
# completely parsed.
# *self._state==ST_HEREDOC
# - The here-document is being delimited by self._herelexer. Once it is done
# the content is pushed in front of the pending token list then all these
# tokens are pushed once again.
ST_NORMAL = 'ST_NORMAL'
ST_OP = 'ST_OP'
ST_BACKSLASH = 'ST_BACKSLASH'
ST_QUOTED = 'ST_QUOTED'
ST_COMMENT = 'ST_COMMENT'
ST_HEREDOC = 'ST_HEREDOC'
#Match end of backquote strings
RE_BACKQUOTE_END = re.compile(r'(?<!\\)(`)')
def __init__(self, parent_state = None):
self._input = []
self._pos = 0
self._token = ''
self._type = TK_TOKEN
self._state = self.ST_NORMAL
self._parent_state = parent_state
self._wordlexer = None
self._heredoc = HereDoc(None)
self._herelexer = None
### Following attributes are not used for delimiting token and can safely
### be changed after here-document detection (see _push_token)
# Count the number of tokens following a 'For' reserved word. Needed to
# return an 'In' reserved word if it comes in third place.
self._for_count = None
def add(self, data, eof=False):
"""Feed the lexer with data.
When eof is set to True, returns unconsumed data or raise if the lexer
is in the middle of a delimiting operation.
Raise NeedMore otherwise.
"""
self._input += list(data)
self._parse(eof)
self._input[:self._pos] = []
return ''.join(self._input)
def _parse(self, eof):
while self._state:
if self._pos>=len(self._input):
if not eof:
raise NeedMore()
elif self._state not in (self.ST_OP, self.ST_QUOTED, self.ST_HEREDOC):
#Delimit the current token and leave cleanly
self._push_token('')
break
else:
#Let the sublexer handle the eof themselves
pass
if self._state==self.ST_NORMAL:
self._parse_normal()
elif self._state==self.ST_COMMENT:
self._parse_comment()
elif self._state==self.ST_OP:
self._parse_op(eof)
elif self._state==self.ST_QUOTED:
self._parse_quoted(eof)
elif self._state==self.ST_HEREDOC:
self._parse_heredoc(eof)
else:
assert False, "Unknown state " + str(self._state)
if self._heredoc.op is not None:
raise ShellSyntaxError('missing here-document delimiter')
def _parse_normal(self):
c = self._input[self._pos]
if c=='\n':
self._push_token(c)
self._token = c
self._type = TK_NEWLINE
self._push_token('')
self._pos += 1
elif c in ('\\', '\'', '"', '`', '$'):
self._state = self.ST_QUOTED
elif is_partial_op(c):
self._push_token(c)
self._type = TK_OP
self._token += c
self._pos += 1
self._state = self.ST_OP
elif is_blank(c):
self._push_token(c)
#Discard blanks
self._pos += 1
elif self._token:
self._token += c
self._pos += 1
elif c=='#':
self._state = self.ST_COMMENT
self._type = TK_COMMENT
self._pos += 1
else:
self._pos += 1
self._token += c
def _parse_op(self, eof):
assert self._token
while 1:
if self._pos>=len(self._input):
if not eof:
raise NeedMore()
c = ''
else:
c = self._input[self._pos]
op = self._token + c
if c and is_partial_op(op):
#Still parsing an operator
self._token = op
self._pos += 1
else:
#End of operator
self._push_token(c)
self._state = self.ST_NORMAL
break
def _parse_comment(self):
while 1:
if self._pos>=len(self._input):
raise NeedMore()
c = self._input[self._pos]
if c=='\n':
#End of comment, do not consume the end of line
self._state = self.ST_NORMAL
break
else:
self._token += c
self._pos += 1
def _parse_quoted(self, eof):
"""Precondition: the starting backquote/dollar is still in the input queue."""
if not self._wordlexer:
self._wordlexer = WordLexer()
if self._pos<len(self._input):
#Transfer input queue character into the subparser
input = self._input[self._pos:]
self._pos += len(input)
wtree, remaining = self._wordlexer.add(input, eof)
self._wordlexer = None
self._token += wordtree_as_string(wtree)
#Put unparsed character back in the input queue
if remaining:
self._input[self._pos:self._pos] = list(remaining)
self._state = self.ST_NORMAL
def _parse_heredoc(self, eof):
assert not self._token
if self._herelexer is None:
self._herelexer = HereDocLexer(self._heredoc.op, self._heredoc.name)
if self._pos<len(self._input):
#Transfer input queue character into the subparser
input = self._input[self._pos:]
self._pos += len(input)
self._token, remaining = self._herelexer.add(input, eof)
#Reset here-document state
self._herelexer = None
heredoc, self._heredoc = self._heredoc, HereDoc(None)
if remaining:
self._input[self._pos:self._pos] = list(remaining)
self._state = self.ST_NORMAL
#Push pending tokens
heredoc.pendings[:0] = [(self._token, self._type, heredoc.name)]
for token, type, delim in heredoc.pendings:
self._token = token
self._type = type
self._push_token(delim)
def _push_token(self, delim):
if not self._token:
return 0
if self._heredoc.op is not None:
if self._heredoc.name is None:
#Here-document name
if self._type!=TK_TOKEN:
raise ShellSyntaxError("expecting here-document name, got '%s'" % self._token)
self._heredoc.name = unquote_wordtree(make_wordtree(self._token))
self._type = TK_HERENAME
else:
#Capture all tokens until the newline starting the here-document
if self._type==TK_NEWLINE:
assert self._state==self.ST_NORMAL
self._state = self.ST_HEREDOC
self._heredoc.pendings.append((self._token, self._type, delim))
self._token = ''
self._type = TK_TOKEN
return 1
# BEWARE: do not change parser state from here to the end of the function:
# when parsing between a here-document operator and the end of the line,
# tokens are stored in self._heredoc.pendings. Therefore, they will not
# reach the section below.
#Check operators
if self._type==TK_OP:
#False positive because of partial op matching
op = is_op(self._token)
if not op:
self._type = TK_TOKEN
else:
#Map to the specific operator
self._type = op
if self._token in ('<<', '<<-'):
#Done here rather than in _parse_op because there is no need
#to change the parser state since we are still waiting for
#the here-document name
if self._heredoc.op is not None:
raise ShellSyntaxError("syntax error near token '%s'" % self._token)
assert self._heredoc.op is None
self._heredoc.op = self._token
if self._type==TK_TOKEN:
if '=' in self._token and not delim:
if self._token.startswith('='):
#Token is a WORD... a TOKEN that is.
pass
else:
prev = self._token[:self._token.find('=')]
if is_name(prev):
self._type = TK_ASSIGNMENT
else:
#Just a token (unspecified)
pass
else:
reserved = get_reserved(self._token)
if reserved is not None:
if reserved=='In' and self._for_count!=2:
#Sorry, not a reserved word after all
pass
else:
self._type = reserved
if reserved in ('For', 'Case'):
self._for_count = 0
elif are_digits(self._token) and delim in ('<', '>'):
#Detect IO_NUMBER
self._type = TK_IONUMBER
elif self._token==';':
self._type = TK_COMMA
elif self._token=='&':
self._type = TK_AMPERSAND
elif self._type==TK_COMMENT:
#Comments are not part of sh grammar, ignore them
self._token = ''
self._type = TK_TOKEN
return 0
if self._for_count is not None:
#Track token count in 'For' expression to detect 'In' reserved words.
#Can only be in third position, no need to go beyond
self._for_count += 1
if self._for_count==3:
self._for_count = None
self.on_token((self._token, self._type))
self._token = ''
self._type = TK_TOKEN
return 1
def on_token(self, token):
raise NotImplementedError
tokens = [
TK_TOKEN,
# To silence yacc unused token warnings
# TK_COMMENT,
TK_NEWLINE,
TK_IONUMBER,
TK_ASSIGNMENT,
TK_HERENAME,
]
#Add specific operators
tokens += _OPERATORS.values()
#Add reserved words
tokens += _RESERVEDS.values()
class PLYLexer(Lexer):
"""Bridge Lexer and PLY lexer interface."""
def __init__(self):
Lexer.__init__(self)
self._tokens = []
self._current = 0
self.lineno = 0
def on_token(self, token):
value, type = token
self.lineno = 0
t = lex.LexToken()
t.value = value
t.type = type
t.lexer = self
t.lexpos = 0
t.lineno = 0
self._tokens.append(t)
def is_empty(self):
return not bool(self._tokens)
#PLY compliant interface
def token(self):
if self._current>=len(self._tokens):
return None
t = self._tokens[self._current]
self._current += 1
return t
def get_tokens(s):
"""Parse the input string and return a tuple (tokens, unprocessed) where
tokens is a list of parsed tokens and unprocessed is the part of the input
string left untouched by the lexer.
"""
lexer = PLYLexer()
untouched = lexer.add(s, True)
tokens = []
while 1:
token = lexer.token()
if token is None:
break
tokens.append(token)
tokens = [(t.value, t.type) for t in tokens]
return tokens, untouched
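# --- Added example (not part of the original file): a minimal demonstration
# --- of get_tokens(). The exact type strings come from the _OPERATORS and
# --- _RESERVEDS tables above, so the output shown here is indicative only.
if __name__ == '__main__':
    toks, remaining = get_tokens('FOO=bar echo hello; ls | wc -l\n')
    for value, toktype in toks:
        # Each entry pairs the token text with its grammar type, e.g. the
        # assignment type for FOO=bar and operator types for ';' and '|'.
        print '%-16s %r' % (toktype, value)
    print 'unparsed: %r' % remaining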


@@ -1,779 +0,0 @@
# pyshyacc.py - PLY grammar definition for pysh
#
# Copyright 2007 Patrick Mezard
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.
"""PLY grammar file.
"""
import os.path
import sys
import pyshlex
tokens = pyshlex.tokens
from ply import yacc
import sherrors
class IORedirect:
def __init__(self, op, filename, io_number=None):
self.op = op
self.filename = filename
self.io_number = io_number
class HereDocument:
def __init__(self, op, name, content, io_number=None):
self.op = op
self.name = name
self.content = content
self.io_number = io_number
def make_io_redirect(p):
"""Make an IORedirect instance from the input 'io_redirect' production."""
name, io_number, io_target = p
assert name=='io_redirect'
if io_target[0]=='io_file':
io_type, io_op, io_file = io_target
return IORedirect(io_op, io_file, io_number)
elif io_target[0]=='io_here':
io_type, io_op, io_name, io_content = io_target
return HereDocument(io_op, io_name, io_content, io_number)
else:
assert False, "Invalid IO redirection token %s" % repr(io_type)
class SimpleCommand:
"""
assigns contains (name, value) pairs.
"""
def __init__(self, words, redirs, assigns):
self.words = list(words)
self.redirs = list(redirs)
self.assigns = list(assigns)
class Pipeline:
def __init__(self, commands, reverse_status=False):
self.commands = list(commands)
assert self.commands #Grammar forbids this
self.reverse_status = reverse_status
class AndOr:
def __init__(self, op, left, right):
self.op = str(op)
self.left = left
self.right = right
class ForLoop:
def __init__(self, name, items, cmds):
self.name = str(name)
self.items = list(items)
self.cmds = list(cmds)
class WhileLoop:
def __init__(self, condition, cmds):
self.condition = list(condition)
self.cmds = list(cmds)
class UntilLoop:
def __init__(self, condition, cmds):
self.condition = list(condition)
self.cmds = list(cmds)
class FunDef:
def __init__(self, name, body):
self.name = str(name)
self.body = body
class BraceGroup:
def __init__(self, cmds):
self.cmds = list(cmds)
class IfCond:
def __init__(self, cond, if_cmds, else_cmds):
self.cond = list(cond)
self.if_cmds = if_cmds
self.else_cmds = else_cmds
class Case:
def __init__(self, name, items):
self.name = name
self.items = items
class SubShell:
def __init__(self, cmds):
self.cmds = cmds
class RedirectList:
def __init__(self, cmd, redirs):
self.cmd = cmd
self.redirs = list(redirs)
def get_production(productions, ptype):
"""productions must be a list of production tuples like (name, obj) where
name is the production string identifier.
Return the first production named 'ptype'. Raise KeyError if none can be
found.
"""
for production in productions:
if production is not None and production[0]==ptype:
return production
raise KeyError(ptype)
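# --- Added example (not part of the original file): how get_production()
# --- is used on the (name, ...) tuples the grammar actions below build.
# --- The sample values are hypothetical.
def _get_production_example():
    prods = [None, ('term', 'x'), ('separator', ';')]
    assert get_production(prods, 'separator') == ('separator', ';')
    try:
        get_production(prods, 'do_group')
    except KeyError:
        pass  # raised when no matching production exists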
#-------------------------------------------------------------------------------
# PLY grammar definition
#-------------------------------------------------------------------------------
def p_multiple_commands(p):
"""multiple_commands : newline_sequence
| complete_command
| multiple_commands complete_command"""
if len(p)==2:
if p[1] is not None:
p[0] = [p[1]]
else:
p[0] = []
else:
p[0] = p[1] + [p[2]]
def p_complete_command(p):
"""complete_command : list separator
| list"""
if len(p)==3 and p[2] and p[2][1] == '&':
p[0] = ('async', p[1])
else:
p[0] = p[1]
def p_list(p):
"""list : list separator_op and_or
| and_or"""
if len(p)==2:
p[0] = [p[1]]
else:
#if p[2]!=';':
# raise NotImplementedError('AND-OR list asynchronous execution is not implemented')
p[0] = p[1] + [p[3]]
def p_and_or(p):
"""and_or : pipeline
| and_or AND_IF linebreak pipeline
| and_or OR_IF linebreak pipeline"""
if len(p)==2:
p[0] = p[1]
else:
p[0] = ('and_or', AndOr(p[2], p[1], p[4]))
def p_maybe_bang_word(p):
"""maybe_bang_word : Bang"""
p[0] = ('maybe_bang_word', p[1])
def p_pipeline(p):
"""pipeline : pipe_sequence
| bang_word pipe_sequence"""
if len(p)==3:
p[0] = ('pipeline', Pipeline(p[2][1:], True))
else:
p[0] = ('pipeline', Pipeline(p[1][1:]))
def p_pipe_sequence(p):
"""pipe_sequence : command
| pipe_sequence PIPE linebreak command"""
if len(p)==2:
p[0] = ['pipe_sequence', p[1]]
else:
p[0] = p[1] + [p[4]]
def p_command(p):
"""command : simple_command
| compound_command
| compound_command redirect_list
| function_definition"""
if p[1][0] in ( 'simple_command',
'for_clause',
'while_clause',
'until_clause',
'case_clause',
'if_clause',
'function_definition',
'subshell',
'brace_group',):
if len(p) == 2:
p[0] = p[1]
else:
p[0] = ('redirect_list', RedirectList(p[1], p[2][1:]))
else:
raise NotImplementedError('%s command is not implemented' % repr(p[1][0]))
def p_compound_command(p):
"""compound_command : brace_group
| subshell
| for_clause
| case_clause
| if_clause
| while_clause
| until_clause"""
p[0] = p[1]
def p_subshell(p):
"""subshell : LPARENS compound_list RPARENS"""
p[0] = ('subshell', SubShell(p[2][1:]))
def p_compound_list(p):
"""compound_list : term
| newline_list term
| term separator
| newline_list term separator"""
productions = p[1:]
try:
sep = get_production(productions, 'separator')
if sep[1]!=';':
raise NotImplementedError()
except KeyError:
pass
term = get_production(productions, 'term')
p[0] = ['compound_list'] + term[1:]
def p_term(p):
"""term : term separator and_or
| and_or"""
if len(p)==2:
p[0] = ['term', p[1]]
else:
if p[2] is not None and p[2][1] == '&':
p[0] = ['term', ('async', p[1][1:])] + [p[3]]
else:
p[0] = p[1] + [p[3]]
def p_maybe_for_word(p):
# Rearrange 'For' priority wrt TOKEN. See p_for_word
"""maybe_for_word : For"""
p[0] = ('maybe_for_word', p[1])
def p_for_clause(p):
"""for_clause : for_word name linebreak do_group
| for_word name linebreak in sequential_sep do_group
| for_word name linebreak in wordlist sequential_sep do_group"""
productions = p[1:]
do_group = get_production(productions, 'do_group')
try:
items = get_production(productions, 'in')[1:]
except KeyError:
raise NotImplementedError('"in" omission is not implemented')
try:
items = get_production(productions, 'wordlist')[1:]
except KeyError:
items = []
name = p[2]
p[0] = ('for_clause', ForLoop(name, items, do_group[1:]))
def p_name(p):
"""name : token""" #Was NAME instead of token
p[0] = p[1]
def p_in(p):
"""in : In"""
p[0] = ('in', p[1])
def p_wordlist(p):
"""wordlist : wordlist token
| token"""
if len(p)==2:
p[0] = ['wordlist', ('TOKEN', p[1])]
else:
p[0] = p[1] + [('TOKEN', p[2])]
def p_case_clause(p):
"""case_clause : Case token linebreak in linebreak case_list Esac
| Case token linebreak in linebreak case_list_ns Esac
| Case token linebreak in linebreak Esac"""
if len(p) < 8:
items = []
else:
items = p[6][1:]
name = p[2]
p[0] = ('case_clause', Case(name, [c[1] for c in items]))
def p_case_list_ns(p):
"""case_list_ns : case_list case_item_ns
| case_item_ns"""
p_case_list(p)
def p_case_list(p):
"""case_list : case_list case_item
| case_item"""
if len(p)==2:
p[0] = ['case_list', p[1]]
else:
p[0] = p[1] + [p[2]]
def p_case_item_ns(p):
"""case_item_ns : pattern RPARENS linebreak
| pattern RPARENS compound_list linebreak
| LPARENS pattern RPARENS linebreak
| LPARENS pattern RPARENS compound_list linebreak"""
p_case_item(p)
def p_case_item(p):
"""case_item : pattern RPARENS linebreak DSEMI linebreak
| pattern RPARENS compound_list DSEMI linebreak
| LPARENS pattern RPARENS linebreak DSEMI linebreak
| LPARENS pattern RPARENS compound_list DSEMI linebreak"""
if len(p) < 7:
name = p[1][1:]
else:
name = p[2][1:]
try:
cmds = get_production(p[1:], "compound_list")[1:]
except KeyError:
cmds = []
p[0] = ('case_item', (name, cmds))
def p_pattern(p):
"""pattern : token
| pattern PIPE token"""
if len(p)==2:
p[0] = ['pattern', ('TOKEN', p[1])]
else:
p[0] = p[1] + [('TOKEN', p[2])]
def p_maybe_if_word(p):
# Rearrange 'If' priority wrt TOKEN. See p_if_word
"""maybe_if_word : If"""
p[0] = ('maybe_if_word', p[1])
def p_maybe_then_word(p):
# Rearrange 'Then' priority wrt TOKEN. See p_then_word
"""maybe_then_word : Then"""
p[0] = ('maybe_then_word', p[1])
def p_if_clause(p):
"""if_clause : if_word compound_list then_word compound_list else_part Fi
| if_word compound_list then_word compound_list Fi"""
else_part = []
if len(p)==7:
else_part = p[5]
p[0] = ('if_clause', IfCond(p[2][1:], p[4][1:], else_part))
def p_else_part(p):
"""else_part : Elif compound_list then_word compound_list else_part
| Elif compound_list then_word compound_list
| Else compound_list"""
if len(p)==3:
p[0] = p[2][1:]
else:
else_part = []
if len(p)==6:
else_part = p[5]
p[0] = ('elif', IfCond(p[2][1:], p[4][1:], else_part))
def p_while_clause(p):
"""while_clause : While compound_list do_group"""
p[0] = ('while_clause', WhileLoop(p[2][1:], p[3][1:]))
def p_maybe_until_word(p):
# Rearrange 'Until' priority wrt TOKEN. See p_until_word
"""maybe_until_word : Until"""
p[0] = ('maybe_until_word', p[1])
def p_until_clause(p):
"""until_clause : until_word compound_list do_group"""
p[0] = ('until_clause', UntilLoop(p[2][1:], p[3][1:]))
def p_function_definition(p):
"""function_definition : fname LPARENS RPARENS linebreak function_body"""
p[0] = ('function_definition', FunDef(p[1], p[5]))
def p_function_body(p):
"""function_body : compound_command
| compound_command redirect_list"""
if len(p)!=2:
raise NotImplementedError('functions redirections lists are not implemented')
p[0] = p[1]
def p_fname(p):
"""fname : TOKEN""" #Was NAME instead of token
p[0] = p[1]
def p_brace_group(p):
"""brace_group : Lbrace compound_list Rbrace"""
p[0] = ('brace_group', BraceGroup(p[2][1:]))
def p_maybe_done_word(p):
#See p_assignment_word for details.
"""maybe_done_word : Done"""
p[0] = ('maybe_done_word', p[1])
def p_maybe_do_word(p):
"""maybe_do_word : Do"""
p[0] = ('maybe_do_word', p[1])
def p_do_group(p):
"""do_group : do_word compound_list done_word"""
#Do group contains a list of AndOr
p[0] = ['do_group'] + p[2][1:]
def p_simple_command(p):
"""simple_command : cmd_prefix cmd_word cmd_suffix
| cmd_prefix cmd_word
| cmd_prefix
| cmd_name cmd_suffix
| cmd_name"""
words, redirs, assigns = [], [], []
for e in p[1:]:
name = e[0]
if name in ('cmd_prefix', 'cmd_suffix'):
for sube in e[1:]:
subname = sube[0]
if subname=='io_redirect':
redirs.append(make_io_redirect(sube))
elif subname=='ASSIGNMENT_WORD':
assigns.append(sube)
else:
words.append(sube)
elif name in ('cmd_word', 'cmd_name'):
words.append(e)
cmd = SimpleCommand(words, redirs, assigns)
p[0] = ('simple_command', cmd)
def p_cmd_name(p):
"""cmd_name : TOKEN"""
p[0] = ('cmd_name', p[1])
def p_cmd_word(p):
"""cmd_word : token"""
p[0] = ('cmd_word', p[1])
def p_maybe_assignment_word(p):
#See p_assignment_word for details.
"""maybe_assignment_word : ASSIGNMENT_WORD"""
p[0] = ('maybe_assignment_word', p[1])
def p_cmd_prefix(p):
"""cmd_prefix : io_redirect
| cmd_prefix io_redirect
| assignment_word
| cmd_prefix assignment_word"""
try:
prefix = get_production(p[1:], 'cmd_prefix')
except KeyError:
prefix = ['cmd_prefix']
try:
value = get_production(p[1:], 'assignment_word')[1]
value = ('ASSIGNMENT_WORD', value.split('=', 1))
except KeyError:
value = get_production(p[1:], 'io_redirect')
p[0] = prefix + [value]
def p_cmd_suffix(p):
"""cmd_suffix : io_redirect
| cmd_suffix io_redirect
| token
| cmd_suffix token
| maybe_for_word
| cmd_suffix maybe_for_word
| maybe_done_word
| cmd_suffix maybe_done_word
| maybe_do_word
| cmd_suffix maybe_do_word
| maybe_until_word
| cmd_suffix maybe_until_word
| maybe_assignment_word
| cmd_suffix maybe_assignment_word
| maybe_if_word
| cmd_suffix maybe_if_word
| maybe_then_word
| cmd_suffix maybe_then_word
| maybe_bang_word
| cmd_suffix maybe_bang_word"""
try:
suffix = get_production(p[1:], 'cmd_suffix')
token = p[2]
except KeyError:
suffix = ['cmd_suffix']
token = p[1]
if isinstance(token, tuple):
if token[0]=='io_redirect':
p[0] = suffix + [token]
else:
#Convert maybe_* to TOKEN if necessary
p[0] = suffix + [('TOKEN', token[1])]
else:
p[0] = suffix + [('TOKEN', token)]
def p_redirect_list(p):
"""redirect_list : io_redirect
| redirect_list io_redirect"""
if len(p) == 2:
p[0] = ['redirect_list', make_io_redirect(p[1])]
else:
p[0] = p[1] + [make_io_redirect(p[2])]
def p_io_redirect(p):
"""io_redirect : io_file
| IO_NUMBER io_file
| io_here
| IO_NUMBER io_here"""
if len(p)==3:
p[0] = ('io_redirect', p[1], p[2])
else:
p[0] = ('io_redirect', None, p[1])
def p_io_file(p):
#Return the tuple (operator, filename)
"""io_file : LESS filename
| LESSAND filename
| GREATER filename
| GREATAND filename
| DGREAT filename
| LESSGREAT filename
| CLOBBER filename"""
#Extract the filename string from the 'filename' production
p[0] = ('io_file', p[1], p[2][1])
def p_filename(p):
#Return the filename
"""filename : TOKEN"""
p[0] = ('filename', p[1])
def p_io_here(p):
"""io_here : DLESS here_end
| DLESSDASH here_end"""
p[0] = ('io_here', p[1], p[2][1], p[2][2])
def p_here_end(p):
"""here_end : HERENAME TOKEN"""
p[0] = ('here_document', p[1], p[2])
def p_newline_sequence(p):
# Nothing in the grammar can handle leading NEWLINE productions, so add
# this one with the lowest possible priority relative to newline_list.
"""newline_sequence : newline_list"""
p[0] = None
def p_newline_list(p):
"""newline_list : NEWLINE
| newline_list NEWLINE"""
p[0] = None
def p_linebreak(p):
"""linebreak : newline_list
| empty"""
p[0] = None
def p_separator_op(p):
"""separator_op : COMMA
| AMP"""
p[0] = p[1]
def p_separator(p):
"""separator : separator_op linebreak
| newline_list"""
if len(p)==2:
#Ignore newlines
p[0] = None
else:
#Keep the separator operator
p[0] = ('separator', p[1])
def p_sequential_sep(p):
"""sequential_sep : COMMA linebreak
| newline_list"""
p[0] = None
# Low priority TOKEN => for_word conversion.
# Let maybe_for_word be used as a token when necessary in higher priority
# rules.
def p_for_word(p):
"""for_word : maybe_for_word"""
p[0] = p[1]
def p_if_word(p):
"""if_word : maybe_if_word"""
p[0] = p[1]
def p_then_word(p):
"""then_word : maybe_then_word"""
p[0] = p[1]
def p_done_word(p):
"""done_word : maybe_done_word"""
p[0] = p[1]
def p_do_word(p):
"""do_word : maybe_do_word"""
p[0] = p[1]
def p_until_word(p):
"""until_word : maybe_until_word"""
p[0] = p[1]
def p_assignment_word(p):
"""assignment_word : maybe_assignment_word"""
p[0] = ('assignment_word', p[1][1])
def p_bang_word(p):
"""bang_word : maybe_bang_word"""
p[0] = ('bang_word', p[1][1])
def p_token(p):
"""token : TOKEN
| Fi"""
p[0] = p[1]
def p_empty(p):
'empty :'
p[0] = None
# Error rule for syntax errors
def p_error(p):
msg = []
w = msg.append
w('%r\n' % p)
w('followed by:\n')
for i in range(5):
n = yacc.token()
if not n:
break
w(' %r\n' % n)
raise sherrors.ShellSyntaxError(''.join(msg))
# Build the parser
try:
import pyshtables
except ImportError:
outputdir = os.path.dirname(__file__)
if not os.access(outputdir, os.W_OK):
outputdir = ''
yacc.yacc(tabmodule = 'pyshtables', outputdir = outputdir, debug = 0)
else:
yacc.yacc(tabmodule = 'pysh.pyshtables', write_tables = 0, debug = 0)
def parse(input, eof=False, debug=False):
"""Parse a whole script at once and return the generated AST and unconsumed
data in a tuple.
NOTE: eof is probably meaningless for now, the parser being unable to work
in pull mode. It should be set to True.
"""
lexer = pyshlex.PLYLexer()
remaining = lexer.add(input, eof)
if lexer.is_empty():
return [], remaining
if debug:
debug = 2
return yacc.parse(lexer=lexer, debug=debug), remaining
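# --- Added usage sketch (not part of the original file): driving parse() as
# --- its docstring describes, with eof=True per the NOTE above. The input
# --- line is an arbitrary example.
def _parse_example():
    cmds, remaining = parse('echo hello && echo world\n', True)
    print_commands(cmds)   # pretty-printer defined below
    return cmds, remaining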
#-------------------------------------------------------------------------------
# AST rendering helpers
#-------------------------------------------------------------------------------
def format_commands(v):
"""Return a tree made of strings and lists. Make command trees easier to
display.
"""
if isinstance(v, list):
return [format_commands(c) for c in v]
if isinstance(v, tuple):
if len(v)==2 and isinstance(v[0], str) and not isinstance(v[1], str):
if v[0] == 'async':
return ['AsyncList', map(format_commands, v[1])]
else:
#Avoid decomposing tuples like ('pipeline', Pipeline(...))
return format_commands(v[1])
return format_commands(list(v))
elif isinstance(v, IfCond):
name = ['IfCond']
name += ['if', map(format_commands, v.cond)]
name += ['then', map(format_commands, v.if_cmds)]
name += ['else', map(format_commands, v.else_cmds)]
return name
elif isinstance(v, ForLoop):
name = ['ForLoop']
name += [repr(v.name)+' in ', map(str, v.items)]
name += ['commands', map(format_commands, v.cmds)]
return name
elif isinstance(v, AndOr):
return [v.op, format_commands(v.left), format_commands(v.right)]
elif isinstance(v, Pipeline):
name = 'Pipeline'
if v.reverse_status:
name = '!' + name
return [name, format_commands(v.commands)]
elif isinstance(v, Case):
name = ['Case']
name += [v.name, format_commands(v.items)]
return name
elif isinstance(v, SimpleCommand):
name = ['SimpleCommand']
if v.words:
name += ['words', map(str, v.words)]
if v.assigns:
assigns = [tuple(a[1]) for a in v.assigns]
name += ['assigns', map(str, assigns)]
if v.redirs:
name += ['redirs', map(format_commands, v.redirs)]
return name
elif isinstance(v, RedirectList):
name = ['RedirectList']
if v.redirs:
name += ['redirs', map(format_commands, v.redirs)]
name += ['command', format_commands(v.cmd)]
return name
elif isinstance(v, IORedirect):
return ' '.join(map(str, (v.io_number, v.op, v.filename)))
elif isinstance(v, HereDocument):
return ' '.join(map(str, (v.io_number, v.op, repr(v.name), repr(v.content))))
elif isinstance(v, SubShell):
return ['SubShell', map(format_commands, v.cmds)]
else:
return repr(v)
def print_commands(cmds, output=sys.stdout):
"""Pretty print a command tree."""
def print_tree(cmd, spaces, output):
if isinstance(cmd, list):
for c in cmd:
print_tree(c, spaces + 3, output)
else:
print >>output, ' '*spaces + str(cmd)
formatted = format_commands(cmds)
print_tree(formatted, 0, output)
def stringify_commands(cmds):
"""Serialize a command tree as a string.
Returned string is not pretty and is currently used for unit tests only.
"""
def stringify(value):
output = []
if isinstance(value, list):
formatted = []
for v in value:
formatted.append(stringify(v))
formatted = ' '.join(formatted)
output.append(''.join(['<', formatted, '>']))
else:
output.append(value)
return ' '.join(output)
return stringify(format_commands(cmds))
def visit_commands(cmds, callable):
"""Visit the command tree and execute callable on every Pipeline and
SimpleCommand instances.
"""
if isinstance(cmds, (tuple, list)):
map(lambda c: visit_commands(c,callable), cmds)
elif isinstance(cmds, (Pipeline, SimpleCommand)):
callable(cmds)
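# --- Added usage sketch (not part of the original file): visit_commands()
# --- firing a callback on every SimpleCommand to collect its word entries.
def _visit_commands_example():
    words = []
    def collect(cmd):
        if isinstance(cmd, SimpleCommand):
            words.extend(cmd.words)   # entries look like ('cmd_name', 'ls')
    cmds, _ = parse('ls | wc -l\n', True)
    visit_commands(cmds, collect)
    return words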


@@ -1,41 +0,0 @@
# sherrors.py - shell errors and signals
#
# Copyright 2007 Patrick Mezard
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.
"""Define shell exceptions and error codes.
"""
class ShellError(Exception):
pass
class ShellSyntaxError(ShellError):
pass
class UtilityError(ShellError):
"""Raised upon utility syntax error (option or operand error)."""
pass
class ExpansionError(ShellError):
pass
class CommandNotFound(ShellError):
"""Specified command was not found."""
pass
class RedirectionError(ShellError):
pass
class VarAssignmentError(ShellError):
"""Variable assignment error."""
pass
class ExitSignal(ShellError):
"""Exit signal."""
pass
class ReturnSignal(ShellError):
"""Exit signal."""
pass


@@ -1,77 +0,0 @@
# subprocess - Subprocesses with accessible I/O streams
#
# For more information about this module, see PEP 324.
#
# This module should remain compatible with Python 2.2, see PEP 291.
#
# Copyright (c) 2003-2005 by Peter Astrand <astrand@lysator.liu.se>
#
# Licensed to PSF under a Contributor Agreement.
# See http://www.python.org/2.4/license for licensing details.
def list2cmdline(seq):
"""
Translate a sequence of arguments into a command line
string, using the same rules as the MS C runtime:
1) Arguments are delimited by white space, which is either a
space or a tab.
2) A string surrounded by double quotation marks is
interpreted as a single argument, regardless of white space
contained within. A quoted string can be embedded in an
argument.
3) A double quotation mark preceded by a backslash is
interpreted as a literal double quotation mark.
4) Backslashes are interpreted literally, unless they
immediately precede a double quotation mark.
5) If backslashes immediately precede a double quotation mark,
every pair of backslashes is interpreted as a literal
backslash. If the number of backslashes is odd, the last
backslash escapes the next double quotation mark as
described in rule 3.
"""
# See
# http://msdn.microsoft.com/library/en-us/vccelng/htm/progs_12.asp
result = []
needquote = False
for arg in seq:
bs_buf = []
# Add a space to separate this argument from the others
if result:
result.append(' ')
needquote = (" " in arg) or ("\t" in arg) or ("|" in arg) or arg == ""
if needquote:
result.append('"')
for c in arg:
if c == '\\':
# Don't know if we need to double yet.
bs_buf.append(c)
elif c == '"':
# Double backslashes.
result.append('\\' * len(bs_buf)*2)
bs_buf = []
result.append('\\"')
else:
# Normal char
if bs_buf:
result.extend(bs_buf)
bs_buf = []
result.append(c)
# Add remaining backslashes, if any.
if bs_buf:
result.extend(bs_buf)
if needquote:
result.extend(bs_buf)
result.append('"')
return ''.join(result)
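# --- Added worked example (not part of the original file), checking the
# --- quoting rules listed in the docstring above:
def _list2cmdline_examples():
    assert list2cmdline(['a b', 'c']) == '"a b" c'   # rules 1 and 2
    assert list2cmdline(['ab"c']) == 'ab\\"c'        # rule 3
    assert list2cmdline(['a\\', 'b']) == 'a\\ b'     # rule 4
    assert list2cmdline(['']) == '""'                # empty args get quoted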

File diff suppressed because it is too large


@@ -1,203 +0,0 @@
#
# BitBake 'dummy' Passthrough Server
#
# Copyright (C) 2006 - 2007 Michael 'Mickey' Lauer
# Copyright (C) 2006 - 2008 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
This module implements a passthrough server for BitBake.
Use register_idle_function() to add a function which the server
calls from within idle_commands when no requests are pending. Make sure
that those functions are non-blocking or else you will introduce latency
in the server's main loop.
"""
import time
import bb
import signal
DEBUG = False
import inspect, select
class BitBakeServerCommands():
def __init__(self, server):
self.server = server
def runCommand(self, command):
"""
Run a cooker command on the server
"""
#print "Running Command %s" % command
return self.cooker.command.runCommand(command)
def terminateServer(self):
"""
Trigger the server to quit
"""
self.server.server_exit()
#print "Server (cooker) exitting"
return
def ping(self):
"""
Dummy method which can be used to check the server is still alive
"""
return True
eventQueue = []
class BBUIEventQueue:
class event:
def __init__(self, parent):
self.parent = parent
@staticmethod
def send(event):
bb.server.none.eventQueue.append(event)
@staticmethod
def quit():
return
def __init__(self, BBServer):
self.eventQueue = bb.server.none.eventQueue
self.BBServer = BBServer
self.EventHandle = bb.event.register_UIHhandler(self)
def __popEvent(self):
if len(self.eventQueue) == 0:
return None
return self.eventQueue.pop(0)
def getEvent(self):
if len(self.eventQueue) == 0:
self.BBServer.idle_commands(0)
return self.__popEvent()
def waitEvent(self, delay):
event = self.__popEvent()
if event:
return event
self.BBServer.idle_commands(delay)
return self.__popEvent()
def queue_event(self, event):
self.eventQueue.append(event)
def system_quit( self ):
bb.event.unregister_UIHhandler(self.EventHandle)
# Dummy signal handler to ensure we break out of sleep upon SIGCHLD
def chldhandler(signum, stackframe):
pass
class BitBakeNoneServer():
# remove this when you're done with debugging
# allow_reuse_address = True
def __init__(self):
self._idlefuns = {}
self.commands = BitBakeServerCommands(self)
def addcooker(self, cooker):
self.cooker = cooker
self.commands.cooker = cooker
def register_idle_function(self, function, data):
"""Register a function to be called while the server is idle"""
assert hasattr(function, '__call__')
self._idlefuns[function] = data
def idle_commands(self, delay):
#print "Idle queue length %s" % len(self._idlefuns)
#print "Idle timeout, running idle functions"
#if len(self._idlefuns) == 0:
nextsleep = delay
for function, data in self._idlefuns.items():
try:
retval = function(self, data, False)
#print "Idle function returned %s" % (retval)
if retval is False:
del self._idlefuns[function]
elif retval is True:
nextsleep = None
elif nextsleep is None:
continue
elif retval < nextsleep:
nextsleep = retval
except SystemExit:
raise
except:
import traceback
traceback.print_exc()
self.commands.runCommand(["stateShutdown"])
pass
if nextsleep is not None:
#print "Sleeping for %s (%s)" % (nextsleep, delay)
signal.signal(signal.SIGCHLD, chldhandler)
time.sleep(nextsleep)
signal.signal(signal.SIGCHLD, signal.SIG_DFL)
def server_exit(self):
# Tell idle functions we're exiting
for function, data in self._idlefuns.items():
try:
retval = function(self, data, True)
except:
pass
class BitBakeServerConnection():
def __init__(self, server):
self.server = server.server
self.connection = self.server.commands
self.events = bb.server.none.BBUIEventQueue(self.server)
for event in bb.event.ui_queue:
self.events.queue_event(event)
def terminate(self):
try:
self.events.system_quit()
except:
pass
try:
self.connection.terminateServer()
except:
pass
class BitBakeServer(object):
def initServer(self):
self.server = BitBakeNoneServer()
def addcooker(self, cooker):
self.cooker = cooker
self.server.addcooker(cooker)
def getServerIdleCB(self):
return self.server.register_idle_function
def saveConnectionDetails(self):
return
def detach(self, cooker_logfile):
self.logfile = cooker_logfile
def establishConnection(self):
self.connection = BitBakeServerConnection(self)
return self.connection
def launchUI(self, uifunc, *args):
return bb.cooker.server_main(self.cooker, uifunc, *args)
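# --- Added sketch (not part of the original file): the shape of a callback
# --- for register_idle_function() above. idle_commands() calls it as
# --- function(server, data, False) and server_exit() passes True; the
# --- 'pending' payload below is a made-up example.
def _example_idle_function(server, data, abort):
    if abort:
        return False    # unregister, the server is exiting
    if isinstance(data, dict) and data.get('pending'):
        return True     # busy: skip the sleep, run again immediately
    return 0.5          # idle: run again within half a second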


@@ -1,270 +0,0 @@
#
# BitBake Process based server.
#
# Copyright (C) 2010 Bob Foerster <robert@erafx.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
This module implements a multiprocessing.Process based server for bitbake.
"""
import bb
import bb.event
import itertools
import logging
import multiprocessing
import os
import signal
import sys
import time
from Queue import Empty
from multiprocessing import Event, Process, util, Queue, Pipe, queues
logger = logging.getLogger('BitBake')
class ServerCommunicator():
def __init__(self, connection):
self.connection = connection
def runCommand(self, command):
# @todo try/except
self.connection.send(command)
while True:
# don't let the user ctrl-c while we're waiting for a response
try:
if self.connection.poll(.5):
return self.connection.recv()
else:
return None
except KeyboardInterrupt:
pass
class EventAdapter():
"""
Adapter to wrap our event queue since the caller (bb.event) expects to
call a send() method, but our actual queue only has put()
"""
def __init__(self, queue):
self.queue = queue
def send(self, event):
try:
self.queue.put(event)
except Exception as err:
print("EventAdapter puked: %s" % str(err))
class ProcessServer(Process):
profile_filename = "profile.log"
profile_processed_filename = "profile.log.processed"
def __init__(self, command_channel, event_queue):
Process.__init__(self)
self.command_channel = command_channel
self.event_queue = event_queue
self.event = EventAdapter(event_queue)
self._idlefunctions = {}
self.quit = False
self.keep_running = Event()
self.keep_running.set()
def register_idle_function(self, function, data):
"""Register a function to be called while the server is idle"""
assert hasattr(function, '__call__')
self._idlefunctions[function] = data
def run(self):
for event in bb.event.ui_queue:
self.event_queue.put(event)
self.event_handle = bb.event.register_UIHhandler(self)
bb.cooker.server_main(self.cooker, self.main)
def main(self):
# Ignore SIGINT within the server, as all SIGINT handling is done by
# the UI and communicated to us
signal.signal(signal.SIGINT, signal.SIG_IGN)
while self.keep_running.is_set():
try:
if self.command_channel.poll():
command = self.command_channel.recv()
self.runCommand(command)
self.idle_commands(.1)
except Exception:
logger.exception('Running command %s', command)
self.event_queue.cancel_join_thread()
bb.event.unregister_UIHhandler(self.event_handle)
self.command_channel.close()
self.cooker.stop()
self.idle_commands(.1)
def idle_commands(self, delay):
nextsleep = delay
for function, data in self._idlefunctions.items():
try:
retval = function(self, data, False)
if retval is False:
del self._idlefunctions[function]
elif retval is True:
nextsleep = None
elif nextsleep is None:
continue
elif retval < nextsleep:
nextsleep = retval
except SystemExit:
raise
except Exception:
logger.exception('Running idle function')
if nextsleep is not None:
time.sleep(nextsleep)
def runCommand(self, command):
"""
Run a cooker command on the server
"""
self.command_channel.send(self.cooker.command.runCommand(command))
def stop(self):
self.keep_running.clear()
def bootstrap_2_6_6(self):
"""Pulled from python 2.6.6. Needed to ensure we have the fix from
http://bugs.python.org/issue5313 when running on python version 2.6.2
or lower."""
try:
self._children = set()
self._counter = itertools.count(1)
try:
sys.stdin.close()
sys.stdin = open(os.devnull)
except (OSError, ValueError):
pass
multiprocessing._current_process = self
util._finalizer_registry.clear()
util._run_after_forkers()
util.info('child process calling self.run()')
try:
self.run()
exitcode = 0
finally:
util._exit_function()
except SystemExit as e:
if not e.args:
exitcode = 1
elif type(e.args[0]) is int:
exitcode = e.args[0]
else:
sys.stderr.write(e.args[0] + '\n')
sys.stderr.flush()
exitcode = 1
except:
exitcode = 1
import traceback
sys.stderr.write('Process %s:\n' % self.name)
sys.stderr.flush()
traceback.print_exc()
util.info('process exiting with exitcode %d' % exitcode)
return exitcode
# Python versions 2.6.0 through 2.6.2 suffer from a multiprocessing bug
# which can result in a bitbake server hang during the parsing process
if (2, 6, 0) <= sys.version_info < (2, 6, 3):
_bootstrap = bootstrap_2_6_6
class BitBakeServerConnection():
def __init__(self, server):
self.server = server
self.procserver = server.server
self.connection = ServerCommunicator(server.ui_channel)
self.events = server.event_queue
def terminate(self, force = False):
signal.signal(signal.SIGINT, signal.SIG_IGN)
self.procserver.stop()
if force:
self.procserver.join(0.5)
if self.procserver.is_alive():
self.procserver.terminate()
self.procserver.join()
else:
self.procserver.join()
while True:
try:
event = self.server.event_queue.get(block=False)
except (Empty, IOError):
break
if isinstance(event, logging.LogRecord):
logger.handle(event)
self.server.ui_channel.close()
self.server.event_queue.close()
if force:
sys.exit(1)
# Wrap Queue to provide API which isn't server implementation specific
class ProcessEventQueue(multiprocessing.queues.Queue):
def waitEvent(self, timeout):
try:
return self.get(True, timeout)
except Empty:
return None
def getEvent(self):
try:
return self.get(False)
except Empty:
return None
class BitBakeServer(object):
def initServer(self):
# establish communication channels. We use bidirectional pipes for
# ui <--> server command/response pairs
# and a queue for server -> ui event notifications
#
self.ui_channel, self.server_channel = Pipe()
self.event_queue = ProcessEventQueue(0)
self.server = ProcessServer(self.server_channel, self.event_queue)
def addcooker(self, cooker):
self.cooker = cooker
self.server.cooker = cooker
def getServerIdleCB(self):
return self.server.register_idle_function
def saveConnectionDetails(self):
return
def detach(self, cooker_logfile):
self.server.start()
return
def establishConnection(self):
self.connection = BitBakeServerConnection(self)
signal.signal(signal.SIGTERM, lambda i, s: self.connection.terminate(force=True))
return self.connection
def launchUI(self, uifunc, *args):
return bb.cooker.server_main(self.cooker, uifunc, *args)
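# --- Added sketch (not part of the original file): wiring the classes above
# --- together as initServer()'s comment describes. 'cooker' is assumed to
# --- be a fully constructed bb.cooker instance; "stateShutdown" is the
# --- command name also used by the none server in this change.
def _example_startup(cooker):
    server = BitBakeServer()
    server.initServer()            # Pipe for commands, Queue for events
    server.addcooker(cooker)
    server.detach(None)            # starts the ProcessServer child
    connection = server.establishConnection()
    connection.connection.runCommand(["stateShutdown"])
    connection.terminate(force=True)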


@@ -1,295 +0,0 @@
#
# BitBake XMLRPC Server
#
# Copyright (C) 2006 - 2007 Michael 'Mickey' Lauer
# Copyright (C) 2006 - 2008 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
This module implements an xmlrpc server for BitBake.
Use this by deriving a class from BitBakeXMLRPCServer and then adding
methods which you want to "export" via XMLRPC. If the methods have the
prefix xmlrpc_, then registering those function will happen automatically,
if not, you need to call register_function.
Use register_idle_function() to add a function which the xmlrpc server
calls from within serve_forever when no requests are pending. Make sure
that those functions are non-blocking or else you will introduce latency
in the server's main loop.
"""
import bb
import xmlrpclib, sys
from bb import daemonize
from bb.ui import uievent
DEBUG = False
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import inspect, select
if sys.hexversion < 0x020600F0:
print("Sorry, python 2.6 or later is required for bitbake's XMLRPC mode")
sys.exit(1)
##
# The xmlrpclib.Transport class has undergone various changes in Python 2.7
# which break BitBake's XMLRPC implementation.
# To work around this we subclass Transport and have a copy/paste of method
# implementations from Python 2.6.6's xmlrpclib.
#
# Upstream Python bug is #8194 (http://bugs.python.org/issue8194)
# This bug is relevant for Python 2.7.0 and 2.7.1 but was fixed in
# Python 2.7.2
##
class BBTransport(xmlrpclib.Transport):
def request(self, host, handler, request_body, verbose=0):
h = self.make_connection(host)
if verbose:
h.set_debuglevel(1)
self.send_request(h, handler, request_body)
self.send_host(h, host)
self.send_user_agent(h)
self.send_content(h, request_body)
errcode, errmsg, headers = h.getreply()
if errcode != 200:
raise xmlrpclib.ProtocolError(
host + handler,
errcode, errmsg,
headers
)
self.verbose = verbose
try:
sock = h._conn.sock
except AttributeError:
sock = None
return self._parse_response(h.getfile(), sock)
def make_connection(self, host):
import httplib
host, extra_headers, x509 = self.get_host_info(host)
return httplib.HTTP(host)
def _parse_response(self, file, sock):
p, u = self.getparser()
while 1:
if sock:
response = sock.recv(1024)
else:
response = file.read(1024)
if not response:
break
if self.verbose:
print "body:", repr(response)
p.feed(response)
file.close()
p.close()
return u.close()
def _create_server(host, port):
# Python 2.7.0 and 2.7.1 have a buggy Transport implementation
# For those versions of Python, and only those versions, use our
# own copy/paste BBTransport class.
if (2, 7, 0) <= sys.version_info < (2, 7, 2):
t = BBTransport()
s = xmlrpclib.Server("http://%s:%d/" % (host, port), transport=t, allow_none=True)
else:
s = xmlrpclib.Server("http://%s:%d/" % (host, port), allow_none=True)
return s
class BitBakeServerCommands():
def __init__(self, server):
self.server = server
def registerEventHandler(self, host, port):
"""
Register a remote UI Event Handler
"""
s = _create_server(host, port)
return bb.event.register_UIHhandler(s)
def unregisterEventHandler(self, handlerNum):
"""
Unregister a remote UI Event Handler
"""
return bb.event.unregister_UIHhandler(handlerNum)
def runCommand(self, command):
"""
Run a cooker command on the server
"""
return self.cooker.command.runCommand(command)
def terminateServer(self):
"""
Trigger the server to quit
"""
self.server.quit = True
print("Server (cooker) exiting")
return
def ping(self):
"""
Dummy method which can be used to check the server is still alive
"""
return True
class BitBakeXMLRPCServer(SimpleXMLRPCServer):
# remove this when you're done with debugging
# allow_reuse_address = True
def __init__(self, interface):
"""
Constructor
"""
SimpleXMLRPCServer.__init__(self, interface,
requestHandler=SimpleXMLRPCRequestHandler,
logRequests=False, allow_none=True)
self._idlefuns = {}
self.host, self.port = self.socket.getsockname()
#self.register_introspection_functions()
self.commands = BitBakeServerCommands(self)
self.autoregister_all_functions(self.commands, "")
def addcooker(self, cooker):
self.cooker = cooker
self.commands.cooker = cooker
def autoregister_all_functions(self, context, prefix):
"""
Convenience method for registering all functions in the scope
of this class that start with a common prefix
"""
methodlist = inspect.getmembers(context, inspect.ismethod)
for name, method in methodlist:
if name.startswith(prefix):
self.register_function(method, name[len(prefix):])
def register_idle_function(self, function, data):
"""Register a function to be called while the server is idle"""
assert hasattr(function, '__call__')
self._idlefuns[function] = data
def serve_forever(self):
bb.cooker.server_main(self.cooker, self._serve_forever)
def _serve_forever(self):
"""
Serve Requests. Overloaded to honor a quit command
"""
self.quit = False
self.timeout = 0 # Run Idle calls for our first callback
while not self.quit:
#print "Idle queue length %s" % len(self._idlefuns)
self.handle_request()
#print "Idle timeout, running idle functions"
nextsleep = None
for function, data in self._idlefuns.items():
try:
retval = function(self, data, False)
if retval is False:
del self._idlefuns[function]
elif retval is True:
nextsleep = 0
elif nextsleep == 0:
continue
elif nextsleep is None:
nextsleep = retval
elif retval < nextsleep:
nextsleep = retval
except SystemExit:
raise
except:
import traceback
traceback.print_exc()
pass
if nextsleep is None and len(self._idlefuns) > 0:
nextsleep = 0
self.timeout = nextsleep
# Tell idle functions we're exiting
for function, data in self._idlefuns.items():
try:
retval = function(self, data, True)
except:
pass
self.server_close()
return
class BitbakeServerInfo():
def __init__(self, host, port):
self.host = host
self.port = port
class BitBakeServerConnection():
def __init__(self, serverinfo, clientinfo=("localhost", 0)):
self.connection = _create_server(serverinfo.host, serverinfo.port)
self.events = uievent.BBUIEventQueue(self.connection, clientinfo)
for event in bb.event.ui_queue:
self.events.queue_event(event)
def terminate(self):
# Don't wait for server indefinitely
import socket
socket.setdefaulttimeout(2)
try:
self.events.system_quit()
except:
pass
try:
self.connection.terminateServer()
except:
pass
class BitBakeServer(object):
def initServer(self, interface = ("localhost", 0)):
self.server = BitBakeXMLRPCServer(interface)
def addcooker(self, cooker):
self.cooker = cooker
self.server.addcooker(cooker)
def getServerIdleCB(self):
return self.server.register_idle_function
def saveConnectionDetails(self):
self.serverinfo = BitbakeServerInfo(self.server.host, self.server.port)
def detach(self, cooker_logfile):
daemonize.createDaemon(self.server.serve_forever, cooker_logfile)
del self.cooker
del self.server
def establishConnection(self):
self.connection = BitBakeServerConnection(self.serverinfo)
return self.connection
def launchUI(self, uifunc, *args):
return uifunc(*args)
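# --- Added sketch (not part of the original file): the subclassing pattern
# --- the module docstring describes. The class and the xmlrpc_hello method
# --- are hypothetical.
class _ExampleXMLRPCServer(BitBakeXMLRPCServer):
    def __init__(self, interface):
        BitBakeXMLRPCServer.__init__(self, interface)
        # register every method carrying the prefix, minus the prefix itself
        self.autoregister_all_functions(self, "xmlrpc_")
    def xmlrpc_hello(self):
        return "hello"    # exported to clients as 'hello'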


@@ -52,14 +52,12 @@ PROBLEMS:
# Import and setup global variables
##########################################################################
from __future__ import print_function
from functools import reduce
try:
set
except NameError:
from sets import Set as set
import sys, os, readline, socket, httplib, urllib, commands, popen2, shlex, Queue, fnmatch
from bb import data, parse, build, cache, taskdata, runqueue, providers as Providers
import sys, os, readline, socket, httplib, urllib, commands, popen2, copy, shlex, Queue, fnmatch
from bb import data, parse, build, fatal, cache, taskdata, runqueue, providers as Providers
__version__ = "0.5.3.1"
__credits__ = """BitBake Shell Version %s (C) 2005 Michael 'Mickey' Lauer <mickey@Vanille.de>
@@ -100,7 +98,7 @@ class BitBakeShellCommands:
def _checkParsed( self ):
if not parsed:
print("SHELL: This command needs to parse bbfiles...")
print "SHELL: This command needs to parse bbfiles..."
self.parse( None )
def _findProvider( self, item ):
@@ -121,37 +119,40 @@ class BitBakeShellCommands:
"""Register a new name for a command"""
new, old = params
if not old in cmds:
print("ERROR: Command '%s' not known" % old)
print "ERROR: Command '%s' not known" % old
else:
cmds[new] = cmds[old]
print("OK")
print "OK"
alias.usage = "<alias> <command>"
def buffer( self, params ):
"""Dump specified output buffer"""
index = params[0]
print(self._shell.myout.buffer( int( index ) ))
print self._shell.myout.buffer( int( index ) )
buffer.usage = "<index>"
def buffers( self, params ):
"""Show the available output buffers"""
commands = self._shell.myout.bufferedCommands()
if not commands:
print("SHELL: No buffered commands available yet. Start doing something.")
print "SHELL: No buffered commands available yet. Start doing something."
else:
print("="*35, "Available Output Buffers", "="*27)
print "="*35, "Available Output Buffers", "="*27
for index, cmd in enumerate( commands ):
print("| %s %s" % ( str( index ).ljust( 3 ), cmd ))
print("="*88)
print "| %s %s" % ( str( index ).ljust( 3 ), cmd )
print "="*88
def build( self, params, cmd = "build" ):
"""Build a providee"""
global last_exception
globexpr = params[0]
self._checkParsed()
names = globfilter( cooker.status.pkg_pn, globexpr )
names = globfilter( cooker.status.pkg_pn.keys(), globexpr )
if len( names ) == 0: names = [ globexpr ]
print("SHELL: Building %s" % ' '.join( names ))
print "SHELL: Building %s" % ' '.join( names )
oldcmd = cooker.configuration.cmd
cooker.configuration.cmd = cmd
td = taskdata.TaskData(cooker.configuration.abort)
localdata = data.createCopy(cooker.configuration.data)
@@ -167,25 +168,28 @@ class BitBakeShellCommands:
if len(providers) == 0:
raise Providers.NoProvider
tasks.append([name, "do_%s" % cmd])
tasks.append([name, "do_%s" % cooker.configuration.cmd])
td.add_unresolved(localdata, cooker.status)
rq = runqueue.RunQueue(cooker, localdata, cooker.status, td, tasks)
rq.prepare_runqueue()
rq.execute_runqueue()
except Providers.NoProvider:
print("ERROR: No Provider")
print "ERROR: No Provider"
last_exception = Providers.NoProvider
except runqueue.TaskFailure as fnids:
except runqueue.TaskFailure, fnids:
for fnid in fnids:
print "ERROR: '%s' failed" % td.fn_index[fnid]
last_exception = runqueue.TaskFailure
except build.FuncFailed as e:
print("ERROR: Couldn't build '%s'" % names)
except build.EventException, e:
print "ERROR: Couldn't build '%s'" % names
last_exception = e
cooker.configuration.cmd = oldcmd
build.usage = "<providee>"
@@ -204,11 +208,6 @@ class BitBakeShellCommands:
self.build( params, "configure" )
configure.usage = "<providee>"
def install( self, params ):
"""Execute 'install' on a providee"""
self.build( params, "install" )
install.usage = "<providee>"
def edit( self, params ):
"""Call $EDITOR on a providee"""
name = params[0]
@@ -216,7 +215,7 @@ class BitBakeShellCommands:
if bbfile is not None:
os.system( "%s %s" % ( os.environ.get( "EDITOR", "vi" ), bbfile ) )
else:
print("ERROR: Nothing provides '%s'" % name)
print "ERROR: Nothing provides '%s'" % name
edit.usage = "<providee>"
def environment( self, params ):
@@ -239,16 +238,20 @@ class BitBakeShellCommands:
global last_exception
name = params[0]
bf = completeFilePath( name )
print("SHELL: Calling '%s' on '%s'" % ( cmd, bf ))
print "SHELL: Calling '%s' on '%s'" % ( cmd, bf )
oldcmd = cooker.configuration.cmd
cooker.configuration.cmd = cmd
try:
cooker.buildFile(bf, cmd)
cooker.buildFile(bf)
except parse.ParseError:
print("ERROR: Unable to open or parse '%s'" % bf)
except build.FuncFailed as e:
print("ERROR: Couldn't build '%s'" % name)
print "ERROR: Unable to open or parse '%s'" % bf
except build.EventException, e:
print "ERROR: Couldn't build '%s'" % name
last_exception = e
cooker.configuration.cmd = oldcmd
fileBuild.usage = "<bbfile>"
def fileClean( self, params ):
@@ -270,60 +273,64 @@ class BitBakeShellCommands:
def fileReparse( self, params ):
"""(re)Parse a bb file"""
bbfile = params[0]
print("SHELL: Parsing '%s'" % bbfile)
print "SHELL: Parsing '%s'" % bbfile
parse.update_mtime( bbfile )
cooker.parser.reparse(bbfile)
cooker.bb_cache.cacheValidUpdate(bbfile)
fromCache = cooker.bb_cache.loadData(bbfile, cooker.configuration.data)
cooker.bb_cache.sync()
if False: #fromCache:
print("SHELL: File has not been updated, not reparsing")
print "SHELL: File has not been updated, not reparsing"
else:
print("SHELL: Parsed")
print "SHELL: Parsed"
fileReparse.usage = "<bbfile>"
def abort( self, params ):
"""Toggle abort task execution flag (see bitbake -k)"""
cooker.configuration.abort = not cooker.configuration.abort
print("SHELL: Abort Flag is now '%s'" % repr( cooker.configuration.abort ))
print "SHELL: Abort Flag is now '%s'" % repr( cooker.configuration.abort )
def force( self, params ):
"""Toggle force task execution flag (see bitbake -f)"""
cooker.configuration.force = not cooker.configuration.force
print("SHELL: Force Flag is now '%s'" % repr( cooker.configuration.force ))
print "SHELL: Force Flag is now '%s'" % repr( cooker.configuration.force )
def help( self, params ):
"""Show a comprehensive list of commands and their purpose"""
print("="*30, "Available Commands", "="*30)
for cmd in sorted(cmds):
function, numparams, usage, helptext = cmds[cmd]
print("| %s | %s" % (usage.ljust(30), helptext))
print("="*78)
print "="*30, "Available Commands", "="*30
allcmds = cmds.keys()
allcmds.sort()
for cmd in allcmds:
function,numparams,usage,helptext = cmds[cmd]
print "| %s | %s" % (usage.ljust(30), helptext)
print "="*78
def lastError( self, params ):
"""Show the reason or log that was produced by the last BitBake event exception"""
if last_exception is None:
print("SHELL: No Errors yet (Phew)...")
print "SHELL: No Errors yet (Phew)..."
else:
reason, event = last_exception.args
print("SHELL: Reason for the last error: '%s'" % reason)
print "SHELL: Reason for the last error: '%s'" % reason
if ':' in reason:
msg, filename = reason.split( ':' )
filename = filename.strip()
print("SHELL: Dumping log file for last error:")
print "SHELL: Dumping log file for last error:"
try:
print(open( filename ).read())
print open( filename ).read()
except IOError:
print("ERROR: Couldn't open '%s'" % filename)
print "ERROR: Couldn't open '%s'" % filename
def match( self, params ):
"""Dump all files or providers matching a glob expression"""
what, globexpr = params
if what == "files":
self._checkParsed()
for key in globfilter( cooker.status.pkg_fn, globexpr ): print(key)
for key in globfilter( cooker.status.pkg_fn.keys(), globexpr ): print key
elif what == "providers":
self._checkParsed()
for key in globfilter( cooker.status.pkg_pn, globexpr ): print(key)
for key in globfilter( cooker.status.pkg_pn.keys(), globexpr ): print key
else:
print("Usage: match %s" % self.print_.usage)
print "Usage: match %s" % self.print_.usage
match.usage = "<files|providers> <glob>"
def new( self, params ):
@@ -333,15 +340,15 @@ class BitBakeShellCommands:
fulldirname = "%s/%s" % ( packages, dirname )
if not os.path.exists( fulldirname ):
print("SHELL: Creating '%s'" % fulldirname)
print "SHELL: Creating '%s'" % fulldirname
os.mkdir( fulldirname )
if os.path.exists( fulldirname ) and os.path.isdir( fulldirname ):
if os.path.exists( "%s/%s" % ( fulldirname, filename ) ):
print("SHELL: ERROR: %s/%s already exists" % ( fulldirname, filename ))
print "SHELL: ERROR: %s/%s already exists" % ( fulldirname, filename )
return False
print("SHELL: Creating '%s/%s'" % ( fulldirname, filename ))
print "SHELL: Creating '%s/%s'" % ( fulldirname, filename )
newpackage = open( "%s/%s" % ( fulldirname, filename ), "w" )
print("""DESCRIPTION = ""
print >>newpackage,"""DESCRIPTION = ""
SECTION = ""
AUTHOR = ""
HOMEPAGE = ""
@@ -368,7 +375,7 @@ SRC_URI = ""
#do_install() {
#
#}
""", file=newpackage)
"""
newpackage.close()
os.system( "%s %s/%s" % ( os.environ.get( "EDITOR" ), fulldirname, filename ) )
new.usage = "<directory> <filename>"
@@ -388,14 +395,14 @@ SRC_URI = ""
def pasteLog( self, params ):
"""Send the last event exception error log (if there is one) to http://rafb.net/paste"""
if last_exception is None:
print("SHELL: No Errors yet (Phew)...")
print "SHELL: No Errors yet (Phew)..."
else:
reason, event = last_exception.args
print("SHELL: Reason for the last error: '%s'" % reason)
print "SHELL: Reason for the last error: '%s'" % reason
if ':' in reason:
msg, filename = reason.split( ':' )
filename = filename.strip()
print("SHELL: Pasting log file to pastebin...")
print "SHELL: Pasting log file to pastebin..."
file = open( filename ).read()
sendToPastebin( "contents of " + filename, file )
@@ -407,7 +414,7 @@ SRC_URI = ""
def parse( self, params ):
"""(Re-)parse .bb files and calculate the dependency graph"""
cooker.status = cache.CacheData(cooker.caches_array)
cooker.status = cache.CacheData()
ignore = data.getVar("ASSUME_PROVIDED", cooker.configuration.data, 1) or ""
cooker.status.ignored_dependencies = set( ignore.split() )
cooker.handleCollections( data.getVar("BBFILE_COLLECTIONS", cooker.configuration.data, 1) )
@@ -417,23 +424,23 @@ SRC_URI = ""
cooker.buildDepgraph()
global parsed
parsed = True
print()
print
def reparse( self, params ):
"""(re)Parse a providee's bb file"""
bbfile = self._findProvider( params[0] )
if bbfile is not None:
print("SHELL: Found bbfile '%s' for '%s'" % ( bbfile, params[0] ))
print "SHELL: Found bbfile '%s' for '%s'" % ( bbfile, params[0] )
self.fileReparse( [ bbfile ] )
else:
print("ERROR: Nothing provides '%s'" % params[0])
print "ERROR: Nothing provides '%s'" % params[0]
reparse.usage = "<providee>"
def getvar( self, params ):
"""Dump the contents of an outer BitBake environment variable"""
var = params[0]
value = data.getVar( var, cooker.configuration.data, 1 )
print(value)
print value
getvar.usage = "<variable>"
def peek( self, params ):
@@ -441,11 +448,11 @@ SRC_URI = ""
name, var = params
bbfile = self._findProvider( name )
if bbfile is not None:
the_data = cache.Cache.loadDataFull(bbfile, cooker.configuration.data)
the_data = cooker.bb_cache.loadDataFull(bbfile, cooker.configuration.data)
value = the_data.getVar( var, 1 )
print(value)
print value
else:
print("ERROR: Nothing provides '%s'" % name)
print "ERROR: Nothing provides '%s'" % name
peek.usage = "<providee> <variable>"
def poke( self, params ):
@@ -453,7 +460,7 @@ SRC_URI = ""
name, var, value = params
bbfile = self._findProvider( name )
if bbfile is not None:
print("ERROR: Sorry, this functionality is currently broken")
print "ERROR: Sorry, this functionality is currently broken"
#d = cooker.pkgdata[bbfile]
#data.setVar( var, value, d )
@@ -461,7 +468,7 @@ SRC_URI = ""
#cooker.pkgdata.setDirty(bbfile, d)
#print "OK"
else:
print("ERROR: Nothing provides '%s'" % name)
print "ERROR: Nothing provides '%s'" % name
poke.usage = "<providee> <variable> <value>"
def print_( self, params ):
@@ -469,12 +476,12 @@ SRC_URI = ""
what = params[0]
if what == "files":
self._checkParsed()
for key in cooker.status.pkg_fn: print(key)
for key in cooker.status.pkg_fn.keys(): print key
elif what == "providers":
self._checkParsed()
for key in cooker.status.providers: print(key)
for key in cooker.status.providers.keys(): print key
else:
print("Usage: print %s" % self.print_.usage)
print "Usage: print %s" % self.print_.usage
print_.usage = "<files|providers>"
def python( self, params ):
@@ -486,7 +493,7 @@ SRC_URI = ""
interpreter.interact( "SHELL: Expert Mode - BitBake Python %s\nType 'help' for more information, press CTRL-D to switch back to BBSHELL." % sys.version )
def showdata( self, params ):
"""Execute 'showdata' on a providee"""
"""Show the parsed metadata for a given providee"""
cooker.showEnvironment(None, params)
showdata.usage = "<providee>"
@@ -494,7 +501,7 @@ SRC_URI = ""
"""Set an outer BitBake environment variable"""
var, value = params
data.setVar( var, value, cooker.configuration.data )
print("OK")
print "OK"
setVar.usage = "<variable> <value>"
def rebuild( self, params ):
@@ -506,27 +513,27 @@ SRC_URI = ""
def shell( self, params ):
"""Execute a shell command and dump the output"""
if params != "":
print(commands.getoutput( " ".join( params ) ))
print commands.getoutput( " ".join( params ) )
shell.usage = "<...>"
def stage( self, params ):
"""Execute 'stage' on a providee"""
self.build( params, "populate_staging" )
self.build( params, "stage" )
stage.usage = "<providee>"
def status( self, params ):
"""<just for testing>"""
print("-" * 78)
print("building list = '%s'" % cooker.building_list)
print("build path = '%s'" % cooker.build_path)
print("consider_msgs_cache = '%s'" % cooker.consider_msgs_cache)
print("build stats = '%s'" % cooker.stats)
if last_exception is not None: print("last_exception = '%s'" % repr( last_exception.args ))
print("memory output contents = '%s'" % self._shell.myout._buffer)
print "-" * 78
print "building list = '%s'" % cooker.building_list
print "build path = '%s'" % cooker.build_path
print "consider_msgs_cache = '%s'" % cooker.consider_msgs_cache
print "build stats = '%s'" % cooker.stats
if last_exception is not None: print "last_exception = '%s'" % repr( last_exception.args )
print "memory output contents = '%s'" % self._shell.myout._buffer
def test( self, params ):
"""<just for testing>"""
print("testCommand called with '%s'" % params)
print "testCommand called with '%s'" % params
def unpack( self, params ):
"""Execute 'unpack' on a providee"""
@@ -551,12 +558,12 @@ SRC_URI = ""
try:
providers = cooker.status.providers[item]
except KeyError:
print("SHELL: ERROR: Nothing provides", preferred)
print "SHELL: ERROR: Nothing provides", preferred
else:
for provider in providers:
if provider == pf: provider = " (***) %s" % provider
else: provider = " %s" % provider
print(provider)
print provider
which.usage = "<providee>"
##########################################################################
@@ -567,7 +574,7 @@ def completeFilePath( bbfile ):
"""Get the complete bbfile path"""
if not cooker.status: return bbfile
if not cooker.status.pkg_fn: return bbfile
for key in cooker.status.pkg_fn:
for key in cooker.status.pkg_fn.keys():
if key.endswith( bbfile ):
return key
return bbfile
@@ -581,7 +588,7 @@ def sendToPastebin( desc, content ):
mydata["nick"] = "%s@%s" % ( os.environ.get( "USER", "unknown" ), socket.gethostname() or "unknown" )
mydata["text"] = content
params = urllib.urlencode( mydata )
headers = {"Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain"}
headers = {"Content-type": "application/x-www-form-urlencoded","Accept": "text/plain"}
host = "rafb.net"
conn = httplib.HTTPConnection( "%s:80" % host )
@@ -592,9 +599,9 @@ def sendToPastebin( desc, content ):
if response.status == 302:
location = response.getheader( "location" ) or "unknown"
print("SHELL: Pasted to http://%s%s" % ( host, location ))
print "SHELL: Pasted to http://%s%s" % ( host, location )
else:
print("ERROR: %s %s" % ( response.status, response.reason ))
print "ERROR: %s %s" % ( response.status, response.reason )
def completer( text, state ):
"""Return a possible readline completion"""
@@ -611,7 +618,7 @@ def completer( text, state ):
allmatches = cooker.configuration.data.keys()
elif u == "<bbfile>":
if cooker.status.pkg_fn is None: allmatches = [ "(No Matches Available. Parsed yet?)" ]
else: allmatches = [ x.split("/")[-1] for x in cooker.status.pkg_fn ]
else: allmatches = [ x.split("/")[-1] for x in cooker.status.pkg_fn.keys() ]
elif u == "<providee>":
if cooker.status.pkg_fn is None: allmatches = [ "(No Matches Available. Parsed yet?)" ]
else: allmatches = cooker.status.providers.iterkeys()
@@ -641,7 +648,7 @@ def columnize( alist, width = 80 ):
return reduce(lambda line, word, width=width: '%s%s%s' %
(line,
' \n'[(len(line[line.rfind('\n')+1:])
+ len(word.split('\n', 1)[0]
+ len(word.split('\n',1)[0]
) >= width)],
word),
alist
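# For readability, a roughly equivalent greedy word-wrap written as a
# plain loop (a sketch only, not code from this tree):
def columnize_simple(words, width=80):
    if not words:
        return ""
    lines = [words[0]]
    for word in words[1:]:
        # Start a new line when appending the next word would reach 'width'
        if len(lines[-1]) + 1 + len(word.split('\n', 1)[0]) >= width:
            lines.append(word)
        else:
            lines[-1] = lines[-1] + " " + word
    return "\n".join(lines)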
@@ -716,7 +723,7 @@ class BitBakeShell:
except IOError:
pass # It doesn't exist yet.
print(__credits__)
print __credits__
def cleanup( self ):
"""Write readline history and clean up resources"""
@@ -724,7 +731,7 @@ class BitBakeShell:
try:
readline.write_history_file( self.historyfilename )
except:
print("SHELL: Unable to save command history")
print "SHELL: Unable to save command history"
def registerCommand( self, command, function, numparams = 0, usage = "", helptext = "" ):
"""Register a command"""
@@ -738,11 +745,11 @@ class BitBakeShell:
try:
function, numparams, usage, helptext = cmds[command]
except KeyError:
print("SHELL: ERROR: '%s' command is not a valid command." % command)
print "SHELL: ERROR: '%s' command is not a valid command." % command
self.myout.removeLast()
else:
if (numparams != -1) and (not len( params ) == numparams):
print("Usage: '%s'" % usage)
print "Usage: '%s'" % usage
return
result = function( self.commands, params )
@@ -757,7 +764,7 @@ class BitBakeShell:
if not cmdline:
continue
if "|" in cmdline:
print("ERROR: '|' in startup file is not allowed. Ignoring line")
print "ERROR: '|' in startup file is not allowed. Ignoring line"
continue
self.commandQ.put( cmdline.strip() )
@@ -799,10 +806,10 @@ class BitBakeShell:
sys.stdout.write( pipe.fromchild.read() )
#
except EOFError:
print()
print
return
except KeyboardInterrupt:
print()
print
##########################################################################
# Start function - called from the BitBake command line utility
@@ -817,4 +824,4 @@ def start( aCooker ):
bbshell.cleanup()
if __name__ == "__main__":
print("SHELL: Sorry, this program should only be called by BitBake.")
print "SHELL: Sorry, this program should only be called by BitBake."


@@ -1,348 +0,0 @@
import hashlib
import logging
import os
import re
import bb.data
logger = logging.getLogger('BitBake.SigGen')
try:
import cPickle as pickle
except ImportError:
import pickle
logger.info('Importing cPickle failed. Falling back to a very slow implementation.')
def init(d):
siggens = [obj for obj in globals().itervalues()
if type(obj) is type and issubclass(obj, SignatureGenerator)]
desired = d.getVar("BB_SIGNATURE_HANDLER", True) or "noop"
for sg in siggens:
if desired == sg.name:
return sg(d)
else:
logger.error("Invalid signature generator '%s', using default 'noop'\n"
"Available generators: %s",
', '.join(obj.name for obj in siggens))
return SignatureGenerator(d)
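# How a third-party generator plugs in, as a hedged sketch: init() above
# scans this module's globals() for SignatureGenerator subclasses and
# matches their 'name' attribute against BB_SIGNATURE_HANDLER. The class
# below is illustrative only (and would have to be visible in this
# module's globals() to be found):
#
#     class SignatureGeneratorCustom(SignatureGeneratorBasic):
#         name = "custom"
#
#         def rundep_check(self, fn, recipename, task, dep, depname, dataCache):
#             # Example policy: never let -native providers taint task hashes
#             if depname.endswith("-native"):
#                 return False
#             return SignatureGeneratorBasic.rundep_check(
#                 self, fn, recipename, task, dep, depname, dataCache)
#
# and the metadata would select it with: BB_SIGNATURE_HANDLER = "custom"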
class SignatureGenerator(object):
"""
"""
name = "noop"
def __init__(self, data):
return
def finalise(self, fn, d, variant):
return
def get_taskhash(self, fn, task, deps, dataCache):
return "0"
def set_taskdata(self, hashes, deps):
return
def stampfile(self, stampbase, file_name, taskname, extrainfo):
return ("%s.%s.%s" % (stampbase, taskname, extrainfo)).rstrip('.')
def dump_sigtask(self, fn, task, stampbase, runtime):
return
class SignatureGeneratorBasic(SignatureGenerator):
"""
"""
name = "basic"
def __init__(self, data):
self.basehash = {}
self.taskhash = {}
self.taskdeps = {}
self.runtaskdeps = {}
self.gendeps = {}
self.lookupcache = {}
self.pkgnameextract = re.compile("(?P<fn>.*)\..*")
self.basewhitelist = set((data.getVar("BB_HASHBASE_WHITELIST", True) or "").split())
self.taskwhitelist = None
self.init_rundepcheck(data)
def init_rundepcheck(self, data):
self.taskwhitelist = data.getVar("BB_HASHTASK_WHITELIST", True) or None
if self.taskwhitelist:
self.twl = re.compile(self.taskwhitelist)
else:
self.twl = None
def _build_data(self, fn, d):
tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d)
taskdeps = {}
basehash = {}
for task in tasklist:
data = d.getVar(task, False)
lookupcache[task] = data
if data is None:
bb.error("Task %s from %s seems to be empty?!" % (task, fn))
data = ''
newdeps = gendeps[task]
seen = set()
while newdeps:
nextdeps = newdeps
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep in self.basewhitelist:
continue
newdeps |= gendeps[dep]
newdeps -= seen
alldeps = seen - self.basewhitelist
for dep in sorted(alldeps):
data = data + dep
if dep in lookupcache:
var = lookupcache[dep]
else:
var = d.getVar(dep, False)
lookupcache[dep] = var
if var:
data = data + str(var)
self.basehash[fn + "." + task] = hashlib.md5(data).hexdigest()
taskdeps[task] = sorted(alldeps)
self.taskdeps[fn] = taskdeps
self.gendeps[fn] = gendeps
self.lookupcache[fn] = lookupcache
return taskdeps
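# The while-loop in _build_data() above computes a transitive closure over
# the variable-dependency graph. A self-contained toy illustration (the
# data and names below are made up):
def _closure_example():
    gendeps = {"do_compile": set(["CC"]), "CC": set(["HOST_PREFIX"]), "HOST_PREFIX": set()}
    seen, newdeps = set(), set(gendeps["do_compile"])
    while newdeps:
        seen |= newdeps
        nxt = set()
        for dep in newdeps:
            nxt |= gendeps.get(dep, set())
        newdeps = nxt - seen
    return seen  # -> set(['CC', 'HOST_PREFIX'])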
def finalise(self, fn, d, variant):
if variant:
fn = "virtual:" + variant + ":" + fn
taskdeps = self._build_data(fn, d)
#Slow but can be useful for debugging mismatched basehashes
#for task in self.taskdeps[fn]:
# self.dump_sigtask(fn, task, d.getVar("STAMP", True), False)
for task in taskdeps:
d.setVar("BB_BASEHASH_task-%s" % task, self.basehash[fn + "." + task])
def rundep_check(self, fn, recipename, task, dep, depname, dataCache):
# Return True if we should keep the dependency, False to drop it
# We only manipulate the dependencies for packages not in the whitelist
if self.twl and not self.twl.search(recipename):
# then process the actual dependencies
if self.twl.search(depname):
return False
return True
def get_taskhash(self, fn, task, deps, dataCache):
k = fn + "." + task
data = dataCache.basetaskhash[k]
self.runtaskdeps[k] = []
recipename = dataCache.pkg_fn[fn]
for dep in sorted(deps, key=clean_basepath):
depname = dataCache.pkg_fn[self.pkgnameextract.search(dep).group('fn')]
if not self.rundep_check(fn, recipename, task, dep, depname, dataCache):
continue
if dep not in self.taskhash:
bb.fatal("%s is not in taskhash, caller isn't calling in dependency order?", dep)
data = data + self.taskhash[dep]
self.runtaskdeps[k].append(dep)
h = hashlib.md5(data).hexdigest()
self.taskhash[k] = h
#d.setVar("BB_TASKHASH_task-%s" % task, taskhash[task])
return h
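# get_taskhash() above chains hashes: the task's base hash is extended
# with each dependency's task hash in a stable order before re-hashing.
# A minimal sketch of that chaining (the sort key is simplified here; the
# real code orders deps by clean_basepath):
def _chain_hash_example(basehash, dep_hashes):
    import hashlib
    data = basehash
    for dep in sorted(dep_hashes):
        data = data + dep_hashes[dep]
    return hashlib.md5(data).hexdigest()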
def set_taskdata(self, hashes, deps):
self.runtaskdeps = deps
self.taskhash = hashes
def dump_sigtask(self, fn, task, stampbase, runtime):
k = fn + "." + task
if runtime == "customfile":
sigfile = stampbase
elif runtime and k in self.taskhash:
sigfile = stampbase + "." + task + ".sigdata" + "." + self.taskhash[k]
else:
sigfile = stampbase + "." + task + ".sigbasedata" + "." + self.basehash[k]
bb.utils.mkdirhier(os.path.dirname(sigfile))
data = {}
data['basewhitelist'] = self.basewhitelist
data['taskwhitelist'] = self.taskwhitelist
data['taskdeps'] = self.taskdeps[fn][task]
data['basehash'] = self.basehash[k]
data['gendeps'] = {}
data['varvals'] = {}
data['varvals'][task] = self.lookupcache[fn][task]
for dep in self.taskdeps[fn][task]:
if dep in self.basewhitelist:
continue
data['gendeps'][dep] = self.gendeps[fn][dep]
data['varvals'][dep] = self.lookupcache[fn][dep]
if runtime and k in self.taskhash:
data['runtaskdeps'] = self.runtaskdeps[k]
data['runtaskhashes'] = {}
for dep in data['runtaskdeps']:
data['runtaskhashes'][dep] = self.taskhash[dep]
p = pickle.Pickler(file(sigfile, "wb"), -1)
p.dump(data)
def dump_sigs(self, dataCache):
for fn in self.taskdeps:
for task in self.taskdeps[fn]:
k = fn + "." + task
if k not in self.taskhash:
continue
if dataCache.basetaskhash[k] != self.basehash[k]:
bb.error("Bitbake's cached basehash does not match the one we just generated (%s)!" % k)
bb.error("The mismatched hashes were %s and %s" % (dataCache.basetaskhash[k], self.basehash[k]))
self.dump_sigtask(fn, task, dataCache.stamp[fn], True)
class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
name = "basichash"
def stampfile(self, stampbase, fn, taskname, extrainfo):
if taskname != "do_setscene" and taskname.endswith("_setscene"):
k = fn + "." + taskname[:-9]
else:
k = fn + "." + taskname
h = self.taskhash[k]
return ("%s.%s.%s.%s" % (stampbase, taskname, h, extrainfo)).rstrip('.')
def dump_this_task(outfile, d):
import bb.parse
fn = d.getVar("BB_FILENAME", True)
task = "do_" + d.getVar("BB_CURRENTTASK", True)
bb.parse.siggen.dump_sigtask(fn, task, outfile, "customfile")
def clean_basepath(a):
if a.startswith("virtual:"):
b = a.rsplit(":", 1)[0] + a.rsplit("/", 1)[1]
else:
b = a.rsplit("/", 1)[1]
return b
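# Illustrative behavior of clean_basepath() (paths made up). Note that the
# virtual prefix and the basename are concatenated with no separator,
# which is exactly what the code above does:
#   clean_basepath("/meta/recipes/foo_1.0.bb")          -> "foo_1.0.bb"
#   clean_basepath("virtual:native:/meta/foo_1.0.bb")   -> "virtual:nativefoo_1.0.bb"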
def clean_basepaths(a):
b = {}
for x in a:
b[clean_basepath(x)] = a[x]
return b
def compare_sigfiles(a, b):
p1 = pickle.Unpickler(file(a, "rb"))
a_data = p1.load()
p2 = pickle.Unpickler(file(b, "rb"))
b_data = p2.load()
def dict_diff(a, b, whitelist=set()):
sa = set(a.keys())
sb = set(b.keys())
common = sa & sb
changed = set()
for i in common:
if a[i] != b[i] and i not in whitelist:
changed.add(i)
added = sa - sb
removed = sb - sa
return changed, added, removed
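# A toy example of dict_diff() (values illustrative):
#   changed, added, removed = dict_diff({"A": 1, "B": 2}, {"A": 9, "C": 3})
#   changed == set(["A"])   # key present in both, values differ
#   added   == set(["B"])   # keys only in the first dict
#   removed == set(["C"])   # keys only in the second dict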
if 'basewhitelist' in a_data and a_data['basewhitelist'] != b_data['basewhitelist']:
print "basewhitelist changed from %s to %s" % (a_data['basewhitelist'], b_data['basewhitelist'])
if a_data['basewhitelist'] and b_data['basewhitelist']:
print "changed items: %s" % a_data['basewhitelist'].symmetric_difference(b_data['basewhitelist'])
if 'taskwhitelist' in a_data and a_data['taskwhitelist'] != b_data['taskwhitelist']:
print "taskwhitelist changed from %s to %s" % (a_data['taskwhitelist'], b_data['taskwhitelist'])
if a_data['taskwhitelist'] and b_data['taskwhitelist']:
print "changed items: %s" % a_data['taskwhitelist'].symmetric_difference(b_data['taskwhitelist'])
if a_data['taskdeps'] != b_data['taskdeps']:
print "Task dependencies changed from:\n%s\nto:\n%s" % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps']))
if a_data['basehash'] != b_data['basehash']:
print "basehash changed from %s to %s" % (a_data['basehash'], b_data['basehash'])
changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basewhitelist'] & b_data['basewhitelist'])
if changed:
for dep in changed:
print "List of dependencies for variable %s changed from %s to %s" % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep])
if a_data['gendeps'][dep] and b_data['gendeps'][dep]:
print "changed items: %s" % a_data['gendeps'][dep].symmetric_difference(b_data['gendeps'][dep])
if added:
for dep in added:
print "Dependency on variable %s was added" % (dep)
if removed:
for dep in removed:
print "Dependency on Variable %s was removed" % (dep)
changed, added, removed = dict_diff(a_data['varvals'], b_data['varvals'])
if changed:
for dep in changed:
print "Variable %s value changed from %s to %s" % (dep, a_data['varvals'][dep], b_data['varvals'][dep])
if 'runtaskhashes' in a_data and 'runtaskhashes' in b_data:
a = clean_basepaths(a_data['runtaskhashes'])
b = clean_basepaths(b_data['runtaskhashes'])
changed, added, removed = dict_diff(a, b)
if added:
for dep in added:
bdep_found = False
if removed:
for bdep in removed:
if a[dep] == b[bdep]:
#print "Dependency on task %s was replaced by %s with same hash" % (dep, bdep)
bdep_found = True
if not bdep_found:
print "Dependency on task %s was added with hash %s" % (dep, a[dep])
if removed:
for dep in removed:
adep_found = False
if added:
for adep in added:
if a[adep] == b[dep]:
#print "Dependency on task %s was replaced by %s with same hash" % (adep, dep)
adep_found = True
if not adep_found:
print "Dependency on task %s was removed with hash %s" % (dep, b[dep])
if changed:
for dep in changed:
print "Hash for dependent task %s changed from %s to %s" % (dep, a[dep], b[dep])
def dump_sigfile(a):
p1 = pickle.Unpickler(file(a, "rb"))
a_data = p1.load()
print "basewhitelist: %s" % (a_data['basewhitelist'])
print "taskwhitelist: %s" % (a_data['taskwhitelist'])
print "Task dependencies: %s" % (sorted(a_data['taskdeps']))
print "basehash: %s" % (a_data['basehash'])
for dep in a_data['gendeps']:
print "List of dependencies for variable %s is %s" % (dep, a_data['gendeps'][dep])
for dep in a_data['varvals']:
print "Variable %s value is %s" % (dep, a_data['varvals'][dep])
if 'runtaskdeps' in a_data:
print "Tasks this task depends on: %s" % (a_data['runtaskdeps'])
if 'runtaskhashes' in a_data:
for dep in a_data['runtaskhashes']:
print "Hash for dependent task %s is %s" % (dep, a_data['runtaskhashes'][dep])


@@ -23,25 +23,14 @@ Task data collection and handling
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import logging
import re
import bb
logger = logging.getLogger("BitBake.TaskData")
def re_match_strings(target, strings):
"""
Return True if the string 'target' matches any entry in 'strings', where each entry may be either an exact name or a regular expression
"""
return any(name == target or re.match(name, target)
for name in strings)
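# Illustrative calls (patterns made up):
#   re_match_strings("virtual/kernel", ["virtual/.*"])      -> True  (regex match)
#   re_match_strings("glibc", ["glibc", "virtual/.*"])      -> True  (exact match)
#   re_match_strings("busybox", ["glibc", "virtual/.*"])    -> False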
from bb import data, event, mkdirhier, utils
import bb, os
class TaskData:
"""
BitBake Task Data implementation
"""
def __init__(self, abort = True, tryaltconfigs = False, skiplist = None):
def __init__(self, abort = True):
self.build_names_index = []
self.run_names_index = []
self.fn_index = []
@@ -68,9 +57,6 @@ class TaskData:
self.failed_fnids = []
self.abort = abort
self.tryaltconfigs = tryaltconfigs
self.skiplist = skiplist
def getbuild_id(self, name):
"""
@@ -85,7 +71,7 @@ class TaskData:
def getrun_id(self, name):
"""
Return an ID number for the run target name.
Return an ID number for the run target name.
If it doesn't exist, create one.
"""
if not name in self.run_names_index:
@@ -96,7 +82,7 @@ class TaskData:
def getfn_id(self, name):
"""
Return an ID number for the filename.
Return an ID number for the filename.
If it doesn't exist, create one.
"""
if not name in self.fn_index:
@@ -153,7 +139,7 @@ class TaskData:
fnid = self.getfn_id(fn)
if fnid in self.failed_fnids:
bb.msg.fatal("TaskData", "Trying to re-add a failed file? Something is broken...")
bb.msg.fatal(bb.msg.domain.TaskData, "Trying to re-add a failed file? Something is broken...")
# Check if we've already seen this fn
if fnid in self.tasks_fnid:
@@ -174,8 +160,6 @@ class TaskData:
ids = []
for dep in task_deps['depends'][task].split():
if dep:
if ":" not in dep:
bb.msg.fatal("TaskData", "Error for %s, dependency %s does not contain ':' character\n. Task 'depends' should be specified in the form 'packagename:task'" % (fn, dep))
ids.append(((self.getbuild_id(dep.split(":")[0])), dep.split(":")[1]))
self.tasks_idepends[taskid].extend(ids)
@@ -183,7 +167,7 @@ class TaskData:
if not fnid in self.depids:
dependids = {}
for depend in dataCache.deps[fn]:
logger.debug(2, "Added dependency %s for %s", depend, fn)
bb.msg.debug(2, bb.msg.domain.TaskData, "Added dependency %s for %s" % (depend, fn))
dependids[self.getbuild_id(depend)] = None
self.depids[fnid] = dependids.keys()
@@ -193,12 +177,12 @@ class TaskData:
rdepends = dataCache.rundeps[fn]
rrecs = dataCache.runrecs[fn]
for package in rdepends:
for rdepend in rdepends[package]:
logger.debug(2, "Added runtime dependency %s for %s", rdepend, fn)
for rdepend in bb.utils.explode_deps(rdepends[package]):
bb.msg.debug(2, bb.msg.domain.TaskData, "Added runtime dependency %s for %s" % (rdepend, fn))
rdependids[self.getrun_id(rdepend)] = None
for package in rrecs:
for rdepend in rrecs[package]:
logger.debug(2, "Added runtime recommendation %s for %s", rdepend, fn)
for rdepend in bb.utils.explode_deps(rrecs[package]):
bb.msg.debug(2, bb.msg.domain.TaskData, "Added runtime recommendation %s for %s" % (rdepend, fn))
rdependids[self.getrun_id(rdepend)] = None
self.rdepids[fnid] = rdependids.keys()
@@ -272,12 +256,12 @@ class TaskData:
def get_unresolved_build_targets(self, dataCache):
"""
Return a list of build targets whose providers
Return a list of build targets whose providers
are unknown.
"""
unresolved = []
for target in self.build_names_index:
if re_match_strings(target, dataCache.ignored_dependencies):
if target in dataCache.ignored_dependencies:
continue
if self.build_names_index.index(target) in self.failed_deps:
continue
@@ -287,12 +271,12 @@ class TaskData:
def get_unresolved_run_targets(self, dataCache):
"""
Return a list of runtime targets whose providers
Return a list of runtime targets whose providers
are unknown.
"""
unresolved = []
for target in self.run_names_index:
if re_match_strings(target, dataCache.ignored_dependencies):
if target in dataCache.ignored_dependencies:
continue
if self.run_names_index.index(target) in self.failed_rdeps:
continue
@@ -305,7 +289,7 @@ class TaskData:
Return a list of providers of item
"""
targetid = self.getbuild_id(item)
return self.build_targets[targetid]
def get_dependees(self, itemid):
@@ -350,44 +334,31 @@ class TaskData:
dependees.append(self.fn_index[fnid])
return dependees
def get_reasons(self, item, runtime=False):
"""
Get the reason(s) for an item not being provided, if any
"""
reasons = []
if self.skiplist:
for fn in self.skiplist:
skipitem = self.skiplist[fn]
if skipitem.pn == item:
reasons.append("%s was skipped: %s" % (skipitem.pn, skipitem.skipreason))
elif runtime and item in skipitem.rprovides:
reasons.append("%s RPROVIDES %s but was skipped: %s" % (skipitem.pn, item, skipitem.skipreason))
elif not runtime and item in skipitem.provides:
reasons.append("%s PROVIDES %s but was skipped: %s" % (skipitem.pn, item, skipitem.skipreason))
return reasons
def add_provider(self, cfgData, dataCache, item):
try:
self.add_provider_internal(cfgData, dataCache, item)
except bb.providers.NoProvider:
if self.abort:
bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (item, self.get_dependees_str(item)))
raise
self.remove_buildtarget(self.getbuild_id(item))
targetid = self.getbuild_id(item)
self.remove_buildtarget(targetid)
self.mark_external_target(item)
def add_provider_internal(self, cfgData, dataCache, item):
"""
Add the providers of item to the task data
Mark entries that were specifically added externally, as opposed to dependencies
Mark entries that were specifically added externally, as opposed to dependencies
added internally during dependency resolution
"""
if re_match_strings(item, dataCache.ignored_dependencies):
if item in dataCache.ignored_dependencies:
return
if not item in dataCache.providers:
bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees_str(item), reasons=self.get_reasons(item)), cfgData)
bb.msg.note(2, bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (item, self.get_dependees_str(item)))
bb.event.fire(bb.event.NoProvider(item, cfgData))
raise bb.providers.NoProvider(item)
if self.have_build_target(item):
@@ -396,10 +367,15 @@ class TaskData:
all_p = dataCache.providers[item]
eligible, foundUnique = bb.providers.filterProviders(all_p, item, cfgData, dataCache)
eligible = [p for p in eligible if not self.getfn_id(p) in self.failed_fnids]
for p in eligible:
fnid = self.getfn_id(p)
if fnid in self.failed_fnids:
eligible.remove(p)
if not eligible:
bb.event.fire(bb.event.NoProvider(item, dependees=self.get_dependees_str(item), reasons=["No eligible PROVIDERs exist for '%s'" % item]), cfgData)
bb.msg.note(2, bb.msg.domain.Provider, "No buildable provider PROVIDES '%s' but '%s' DEPENDS on or otherwise requires it. Enable debugging and see earlier logs to find unbuildable providers." % (item, self.get_dependees_str(item)))
bb.event.fire(bb.event.NoProvider(item, cfgData))
raise bb.providers.NoProvider(item)
if len(eligible) > 1 and foundUnique == False:
@@ -407,14 +383,16 @@ class TaskData:
providers_list = []
for fn in eligible:
providers_list.append(dataCache.pkg_fn[fn])
bb.event.fire(bb.event.MultipleProviders(item, providers_list), cfgData)
bb.msg.note(1, bb.msg.domain.Provider, "multiple providers are available for %s (%s);" % (item, ", ".join(providers_list)))
bb.msg.note(1, bb.msg.domain.Provider, "consider defining PREFERRED_PROVIDER_%s" % item)
bb.event.fire(bb.event.MultipleProviders(item, providers_list, cfgData))
self.consider_msgs_cache.append(item)
for fn in eligible:
fnid = self.getfn_id(fn)
if fnid in self.failed_fnids:
continue
logger.debug(2, "adding %s to satisfy %s", fn, item)
bb.msg.debug(2, bb.msg.domain.Provider, "adding %s to satisfy %s" % (fn, item))
self.add_build_target(fn, item)
self.add_tasks(fn, dataCache)
@@ -427,7 +405,7 @@ class TaskData:
(takes item names from RDEPENDS/PACKAGES namespace)
"""
if re_match_strings(item, dataCache.ignored_dependencies):
if item in dataCache.ignored_dependencies:
return
if self.have_runtime_target(item):
@@ -436,14 +414,20 @@ class TaskData:
all_p = bb.providers.getRuntimeProviders(dataCache, item)
if not all_p:
bb.event.fire(bb.event.NoProvider(item, runtime=True, dependees=self.get_rdependees_str(item), reasons=self.get_reasons(item, True)), cfgData)
bb.msg.error(bb.msg.domain.Provider, "'%s' RDEPENDS/RRECOMMENDS or otherwise requires the runtime entity '%s' but it wasn't found in any PACKAGE or RPROVIDES variables" % (self.get_rdependees_str(item), item))
bb.event.fire(bb.event.NoProvider(item, cfgData, runtime=True))
raise bb.providers.NoRProvider(item)
eligible, numberPreferred = bb.providers.filterProvidersRunTime(all_p, item, cfgData, dataCache)
eligible = [p for p in eligible if not self.getfn_id(p) in self.failed_fnids]
for p in eligible:
fnid = self.getfn_id(p)
if fnid in self.failed_fnids:
eligible.remove(p)
if not eligible:
bb.event.fire(bb.event.NoProvider(item, runtime=True, dependees=self.get_rdependees_str(item), reasons=["No eligible RPROVIDERs exist for '%s'" % item]), cfgData)
bb.msg.error(bb.msg.domain.Provider, "'%s' RDEPENDS/RRECOMMENDS or otherwise requires the runtime entity '%s' but it wasn't found in any PACKAGE or RPROVIDES variables of any buildable targets.\nEnable debugging and see earlier logs to find unbuildable targets." % (self.get_rdependees_str(item), item))
bb.event.fire(bb.event.NoProvider(item, cfgData, runtime=True))
raise bb.providers.NoRProvider(item)
if len(eligible) > 1 and numberPreferred == 0:
@@ -451,7 +435,9 @@ class TaskData:
providers_list = []
for fn in eligible:
providers_list.append(dataCache.pkg_fn[fn])
bb.event.fire(bb.event.MultipleProviders(item, providers_list, runtime=True), cfgData)
bb.msg.note(2, bb.msg.domain.Provider, "multiple providers are available for runtime %s (%s);" % (item, ", ".join(providers_list)))
bb.msg.note(2, bb.msg.domain.Provider, "consider defining a PREFERRED_PROVIDER entry to match runtime %s" % item)
bb.event.fire(bb.event.MultipleProviders(item,providers_list, cfgData, runtime=True))
self.consider_msgs_cache.append(item)
if numberPreferred > 1:
@@ -459,7 +445,9 @@ class TaskData:
providers_list = []
for fn in eligible:
providers_list.append(dataCache.pkg_fn[fn])
bb.event.fire(bb.event.MultipleProviders(item, providers_list, runtime=True), cfgData)
bb.msg.note(2, bb.msg.domain.Provider, "multiple providers are available for runtime %s (top %s entries preferred) (%s);" % (item, numberPreferred, ", ".join(providers_list)))
bb.msg.note(2, bb.msg.domain.Provider, "consider defining only one PREFERRED_PROVIDER entry to match runtime %s" % item)
bb.event.fire(bb.event.MultipleProviders(item,providers_list, cfgData, runtime=True))
self.consider_msgs_cache.append(item)
# run through the list until we find one that we can build
@@ -467,7 +455,7 @@ class TaskData:
fnid = self.getfn_id(fn)
if fnid in self.failed_fnids:
continue
logger.debug(2, "adding '%s' to satisfy runtime '%s'", fn, item)
bb.msg.debug(2, bb.msg.domain.Provider, "adding '%s' to satisfy runtime '%s'" % (fn, item))
self.add_runtime_target(fn, item)
self.add_tasks(fn, dataCache)
@@ -480,7 +468,7 @@ class TaskData:
"""
if fnid in self.failed_fnids:
return
logger.debug(1, "File '%s' is unbuildable, removing...", self.fn_index[fnid])
bb.msg.debug(1, bb.msg.domain.Provider, "File '%s' is unbuildable, removing..." % self.fn_index[fnid])
self.failed_fnids.append(fnid)
for target in self.build_targets:
if fnid in self.build_targets[target]:
@@ -502,21 +490,20 @@ class TaskData:
missing_list = [self.build_names_index[targetid]]
else:
missing_list = [self.build_names_index[targetid]] + missing_list
logger.verbose("Target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s", self.build_names_index[targetid], missing_list)
bb.msg.note(2, bb.msg.domain.Provider, "Target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s" % (self.build_names_index[targetid], missing_list))
self.failed_deps.append(targetid)
dependees = self.get_dependees(targetid)
for fnid in dependees:
self.fail_fnid(fnid, missing_list)
for taskid in xrange(len(self.tasks_idepends)):
for taskid in range(len(self.tasks_idepends)):
idepends = self.tasks_idepends[taskid]
for (idependid, idependtask) in idepends:
if idependid == targetid:
self.fail_fnid(self.tasks_fnid[taskid], missing_list)
if self.abort and targetid in self.external_targets:
target = self.build_names_index[targetid]
logger.error("Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s", target, missing_list)
raise bb.providers.NoProvider(target)
bb.msg.error(bb.msg.domain.Provider, "Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s" % (self.build_names_index[targetid], missing_list))
raise bb.providers.NoProvider
def remove_runtarget(self, targetid, missing_list = []):
"""
@@ -528,7 +515,7 @@ class TaskData:
else:
missing_list = [self.run_names_index[targetid]] + missing_list
logger.info("Runtime target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s", self.run_names_index[targetid], missing_list)
bb.msg.note(1, bb.msg.domain.Provider, "Runtime target '%s' is unbuildable, removing...\nMissing or unbuildable dependency chain was: %s" % (self.run_names_index[targetid], missing_list))
self.failed_rdeps.append(targetid)
dependees = self.get_rdependees(targetid)
for fnid in dependees:
@@ -538,8 +525,8 @@ class TaskData:
"""
Resolve all unresolved build and runtime targets
"""
logger.info("Resolving any missing task queue dependencies")
while True:
bb.msg.note(1, bb.msg.domain.TaskData, "Resolving any missing task queue dependencies")
while 1:
added = 0
for target in self.get_unresolved_build_targets(dataCache):
try:
@@ -548,6 +535,7 @@ class TaskData:
except bb.providers.NoProvider:
targetid = self.getbuild_id(target)
if self.abort and targetid in self.external_targets:
bb.msg.error(bb.msg.domain.Provider, "Nothing PROVIDES '%s' (but '%s' DEPENDS on or otherwise requires it)" % (target, self.get_dependees_str(target)))
raise
self.remove_buildtarget(targetid)
for target in self.get_unresolved_run_targets(dataCache):
@@ -556,7 +544,7 @@ class TaskData:
added = added + 1
except bb.providers.NoRProvider:
self.remove_runtarget(self.getrun_id(target))
logger.debug(1, "Resolved " + str(added) + " extra dependencies")
bb.msg.debug(1, bb.msg.domain.TaskData, "Resolved " + str(added) + " extra dependecies")
if added == 0:
break
# self.dump_data()
@@ -565,40 +553,42 @@ class TaskData:
"""
Dump some debug information on the internal data structures
"""
logger.debug(3, "build_names:")
logger.debug(3, ", ".join(self.build_names_index))
bb.msg.debug(3, bb.msg.domain.TaskData, "build_names:")
bb.msg.debug(3, bb.msg.domain.TaskData, ", ".join(self.build_names_index))
logger.debug(3, "run_names:")
logger.debug(3, ", ".join(self.run_names_index))
bb.msg.debug(3, bb.msg.domain.TaskData, "run_names:")
bb.msg.debug(3, bb.msg.domain.TaskData, ", ".join(self.run_names_index))
logger.debug(3, "build_targets:")
for buildid in xrange(len(self.build_names_index)):
bb.msg.debug(3, bb.msg.domain.TaskData, "build_targets:")
for buildid in range(len(self.build_names_index)):
target = self.build_names_index[buildid]
targets = "None"
if buildid in self.build_targets:
targets = self.build_targets[buildid]
logger.debug(3, " (%s)%s: %s", buildid, target, targets)
bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s: %s" % (buildid, target, targets))
logger.debug(3, "run_targets:")
for runid in xrange(len(self.run_names_index)):
bb.msg.debug(3, bb.msg.domain.TaskData, "run_targets:")
for runid in range(len(self.run_names_index)):
target = self.run_names_index[runid]
targets = "None"
if runid in self.run_targets:
targets = self.run_targets[runid]
logger.debug(3, " (%s)%s: %s", runid, target, targets)
bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s: %s" % (runid, target, targets))
logger.debug(3, "tasks:")
for task in xrange(len(self.tasks_name)):
logger.debug(3, " (%s)%s - %s: %s",
task,
self.fn_index[self.tasks_fnid[task]],
self.tasks_name[task],
self.tasks_tdepends[task])
bb.msg.debug(3, bb.msg.domain.TaskData, "tasks:")
for task in range(len(self.tasks_name)):
bb.msg.debug(3, bb.msg.domain.TaskData, " (%s)%s - %s: %s" % (
task,
self.fn_index[self.tasks_fnid[task]],
self.tasks_name[task],
self.tasks_tdepends[task]))
logger.debug(3, "dependency ids (per fn):")
bb.msg.debug(3, bb.msg.domain.TaskData, "dependency ids (per fn):")
for fnid in self.depids:
logger.debug(3, " %s %s: %s", fnid, self.fn_index[fnid], self.depids[fnid])
bb.msg.debug(3, bb.msg.domain.TaskData, " %s %s: %s" % (fnid, self.fn_index[fnid], self.depids[fnid]))
logger.debug(3, "runtime dependency ids (per fn):")
bb.msg.debug(3, bb.msg.domain.TaskData, "runtime dependency ids (per fn):")
for fnid in self.rdepids:
logger.debug(3, " %s %s: %s", fnid, self.fn_index[fnid], self.rdepids[fnid])
bb.msg.debug(3, bb.msg.domain.TaskData, " %s %s: %s" % (fnid, self.fn_index[fnid], self.rdepids[fnid]))


@@ -1,17 +0,0 @@
#
# BitBake UI Implementation
#
# Copyright (C) 2006-2007 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.


@@ -1,17 +0,0 @@
#
# Gtk+ UI pieces for BitBake
#
# Copyright (C) 2006-2007 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.


@@ -1,110 +0,0 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hobwidget import hic
from bb.ui.crumbs.runningbuild import RunningBuildTreeView
from bb.ui.crumbs.hobpages import HobPage
#
# BuildDetailsPage
#
class BuildDetailsPage (HobPage):
def __init__(self, builder):
super(BuildDetailsPage, self).__init__(builder, "Building ...")
# create visual elements
self.create_visual_elements()
def create_visual_elements(self):
# create visual elements
self.vbox = gtk.VBox(False, 15)
self.progress_box = gtk.HBox(False, 5)
self.progress_bar = HobProgressBar()
self.progress_box.pack_start(self.progress_bar, expand=True, fill=True)
self.stop_button = gtk.LinkButton("Stop the build process", "Stop")
self.stop_button.connect("clicked", self.stop_button_clicked_cb)
self.progress_box.pack_end(self.stop_button, expand=False, fill=False)
self.build_tv = RunningBuildTreeView(readonly=True)
self.build_tv.set_model(self.builder.handler.build.model)
self.scrolled_view = gtk.ScrolledWindow ()
self.scrolled_view.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
self.scrolled_view.add(self.build_tv)
self.button_box = gtk.HBox(False, 5)
self.back_button = gtk.LinkButton("Go back to Image Configuration screen", "<< Back to image configuration")
self.back_button.connect("clicked", self.back_button_clicked_cb)
self.button_box.pack_start(self.back_button, expand=False, fill=False)
def _remove_all_widget(self):
children = self.vbox.get_children() or []
for child in children:
self.vbox.remove(child)
children = self.box_group_area.get_children() or []
for child in children:
self.box_group_area.remove(child)
children = self.get_children() or []
for child in children:
self.remove(child)
def show_page(self, step):
self._remove_all_widget()
if step == self.builder.PACKAGE_GENERATING or step == self.builder.FAST_IMAGE_GENERATING:
self.title = "Building packages ..."
else:
self.title = "Building image ..."
self.build_details_top = self.add_onto_top_bar(None)
self.pack_start(self.build_details_top, expand=False, fill=False)
self.pack_start(self.group_align, expand=True, fill=True)
self.box_group_area.pack_start(self.vbox, expand=True, fill=True)
self.progress_bar.reset()
self.vbox.pack_start(self.progress_box, expand=False, fill=False)
self.vbox.pack_start(self.scrolled_view, expand=True, fill=True)
self.box_group_area.pack_end(self.button_box, expand=False, fill=False)
self.show_all()
self.back_button.hide()
def update_progress_bar(self, title, fraction, status=True):
self.progress_bar.update(fraction)
self.progress_bar.set_title(title)
self.progress_bar.set_rcstyle(status)
def back_button_clicked_cb(self, button):
self.builder.show_configuration()
def show_back_button(self):
self.back_button.show()
def stop_button_clicked_cb(self, button):
self.builder.stop_build()
def hide_stop_button(self):
self.stop_button.hide()


@@ -1,873 +0,0 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import copy
import os
import subprocess
import shlex
from bb.ui.crumbs.template import TemplateMgr
from bb.ui.crumbs.imageconfigurationpage import ImageConfigurationPage
from bb.ui.crumbs.recipeselectionpage import RecipeSelectionPage
from bb.ui.crumbs.packageselectionpage import PackageSelectionPage
from bb.ui.crumbs.builddetailspage import BuildDetailsPage
from bb.ui.crumbs.imagedetailspage import ImageDetailsPage
from bb.ui.crumbs.hobwidget import hwc
from bb.ui.crumbs.hig import CrumbsDialog, BinbDialog, \
AdvancedSettingDialog, LayerSelectionDialog, \
DeployImageDialog, ImageSelectionDialog
class Configuration:
'''Represents the data structure of configuration.'''
def __init__(self, params):
# Settings
self.curr_mach = ""
self.curr_distro = params["distro"]
self.dldir = params["dldir"]
self.sstatedir = params["sstatedir"]
self.sstatemirror = params["sstatemirror"]
self.pmake = params["pmake"]
self.bbthread = params["bbthread"]
self.curr_package_format = " ".join(params["pclass"].split("package_")).strip()
self.image_rootfs_size = params["image_rootfs_size"]
self.image_extra_size = params["image_extra_size"]
self.image_overhead_factor = params['image_overhead_factor']
self.incompat_license = params["incompat_license"]
self.curr_sdk_machine = params["sdk_machine"]
self.extra_setting = {}
self.toolchain_build = False
self.image_fstypes = params["image_fstypes"].split()
# bblayers.conf
self.layers = params["layer"].split()
# image/recipes/packages
self.selected_image = None
self.selected_recipes = []
self.selected_packages = []
def load(self, template):
self.curr_mach = template.getVar("MACHINE")
self.curr_package_format = " ".join(template.getVar("PACKAGE_CLASSES").split("package_")).strip()
self.curr_distro = template.getVar("DISTRO")
self.dldir = template.getVar("DL_DIR")
self.sstatedir = template.getVar("SSTATE_DIR")
self.sstatemirror = template.getVar("SSTATE_MIRROR")
self.pmake = int(template.getVar("PARALLEL_MAKE").split()[1])
self.bbthread = int(template.getVar("BB_NUMBER_THREAD"))
self.image_rootfs_size = int(template.getVar("IMAGE_ROOTFS_SIZE"))
self.image_extra_size = int(template.getVar("IMAGE_EXTRA_SPACE"))
# image_overhead_factor is read-only.
self.incompat_license = template.getVar("INCOMPATIBLE_LICENSE")
self.curr_sdk_machine = template.getVar("SDKMACHINE")
self.extra_setting = eval(template.getVar("EXTRA_SETTING"))
self.toolchain_build = eval(template.getVar("TOOLCHAIN_BUILD"))
self.image_fstypes = template.getVar("IMAGE_FSTYPES").split()
# bblayers.conf
self.layers = template.getVar("BBLAYERS").split()
# image/recipes/packages
self.selected_image = template.getVar("__SELECTED_IMAGE__")
self.selected_recipes = template.getVar("DEPENDS").split()
self.selected_packages = template.getVar("IMAGE_INSTALL").split()
def save(self, template, filename):
# bblayers.conf
template.setVar("BBLAYERS", " ".join(self.layers))
# local.conf
template.setVar("MACHINE", self.curr_mach)
template.setVar("DISTRO", self.curr_distro)
template.setVar("DL_DIR", self.dldir)
template.setVar("SSTATE_DIR", self.sstatedir)
template.setVar("SSTATE_MIRROR", self.sstatemirror)
template.setVar("PARALLEL_MAKE", "-j %s" % self.pmake)
template.setVar("BB_NUMBER_THREAD", self.bbthread)
template.setVar("PACKAGE_CLASSES", " ".join(["package_" + i for i in self.curr_package_format.split()]))
template.setVar("IMAGE_ROOTFS_SIZE", self.image_rootfs_size)
template.setVar("IMAGE_EXTRA_SPACE", self.image_extra_size)
template.setVar("INCOMPATIBLE_LICENSE", self.incompat_license)
template.setVar("SDKMACHINE", self.curr_sdk_machine)
template.setVar("EXTRA_SETTING", self.extra_setting)
template.setVar("TOOLCHAIN_BUILD", self.toolchain_build)
template.setVar("IMAGE_FSTYPES", " ".join(self.image_fstypes).lstrip(" "))
# image/recipes/packages
self.selected_image = filename
template.setVar("__SELECTED_IMAGE__", self.selected_image)
template.setVar("DEPENDS", self.selected_recipes)
template.setVar("IMAGE_INSTALL", self.selected_packages)
class Parameters:
'''Represents other variables like available machines, etc.'''
def __init__(self, params):
# Variables
self.all_machines = []
self.all_package_formats = []
self.all_distros = []
self.all_sdk_machines = []
self.max_threads = params["max_threads"]
self.all_layers = []
self.core_base = params["core_base"]
self.image_names = []
self.image_addr = params["image_addr"]
self.image_types = params["image_types"].split()
class Builder(gtk.Window):
(MACHINE_SELECTION,
LAYER_CHANGED,
RCPPKGINFO_POPULATING,
RCPPKGINFO_POPULATED,
RECIPE_SELECTION,
PACKAGE_GENERATING,
PACKAGE_GENERATED,
PACKAGE_SELECTION,
FAST_IMAGE_GENERATING,
IMAGE_GENERATING,
IMAGE_GENERATED,
MY_IMAGE_OPENED,
BACK,
END_NOOP) = range(14)
(IMAGE_CONFIGURATION,
RECIPE_DETAILS,
BUILD_DETAILS,
PACKAGE_DETAILS,
IMAGE_DETAILS,
END_TAB) = range(6)
__step2page__ = {
MACHINE_SELECTION : IMAGE_CONFIGURATION,
LAYER_CHANGED : IMAGE_CONFIGURATION,
RCPPKGINFO_POPULATING : IMAGE_CONFIGURATION,
RCPPKGINFO_POPULATED : IMAGE_CONFIGURATION,
RECIPE_SELECTION : RECIPE_DETAILS,
PACKAGE_GENERATING : BUILD_DETAILS,
PACKAGE_GENERATED : PACKAGE_DETAILS,
PACKAGE_SELECTION : PACKAGE_DETAILS,
FAST_IMAGE_GENERATING : BUILD_DETAILS,
IMAGE_GENERATING : BUILD_DETAILS,
IMAGE_GENERATED : IMAGE_DETAILS,
MY_IMAGE_OPENED : IMAGE_DETAILS,
END_NOOP : None,
}
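# The tuple-of-range() unpacking above is the common pre-'enum' Python
# idiom for symbolic integer constants; on a small scale (illustrative):
#   (RED, GREEN, BLUE) = range(3)   # RED == 0, GREEN == 1, BLUE == 2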
def __init__(self, hobHandler, recipe_model, package_model):
super(Builder, self).__init__()
# handler
self.handler = hobHandler
self.template = None
# settings
params = self.handler.get_parameters()
self.configuration = Configuration(params)
self.parameters = Parameters(params)
# build step
self.current_step = None
self.previous_step = None
self.stopping = False
self.build_succeeded = True
# recipe model and package model
self.recipe_model = recipe_model
self.package_model = package_model
# create visual elements
self.create_visual_elements()
# connect the signals to functions
#self.connect("configure-event", self.resize_window_cb)
self.connect("delete-event", self.destroy_window_cb)
self.recipe_model.connect ("recipe-selection-changed", self.recipelist_changed_cb)
self.package_model.connect("package-selection-changed", self.packagelist_changed_cb)
self.recipe_model.connect ("recipelist-populated", self.recipelist_populated_cb)
self.package_model.connect("packagelist-populated", self.packagelist_populated_cb)
self.handler.connect("config-updated", self.handler_config_updated_cb)
self.handler.connect("package-formats-updated", self.handler_package_formats_updated_cb)
self.handler.connect("layers-updated", self.handler_layers_updated_cb)
self.handler.connect("parsing-started", self.handler_parsing_started_cb)
self.handler.connect("parsing", self.handler_parsing_cb)
self.handler.connect("parsing-completed", self.handler_parsing_completed_cb)
self.handler.build.connect("build-started", self.handler_build_started_cb)
self.handler.build.connect("build-succeeded", self.handler_build_succeeded_cb)
self.handler.build.connect("build-failed", self.handler_build_failed_cb)
self.handler.build.connect("task-started", self.handler_task_started_cb)
self.handler.connect("generating-data", self.handler_generating_data_cb)
self.handler.connect("data-generated", self.handler_data_generated_cb)
self.handler.connect("command-succeeded", self.handler_command_succeeded_cb)
self.handler.connect("command-failed", self.handler_command_failed_cb)
self.switch_page(self.MACHINE_SELECTION)
def create_visual_elements(self):
self.set_title("HOB -- Image Creator")
self.set_icon_name("applications-development")
self.set_position(gtk.WIN_POS_CENTER_ALWAYS)
self.set_resizable(True)
window_width = self.get_screen().get_width()
window_height = self.get_screen().get_height()
if window_width >= hwc.MAIN_WIN_WIDTH:
window_width = hwc.MAIN_WIN_WIDTH
window_height = hwc.MAIN_WIN_HEIGHT
else:
lbl = "<b>Screen dimension mismatched</b>\nfor better usability and visual effects,"
lbl = lbl + " the screen dimension should be 1024x768 or above."
dialog = CrumbsDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
dialog.add_button(gtk.STOCK_OK, gtk.RESPONSE_OK)
dialog.run()
dialog.destroy()
self.set_size_request(window_width, window_height)
self.vbox = gtk.VBox(False, 0)
self.vbox.set_border_width(0)
self.add(self.vbox)
# create pages
self.image_configuration_page = ImageConfigurationPage(self)
self.recipe_details_page = RecipeSelectionPage(self)
self.build_details_page = BuildDetailsPage(self)
self.package_details_page = PackageSelectionPage(self)
self.image_details_page = ImageDetailsPage(self)
self.nb = gtk.Notebook()
self.nb.set_show_tabs(False)
self.nb.insert_page(self.image_configuration_page, None, self.IMAGE_CONFIGURATION)
self.nb.insert_page(self.recipe_details_page, None, self.RECIPE_DETAILS)
self.nb.insert_page(self.build_details_page, None, self.BUILD_DETAILS)
self.nb.insert_page(self.package_details_page, None, self.PACKAGE_DETAILS)
self.nb.insert_page(self.image_details_page, None, self.IMAGE_DETAILS)
self.vbox.pack_start(self.nb, expand=True, fill=True)
self.show_all()
self.nb.set_current_page(0)
def get_split_model(self):
return self.handler.split_model
def load_template(self, path):
self.template = TemplateMgr()
self.template.load(path)
self.configuration.load(self.template)
if self.get_split_model():
if not set(self.configuration.layers) <= set(self.parameters.all_layers):
return False
else:
for layer in self.configuration.layers:
if not os.path.exists(layer+'/conf/layer.conf'):
return False
self.switch_page(self.LAYER_CHANGED)
self.template.destroy()
self.template = None
def save_template(self, path):
if path.rfind("/") == -1:
filename = "default"
path = "."
else:
filename = path[path.rfind("/") + 1:len(path)]
path = path[0:path.rfind("/")]
self.template = TemplateMgr()
self.template.open(filename, path)
self.configuration.save(self.template, filename)
self.template.save()
self.template.destroy()
self.template = None
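# save_template() above splits the path by hand with rfind(); a
# near-equivalent sketch using the standard library (a hypothetical
# helper; note the original substitutes the name "default" when no
# directory part is given, which is preserved here):
def _split_template_path_example(path):
    import os
    head, tail = os.path.split(path)
    if not head:
        return ".", "default"
    return head, tail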
def switch_page(self, next_step):
# Main Workflow (Business Logic)
self.nb.set_current_page(self.__step2page__[next_step])
if next_step == self.MACHINE_SELECTION: # init step
self.image_configuration_page.show_machine()
elif next_step == self.LAYER_CHANGED:
# after the layer set has been changed by the user
self.image_configuration_page.show_machine()
self.handler.refresh_layers(self.configuration.layers)
elif next_step == self.RCPPKGINFO_POPULATING:
# MACHINE CHANGED action or SETTINGS CHANGED
# show the progress bar
self.image_configuration_page.show_info_populating()
self.generate_recipes()
elif next_step == self.RCPPKGINFO_POPULATED:
self.image_configuration_page.show_info_populated()
elif next_step == self.RECIPE_SELECTION:
pass
elif next_step == self.PACKAGE_SELECTION:
pass
elif next_step == self.PACKAGE_GENERATING or next_step == self.FAST_IMAGE_GENERATING:
# both PACKAGE_GENERATING and FAST_IMAGE_GENERATING share the same page
self.build_details_page.show_page(next_step)
self.generate_packages()
elif next_step == self.PACKAGE_GENERATED:
pass
elif next_step == self.IMAGE_GENERATING:
# after packages are generated, selected_packages need to
# be updated in package_model per selected_image in recipe_model
self.build_details_page.show_page(next_step)
self.generate_image()
elif next_step == self.IMAGE_GENERATED:
self.image_details_page.show_page(next_step)
elif next_step == self.MY_IMAGE_OPENED:
self.image_details_page.show_page(next_step)
self.previous_step = self.current_step
self.current_step = next_step
def set_user_config(self):
self.handler.init_cooker()
# set bb layers
self.handler.set_bblayers(self.configuration.layers)
# set local configuration
self.handler.set_machine(self.configuration.curr_mach)
self.handler.set_package_format(self.configuration.curr_package_format)
self.handler.set_distro(self.configuration.curr_distro)
self.handler.set_dl_dir(self.configuration.dldir)
self.handler.set_sstate_dir(self.configuration.sstatedir)
self.handler.set_sstate_mirror(self.configuration.sstatemirror)
self.handler.set_pmake(self.configuration.pmake)
self.handler.set_bbthreads(self.configuration.bbthread)
self.handler.set_rootfs_size(self.configuration.image_rootfs_size)
self.handler.set_extra_size(self.configuration.image_extra_size)
self.handler.set_incompatible_license(self.configuration.incompat_license)
self.handler.set_sdk_machine(self.configuration.curr_sdk_machine)
self.handler.set_image_fstypes(self.configuration.image_fstypes)
self.handler.set_extra_config(self.configuration.extra_setting)
self.handler.set_extra_inherit("packageinfo")
def reset_recipe_model(self):
self.recipe_model.reset()
def reset_package_model(self):
self.package_model.reset()
def update_recipe_model(self, selected_image, selected_recipes):
self.recipe_model.set_selected_image(selected_image)
self.recipe_model.set_selected_recipes(selected_recipes)
def update_package_model(self, selected_packages):
left = self.package_model.set_selected_packages(selected_packages)
self.configuration.selected_packages += left
def generate_packages(self):
# Build packages
_, all_recipes = self.recipe_model.get_selected_recipes()
self.set_user_config()
self.handler.reset_build()
self.handler.generate_packages(all_recipes)
def generate_recipes(self):
# Parse recipes
self.set_user_config()
self.handler.generate_recipes()
def generate_image(self):
# Build image
self.set_user_config()
all_packages = self.package_model.get_selected_packages()
self.handler.reset_build()
self.handler.generate_image(all_packages, self.configuration.toolchain_build)
# Callback Functions
def handler_config_updated_cb(self, handler, which, values):
if which == "distro":
self.parameters.all_distros = values
elif which == "machine":
self.parameters.all_machines = values
self.image_configuration_page.update_machine_combo()
elif which == "machine-sdk":
self.parameters.all_sdk_machines = values
def handler_package_formats_updated_cb(self, handler, formats):
self.parameters.all_package_formats = formats
def handler_layers_updated_cb(self, handler, layers):
self.parameters.all_layers = layers
def handler_command_succeeded_cb(self, handler, initcmd):
if initcmd == self.handler.LAYERS_REFRESH:
self.image_configuration_page.switch_machine_combo()
elif initcmd in [self.handler.GENERATE_RECIPES,
self.handler.GENERATE_PACKAGES,
self.handler.GENERATE_IMAGE]:
self.handler.request_package_info_async()
elif initcmd == self.handler.POPULATE_PACKAGEINFO:
if self.current_step == self.FAST_IMAGE_GENERATING:
self.switch_page(self.IMAGE_GENERATING)
elif self.current_step == self.RCPPKGINFO_POPULATING:
self.switch_page(self.RCPPKGINFO_POPULATED)
elif self.current_step == self.PACKAGE_GENERATING:
self.switch_page(self.PACKAGE_GENERATED)
elif self.current_step == self.IMAGE_GENERATING:
self.switch_page(self.IMAGE_GENERATED)
def handler_command_failed_cb(self, handler, msg):
lbl = "<b>Error</b>\n"
lbl = lbl + "%s\n\n" % msg
dialog = CrumbsDialog(self, lbl, gtk.STOCK_DIALOG_WARNING)
dialog.add_button(gtk.STOCK_OK, gtk.RESPONSE_OK)
response = dialog.run()
dialog.destroy()
self.handler.clear_busy()
self.configuration.curr_mach = None
self.image_configuration_page.switch_machine_combo()
self.switch_page(self.MACHINE_SELECTION)
def window_sensitive(self, sensitive):
self.set_sensitive(sensitive)
if sensitive:
self.get_root_window().set_cursor(gtk.gdk.Cursor(gtk.gdk.LEFT_PTR))
else:
self.get_root_window().set_cursor(gtk.gdk.Cursor(gtk.gdk.WATCH))
def handler_generating_data_cb(self, handler):
self.window_sensitive(False)
def handler_data_generated_cb(self, handler):
self.window_sensitive(True)
def recipelist_populated_cb(self, recipe_model):
selected_image = self.configuration.selected_image
selected_recipes = self.configuration.selected_recipes[:]
selected_packages = self.configuration.selected_packages[:]
self.recipe_model.image_list_append(selected_image,
" ".join(selected_recipes),
" ".join(selected_packages))
self.image_configuration_page.update_image_combo(self.recipe_model, selected_image)
self.update_recipe_model(selected_image, selected_recipes)
def packagelist_populated_cb(self, package_model):
selected_packages = self.configuration.selected_packages[:]
self.update_package_model(selected_packages)
def recipelist_changed_cb(self, recipe_model):
self.recipe_details_page.refresh_selection()
def packagelist_changed_cb(self, package_model):
self.package_details_page.refresh_selection()
def handler_parsing_started_cb(self, handler, message):
if self.current_step != self.RCPPKGINFO_POPULATING:
return
fraction = 0
if message["eventname"] == "TreeDataPreparationStarted":
fraction = 0.6 + fraction
self.image_configuration_page.update_progress_bar(message["title"], fraction)
def handler_parsing_cb(self, handler, message):
if self.current_step != self.RCPPKGINFO_POPULATING:
return
fraction = message["current"] * 1.0/message["total"]
if message["eventname"] == "TreeDataPreparationProgress":
fraction = 0.6 + 0.4 * fraction
else:
fraction = 0.6 * fraction
self.image_configuration_page.update_progress_bar(message["title"], fraction)
def handler_parsing_completed_cb(self, handler, message):
if self.current_step != self.RCPPKGINFO_POPULATING:
return
if message["eventname"] == "TreeDataPreparationCompleted":
fraction = 1.0
else:
fraction = 0.6
self.image_configuration_page.update_progress_bar(message["title"], fraction)
def handler_build_started_cb(self, running_build):
if self.current_step == self.FAST_IMAGE_GENERATING:
fraction = 0
elif self.current_step == self.IMAGE_GENERATING:
if self.previous_step == self.FAST_IMAGE_GENERATING:
fraction = 0.9
else:
fraction = 0
elif self.current_step == self.PACKAGE_GENERATING:
fraction = 0
self.build_details_page.update_progress_bar("Build Started: ", fraction)
def handler_build_succeeded_cb(self, running_build):
self.build_succeeded = True
if self.current_step == self.FAST_IMAGE_GENERATING:
fraction = 0.9
elif self.current_step == self.IMAGE_GENERATING:
fraction = 1.0
self.parameters.image_names = []
linkname = 'hob-image-' + self.configuration.curr_mach
for image_type in self.parameters.image_types:
linkpath = self.parameters.image_addr + '/' + linkname + '.' + image_type
if os.path.exists(linkpath):
self.parameters.image_names.append(os.readlink(linkpath))
elif self.current_step == self.PACKAGE_GENERATING:
fraction = 1.0
self.build_details_page.update_progress_bar("Build Completed: ", fraction)
def handler_build_failed_cb(self, running_build):
self.build_succeeded = False
if self.current_step == self.FAST_IMAGE_GENERATING:
fraction = 0.9
elif self.current_step == self.IMAGE_GENERATING:
fraction = 1.0
elif self.current_step == self.PACKAGE_GENERATING:
fraction = 1.0
self.build_details_page.update_progress_bar("Build Failed: ", fraction, False)
self.build_details_page.show_back_button()
self.build_details_page.hide_stop_button()
self.handler.build_failed_async()
self.stopping = False
def handler_task_started_cb(self, running_build, message):
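# Progress weighting: on the fast-image path the package-build scene queue
# covers roughly the first 27% of the bar and its run queue the next 63%;
# the follow-on image step adds 3% (scene) + 7% (run), so the fractions
# below sum to 1.0.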
fraction = message["current"] * 1.0/message["total"]
title = "Build packages"
if self.current_step == self.FAST_IMAGE_GENERATING:
if message["eventname"] == "sceneQueueTaskStarted":
fraction = 0.27 * fraction
elif message["eventname"] == "runQueueTaskStarted":
fraction = 0.27 + 0.63 * fraction
elif self.current_step == self.IMAGE_GENERATING:
title = "Build image"
if self.previous_step == self.FAST_IMAGE_GENERATING:
if message["eventname"] == "sceneQueueTaskStarted":
fraction = 0.27 + 0.63 + 0.03 * fraction
elif message["eventname"] == "runQueueTaskStarted":
fraction = 0.27 + 0.63 + 0.03 + 0.07 * fraction
else:
if message["eventname"] == "sceneQueueTaskStarted":
fraction = 0.2 * fraction
elif message["eventname"] == "runQueueTaskStarted":
fraction = 0.2 + 0.8 * fraction
elif self.current_step == self.PACKAGE_GENERATING:
if message["eventname"] == "sceneQueueTaskStarted":
fraction = 0.2 * fraction
elif message["eventname"] == "runQueueTaskStarted":
fraction = 0.2 + 0.8 * fraction
self.build_details_page.update_progress_bar(title + ": ", fraction)
def destroy_window_cb(self, widget, event):
lbl = "<b>Do you really want to exit the Hob image creator?</b>"
dialog = CrumbsDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
dialog.add_button(gtk.STOCK_YES, gtk.RESPONSE_YES)
dialog.add_button(gtk.STOCK_NO, gtk.RESPONSE_NO)
dialog.set_default_response(gtk.RESPONSE_NO)
response = dialog.run()
dialog.destroy()
if response == gtk.RESPONSE_YES:
gtk.main_quit()
return False
else:
return True
def build_packages(self):
_, all_recipes = self.recipe_model.get_selected_recipes()
if not all_recipes:
lbl = "<b>No selections made</b>\nYou have not made any selections"
lbl = lbl + " so there isn't anything to bake at this time."
dialog = CrumbsDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
dialog.add_button(gtk.STOCK_OK, gtk.RESPONSE_OK)
dialog.run()
dialog.destroy()
return
self.switch_page(self.PACKAGE_GENERATING)
def build_image(self):
selected_packages = self.package_model.get_selected_packages()
if not selected_packages:
lbl = "<b>No selections made</b>\nYou have not made any selections"
lbl = lbl + " so there isn't anything to bake at this time."
dialog = CrumbsDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
dialog.add_button(gtk.STOCK_OK, gtk.RESPONSE_OK)
dialog.run()
dialog.destroy()
return
self.switch_page(self.IMAGE_GENERATING)
def just_bake(self):
selected_image = self.recipe_model.get_selected_image()
selected_packages = self.package_model.get_selected_packages() or []
# If there is no base image and no selected packages, don't build anything
if not (selected_packages or selected_image != self.recipe_model.__dummy_image__):
lbl = "<b>No selections made</b>\nYou have not made any selections"
lbl = lbl + " so there isn't anything to bake at this time."
dialog = CrumbsDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
dialog.add_button(gtk.STOCK_OK, gtk.RESPONSE_OK)
dialog.run()
dialog.destroy()
return
self.switch_page(self.FAST_IMAGE_GENERATING)
def show_binb_dialog(self, binb):
binb_dialog = BinbDialog("Brought in by:", binb, self)
binb_dialog.run()
binb_dialog.destroy()
def show_layer_selection_dialog(self):
dialog = LayerSelectionDialog(title = "Layer Selection",
layers = copy.deepcopy(self.configuration.layers),
all_layers = self.parameters.all_layers,
split_model = self.get_split_model(),
parent = self,
flags = gtk.DIALOG_MODAL
| gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR,
buttons = (gtk.STOCK_OK, gtk.RESPONSE_YES,
gtk.STOCK_CANCEL, gtk.RESPONSE_NO))
response = dialog.run()
if response == gtk.RESPONSE_YES:
self.configuration.layers = dialog.layers
# DO refresh layers
if dialog.layers_changed:
self.switch_page(self.LAYER_CHANGED)
dialog.destroy()
def show_load_template_dialog(self):
dialog = gtk.FileChooserDialog("Load Template Files", self,
gtk.FILE_CHOOSER_ACTION_OPEN,
(gtk.STOCK_OPEN, gtk.RESPONSE_YES,
gtk.STOCK_CANCEL, gtk.RESPONSE_NO))
filter = gtk.FileFilter()
filter.set_name("HOB Files")
filter.add_pattern("*.hob")
dialog.add_filter(filter)
response = dialog.run()
if response == gtk.RESPONSE_YES:
path = dialog.get_filename()
self.load_template(path)
dialog.destroy()
def show_save_template_dialog(self):
dialog = gtk.FileChooserDialog("Save Template Files", self,
gtk.FILE_CHOOSER_ACTION_SAVE,
(gtk.STOCK_SAVE, gtk.RESPONSE_YES,
gtk.STOCK_CANCEL, gtk.RESPONSE_NO))
dialog.set_current_name("hob")
response = dialog.run()
if response == gtk.RESPONSE_YES:
path = dialog.get_filename()
self.save_template(path)
dialog.destroy()
def show_load_my_images_dialog(self):
dialog = ImageSelectionDialog(self.parameters.image_addr, self.parameters.image_types,
"Open My Images", self,
gtk.FILE_CHOOSER_ACTION_SAVE,
(gtk.STOCK_OPEN, gtk.RESPONSE_YES,
gtk.STOCK_CANCEL, gtk.RESPONSE_NO))
response = dialog.run()
if response == gtk.RESPONSE_YES:
if not dialog.image_names:
lbl = "<b>No selections made</b>\nYou have not made any selections"
crumbs_dialog = CrumbsDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
crumbs_dialog.add_button(gtk.STOCK_OK, gtk.RESPONSE_OK)
crumbs_dialog.run()
crumbs_dialog.destroy()
dialog.destroy()
return
self.parameters.image_addr = dialog.image_folder
self.parameters.image_names = dialog.image_names[:]
self.switch_page(self.MY_IMAGE_OPENED)
dialog.destroy()
def show_adv_settings_dialog(self):
dialog = AdvancedSettingDialog(title = "Settings",
configuration = copy.deepcopy(self.configuration),
all_image_types = self.parameters.image_types,
all_package_formats = self.parameters.all_package_formats,
all_distros = self.parameters.all_distros,
all_sdk_machines = self.parameters.all_sdk_machines,
max_threads = self.parameters.max_threads,
split_model = self.get_split_model(),
parent = self,
flags = gtk.DIALOG_MODAL
| gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR,
buttons = ("Save", gtk.RESPONSE_YES,
gtk.STOCK_CANCEL, gtk.RESPONSE_NO))
response = dialog.run()
if response == gtk.RESPONSE_YES:
self.configuration = dialog.configuration
# DO reparse recipes
if dialog.settings_changed:
if self.configuration.curr_mach == "":
self.switch_page(self.MACHINE_SELECTION)
else:
self.switch_page(self.RCPPKGINFO_POPULATING)
dialog.destroy()
def deploy_image(self, image_name):
if not image_name:
lbl = "<b>Please select an image to deploy.</b>"
dialog = CrumbsDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
dialog.add_button(gtk.STOCK_OK, gtk.RESPONSE_OK)
dialog.run()
dialog.destroy()
return
image_path = os.path.join(self.parameters.image_addr, image_name)
dialog = DeployImageDialog(title = "Usb Image Maker",
image_path = image_path,
parent = self,
flags = gtk.DIALOG_MODAL
| gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR,
buttons = ("Close", gtk.RESPONSE_NO,
"Make usb image", gtk.RESPONSE_YES))
response = dialog.run()
dialog.destroy()
def runqemu_image(self, image_name):
if not image_name:
lbl = "<b>Please select an image to launch in QEMU.</b>"
dialog = CrumbsDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
dialog.add_button(gtk.STOCK_OK, gtk.RESPONSE_OK)
dialog.run()
dialog.destroy()
return
dialog = gtk.FileChooserDialog("Load Kernel Files", self,
gtk.FILE_CHOOSER_ACTION_OPEN,
(gtk.STOCK_OPEN, gtk.RESPONSE_YES,
gtk.STOCK_CANCEL, gtk.RESPONSE_NO))
filter = gtk.FileFilter()
filter.set_name("Kernel Files")
filter.add_pattern("*.bin")
dialog.add_filter(filter)
dialog.set_current_folder(self.parameters.image_addr)
response = dialog.run()
if response == gtk.RESPONSE_YES:
kernel_path = dialog.get_filename()
image_path = os.path.join(self.parameters.image_addr, image_name)
dialog.destroy()
if response == gtk.RESPONSE_YES:
source_env_path = os.path.join(self.parameters.core_base, "oe-init-build-env")
tmp_path = os.path.join(os.getcwd(), "tmp")
if os.path.exists(image_path) and os.path.exists(kernel_path) \
and os.path.exists(source_env_path) and os.path.exists(tmp_path):
cmdline = "/usr/bin/xterm -e "
cmdline += "\" export OE_TMPDIR=" + tmp_path + "; "
cmdline += "source " + source_env_path + " " + os.getcwd() + "; "
cmdline += "runqemu " + kernel_path + " " + image_path + "; bash\""
subprocess.Popen(shlex.split(cmdline))
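# A sketch of the assembled command line (hypothetical paths):
#   /usr/bin/xterm -e " export OE_TMPDIR=/cwd/tmp; source /base/oe-init-build-env /cwd; runqemu /imgs/kernel.bin /imgs/image.ext3; bash"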
else:
lbl = "<b>Path error</b>\nOne of your paths is wrong,"
lbl = lbl + " please make sure the following paths exist:\n"
lbl = lbl + "image path:" + image_path + "\n"
lbl = lbl + "kernel path:" + kernel_path + "\n"
lbl = lbl + "source environment path:" + source_env_path + "\n"
lbl = lbl + "tmp path: " + tmp_path + "."
dialog = CrumbsDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
dialog.add_button(gtk.STOCK_OK, gtk.RESPONSE_OK)
dialog.run()
dialog.destroy()
def show_packages(self, ask=True):
_, selected_recipes = self.recipe_model.get_selected_recipes()
if selected_recipes and ask:
lbl = "<b>Package list may be incomplete!</b>\nDo you want to build selected recipes"
lbl = lbl + " to get a full list (Yes) or just view the existing packages (No)?"
dialog = CrumbsDialog(self, lbl, gtk.STOCK_DIALOG_INFO)
dialog.add_button(gtk.STOCK_YES, gtk.RESPONSE_YES)
dialog.add_button(gtk.STOCK_NO, gtk.RESPONSE_NO)
dialog.set_default_response(gtk.RESPONSE_YES)
response = dialog.run()
dialog.destroy()
if response == gtk.RESPONSE_YES:
self.switch_page(self.PACKAGE_GENERATING)
else:
self.switch_page(self.PACKAGE_SELECTION)
else:
self.switch_page(self.PACKAGE_SELECTION)
def show_recipes(self):
self.switch_page(self.RECIPE_SELECTION)
def initiate_new_build(self):
self.configuration.curr_mach = ""
self.image_configuration_page.switch_machine_combo()
self.switch_page(self.MACHINE_SELECTION)
def show_configuration(self):
self.switch_page(self.RCPPKGINFO_POPULATED)
def stop_build(self):
if self.stopping:
lbl = "<b>Force Stop build?</b>\nYou've already selected Stop once,"
lbl = lbl + " would you like to 'Force Stop' the build?\n\n"
lbl = lbl + "This will stop the build as quickly as possible but may"
lbl = lbl + " well leave your build directory in an unusable state"
lbl = lbl + " that requires manual steps to fix.\n"
dialog = CrumbsDialog(self, lbl, gtk.STOCK_DIALOG_WARNING)
dialog.add_button(gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL)
dialog.add_button("Force Stop", gtk.RESPONSE_YES)
else:
lbl = "<b>Stop build?</b>\n\nAre you sure you want to stop this"
lbl = lbl + " build?\n\n'Force Stop' will stop the build as quickly as"
lbl = lbl + " possible but may well leave your build directory in an"
lbl = lbl + " unusable state that requires manual steps to fix.\n\n"
lbl = lbl + "'Stop' will stop the build as soon as all in"
lbl = lbl + " progress build tasks are finished. However if a"
lbl = lbl + " lengthy compilation phase is in progress this may take"
lbl = lbl + " some time."
dialog = CrumbsDialog(self, lbl, gtk.STOCK_DIALOG_WARNING)
dialog.add_button(gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL)
dialog.add_button("Stop", gtk.RESPONSE_OK)
dialog.add_button("Force Stop", gtk.RESPONSE_YES)
response = dialog.run()
dialog.destroy()
if response != gtk.RESPONSE_CANCEL:
self.stopping = True
if response == gtk.RESPONSE_OK:
self.handler.cancel_build()
elif response == gtk.RESPONSE_YES:
self.handler.cancel_build(True)


@@ -1,455 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2008 Intel Corporation
#
# Authored by Rob Bradford <rob@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
import threading
import os
import datetime
import time
class BuildConfiguration:
""" Represents a potential *or* historic *or* concrete build. It
encompasses all the things that we need to tell bitbake to do to make it
build what we want it to build.
It also stores the metadata URL and the set of possible machines (and the
distros / images / URIs for these). Apart from the metadata URL these are
not serialised to file (since they may be transient). In some ways this
functionality might be shifted to the loader class."""
def __init__ (self):
self.metadata_url = None
# Tuple of (distros, image, urls)
self.machine_options = {}
self.machine = None
self.distro = None
self.image = None
self.urls = []
self.extra_urls = []
self.extra_pkgs = []
def get_machines_model (self):
model = gtk.ListStore (gobject.TYPE_STRING)
for machine in self.machine_options.keys():
model.append ([machine])
return model
def get_distro_and_images_models (self, machine):
distro_model = gtk.ListStore (gobject.TYPE_STRING)
for distro in self.machine_options[machine][0]:
distro_model.append ([distro])
image_model = gtk.ListStore (gobject.TYPE_STRING)
for image in self.machine_options[machine][1]:
image_model.append ([image])
return (distro_model, image_model)
def get_repos (self):
self.urls = self.machine_options[self.machine][2]
return self.urls
# It might be a lot better if we stored these in, say, bitbake conf
# file format.
@staticmethod
def load_from_file (filename):
conf = BuildConfiguration()
with open(filename, "r") as f:
for line in f:
data = line.split (";")[1]
if (line.startswith ("metadata-url;")):
conf.metadata_url = data.strip()
continue
if (line.startswith ("url;")):
conf.urls += [data.strip()]
continue
if (line.startswith ("extra-url;")):
conf.extra_urls += [data.strip()]
continue
if (line.startswith ("machine;")):
conf.machine = data.strip()
continue
if (line.startswith ("distribution;")):
conf.distro = data.strip()
continue
if (line.startswith ("image;")):
conf.image = data.strip()
continue
return conf
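# For reference, a serialised configuration as consumed above is a set of
# "key;value" lines, e.g. (hypothetical values):
#   metadata-url;http://example.com/metadata
#   machine;qemux86
#   distribution;poky
#   image;poky-image-minimal
#   url;http://example.com/feed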
# Serialise to a file. This is part of the build process and we use this
# to be able to repeat a given build (using the same set of parameters)
# but also so that we can include the details of the image / machine /
# distro in the build manager tree view.
def write_to_file (self, filename):
f = open (filename, "w")
lines = []
if (self.metadata_url):
lines += ["metadata-url;%s\n" % (self.metadata_url)]
for url in self.urls:
lines += ["url;%s\n" % (url)]
for url in self.extra_urls:
lines += ["extra-url;%s\n" % (url)]
if (self.machine):
lines += ["machine;%s\n" % (self.machine)]
if (self.distro):
lines += ["distribution;%s\n" % (self.distro)]
if (self.image):
lines += ["image;%s\n" % (self.image)]
f.writelines (lines)
f.close ()
class BuildResult(gobject.GObject):
""" Represents an historic build. Perhaps not successful. But it includes
things such as the files that are in the directory (the output from the
build) as well as a deserialised BuildConfiguration file that is stored in
".conf" in the directory for the build.
This is GObject so that it can be included in the TreeStore."""
(STATE_COMPLETE, STATE_FAILED, STATE_ONGOING) = \
(0, 1, 2)
def __init__ (self, parent, identifier):
gobject.GObject.__init__ (self)
self.date = None
self.files = []
self.status = None
self.identifier = identifier
self.path = os.path.join (parent, identifier)
# Extract the date, since the directory name is of the
# format build-<year><month><day>-<ordinal> we can easily
# pull it out.
# TODO: Better to stat a file?
(_, date, revision) = identifier.split ("-")
year = int (date[0:4])
month = int (date[4:6])
day = int (date[6:8])
self.date = datetime.date (year, month, day)
self.conf = None
# By default builds are STATE_FAILED unless we find a "complete" file
# in which case they are STATE_COMPLETE
self.state = BuildResult.STATE_FAILED
for file in os.listdir (self.path):
if (file.startswith (".conf")):
conffile = os.path.join (self.path, file)
self.conf = BuildConfiguration.load_from_file (conffile)
elif (file.startswith ("complete")):
self.state = BuildResult.STATE_COMPLETE
else:
self.add_file (file)
def add_file (self, file):
# Just add the file for now. Don't care about the type.
self.files += [(file, None)]
class BuildManagerModel (gtk.TreeStore):
""" Model for the BuildManagerTreeView. This derives from gtk.TreeStore
but it abstracts nicely what the columns mean and the setup of the columns
in the model. """
(COL_IDENT, COL_DESC, COL_MACHINE, COL_DISTRO, COL_BUILD_RESULT, COL_DATE, COL_STATE) = \
(0, 1, 2, 3, 4, 5, 6)
def __init__ (self):
gtk.TreeStore.__init__ (self,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_OBJECT,
gobject.TYPE_INT64,
gobject.TYPE_INT)
class BuildManager (gobject.GObject):
""" This class manages the historic builds that have been found in the
"results" directory but is also used for starting a new build."""
__gsignals__ = {
'population-finished' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'populate-error' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
())
}
def update_build_result (self, result, iter):
# Convert the date into something we can sort by.
date = long (time.mktime (result.date.timetuple()))
# Add a top level entry for the build
self.model.set (iter,
BuildManagerModel.COL_IDENT, result.identifier,
BuildManagerModel.COL_DESC, result.conf.image,
BuildManagerModel.COL_MACHINE, result.conf.machine,
BuildManagerModel.COL_DISTRO, result.conf.distro,
BuildManagerModel.COL_BUILD_RESULT, result,
BuildManagerModel.COL_DATE, date,
BuildManagerModel.COL_STATE, result.state)
# And then we use the files in the directory as the children for the
# top level iter.
for file in result.files:
self.model.append (iter, (None, file[0], None, None, None, date, -1))
# This function is called as an idle by the BuildManagerPopulaterThread
def add_build_result (self, result):
gtk.gdk.threads_enter()
self.known_builds += [result]
self.update_build_result (result, self.model.append (None))
gtk.gdk.threads_leave()
def notify_build_finished (self):
# This is a bit of a hack. If we have a build running then we
# will have a row in the model in STATE_ONGOING. Find it and mark it
# as if it were a proper historic build (well, it is completed now...)
# We need to use the iters here rather than the Python iterator
# interface to the model since we need to pass it into
# update_build_result
iter = self.model.get_iter_first()
while (iter):
(ident, state) = self.model.get(iter,
BuildManagerModel.COL_IDENT,
BuildManagerModel.COL_STATE)
if state == BuildResult.STATE_ONGOING:
result = BuildResult (self.results_directory, ident)
self.update_build_result (result, iter)
iter = self.model.iter_next(iter)
def notify_build_succeeded (self):
# Write the "complete" file so that the BuildResult object we later
# create and put into the model is marked as complete
complete_file_path = os.path.join (self.cur_build_directory, "complete")
f = open (complete_file_path, "w")
f.close()
self.notify_build_finished()
def notify_build_failed (self):
# Without a "complete" file this will mark the build as failed:
self.notify_build_finished()
# This function is called as an idle
def emit_population_finished_signal (self):
gtk.gdk.threads_enter()
self.emit ("population-finished")
gtk.gdk.threads_leave()
class BuildManagerPopulaterThread (threading.Thread):
def __init__ (self, manager, directory):
threading.Thread.__init__ (self)
self.manager = manager
self.directory = directory
def run (self):
# For each of the "build-<...>" directories ..
if os.path.exists (self.directory):
for directory in os.listdir (self.directory):
if not directory.startswith ("build-"):
continue
build_result = BuildResult (self.directory, directory)
self.manager.add_build_result (build_result)
gobject.idle_add (BuildManager.emit_population_finished_signal,
self.manager)
def __init__ (self, server, results_directory):
gobject.GObject.__init__ (self)
# The builds that we've found from walking the result directory
self.known_builds = []
# Save out the bitbake server, we need this for issuing commands to
# the cooker:
self.server = server
# The TreeStore that we use
self.model = BuildManagerModel ()
# The results directory is where we create (and look for) the
# build-<xyz>-<n> directories. We need to populate ourselves from
# that directory
self.results_directory = results_directory
self.populate_from_directory (self.results_directory)
def populate_from_directory (self, directory):
thread = BuildManager.BuildManagerPopulaterThread (self, directory)
thread.start()
# Come up with the name for the next build ident by combining "build-"
# with the date formatted as yyyymmdd and then an ordinal. We do this
# with an optimistic algorithm, incrementing the ordinal if we find that
# it already exists.
def get_next_build_ident (self):
today = datetime.date.today ()
# Zero-pad the month and day so that BuildResult can parse the name back
datestr = today.strftime ("%Y%m%d")
revision = 0
test_name = "build-%s-%d" % (datestr, revision)
test_path = os.path.join (self.results_directory, test_name)
while (os.path.exists (test_path)):
revision += 1
test_name = "build-%s-%d" % (datestr, revision)
test_path = os.path.join (self.results_directory, test_name)
return test_name
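# e.g. the first build on 2012-01-05 is named "build-20120105-0", the
# next "build-20120105-1", and so on.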
# Take a BuildConfiguration and then try to build it based on the
# parameters of that configuration.
def do_build (self, conf):
server = self.server
# Work out the build directory. Note we actually create the
# directories here since we need to write the ".conf" file. Otherwise
# we could have relied on bitbake's builder thread to actually make
# the directories as it proceeds with the build.
ident = self.get_next_build_ident ()
build_directory = os.path.join (self.results_directory,
ident)
self.cur_build_directory = build_directory
os.makedirs (build_directory)
conffile = os.path.join (build_directory, ".conf")
conf.write_to_file (conffile)
# Add a row to the model representing this ongoing build. It's kinda a
# fake entry. If this build completes or fails then this gets updated
# with the real stuff like the historic builds
date = long (time.time())
self.model.append (None, (ident, conf.image, conf.machine, conf.distro,
None, date, BuildResult.STATE_ONGOING))
try:
server.runCommand(["setVariable", "BUILD_IMAGES_FROM_FEEDS", 1])
server.runCommand(["setVariable", "MACHINE", conf.machine])
server.runCommand(["setVariable", "DISTRO", conf.distro])
server.runCommand(["setVariable", "PACKAGE_CLASSES", "package_ipk"])
server.runCommand(["setVariable", "BBFILES", \
"""${OEROOT}/meta/packages/*/*.bb ${OEROOT}/meta-moblin/packages/*/*.bb"""])
server.runCommand(["setVariable", "TMPDIR", "${OEROOT}/build/tmp"])
server.runCommand(["setVariable", "IPK_FEED_URIS", \
" ".join(conf.get_repos())])
server.runCommand(["setVariable", "DEPLOY_DIR_IMAGE",
build_directory])
server.runCommand(["buildTargets", [conf.image], "rootfs"])
except Exception as e:
print(e)
class BuildManagerTreeView (gtk.TreeView):
""" The tree view for the build manager. This shows the historic builds
and so forth. """
# We use this function to control what goes in the cell since we store
# the date in the model as seconds since the epoch (for sorting) and so we
# need to make it human readable.
def date_format_custom_cell_data_func (self, col, cell, model, iter):
date = model.get (iter, BuildManagerModel.COL_DATE)[0]
datestr = time.strftime("%A %d %B %Y", time.localtime(date))
cell.set_property ("text", datestr)
# This format function controls what goes in the cell. We use this to map
# the integer state to a string and also to colourise the text
def state_format_custom_cell_data_fun (self, col, cell, model, iter):
state = model.get (iter, BuildManagerModel.COL_STATE)[0]
if (state == BuildResult.STATE_ONGOING):
cell.set_property ("text", "Active")
cell.set_property ("foreground", "#000000")
elif (state == BuildResult.STATE_FAILED):
cell.set_property ("text", "Failed")
cell.set_property ("foreground", "#ff0000")
elif (state == BuildResult.STATE_COMPLETE):
cell.set_property ("text", "Complete")
cell.set_property ("foreground", "#00ff00")
else:
cell.set_property ("text", "")
def __init__ (self):
gtk.TreeView.__init__(self)
# Misc description column
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn (None, renderer,
text=BuildManagerModel.COL_DESC)
self.append_column (col)
# Machine
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Machine", renderer,
text=BuildManagerModel.COL_MACHINE)
self.append_column (col)
# distro
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Distribution", renderer,
text=BuildManagerModel.COL_DISTRO)
self.append_column (col)
# date (using a custom function for formatting the cell contents it
# takes epoch -> human readable string)
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Date", renderer,
text=BuildManagerModel.COL_DATE)
self.append_column (col)
col.set_cell_data_func (renderer,
self.date_format_custom_cell_data_func)
# For status.
renderer = gtk.CellRendererText ()
col = gtk.TreeViewColumn ("Status", renderer,
text = BuildManagerModel.COL_STATE)
self.append_column (col)
col.set_cell_data_func (renderer,
self.state_format_custom_cell_data_fun)


@@ -1,644 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
import hashlib
import os
import re
import subprocess
import shlex
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import HobWidget
from bb.ui.crumbs.progressbar import HobProgressBar
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUIs
In summary: spacing = 12px, border-width = 6px
"""
#
# CrumbsDialog
#
class CrumbsDialog(gtk.Dialog):
"""
A GNOME HIG compliant dialog widget.
Add buttons with gtk.Dialog.add_button or gtk.Dialog.add_buttons
"""
def __init__(self, parent=None, label="", icon=gtk.STOCK_INFO):
super(CrumbsDialog, self).__init__("", parent, gtk.DIALOG_DESTROY_WITH_PARENT)
#self.set_property("has-separator", False) # note: deprecated in 2.22
self.set_border_width(6)
self.vbox.set_property("spacing", 12)
self.action_area.set_property("spacing", 12)
self.action_area.set_property("border-width", 6)
first_row = gtk.HBox(spacing=12)
first_row.set_property("border-width", 6)
first_row.show()
self.vbox.add(first_row)
self.icon = gtk.Image()
self.icon.set_from_stock(icon, gtk.ICON_SIZE_DIALOG)
self.icon.set_property("yalign", 0.00)
self.icon.show()
first_row.add(self.icon)
self.label = gtk.Label()
self.label.set_use_markup(True)
self.label.set_line_wrap(True)
self.label.set_markup(label)
self.label.set_property("yalign", 0.00)
self.label.show()
first_row.add(self.label)
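# Typical usage (a minimal sketch):
#   dialog = CrumbsDialog(parent, "<b>Heading</b>\nDetail text", gtk.STOCK_DIALOG_INFO)
#   dialog.add_button(gtk.STOCK_OK, gtk.RESPONSE_OK)
#   response = dialog.run()
#   dialog.destroy()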
#
# Brought-in-by Dialog
#
class BinbDialog(gtk.Dialog):
"""
A dialog widget to show "brought in by" info when a recipe/package is clicked.
"""
def __init__(self, title, content, parent=None):
super(BinbDialog, self).__init__(title, parent, gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT, None)
self.set_position(gtk.WIN_POS_MOUSE)
self.set_resizable(False)
self.modify_bg(gtk.STATE_NORMAL, gtk.gdk.Color(HobColors.DARK))
hbox = gtk.HBox(False, 0)
self.vbox.pack_start(hbox, expand=False, fill=False, padding=10)
label = gtk.Label(content)
label.set_alignment(0, 0)
label.set_line_wrap(True)
label.modify_fg(gtk.STATE_NORMAL, gtk.gdk.Color(HobColors.WHITE))
hbox.pack_start(label, expand=False, fill=False, padding=10)
self.vbox.show_all()
#
# AdvancedSettings Dialog
#
class AdvancedSettingDialog (gtk.Dialog):
def __init__(self, title, configuration, all_image_types,
all_package_formats, all_distros, all_sdk_machines,
max_threads, split_model, parent, flags, buttons):
super(AdvancedSettingDialog, self).__init__(title, parent, flags, buttons)
# class members from other objects
# bitbake settings from Builder.Configuration
self.configuration = configuration
self.image_types = all_image_types
self.all_package_formats = all_package_formats
self.all_distros = all_distros
self.all_sdk_machines = all_sdk_machines
self.max_threads = max_threads
self.split_model = split_model
# class members for internal use
self.pkgfmt_store = None
self.distro_combo = None
self.dldir_text = None
self.sstatedir_text = None
self.sstatemirror_text = None
self.bb_spinner = None
self.pmake_spinner = None
self.rootfs_size_spinner = None
self.extra_size_spinner = None
self.gplv3_checkbox = None
self.toolchain_checkbox = None
self.setting_store = None
self.image_types_checkbuttons = {}
self.variables = {}
self.variables["PACKAGE_FORMAT"] = self.configuration.curr_package_format
self.variables["INCOMPATIBLE_LICENSE"] = self.configuration.incompat_license
self.variables["IMAGE_FSTYPES"] = self.configuration.image_fstypes
self.md5 = hashlib.md5(str(sorted(self.variables.items()))).hexdigest()
self.settings_changed = False
# create visual elements on the dialog
self.create_visual_elements()
self.connect("response", self.response_cb)
def create_visual_elements(self):
self.set_size_request(500, 500)
self.nb = gtk.Notebook()
self.nb.set_show_tabs(True)
self.nb.append_page(self.create_image_types_page(), gtk.Label("Image types"))
self.nb.append_page(self.create_output_page(), gtk.Label("Output"))
self.nb.append_page(self.create_build_environment_page(), gtk.Label("Build environment"))
self.nb.append_page(self.create_others_page(), gtk.Label("Others"))
self.nb.set_current_page(0)
self.vbox.pack_start(self.nb, expand=True, fill=True)
self.vbox.pack_end(gtk.HSeparator(), expand=True, fill=True)
self.show_all()
def create_image_types_page(self):
advanced_vbox = gtk.VBox(False, 15)
advanced_vbox.set_border_width(20)
rows = (len(self.image_types)+1)/2
table = gtk.Table(rows + 1, 10, True)
advanced_vbox.pack_start(table, expand=False, fill=False)
tooltip = "Select image file system types that will be used."
image = gtk.Image()
image.show()
image.set_from_stock(gtk.STOCK_INFO, gtk.ICON_SIZE_BUTTON)
image.set_tooltip_text(tooltip)
label = HobWidget.gen_label_widget("<span weight=\"bold\">Select image types:</span>")
table.attach(label, 0, 9, 0, 1)
table.attach(image, 9, 10, 0, 1)
i = 1
j = 1
for image_type in self.image_types:
self.image_types_checkbuttons[image_type] = gtk.CheckButton(image_type)
self.image_types_checkbuttons[image_type].set_tooltip_text("Build an %s image" % image_type)
table.attach(self.image_types_checkbuttons[image_type], j, j + 4, i, i + 1)
if image_type in self.configuration.image_fstypes:
self.image_types_checkbuttons[image_type].set_active(True)
i += 1
if i > rows:
i = 1
j = j + 4
return advanced_vbox
def create_output_page(self):
advanced_vbox = gtk.VBox(False, 15)
advanced_vbox.set_border_width(20)
sub_vbox = gtk.VBox(False, 5)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = HobWidget.gen_label_widget("<span weight=\"bold\">Packaging Format:</span>")
tooltip = "Select package formats that will be used. "
tooltip += "The first format will be used for final image"
pkgfmt_widget, self.pkgfmt_store = HobWidget.gen_pkgfmt_widget(self.configuration.curr_package_format, self.all_package_formats, tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(pkgfmt_widget, expand=False, fill=False)
sub_vbox = gtk.VBox(False, 5)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = HobWidget.gen_label_widget("<span weight=\"bold\">Image Rootfs Size: (MB)</span>")
tooltip = "Sets the size of your target image.\nThis is the basic size of your target image, unless your selected package size exceeds this value, or you set value to \"Image Extra Size\"."
rootfs_size_widget, self.rootfs_size_spinner = HobWidget.gen_spinner_widget(int(self.configuration.image_rootfs_size*1.0/1024), 0, 1024, tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(rootfs_size_widget, expand=False, fill=False)
sub_vbox = gtk.VBox(False, 5)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = HobWidget.gen_label_widget("<span weight=\"bold\">Image Extra Size: (MB)</span>")
tooltip = "Sets the extra free space of your target image.\nDefaultly, system will reserve 30% of your image size as your free space. If your image contains zypper, it will bring in 50MB more space. The maximum free space is 1024MB."
extra_size_widget, self.extra_size_spinner = HobWidget.gen_spinner_widget(int(self.configuration.image_extra_size*1.0/1024), 0, 1024, tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(extra_size_widget, expand=False, fill=False)
self.gplv3_checkbox = gtk.CheckButton("Exclude GPLv3 packages")
self.gplv3_checkbox.set_tooltip_text("Check this box to prevent GPLv3 packages from being included in your image")
if "GPLv3" in self.configuration.incompat_license.split():
self.gplv3_checkbox.set_active(True)
else:
self.gplv3_checkbox.set_active(False)
advanced_vbox.pack_start(self.gplv3_checkbox, expand=False, fill=False)
sub_hbox = gtk.HBox(False, 5)
advanced_vbox.pack_start(sub_hbox, expand=False, fill=False)
self.toolchain_checkbox = gtk.CheckButton("Build Toolchain")
self.toolchain_checkbox.set_tooltip_text("Check this box to build the related toolchain with your image")
self.toolchain_checkbox.set_active(self.configuration.toolchain_build)
sub_hbox.pack_start(self.toolchain_checkbox, expand=False, fill=False)
tooltip = "This is the Host platform you would like to run the toolchain"
sdk_machine_widget, self.sdk_machine_combo = HobWidget.gen_combo_widget(self.configuration.curr_sdk_machine, self.all_sdk_machines, tooltip)
sub_hbox.pack_start(sdk_machine_widget, expand=False, fill=False)
return advanced_vbox
def create_build_environment_page(self):
advanced_vbox = gtk.VBox(False, 15)
advanced_vbox.set_border_width(20)
sub_vbox = gtk.VBox(False, 5)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = HobWidget.gen_label_widget("<span weight=\"bold\">Select Distro:</span>")
tooltip = "This is the Yocto distribution you would like to use"
distro_widget, self.distro_combo = HobWidget.gen_combo_widget(self.configuration.curr_distro, self.all_distros, tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(distro_widget, expand=False, fill=False)
sub_vbox = gtk.VBox(False, 5)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = HobWidget.gen_label_widget("<span weight=\"bold\">BB_NUMBER_THREADS:</span>")
tooltip = "Sets the number of threads that bitbake tasks can run simultaneously"
bbthread_widget, self.bb_spinner = HobWidget.gen_spinner_widget(self.configuration.bbthread, 1, self.max_threads, tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(bbthread_widget, expand=False, fill=False)
sub_vbox = gtk.VBox(False, 5)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = HobWidget.gen_label_widget("<span weight=\"bold\">PARALLEL_MAKE:</span>")
tooltip = "Sets the make parallism, as known as 'make -j'"
pmake_widget, self.pmake_spinner = HobWidget.gen_spinner_widget(self.configuration.pmake, 1, self.max_threads, tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(pmake_widget, expand=False, fill=False)
sub_vbox = gtk.VBox(False, 5)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = HobWidget.gen_label_widget("<span weight=\"bold\">Set Download Directory:</span>")
tooltip = "Select a folder that caches the upstream project source code"
dldir_widget, self.dldir_text = HobWidget.gen_entry_widget(self.split_model, self.configuration.dldir, self, tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(dldir_widget, expand=False, fill=False)
sub_vbox = gtk.VBox(False, 5)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = HobWidget.gen_label_widget("<span weight=\"bold\">Select SSTATE Directory:</span>")
tooltip = "Select a folder that caches your prebuilt results"
sstatedir_widget, self.sstatedir_text = HobWidget.gen_entry_widget(self.split_model, self.configuration.sstatedir, self, tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(sstatedir_widget, expand=False, fill=False)
sub_vbox = gtk.VBox(False, 5)
advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
label = HobWidget.gen_label_widget("<span weight=\"bold\">Select SSTATE Mirror:</span>")
tooltip = "Select the prebuilt mirror that will fasten your build speed"
sstatemirror_widget, self.sstatemirror_text = HobWidget.gen_entry_widget(self.split_model, self.configuration.sstatemirror, self, tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(sstatemirror_widget, expand=False, fill=False)
return advanced_vbox
def create_others_page(self):
advanced_vbox = gtk.VBox(False, 15)
advanced_vbox.set_border_width(20)
sub_vbox = gtk.VBox(False, 5)
advanced_vbox.pack_start(sub_vbox, expand=True, fill=True)
label = HobWidget.gen_label_widget("<span weight=\"bold\">Add your own variables:</span>")
tooltip = "This is the key/value pair for your extra settings"
setting_widget, self.setting_store = HobWidget.gen_editable_settings(self.configuration.extra_setting, tooltip)
sub_vbox.pack_start(label, expand=False, fill=False)
sub_vbox.pack_start(setting_widget, expand=True, fill=True)
return advanced_vbox
def response_cb(self, dialog, response_id):
self.variables = {}
self.configuration.curr_package_format = ""
it = self.pkgfmt_store.get_iter_first()
while it:
value = self.pkgfmt_store.get_value(it, 2)
if value:
self.configuration.curr_package_format += (self.pkgfmt_store.get_value(it, 1) + " ")
it = self.pkgfmt_store.iter_next(it)
self.configuration.curr_package_format = self.configuration.curr_package_format.strip()
self.variables["PACKAGE_FORMAT"] = self.configuration.curr_package_format
self.configuration.curr_distro = self.distro_combo.get_active_text()
self.configuration.dldir = self.dldir_text.get_text()
self.configuration.sstatedir = self.sstatedir_text.get_text()
self.configuration.sstatemirror = self.sstatemirror_text.get_text()
self.configuration.bbthread = self.bb_spinner.get_value_as_int()
self.configuration.pmake = self.pmake_spinner.get_value_as_int()
self.configuration.image_rootfs_size = self.rootfs_size_spinner.get_value_as_int() * 1024
self.configuration.image_extra_size = self.extra_size_spinner.get_value_as_int() * 1024
self.configuration.image_fstypes = []
for image_type in self.image_types:
if self.image_types_checkbuttons[image_type].get_active():
self.configuration.image_fstypes.append(image_type)
self.variables["IMAGE_FSTYPES"] = self.configuration.image_fstypes
if self.gplv3_checkbox.get_active():
if "GPLv3" not in self.configuration.incompat_license.split():
self.configuration.incompat_license += " GPLv3"
else:
licenses = self.configuration.incompat_license.split()
if "GPLv3" in licenses:
licenses.remove("GPLv3")
self.configuration.incompat_license = " ".join(licenses)
self.configuration.incompat_license = self.configuration.incompat_license.strip()
self.variables["INCOMPATIBLE_LICENSE"] = self.configuration.incompat_license
self.configuration.toolchain_build = self.toolchain_checkbox.get_active()
self.configuration.extra_setting = {}
it = self.setting_store.get_iter_first()
while it:
key = self.setting_store.get_value(it, 0)
value = self.setting_store.get_value(it, 1)
self.configuration.extra_setting[key] = value
self.variables[key] = value
it = self.setting_store.iter_next(it)
md5 = hashlib.md5(str(sorted(self.variables.items()))).hexdigest()
self.settings_changed = (self.md5 != md5)
#
# DeployImageDialog
#
class DeployImageDialog (gtk.Dialog):
__dummy_usb__ = "--select a usb drive--"
def __init__(self, title, image_path, parent, flags, buttons):
super(DeployImageDialog, self).__init__(title, parent, flags, buttons)
self.image_path = image_path
self.create_visual_elements()
self.connect("response", self.response_cb)
def create_visual_elements(self):
self.set_border_width(20)
self.set_default_size(500, 250)
label = gtk.Label()
label.set_alignment(0.0, 0.5)
markup = "<span font_desc='12'>The image to be written into usb drive:</span>"
label.set_markup(markup)
self.vbox.pack_start(label, expand=False, fill=False, padding=2)
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
scroll.set_shadow_type(gtk.SHADOW_IN)
tv = gtk.TextView()
tv.set_editable(False)
tv.set_wrap_mode(gtk.WRAP_WORD)
tv.set_cursor_visible(False)
buf = gtk.TextBuffer()
buf.set_text(self.image_path)
tv.set_buffer(buf)
scroll.add(tv)
self.vbox.pack_start(scroll, expand=True, fill=True)
self.usb_desc = gtk.Label()
self.usb_desc.set_alignment(0.0, 0.5)
markup = "<span font_desc='12'>You haven't chosen any USB drive.</span>"
self.usb_desc.set_markup(markup)
self.usb_combo = gtk.combo_box_new_text()
self.usb_combo.connect("changed", self.usb_combo_changed_cb)
model = self.usb_combo.get_model()
model.clear()
self.usb_combo.append_text(self.__dummy_usb__)
for usb in self.find_all_usb_devices():
self.usb_combo.append_text("/dev/" + usb)
self.usb_combo.set_active(0)
self.vbox.pack_start(self.usb_combo, expand=True, fill=True)
self.vbox.pack_start(self.usb_desc, expand=False, fill=False, padding=2)
self.progress_bar = HobProgressBar()
self.vbox.pack_start(self.progress_bar, expand=False, fill=False)
separator = gtk.HSeparator()
self.vbox.pack_start(separator, expand=False, fill=True, padding=10)
self.vbox.show_all()
self.progress_bar.hide()
def popen_read(self, cmd):
return os.popen("%s 2>/dev/null" % cmd).read().strip()
def find_all_usb_devices(self):
usb_devs = [ os.readlink(u)
for u in self.popen_read('ls /dev/disk/by-id/usb*').split()
if not re.search(r'part\d+', u) ]
return [ '%s' % u[u.rfind('/')+1:] for u in usb_devs ]
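# e.g. a symlink /dev/disk/by-id/usb-Vendor_Model-0:0 -> ../../sdb
# (hypothetical device) yields "sdb"; "-partN" entries are filtered out.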
def get_usb_info(self, dev):
return "%s %s" % \
(self.popen_read('cat /sys/class/block/%s/device/vendor' % dev),
self.popen_read('cat /sys/class/block/%s/device/model' % dev))
def usb_combo_changed_cb(self, usb_combo):
combo_item = self.usb_combo.get_active_text()
if not combo_item or combo_item == self.__dummy_usb__:
markup = "<span font_desc='12'>You haven't chosen any USB drive.</span>"
self.usb_desc.set_markup(markup)
else:
markup = "<span font_desc='12'>" + self.get_usb_info(combo_item.lstrip("/dev/")) + "</span>"
self.usb_desc.set_markup(markup)
def response_cb(self, dialog, response_id):
if response_id == gtk.RESPONSE_YES:
combo_item = self.usb_combo.get_active_text()
if combo_item and combo_item != self.__dummy_usb__:
cmdline = "/usr/bin/xterm -e "
cmdline += "\"sudo dd if=" + self.image_path + " of=" + combo_item + "; bash\""
subprocess.Popen(args=shlex.split(cmdline))
def update_progress_bar(self, title, fraction, status=True):
self.progress_bar.update(fraction)
self.progress_bar.set_title(title)
self.progress_bar.set_rcstyle(status)
def write_file(self, ifile, ofile):
self.progress_bar.reset()
self.progress_bar.show()
f_from = os.open(ifile, os.O_RDONLY)
f_to = os.open(ofile, os.O_WRONLY)
total_size = os.stat(ifile).st_size
written_size = 0
while True:
buf = os.read(f_from, 1024*1024)
if not buf:
break
os.write(f_to, buf)
# count the bytes actually read so the final chunk doesn't overshoot
written_size += len(buf)
self.update_progress_bar("Writing to usb:", written_size * 1.0/total_size)
self.update_progress_bar("Writing completed:", 1.0)
os.close(f_from)
os.close(f_to)
self.progress_bar.hide()
#
# LayerSelectionDialog
#
class LayerSelectionDialog (gtk.Dialog):
def __init__(self, title, layers, all_layers, split_model,
parent, flags, buttons):
super(LayerSelectionDialog, self).__init__(title, parent, flags, buttons)
# class members from other objects
self.layers = layers
self.all_layers = all_layers
self.split_model = split_model
self.layers_changed = False
# class members for internal use
self.layer_store = None
# create visual elements on the dialog
self.create_visual_elements()
self.connect("response", self.response_cb)
def create_visual_elements(self):
self.set_border_width(20)
self.set_default_size(400, 250)
hbox_top = gtk.HBox()
self.set_border_width(12)
self.vbox.pack_start(hbox_top, expand=False, fill=False)
if self.split_model:
label = HobWidget.gen_label_widget("<span weight=\"bold\" font_desc='12'>Select Layers:</span>\n(Available layers under '${COREBASE}/layers/' directory)")
else:
label = HobWidget.gen_label_widget("<span weight=\"bold\" font_desc='12'>Select Layers:</span>")
hbox_top.pack_start(label, expand=False, fill=False)
tooltip = "Layer is a collection of bb files and conf files"
image = gtk.Image()
image.set_from_stock(gtk.STOCK_INFO, gtk.ICON_SIZE_BUTTON)
image.set_tooltip_text(tooltip)
hbox_top.pack_end(image, expand=False, fill=False)
layer_widget, self.layer_store = HobWidget.gen_layer_widget(self.split_model, self.layers, self.all_layers, self, None)
self.vbox.pack_start(layer_widget, expand=True, fill=True)
separator = gtk.HSeparator()
self.vbox.pack_start(separator, False, True, 5)
separator.show()
hbox_button = gtk.HBox()
self.vbox.pack_end(hbox_button, expand=False, fill=False)
hbox_button.show()
label = HobWidget.gen_label_widget("<i>'meta' is Core layer for Yocto images</i>\n"
"<span weight=\"bold\">Please do not remove it</span>")
hbox_button.pack_start(label, expand=False, fill=False)
self.show_all()
def response_cb(self, dialog, response_id):
model = self.layer_store
it = model.get_iter_first()
layers = []
while it:
if self.split_model:
inc = model.get_value(it, 1)
if inc:
layers.append(model.get_value(it, 0))
else:
layers.append(model.get_value(it, 0))
it = model.iter_next(it)
self.layers_changed = (self.layers != layers)
self.layers = layers
class ImageSelectionDialog (gtk.Dialog):
def __init__(self, image_folder, image_types, title, parent, flags, buttons):
super(ImageSelectionDialog, self).__init__(title, parent, flags, buttons)
self.connect("response", self.response_cb)
self.image_folder = image_folder
self.image_types = image_types
self.image_names = []
# create visual elements on the dialog
self.create_visual_elements()
self.image_store = gtk.ListStore(gobject.TYPE_STRING, gobject.TYPE_BOOLEAN)
self.fill_image_store()
def create_visual_elements(self):
self.set_border_width(20)
self.set_default_size(600, 300)
self.vbox.set_spacing(10)
hbox = gtk.HBox(False, 10)
self.vbox.pack_start(hbox, expand=False, fill=False)
entry = gtk.Entry()
entry.set_text(self.image_folder)
table = gtk.Table(1, 10, True)
table.set_size_request(560, -1)
hbox.pack_start(table, expand=False, fill=False)
table.attach(entry, 0, 9, 0, 1)
image = gtk.Image()
image.set_from_stock(gtk.STOCK_OPEN,gtk.ICON_SIZE_BUTTON)
open_button = gtk.Button()
open_button.set_image(image)
open_button.connect("clicked", self.select_path_cb, self, entry)
table.attach(open_button, 9, 10, 0, 1)
imgtv_widget, self.imgsel_tv = HobWidget.gen_imgtv_widget(400, 160)
self.vbox.pack_start(imgtv_widget, expand=True, fill=True)
self.show_all()
def select_path_cb(self, action, parent, entry):
dialog = gtk.FileChooserDialog("", parent,
gtk.FILE_CHOOSER_ACTION_SELECT_FOLDER,
(gtk.STOCK_OK, gtk.RESPONSE_YES,
gtk.STOCK_CANCEL, gtk.RESPONSE_NO))
response = dialog.run()
if response == gtk.RESPONSE_YES:
path = dialog.get_filename()
entry.set_text(path)
self.image_folder = path
self.fill_image_store()
dialog.destroy()
def fill_image_store(self):
self.image_store.clear()
imageset = set()
for root, dirs, files in os.walk(self.image_folder):
for f in files:
for image_type in self.image_types:
if f.endswith('.' + image_type):
imageset.add(f.rsplit('.' + image_type)[0])
for image in imageset:
self.image_store.set(self.image_store.append(), 0, image, 1, False)
self.imgsel_tv.set_model(self.image_store)
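# e.g. with image_types ["ext3", "tar.gz"] (hypothetical), a file named
# "myimage-qemux86.ext3" contributes the stem "myimage-qemux86" to the store.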
def response_cb(self, dialog, response_id):
self.image_names = []
if response_id == gtk.RESPONSE_YES:
iter = self.image_store.get_iter_first()
while iter:
path = self.image_store.get_path(iter)
if self.image_store[path][1]:
for root, dirs, files in os.walk(self.image_folder):
for f in files:
if f.startswith(self.image_store[path][0] + '.'):
self.image_names.append(f)
break
iter = self.image_store.iter_next(iter)


@@ -1,35 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
class HobColors:
WHITE = "#ffffff"
PALE_GREEN = "#aaffaa"
ORANGE = "#ff7c24"
PALE_RED = "#ffaaaa"
GRAY = "#aaaaaa"
LIGHT_GRAY = "#dddddd"
DARK = "#3c3b37"
BLACK = "#000000"
LIGHT_ORANGE = "#f7a787"
OK = WHITE
RUNNING = PALE_GREEN
WARNING = ORANGE
ERROR = PALE_RED


@@ -1,459 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gobject
import logging
import bb.event
import bb.command
from bb.ui.crumbs.runningbuild import RunningBuild
class HobHandler(gobject.GObject):
"""
This object does BitBake event handling for the hob gui.
"""
__gsignals__ = {
"layers-updated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"package-formats-updated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"config-updated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING, gobject.TYPE_PYOBJECT,)),
"command-succeeded" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_INT,)),
"command-failed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING,)),
"generating-data" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"data-generated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"parsing-started" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"parsing" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"parsing-completed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
}
(CFG_AVAIL_LAYERS, CFG_PATH_LAYERS, CFG_FILES_DISTRO, CFG_FILES_MACH,
CFG_FILES_SDKMACH, FILES_MATCH_CLASS, PARSE_CONFIG, PARSE_BBFILES,
GENERATE_TGTS, GENERATE_PACKAGEINFO, BUILD_TARGET_RECIPES,
BUILD_TARGET_IMAGE, CMD_END) = range(13)
(LAYERS_REFRESH, GENERATE_RECIPES, GENERATE_PACKAGES, GENERATE_IMAGE, POPULATE_PACKAGEINFO) = range(5)
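# Each value above names a high-level request composed of the fine-grained
# commands in the first tuple; e.g. refresh_layers() queues PARSE_CONFIG
# plus the CFG_FILES_* lookups and passes LAYERS_REFRESH to
# run_next_command(), which pops and issues the queue one command at a time.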
def __init__(self, server, server_addr, client_addr, recipe_model, package_model):
super(HobHandler, self).__init__()
self.build = RunningBuild(sequential=True)
self.recipe_model = recipe_model
self.package_model = package_model
self.commands_async = []
self.generating = False
self.current_phase = None
self.building = False
self.recipe_queue = []
self.package_queue = []
self.server = server
self.error_msg = ""
self.initcmd = None
self.split_model = False
if server_addr and client_addr:
self.split_model = (server_addr != client_addr)
self.reset_server() # reset the server if one was just found
self.server_addr = server_addr
def kick(self):
import xmlrpclib
try:
# kick the whole thing off
if self.split_model:
self.commands_async.append(self.CFG_AVAIL_LAYERS)
else:
self.commands_async.append(self.CFG_PATH_LAYERS)
self.commands_async.append(self.CFG_FILES_DISTRO)
self.commands_async.append(self.CFG_FILES_MACH)
self.commands_async.append(self.CFG_FILES_SDKMACH)
self.commands_async.append(self.FILES_MATCH_CLASS)
self.run_next_command()
return True
except xmlrpclib.Fault as x:
print("XMLRPC Fault getting commandline:\n %s" % x)
return False
def set_busy(self):
if not self.generating:
self.emit("generating-data")
self.generating = True
def clear_busy(self):
if self.generating:
self.emit("data-generated")
self.generating = False
def run_next_command(self, initcmd=None):
if initcmd != None:
self.initcmd = initcmd
if self.commands_async:
self.set_busy()
next_command = self.commands_async.pop(0)
else:
self.clear_busy()
if self.initcmd != None:
self.emit("command-succeeded", self.initcmd)
return
if next_command == self.CFG_AVAIL_LAYERS:
self.server.runCommand(["findCoreBaseFiles", "layers", "conf/layer.conf"])
elif next_command == self.CFG_PATH_LAYERS:
self.server.runCommand(["findConfigFilePath", "bblayers.conf"])
elif next_command == self.CFG_FILES_DISTRO:
self.server.runCommand(["findConfigFiles", "DISTRO"])
elif next_command == self.CFG_FILES_MACH:
self.server.runCommand(["findConfigFiles", "MACHINE"])
elif next_command == self.CFG_FILES_SDKMACH:
self.server.runCommand(["findConfigFiles", "MACHINE-SDK"])
elif next_command == self.FILES_MATCH_CLASS:
self.server.runCommand(["findFilesMatchingInDir", "rootfs_", "classes"])
elif next_command == self.PARSE_CONFIG:
self.server.runCommand(["parseConfigurationFiles", "", ""])
elif next_command == self.PARSE_BBFILES:
self.server.runCommand(["parseFiles"])
elif next_command == self.GENERATE_TGTS:
self.server.runCommand(["generateTargetsTree", "classes/image.bbclass", [], True])
elif next_command == self.GENERATE_PACKAGEINFO:
self.server.runCommand(["triggerEvent", "bb.event.RequestPackageInfo()"])
elif next_command == self.BUILD_TARGET_RECIPES:
self.clear_busy()
self.building = True
self.server.runCommand(["buildTargets", self.recipe_queue, "build"])
self.recipe_queue = []
elif next_command == self.BUILD_TARGET_IMAGE:
self.clear_busy()
self.building = True
targets = ["hob-image"]
self.server.runCommand(["setVariable", "LINGUAS_INSTALL", ""])
self.server.runCommand(["setVariable", "PACKAGE_INSTALL", " ".join(self.package_queue)])
if self.toolchain_build:
pkgs = self.package_queue + [i+'-dev' for i in self.package_queue] + [i+'-dbg' for i in self.package_queue]
self.server.runCommand(["setVariable", "TOOLCHAIN_TARGET_TASK", " ".join(pkgs)])
targets.append("hob-toolchain")
self.server.runCommand(["buildTargets", targets, "build"])
def handle_event(self, event):
if not event:
return
if self.building:
self.current_phase = "building"
self.build.handle_event(event)
if isinstance(event, bb.event.PackageInfo):
self.package_model.populate(event._pkginfolist)
self.run_next_command()
elif isinstance(event, logging.LogRecord):
if event.levelno >= logging.ERROR:
self.error_msg += event.msg + '\n'
elif isinstance(event, bb.event.TargetsTreeGenerated):
self.current_phase = "data generation"
if event._model:
self.recipe_model.populate(event._model)
elif isinstance(event, bb.event.CoreBaseFilesFound):
self.current_phase = "configuration lookup"
paths = event._paths
self.emit('layers-updated', paths)
elif isinstance(event, bb.event.ConfigFilesFound):
self.current_phase = "configuration lookup"
var = event._variable
values = event._values
values.sort()
self.emit("config-updated", var, values)
elif isinstance(event, bb.event.ConfigFilePathFound):
self.current_phase = "configuration lookup"
elif isinstance(event, bb.event.FilesMatchingFound):
self.current_phase = "configuration lookup"
# FIXME: hard coding, should at least be a variable shared between
# here and the caller
if event._pattern == "rootfs_":
formats = []
for match in event._matches:
classname, sep, cls = match.rpartition(".")
fs, sep, format = classname.rpartition("_")
formats.append(format)
formats.sort()
self.emit("package-formats-updated", formats)
elif isinstance(event, bb.command.CommandCompleted):
self.current_phase = None
self.run_next_command()
elif isinstance(event, bb.event.NoProvider):
if event._runtime:
r = "R"
else:
r = ""
if event._dependees:
self.error_msg += " Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)" % (r, event._item, ", ".join(event._dependees), r)
else:
self.error_msg += " Nothing %sPROVIDES '%s'" % (r, event._item)
if event._reasons:
for reason in event._reasons:
self.error_msg += " %s" % reason
self.commands_async = []
self.emit("command-failed", self.error_msg)
self.error_msg = ""
elif isinstance(event, bb.command.CommandFailed):
self.commands_async = []
if self.error_msg:
self.emit("command-failed", self.error_msg)
self.error_msg = ""
elif isinstance(event, (bb.event.ParseStarted,
bb.event.CacheLoadStarted,
bb.event.TreeDataPreparationStarted,
)):
message = {}
message["eventname"] = bb.event.getName(event)
message["current"] = 0
message["total"] = None
message["title"] = "Parsing recipes: "
self.emit("parsing-started", message)
elif isinstance(event, (bb.event.ParseProgress,
bb.event.CacheLoadProgress,
bb.event.TreeDataPreparationProgress)):
message = {}
message["eventname"] = bb.event.getName(event)
message["current"] = event.current
message["total"] = event.total
message["title"] = "Parsing recipes: "
self.emit("parsing", message)
elif isinstance(event, (bb.event.ParseCompleted,
bb.event.CacheLoadCompleted,
bb.event.TreeDataPreparationCompleted)):
message = {}
message["eventname"] = bb.event.getName(event)
message["current"] = event.total
message["total"] = event.total
message["title"] = "Parsing recipes: "
self.emit("parsing-completed", message)
return
def init_cooker(self):
self.server.runCommand(["initCooker"])
def refresh_layers(self, bblayers):
self.server.runCommand(["initCooker"])
self.server.runCommand(["setVariable", "BBLAYERS", " ".join(bblayers)])
self.commands_async.append(self.PARSE_CONFIG)
self.commands_async.append(self.CFG_FILES_DISTRO)
self.commands_async.append(self.CFG_FILES_MACH)
self.commands_async.append(self.CFG_FILES_SDKMACH)
self.commands_async.append(self.FILES_MATCH_CLASS)
self.run_next_command(self.LAYERS_REFRESH)
def set_extra_inherit(self, bbclass):
inherits = self.server.runCommand(["getVariable", "INHERIT"]) or ""
inherits = inherits + " " + bbclass
self.server.runCommand(["setVariable", "INHERIT", inherits])
def set_bblayers(self, bblayers):
self.server.runCommand(["setVariable", "BBLAYERS", " ".join(bblayers)])
def set_machine(self, machine):
self.server.runCommand(["setVariable", "MACHINE", machine])
def set_sdk_machine(self, sdk_machine):
self.server.runCommand(["setVariable", "SDKMACHINE", sdk_machine])
def set_image_fstypes(self, image_fstypes):
self.server.runCommand(["setVariable", "IMAGE_FSTYPES", " ".join(image_fstypes).lstrip(" ")])
def set_distro(self, distro):
self.server.runCommand(["setVariable", "DISTRO", distro])
def set_package_format(self, format):
package_classes = ""
for pkgfmt in format.split():
package_classes += ("package_%s" % pkgfmt + " ")
self.server.runCommand(["setVariable", "PACKAGE_CLASSES", package_classes])
def set_bbthreads(self, threads):
self.server.runCommand(["setVariable", "BB_NUMBER_THREADS", threads])
def set_pmake(self, threads):
pmake = "-j %s" % threads
self.server.runCommand(["setVariable", "PARALLEL_MAKE", pmake])
def set_dl_dir(self, directory):
self.server.runCommand(["setVariable", "DL_DIR", directory])
def set_sstate_dir(self, directory):
self.server.runCommand(["setVariable", "SSTATE_DIR", directory])
def set_sstate_mirror(self, url):
self.server.runCommand(["setVariable", "SSTATE_MIRROR", url])
def set_extra_size(self, image_extra_size):
self.server.runCommand(["setVariable", "IMAGE_ROOTFS_EXTRA_SPACE", str(image_extra_size)])
def set_rootfs_size(self, image_rootfs_size):
self.server.runCommand(["setVariable", "IMAGE_ROOTFS_SIZE", str(image_rootfs_size)])
def set_incompatible_license(self, incompat_license):
self.server.runCommand(["setVariable", "INCOMPATIBLE_LICENSE", incompat_license])
def set_extra_config(self, extra_setting):
for key in extra_setting.keys():
value = extra_setting[key]
self.server.runCommand(["setVariable", key, value])
def request_package_info_async(self):
self.commands_async.append(self.GENERATE_PACKAGEINFO)
self.run_next_command(self.POPULATE_PACKAGEINFO)
def generate_recipes(self):
self.commands_async.append(self.PARSE_CONFIG)
self.commands_async.append(self.GENERATE_TGTS)
self.run_next_command(self.GENERATE_RECIPES)
def generate_packages(self, tgts):
targets = []
targets.extend(tgts)
self.recipe_queue = targets
self.commands_async.append(self.PARSE_CONFIG)
self.commands_async.append(self.PARSE_BBFILES)
self.commands_async.append(self.BUILD_TARGET_RECIPES)
self.run_next_command(self.GENERATE_PACKAGES)
def generate_image(self, tgts, toolchain_build=False):
self.package_queue = tgts
self.toolchain_build = toolchain_build
self.commands_async.append(self.PARSE_CONFIG)
self.commands_async.append(self.PARSE_BBFILES)
self.commands_async.append(self.BUILD_TARGET_IMAGE)
self.run_next_command(self.GENERATE_IMAGE)
def build_failed_async(self):
self.initcmd = None
self.commands_async = []
self.building = False
def cancel_build(self, force=False):
if force:
# Force the cooker to stop as quickly as possible
self.server.runCommand(["stateStop"])
else:
# Wait for tasks to complete before shutting down, this helps
# leave the workdir in a usable state
self.server.runCommand(["stateShutdown"])
def reset_server(self):
self.server.runCommand(["resetCooker"])
def reset_build(self):
self.build.reset()
def get_parameters(self):
# retrieve the parameters from bitbake
params = {}
params["core_base"] = self.server.runCommand(["getVariable", "COREBASE"]) or ""
hob_layer = params["core_base"] + "/meta-hob"
params["layer"] = (self.server.runCommand(["getVariable", "BBLAYERS"]) or "") + " " + hob_layer
params["dldir"] = self.server.runCommand(["getVariable", "DL_DIR"]) or ""
params["machine"] = self.server.runCommand(["getVariable", "MACHINE"]) or ""
params["distro"] = self.server.runCommand(["getVariable", "DISTRO"]) or "defaultsetup"
params["pclass"] = self.server.runCommand(["getVariable", "PACKAGE_CLASSES"]) or ""
params["sstatedir"] = self.server.runCommand(["getVariable", "SSTATE_DIR"]) or ""
params["sstatemirror"] = self.server.runCommand(["getVariable", "SSTATE_MIRROR"]) or ""
num_threads = self.server.runCommand(["getCpuCount"])
if not num_threads:
num_threads = 1
max_threads = 65536
else:
num_threads = int(num_threads)
max_threads = 16 * num_threads
params["max_threads"] = max_threads
bbthread = self.server.runCommand(["getVariable", "BB_NUMBER_THREADS"])
if not bbthread:
bbthread = num_threads
else:
bbthread = int(bbthread)
params["bbthread"] = bbthread
pmake = self.server.runCommand(["getVariable", "PARALLEL_MAKE"])
if not pmake:
pmake = num_threads
elif isinstance(pmake, int):
pass
else:
pmake = int(pmake.lstrip("-j "))
params["pmake"] = pmake
image_addr = self.server.runCommand(["getVariable", "DEPLOY_DIR_IMAGE"]) or ""
if self.server_addr:
image_addr = "http://" + self.server_addr + ":" + image_addr
params["image_addr"] = image_addr
image_extra_size = self.server.runCommand(["getVariable", "IMAGE_ROOTFS_EXTRA_SPACE"])
if not image_extra_size:
image_extra_size = 0
else:
image_extra_size = int(image_extra_size)
params["image_extra_size"] = image_extra_size
image_rootfs_size = self.server.runCommand(["getVariable", "IMAGE_ROOTFS_SIZE"])
if not image_rootfs_size:
image_rootfs_size = 0
else:
image_rootfs_size = int(image_rootfs_size)
params["image_rootfs_size"] = image_rootfs_size
image_overhead_factor = self.server.runCommand(["getVariable", "IMAGE_OVERHEAD_FACTOR"])
if not image_overhead_factor:
image_overhead_factor = 1
else:
image_overhead_factor = float(image_overhead_factor)
params['image_overhead_factor'] = image_overhead_factor
params["incompat_license"] = self.server.runCommand(["getVariable", "INCOMPATIBLE_LICENSE"]) or ""
params["sdk_machine"] = self.server.runCommand(["getVariable", "SDKMACHINE"]) or self.server.runCommand(["getVariable", "SDK_ARCH"]) or ""
#params["image_types"] = self.server.runCommand(["getVariable", "IMAGE_TYPES"]) or ""
params["image_fstypes"] = self.server.runCommand(["getVariable", "IMAGE_FSTYPES"]) or ""
"""
A workaround
"""
params["image_types"] = "jffs2 sum.jffs2 cramfs ext2 ext2.gz ext2.bz2 ext3 ext3.gz ext2.lzma btrfs live squashfs squashfs-lzma ubi tar tar.gz tar.bz2 tar.xz cpio cpio.gz cpio.xz cpio.lzma"
return params
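# A sketch of the dictionary returned above, with hypothetical values:
#
#   {
#       "core_base": "/home/user/poky",
#       "machine": "qemux86",
#       "distro": "defaultsetup",
#       "bbthread": 4, "pmake": 4, "max_threads": 64,
#       "image_extra_size": 0, "image_rootfs_size": 0,
#       "image_overhead_factor": 1.3,
#       ...
#   }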


@@ -1,765 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
#
# PackageListModel
#
class PackageListModel(gtk.TreeStore):
"""
This class defines a gtk.TreeStore subclass which converts the output
of the bb.event.TargetsTreeGenerated event into a gtk.TreeStore whilst also
providing convenience functions to access gtk.TreeModel subclasses which
provide filtered views of the data.
"""
(COL_NAME, COL_VER, COL_REV, COL_RNM, COL_SEC, COL_SUM, COL_RDEP, COL_RPROV, COL_SIZE, COL_BINB, COL_INC) = range(11)
__gsignals__ = {
"packagelist-populated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"package-selection-changed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
}
def __init__(self):
self.contents = None
self.images = None
self.pkgs_size = 0
self.pn_path = {}
self.pkg_path = {}
gtk.TreeStore.__init__ (self,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_BOOLEAN)
"""
Find the model path for the item_name
Returns the path in the model or None
"""
def find_path_for_item(self, item_name):
if item_name not in self.pkg_path.keys():
return None
else:
return self.pkg_path[item_name]
def find_item_for_path(self, item_path):
return self[item_path][self.COL_NAME]
"""
Helper function to determine whether an item matches the given filter
"""
def tree_model_filter(self, model, it, filter):
for key in filter.keys():
if model.get_value(it, key) not in filter[key]:
return False
return True
"""
Create, if required, and return a filtered gtk.TreeModelSort
containing only the items specified by filter
"""
def tree_model(self, filter):
model = self.filter_new()
model.set_visible_func(self.tree_model_filter, filter)
sort = gtk.TreeModelSort(model)
sort.set_sort_column_id(PackageListModel.COL_NAME, gtk.SORT_ASCENDING)
sort.set_default_sort_func(None)
return sort
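# Usage sketch (illustrative): the filter dict maps a column id to the
# list of values allowed to remain visible, e.g. to show only the
# packages that are currently included:
#
#   sorted_view = package_model.tree_model({PackageListModel.COL_INC: [True]})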
def convert_vpath_to_path(self, view_model, view_path):
# view_model is the model sorted
# get the path of the model filtered
filtered_model_path = view_model.convert_path_to_child_path(view_path)
# get the model filtered
filtered_model = view_model.get_model()
# get the path of the original model
path = filtered_model.convert_path_to_child_path(filtered_model_path)
return path
def convert_path_to_vpath(self, view_model, path):
name = self.find_item_for_path(path)
it = view_model.get_iter_first()
while it:
child_it = view_model.iter_children(it)
while child_it:
view_name = view_model.get_value(child_it, self.COL_NAME)
if view_name == name:
view_path = view_model.get_path(child_it)
return view_path
child_it = view_model.iter_next(child_it)
it = view_model.iter_next(it)
return None
"""
The populate() function takes as input the data from a
bb.event.PackageInfo event and populates the package list.
Once the population is done it emits gsignal packagelist-populated
to notify any listeners that the model is ready
"""
def populate(self, pkginfolist):
self.clear()
self.pkgs_size = 0
self.pn_path = {}
self.pkg_path = {}
for pkginfo in pkginfolist:
pn = pkginfo['PN']
pv = pkginfo['PV']
pr = pkginfo['PR']
if pn in self.pn_path.keys():
pniter = self.get_iter(self.pn_path[pn])
else:
pniter = self.append(None)
self.set(pniter, self.COL_NAME, pn + '-' + pv + '-' + pr,
self.COL_INC, False)
self.pn_path[pn] = self.get_path(pniter)
pkg = pkginfo['PKG']
pkgv = pkginfo['PKGV']
pkgr = pkginfo['PKGR']
pkgsize = pkginfo['PKGSIZE_%s' % pkg] if 'PKGSIZE_%s' % pkg in pkginfo.keys() else "0"
pkg_rename = pkginfo['PKG_%s' % pkg] if 'PKG_%s' % pkg in pkginfo.keys() else ""
section = pkginfo['SECTION_%s' % pkg] if 'SECTION_%s' % pkg in pkginfo.keys() else ""
summary = pkginfo['SUMMARY_%s' % pkg] if 'SUMMARY_%s' % pkg in pkginfo.keys() else ""
rdep = pkginfo['RDEPENDS_%s' % pkg] if 'RDEPENDS_%s' % pkg in pkginfo.keys() else ""
rrec = pkginfo['RRECOMMENDS_%s' % pkg] if 'RRECOMMENDS_%s' % pkg in pkginfo.keys() else ""
rprov = pkginfo['RPROVIDES_%s' % pkg] if 'RPROVIDES_%s' % pkg in pkginfo.keys() else ""
if 'ALLOW_EMPTY_%s' % pkg in pkginfo.keys():
allow_empty = pkginfo['ALLOW_EMPTY_%s' % pkg]
elif 'ALLOW_EMPTY' in pkginfo.keys():
allow_empty = pkginfo['ALLOW_EMPTY']
else:
allow_empty = ""
if pkgsize == "0" and not allow_empty:
continue
if len(pkgsize) > 3:
size = '%.1f' % (int(pkgsize)*1.0/1024) + ' MB'
else:
size = pkgsize + ' KB'
it = self.append(pniter)
self.pkg_path[pkg] = self.get_path(it)
self.set(it, self.COL_NAME, pkg, self.COL_VER, pkgv,
self.COL_REV, pkgr, self.COL_RNM, pkg_rename,
self.COL_SEC, section, self.COL_SUM, summary,
self.COL_RDEP, rdep + ' ' + rrec,
self.COL_RPROV, rprov, self.COL_SIZE, size,
self.COL_BINB, "", self.COL_INC, False)
self.emit("packagelist-populated")
"""
Check whether the item at item_path is included or not
"""
def path_included(self, item_path):
return self[item_path][self.COL_INC]
"""
Update the model, send out the notification.
"""
def selection_change_notification(self):
self.emit("package-selection-changed")
"""
Mark a package as selected.
All of its dependencies are marked as selected as well.
The recipe that provides the package is also marked as selected.
If the user explicitly selects a recipe, all of the packages it provides are selected.
"""
def include_item(self, item_path, binb=""):
if self.path_included(item_path):
return
item_name = self[item_path][self.COL_NAME]
item_rdep = self[item_path][self.COL_RDEP]
self[item_path][self.COL_INC] = True
self.selection_change_notification()
it = self.get_iter(item_path)
# If the user explicitly selects a recipe, select all of the packages it provides.
if not self[item_path][self.COL_VER] and binb == "User Selected":
child_it = self.iter_children(it)
while child_it:
child_path = self.get_path(child_it)
child_included = self.path_included(child_path)
if not child_included:
self.include_item(child_path, binb="User Selected")
child_it = self.iter_next(child_it)
return
# The recipe that provides the package is also marked as selected
parent_it = self.iter_parent(it)
if parent_it:
parent_path = self.get_path(parent_it)
self[parent_path][self.COL_INC] = True
item_bin = self[item_path][self.COL_BINB].split(', ')
if binb and not binb in item_bin:
item_bin.append(binb)
self[item_path][self.COL_BINB] = ', '.join(item_bin).lstrip(', ')
if item_rdep:
# Ensure all of the item's deps are included and, where appropriate,
# add this item to their COL_BINB
for dep in item_rdep.split(" "):
if dep.startswith('('):
continue
# If the contents model doesn't already contain dep, add it
dep_path = self.find_path_for_item(dep)
if not dep_path:
continue
dep_included = self.path_included(dep_path)
if dep_included and not dep in item_bin:
# don't set the COL_BINB to this item if the target is an
# item in our own COL_BINB
dep_bin = self[dep_path][self.COL_BINB].split(', ')
if not item_name in dep_bin:
dep_bin.append(item_name)
self[dep_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
elif not dep_included:
self.include_item(dep_path, binb=item_name)
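# Worked example (illustrative, hypothetical package names): including a
# package "a" that RDEPENDS on "b" records who brought what in:
#
#   model.include_item(model.find_path_for_item("a"), binb="User Selected")
#   # -> a: COL_INC = True, COL_BINB = "User Selected"
#   # -> b: COL_INC = True, COL_BINB = "a"
#
# exclude_item() later walks the same COL_BINB links in reverse.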
"""
Mark a package as de-selected.
All other packages that depend on this package are de-selected as well.
If none of the packages provided by a recipe remains selected, the recipe is de-selected.
If the user explicitly de-selects a recipe, all of the packages it provides are de-selected.
"""
def exclude_item(self, item_path):
if not self.path_included(item_path):
return
self[item_path][self.COL_INC] = False
self.selection_change_notification()
item_name = self[item_path][self.COL_NAME]
item_rdep = self[item_path][self.COL_RDEP]
it = self.get_iter(item_path)
# If the user explicitly de-selects a recipe, de-select all of the packages it provides.
if not self[item_path][self.COL_VER]:
child_it = self.iter_children(it)
while child_it:
child_path = self.get_path(child_it)
child_included = self[child_path][self.COL_INC]
if child_included:
self.exclude_item(child_path)
child_it = self.iter_next(child_it)
return
# If none of the packages provided by the recipe remains selected, de-select the recipe itself.
parent_it = self.iter_parent(it)
peer_iter = self.iter_children(parent_it)
enabled = 0
while peer_iter:
peer_path = self.get_path(peer_iter)
if self[peer_path][self.COL_INC]:
enabled = 1
break
peer_iter = self.iter_next(peer_iter)
if not enabled:
parent_path = self.get_path(parent_it)
self[parent_path][self.COL_INC] = False
# All packages that depend on this package are de-selected.
if item_rdep:
for dep in item_rdep.split(" "):
if dep.startswith('('):
continue
dep_path = self.find_path_for_item(dep)
if not dep_path:
continue
dep_bin = self[dep_path][self.COL_BINB].split(', ')
if item_name in dep_bin:
dep_bin.remove(item_name)
self[dep_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
item_bin = self[item_path][self.COL_BINB].split(', ')
if item_bin:
for binb in item_bin:
binb_path = self.find_path_for_item(binb)
if not binb_path:
continue
self.exclude_item(binb_path)
"""
The package model may be incomplete, so set_selected_packages() may be
unable to mark some of the requested packages as included.
Return the list of packages that could not be set.
"""
def set_selected_packages(self, packagelist):
left = []
for pn in packagelist:
if pn in self.pkg_path.keys():
path = self.pkg_path[pn]
self.include_item(item_path=path,
binb="User Selected")
else:
left.append(pn)
return left
def get_selected_packages(self):
packagelist = []
it = self.get_iter_first()
while it:
child_it = self.iter_children(it)
while child_it:
if self.get_value(child_it, self.COL_INC):
name = self.get_value(child_it, self.COL_NAME)
packagelist.append(name)
child_it = self.iter_next(child_it)
it = self.iter_next(it)
return packagelist
"""
Return the total size of the selected packages, in KB.
"""
def get_packages_size(self):
packages_size = 0
it = self.get_iter_first()
while it:
child_it = self.iter_children(it)
while child_it:
if self.get_value(child_it, self.COL_INC):
str_size = self.get_value(child_it, self.COL_SIZE)
if not str_size:
# advance the iterator before skipping, otherwise this loop never terminates
child_it = self.iter_next(child_it)
continue
unit = str_size.split()
if unit[1] == 'MB':
size = float(unit[0])*1024
else:
size = float(unit[0])
packages_size += size
child_it = self.iter_next(child_it)
it = self.iter_next(it)
return "%f" % packages_size
"""
Reset the model by clearing the include flag and brought-in-by field of every entry
"""
def reset(self):
self.pkgs_size = 0
it = self.get_iter_first()
while it:
self.set(it, self.COL_INC, False)
child_it = self.iter_children(it)
while child_it:
self.set(child_it,
self.COL_INC, False,
self.COL_BINB, "")
child_it = self.iter_next(child_it)
it = self.iter_next(it)
self.selection_change_notification()
#
# RecipeListModel
#
class RecipeListModel(gtk.ListStore):
"""
This class defines a gtk.ListStore subclass which converts the output
of the bb.event.TargetsTreeGenerated event into a gtk.ListStore whilst also
providing convenience functions to access gtk.TreeModel subclasses which
provide filtered views of the data.
"""
(COL_NAME, COL_DESC, COL_LIC, COL_GROUP, COL_DEPS, COL_BINB, COL_TYPE, COL_INC, COL_IMG, COL_INSTALL, COL_PN) = range(11)
__dummy_image__ = "--select a base image--"
__gsignals__ = {
"recipelist-populated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"recipe-selection-changed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
}
"""
"""
def __init__(self):
gtk.ListStore.__init__ (self,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_BOOLEAN,
gobject.TYPE_BOOLEAN,
gobject.TYPE_STRING,
gobject.TYPE_STRING)
# initialise here so find_path_for_item() is safe before populate() runs
self.pn_path = {}
"""
Find the model path for the item_name
Returns the path in the model or None
"""
def find_path_for_item(self, item_name):
if self.non_target_name(item_name) or item_name not in self.pn_path.keys():
return None
else:
return self.pn_path[item_name]
def find_item_for_path(self, item_path):
return self[item_path][self.COL_NAME]
"""
Helper method to determine whether name is a target pn
"""
def non_target_name(self, name):
if name and ('-native' in name):
return True
return False
"""
Helper function to determine whether an item matches the given filter
"""
def tree_model_filter(self, model, it, filter):
name = model.get_value(it, self.COL_NAME)
if self.non_target_name(name):
return False
for key in filter.keys():
if model.get_value(it, key) not in filter[key]:
return False
return True
def sort_func(self, model, iter1, iter2):
val1 = model.get_value(iter1, RecipeListModel.COL_NAME)
val2 = model.get_value(iter2, RecipeListModel.COL_NAME)
# a gtk sort function must return negative/zero/positive, not a bool
return cmp(val1, val2)
"""
Create, if required, and return a filtered gtk.TreeModelSort
containing only the items that match the filter
"""
def tree_model(self, filter):
model = self.filter_new()
model.set_visible_func(self.tree_model_filter, filter)
sort = gtk.TreeModelSort(model)
sort.set_default_sort_func(self.sort_func)
return sort
def convert_vpath_to_path(self, view_model, view_path):
filtered_model_path = view_model.convert_path_to_child_path(view_path)
filtered_model = view_model.get_model()
# get the path of the original model
path = filtered_model.convert_path_to_child_path(filtered_model_path)
return path
def convert_path_to_vpath(self, view_model, path):
it = view_model.get_iter_first()
while it:
name = self.find_item_for_path(path)
view_name = view_model.get_value(it, RecipeListModel.COL_NAME)
if view_name == name:
view_path = view_model.get_path(it)
return view_path
it = view_model.iter_next(it)
return None
def map_runtime(self, event_model, runtime, rdep_type, name):
if rdep_type not in ['pkg', 'pn'] or runtime not in ['rdepends', 'rrecs']:
return []  # callers concatenate the result, so never return None
package_depends = event_model["%s-%s" % (runtime, rdep_type)].get(name, [])
pn_depends = []
for package_depend in package_depends:
if 'task-' not in package_depend and package_depend in event_model["packages"].keys():
pn_depends.append(event_model["packages"][package_depend]["pn"])
else:
pn_depends.append(package_depend)
return list(set(pn_depends))
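# Illustrative example (hypothetical names) of the event_model slices
# read above:
#
#   event_model["rdepends-pkg"] = {"busybox": ["libc6", "update-rc.d"]}
#   event_model["packages"]     = {"libc6": {"pn": "eglibc"}}
#
#   map_runtime(event_model, "rdepends", "pkg", "busybox")
#   # -> ["eglibc", "update-rc.d"]  (package names mapped to their PN where
#   #    known; order is not guaranteed because of the set() de-duplication)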
def subpkg_populate(self, event_model, pkg, desc, lic, group, atype, pn):
pn_depends = self.map_runtime(event_model, "rdepends", "pkg", pkg)
pn_depends += self.map_runtime(event_model, "rrecs", "pkg", pkg)
self.set(self.append(), self.COL_NAME, pkg, self.COL_DESC, desc,
self.COL_LIC, lic, self.COL_GROUP, group,
self.COL_DEPS, " ".join(pn_depends), self.COL_BINB, "",
self.COL_TYPE, atype, self.COL_INC, False,
self.COL_IMG, False, self.COL_INSTALL, "", self.COL_PN, pn)
"""
The populate() function takes as input the data from a
bb.event.TargetsTreeGenerated event and populates the RecipeList.
Once the population is done it emits gsignal recipelist-populated
to notify any listeners that the model is ready
"""
def populate(self, event_model):
# First clear the model, in case repopulating
self.clear()
# dummy image for prompt
self.set(self.append(), self.COL_NAME, self.__dummy_image__,
self.COL_DESC, "",
self.COL_LIC, "", self.COL_GROUP, "",
self.COL_DEPS, "", self.COL_BINB, "",
self.COL_TYPE, "image", self.COL_INC, False,
self.COL_IMG, False, self.COL_INSTALL, "", self.COL_PN, self.__dummy_image__)
for item in event_model["pn"]:
name = item
desc = event_model["pn"][item]["description"]
lic = event_model["pn"][item]["license"]
group = event_model["pn"][item]["section"]
install = []
depends = []  # reset per item so a stale value cannot leak from the previous iteration
if ('task-' in name):
if ('lib32-' in name or 'lib64-' in name):
atype = 'mltask'
else:
atype = 'task'
for pkg in event_model["pn"][name]["packages"]:
self.subpkg_populate(event_model, pkg, desc, lic, group, atype, name)
continue
elif ('-image-' in name):
atype = 'image'
depends = event_model["depends"].get(item, [])
rdepends = self.map_runtime(event_model, 'rdepends', 'pn', name)
depends = depends + rdepends
install = event_model["rdepends-pn"].get(item, [])
elif ('meta-' in name):
atype = 'toolchain'
elif (name == 'dummy-image' or name == 'dummy-toolchain'):
atype = 'dummy'
else:
if ('lib32-' in name or 'lib64-' in name):
atype = 'mlrecipe'
else:
atype = 'recipe'
depends = event_model["depends"].get(item, [])
depends += self.map_runtime(event_model, 'rdepends', 'pn', item)
for pkg in event_model["pn"][name]["packages"]:
# look up runtime deps per package; 'pkg' (not the recipe name) is the key here
depends += self.map_runtime(event_model, 'rdepends', 'pkg', pkg)
depends += self.map_runtime(event_model, 'rrecs', 'pkg', pkg)
self.set(self.append(), self.COL_NAME, item, self.COL_DESC, desc,
self.COL_LIC, lic, self.COL_GROUP, group,
self.COL_DEPS, " ".join(depends), self.COL_BINB, "",
self.COL_TYPE, atype, self.COL_INC, False,
self.COL_IMG, False, self.COL_INSTALL, " ".join(install), self.COL_PN, item)
self.pn_path = {}
it = self.get_iter_first()
while it:
pn = self.get_value(it, self.COL_NAME)
path = self.get_path(it)
self.pn_path[pn] = path
it = self.iter_next(it)
self.emit("recipelist-populated")
"""
Update the model, send out the notification.
"""
def selection_change_notification(self):
self.emit("recipe-selection-changed")
def path_included(self, item_path):
return self[item_path][self.COL_INC]
"""
Append an image entry to the model (backing the image combo box)
"""
def image_list_append(self, name, deps, install):
# check whether a certain image is there
if not name or self.find_path_for_item(name):
return
it = self.append()
self.set(it, self.COL_NAME, name, self.COL_DESC, "",
self.COL_LIC, "", self.COL_GROUP, "",
self.COL_DEPS, deps, self.COL_BINB, "",
self.COL_TYPE, "image", self.COL_INC, False,
self.COL_IMG, False, self.COL_INSTALL, install,
self.COL_PN, name)
self.pn_path[name] = self.get_path(it)
"""
Add this item, and any of its dependencies, to the image contents
"""
def include_item(self, item_path, binb="", image_contents=False):
if self.path_included(item_path):
return
item_name = self[item_path][self.COL_NAME]
item_deps = self[item_path][self.COL_DEPS]
self[item_path][self.COL_INC] = True
self.selection_change_notification()
item_bin = self[item_path][self.COL_BINB].split(', ')
if binb and not binb in item_bin:
item_bin.append(binb)
self[item_path][self.COL_BINB] = ', '.join(item_bin).lstrip(', ')
# We want to do some magic with things which are brought in by the
# base image so tag them as so
if image_contents:
self[item_path][self.COL_IMG] = True
if item_deps:
# Ensure all of the item's deps are included and, where appropriate,
# add this item to their COL_BINB
for dep in item_deps.split(" "):
# If the contents model doesn't already contain dep, add it
dep_path = self.find_path_for_item(dep)
if not dep_path:
continue
dep_included = self.path_included(dep_path)
if dep_included and not dep in item_bin:
# don't set the COL_BINB to this item if the target is an
# item in our own COL_BINB
dep_bin = self[dep_path][self.COL_BINB].split(', ')
if not item_name in dep_bin:
dep_bin.append(item_name)
self[dep_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
elif not dep_included:
self.include_item(dep_path, binb=item_name, image_contents=image_contents)
def exclude_item(self, item_path):
if not self.path_included(item_path):
return
self[item_path][self.COL_INC] = False
self.selection_change_notification()
item_name = self[item_path][self.COL_NAME]
item_deps = self[item_path][self.COL_DEPS]
if item_deps:
for dep in item_deps.split(" "):
dep_path = self.find_path_for_item(dep)
if not dep_path:
continue
dep_bin = self[dep_path][self.COL_BINB].split(', ')
if item_name in dep_bin:
dep_bin.remove(item_name)
self[dep_path][self.COL_BINB] = ', '.join(dep_bin).lstrip(', ')
item_bin = self[item_path][self.COL_BINB].split(', ')
if item_bin:
for binb in item_bin:
binb_path = self.find_path_for_item(binb)
if not binb_path:
continue
self.exclude_item(binb_path)
def reset(self):
it = self.get_iter_first()
while it:
self.set(it,
self.COL_INC, False,
self.COL_BINB, "",
self.COL_IMG, False)
it = self.iter_next(it)
self.selection_change_notification()
"""
Return two lists: one of the user-selected recipes, the other of all
selected recipes
"""
def get_selected_recipes(self):
allrecipes = []
userrecipes = []
it = self.get_iter_first()
while it:
if self.get_value(it, self.COL_INC):
name = self.get_value(it, self.COL_PN)
type = self.get_value(it, self.COL_TYPE)
if type != "image":
allrecipes.append(name)
sel = "User Selected" in self.get_value(it, self.COL_BINB)
if sel:
userrecipes.append(name)
it = self.iter_next(it)
return list(set(userrecipes)), list(set(allrecipes))
def set_selected_recipes(self, recipelist):
for pn in recipelist:
if pn in self.pn_path.keys():
path = self.pn_path[pn]
self.include_item(item_path=path,
binb="User Selected")
def get_selected_image(self):
it = self.get_iter_first()
while it:
if self.get_value(it, self.COL_INC):
name = self.get_value(it, self.COL_PN)
type = self.get_value(it, self.COL_TYPE)
if type == "image":
sel = "User Selected" in self.get_value(it, self.COL_BINB)
if sel:
return name
it = self.iter_next(it)
return None
def set_selected_image(self, img):
if img == None:
return
path = self.find_path_for_item(img)
self.include_item(item_path=path,
binb="User Selected",
image_contents=True)


@@ -1,87 +0,0 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import hwc
#
# HobPage: the super class for all Hob-related pages
#
class HobPage (gtk.VBox):
def __init__(self, builder, title = None):
super(HobPage, self).__init__(False, 0)
self.builder = builder
self.builder_width, self.builder_height = self.builder.size_request()
if title == None:
self.title = "HOB -- Image Creator"
else:
self.title = title
self.box_group_area = gtk.VBox(False, 15)
self.box_group_area.set_size_request(self.builder_width - 73 - 73, self.builder_height - 88 - 15 - 15)
self.group_align = gtk.Alignment(xalign = 0, yalign=0.5, xscale=1, yscale=1)
self.group_align.set_padding(15, 15, 73, 73)
self.group_align.add(self.box_group_area)
self.box_group_area.set_homogeneous(False)
def add_onto_top_bar(self, widget = None, padding = 0):
# the top button occupies 1/7 of the page height
# setup an event box
eventbox = gtk.EventBox()
style = eventbox.get_style().copy()
style.bg[gtk.STATE_NORMAL] = eventbox.get_colormap().alloc_color(HobColors.LIGHT_GRAY, False, False)
eventbox.set_style(style)
eventbox.set_size_request(-1, 88)
hbox = gtk.HBox()
label = gtk.Label()
label.set_markup("<span font_desc=\'14\'>%s</span>" % self.title)
hbox.pack_start(label, expand=False, fill=False, padding=20)
if widget != None:
# add the widget in the event box
hbox.pack_end(widget, expand=False, fill=False, padding=padding)
eventbox.add(hbox)
return eventbox
def span_tag(self, px="12px", weight="normal", foreground="#1c1c1c"):
span_tag = "weight=\'%s\' foreground=\'%s\' font_desc=\'%s\'" % (weight, foreground, px)
return span_tag
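# Usage sketch (illustrative):
#
#   label.set_markup("<span %s>Details</span>" % self.span_tag("14px", "bold"))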
def append_toolbar_button(self, toolbar, buttonname, icon_display, icon_hover, tip, cb):
# Create a button and append it to the toolbar according to the button name
icon = gtk.Image()
pix_buffer = gtk.gdk.pixbuf_new_from_file(icon_display)
icon.set_from_pixbuf(pix_buffer)
tip_text = tip
button = toolbar.append_element(gtk.TOOLBAR_CHILD_RADIOBUTTON, None,
buttonname, tip_text, "Private text", icon,
cb, None)
return toolbar, button


@@ -1,805 +0,0 @@
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011-2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import gobject
import os
import os.path
from bb.ui.crumbs.hobcolor import HobColors
class hwc:
MAIN_WIN_WIDTH = 1024
MAIN_WIN_HEIGHT = 700
class hic:
HOB_ICON_BASE_DIR = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), ("ui/icons/"))
ICON_RCIPE_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('recipe/recipe_display.png'))
ICON_RCIPE_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('recipe/recipe_hover.png'))
ICON_PACKAGES_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('packages/packages_display.png'))
ICON_PACKAGES_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('packages/packages_hover.png'))
ICON_LAYERS_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('layers/layers_display.png'))
ICON_LAYERS_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('layers/layers_hover.png'))
ICON_TEMPLATES_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('templates/templates_display.png'))
ICON_TEMPLATES_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('templates/templates_hover.png'))
ICON_IMAGES_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('images/images_display.png'))
ICON_IMAGES_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('images/images_hover.png'))
ICON_SETTINGS_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('settings/settings_display.png'))
ICON_SETTINGS_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('settings/settings_hover.png'))
ICON_INFO_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('info/info_display.png'))
ICON_INFO_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('info/info_hover.png'))
ICON_INDI_CONFIRM_FILE = os.path.join(HOB_ICON_BASE_DIR, ('indicators/confirmation.png'))
ICON_INDI_ERROR_FILE = os.path.join(HOB_ICON_BASE_DIR, ('indicators/error.png'))
class HobWidget:
@classmethod
def resize_widget(cls, screen, widget, widget_width, widget_height):
screen_width, screen_height = screen.get_size_request()
ratio_height = screen_width * hwc.MAIN_WIN_HEIGHT/hwc.MAIN_WIN_WIDTH
if ratio_height < screen_height:
screen_height = ratio_height
widget_width = widget_width * screen_width/hwc.MAIN_WIN_WIDTH
widget_height = widget_height * screen_height/hwc.MAIN_WIN_HEIGHT
widget.set_size_request(widget_width, widget_height)
@classmethod
def _toggle_cb(cls, cell, path, model, column):
it = model.get_iter(path)
val = model.get_value(it, column)
val = not val
model.set(it, column, val)
@classmethod
def _pkgfmt_up_clicked_cb(cls, button, tree_selection):
(model, it) = tree_selection.get_selected()
if not it:
return
path = model.get_path(it)
if path[0] <= 0:
return
pre_it = model.get_iter_first()
if not pre_it:
return
else:
while model.iter_next(pre_it) :
if model.get_value(model.iter_next(pre_it), 1) != model.get_value(it, 1):
pre_it = model.iter_next(pre_it)
else:
break
cur_index = model.get_value(it, 0)
pre_index = cur_index
if pre_it:
model.set(pre_it, 0, pre_index)
cur_index = cur_index - 1
model.set(it, 0, cur_index)
@classmethod
def _pkgfmt_down_clicked_cb(cls, button, tree_selection):
(model, it) = tree_selection.get_selected()
if not it:
return
next_it = model.iter_next(it)
if not next_it:
return
cur_index = model.get_value(it, 0)
next_index = cur_index
model.set(next_it, 0, next_index)
cur_index = cur_index + 1
model.set(it, 0, cur_index)
@classmethod
def _tree_selection_changed_cb(cls, tree_selection, button1, button2):
(model, it) = tree_selection.get_selected()
inc = model.get_value(it, 2)
if inc:
button1.set_sensitive(True)
button2.set_sensitive(True)
else:
button1.set_sensitive(False)
button2.set_sensitive(False)
@classmethod
def _sort_func(cls, model, iter1, iter2, data):
val1 = model.get_value(iter1, 0)
val2 = model.get_value(iter2, 0)
inc1 = model.get_value(iter1, 2)
inc2 = model.get_value(iter2, 2)
if inc1 != inc2:
return inc2 - inc1
else:
return val1 - val2
@classmethod
def gen_pkgfmt_widget(cls, curr_package_format, all_package_format, tooltip=""):
pkgfmt_hbox = gtk.HBox(False, 15)
pkgfmt_store = gtk.ListStore(int, str, gobject.TYPE_BOOLEAN)
for format in curr_package_format.split():
pkgfmt_store.set(pkgfmt_store.append(), 1, format, 2, True)
for format in all_package_format:
if format not in curr_package_format:
pkgfmt_store.set(pkgfmt_store.append(), 1, format, 2, False)
pkgfmt_tree = gtk.TreeView(pkgfmt_store)
pkgfmt_tree.set_headers_clickable(True)
pkgfmt_tree.set_headers_visible(False)
tree_selection = pkgfmt_tree.get_selection()
tree_selection.set_mode(gtk.SELECTION_SINGLE)
col = gtk.TreeViewColumn('NO')
col.set_sort_column_id(0)
col.set_sort_order(gtk.SORT_ASCENDING)
col.set_clickable(False)
col1 = gtk.TreeViewColumn('TYPE')
col1.set_min_width(130)
col1.set_max_width(140)
col2 = gtk.TreeViewColumn('INCLUDED')
col2.set_min_width(60)
col2.set_max_width(70)
pkgfmt_tree.append_column(col1)
pkgfmt_tree.append_column(col2)
cell = gtk.CellRendererText()
cell1 = gtk.CellRendererText()
cell1.set_property('width-chars', 10)
cell2 = gtk.CellRendererToggle()
cell2.set_property('activatable', True)
cell2.connect("toggled", cls._toggle_cb, pkgfmt_store, 2)
col.pack_start(cell, True)
col1.pack_start(cell1, True)
col2.pack_end(cell2, True)
col.set_attributes(cell, text=0)
col1.set_attributes(cell1, text=1)
col2.set_attributes(cell2, active=2)
pkgfmt_store.set_sort_func(0, cls._sort_func, None)
pkgfmt_store.set_sort_column_id(0, gtk.SORT_ASCENDING)
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
scroll.set_shadow_type(gtk.SHADOW_IN)
scroll.add(pkgfmt_tree)
scroll.set_size_request(200,60)
pkgfmt_hbox.pack_start(scroll, False, False, 0)
vbox = gtk.VBox(False, 5)
pkgfmt_hbox.pack_start(vbox, False, False, 15)
up = gtk.Button()
image = gtk.Image()
image.set_from_stock(gtk.STOCK_GO_UP, gtk.ICON_SIZE_MENU)
up.set_image(image)
up.set_size_request(50,30)
up.connect("clicked", cls._pkgfmt_up_clicked_cb, tree_selection)
vbox.pack_start(up, False, False, 5)
down = gtk.Button()
image = gtk.Image()
image.set_from_stock(gtk.STOCK_GO_DOWN, gtk.ICON_SIZE_MENU)
down.set_image(image)
down.set_size_request(50,30)
down.connect("clicked", cls._pkgfmt_down_clicked_cb, tree_selection)
vbox.pack_start(down, False, False, 5)
tree_selection.connect("changed", cls._tree_selection_changed_cb, up, down)
image = gtk.Image()
image.set_from_stock(gtk.STOCK_INFO, gtk.ICON_SIZE_BUTTON)
image.set_tooltip_text(tooltip)
pkgfmt_hbox.pack_start(image, expand=False, fill=False)
pkgfmt_hbox.show_all()
return pkgfmt_hbox, pkgfmt_store
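# Usage sketch (illustrative): the currently configured formats come first
# and are checked, the remaining known formats follow unchecked:
#
#   hbox, store = HobWidget.gen_pkgfmt_widget("rpm deb", ["rpm", "deb", "ipk"],
#                                             tooltip="Select package formats")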
@classmethod
def gen_combo_widget(cls, curr_item, all_item, tooltip=""):
hbox = gtk.HBox(False, 10)
combo = gtk.combo_box_new_text()
hbox.pack_start(combo, expand=False, fill=False)
index = 0
for item in all_item or []:
combo.append_text(item)
if item == curr_item:
combo.set_active(index)
index += 1
image = gtk.Image()
image.show()
image.set_from_stock(gtk.STOCK_INFO, gtk.ICON_SIZE_BUTTON)
image.set_tooltip_text(tooltip)
hbox.pack_start(image, expand=False, fill=False)
hbox.show_all()
return hbox, combo
@classmethod
def gen_label_widget(cls, content):
label = gtk.Label()
label.set_alignment(0, 0)
label.set_markup(content)
label.show()
return label
@classmethod
def _select_path_cb(cls, action, parent, entry):
dialog = gtk.FileChooserDialog("", parent,
gtk.FILE_CHOOSER_ACTION_SELECT_FOLDER,
(gtk.STOCK_OK, gtk.RESPONSE_YES,
gtk.STOCK_CANCEL, gtk.RESPONSE_NO))
response = dialog.run()
if response == gtk.RESPONSE_YES:
path = dialog.get_filename()
entry.set_text(path)
dialog.destroy()
@classmethod
def gen_entry_widget(cls, split_model, content, parent, tooltip=""):
hbox = gtk.HBox(False, 10)
entry = gtk.Entry()
entry.set_text(content)
if split_model:
hbox.pack_start(entry, expand=True, fill=True)
else:
table = gtk.Table(1, 10, True)
hbox.pack_start(table, expand=True, fill=True)
table.attach(entry, 0, 9, 0, 1)
image = gtk.Image()
image.set_from_stock(gtk.STOCK_OPEN,gtk.ICON_SIZE_BUTTON)
open_button = gtk.Button()
open_button.set_image(image)
open_button.connect("clicked", cls._select_path_cb, parent, entry)
table.attach(open_button, 9, 10, 0, 1)
image = gtk.Image()
image.set_from_stock(gtk.STOCK_INFO, gtk.ICON_SIZE_BUTTON)
image.set_tooltip_text(tooltip)
hbox.pack_start(image, expand=False, fill=False)
hbox.show_all()
return hbox, entry
@classmethod
def gen_spinner_widget(cls, content, lower, upper, tooltip=""):
hbox = gtk.HBox(False, 10)
adjust = gtk.Adjustment(value=content, lower=lower, upper=upper, step_incr=1)
spinner = gtk.SpinButton(adjustment=adjust, climb_rate=1, digits=0)
spinner.set_value(content)
hbox.pack_start(spinner, expand=False, fill=False)
image = gtk.Image()
image.set_from_stock(gtk.STOCK_INFO, gtk.ICON_SIZE_BUTTON)
image.set_tooltip_text(tooltip)
hbox.pack_start(image, expand=False, fill=False)
hbox.show_all()
return hbox, spinner
@classmethod
def conf_error(cls, parent, lbl):
dialog = CrumbsDialog(parent, lbl)
dialog.add_button(gtk.STOCK_OK, gtk.RESPONSE_OK)
response = dialog.run()
dialog.destroy()
@classmethod
def _add_layer_cb(cls, action, layer_store, parent):
dialog = gtk.FileChooserDialog("Add new layer", parent,
gtk.FILE_CHOOSER_ACTION_SELECT_FOLDER,
(gtk.STOCK_OK, gtk.RESPONSE_YES,
gtk.STOCK_CANCEL, gtk.RESPONSE_NO))
label = gtk.Label("Select the layer you wish to add")
label.show()
dialog.set_extra_widget(label)
response = dialog.run()
path = dialog.get_filename()
dialog.destroy()
lbl = "<b>Error</b>\nUnable to load layer <i>%s</i> because " % path
if response == gtk.RESPONSE_YES:
import os
import os.path
layers = []
it = layer_store.get_iter_first()
while it:
layers.append(layer_store.get_value(it, 0))
it = layer_store.iter_next(it)
if not path:
lbl += "it is an invalid path."
elif not os.path.exists(path+"/conf/layer.conf"):
lbl += "there is no layer.conf inside the directory."
elif path in layers:
lbl += "it is already in loaded layers."
else:
layer_store.append([path])
return
cls.conf_error(parent, lbl)
@classmethod
def _del_layer_cb(cls, action, tree_selection, layer_store):
model, iter = tree_selection.get_selected()
if iter:
layer_store.remove(iter)
@classmethod
def _toggle_layer_cb(cls, cell, path, layer_store):
name = layer_store[path][0]
toggle = not layer_store[path][1]
layer_store[path][1] = toggle
@classmethod
def gen_layer_widget(cls, split_model, layers, layers_avail, window, tooltip=""):
hbox = gtk.HBox(False, 10)
layer_tv = gtk.TreeView()
layer_tv.set_rules_hint(True)
layer_tv.set_headers_visible(False)
tree_selection = layer_tv.get_selection()
tree_selection.set_mode(gtk.SELECTION_SINGLE)
col0= gtk.TreeViewColumn('Path')
cell0 = gtk.CellRendererText()
cell0.set_padding(5,2)
col0.pack_start(cell0, True)
col0.set_attributes(cell0, text=0)
layer_tv.append_column(col0)
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
scroll.set_shadow_type(gtk.SHADOW_IN)
scroll.add(layer_tv)
table_layer = gtk.Table(2, 10, False)
hbox.pack_start(table_layer, expand=True, fill=True)
if split_model:
table_layer.attach(scroll, 0, 10, 0, 2)
layer_store = gtk.ListStore(gobject.TYPE_STRING, gobject.TYPE_BOOLEAN)
for layer in layers:
layer_store.set(layer_store.append(), 0, layer, 1, True)
for layer in layers_avail:
if layer not in layers:
layer_store.set(layer_store.append(), 0, layer, 1, False)
col1 = gtk.TreeViewColumn('Included')
layer_tv.append_column(col1)
cell1 = gtk.CellRendererToggle()
cell1.connect("toggled", cls._toggle_layer_cb, layer_store)
col1.pack_start(cell1, True)
col1.set_attributes(cell1, active=1)
else:
table_layer.attach(scroll, 0, 10, 0, 1)
layer_store = gtk.ListStore(gobject.TYPE_STRING)
for layer in layers:
layer_store.set(layer_store.append(), 0, layer)
image = gtk.Image()
image.set_from_stock(gtk.STOCK_ADD,gtk.ICON_SIZE_MENU)
add_button = gtk.Button()
add_button.set_image(image)
add_button.connect("clicked", cls._add_layer_cb, layer_store, window)
table_layer.attach(add_button, 0, 5, 1, 2, gtk.EXPAND | gtk.FILL, 0, 0, 0)
image = gtk.Image()
image.set_from_stock(gtk.STOCK_REMOVE,gtk.ICON_SIZE_MENU)
del_button = gtk.Button()
del_button.set_image(image)
del_button.connect("clicked", cls._del_layer_cb, tree_selection, layer_store)
table_layer.attach(del_button, 5, 10, 1, 2, gtk.EXPAND | gtk.FILL, 0, 0, 0)
layer_tv.set_model(layer_store)
hbox.show_all()
return hbox, layer_store
@classmethod
def _toggle_single_cb(cls, cell, select_path, treeview, toggle_column):
model = treeview.get_model()
if not model:
return
iter = model.get_iter_first()
while iter:
path = model.get_path(iter)
model[path][toggle_column] = False
iter = model.iter_next(iter)
model[select_path][toggle_column] = True
@classmethod
def gen_imgtv_widget(cls, col0_width, col1_width):
vbox = gtk.VBox(False, 10)
imgsel_tv = gtk.TreeView()
imgsel_tv.set_rules_hint(True)
imgsel_tv.set_headers_visible(False)
tree_selection = imgsel_tv.get_selection()
tree_selection.set_mode(gtk.SELECTION_SINGLE)
col0= gtk.TreeViewColumn('Image name')
cell0 = gtk.CellRendererText()
cell0.set_padding(5,2)
col0.pack_start(cell0, True)
col0.set_attributes(cell0, text=0)
col0.set_max_width(col0_width)
col0.set_min_width(col0_width)
imgsel_tv.append_column(col0)
col1= gtk.TreeViewColumn('Select')
cell1 = gtk.CellRendererToggle()
cell1.set_padding(5,2)
cell1.connect("toggled", cls._toggle_single_cb, imgsel_tv, 1)
col1.pack_start(cell1, True)
col1.set_attributes(cell1, active=1)
col1.set_max_width(col1_width)
col1.set_min_width(col1_width)
imgsel_tv.append_column(col1)
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
scroll.set_shadow_type(gtk.SHADOW_IN)
scroll.add(imgsel_tv)
vbox.pack_start(scroll, expand=True, fill=True)
return vbox, imgsel_tv
@classmethod
def gen_images_widget(cls, col0_width, col1_width, col2_width):
vbox = gtk.VBox(False, 10)
imgsel_tv = gtk.TreeView()
imgsel_tv.set_rules_hint(True)
imgsel_tv.set_headers_visible(False)
tree_selection = imgsel_tv.get_selection()
tree_selection.set_mode(gtk.SELECTION_SINGLE)
col0= gtk.TreeViewColumn('Image name')
cell0 = gtk.CellRendererText()
cell0.set_padding(5,2)
col0.pack_start(cell0, True)
col0.set_attributes(cell0, text=0)
col0.set_max_width(col0_width)
col0.set_min_width(col0_width)
imgsel_tv.append_column(col0)
col1= gtk.TreeViewColumn('Image size')
cell1 = gtk.CellRendererText()
cell1.set_padding(5,2)
col1.pack_start(cell1, True)
col1.set_attributes(cell1, text=1)
col1.set_max_width(col1_width)
col1.set_min_width(col1_width)
imgsel_tv.append_column(col1)
col2= gtk.TreeViewColumn('Select')
cell2 = gtk.CellRendererToggle()
cell2.set_padding(5,2)
cell2.connect("toggled", cls._toggle_single_cb, imgsel_tv, 2)
col2.pack_start(cell2, True)
col2.set_attributes(cell2, active=2)
col2.set_max_width(col2_width)
col2.set_min_width(col2_width)
imgsel_tv.append_column(col2)
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
scroll.set_shadow_type(gtk.SHADOW_IN)
scroll.add(imgsel_tv)
vbox.pack_start(scroll, expand=True, fill=True)
return vbox, imgsel_tv
@classmethod
def _on_add_item_clicked(cls, button, model):
new_item = ["##KEY##", "##VALUE##"]
iter = model.append()
model.set (iter,
0, new_item[0],
1, new_item[1],
)
@classmethod
def _on_remove_item_clicked(cls, button, treeview):
selection = treeview.get_selection()
model, iter = selection.get_selected()
if iter:
path = model.get_path(iter)[0]
model.remove(iter)
@classmethod
def _on_cell_edited(cls, cell, path_string, new_text, model):
it = model.get_iter_from_string(path_string)
column = cell.get_data("column")
model.set(it, column, new_text)
@classmethod
def gen_editable_settings(cls, setting, tooltip=""):
setting_hbox = gtk.HBox(False, 10)
vbox = gtk.VBox(False, 10)
setting_hbox.pack_start(vbox, expand=True, fill=True)
setting_store = gtk.ListStore(gobject.TYPE_STRING, gobject.TYPE_STRING)
for key in setting.keys():
setting_store.set(setting_store.append(), 0, key, 1, setting[key])
setting_tree = gtk.TreeView(setting_store)
setting_tree.set_headers_visible(True)
setting_tree.set_size_request(300, 100)
col = gtk.TreeViewColumn('Key')
col.set_min_width(100)
col.set_max_width(150)
col.set_resizable(True)
col1 = gtk.TreeViewColumn('Value')
col1.set_min_width(100)
col1.set_max_width(150)
col1.set_resizable(True)
setting_tree.append_column(col)
setting_tree.append_column(col1)
cell = gtk.CellRendererText()
cell.set_property('width-chars', 10)
cell.set_property('editable', True)
cell.set_data("column", 0)
cell.connect("edited", cls._on_cell_edited, setting_store)
cell1 = gtk.CellRendererText()
cell1.set_property('width-chars', 10)
cell1.set_property('editable', True)
cell1.set_data("column", 1)
cell1.connect("edited", cls._on_cell_edited, setting_store)
col.pack_start(cell, True)
col1.pack_end(cell1, True)
col.set_attributes(cell, text=0)
col1.set_attributes(cell1, text=1)
scroll = gtk.ScrolledWindow()
scroll.set_shadow_type(gtk.SHADOW_IN)
scroll.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
scroll.add(setting_tree)
vbox.pack_start(scroll, expand=True, fill=True)
# some buttons
hbox = gtk.HBox(True, 4)
vbox.pack_start(hbox, False, False)
button = gtk.Button(stock=gtk.STOCK_ADD)
button.connect("clicked", cls._on_add_item_clicked, setting_store)
hbox.pack_start(button)
button = gtk.Button(stock=gtk.STOCK_REMOVE)
button.connect("clicked", cls._on_remove_item_clicked, setting_tree)
hbox.pack_start(button)
image = gtk.Image()
image.set_from_stock(gtk.STOCK_INFO, gtk.ICON_SIZE_BUTTON)
image.set_tooltip_text(tooltip)
setting_hbox.pack_start(image, expand=False, fill=False)
return setting_hbox, setting_store
class HobViewTable (gtk.VBox):
"""
A VBox containing the table used by the different recipe views and the package view
"""
def __init__(self, columns, reset_clicked_cb=None, toggled_cb=None):
gtk.VBox.__init__(self, False, 6)
self.table_tree = gtk.TreeView()
self.table_tree.set_headers_visible(True)
self.table_tree.set_headers_clickable(True)
self.table_tree.set_enable_search(True)
self.table_tree.set_search_column(0)
self.table_tree.get_selection().set_mode(gtk.SELECTION_SINGLE)
for i in range(len(columns)):
col = gtk.TreeViewColumn(columns[i]['col_name'])
col.set_clickable(True)
col.set_resizable(True)
col.set_sort_column_id(columns[i]['col_id'])
col.set_min_width(columns[i]['col_min'])
col.set_max_width(columns[i]['col_max'])
self.table_tree.append_column(col)
if columns[i]['col_style'] == 'toggle':
cell = gtk.CellRendererToggle()
cell.set_property('activatable', True)
cell.connect("toggled", toggled_cb, self.table_tree)
col.pack_end(cell, True)
col.set_attributes(cell, active=columns[i]['col_id'])
elif columns[i]['col_style'] == 'text':
cell = gtk.CellRendererText()
col.pack_start(cell, True)
col.set_attributes(cell, text=columns[i]['col_id'])
scroll = gtk.ScrolledWindow()
scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
scroll.set_shadow_type(gtk.SHADOW_IN)
scroll.add(self.table_tree)
self.pack_start(scroll, True, True, 0)
hbox = gtk.HBox(False, 5)
button = gtk.Button("Reset")
button.connect('clicked', reset_clicked_cb)
hbox.pack_end(button, False, False, 0)
self.pack_start(hbox, False, False, 0)
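# Illustrative columns description expected by the constructor above
# (hypothetical values); each dict supplies the keys referenced in the loop:
#
#   columns = [
#       {'col_name': 'Package', 'col_id': 0,  'col_style': 'text',
#        'col_min': 100, 'col_max': 400},
#       {'col_name': 'Included', 'col_id': 10, 'col_style': 'toggle',
#        'col_min': 50, 'col_max': 50},
#   ]
#   table = HobViewTable(columns, reset_clicked_cb=on_reset, toggled_cb=on_toggled)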
class HobViewBar (gtk.EventBox):
"""
An EventBox with a gray background, associated with a notebook; its
toolbar simulates the notebook's tabs.
"""
def __init__(self, notebook):
if not notebook:
return
self.notebook = notebook
# setup an event box
gtk.EventBox.__init__(self)
self.set_border_width(2)
style = self.get_style().copy()
style.bg[gtk.STATE_NORMAL] = self.get_colormap().alloc_color (HobColors.GRAY, False, False)
self.set_style(style)
hbox = gtk.HBox()
self.add(hbox)
# setup a tool bar in the event box
self.toolbar = gtk.Toolbar()
self.toolbar.set_orientation(gtk.ORIENTATION_HORIZONTAL)
self.toolbar.set_style(gtk.TOOLBAR_TEXT)
self.toolbar.set_border_width(5)
self.toolbuttons = []
for index in range(self.notebook.get_n_pages()):
child = self.notebook.get_nth_page(index)
label = self.notebook.get_tab_label_text(child)
tip_text = 'switch to ' + label + ' page'
toolbutton = self.toolbar.append_element(gtk.TOOLBAR_CHILD_RADIOBUTTON, None,
label, tip_text, "Private text", None,
self.toolbutton_cb, index)
toolbutton.set_size_request(200, 100)
self.toolbuttons.append(toolbutton)
# set the default current page
self.modify_toolbuttons_bg(0)
self.notebook.set_current_page(0)
self.toolbar.append_space()
# add the tool bar into the event box
hbox.pack_start(self.toolbar, expand=False, fill=False)
self.search = gtk.Entry()
self.align = gtk.Alignment(xalign=0.5, yalign=0.5)
self.align.add(self.search)
hbox.pack_end(self.align, expand=False, fill=False)
self.label = gtk.Label(" Search: ")
self.label.set_alignment(0.5, 0.5)
hbox.pack_end(self.label, expand=False, fill=False)
    def toolbutton_cb(self, widget, index):
        if index >= self.notebook.get_n_pages():
            return
        self.notebook.set_current_page(index)
        self.modify_toolbuttons_bg(index)

    def modify_toolbuttons_bg(self, index):
        if index >= len(self.toolbuttons):
            return
        for i in range(0, len(self.toolbuttons)):
            toolbutton = self.toolbuttons[i]
            if i == index:
                self.modify_toolbutton_bg(toolbutton, True)
            else:
                self.modify_toolbutton_bg(toolbutton)

    def modify_toolbutton_bg(self, toolbutton, active=False):
        if active:
            toolbutton.modify_bg(gtk.STATE_NORMAL, gtk.gdk.Color(HobColors.WHITE))
            toolbutton.modify_bg(gtk.STATE_ACTIVE, gtk.gdk.Color(HobColors.WHITE))
            toolbutton.modify_bg(gtk.STATE_SELECTED, gtk.gdk.Color(HobColors.WHITE))
            toolbutton.modify_bg(gtk.STATE_PRELIGHT, gtk.gdk.Color(HobColors.WHITE))
        else:
            toolbutton.modify_bg(gtk.STATE_NORMAL, gtk.gdk.Color(HobColors.GRAY))
            toolbutton.modify_bg(gtk.STATE_ACTIVE, gtk.gdk.Color(HobColors.GRAY))
            toolbutton.modify_bg(gtk.STATE_SELECTED, gtk.gdk.Color(HobColors.GRAY))
            toolbutton.modify_bg(gtk.STATE_PRELIGHT, gtk.gdk.Color(HobColors.GRAY))
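
# A minimal usage sketch (hypothetical, not part of this file): HobViewBar
# reads the pages of an existing gtk.Notebook, so the notebook must be
# populated first; hiding the native tabs lets the bar stand in for them:
#
#   notebook = gtk.Notebook()
#   notebook.append_page(gtk.VBox(), gtk.Label("Image configuration"))
#   notebook.set_show_tabs(False)      # the bar's radio buttons act as tabs
#   bar = HobViewBar(notebook)
#   vbox.pack_start(bar, expand=False, fill=False)
#   vbox.pack_start(notebook, expand=True, fill=True)
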
class HobXpmLabelButtonBox(gtk.EventBox):
    """
    label: the name shown on the button box
    description: a short description shown under the label
    """
    def __init__(self, display_file="", hover_file="", label="", description=""):
        gtk.EventBox.__init__(self)

        self._base_state_flags = gtk.STATE_NORMAL
        # set_events() expects event *masks*, not event types; request the
        # masks matching the handlers connected below (gtk.EventBox already
        # requests several of these itself on realize)
        self.set_events(gtk.gdk.EXPOSURE_MASK | gtk.gdk.POINTER_MOTION_MASK |
                        gtk.gdk.BUTTON_PRESS_MASK | gtk.gdk.ENTER_NOTIFY_MASK |
                        gtk.gdk.LEAVE_NOTIFY_MASK)
        self.connect("expose-event", self.cb)
        self.connect("enter-notify-event", self.pointer_enter_cb)
        self.connect("leave-notify-event", self.pointer_leave_cb)

        # guard against the empty-string defaults, which would make
        # pixbuf_new_from_file() raise
        self.icon_hover = gtk.Image()
        self.icon_hover.set_name("icon_image")
        if isinstance(hover_file, str) and hover_file:
            pixbuf = gtk.gdk.pixbuf_new_from_file(hover_file)
            self.icon_hover.set_from_pixbuf(pixbuf)

        self.icon_display = gtk.Image()
        self.icon_display.set_name("icon_image")
        if isinstance(display_file, str) and display_file:
            pixbuf = gtk.gdk.pixbuf_new_from_file(display_file)
            self.icon_display.set_from_pixbuf(pixbuf)

        self.tb = gtk.Table(2, 10, True)
        self.tb.set_row_spacing(1, 0)
        self.tb.set_col_spacing(1, 0)
        self.add(self.tb)
        self.tb.attach(self.icon_display, 0, 2, 0, 2, 0, 0)
        self.tb.attach(self.icon_hover, 0, 2, 0, 2, 0, 0)

        lbl = gtk.Label()
        lbl.set_alignment(0.0, 0.5)
        lbl.set_markup("<span foreground='#1C1C1C' font_desc='18px'>%s</span>" % label)
        self.tb.attach(lbl, 2, 10, 0, 1)

        lbl = gtk.Label()
        lbl.set_alignment(0.0, 0.5)
        lbl.set_markup("<span foreground='#1C1C1C' font_desc='14px'>%s</span>" % description)
        self.tb.attach(lbl, 2, 10, 1, 2)

    def pointer_enter_cb(self, *args):
        self.set_state(gtk.STATE_PRELIGHT)
        self._base_state_flags = gtk.STATE_PRELIGHT
        self.icon_hover.show()
        self.icon_display.hide()

    def pointer_leave_cb(self, *args):
        self.set_state(gtk.STATE_NORMAL)
        self._base_state_flags = gtk.STATE_NORMAL
        self.icon_display.show()
        self.icon_hover.hide()

    def cb(self, w, e):
        """ Hide items - first time """
        pass
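
# A minimal usage sketch (hypothetical): since HobXpmLabelButtonBox is an
# EventBox rather than a gtk.Button, callers connect to
# "button-release-event" instead of "clicked", as ImageConfigurationPage
# does below with its "Layers" button:
#
#   button = HobXpmLabelButtonBox(display_file, hover_file,
#                                 "Layers", "Add support for machines")
#   button.connect("button-release-event", release_cb)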


@@ -1,358 +0,0 @@
#!/usr/bin/env python
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2012 Intel Corporation
#
# Authored by Dongxiao Xu <dongxiao.xu@intel.com>
# Authored by Shane Wang <shane.wang@intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
import glib
from bb.ui.crumbs.progressbar import HobProgressBar
from bb.ui.crumbs.hobcolor import HobColors
from bb.ui.crumbs.hobwidget import hic, HobXpmLabelButtonBox
from bb.ui.crumbs.hoblistmodel import RecipeListModel
from bb.ui.crumbs.hobpages import HobPage
from bb.ui.crumbs.hig import CrumbsDialog, BinbDialog, \
                             AdvancedSettingDialog, LayerSelectionDialog
#
# ImageConfigurationPage
#
class ImageConfigurationPage (HobPage):

    __dummy_machine__ = "--select a machine--"

    def __init__(self, builder):
        super(ImageConfigurationPage, self).__init__(builder, "Image configuration")
        self.image_combo_id = None
        self.create_visual_elements()

    def create_visual_elements(self):
        # create visual elements
        self.toolbar = gtk.Toolbar()
        self.toolbar.set_orientation(gtk.ORIENTATION_HORIZONTAL)
        self.toolbar.set_style(gtk.TOOLBAR_BOTH)

        _, template_button = self.append_toolbar_button(self.toolbar,
            "Template",
            hic.ICON_TEMPLATES_DISPLAY_FILE,
            hic.ICON_TEMPLATES_HOVER_FILE,
            "Load a previously saved Hob building template",
            self.template_button_clicked_cb)
        _, my_images_button = self.append_toolbar_button(self.toolbar,
            "My images",
            hic.ICON_IMAGES_DISPLAY_FILE,
            hic.ICON_IMAGES_HOVER_FILE,
            "Open previously built images for running or deployment",
            self.my_images_button_clicked_cb)
        _, settings_button = self.append_toolbar_button(self.toolbar,
            "Settings",
            hic.ICON_SETTINGS_DISPLAY_FILE,
            hic.ICON_SETTINGS_HOVER_FILE,
            "Other advanced build settings",
            self.settings_button_clicked_cb)

        self.config_top_button = self.add_onto_top_bar(self.toolbar)

        self.gtable = gtk.Table(40, 40, True)
        self.create_config_machine()
        self.create_config_baseimg()
        self.config_build_button = self.create_config_build_button()

    def _remove_all_widget(self):
        children = self.gtable.get_children() or []
        for child in children:
            self.gtable.remove(child)
        children = self.box_group_area.get_children() or []
        for child in children:
            self.box_group_area.remove(child)
        children = self.get_children() or []
        for child in children:
            self.remove(child)

    def _pack_components(self):
        self._remove_all_widget()
        self.pack_start(self.config_top_button, expand=False, fill=False)
        self.pack_start(self.group_align, expand=True, fill=True)
        self.box_group_area.pack_start(self.gtable, expand=True, fill=True)
        self.box_group_area.pack_end(self.config_build_button, expand=False, fill=False)

    def show_machine(self):
        self._pack_components()
        self.set_config_machine_layout()
        self.show_all()
        self.progress_bar.reset()
        self.progress_bar.hide()
        self.config_build_button.hide_all()

    def update_progress_bar(self, title, fraction, status=True):
        self.progress_bar.update(fraction)
        self.progress_bar.set_title(title)
        self.progress_bar.set_rcstyle(status)

    def show_info_populating(self):
        self._pack_components()
        self.set_config_machine_layout()
        self.show_all()
        self.config_build_button.hide_all()

    def show_info_populated(self):
        self._pack_components()
        self.set_config_machine_layout()
        self.set_config_baseimg_layout()
        self.show_all()
        self.progress_bar.reset()
        self.progress_bar.hide()

    def create_config_machine(self):
        self.machine_title = gtk.Label()
        self.machine_title.set_alignment(0.0, 0.5)
        mark = "<span %s>Select a machine</span>" % self.span_tag('24px', 'bold')
        self.machine_title.set_markup(mark)

        self.machine_title_desc = gtk.Label()
        self.machine_title_desc.set_alignment(0, 0.5)
        mark = ("<span %s>This is the profile of the target machine for which you"
                " are building the image.\n</span>") % (self.span_tag(px='14px'))
        self.machine_title_desc.set_markup(mark)

        self.machine_combo = gtk.combo_box_new_text()
        self.machine_combo.connect("changed", self.machine_combo_changed_cb)

        icon_file = hic.ICON_LAYERS_DISPLAY_FILE
        hover_file = hic.ICON_LAYERS_HOVER_FILE
        self.layer_button = HobXpmLabelButtonBox(icon_file, hover_file,
                                "Layers", "Add support for machines, software, etc")
        self.layer_button.connect("button-release-event", self.layer_button_clicked_cb)

        icon_file = hic.ICON_INFO_DISPLAY_FILE
        self.layer_info_icon = gtk.Image()
        pix_buffer = gtk.gdk.pixbuf_new_from_file(icon_file)
        self.layer_info_icon.set_from_pixbuf(pix_buffer)
        markup = "Layers are a powerful mechanism to extend the Yocto Project "
        markup += "with your own functionality.\n"
        markup += "For more on layers, check:\n"
        markup += "http://www.yoctoproject.org/docs/current/poky-ref-manual/"
        markup += "poky-ref-manual.html#usingpoky-changes-layers."
        self.layer_info_icon.set_tooltip_markup(markup)

        self.progress_bar = HobProgressBar()
        self.machine_separator = gtk.HSeparator()

    def set_config_machine_layout(self):
        self.gtable.attach(self.machine_title, 0, 40, 0, 4)
        self.gtable.attach(self.machine_title_desc, 0, 40, 4, 6)
        self.gtable.attach(self.machine_combo, 0, 12, 6, 9)
        self.gtable.attach(self.layer_button, 12, 36, 6, 10)
        self.gtable.attach(self.layer_info_icon, 36, 40, 6, 9)
        self.gtable.attach(self.progress_bar, 0, 40, 13, 17)
        self.gtable.attach(self.machine_separator, 0, 40, 12, 13)
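        # gtk.Table.attach() takes (child, left, right, top, bottom) cell
        # edges within the 40x40 grid created in create_visual_elements(),
        # e.g. the title above spans the full width across the top four rows.
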
    def create_config_baseimg(self):
        self.image_title = gtk.Label()
        self.image_title.set_alignment(0, 1.0)
        mark = "<span %s>Select a base image</span>" % self.span_tag('24px', 'bold')
        self.image_title.set_markup(mark)

        self.image_title_desc = gtk.Label()
        self.image_title_desc.set_alignment(0, 0.5)
        mark = ("<span %s>Base images are a starting point for the type of image you want. "
                "You can build them as \n"
                "they are or customize them to your specific needs.\n</span>") % self.span_tag('14px')
        self.image_title_desc.set_markup(mark)

        self.image_combo = gtk.combo_box_new_text()
        self.image_combo_id = self.image_combo.connect("changed", self.image_combo_changed_cb)

        self.image_desc = gtk.Label()
        self.image_desc.set_alignment(0, 0)
        self.image_desc.set_line_wrap(True)

        # button to view recipes
        icon_file = hic.ICON_RCIPE_DISPLAY_FILE
        hover_file = hic.ICON_RCIPE_HOVER_FILE
        self.view_recipes_button = HobXpmLabelButtonBox(icon_file, hover_file,
                                       "View Recipes", "Add/remove recipes and collections")
        self.view_recipes_button.connect("button-release-event", self.view_recipes_button_clicked_cb)

        # button to view packages
        icon_file = hic.ICON_PACKAGES_DISPLAY_FILE
        hover_file = hic.ICON_PACKAGES_HOVER_FILE
        self.view_packages_button = HobXpmLabelButtonBox(icon_file, hover_file,
                                        "View Packages", "Add/remove packages")
        self.view_packages_button.connect("button-release-event", self.view_packages_button_clicked_cb)

        self.image_separator = gtk.HSeparator()

    def set_config_baseimg_layout(self):
        self.gtable.attach(self.image_title, 0, 40, 13, 17)
        self.gtable.attach(self.image_title_desc, 0, 40, 17, 22)
        self.gtable.attach(self.image_combo, 0, 12, 22, 25)
        self.gtable.attach(self.image_desc, 14, 38, 22, 27)
        self.gtable.attach(self.view_recipes_button, 0, 20, 28, 32)
        self.gtable.attach(self.view_packages_button, 20, 40, 28, 32)
        self.gtable.attach(self.image_separator, 0, 40, 35, 36)

    def create_config_build_button(self):
        # create the "Build packages" and "Just bake" buttons at the bottom
        button_box = gtk.HBox(False, 5)

        # create the "Just bake" button
        just_bake_button = gtk.Button()
        label = gtk.Label()
        mark = "<span %s>Just bake</span>" % self.span_tag('24px', 'bold')
        label.set_markup(mark)
        # the markup label is used as the button's image widget so the
        # button text can be styled
        just_bake_button.set_image(label)
        just_bake_button.set_size_request(205, 49)
        just_bake_button.modify_bg(gtk.STATE_NORMAL, gtk.gdk.Color(HobColors.ORANGE))
        just_bake_button.modify_bg(gtk.STATE_PRELIGHT, gtk.gdk.Color(HobColors.ORANGE))
        just_bake_button.modify_bg(gtk.STATE_SELECTED, gtk.gdk.Color(HobColors.ORANGE))
        just_bake_button.set_tooltip_text("Build your target image")
        just_bake_button.set_flags(gtk.CAN_DEFAULT)
        just_bake_button.grab_default()
        just_bake_button.connect("clicked", self.just_bake_button_clicked_cb)
        button_box.pack_end(just_bake_button, expand=False, fill=False)

        label = gtk.Label(" or ")
        button_box.pack_end(label, expand=False, fill=False)

        # create the "Build Packages" link button; note that gtk.LinkButton's
        # first argument is the URI, which here carries a descriptive
        # sentence rather than a real link
        build_packages_button = gtk.LinkButton("Build packages first based on recipe selection "
            "for later customization of the packages in the target image", "Build Packages")
        build_packages_button.connect("clicked", self.build_packages_button_clicked_cb)
        button_box.pack_end(build_packages_button, expand=False, fill=False)

        return button_box

    def machine_combo_changed_cb(self, machine_combo):
        combo_item = machine_combo.get_active_text()
        if not combo_item or combo_item == self.__dummy_machine__:
            self.builder.switch_page(self.builder.MACHINE_SELECTION)
        else:
            self.builder.configuration.curr_mach = combo_item
            # reparse the recipes for the newly selected machine
            self.builder.switch_page(self.builder.RCPPKGINFO_POPULATING)

    def update_machine_combo(self):
        all_machines = [self.__dummy_machine__] + self.builder.parameters.all_machines

        model = self.machine_combo.get_model()
        model.clear()
        for machine in all_machines:
            self.machine_combo.append_text(machine)
        self.machine_combo.set_active(0)

    def switch_machine_combo(self):
        model = self.machine_combo.get_model()
        active = 0
        while active < len(model):
            if model[active][0] == self.builder.configuration.curr_mach:
                self.machine_combo.set_active(active)
                return
            active += 1
        self.machine_combo.set_active(0)

    def image_combo_changed_idle_cb(self, selected_image, selected_recipes, selected_packages):
        self.builder.update_recipe_model(selected_image, selected_recipes)
        self.builder.update_package_model(selected_packages)
        self.builder.window_sensitive(True)

    def image_combo_changed_cb(self, combo):
        # check the selection before desensitizing the window, so an empty
        # selection cannot leave the window permanently insensitive
        selected_image = self.image_combo.get_active_text()
        if not selected_image:
            return
        self.builder.window_sensitive(False)

        selected_recipes = []

        image_path = self.builder.recipe_model.pn_path[selected_image]
        image_iter = self.builder.recipe_model.get_iter(image_path)
        selected_packages = self.builder.recipe_model.get_value(image_iter,
                                self.builder.recipe_model.COL_INSTALL).split()
        mark = ("<span %s>%s</span>\n") % (self.span_tag('14px'),
                   self.builder.recipe_model.get_value(image_iter, self.builder.recipe_model.COL_DESC))
        self.image_desc.set_markup(mark)

        self.builder.recipe_model.reset()
        self.builder.package_model.reset()

        glib.idle_add(self.image_combo_changed_idle_cb, selected_image, selected_recipes, selected_packages)
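
    # Note: the "changed" handler above desensitizes the window and defers
    # the expensive model updates to glib.idle_add(), so the signal handler
    # returns quickly; image_combo_changed_idle_cb() re-enables the window
    # once the models have been updated.
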
    def _image_combo_connect_signal(self):
        if not self.image_combo_id:
            self.image_combo_id = self.image_combo.connect("changed", self.image_combo_changed_cb)

    def _image_combo_disconnect_signal(self):
        if self.image_combo_id:
            self.image_combo.disconnect(self.image_combo_id)
            self.image_combo_id = None
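
    # These connect/disconnect helpers guard update_image_combo() below:
    # without disconnecting first, clearing and repopulating the combo's
    # model would fire the "changed" handler once per appended row.
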
    def update_image_combo(self, recipe_model, selected_image):
        # populate the image combo with the images in the recipe_model
        filter = {RecipeListModel.COL_TYPE : ['image']}
        image_model = recipe_model.tree_model(filter)

        active = 0
        cnt = 0
        it = image_model.get_iter_first()

        self._image_combo_disconnect_signal()
        model = self.image_combo.get_model()
        model.clear()

        # append the images, remembering which entry should be active
        while it:
            path = image_model.get_path(it)
            image_name = image_model[path][recipe_model.COL_NAME]
            self.image_combo.append_text(image_name)
            if image_name == selected_image:
                active = cnt
            it = image_model.iter_next(it)
            cnt = cnt + 1

        self._image_combo_connect_signal()
        self.image_combo.set_active(-1)
        self.image_combo.set_active(active)
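
    # Setting the active index to -1 first guarantees that the following
    # set_active(active) emits "changed" even when the index is unchanged,
    # so the image description and models are always refreshed.
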
    def layer_button_clicked_cb(self, event, data):
        # create a layer selection dialog
        self.builder.show_layer_selection_dialog()

    def view_recipes_button_clicked_cb(self, event, data):
        self.builder.show_recipes()

    def view_packages_button_clicked_cb(self, event, data):
        self.builder.show_packages()

    def just_bake_button_clicked_cb(self, button):
        self.builder.just_bake()

    def build_packages_button_clicked_cb(self, button):
        self.builder.build_packages()

    def template_button_clicked_cb(self, button):
        self.builder.show_load_template_dialog()

    def my_images_button_clicked_cb(self, button):
        self.builder.show_load_my_images_dialog()

    def settings_button_clicked_cb(self, button):
        # create an advanced settings dialog
        self.builder.show_adv_settings_dialog()
